Time after time, new innovations do not wait for human society to figure out how we’re going to integrate them. There’s a reason these things are called “disruptive” in business jargon. They shake the box.
For example, Photoshop has had AI-powered tools in it for almost a decade now, and nobody’s making that a front-and-center issue. Grammarly has been around for a long time too, and nobody’s pointing at that in panic either.
But let an AI believably recreate somebody’s voice, or study a few thousand images and produce a new image that resembles them in style, and suddenly it’s important.
It’s not that the tools can do these things; it’s a matter of degree. The problem isn’t that the AI can do it – it’s that the sudden advances have taken us by surprise, and we realize that as a society we have been so busy figuring out whether we CAN do it that we haven’t stopped to think about whether we SHOULD.
Getty Sues Everybody
The lawsuit by Getty Images against the creators of Midjourney and Stable Diffusion claims that these tools store images and paste parts of them together to make new images, like an electronic collage.
This is not even remotely how they work. Instead, a special kind of deep learning neural net is trained on the images, producing what is essentially a complex formula with hundreds of millions of parameters that the AI generation tools use to create new images.
In my opinion these lawsuits will fail immediately on expert testimony because of this gross misunderstanding of the technology. Images are not being copied, and are not being stored in a database. If they were, you would need thousands of terabytes to store the data. As it is, Stable Diffusion can generate images from a model as small as 2.7 GB. They don’t even make SD cards or flash drives that small anymore.
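If the difference between storing images and storing parameters seems abstract, here’s a deliberately tiny analogy in C++ (it has nothing to do with the real diffusion architecture): fit a line to a handful of points, then throw the points away. What survives is a couple of learned numbers, not the data.

```cpp
// A toy analogy only: "train" y = a*x + b on a few points with ordinary
// least squares. After training, all that survives is the two parameters
// a and b; the data points themselves are not stored in the "model".
#include <cstdio>
#include <vector>

int main() {
    // "Training data" -- analogous to the images a network learns from.
    std::vector<double> x = {1, 2, 3, 4, 5};
    std::vector<double> y = {2.1, 3.9, 6.2, 8.1, 9.8};

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = double(x.size());
    for (size_t i = 0; i < x.size(); ++i) {
        sx  += x[i];        sy  += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope
    double b = (sy - a * sx) / n;                          // intercept

    // The "model" is just these two numbers; a diffusion model is the same
    // principle scaled up to hundreds of millions of parameters.
    std::printf("learned parameters: a = %.3f, b = %.3f\n", a, b);
    return 0;
}
```

Scale that up by eight orders of magnitude and you have the gist of what a trained model file contains.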
A further complication is that in Europe, as in the United States, datamining is legal. So once the question of copying is set aside (to reiterate: it’s not copying, it’s using the images to train a neural network), there’s a very good chance that the lawsuits will fail on the scanning-without-permission issue as well, because protection from analysis is not a legal right any copyright holder anywhere in the world enjoys. If it were, simply observing an image displayed on the internet and forming any kind of opinion about it would be a crime.
The images are reduced to parameters in a very complex equation – hundreds of millions of them. Datamining isn’t illegal. Training neural networks on material you don’t own isn’t illegal either. Copyrights aren’t being directly violated, because you couldn’t bring up an exact copy of anything the neural nets were trained on if you tried (though you can get close). And you can’t copyright a style, or a composition, or a color scheme. All that’s left is the Right to Publicity, and the responsibility for that falls on the users of the tools, not the tools’ makers.
That doesn’t leave much meat left on the bone.
It’s Just Going Sideways
And sure enough, this is exactly how the lawsuits are shaking out. Sarah Silverman et al. tried to sue OpenAI for reading their stuff and incorporating that knowledge into its ChatGPT model. The only problem was that they couldn’t make ChatGPT spit out exact copies of their manuscripts. The New York Times tried the same thing, and had the same problem. Why does this matter? Because for the courts to offer the plaintiffs anything, there must first be a viable record of wrongdoing. It’s impossible for the courts to proceed on the basis of being butt-hurt alone. There have to be provable damages. The court runs on two things above all else: monetary damages and proof of injury. The New York Times — and Sarah Silverman, and the handful of artists trying to sue Midjourney — haven’t established either one. Even to argue undue restraint of trade, the “right to publicity” argument, they have to show exactly how they’ve been hurt by the AIs, and none of them can demonstrate this. These cases have been largely thrown out for exactly these reasons, leaving only the restraint-of-trade damages that none of them can clearly demonstrate.
In my opinion, the writers and artists suing are the victims of class-action ambulance-chaser lawyers. If they win, mostly the lawyers get the money. And companies like Getty Images are only suing because they want to build their own generative AI service on Getty’s licensed images and sell that as a service. When you can download Stable Diffusion and SDXL for free, why would anybody care?
The Right to Publicity
What remains appears to be Right to Publicity violations – the recognizability of artist styles, or celebrity faces – which the courts have traditionally treated as the responsibility of the individuals using the tools, not the makers of the tools themselves. As a user, it is my responsibility not to sell AI-generated images that simulate the style of Salvador Dalí, Chris Claremont, or Michael Whelan while claiming they are by the original artist.
Finally, if I happen to produce output that resembles one of those artists, how much can the original artist claim to have been damaged by it, when human artists imitate each other’s styles all the time? Cases where one artist considers themselves damaged by someone else emulating their style are virtually nonexistent; I could find no examples. Apart from being grumpy about it, few can actually demonstrate in real numbers that their business is being negatively affected at all. Greg Rutkowski comes to mind, and even he is circumspect about it. He’s concerned, but he’s not losing his shit over it.
Sue the Tool User, Not the Tool Maker
Think about it for a moment: if they can stop Stable Diffusion and Midjourney for being able to replicate the style of other artists, then they should be able to stop all word processors for being able to output written pieces that emulate the style of other writers. Oops, I accidentally wrote a story in the style of Roger Zelazny – they’ll be coming for my copy of Windows Notepad now… Saxophones should be outlawed because it is possible for another player to use one to replicate the style of Kenny G… Do you see the fallacy here? It’s not clear cut at all; it’s a matter of degree, which makes it a purely subjective call. In point of fact, those bringing these amorphous lawsuits, which are not based on any established rule of law, fail to inform the court why the existing protections against copyright infringement are insufficient, and why the makers of tools are suddenly liable when they never were before.
In any case, it’s too late to stuff the genie back in the bottle. AI powered art tools are here. It’s what we do next, to find ways to understand and integrate the new tools, that will define the new landscape.
It Feels Wrong, But Why?
And yet, one way or the other, we still have the same situation. Stable Diffusion, the technology underlying many of the successful AI image generation tools, is open source. That makes it very hard to unmake, and even harder to undistribute. Additionally, while disruptive technology is generally created with the primary purpose of eventually making money, it’s doing so here without breaking the law in any obvious way.
And THAT’S where the problem lies. The ability to replicate somebody’s artistic style to produce specific results is the part that’s disruptive. It makes it harder (and I know I’m preaching to the choir here) for artists to get paid for their work and to have the value of their work respected. Artists instinctively know this, but they don’t have much of a defense against what’s happening to them, and this makes them feel like victims – and in a real way, they are.
Artists gotta eat. And pay rent. And visit the doctor. And initially, tools that do work they can do are going to break things.
But as with the invention of the camera and the music synthesizer, artists will adapt their workflows to include the new tools, and those that do will have an incredible competitive edge.
And those that don’t — or can’t — will suffer for it, and as with any new technology, there isn’t a lot we can do to change that, except maybe to help them keep their work from being analyzed for neural networks, or to help them learn how to use the new tools. The legal questions won’t be resolved soon enough to matter.
Nobody likes to be hit in the face with a new career-threatening problem they didn’t see coming, and it’s hard to say that three years ago anybody saw this as an impending storm on the horizon. That’s why it feels wrong. It’s doing something with people’s artwork and photographs that nobody saw coming, and for which the standard rules for intellectual property offer no protection whatever. Whatever is going to happen as a result of this new technology is just going to happen, long before we figure out something practical to do about it, if we figure out anything at all.
Can Anything Be Done?
I can’t imagine how one would unexplode the hand grenade this represents, given that it takes ten to fifteen years to resolve landmark cases in court. By that time, the technology will have evolved well beyond its current state and will likely be built into practically everything.
The Getty lawsuit against Midjourney, Stable Diffusion et al. will likely fail on the merits because they don’t fully understand what they’re suing over, and they appear to be trying to claim rights they don’t actually have, but it’ll take years to even get that far. They can start their lawsuits over again and file new cases, but that starts the clock over from scratch.
Nor can they simply use the DMCA to have the source libraries removed from the web (I can’t imagine on what grounds they would, because the DMCA only applies to finished works, not to tools for making them). Using DMCA takedowns on this stuff would be a perpetual, unwinnable game of whack-a-mole even if you could somehow make it work.
So, I’m going to estimate ten to fifteen years to see anything on this, assuming there isn’t some sort of settlement. Considering Getty is looking for a couple of trillion dollars in damages, and they know they’ll never get that, it seems to me that they’re trying to just scare the ever-loving crap out of the defendants in court, going after settlement money so as to look good to their shareholders. They don’t give a crap about setting a legal precedent. There will be nothing upon which to base new case law, no judgment to cite, and the end result will be money changes hands (if it even gets that far). Once the lawsuits are over, the tools will just chug along as always, completely undeterred.
And the Getty lawsuits are the best shot at this there is.
A Note about Glaze and Nightshade
Both of these anti-AI image-mangler apps attempt to “poison” AI by either adding small non-zero numbers to the latent image before passing it to the diffuser, or adding “phantom” data to the image to fool the training step for the graphical models into thinking that a picture of a cat is, in fact, a dog. Neither of them really does what it claims to do. Both were developed in the “publish or perish” academic environment, by professors who understand only in general terms how their anti-AI tools work, and both were built on the efforts of unpaid graduate students who did the actual work. The effectiveness and quality of the results are, therefore, about what you’d expect.
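To give a sense of scale, here’s a purely conceptual sketch in C++ – emphatically not Glaze’s or Nightshade’s actual algorithm. The real tools optimize their perturbations against a model’s feature extractor or latent representation; this just nudges raw pixel values by a bounded random amount, to show how small the changes to the image are.

```cpp
// Conceptual sketch only: bounded per-channel perturbation of an 8-bit RGB
// buffer. Glaze and Nightshade compute their perturbations by optimizing
// against a model; here the "perturbation" is just bounded random noise,
// to illustrate the tiny magnitude of the pixel changes involved.
#include <cstdint>
#include <cstdlib>
#include <vector>

void perturb(std::vector<uint8_t>& pixels, int maxDelta /* e.g. 4 out of 255 */) {
    for (uint8_t& p : pixels) {
        int delta = (std::rand() % (2 * maxDelta + 1)) - maxDelta; // [-maxDelta, +maxDelta]
        int v = int(p) + delta;
        if (v < 0)   v = 0;     // clamp to valid 8-bit range
        if (v > 255) v = 255;
        p = uint8_t(v);
    }
}
```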
Remember that the point of these tools is not to help artists protect their work. The point of the tools is to advance the reputation and standing of the professors involved, and few people have the technical prowess to demonstrate that they don’t, in fact, work, except in a cleanroom setting where the variables of the test can be strictly controlled. They were both built to test against Stable Diffusion 1.5, which is, at this writing, two full generations of technology behind what’s in most common use today. Moreover, the way Nightshade works focuses on token frequency in LAION and LAION tagging, which has been irrelevant for a while now.
Both rely on adding informational noise to the image to create the impression that the image is in a different style than it really is, or contains a different subject than it really does. Both, however, require that a model be trained on a heavy diet of the adulterated images before the trained model will exhibit the desired properties, i.e., screwed-up art styles or content portrayal. Trust me when I say this: unless you are one of the most prolific artists in the world, and have the time to adulterate everything you’ve ever done over the years and re-upload the adulterated versions, you’re not going to have any effect at all on the training of new models. Heaven knows, after they’ve been trained on literally billions of images, you’re not going to have any effect at all on Midjourney or any of the other similar generative AI systems. That ship sailed literally years ago.
Most importantly, there is no evidence, apart from extremely narrowly defined tests in carefully controlled environments, that either Glaze or Nightshade works at all. I can’t stress this enough. You are far better off learning and growing as an artist and creating new art than you are hoping that magical fairy dust will protect your old work. The time to set all that up was before any of the major models were built, and anybody with a home computer can train a LoRA on your work and completely bypass whatever effects either of these tools might have.
I’m sorry if this is disappointing, but if this is going to be stopped by the global community, there must be a plan put into motion that works. Intellectual property law and rights of access as they stand now simply don’t cover it. The next step is a consensus on what to do, but good luck reaching one. Humans have always acted as individuals. Given a population of sufficient size and a given stimulus, they will not all choose to do one specific thing in response to that stimulus. They will do all the things.
That, to me, is what makes the arguments against generative AI art so frustrating. If AI art can’t be copyrighted, as many claim, then what rights are being taken from actual artists? There’s nothing to recover, because by that definition AI art has no intrinsic value. It’s all doublethink gobbledegook.
Anything that a human can imagine will eventually be made or built or invented, and sometimes by multiple people at the same time. I believe that AI art tools on this scale were inevitable. It’s how we use them and what we do next that matter.
These images, by the way, were all generated by me using Stable Diffusion. I used Google to do image searches for each of them, and I can confirm that they are not other people’s images. They’re unique as far as I can tell. If you find one of these images somewhere else and it’s older than the copy posted here, let me know and I’ll take down my copy and reexamine my life.
They’re meant as computer wallpaper. If you see one you like, click on the image to zoom in, then right-click and “Save As.”
I used to work for Technicolor Videocassette back in the day. We’re talking about 1990 here. Back when the videocassette was king, and the Intel 486 pretty much ruled the world. In those days Technicolor made about 70% of all the VHS cassettes in the world (including all of Disney’s stuff).
Anyway, we were flying out to Westland, Michigan every week, doing our development work on their videocassette packaging and shipping pipeline that they were integrating with Walmart for what’s called “JIT” delivery (“Just In Time”) services. This meant they’d get the order for the specific video tapes, pick them from inventory and ship just the ones the store had requested.
So we were working on a database-driven system that fed a monstrous device called the A-Frame, which was little more than a big conveyor belt running past stacks of videocassettes standing along its pathway. The cassettes would be selected by computer, popped off the bottom of a stack, hit the belt, and end up in a box at the end. It made the job of finding the cassettes in the warehouse for each order moot, and saved a lot of steps for the people who had to run around and fill the individual orders. We had spent months on the project, and were working in a long, narrow room with windows along one side that had previously been a shop-floor production office. We called the room The Aquarium because it resembled nothing so much as a big fish tank – about the same proportions, glass along one side, you get the idea.
We had sort of a pointy-haired IT manager, who shall remain nameless (partly because I don’t remember his name, so it’s just as well). We put a sign in the window of the Aquarium that said Do Not Tap On Glass, just like you’d see at a pet store, but when he saw it he didn’t get it at all, and we had to explain it to him. Not the brightest crayon in the box, this guy.
The real story was the database server. Today everybody talks about SQL servers, and they’re commonplace, but back then it was brand new and nobody really had a good handle on what they could do and how they worked – except this one guy in his early 20’s we’d hired away from Microsoft, because he was an expert in SQL. You pronounce it “sequel”, but back then nobody could agree on how it was pronounced, and this ex-Microsoftie called it “Squirrel”. It was as apt as any other pronunciation, and we liked the confused expressions people got when we talked about it in front of them, being the incurable geeks that we were, and so for us, it stuck.
Then came the problem of connecting the SQL server to the A-Frame. In those days we had pretty bad networking. The best you could get was something called ARCnet, and the cards cost about $300 each, and that was in 1990 dollars. They failed a lot, and these days your average cable modem outperforms it by about ten to one or more. So to cover the great distances involved in the warehouse where we were, we needed something better. There was no wifi then, but there was optical fiber.
This was the glass stuff. It was expensive, and fragile. Once a forklift ran over a cable and broke the strands, and a thousand dollars’ worth of glass cable had to be restrung. Finally the networking problems were sorted out, the SQL server and the A-Frame were connected together, and we ran our first communications test. We all held our breath and sent the message from the control station. The A-Frame responded.
We had been working for months getting to that point. You never saw a bunch of programmers whoop and holler with excitement like we did that morning.
While all this was going on, the Pointy Haired IT manager happened by and asked what all the commotion was about.
“The Squirrel’s up on glass in the aquarium!” we happily exclaimed.
Mr. Manager just looked quizzically confused, and not wanting to admit that he had no idea what we were talking about, gave us a vague, slightly open-mouthed smile, and excused himself.
The main problem with developing working warp drive apparently isn’t the math. We’ve figured that part out. What we need, though, is an unimaginably monumental supply of energy to power the thing.
Well, now we’re one step closer. At CERN, scientists have successfully captured antihydrogen and can hold atoms of it for study in a magnetic bottle. They know they’ve got antihydrogen, because when they release it, the expected annihilation takes place.
The spokesman for CERN’s ALPHA experiment—Jeffrey Hangst of Aarhus University, Denmark—says that trapping these atoms was a bit of an overwhelming experience:
What’s new about ALPHA is that now we’ve managed to hold on to those atoms. We have a magnetic bowl, kind of a bottle, that holds the antihydrogen […] For reasons that no one yet understands, nature ruled out antimatter. It is thus very rewarding, and a bit overwhelming, to look at the ALPHA device and know that it contains stable, neutral atoms of antimatter.
You’ve just gotta see this.
Why have I been writing about leaps in scientific knowledge and technology lately?
Because I feel that Humanity is reaching for its future with both hands, and that if we can solve the mysteries of the universe, it’ll make it easier to solve the problems of your everyday garden variety human beings as individuals. It is an exciting time to be alive. We are on the verge of a new frontier, and it all begins right here, right now. Our perspective and perceptions are shifting as our awareness and understanding of the very nature of reality itself expands.
On seeing the Enterprise’s warp engine while visiting the set of Star Trek: The Next Generation (where he would briefly play himself in the 1993 episode Descent, Part I), Stephen Hawking smiled and said: “I’m working on that.”
I feel like a kid on Christmas morning. I can hardly wait to see what’s under the tree.
I wrote this thing ages ago for a commercial project for Sony Development, a now-defunct subsidiary of Sony. We were trying to make a giant pinball machine where you tilted the entire machine to play. To test the physical controller hardware as they worked the kinks out of the design, they needed a little 3D engine to hook up to the controllers so they could see what it would do. So in about a week, I wrote one.
It’s a little odd as engines go in that it loads Lightwave 6.x (or greater) scene and model files and renders them, and then lets you fly a camera around and look at them. It lights the scene according to whatever lights you put in the scene, but all lights are translated as point lights. I never got spotlights or area lights working. It does respect global ambience settings in the scene, though, as well as maintain the hierarchical relationship between all the scene elements (i.e., parenting of scene elements is preserved at runtime).
It eventually ended up being listed in the news section of the now-defunct Flay.com, one of the world’s more important Lightwave 3D web sites, and OpenGL.Org also had a listing for it. I even found a web site in Japan that linked to the original page. Too bad I can’t read Japanese! The engine has been downloaded tens of thousands of times since I posted it after SIGGRAPH 2001.
The engine does do texture maps, but only UV textures, and there are a few ways to apply the textures in Lightwave that don’t actually work. The best approach seems to be to convert whatever conventional texture mapping you might have on your models into UV maps using the “Make UVs” tool in the “Map” toolset in modeler. Since the loader doesn’t handle DMAP chunks, models using cylindrical or spherical mapping need to have the vertices split at the seam, or you’ll get mapping errors.
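If “split the vertices at the seam” sounds abstract, here’s a rough sketch of the idea – my own illustration, not the engine’s loader code – of why the split is needed when each vertex carries exactly one UV coordinate:

```cpp
// Sketch: when a cylindrical map wraps from u ~ 0.95 back to u ~ 0.05 on the
// same face, one UV per vertex can't represent both sides of the seam, so the
// low-side vertices get cloned with u shifted by 1.0. With GL_REPEAT, u = 1.05
// samples the same texel column as u = 0.05, so interpolation now runs
// 0.95 -> 1.05 instead of sweeping backwards across the whole texture.
#include <vector>

struct Vertex { float x, y, z; float u, v; };

void splitSeam(std::vector<Vertex>& verts, std::vector<int>& indices) {
    for (size_t t = 0; t + 2 < indices.size(); t += 3) {   // one triangle at a time
        float umin = 1.0f, umax = 0.0f;
        for (int k = 0; k < 3; ++k) {
            float u = verts[indices[t + k]].u;
            if (u < umin) umin = u;
            if (u > umax) umax = u;
        }
        if (umax - umin > 0.5f) {                          // face crosses the seam
            for (int k = 0; k < 3; ++k) {
                int vi = indices[t + k];
                if (verts[vi].u < 0.5f) {                  // low-side vertex: clone it
                    Vertex clone = verts[vi];              // copy before push_back
                    clone.u += 1.0f;
                    verts.push_back(clone);
                    indices[t + k] = int(verts.size()) - 1;
                }
            }
        }
    }
}
```

A real tool would share the cloned vertices between neighboring faces; this per-face version just shows the principle.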
The source code will compile under either Windows, using Microsoft Visual C++ 6.x or greater, or under Linux using GCC. Yup, it’s cross-platform code!
Download the source code, binaries and sample data here. It’s pretty tiny by modern standards – only 3 megs, even though it includes all the model files and textures and whatnot that you get with it. It’s a fairly modest example of a 3D engine. Once I got the object and scene loaders working, the rest of the engine was done in about five days. It does give some good example code for reading objects in native Lightwave LWO2 format, though. By the way, in the ‘credit where credit is due’ department, I started with the example ‘C’ loader code written by Yoshiaki Tazaki at D-Storm.
Once you’ve gotten it to compile (it shouldn’t be difficult if you know how to use the compiler at all), run it by giving a parameter of either a model file or a scene file. If you give it a scene file as a parameter, it’ll assume all the assets are right there in the same directory with you, even if the scene file says otherwise. If you give it a model file as a parameter, it’ll just load the model file and let you spin it around and look at it from different angles. If you can’t compile the project or don’t want to bother, binary executables are included for both Linux and Windows.
A comment: this project was set up to compile with KDevelop versions prior to 2.x. If your version is more recent than that, you’re going to have a few problems getting it to compile as a project in KDevelop. I may revisit this and make a newer version with new project files (though I can’t promise when).
Interestingly, the Linux version runs significantly faster than the Windows version does, even though it’s exactly the same code. I think Linux just works better from the standpoint of interfacing the OpenGL API with the hardware. I know I could do a lot more about optimizing the rendering pipeline, though. Right now the only thing I do is sort the polygons by material; this cuts down on having to use the GL material commands for every single darned polygon, and it sped things up a lot. It’s still not a really quick engine as engines go, but it’s quicker than it first was. I never even implemented tri-strips, and that would have sped it up at least double.
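For the curious, the material sort boils down to something like the following sketch. It isn’t the engine’s actual render loop, and the struct names are made up for illustration, but it shows why sorting cuts down on GL material commands:

```cpp
// Sketch of the material-sort idea: order polygons by material index, then
// only re-issue glMaterialfv when the material changes between consecutive
// polygons, instead of once per polygon.
#include <GL/gl.h>
#include <algorithm>
#include <vector>

struct Poly {
    int materialIndex;
    // ... vertex indices, normals, etc. ...
};

struct Material {
    float diffuse[4];
    float specular[4];
};

void drawSorted(std::vector<Poly>& polys, const std::vector<Material>& mats) {
    std::sort(polys.begin(), polys.end(),
              [](const Poly& a, const Poly& b) {
                  return a.materialIndex < b.materialIndex;
              });

    int current = -1;
    for (const Poly& p : polys) {
        if (p.materialIndex != current) {          // state change only on boundaries
            const Material& m = mats[p.materialIndex];
            glMaterialfv(GL_FRONT, GL_DIFFUSE,  m.diffuse);
            glMaterialfv(GL_FRONT, GL_SPECULAR, m.specular);
            current = p.materialIndex;
        }
        // emitPolygon(p);  // glBegin/glEnd or vertex arrays for this polygon
    }
}
```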
I’ve absolutely got to offer a caveat here as well: I wrote this engine as an exercise, and I stopped before I finished it. There are leftovers and leavings of various ideas in it that I never implemented. The object and scene loading classes themselves are fairly clean, however, and I did my best to keep that functionality as encapsulated as possible so they could be reused by somebody else if needed.
Could I write the same code now? No. If you don’t use linear algebra for 3D for a few years, you forget how. Could I learn to write the same code now? Absolutely. I did it before. I can do it again.
Update: It Runs on a Raspberry Pi
My Raspberry Pi 4 running OpenGL code I wrote over 20 years ago and ported to the Pi in August of 2016. The fastest of these windows is running at 120 frames per second, and the CPU is barely warm to the touch.
For a lark, I decided to try compiling this on a Raspberry Pi, and to my great surprise, apart from a small tweak to one of the headers, it worked! Thinking on it, the Raspberry Pi is actually much more powerful than the big bruiser of a desktop machine I developed it on in the first place, yet the computer is no bigger than a pack of cards and draws only about 15 watts of power. The light bulb in your refrigerator, if you still have one that isn’t LED-based, probably draws more.
I start another life drawing class in about a week, and I’m brushing up on my Maya chops. Soon I’ll be able to run with the big dogs. In the meantime, check out the code page – I’ve finally fixed the problem with the nGene source code that prevented it from compiling cleanly on Linux using KDevelop 2.0! Unfortunately, I waited so long that KDevelop 3.0 has been out for months now. Ah, well, it may work as is, but if it doesn’t, I’ll fix it.
What a busy year it’s been. It seems like it’s all gone by in a blur.
The nGENE
Here’s the source code and compiled binaries for my little OpenGL engine, which I have named the “nGene” after a suggestion by a coworker.
It’s a little odd as engines go in that it loads Lightwave 6.x (or greater) scene and model files and renders them, and then lets you fly a camera around and look at them. It lights the scene according to whatever lights you put in the scene, but all lights are translated as point lights. I never got spotlights or area lights working. It does respect global ambience settings in the scene, though, as well as maintain the hierarchical relationship between all the scene elements (i.e., parenting of scene elements is preserved at runtime).
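For what it’s worth, preserving the parenting at runtime boils down to composing each item’s transform with its ancestors’ transforms before drawing it. A minimal fixed-function OpenGL sketch of the idea (not the actual nGene classes) might look like this:

```cpp
// Sketch: each scene item carries its local transform and a parent index,
// and the renderer composes transforms by walking up the chain before drawing.
#include <GL/gl.h>
#include <vector>

struct SceneItem {
    float local[16];   // column-major local transform, as glMultMatrixf expects
    int   parent;      // index of parent item, or -1 for a root
    // ... mesh data would live here ...
};

// Recursively apply the parent chain, then this item's own transform.
static void applyHierarchy(const std::vector<SceneItem>& items, int idx) {
    if (idx < 0) return;
    applyHierarchy(items, items[idx].parent);   // parents first
    glMultMatrixf(items[idx].local);            // then the child
}

static void drawItem(const std::vector<SceneItem>& items, int idx) {
    glPushMatrix();
    applyHierarchy(items, idx);   // parenting preserved at render time
    // drawMesh(items[idx]);      // issue the item's geometry here
    glPopMatrix();
}
```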
To clarify the copyright status of the nGene: it’s open source and licensed under the LGPL, meaning you can use this code in your commercial projects if you wish, without having to release the code for it or for your own project along with the compiled form. By all means, steal the parts you like and toss them into your project if you think it’ll help. It’s why I wrote it in the first place. Note that I’m not responsible for the results; i.e., if it breaks, you get to keep all the pieces.
If you do download it, note that you’ll be in good company – the nGene has been downloaded over a quarter million times since I originally posted it.
Special thanks to gifted artist and animator Eric Estrada, currently a lighting artist at DreamWorks, for the 3D scan of his head.
It does texture maps, but only UV textures, and there are a few ways to apply the textures in Lightwave that don’t actually work. The best approach seems to be to convert whatever conventional texture mapping you might have on your models into UV maps using the “Make UVs” tool in the “Map” toolset in modeler. Since the loader doesn’t handle DMAP chunks, models using cylindrical or spherical mapping need to have the vertices split at the seam, or you’ll get mapping errors. Also, I never got around to writing the polygon smoothing algorithm, so for now it’s flat shaded only.
The source code will compile under either Windows, using Microsoft Visual C++ 6.x or greater, or under Linux using GCC. Yup, it’s cross-platform code!
Download the source code, binaries and sample data here. It’s about 3 megs because of all the model files and textures and whatnot that you get with it. I wouldn’t get too excited if I were you – once I got the object and scene loaders working, the rest of the engine was done in about five days. It does give some good example code for reading objects in native Lightwave LWO2 format, though. By the way, in the ‘credit where credit is due’ department, I started with the example ‘C’ loader code written by Yoshiaki Tazaki at D-Storm.
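If you want a feel for the file format before diving into the loader, LWO2 is an IFF-style format: a FORM container typed LWO2, followed by chunks that each carry a four-character ID and a big-endian 32-bit size, padded to an even byte count. A minimal chunk walker – my own sketch, not the D-Storm loader – looks something like this:

```cpp
// Sketch: walk the top-level chunks of an LWO2 file (PNTS, POLS, TAGS, SURF, ...)
// and print their IDs and sizes. All multi-byte values in IFF files are big-endian,
// and chunk bodies are padded to an even length.
#include <cstdint>
#include <cstdio>
#include <cstring>

static uint32_t readU32BE(std::FILE* f) {
    unsigned char b[4];
    if (std::fread(b, 1, 4, f) != 4) return 0;
    return (uint32_t(b[0]) << 24) | (uint32_t(b[1]) << 16) |
           (uint32_t(b[2]) << 8)  |  uint32_t(b[3]);
}

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s file.lwo\n", argv[0]); return 1; }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) return 1;

    char id[5] = {0};
    std::fread(id, 1, 4, f);                    // should be "FORM"
    uint32_t formSize = readU32BE(f);           // bytes remaining after this field
    std::fread(id, 1, 4, f);                    // should be "LWO2"
    if (std::strcmp(id, "LWO2") != 0) { std::fclose(f); return 1; }

    uint32_t consumed = 4;                      // the "LWO2" type already read
    while (consumed < formSize && std::fread(id, 1, 4, f) == 4) {
        uint32_t size = readU32BE(f);
        std::printf("chunk %s, %u bytes\n", id, size);
        std::fseek(f, long(size + (size & 1)), SEEK_CUR);  // skip body + pad byte
        consumed += 8 + size + (size & 1);
    }
    std::fclose(f);
    return 0;
}
```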
UPDATE: I’ve only just now gotten around to fixing a problem with the project files that kept you from using KDevelop 2.1. I know 3.0 is out, that’s next, but at least this version works in Linux now. It’s a tarred, gzipped archive.
Once you’ve gotten it to compile (it shouldn’t be difficult if you know how to use the compiler at all), run it by giving a parameter of either a model file or a scene file. If you give it a scene file as a parameter, it’ll assume all the assets are right there in the same directory with you, even if the scene file says otherwise. If you give it a model file as a parameter, it’ll just load the model file and let you spin it around and look at it from different angles. If you can’t compile the project or don’t want to bother, binary executables are included for both Linux and Windows.
Interestingly, the Linux version runs significantly faster than the Windows version does, even though it’s exactly the same code. I think Linux just works better from the standpoint of interfacing the OpenGL API with the hardware. I know I could do a lot more about optimizing the rendering pipeline, though. Right now the only thing I do is sort the polygons by material; this cuts down on having to use the GL material commands for every single darned polygon, and it sped things up a lot. It’s still not a really quick engine as engines go, but it’s quicker than it first was.
I wrote this engine as an exercise, and I stopped before I finished it. There are leftovers and leavings of various ideas in it that I never implemented. The object and scene loading classes themselves are fairly clean, however, and I did my best to keep that functionality as encapsulated as possible so they could be reused by somebody else if needed. So don’t cringe when you read the code. You’ve been warned.
It was used by the UCLA Laboratory of Neuro-Imaging – here is the testimonial letter I received from Craig Schwartz:
Dear Gene,

A few weeks ago you helped me with nGene – which I’ve been using to debug a small java library which creates LWO files as output. Although the contributed ModelViewer module did not have everything I wanted, and was unable to display the largest of my test data sets, it did enough (supported by nGene) that I was able to use it to keep my coding, thereby contributing significantly to my successful project.

Many thanks!
Craig Schwartz
UCLA Laboratory of Neuro-Imaging