Dean Siren asks:
"When will mainstream moviemakers, such as Lucasfilm, finally replace their render farms and Renderman with a GPU (GeForce or Radeon) and Cg-based renderer? Would the savings in equipment cost and rendering time be worth the learning curve? Is anyone developing such an app? We've had the tech for years with video games, but the art form hasn't really been tried. Is anyone working on this now?" An interesting thought, and it puts a new spin on the old computers-will-replace-actors argument. It also means careful planning
ahead of time, since there will be no "post-production" stage where you can clean up the mistakes and perform the minute adjustments needed to make things
just right. Do you think such an art form will ever catch on in Hollywood, or will small shops have to be the ones to pioneer this before others follow suit?
"There's a forum called Machinima whose main idea is that not only should the final rendering of a movie be generated in real time, but so should the animation, implying that computer animation should be performed, maybe even improvised, live by motion-captured actors. Accomplishing this goal would require replacing not only Renderman but Maya and Softimage as well. A developer named Strange Company took up the challenge and started writing an app in this direction called Lithtech Film Producer (interview here). They even made easy porting a high priority. But they soon realized that they were tiny and the project was huge, so they quit. Still, the idea of improv animation is full of potential."
real time? (Score:1)
Replace actors with computers? (Score:2, Funny)
Modeled voice synthesis? (Score:2)
what's the point?? (Score:2, Insightful)
Re:what's the point?? (Score:1)
We'd be able to see a lot more computer generated worlds created by people with a small budget. We'd be able to see a lot of creativity expressed that would otherwise be suppressed for lack of funds.
We'd be able to see a lot of gay CG cowboys eating pudding.
Re:what's the point?? (Score:1)
Using a render farm has a lot of pluses. Look at some of the LOTR movies that show them animating orcs alongside real humans; I'd love to see Machinima (or whatever) do this with 1/10th the quality using a bunch of GeForce cards and Cg. As far as I understand it, the animators and modelers will have pretty nice machines. I was able to play on an SGI cluster running IRIX, and they used Lightwave to model and animate. Once they animated something, though, they would just send the render job to the rest of the machines, which would crunch away at the numbers for a while. I'm assuming that when it is rendering, it is using CPU power and not the video card, but I may be wrong. I really don't think that if you took a bunch of GeForce chips and did something fancy with them, you could even come close to the quality of the movies mentioned in the above post.
Not with current technology (Score:1)
HUH? (Score:5, Insightful)
In a nutshell, this topic makes zero sense.
Nobody is going to drop PRman for Cg anytime soon. Why? Because they have two different target markets and address two different needs.
Talk to somebody like ILM or Pixar that's doing renderings that take 70 hours a frame (as some of the frames for Toy Story II did), and bring up real-time cards. They'll have a good laugh and say "go away, kid".
Can these cards handle anti-aliasing like RM can? No.
Can these cards handle DOF like RM can? No.
Can these cards do programmable shading like RM can? No.
These cards are designed to do graphics real time with the best quality they can squeeze out while still hitting their timing targets. RM is meant to get the best possible quality - and who cares about time?
This is a silly pointless discussion. Yes, in 10 or 20 years maybe the hardware will be there, but it isn't now and you sound silly making speculations like these.
Re:HUH? (Score:1)
5 years (for TS2 quality)
Re:HUH? (Score:2)
It's coming, but having a card that can swallow that kind of BW and not burst into flames is still a ways off.
But when it arrives I'll be the first person in line.
How about a Beowulf cluster of AGP GeForces? (Score:2)
It's coming, but having a card that can swallow that kind of BW and not burst into flames is still a ways off.
A basic PCI bus can carry about 132 MB per second (33 MHz × 32 bits/cycle ≈ 1 Gbit/s), and there exist double-speed and 64-bit variants of PCI. The faster 4x AGP runs at 1 GB per second [tomshardware.com]. If each frame requires 1 GB of data transfer, using a PS2-like approach of bringing in each set of textures and then rendering the corresponding triangles, you get 1 fps. Render this on a cluster of 24 machines, and you get the 24 fps of 35mm cinema.
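The arithmetic in that comment can be sketched out; note the 1 GB per frame figure and the bus speeds are the poster's assumptions, not measured numbers:

```python
# Back-of-envelope throughput math from the comment above. The 1 GB per
# frame and the bus speeds are assumptions taken from the post.

def pci_bandwidth_mb_s(clock_mhz, bus_bits):
    """Peak bandwidth of a parallel bus in MB/s (clock * bytes per cycle)."""
    return clock_mhz * (bus_bits / 8)

def cluster_fps(bus_gb_per_s, gb_per_frame, machines):
    """Aggregate frames/sec if each machine is limited only by bus transfer."""
    return (bus_gb_per_s / gb_per_frame) * machines

basic_pci = pci_bandwidth_mb_s(33, 32)   # ~132 MB/s for basic PCI
film_rate = cluster_fps(1.0, 1.0, 24)    # 24 machines at 1 fps each -> 24 fps
```

Of course, this assumes frame rendering is purely bus-limited and embarrassingly parallel across machines.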
Re:How about a Beowulf cluster of AGP GeForces? (Score:2, Insightful)
That's great... except where do you get those textures? You have to calculate them, most times frame by frame. The bleeding edge currently in CG animation is fur and hair modelling -- see Sulley's fur in Monsters, Inc. for an example of last year's Neat Thing. That's all sub-pixel stuff, even at 6000 x 4000 pixels resolution (70mm, not 35mm). Working out the mathematical dynamics of Sulley's hair (collision, wind motion, etc.) sometimes took minutes per frame.
Most top-notch cinema animation uses ray tracing in the mix of tools, especially for lighting effects, and no existing GPU can run a raytracer in real time, especially not at 4k x 3k x 48 bpp.
The renderfarms you're talking about replacing with a 24-machine Beowulf cluster consist of four hundred or more Sun workstations, each hammering away 24/7/365. The producers have to allocate CPU time to various segments of the movie just like live-action movie producers allocate studio time or cash budgets. The directors have to cheat all the time to stay within that budget.
Your suggested system might be suitable for TV -- Max Headroom, maybe, with plastic hair and shiny suits, but not for the big screen, and not to compete in today's CG blockbuster film market.
Re:HUH? (Score:1, Insightful)
Are you aware that there's an active underground "demo scene" of programmers/artists who make cool-looking presentations? Some of the demos I've seen are impressive (and hypnotic) considering that the graphics and music are produced in real time. I'm sure other
Demos (Score:1)
Re:HUH? (Score:4, Insightful)
which is just plain stupid. They never will. What film makers need is not even from the same planet as what gamers need.
The idea though is just plain old.
It's called puppetry. Real-time animation is another word for digital puppetry. Check out the performance group D'Cukoo (or whatever the fuck they're called). They did this kind of stupid shit many years ago with a digital puppet named Rigby (if I remember correctly).
I have no idea what the
It worked the first time (-1, Facetious) (Score:2)
Then again, I'm a Luddite who really, genuinely prefers cel animation, and if it ever dies out completely, I'm going to take it up for spite.
Re:HUH? (Score:2)
However, realtime CGI may well be a viable art form all on its own. Think live stage play with CGI screen output. You could have a troupe of mocap artists / puppeteers manipulating a CGI scene, working with live music and improvising during a performance.
There's a couple kids' shows that have live CGI hosts. They look like crap for the most part (lousy framerate and awful mocap performers), but the potential is probably in there somewhere.
Re:HUH? (Score:3, Informative)
I have no idea, and while slashdot certainly murdered this topic in the headline, doing production quality rendering using hardware acceleration is a huge HUGE BIG MASSIVE fucking deal and not many people seem to realize it yet.
Nobody is going to drop PRman for Cg anytime soon. Why? Because they have two different target markets and address two different need
People used PRman in the first place because of its speed and quality. Cg has the speed part down pretty easily, and the quality isn't that much harder to reach. Rendering in hardware DOES NOT have to be realtime in order to be beneficial.
Can these cards handle anti-aliasing like RM can? No.
Not in realtime, not yet, but it doesn't matter. Anti-aliasing is not only becoming a very high priority on 3D card makers' lists; it can also be done by simply rendering the same frame multiple times and blending the results together, until the actual card has high-quality AA enabled, which should be in the next generation.
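That render-the-same-frame-multiple-times-and-blend trick can be sketched in miniature. The 1-D "renderer" below is a toy stand-in, not real graphics code: each pass jitters the sampling position by a sub-pixel amount, and averaging the passes softens the edge exactly the way an accumulation buffer would.

```python
# Multi-pass anti-aliasing sketch: render the same frame several times
# with a sub-pixel jitter, then blend (average) the passes together.

def render(offset):
    """Toy 'renderer': pixels left of a hard edge at x = 2.7 are black."""
    return [1.0 if (x + offset) > 2.7 else 0.0 for x in range(6)]

def antialias(passes=16):
    acc = [0.0] * 6
    for i in range(passes):
        jitter = (i + 0.5) / passes - 0.5     # stratified sub-pixel offsets
        acc = [a + f for a, f in zip(acc, render(jitter))]
    return [a / passes for a in acc]          # blend all passes together

aliased = render(0.0)   # hard 0/1 stair-step edge
smooth = antialias()    # the edge pixel lands at an intermediate gray
```

The same accumulation idea extends to motion blur and depth of field by jittering in time or lens position instead of screen space.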
Can these cards handle DOF like RM can? No.
PRman does a depth-based DOF which can be done in post with a z-buffer. If that isn't high quality enough, the frame can also be rendered in sections, and/or multiple frames can be rendered with slight offsets, etc. There are dozens of ways to make it work.
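The depth-based-DOF-in-post idea can be sketched with toy 1-D buffers (this is an illustration of the technique, not PRman's actual algorithm): the z-buffer picks a blur radius per pixel, growing with distance from the focal plane.

```python
# Z-buffer depth of field as a post-process: each pixel's circle of
# confusion (blur radius) grows with its distance from the focal plane.

def depth_of_field(color, depth, focus, strength=2.0):
    out = []
    n = len(color)
    for i in range(n):
        radius = int(strength * abs(depth[i] - focus))  # CoC from z-buffer
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = color[lo:hi]
        out.append(sum(window) / len(window))           # box blur of that radius
    return out

color = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
depth = [1.0, 1.0, 5.0, 5.0, 9.0, 9.0]   # middle objects sit at the focal plane
result = depth_of_field(color, depth, focus=5.0)
# in-focus pixels stay sharp; near and far pixels get smeared
```

A real implementation would use a better filter than a box blur and handle occlusion edges, but the principle is the same.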
Can these cards do programmable shading like RM can? No.
Fuck yeah they can! That's the whole point. Where do you think these shader languages came from? Large shaders can always be broken down and rendered in passes.
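A toy sketch of that break-it-into-passes idea (the three lighting "terms" here are made up for illustration): each term is rendered in its own additively blended pass, and the result matches evaluating the whole shader in one go.

```python
# Hypothetical shader split into three passes, each additively blended
# into the framebuffer -- the multi-pass equivalent of one large shader.

def pass_ambient(p):  return 0.1            # constant ambient term
def pass_diffuse(p):  return 0.5 * p        # p stands in for a surface term
def pass_specular(p): return 0.4 * p ** 8

def multipass(pixels, passes):
    framebuffer = [0.0] * len(pixels)
    for shade in passes:                    # one render pass per term
        for i, p in enumerate(pixels):
            framebuffer[i] += shade(p)      # additive blend (ONE, ONE)
    return framebuffer

def single_pass(pixels):
    """The same shader evaluated all at once."""
    return [0.1 + 0.5 * p + 0.4 * p ** 8 for p in pixels]

pix = [0.0, 0.5, 1.0]
# multipass(pix, [pass_ambient, pass_diffuse, pass_specular]) == single_pass(pix)
```

Real multi-pass decomposition also has to shuttle intermediate results through textures, which is where the precision concerns discussed elsewhere in this thread come in.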
These cards are designed to do graphics real time with the best quality they can squeeze out while still hitting their timing targets. RM is meant to get the best possible quality - and who cares about time?
No, these cards are designed to render images quickly, with quality as a second priority to time. There is a difference. You are implying that they will reduce the quality to hit realtime framerates, which is not true. PRman (if that is what you are referring to by RM) was used and still is used because of its quality and speed, speed being a very high consideration, with quality taking precedence. Speed is everything. Speed breeds quality.
This is a silly pointless discussion. Yes, in 10 or 20 years maybe the hardware will be there, but it isn't now and you sound silly making speculations like these.
This is about as important as discussions on 3D come. This is as huge as anything that has happened to the 3D industry. This is revolution over evolution. This is the next big step that 3D will take after the invention of Gouraud shading, Phong shading, Renderman, and hardware acceleration. This will start to happen by the end of the year, not in 20 years. 3D is great now, but it is about to get really, really good.
Re:HUH? (Score:2)
Is this the right way to go? (Score:2, Insightful)
CGI not appropriate for the living (Score:4, Insightful)
I've always felt that actors, even if clad in rubber suits like in Predator, look far better and more realistic than CG graphics. I also feel that CG should just be for the background, or other special effects, never for characters.
I couldn't agree more. I'm really baffled at the constant attempts to shove CGI down our throats. You really can't help but cringe in those scenes in AOTC when Anakin is riding some beast (both in the field and in the gladiator arena). I mean, it's so obviously a CGI effect. It just doesn't move right. And this is LucasFilm -- CGI doesn't get better than that.
With all the time and money they've spent on trying to improve CGI motion, I would think it could be better spent on developing more realistic and movable costumes. I'm not trolling -- I really want to know if anyone thinks that CGI living creatures have realistic motion.
As far as I'm concerned, CGI has its place. And it's not for recreating living creatures.
GMD
Re:CGI not appropriate for the living (Score:4, Interesting)
Even if such creatures are extinct or never existed in the first place?
I have to disagree that the arena beasts in Ep. II seemed unreal; the cat-like creature seemed rather convincing to me. Also, understand that CG is only the most recent "nonliving" technology for doing FX. King Kong, et al., were stop motion, and I found them more convincing than Godzilla, which was a man in a costume.
The only shortcoming of CG right now, IMNSHO, is in modeling human motion and expression. But this is only because we, as humans, have much more experience observing each other than observing animals, so we tend to be more discriminating. In time, we will learn enough about our physiology to model our actions convincingly up close; CG can already do so at long distance in "crowd scenes."
Re:CGI not appropriate for the living (Score:2)
GMD> As far as I'm concerned, CGI has its place. And it's not for recreating living creatures.
Wormwood> Even if such creatures are extinct or never existed in the first place?
Yes, I mean anything that is alive in the movie.
I have to disagree that the arena beasts in Ep. II seemed unreal; the cat-like creature seemed rather convincing to me.
You're welcome to your opinion. Actually, I was really referring to the "bronco riding" scenes. That's what really stood out for me.
Also, understand that CG is only the most recent "nonliving" technology to do FX. King Kong, et al, were stop motion, and I found them more convincing than Godzilla, which was a man in a costume.
Oh, come on. Godzilla was made by Japanese studios on a shoestring budget. And although those movies have been exported all over the world, they are really intended for a Japanese audience. The Japanese just don't care about super-realism. Just look at that kaboki (sp?) theatre! Guys in dark clothing move life-sized wooden puppets around in a play. That's not even remotely realistic, and the Japanese don't care. That's not important to them.
You wanna see what Hollywood can do with guys in suits? Watch Aliens (the 2nd one) tonight. You can't tell me those beasties don't scare the poo oughta your booty!
In time, we will learn enough about out physiology to model our actions convincingly up close
You may well be correct. But until that day I just don't want to see any more products from Hollywood's "learning curve".
Again, this is just my opinion. If you like CGI creatures, your opinion is equally valid.
GMD
Not kabuki theatre (Score:2)
Just look at that kaboki (sp?) theatre! Guys in dark clothing move life-sized wooden puppets around in a play.
Okay, my bad. That's not kabuki theatre. Does anyone know what the hell the name of these Japanese puppet plays is?
GMD
Re:CGI not appropriate for the living (Score:2)
Show me a beast in a Star Wars episode that moves naturally and is not a puppet. One of the reasons recent dinosaur movies look so good is that the studios paid scientists a pretty penny to look into how animals move, and verified the CGI results with them.
Have you heard of Lucas doing this even once? You gotta be dreaming.
Re:CGI not appropriate for the living (Score:3, Insightful)
The most recent attempts at CGI for living creatures have been fairly out of whack... I'm not an expert; I won't even begin to guess why.
But go back a while. Hell, go back 10 years. Watch Jurassic Park again -- Spielberg mixed CGI and more traditional FX extremely well there. The first scene with a brontosaurus grazing on treetops is just amazing. Some of the later scenes with the smaller dinosaurs "moving like a flock of birds" and the T-Rex attacking them are also well done. In fact, a lot of the robotics just looks bad -- like the sick triceratops.
It may be because we've never seen these creatures alive and the human interaction with them was nil (all interaction shots were done w/ robotics AFAIK). But it's an indication that it can be done right at least.
We'll eventually get CGI for living creatures down pat... but not yet.
Re:CGI not appropriate for the living (Score:2)
But you forgot about Blizzard! (Score:1)
Re:But you forgot about Blizzard! (Score:2)
If you haven't seen Warcraft III yet (though by the content of your post, it sounds like you have)... buy it when it comes out! It's awesome.
Better yet, just look at their trailers on Blizzard's web site.
is this a practical alternative? (Score:1)
Deep thoughts (Score:1)
Good Idea (Score:1)
Right now the render times for movies are months. Even if your GeForce4 chokes horribly on each and every frame because they're so big, and you only get maybe 1 frame per second, that's 2 seconds of video per box per minute.
Spread that across a render farm of 30 boxes and you get "realtime" rendering, which would make life for the animators much nicer, I would think.
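The scaling arithmetic above is simple enough to sketch; the 1 fps per box and 30 fps playback figures are the poster's assumptions:

```python
# One box at ~1 fps yields only a couple of seconds of finished video per
# wall-clock minute; thirty such boxes in aggregate reach a real-time rate.

def aggregate_fps(boxes, fps_per_box=1.0):
    """Total frames/sec across a farm, assuming perfect parallelism."""
    return boxes * fps_per_box

def video_seconds_per_minute(fps_per_box, playback_fps=30.0):
    """Finished video produced per wall-clock minute on one box."""
    return fps_per_box * 60 / playback_fps

one_box = video_seconds_per_minute(1.0)   # 2.0 seconds, as in the post
farm = aggregate_fps(30)                  # 30 fps in aggregate -- "realtime"
```

Aggregate throughput isn't the same as interactivity, though: each individual frame still takes a full second, so the latency an animator sees doesn't shrink.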
Re:Good Idea (Score:2, Informative)
I think that GPUs (at least nowadays) are too focused on the tasks they have to perform (working with relatively few polygons, applying a few "small" textures, etc., all of this in a very short time) to be useful in a totally different task like animation, where you have to work with huge amounts of polygons, with complex textures, etc., even with special software.
You also have to add the fact that many GPUs have specialized "special effects" built in, like light effects and similar stuff that may look great in a game but are totally useless in a movie, as they would be too standard and not-so-effective. So you would only be able to use the standard features of your GPU and would still use the CPU for most of the work on special effects, etc.
Even worse, video cards are more and more focused on speed rather than quality, and this is not going to help when making a movie.
Of course they could have some board specifically designed for the tasks they need, and this would surely improve the time needed to render a movie, but I'm not so sure whether it would be worth the cost.
Will not happen anytime soon.. (Score:4, Informative)
When there's something you want to change in your hardware-based rendering, what are you going to do, re-fab the silicon and solder it in?
Re:Will not happen anytime soon.. (Score:1)
Re:Will not happen anytime soon.. (Score:3, Informative)
You could start with an architecture similar to Andrew Huang's five-or-so-year-old Tao [mit.edu] reconfigurable computing platform, with pipelining de-emphasized, system speed approximately doubled, and (possibly) multi-ported memory added.
-jhp
Re:Will not happen anytime soon.. (Score:3, Informative)
Come to think of it, I believe these miracle shaders have something to do with the "Cg" language this article just happened to be about. What a coincidence.
Re:Will not happen anytime soon.. (Score:2)
Have you heard of "programmable hardware shaders"?
Have you heard of the term "Turing Completeness"? As in, "programmable hardware shaders are not Turing Complete"?
Re:Will not happen anytime soon.. (Score:2)
The hardware is just accelerating the process by doing most of the grunt work of multiply-adds & shifting bits around, on a dedicated chip with 4 or 8 pipelines using multiple vector ALUs each, fed by 10 or 20 GB/s of local bandwidth. And gfx hardware is increasing in speed & complexity a lot faster than CPU hardware.
Think of it as a massive SIMD array coupled to a Turing-complete CPU, and you'll have the right idea.
Re:Will not happen anytime soon.. (Score:2)
Sorta true. Though definitely not like a massive SIMD array, the control structure is far more complex and loosely-coupled than that.
Because the CPU is not in the datapath you get a few problems that you don't get with a plain general purpose CPU:
- Limited precision of intermediate results -> restricted space of implementable algorithms
- Very restricted data addressing modes -> you need to build lookup-tables at run-time which can eat into your performance for certain algorithms
- Difficult to implement conditional tests
So, yes, because you start with a Turing-Complete CPU in the system, the system stays Turing-Complete, that's trivially obvious. However, not all algorithms can be implemented on the devices mentioned in the article, which was the point I was trying to make.
Future hardware is a different matter - both precision and programmability are increasing in leaps and bounds. This is pretty traditional for graphics hardware, and has been so since the start of the field - machines get more and more programmable until they start to become fully programmable, then the lowest levels get replaced by faster, fixed-function hardware, then the cycle repeats itself for those levels. Newman was talking about it in the mid-70s, and it's been pretty much the case from then on.
We are getting to the point now where we will be able to put enough flexibility and functionality on a chip to run Renderman pretty much natively on the graphics chip, but not with the stuff the article was talking about. I'd give it 18 months to two years.
Re:Will not happen anytime soon.. (Score:2)
- Limited precision of intermediate results -> restricted space of implementable algorithms
Granted, for today's hardware at least. Although float precision (coming with NV30 & maybe R300) still isn't up to the accuracy levels required by some algorithms, it's good enough for the great majority, and certainly enough to make nearly anything possible, if not perfect. In any case, way better than the 8/9/10/12 bit integer hardware out there now.
- Very restricted data addressing modes -> you need to build lookup-tables at run-time which can eat into your performance for certain algorithms
Yeah, but performance isn't really the issue, so much; chances are it's still going to be way faster than executing on a CPU. And if not, well, GPUs are increasing speed at 3x the rate of CPUs...
- Difficult to implement conditional tests
Harder, yes, but possible using the stencil buffer. The compiler can take care of implementing it. Perhaps inefficient, but see above point.
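The stencil-buffer conditional mentioned above amounts to a masked select: render both branches everywhere, then let a 0/1 mask decide which result survives per pixel. The Python below is only an illustration of the idea, not actual GPU code:

```python
# Branchless per-pixel conditional: compute both sides for every pixel,
# then a 0/1 "stencil" mask picks which result survives in each pixel.

def conditional_via_mask(cond, then_vals, else_vals):
    mask = [1.0 if c else 0.0 for c in cond]   # the stencil write pass
    return [m * t + (1.0 - m) * e              # masked combine pass
            for m, t, e in zip(mask, then_vals, else_vals)]

out = conditional_via_mask([True, False, True], [10, 20, 30], [1, 2, 3])
# selects the "then" value where the condition holds, "else" elsewhere
```

This is why the parent calls it inefficient: both branches are evaluated for every pixel, unlike a true branch on a CPU.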
There have been numerous papers on running Renderman shaders in hardware. (Very) simple ones are possible with just register combiners, but the upshot was that, with dependent texture lookups and float pixel support (available now and by the end of the year, respectively), full Renderman support was possible.
I'd guess that SIGGRAPH this year will see a few very interesting RT shader examples running on "unannounced" hardware, and by next year it'll pretty much be a done deal. In 18 months to two years, it should be more than possible - I'd expect it to have hit mainstream.
Dire Straits (Score:2)
"I want my, I want my, I want my mtv..."
Now *there* is high tech animation.
Seriously though, neither the geometry nor the resolution of even the most cutting-edge graphics cards is anywhere near the level required to produce high-quality images, especially images that wouldn't turn to crap on a typical movie screen. For the mainstream this just wouldn't cut it... Imagine the jaggedness and polygon count on your monitor scaled to a theater screen... scary.
And for the people who would appreciate this sort of thing and would enjoy watching or seeing what can be done with a restriction on polygons and resolution, there is always the demo scene, dedicated to showing off what they can do at every level, from pure processor load to the entire system, in realtime. As for movies, I remember watching a couple of films written as Quake demos; I presume this is still happening somewhere on some level.
This appeals to some people, but those people are already served...
Re:Dire Straits (Score:1)
Digital Theatre (Score:2, Insightful)
Re:Digital Theatre (Score:2)
Not really programmable enough yet. (Score:4, Insightful)
Stuff like DX8/9, which the gfx chip companies design to, is a very very small subset of what Renderman specifies. I suppose in theory you could build a tool that split shader work between the main CPU and the gfx card, but, I really don't think it would be worth the effort.
That's not to say that future hardware won't be able to do this kind of thing, but I'm not going to violate any NDAs on Slashdot
Come back and ask the question again in 18 months or so.
SORRY: TOY STORY 3 WILL NOT BE SHOWN (Score:4, Funny)
What the hell? (Score:4, Insightful)
Replace Renderman with a fuckin' PC video card? Maybe if the folks at LucasArts were weaned on paint thinner.
This sounds like your typical PC blowhard who believes his DVD player, Playstation, telephone, and eventually his computer will be replaced by a graphics accelerator.
Hey, you might need some justification for dropping $400 on that latest waffle iron from ATi, but you'll get none here.
And as for "improv animation," blow it out your ass. The reason that company quit is that it looks like shit. The closest you're going to get to that is games like Samba de Amigo and Dance Dance Revolution.
Lastly, Mr. Dean Siren, what's your relationship with Strange Company and Machinima? Cause this sounds an awful lot like a puff piece from a PR flack...
There's more to an actor than being on stage (Score:2, Insightful)
Part of the reason is that people wish they could be like that. Who will be able to live vicariously through a computer program (Slashdot crowd excluded)?
Jason
Cg is not just real-time (Score:1, Informative)
So it is in fact conceivable that we can see professional pre-rendered animations done using Cg.
Looks like you people don't know what you're talking about. GPUs and shader languages are independent.
Regards, Guspaz.
Re:Cg is not just real-time (Score:1)
They'll probably transition to it, but very slowly (Score:2, Interesting)
The immediate advantage in Cg is allowing independent film makers to make special effects more easily and faster than before. It helps the push towards giving computer animating power to the masses. But this doesn't mean that computers will replace actors anytime soon. Think of what will happen to Entertainment Tonight and Access Hollywood!
I see this as beeing.... (Score:2)
Take a real-time rendering system and a complex 3D matrix plotter and combine them and you can have real-time digital actors modeled by RL people.
Add a lot of CPU power and a genetic algorithm, and the computer should be able to, after some time gathering information, imitate the "recorded" actor, much like voice recognition learns your voice patterns.
Re:I see this as beeing.... Woops (Score:2)
Farms Still Needed (Score:1)
So we want renderfarms in theatres? (Score:2, Interesting)
Even rendering the sound in realtime just doesn't sound feasible. Csound couldn't do a whole orchestra with voice modeling and effects in realtime on even most nice clusters...
Moreover, will the audience care? It's not like the CG actors are going to 'screw up' so it's not interesting like seeing a play. I personally don't see the point.
Well, the answer to the question of the topic is......
Never, or when Star Trek and Holodecks become reality. It's just not feasible, and with 40-70 hours a frame for current movie renders, you can't move that to 29.97 frames per second just for the sake of it being realtime...
Using Cg/GeForce/Radeon doesn't imply Improv (Score:4, Insightful)
However, this doesn't imply that the rendering by the graphics card will be real-time. Renderings per frame may drop to minutes instead of hours, but it probably won't be interactive. Also, the same amount of work by artists tweaking animation and doing post production still applies. Basically, graphics hardware will replace 1 portion of the pipeline, not the entire thing. It will probably be many years before hardware can generate really convincing photorealistic images at interactive rates (don't listen to the marketer speak of graphics IHV's!)
Post-production will always exist, it's not like it was invented with CGI. They use post-production techniques on live-action film sequences as well, why would it be any different if the CGI was generated in real-time (like camera photography already is).
Re:Using Cg/GeForce/Radeon doesn't imply Improv (Score:2)
You have no idea how wrong that statement is. The first half, that is. The second half is perfectly accurate.
People in the visual effects/animation business have something they call "Blinn's Law", which is the flip side of Moore's Law. It states: renders will always take the same amount of time. It's true. On average, computing frames for Monsters, Inc. took about the same time as frames for Luxo, Jr. did. The reason is that audiences' expectations increase at about the same rate as the power of hardware. Yes, eventually we may well have prman in your graphics card, but by then, the CG films of the turn of the century will look quaint at best.
Hell, Toy Story already looks quaint.
Applications (Score:2)
Um, so that's all I can think up. I'm goin' to get some chili. (Another reason for virtual videoconferencing....)
Re:Applications (Score:2)
I'm having even more horrible images of hundreds of (lit, wooden) torches being waved from side to side in the air during the lute solo.
Damn you!
Improv CG's been an art form for years. (Score:1)
you better remind yourself [hornet.org].
The 4k demo contests have always been the pinnacle (IMO) of art, as not only did you have a visual experience but also the wonderment of how much was packed into a 4k executable. It was art in design and programming.
And all done with typical PC hardware. No fancy render farms. Hell, FC's Second Reality ran on a 386!
And now look towards all the work being done with Flash, especially with respect to animation. But I think the author of this post means to focus on realistic animation.
I fondly remember the TOYS demo (Score:1)
Does he know the difference? (Score:1)
nonsense (Score:1)
Won't work until there's *real* VR (Score:1)
Now, real-time rendering, even if it wasn't production-quality, could change this. Just giving the actors decent HUDs so that they could actually talk to the CG creatures would help a lot. Real force-feedback stuff would mean that they could actually touch each other. This is what we need for cg to replace real actors. Then it would really be the Future, what with Virtual Light and etc. being reality. (And we'll have flying cars, damnit.)
It's not improv (Score:1)
Never Be Replaced... (Score:2, Interesting)
Computer graphics and rendered animation aren't replacing live human actors. If motion capture and voice-over are used, you're still going to see the actor's or actress's unique style in the finished product... I'm picturing some of the characters in Shrek/Toy Story (2), and how they were obviously very digital, aside of course from the voice-overs... If motion capture is used, the emphasis will be squarely back on the actor animating the character. If Jim Carrey were the actor behind a character in one of these new motion-captured productions, he would be instantly recognizable because he is such an animated person to begin with. If the digital character is animated by his motion-captured movements and vocalized by his voice-overs, it would be 100% classic Carrey, and wouldn't come close to putting human actors out of work -- if anything, this would force actors to develop new strengths and talents to make their animated characters -- which *they* animate through motion capture suits -- come to life!
When a gpu can.... (Score:2)
When a GPU can handle 20+ million polys with 4k textures on them... and 600+ MB scene files.
And 2gb of system ram.
If you look at what a cpu based render has to handle and all the files it has to generate, it would have to be an extremely specialized machine that would cost an extreme amount of money. I would rather throw my money at more dual 2.2 Ghz P4 rackmounts.
They're proven to work. And they're standard hardware, so anybody that makes software will support them.
It is all pretty much a pipe dream to get realtime renders at the quality needed for film. As soon as that happens, I am out of a job. The amazing thing about CG Studios is that they keep raising the bar as hardware comes out... so the faster the machines the heavier the scene is.
It's not that artists are getting much better so much as that machines are able to handle more.
-Tim
CRASH!! "I'm sorry but.." (Score:1)
Or Better yet
"Part of our cluster is now out... but instead of compromising the movie and showing this prerendered reel, we will show the movie at 1 frame/sec for the rest of the movie, LoTR will now finish in 27 days..."
Come on, people, we can't even get digital screens and real THX in all theatres, let alone renderfarms. Machinima is crazy...
Not viable (Score:2)
Yes, I've heard him talk, and I know he's not addressing exactly what this article talks about. What I'm saying is that the task of making computer animation truly realistic is more difficult than we are capable of, using the most advanced tools available today. That, to me, means that it's much, much more difficult to do it in real time using algorithms and hardware that is much less sophisticated.
Can you do something cartoon like? Certainly. Look at Clippy. Can you make it believable and real? No.
Missing persons (Score:1)
Getting closer, but still have a long way to go (Score:1)
For one thing - programmable shading. Programs like PRMan and BMRT support programmable shaders - which are incredibly important for photo-realistic effects. They also are expensive in terms of processing, which is why realtime is going to have some problems with them.
Another thing is resolution. I don't remember what resolution the images need to be for film, but I think that it's pretty high. More pixels = more processing.
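For what it's worth, the pixel math is easy to sketch. The figures below are my own illustrative assumptions (2048x1556 is a commonly cited "2K" full-aperture film scan size; 640x480 is just a typical game resolution of the era):

```python
# Compare per-frame pixel counts (resolutions are illustrative assumptions).
film_w, film_h = 2048, 1556   # "2K" full-aperture film scan
game_w, game_h = 640, 480     # typical game resolution of the era

film_pixels = film_w * film_h
game_pixels = game_w * game_h
print(f"film frame: {film_pixels} px (~{film_pixels / game_pixels:.0f}x a game frame)")
print(f"at 24 fps: {film_pixels * 24} px/s to render")
```

So even before shading complexity enters the picture, a film frame is roughly an order of magnitude more pixels than a game frame of the time.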
Realtime effects for games are getting to be stunning, but motion pictures are another thing entirely.
not near the same quality (Score:1)
Cg quality is nowhere near the quality of modern ray-traced images and movies. When Cg can produce an image like this [irtc.org], then maybe things will change.
Already being Tried! (Score:2, Informative)
I read this and immediately remembered when Brian Henson (Jim Henson's son, of Muppets fame) came and gave a talk at my school last year.
One of the things "The Creature Shop," the company he runs, is working on, is digitally animated puppets which are played in real-time the way that a normal puppet would be. He didn't give too many technical details then, but I found this press release, check it out:t ml [henson.com]
http://www.henson.com/company/press/html/060601.h
Sure, if you want a crappy movie (Score:2)
It would probably work for making some crappy Saturday-morning cartoon (I could've sworn I saw one once that was doing something like that), but for good-quality animations, you need good animators.
What's more interesting is the work on physics-based animation. Again, you won't get good movies out of it, or even "realistic" human characters, but it will be a big advance for games -- though I doubt it will make a dent in the demand for good animators.
They have, where it makes sense. (Score:5, Insightful)
It's not at all clear to me that Cg provides any advantage over OpenGL used from C/C++ for the sort of work that the high-end studios do.
The vanilla CPUs in render farms and the software renderers that run on them could be replaced by hardware rendering for the lower-quality work, but never for the highest. First, the render farm doesn't need the real-time facility of the GPU - the part the GPU does best, and the part that contributes most of the cost to the GPU. The render farm just needs to render a frame to disk, and can do this more cost-effectively with a software renderer and a general-purpose computer. Second, the GPU isn't as extensible as the software renderer, because it's cast in silicon. There will always be an effect you want that the GPU can't handle. And then, the GPU is built to render video fast, and trades off many aspects of the rendering algorithm that we really want when we render to film.
You will, however, see all of the studios buy arrays of GPUs for making rushes. These are less-than-full-quality playbacks that they use to review the animator's work-in-progress before final rendering. If we got some really fast programmable gate-arrays, or GPUs with documented and programmable microcode, we could use them as a GPU is used, but in a way that might support the highest-quality rendering.
Pixar tried to make high-speed hardware for years, and we always found it to be a losing game. I wrote microcode for one of these beasts, a parallel bitslice engine that inspired today's MMX instructions. We could not keep up with the development of vanilla CPUs, and the CPUs ended up being more cost effective.
Re:They have, where it makes sense. (Score:3, Informative)
But when done, the system takes over the GL display and the frame sections are copied off the GL buffer onto disk... at least in Maya, if I remember correctly.
For the most part, we will do those for animation checks, but every night the animator will still have a flip rendered of their work at that stage. The nice thing about only doing one movie at a time is that all the renderboxes are dedicated to what stage of production you are in, so the artists can get actual renders back instead of hardware approximations.
Also due to the way we do mouths, we need flips to see mouth animation on the veggie characters.
-Tim
This is old hat, it is puppetry. (Score:4, Interesting)
Real-time rendering of CG puppets has been done by Brad de Graf, now at Dotcomix [dotcomix.com] and several other people over the years; but it's never been easy or particularly successful.
Real-time capture of data for later non-real-time rendering is much more common. Graham Walters and I did the Waldo puppet [umbc.edu] for The Jim Henson Hour back in 1988. One might also consider the motion-capture technology now widely used in visual effects production as a type of whole-body puppetry -- the robots in the latest Star Wars movies are animated by having people perform the parts, and then capturing that motion.
There may be a future in multi-track puppetry, where you can lay down a track at a time, each pass recording a few more parameters until you get the whole sequence done. This would, of course, be analogous to multi-track audio recording. But recording a whole complex character in real time would mimic puppetry, with all of its limitations and flaws, only more expensively.
thad
Rustboy (Score:2, Interesting)
Evercrack? (Score:2)
It's the people acting out parts and taking quests that could be entertaining, if viewed from afar. Also, since the world is so big, you could have an interactive component for the "viewer" to play god and jump about watching what everybody is doing....
An Odd Lack of Vision Here (Score:5, Insightful)
Remarkable.
Technically savvy people, of all people, should realize that simply because Farscape-style special effects cannot be done in realtime today with today's low-end consumer GPUs doesn't mean the concept of 'live performance animation' as such is flawed at all.
First, much lower-quality 'live performance' animation is possible with today's consumer hardware, and the improv aspect alone makes it an art form worth pursuing in and of itself. The possibility of algorithmic and technical enhancements that could be driven, or at least explored, by such an art form makes it a worthwhile endeavor as well.
Second, in another 5 or 10 years (at most) it will almost certainly be possible to do live-performance, Farscape-quality digital animation (assuming the technological development of the computer hasn't been brought to a standstill through stupid legal 'innovations' like DRM and Palladium). While movie makers would likely simply add this to their set of tools and not replace post-production entirely, the ability to create 'live theatre' digital productions and interactive, perhaps even immersive, two- or multi-way environments, if not completely synthetic realities, is an intriguing one, to say the least. It is certainly a worthwhile endeavor whether or not Hollywood can make use of the technique in its movie productions. Indeed, such systems could well render the movie as obsolete as the live stage play is today: in other words, no longer the main popular attraction, but a continuing art form valid in its own right, if no longer the center of public attention.
8 years ago I was at the U of Illinois' virtual reality lab and had an opportunity to play around with some of the simulations they run, including one which allows the viewer to explore a three-dimensional (immersive) grey-scale view of the mega-structure of galaxies in the universe (to study large-scale structures such as strings of galaxies, etc.).
8 years later I can explore the universe in living color on my GNU/Linux box running Celestia, in 1920x1200 24-bit color, in realtime. While it isn't immersive 3D VR just yet, it is much higher resolution and full color, and while I can't explore the farthest reaches of the universe, I can explore the immediate galactic neighborhood in incredible detail (much greater than the old simulation allowed). All of this on a $400 Nvidia card, running a free operating system on commodity hardware.
So, in other words, dismissing this possibility simply because you can't do it with perfect, photo-realistic effects today shows a remarkable lack of vision, and a blindness to similar leaps in technology that we've all been taking for granted for the last decade or two. We will be able to do this sort of thing, photorealistically, much sooner than most people probably realize, and the art form can be pursued long before the final polish is available.
I don't see any lack of vision here ... (Score:2)
The thing is, that the original poster has a tone of "we've been doing real time animation for quite some time" and asking why movies studios are not going in that direction. The answer to that is pretty obvious too.
Re:An Odd Lack of Vision Here (Score:2)
Think of something between a theatrical play and a puppet play, photorealistic enough that you can't tell the digital actor from the real thing, broadcast across the net in realtime.
The digital equivelent of a stage play, but with all the special effects possibilities of a full length motion picture.
It is a new art form, with parts taken from the stage, from computer CG, and from Hollywood, and could result in some very interesting work.
Dialing for Shrek... (Score:2)
This is a neat idea. (Score:2, Insightful)
The point isn't to render a movie; the point is to use your computer like a canvas to paint on. Only instead of making a picture, you make an animation. Maybe you use one computer (or many) to control it, and then feed all the control data to a main computer system that renders it in real time for the audience to see. Maybe you've got 10 people controlling monsters, beasts, and other imaginary characters, with people doing their voices (and probably also controlling the facial animations at the same time). Think of how TV is done, with many camera men, a control booth that splices all the sound together from different sources, and the guy whose job it is to overlay titles on the screen and do transitions between show segments. Just replace the 'real'-life actors with computer-generated ones.
It would be easier to do a cartoon style show because people prefer actual actors to computer generated ones.
As you mentioned, games already do this ... (Score:2)
Are you listening, game developers?
ANIMATE YOUR CUT-SCENES WITH THE GAME'S OWN GRAPHICS ENGINE WHENEVER POSSIBLE!
Sure it has. The demo scene has been around for decades. First they were doing 3D w/o graphics hardware assistance at all on 286's, then 386's, 486's, 586's, Amiga's, etc. Nowadays, the demo scene seems much smaller, but they do use the 3D graphics cards to make much more elaborate demos. Funny, however -- they don't *seem* that much more impressive than they did. (I've probably just been jaded by modern games. And I'm probably not the only one, which might explain the smaller demo scene.)
Re:As you mentioned, games already do this ... (Score:2)
For a game like Age of Empires, with no 3D game engine, that's not much of an option, but for something like Deus Ex, it works wonderfully.
Blizzard has been an exception. They do a very nice job on their cutscenes -- even nice enough that they warrant the DVD included with the Collector's Editions. But you'll notice that they're animated. So far, most of the live-action cut-scenes that I've seen lately have been crap. (`Red Alert 2' did a pretty good job, however -- far better than `Tiberian Sun'.) Not just that the acting and actors were crap, but the compression they've used is usually very noticeable.
Warcraft III seems to use a pre-rendered intro (and possibly endgame), and in-game-engine cinematics. The intro you see once; the in-game-engine cinematics you see every single mission. And the in-game-engine cinematics are just perfect for the story. Warcraft III also fits on one CD. You don't think that would happen if they used pre-rendered cinematics between every mission, do you?
This takes me back (Score:2, Informative)
Anyways, the other really big thing was the motion-captured, live 3D actors. They'd project an avatar of someone up onto a big screen, and have them try to hold conversations with you and the like. It was actually kind of annoying.
=Brian
Simpsons Gag? (Score:2)
There's more to do than rendering (Score:2, Funny)
Maybe In X-Years (Score:2)
On a simple level, look at how much staggeringly more powerful your PC is than the one you used to run Word Perfect 5.1 for DOS on. "Imagine how quickly your Word Processors will run in the exciting future when PCs break the 1 GHz mark." What, your Word Processor got slower because it's now bundled with a million other tasks that might only be used by 1% of users, 1% of the time but are still considered essential enough for everyone to have a [relatively] recent version?
Now the same's true for 3D graphics. You could probably render all of Tron in real time now. You just wouldn't want to, because Luxo Jr. came along and raised the bar, then Toy Story, Shrek, Final Fantasy and so on. While modern hardware can do the slow tasks of ten years ago in real time, today's slow tasks are even slower than the ones from ten years ago, as we come up with new concepts like trying to make milk and the surface of the skin handle lighting properly.
So, don't get your hopes up for doing the latest movie on your PC in ten years' time - the movie guys will always push for better algorithms and new techniques, and need more power to run them. What you will see - and what you're already seeing - is having close to the power of the renderers of maybe five to ten years ago on your home PC. That being the case, you'll probably see Matrix-quality games in another five years or so.
As others have pointed out, digital real time puppetry exists already, just not of the same quality as the pre-rendered stuff. But that's another topic.
Machinima isn't about replacing render farms! (Score:3, Interesting)
The person who posted the question misunderstands the purpose of the concept of Machinima, and all the higher-moderated comments lambasting the idiocy of "replacing a render farm with a video card" are missing the intent of the question, as a result.
Machinima means real-time 3D movies. "Real-time 3D" in this case means using a rendering engine capable of putting the 3D content on the screen fast enough to appear animated. This usually means a game engine, like Quake, Unreal or Lithtech. I think the Machinima.com website blows (terrible design and layout, no helpful information, not even decent forums), so I won't link to it.
Those two points are about as far as machinima creators agree. The "point" of Machinima as a form of filmmaking can be one or more of the following (depending on whom you're asking):
Machinima is about all of these things, and perhaps even others. Right now, it's a niche art form, due to the high technical barrier to entry (catch #1), and the typical desire for large amounts of custom content that a budding director wants but often can't provide himself (catch #2). This kills a machinima film in two ways: an artist would rather build his own models for his own traditional 3D movies, and no one wants to download 500MB of new content to watch a single 500K movie.
I think a live (improv, as the poster put it) animation production might be very interesting. It could also suck. Machinima, like hammers, guns and DeCSS, is just a tool. What matters is what you do with it.
Digital Puppetry (Score:2, Interesting)
http://www.illclan.com/
-B
Yeah... (Score:2)
OpenGL uses a client-server rendering model. But I don't think you can spread the rendering over a bunch of clients. It's gonna take more than one computer to render even one frame of the next Pixar movie (whatever that may be).
The point of GPUs is to offload stuff from the CPU so that it is free to do things like AI and physics. In animated movies you don't really care where the calculations take place, as long as they finish relatively quickly.
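One common way farms do divide the work is spatially, cutting each frame into tiles that independent machines can render. This is a minimal sketch of the tiling idea only; the function name and tile size are illustrative, not any real farm's API:

```python
def tiles(width, height, tile):
    """Yield (x, y, w, h) rectangles covering a width x height frame,
    so each tile can be handed to whichever render node is free."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            # clamp edge tiles so the grid exactly covers the frame
            yield (x, y, min(tile, width - x), min(tile, height - y))

# A 2K film frame cut into 512-pixel tiles becomes independent jobs:
jobs = list(tiles(2048, 1556, 512))
print(len(jobs))
```

Each job is self-contained, so the frame as a whole can be spread across many machines even though no single machine renders it alone.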
This is already happening.. (Score:2)
sampling (Score:2)
The one thing, however, that I see blocking the use of GPUs for general-purpose high-quality rendering is sampling (the technique of avoiding aliases by low-pass filtering the scene at various stages of rendering). All of the GPUs I have seen are limited to dumb box filtering of texture and pixel samples (i.e. calculate the color at several points inside a region and average the results). The best software renderers do a much more careful job of suppressing the bad high frequencies while keeping the good low frequencies (e.g. using a several-pixel-wide Gaussian or windowed-sinc filter). While these methods are more computationally expensive than the box, they are much better low-pass filters. It makes good sense to choose them for final high-quality rendering.
High depth (10-16 bits/component) framebuffers are another necessity, but I hear they will be available in hardware very soon...
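To make the filtering point concrete, here's a small toy comparison of the two kinds of kernel described above. This is my own construction in plain Python; a real renderer's filtering machinery is considerably more involved:

```python
import math

def box_weights(n):
    # dumb box filter: every sample inside the region counts equally
    return [1.0 / n] * n

def lanczos_weights(radius, n):
    # windowed sinc (Lanczos): a much better low-pass filter
    def lanczos(x):
        if x == 0:
            return 1.0
        if abs(x) >= radius:
            return 0.0
        px = math.pi * x
        return radius * math.sin(px) * math.sin(px / radius) / (px * px)

    # n taps spread evenly across the filter's [-radius, radius] support
    xs = [-radius + 2 * radius * (i + 0.5) / n for i in range(n)]
    w = [lanczos(x) for x in xs]
    s = sum(w)
    return [v / s for v in w]   # normalize so a flat field stays flat

# Unlike the box, the windowed sinc has negative lobes, which is what
# lets it cut the bad high frequencies sharply without blurring the good
# low ones:
print(min(lanczos_weights(3, 12)))   # negative
print(min(box_weights(12)))          # positive
```

The negative lobes are also why the wider filters cost more: every output pixel must gather samples from a several-pixel neighborhood instead of a single pixel's footprint.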
Realtime and offline rendering ARE converging (Score:5, Informative)
The current generation of cards do not have the necessary flexibility, but cards released before the end of the year will be able to do floating point calculations, which is the last gating factor. Peercy's (IMHO seminal) paper showed that given dependent texture reads and floating point pixels, you can implement renderman shaders on real time rendering hardware by decomposing it into lots of passes. It may take hundreds of rendering passes in some cases, meaning that it won't be real time, but it can be done, and will be vastly faster than doing it all in software. It doesn't get you absolutely every last picky detail, but most users will take a couple orders of magnitude improvement in price performance and cycle time over getting to specify, say, the exact filter kernel jitter points.
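A toy illustration of the multipass idea (this is my own simplification, not Peercy's actual compiler): a shader expression gets evaluated one blend operation per pass, the way fixed-function hardware accumulates results in the framebuffer:

```python
# Toy multipass evaluator: each "pass" applies one blend op between the
# framebuffer and one operand (a texture or constant), mimicking how a
# shader expression can be decomposed into fixed-function passes.
def run_passes(passes, width=4):
    fb = [0.0] * width                       # framebuffer starts cleared
    for op, operand in passes:
        if op == "load":
            fb = list(operand)
        elif op == "mul":
            fb = [a * b for a, b in zip(fb, operand)]
        elif op == "add":
            fb = [a + b for a, b in zip(fb, operand)]
    return fb

# Kd * texture * light + ambient, decomposed into four passes over a
# 4-pixel buffer (all values are made-up illustrative data):
texture = [0.2, 0.4, 0.6, 0.8]
light   = [1.0, 0.5, 0.5, 1.0]
ambient = [0.1] * 4
Kd      = [0.9] * 4

result = run_passes([("load", texture), ("mul", light),
                     ("mul", Kd), ("add", ambient)])
print(result)
```

A real decomposition must also deal with intermediate precision and texture-dependent reads, which is exactly why floating-point framebuffers are described above as the last gating factor.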
There will always be some market for the finest possible rendering, using ray tracing, global illumination, etc in a software renderer. This is analogous to the remaining market for vector supercomputers. For some applications, it is still the right thing if you can afford it. The bulk of the frames will migrate to the cheaper platforms.
Note that this doesn't mean that technical directors at the film studios will have to learn a new language -- there will be translators that go from existing languages. Instead of sending your RIB code to the renderfarm, you will send it to a program that decomposes it for hardware acceleration. You will get back image files just like everyone is used to.
Multi chip and multi card solutions are also coming, meaning that you will be able to fit more frame rendering power in a single tower case than Pixar's entire rendering farm. Next year.
I had originally estimated that it would take a few years for the tools to mature to the point that they would actually be used in production work, but some companies have done some very smart things, and I expect that production frames will be rendered on PC graphics cards before the end of next year. It will be for TV first, but it will show up in film eventually.
John Carmack
Replace actors? (Score:2)
The motion picture has not replaced the stage.
The television has not replaced film.
The record has not replaced concerts.
In fact, I don't know of any new artistic form that has replaced another. Computer generated characters are different from live actors, and always will be.
Improv Animation (Score:2)
On the other hand, if you don't have a decent script, dialogue, and direction (ie, ATOTC), even with the best of digital and a cast of good actors you might as well save your money and go home...
game video boards inadequate for movies (Score:2)