Graphics Software

Improv Animation as an Art Form?

Dean Siren asks: "When will mainstream moviemakers, such as Lucasfilm, finally replace their render farms and Renderman with a GPU (GeForce or Radeon) and Cg based renderer? Would the savings in equipment cost and rendering time be worth the learning curve? Is anyone developing such an app? We've had the tech for years with video games, but the art form hasn't really been tried. Is anyone working on this now?" An interesting thought, and one that puts a new spin on the old computers-will-replace-actors argument. It also demands good planning ahead of time, since there would be no "post-production" stage in which to clean up mistakes and make the minute adjustments needed to get things just right. Do you think such an art form will ever catch on in Hollywood, or will small shops have to pioneer it before others follow suit?

"There's a forum called Machinima whose main idea is that not only should the final rendering of a movie be generated in real time, but so should the animation, implying that computer animation should be performed, maybe even improvised, live by motion captured voice actors. Accomplishing this goal would require replacing not only Renderman but Maya and Softimage as well. A developer named Strange Company took the challenge and started writing an app in this direction called Lithtech Film Producer (interview here). They even made easy porting a high priority. But they soon realized that they were tiny and the project was huge so they quit. But the idea of improv animation is full of potential."

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Well, the advantage of the render farm is that the machines work in parallel to produce the final result. You had better be able to use a lot of GPUs in parallel to get all of that out of one box in real time. Or maybe you could if you put GeForce 4s in all the boxes in the render farm, though that wouldn't help costs or heat issues any.
  • But will they go on strike when we turn them off?
    • Anyone know of research into modeled voice synthesis? I don't mean wavetables and phonemes - I mean physical modeling of the windpipe/vocal cord/chest/nasal interactions that determine voice tone and inflection. Code up an engine to do this in real time, one you could train to take inflection cues from a reference voice, and you could have voice impersonations, just as they've proposed to graft a digital makeover onto actors. Or you could apply this technology to games and make a killing...
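      For illustration, here is a minimal source-filter sketch of the general idea -- not real physical modeling of the vocal tract, just a crude formant synthesizer, with every rate, pitch, and formant value below made up for the example:

```python
# A much-simplified source-filter sketch (NOT true vocal-tract physics):
# a glottal-like pulse train is shaped by a few resonators standing in for
# formants. All rates, pitches, and formant values are illustrative only.
import numpy as np
import wave

RATE = 16000                                     # sample rate in Hz
F0 = 110.0                                       # fundamental (pitch) of the "voice"
FORMANTS = [(730, 80), (1090, 90), (2440, 120)]  # rough /a/-like (frequency Hz, bandwidth Hz)

def resonator(x, freq, bw, rate=RATE):
    """Two-pole resonator: a crude stand-in for one vocal-tract resonance."""
    r = np.exp(-np.pi * bw / rate)       # pole radius from bandwidth
    theta = 2 * np.pi * freq / rate      # pole angle from center frequency
    a1, a2 = 2 * r * np.cos(theta), -r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + a1 * y[n - 1] + a2 * y[n - 2]  # y[-1], y[-2] read the still-zero tail
    return y / (np.max(np.abs(y)) + 1e-9)

def synth_vowel(duration=1.0):
    n = int(duration * RATE)
    src = np.zeros(n)
    src[::int(RATE / F0)] = 1.0          # impulse train as a crude glottal source
    out = src
    for freq, bw in FORMANTS:            # cascade the resonances
        out = resonator(out, freq, bw)
    return out

if __name__ == "__main__":
    samples = synth_vowel()
    with wave.open("vowel.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes((samples * 32000).astype(np.int16).tobytes())
```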
  • what's the point?? (Score:2, Insightful)

    by BeazleyR ( 179862 )
    If all animations were done live with motion capture, what would be the point of even making them animations? Tell me how you could do movies like Ice Age, Toy Story, and A Bug's Life with motion capture. Animation takes time and talented people. There are many interesting animations, with animals and the like, that could not be motion captured. A trend like this would be horrible.
    • The point isn't to replace high end animation, but to allow the little guy to quickly make computer animated movies. Sort of like low budget "indie" movies compared to Hollywood blockbusters.

      We'd be able to see a lot more computer generated worlds created by people with a small budget. We'd be able to see a lot of creativity expressed that would otherwise be suppressed for lack of funds.

      We'd be able to see a lot of gay CG cowboys eating pudding.

    • Yeah, I really don't see much of a point in this either. From what I know, the current way of doing things works, and there are some pretty amazing animated movies out. I'm actually a bit confused as to what they want to happen. Do they want me to go to an animated movie where, offscreen someplace, there are real actors moving around while the computer renders this in real time and shows it to me up on the screen? That is stupid, and I hope I'm just confused.

      Using a render farm has a lot of pluses. Look at some of the LOTR features that show them animating orcs alongside real humans; I'd love to see them (Machinima or whatever) do that with a tenth of the quality using a bunch of GeForce cards and Cg. As far as I understand it, the animators and modelers will have pretty nice machines. I was able to play on an SGI cluster; it ran IRIX, and they used Lightwave to model and animate. Once they had animated something, though, they would just send the render job to the rest of the machines, which would crunch away at the numbers for a while. I'm assuming that when it is rendering, it is using CPU power and not the video card, but I may be wrong. I really don't think that if you took a bunch of GeForce chips and did something fancy with them, you could even come close to the quality of the movies mentioned in the above post.
  • I don't think this will be accomplished with current technology. While realtime rendering is very advanced, the detail and control over an animated sequence such as you see in Toy Story or in Star Wars is still way in the future. I don't think the GeForce is up to it. You would still need some massively parallel processing to be able to do that kind of imagery in real time.
  • HUH? (Score:5, Insightful)

    by furiousgeorge ( 30912 ) on Thursday June 27, 2002 @04:12PM (#3781630)
    Is it a slow news day or what???

    In a nutshell, this topic makes zero sense.

    Nobody is going to drop PRman for Cg anytime soon. Why? Because they have two different target markets and address two different needs.

    Talk to somebody like ILM or Pixar that's doing renderings that take 70 hours a frame (as some of the frames for Toy Story 2 did) and talk about real-time cards. They'll have a good laugh and say "go away, kid".

    Can these cards handle anti-aliasing like RM can? No.
    Can these cards handle DOF like RM can? No.
    Can these cards do programmable shading like RM can? No.

    These cards are designed to do graphics real time with the best quality they can squeeze out while still hitting their timing targets. RM is meant to get the best possible quality - and who cares about time?

    This is a silly pointless discussion. Yes, in 10 or 20 years maybe the hardware will be there, but it isn't now and you sound silly making speculations like these.
    • I agree with everything you wrote.. except your time estimate.

      5 years (for TS2 quality)
      • nah - wish I could agree with ya, but I don't see it happening. TS2 quality was on the order of a gig of data to crunch for the average frame........ not even talking about the really killer frames, plus loading textures, yadda yadda yadda.

        It's coming, but having a card that can swallow that kind of BW and not burst into flames is still a ways off.

        But when it arrives I'll be the first person in line :)
        • It's coming, but having a card that can swallow that kind of BW and not burst into flames is still a ways off.

          A basic PCI bus can carry about 133 MB per second (33 MHz * 32 bits/cycle ≈ 1 Gb/s), and there exist double-speed and 64-bit variants of PCI. The faster 4x AGP runs at roughly 1 GB per second [tomshardware.com]. If each frame requires 1 GB of data transfer, using a PS2-like approach of bringing in each set of textures and then rendering the corresponding triangles, you get 1 fps. Render this on a cluster of 24 machines, and you get the 24 fps of 35mm cinema.
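          Spelled out, that back-of-the-envelope arithmetic looks like this (the 1 GB-per-frame figure is the assumption above, and the bus rates are nominal peaks):

```python
# Back-of-the-envelope arithmetic from the comment above. The 1 GB-per-frame
# figure is the poster's assumption; the bus rates are nominal peak numbers.
PCI_BYTES_PER_SEC = 33.3e6 * 32 / 8     # 32-bit/33 MHz PCI: ~133 MB/s peak
AGP_4X_BYTES_PER_SEC = 1.0e9            # 4x AGP: ~1 GB/s peak
DATA_PER_FRAME = 1.0e9                  # assumed 1 GB of textures/geometry per frame

fps_per_box = AGP_4X_BYTES_PER_SEC / DATA_PER_FRAME  # ~1 fps if the bus is the bottleneck
boxes_needed = 24 / fps_per_box                      # to sustain 24 fps (35mm film rate)

print(f"PCI peak: {PCI_BYTES_PER_SEC / 1e6:.0f} MB/s")
print(f"One AGP 4x box: {fps_per_box:.1f} fps; {boxes_needed:.0f} boxes for 24 fps")
```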

          • If each frame requires 1 GB of data transfer using a PS2-like approach of bringing in each set of textures and then rendering the corresponding triangles, you get 1 fps. Render this on a cluster of 24 machines, and you get the 24 fps of 35mm cinema.

            That's great... except where do you get those textures? You have to calculate them, most times frame by frame. The bleeding edge currently in CG animation is fur and hair modelling -- see Sulley's fur in Monsters, Inc. for an example of last year's Neat Thing. That's all sub-pixel stuff, even at 6000 x 4000 pixel resolution (70mm, not 35mm). Working out the mathematical dynamics of Sulley's hair (collision, wind motion, etc.) sometimes took minutes per frame.

            Most top-notch cinema animation uses ray tracing in the mix of tools, especially for lighting effects, and no existing GPU can run a raytracer in real time, especially not at 4k x 3k x 48 bpp.

            The renderfarms you're talking about replacing with a 24-machine Beowulf cluster consist of four hundred or more Sun workstations, each hammering away 24/7/365. The producers have to allocate CPU time to various segments of the movie just as live-action movie producers allocate studio time or cash budgets. The directors have to cheat all the time to stay within that budget.

            Your suggested system might be suitable for TV -- Max Headroom, maybe, with plastic hair and shiny suits -- but not for the big screen, and not to compete in today's CG blockbuster film market.

    • Re:HUH? (Score:1, Insightful)

      by DLWormwood ( 154934 )
      This isn't pointless. True, Hollywood and "big" production houses won't do real-time animation. But, indie and performance artists may try this.

      Are you aware that there's an active underground "demo scene" of programmers/artists who make cool-looking presentations? Some of the demos I've seen are impressive (and hypnotic), considering that the graphics and music are produced in real time. I'm sure other /.ers can give some links to sites about the scene...
      • Yeah, the demos are great, but really, it can't catch on for the mass public and theatres. I love demos, but who will watch a two-and-a-half-hour demo and pay 8 dollars to do so...

      • Re:HUH? (Score:4, Insightful)

        by kwashiorkor ( 105138 ) on Thursday June 27, 2002 @04:47PM (#3781847)
        The initial question is:
        When will mainstream moviemakers, such as Lucasfilm, finally replace their render farms and Renderman with a GPU (Geforce or Radeon) and Cg based renderer?
        which is just plain stupid. They never will. What film makers need is not even from the same planet as what gamers need.

        The idea though is just plain old.

        It's called puppetry. Real-time animation is another word for digital puppetry. Check out the performance group D'Cukoo (or whatever the fuck they're called). They did this kind of stupid shit many years ago with a digital puppet named Rigby (if I remember correctly).

        I have no idea what the /. editors saw in this post. It must truly be a slow news day.
    • When Max Fleischer's studios did this with cel animation 'way back in the '20s and '30s, they called it Rotoscope. :) I seem to remember seeing a really cool short with Cab Calloway drawn as a dancing figure doing his famous shuffle and singing "Saint James Infirm'ry Blues."

      Then again, I'm a Luddite who really, genuinely prefers cel animation, and if it ever dies out completely, I'm going to take it up for spite.
    • Once again an interesting story is butchered beyond all recognition by the submitter and/or editors. The *replacement* of conventional CGI by realtime proposed in this story is just plain Drano-drinking stupid.

      However, realtime CGI may well be a viable art form all on its own. Think live stage play with CGI screen output. You could have a troupe of mocap artists / puppeteers manipulating a CGI scene, working with live music and improvising during a performance.

      There are a couple of kids' shows that have live CGI hosts. They look like crap for the most part (lousy framerate and awful mocap performers), but the potential is probably in there somewhere.

    • Re:HUH? (Score:3, Informative)

      by donglekey ( 124433 )
      Is it a slow news day or what???

      I have no idea, and while slashdot certainly murdered this topic in the headline, doing production quality rendering using hardware acceleration is a huge HUGE BIG MASSIVE fucking deal and not many people seem to realize it yet.

      Nobody is going to drop PRman for Cg anytime soon. Why? Because they have two different target markets and address two different needs.

      People used PRman in the first place because of its speed and quality. Cg has the speed down pretty easily, and the quality is something that isn't that much harder. Rendering in hardware DOES NOT have to be realtime in order to be beneficial.

      Can these cards handle anti-aliasing like RM can? No.

      Not in realtime, not yet, but it doesn't matter: not only is anti-aliasing becoming a very high priority on 3D card makers' lists, but anti-aliasing can be done by simply rendering the same frame multiple times and blending the results together, at least until the actual card has high-quality AA enabled, which should be in the next generation.
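      As a rough illustration of that multi-pass trick, here is a minimal software sketch (NumPy standing in for the accumulation buffer; the toy render() below is made up for the example):

```python
# Sketch of multi-pass anti-aliasing: render the same frame several times with
# sub-pixel jitter and average the results. Here a toy render() rasterizes a
# hard-edged circle; on real hardware each pass would be a full render with the
# camera nudged by a sub-pixel offset.
import numpy as np

W, H, PASSES = 128, 128, 8

def render(jitter_x, jitter_y):
    """'Render' one pass: a hard-edged circle, offset by a sub-pixel jitter."""
    ys, xs = np.mgrid[0:H, 0:W]
    dx = xs - (W / 2 + jitter_x)
    dy = ys - (H / 2 + jitter_y)
    return (dx * dx + dy * dy < (W / 3) ** 2).astype(np.float32)

rng = np.random.default_rng(0)
accum = np.zeros((H, W), dtype=np.float32)
for _ in range(PASSES):
    jx, jy = rng.uniform(-0.5, 0.5, size=2)  # sub-pixel camera offset
    accum += render(jx, jy)
antialiased = accum / PASSES                 # edge pixels get fractional coverage
```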

      Can these cards handle DOF like RM can? No.
      PRman does a depth-based DOF which can be done in post with a z-buffer. If that isn't high quality enough, the frame can also be rendered in sections, and/or multiple frames can be rendered with slight offsets, etc. There are dozens of ways to make it work.
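      A minimal sketch of that kind of z-buffer DOF post-process, assuming a grayscale frame and a normalized depth buffer (the inputs are toy data, and a production filter would be far more careful around blur boundaries):

```python
# Depth-based DOF as a post-process: blur each pixel by an amount proportional
# to how far its depth is from the focal plane. 'color' and 'depth' stand in
# for a rendered frame and its z-buffer.
import numpy as np

def box_blur(img, radius):
    if radius < 1:
        return img
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def depth_of_field(color, depth, focal_z, max_radius=4):
    # Precompute a few blur levels, then pick per pixel by circle-of-confusion size.
    levels = [box_blur(color, r) for r in range(max_radius + 1)]
    coc = np.clip(np.abs(depth - focal_z) * max_radius, 0, max_radius)
    out = np.empty_like(color)
    for r in range(max_radius + 1):
        mask = np.round(coc).astype(int) == r
        out[mask] = levels[r][mask]
    return out

# Toy inputs: a gradient "image" and a depth ramp, focused at depth 0.5.
color = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
depth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1)).T
result = depth_of_field(color, depth, focal_z=0.5)
```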

      Can these cards do programmable shading like RM can? No.

      Fuck yeah they can! That's the whole point. Where do you think these shader languages came from? Large shaders can always be broken down and rendered in passes.

      These cards are designed to do graphics real time with the best quality they can squeeze out while still hitting their timing targets. RM is meant to get the best possible quality - and who cares about time?

      No, these cards are designed to render images quickly, with quality as a second priority to time. There is a difference. You are implying that they will reduce quality to hit realtime framerates, which is not true. PRman (if that is what you are referring to by RM) was used and still is used because of its quality and speed, speed being a very high consideration, with quality taking precedence. Speed is everything. Speed breeds quality.

      This is a silly pointless discussion. Yes, in 10 or 20 years maybe the hardware will be there, but it isn't now and you sound silly making speculations like these.

      This is about as important as discussions on 3D come. This is as huge as anything that has happened to the 3D industry. This is revolution over evolution. This is the next big step that 3D will take after the inventions of Gouraud shading, Phong shading, Renderman, and hardware acceleration. This will start to happen by the end of the year, not in 20 years. 3D is great now, but it is about to get really, really good.
  • I'm wondering what the final result will look like. I've always felt that actors, even if clad in rubber suits like in Predator, look far better and more realistic than CG graphics. I also feel that CG should just be for the background, or other special effects, never for characters. It's hard for me to 'suspend my disbelief' when I'm watching a scary movie and a computer generated villain walks on the screen.
    • by GuyMannDude ( 574364 ) on Thursday June 27, 2002 @04:24PM (#3781707) Journal

      I've always felt that actors, even if clad in rubber suits like in Predator, look far better and more realistic than CG graphics. I also feel that CG should just be for the background, or other special effects, never for characters.

      I couldn't agree more. I'm really baffled at the constant attempts to shove CGI down our throats. You really can't help but cringe in those scenes in AOTC when Anakin is riding some beast (both in the field and in the gladiator arena). I mean, it's so obviously a CGI effect. It just doesn't move right. And this is LucasFilm -- CGI doesn't get better than that.

      With all the time and money they've spent on trying to improve CGI motion, I would think it could be better spent on developing more realistic and movable costumes. I'm not trolling -- I really want to know if anyone thinks that CGI living creatures have realistic motion.

      As far as I'm concerned, CGI has its place. And it's not for recreating living creatures.

      GMD

      • As far as I'm concerned, CGI has its place. And it's not for recreating living creatures.

        Even if such creatures are extinct or never existed in the first place?

        I have to disagree that the arena beasts in Ep. II seemed unreal; the cat-like creature seemed rather convincing to me. Also, understand that CG is only the most recent "nonliving" technology for doing FX. King Kong et al. were stop motion, and I found them more convincing than Godzilla, which was a man in a costume.

        The only shortcoming of CG right now, IMNSHO, is in modeling human motion and expression. But this is only because we, as humans, have much more experience observing each other than animals, so we tend to be more discriminating. In time, we will learn enough about our physiology to model our actions convincingly up close; CG can already do so at long distance in "crowd scenes."
        • GMD> As far as I'm concerned, CGI has its place. And it's not for recreating living creatures.

          Wormwood> Even if such creatures are extinct or never existed in the first place?

          Yes, I mean anything that is alive in the movie.

          I have to disagree that the arena beasts in Ep. II seemed unreal; the cat-like creature seemed rather convincing to me.

          You're welcome to your opinion. Actually, I was really referring to the "bronco riding" scenes. That's what really stood out for me.

          Also, understand that CG is only the most recent "nonliving" technology to do FX. King Kong, et al, were stop motion, and I found them more convincing than Godzilla, which was a man in a costume.

          Oh, come on. Godzilla was made by Japanese studios on a shoestring budget. And although those movies have been exported all over the world, they are really intended for a Japanese audience. The Japanese just don't care about super-realism. Just look at that kabuki (sp?) theatre! Guys in dark clothing move life-sized wooden puppets around in a play. That's not even remotely realistic, and the Japanese don't care. That's not important to them.

          You wanna see what Hollywood can do with guys in suits? Watch Aliens (the second one) tonight. You can't tell me those beasties don't scare the poo outta your booty!

          In time, we will learn enough about out physiology to model our actions convincingly up close

          You may well be correct. But until that day I just don't want to see any more products from Hollywood's "learning curve".

          Again, this is just my opinion. If you like CGI creatures, your opinion is equally valid.

          GMD

          • Just look at that kabuki (sp?) theatre! Guys in dark clothing move life-sized wooden puppets around in a play.

            Okay, my bad. That's not kabuki theatre. Does anyone know what the hell the name of these Japanese puppet plays is?

            GMD

      • I mean, it's so obviously a CGI effect. It just doesn't move right. And this is LucasFilm -- CGI doesn't get better than that.

        Show me a beast in a Star Wars episode that moves naturally and is not a puppet. One of the reasons recent dinosaur movies look so good is that the studios paid scientists a pretty penny to look into how animals move, and then verified the CGI results with them.

        Have you heard of Lucas doing this even once? You gotta be dreaming.

      • As far as I'm concerned, CGI has its place. And it's not for recreating living creatures

        The most recent attempts at CGI for living creatures have been fairly out of whack... I'm not an expert; I won't even begin to guess why.

        But go back a while. Hell, go back 10 years. Watch Jurassic Park again -- Spielberg mixed CGI and more traditional FX extremely well there. The first scene, with a brachiosaur grazing on treetops, is just amazing. Some of the later scenes with the smaller dinosaurs "moving like a flock of birds" and the T-Rex attacking them are also well done. In fact, a lot of the robotics just looks bad - like the sick triceratops.

        It may be because we've never seen these creatures alive and human interaction with them was nil (all interaction shots were done with robotics, AFAIK). But it's an indication that it can be done right, at least.

        We'll eventually get CGI for living creatures down pat... but not yet.
      • This is, of course, why LoTR is done with a mix of real and animated work - people were surprised when they heard that Jackson was getting lodges built for the halls of Rohan, and wanted to know why they didn't just CGI it. There seems to be a lack of understanding that for lots of material, models, sets, and so on produce vastly superior results, usually at a much lower cost.
    • Blizzard makes all-CGI cutscenes that are better than 99% of the movies out there. The infusion of character into their creations is unbelievable. Blizzard games are worth buying just for their cinematics. It seems more realistic because they've created a whole new world and have made that world vibrant with life.
      • I don't think you can compare Blizzard cutscenes to real CGI movies... except maybe their newest creation, Warcraft III. The cutscenes in Starcraft are ridiculously cartoonish, and those in Diablo II, while better, are still not up to par.

        If you haven't seen Warcraft III yet (though by the content of your post, it sounds like you have)... buy it when it comes out! It's awesome.

        Better yet, just look at their trailers on Blizzard's web site.
  • It doesn't seem to me that this would be a practical alternative. The only advantage I can see in real-time processing is savings in start-up capital. LucasArts and others have already spent massive $ on their rendering farms, so what advantage would they get from switching to real-time rendering?
  • I'm reminded of the classic SNL piece... Deep Thoughts by Jack Handey... or in this case, by Cliff
  • I think rather than having impromptu animation, why not write a rendering engine that would take advantage of the awesome GPUs out there instead of using the processor alone to get the job done.

    Right now the render times for movies are measured in months. Even if your GeForce 4 chokes horribly on each and every frame because they're so big, and you only get maybe 1 frame per second, that's still 2 seconds of video per box per minute.

    Spread that across a render farm of 30 boxes and you get "realtime" rendering, which would make life for the animators much nicer, I would think.
    • Re:Good Idea (Score:2, Informative)

      by evalhalla ( 581819 )

      I think that GPUs (at least nowadays) are too focused on the tasks they have to perform (working with relatively few polygons, applying a few "small" textures, etc., all of this in a very short time) to be useful in a totally different task like film animation, where you have to work with huge amounts of polygons and complex textures, even with special software.

      You also have to add the fact that many GPUs have specialized "special effects" built in, like light effects and similar stuff that may look great in a game but would be totally useless in a movie, since they would be too standard and not effective enough. So you would only be able to use the standard features of your GPU and would still use the CPU for most of the work on special effects, etc.

      Even worse, video cards are more and more focused on speed rather than quality, and this is not going to help when making a movie.

      Of course they could have some board specifically designed for the tasks they need, and this would surely improve the time needed to render a movie, but I'm not so sure whether it would be worth the cost.

  • by molo ( 94384 ) on Thursday June 27, 2002 @04:16PM (#3781660) Journal
    A software renderer is just plain more flexible. When there's something you want to change in the rendering process, fix the code, recompile, distribute to the renderfarm. Done.

    When there's something you want to change in your hardware-based rendering, what are you going to do, re-fab the silicon and solder it in?
    • Nvidia makes changes to their hardware-based rendering all the time. It's called new Detonator drivers.
    • When there's something you want to change in your hardware-based rendering, what are you going to do, re-fab the silicon and solder it in?
      You can all but program FPGAs in C these days anyway, and a modest stack of FPGAs can do amazing things, fast.

      You could start with an architecture similar to Andrew Huang's five-or-so-year-old Tao [mit.edu] reconfigurable computing platform, with pipelining de-emphasized, system speed approximately doubled, and (possibly) multi-ported memory added.

      -jhp

    • Have you heard of "programmable hardware shaders"? I hear they're all the rage these days. When there's something you want to change in the rendering process, you can fix the code, recompile, & run your app. Done, and done a lot sooner too.

      Come to think of it, I believe these miracle shaders have something to do with the "Cg" language this article just happened to be about. What a coincidence.

      • Have you heard of "programmable hardware shaders"?

        Have you heard of the term "Turing Completeness" ? - as in, "programmable hardware shaders are not Turing Complete" ?

        • That isn't necessary; the gfx hardware isn't doing it all by itself, the CPU is controlling it.

          The hardware is just accelerating the process by doing most of the grunt work of multiply-adds & shifting bits around, on a dedicated chip with 4 or 8 pipelines using multiple vector ALUs each, fed by 10 or 20 GB/s of local bandwidth. And gfx hardware is increasing in speed & complexity a lot faster than CPU hardware.

          Think of it as a massive SIMD array coupled to a Turing-complete CPU, and you'll have the right idea.

          • Think of it as a massive SIMD array coupled to a Turing-complete CPU, and you'll have the right idea.

            Sorta true, though it's definitely not like a massive SIMD array; the control structure is far more complex and loosely coupled than that.

            Because the CPU is not in the datapath you get a few problems that you don't get with a plain general purpose CPU:

            - Limited precision of intermediate results -> restricted space of implementable algorithms

            - Very restricted data addressing modes -> you need to build lookup-tables at run-time which can eat into your performance for certain algorithms

            - Difficult to implement conditional tests

            So, yes, because you start with a Turing-Complete CPU in the system, the system stays Turing-Complete, that's trivially obvious. However, not all algorithms can be implemented on the devices mentioned in the article, which was the point I was trying to make.

            Future hardware is a different matter - both precision and programmability are increasing in leaps and bounds. This is pretty traditional for graphics hardware, and has been so since the start of the field - machines get more and more programmable until they start to become fully programmable, then the lowest levels get replaced by faster, fixed-function hardware, then the cycle repeats itself for those levels. Newman was talking about it in the mid-70s, and it's been pretty much the case from then on.

            We are getting to the point now where we will be able to put enough flexibility and functionality on a chip to run Renderman pretty much natively on the graphics chip, but not with the stuff the article was talking about. I'd give it 18 months to two years.
            • All good points too.

              - Limited precision of intermediate results -> restricted space of implementable algorithms

              Granted, for today's hardware at least. Although float precision (coming with NV30 & maybe R300) still isn't up to the accuracy levels required by some algorithms, it's good enough for the great majority, and certainly enough to make nearly anything possible, if not perfect. In any case, way better than the 8/9/10/12 bit integer hardware out there now.

              - Very restricted data addressing modes -> you need to build lookup-tables at run-time which can eat into your performance for certain algorithms

              Yeah, but performance isn't really the issue, so much; chances are it's still going to be way faster than executing on a CPU. And if not, well, GPUs are increasing speed at 3x the rate of CPUs...

              - Difficult to implement conditional tests

              Harder, yes, but possible using the stencil buffer. The compiler can take care of implementing it. Perhaps inefficient, but see above point.
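              For what it's worth, the masking idea amounts to evaluating both branches everywhere and then compositing by a per-pixel predicate, with the stencil buffer playing the role of the mask. A CPU-side sketch of the pattern (toy shader math, not real stencil-buffer code):

```python
# Branching without branches: compute both sides of the conditional for every
# pixel, then select per pixel with a mask (the stencil buffer's job on hardware).
import numpy as np

def shade_with_conditional(intensity):
    # "if intensity > 0.5: bright branch, else: dark branch", done without branching
    bright = intensity ** 0.5      # branch A, computed everywhere
    dark = intensity * 0.25        # branch B, computed everywhere
    mask = intensity > 0.5         # stencil-like per-pixel predicate
    return np.where(mask, bright, dark)

pixels = np.random.default_rng(1).random((4, 4))
print(shade_with_conditional(pixels))
```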

              There have been numerous papers on running Renderman shaders in hardware. (Very) simple ones are possible with just register combiners, but the upshot was that, with dependent texture lookups & float pixel support (available now, and by the end of the year), full Renderman support becomes possible.

              I'd guess that SIGGRAPH this year will see a few very interesting RT shader examples running on "unannounced" hardware, and by next year it'll pretty much be a done deal. In 18 months to two years it should be more than possible - I'd expect it to have hit the mainstream.

  • I personally think all 3D animation should revert to the days of Dire Straits' Music Video....

    "I want my, I want my, I want my mtv..."

    Now *there* is high tech animation.

    Seriously though, neither the geometry nor the resolution of even the most cutting-edge graphics cards is anywhere near the level required to produce high-quality images, especially images that wouldn't turn to crap on a typical movie screen. For the mainstream this just wouldn't cut it... Imagine the jaggedness and polygon count on your monitor scaled up to a theater screen.... scary.

    And for the people who would appreciate this sort of thing and would enjoy watching what can be done with a restriction on polygons and resolution, there is always the demo scene, dedicated to showing off what can be done at every level, from pure CPU rendering to pushing the entire system in real time. As for movies, I remember watching a couple of films written as Quake demos; I presume this is still happening somewhere on some level.

    This appeals to some people, but those people are already served...
    • Actually, the company that made that video (http://www.mainframe.ca/) is still very busy making a lot of the 3D cartoons you see on TV today, like ReBoot, Action Man, Beast Wars, and Weird-Ohs.
  • Digital Theatre (Score:2, Insightful)

    by Gulthek ( 12570 )
    So essentially this would be the technological version of a stage theatre production? If it's done right it could merge the uniqueness of a live performance with some spiffy effects that would not be possible to create otherwise. Sounds cool to me!
  • by UncleFluffy ( 164860 ) on Thursday June 27, 2002 @04:18PM (#3781674)
    The current generation of "GPUs" (ick, I hate that term) are neither powerful enough nor flexible enough to handle something as complex as a Renderman shader. Go pick up a good Renderman book and look at what the spec requires from the implementation.

    Stuff like DX8/9, which the gfx chip companies design to, is a very, very small subset of what Renderman specifies. I suppose in theory you could build a tool that split shader work between the main CPU and the gfx card, but I really don't think it would be worth the effort.

    That's not to say that future hardware won't be able to do this kind of thing, but I'm not going to violate any NDAs on Slashdot ;-)

    Come back and ask the question again in 18 months or so.
  • by deathcow ( 455995 ) on Thursday June 27, 2002 @04:18PM (#3781675)
    We wish to express our sympathetic apology to all our regular Clone Cinema viewers. Unfortunately, Toy Story 3 requires at least a Quad-Pentium projector with a GeForce 6 card to display properly.
  • What the hell? (Score:4, Insightful)

    by Gizzmonic ( 412910 ) on Thursday June 27, 2002 @04:18PM (#3781676) Homepage Journal

    Replace Renderman with a fuckin' PC video card? Maybe if the folks at LucasArts were weaned on paint thinner.

    This sounds like your typical PC blowhard who believes his DVD player, Playstation, telephone, and eventually his computer will be replaced by a graphics accelerator.
    Hey, you might need some justification for dropping $400 on that latest waffle iron from ATi, but you'll get none here.

    And as for "improv animation," blow it out your ass. The reason that company quit is that it looks like shit. The closest you're going to get to that is games like Samba de Amio and Dance Dance Revolution.

    Lastly, Mr. Dean Siren, what's your relationship with Strange Company and Machinima? Cause this sounds an awful lot like a puff piece from a PR flack...

  • Fans like to follow the lives of their favourite actors, not just watch them in movies. A computer character won't have a 'real life'.

    Part of the reason is that people wish they could be like that. Who will be able to live vicariously through a computer program (Slashdot crowd excluded)?

    Jason
  • Shader languages such as Cg (indeed, even Cg itself) are supported by many software renderers. Software renderers use pixel shaders; they just don't do it in realtime or in hardware.

    So it is in fact conceivable that we can see professional pre-rendered animations done using Cg.

    Looks like you people don't know what you're talking about. GPUs and shader languages are independent.

    Regards, Guspaz.
    • Yep. In fact the real big deal about Cg is that it optimizes your code for many different possible configurations. This could actually mean that we could see software mode again in games. But for most of us, it means that Nvidia's programming language will work just fine in your ATI card, provided ATI comes up with a Cg compiler for their card that's less buggy than their drivers.
  • Right now the problem with movies is that they're costing more and more to make. Studios looking to pop out more movies for less money will probably do it. A studio generally doesn't like putting a bunch of money into a computer-animated movie only for it to come out terribly (see the Final Fantasy movie). So pushing out more movies will help them hedge their bets.

    The immediate advantage in Cg is allowing independent film makers to make special effects more easily and faster than before. It helps the push towards giving computer animating power to the masses. But this doesn't mean that computers will replace actors anytime soon. Think of what will happen to Entertainment Tonight and Access Hollywood!
  • A lot like the facial "actor" implants in Diamond Age [amazon.com].

    Take a real-time rendering system and a complex 3D matrix plotter and combine them and you can have real-time digital actors modeled by RL people.

    Add a lot of CPU power and a genetic algorithm, and the computer should be able, after some time gathering information, to imitate the "recorded" actor, much like voice recognition learns your voice patterns.
  • GeForce video cards are okay, but remember: they're catered to gaming. The graphics / rendering core of those cards is nothing compared to SGI / Sun video cards. Keep in mind that you will still need a farm, because even though your $900 GeForce 6 gets 1,000 fps in Quake 3, it's no rendering beast. You'll still need Maya, Softimage, Lightwave, and Renderman. The only difference, if Nvidia improves performance for the workstation market (even including the Quadros), is that it'll probably be slightly cheaper than an SGI card. The downside: visual quality. SGI owns visual quality.
  • So these people want to put massive multimillion-dollar renderfarms in theatres just so it can be done in real time? Sounds like a bad idea to me.
    Even rendering the sound in realtime just doesn't sound feasible. Csound couldn't do a whole orchestra with voice modeling and effects in realtime on even the nicest clusters...
    Moreover, will the audience care? It's not like the CG actors are going to 'screw up', so it's not interesting the way a play is. I personally don't see the point.

    Well, the answer to the question of the topic is......
    Never, or when Star Trek and holodecks become reality. It's just not feasible, and with 40-70 hours a frame for current movie renders, you can't push that to 29.97 frames per second just for the sake of it being realtime.

  • Personally, I think Cg (or a derivative) will eventually be used for movies. Eventually these kinds of tools (and hardware) will reach a point where they can compute the same algorithms that Renderman, etc. use internally (to something very close to the same precision). These can then be executed on a graphics card at much greater speed than on a traditional CPU.

    However, this doesn't imply that rendering on the graphics card will be real-time. Renderings per frame may drop to minutes instead of hours, but it probably won't be interactive. Also, the same amount of work by artists tweaking animation and doing post-production still applies. Basically, graphics hardware will replace one portion of the pipeline, not the entire thing. It will probably be many years before hardware can generate really convincing photorealistic images at interactive rates (don't listen to the marketing speak of the graphics IHVs!)

    Post-production will always exist; it's not like it was invented with CGI. They use post-production techniques on live-action film sequences as well, so why would it be any different if the CGI were generated in real time (as camera photography already is)?
    • Renderings per frame may drop to minutes instead of hours, but it probably won't be interactive.

      You have no idea how wrong that statement is. The first half, that is. The second half is perfectly accurate.

      People in the visual effects/animation business have something they call "Blinn's Law," which is the flip side of Moore's Law. It states: renders will always take the same amount of time. It's true. On average, computing frames for Monsters, Inc. took about the same time as they did for Luxo Jr. The reason this is the case is that audiences' expectations increase at about the same rate as the power of the hardware. Yes, eventually we may well have PRman in your graphics card, but by then the CG films of the turn of the century will look quaint at best.

      Hell, Toy Story already looks quaint.

  • I was wondering about the applications of this technology, and of course I first turned to games. How about:

    • Someone putting on a Shakespeare play in Ultima Online/Everquest for other people to watch/buy "tickets" for? (I'm having this horrible image of a "charity" event to get enough money for a "cure poison" spell or something....)
    • Why spend the money on video imaging for the company - load up Quake III with the Videoconferencing Mod, and just have your meeting that way. (Personally, I'd just want a black rectangle with the words SEELE 13 - Audio Only on it.)


    Um, so that's all I can think up. I'm goin' to get some chili. (Another reason for virtual videoconferencing....)
    • (I'm having this horrible image of a "charity" event to get enough money for a "cure poison" spell or something....)

      I'm having even more horrible images of hundreds of (lit, wooden) torches being waved from side to side in the air during the lute solo.

      Damn you!

  • Lest we forget Triton and Future Crew and the rest of the demo scene? If you have, you'd better remind yourself [hornet.org].

    The 4K demo contests have always been the pinnacle (IMO) of this art: not only did you have a visual experience, but also the wonderment of how much was packed into a 4K executable. It was art in design and programming.

    And all done with typical PC hardware. No fancy render farms. Hell, FC's Second Reality ran on a 386!

    And now look towards all the work being done with Flash, especially with respect to animation. But I think the author of this post means to focus on realistic animation.

  • There was a pretty cool little Windows demo a few years back from the GODS demo crew called TOYS. It wasn't really improv... but it had a story and a plot (somewhat)... which was a nice break from the flashy gee-whiz-only factor that a lot of other demos had. The graphics are pretty crude looking back on it now, but it was a slick preview of what a movie/cartoon rendered in real time would be like.
  • Judging from past experiences, does Lucas know the difference?
  • A GPU makes sense for viewing the improvised motion-capture result in real time, but the motion-capture stage has nothing to do with the final product. The rendering farms will still be used to make the final image with much improved quality, and the post-production obviously has to take place too.
  • Motion captured acting is bad. Really bad. I've seen believable acting in a few video games (notably a couple of the cut-scenes in Onimusha), and it shocked me. Most of the time, it looks like the actors are all on `ludes. They are quite literally in their own little worlds, going through the motions.

    Now, real-time rendering, even if it wasn't production-quality, could change this. Just giving the actors decent HUDs so that they could actually talk to the CG creatures would help a lot. Real force-feedback stuff would mean that they could actually touch each other. This is what we need for cg to replace real actors. Then it would really be the Future, what with Virtual Light and etc. being reality. (And we'll have flying cars, damnit.)
  • Even if it's rendered through a GPU, it doesn't have to be improv.... they can control exactly what happens...
  • Never Be Replaced... (Score:2, Interesting)

    by dmarien ( 523922 )
    "...computer animation should be performed, maybe even improvised, live by motion captured voice actors."

    Computer graphics and rendered animation aren't replacing live human actors. If motion capture and voice-over are used, you're still going to see the actor's or actress's unique style in the finished product... I'm picturing some of the characters in Shrek and Toy Story (2), and how they were obviously very digital, aside of course from the voice-overs... If motion capture is used, the emphasis will be squarely back on the actor animating the character. If Jim Carrey were the actor behind a character in one of these new motion-captured productions, he would be instantly recognizable, because he is such an animated person to begin with; if the digital character is animated by his motion-captured movements and vocalized by his voice-overs, it would be 100% classic Carrey. That wouldn't come close to putting human actors out of work -- if anything, it would force actors to develop new strengths and talents to make their animated characters, which *they* animate through motion capture suits, come to life!
  • Sure, sure... hardware rendering...
    When a GPU can handle 20+ million polys with 4K textures on them... and 600+ MB scene files.
    And 2 GB of system RAM.

    If you look at what a CPU-based renderer has to handle and all the files it has to generate, it would have to be an extremely specialized machine that would cost an extreme amount of money. I would rather throw my money at more dual 2.2 GHz P4 rackmounts.

    They prove they work. And they are standard hardware. So anybody that makes software will support them.

    It is all pretty much a pipe dream to get realtime renders at the quality needed for film. As soon as that happens, I am out of a job. The amazing thing about CG studios is that they keep raising the bar as new hardware comes out... so the faster the machines, the heavier the scenes.

    It's not that artists are getting much better so much as that machines are able to handle more.

    -Tim
  • "I'm Sorry ladies and gentlemen, but due to a small bug in Maya Realtime 2004, the theatre has crashed."

    Or Better yet
    "Part of our cluster is now out... but instead of compromising the movie and showing this prerendered reel, we will show the movie at 1 frame/sec for the rest of the movie, LoTR will now finish in 27 days..."

    Come on, people, we can't even get digital screens and real THX in all theatres, let alone renderfarms. Machinima is crazy...

  • I think this is an unreasonable goal (replacing offline rendering with motion capture and realtime rendering, for motion pictures), and Dr. Alvy Ray Smith agrees with me. [sc2001.org] (And he knows more about the subject than anyone likely to post on Slashdot, myself included.)

    Yes, I've heard him talk, and I know he's not addressing exactly what this article talks about. What I'm saying is that the task of making computer animation truly realistic is more difficult than we are capable of, using the most advanced tools available today. That, to me, means that it's much, much more difficult to do it in real time using algorithms and hardware that is much less sophisticated.

    Can you do something cartoon like? Certainly. Look at Clippy. Can you make it believable and real? No.
  • I just saw a film called Missing Persons at the LA Film Festival that was done this way. Not realtime rendering, I don't think, but they used GeForce hardware. www.missingpersonsmovie.com -- very cool but weird movie.
  • There are things in RenderMan/non-realtime rendering programs that simply cannot be done by realtime renderers.

    For one thing - programmable shading. Programs like PRMan and BMRT support programmable shaders - which are incredibly important for photo-realistic effects. They also are expensive in terms of processing, which is why realtime is going to have some problems with them.

    Another thing is resolution. I don't remember what resolution the images need to be for film, but I think that it's pretty high. More pixels = more processing.

    Realtime effects for games are getting to be stunning, but motion pictures are another thing entirely.
  • The graph on that linked page seems very objective.
    Cg quality is nowhere near close to the quality of modern ray-traced images and movies. When Cg can produce an image like this [irtc.org], then maybe things will change.
  • Already being Tried! (Score:2, Informative)

    by pi42 ( 190576 )

    I read this and immediately remembered when Brian Henson (Jim Henson's son, of Muppets fame) came and gave a talk at my school last year.

    One of the things "The Creature Shop," the company he runs, is working on is digitally animated puppets that are performed in real time the way a normal puppet would be. He didn't give too many technical details then, but I found this press release; check it out:
    http://www.henson.com/company/press/html/060601.html [henson.com]

  • I think some people don't understand how much work goes into making motion-capture animation look good. It's not just having animators clean up the motion-capture data; your actors have to be able to do the motions right on the first take.

    It would probably work for making some crappy Saturday-morning cartoon (I could've sworn I saw one once that was doing something like that), but for good-quality animation, you need good animators.

    What's more interesting is the work on physics-based animation. Again, you won't get good movies out of it, or even "realistic" human characters, but it will be a big advance for games -- though I doubt it will make a dent in the demand for good animators.

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Thursday June 27, 2002 @04:40PM (#3781815) Homepage Journal
    Studios like Disney, Pixar, and DreamWorks are installing PCI 3D cards in the animators' and technical directors' workstations. These were previously much more expensive SGI workstations, and are now IA-32 PCs running Linux and OpenGL.

    It's not at all clear to me that Cg provides any advantage over OpenGL used from C/C++ for the sort of work that the high-end studios do.

    The vanilla CPUs in render farms and the software renderers that run on them could be replaced by hardware rendering for the lower-quality work, but never for the highest. First, the render farm doesn't need the real-time facility of the GPU - the part the GPU does best, and the part that contributes most of the cost to the GPU. The render farm just needs to render a frame to disk, and can do this more cost-effectively with a software renderer and a general-purpose computer. Second, the GPU isn't as extensible as the software renderer, because it's cast in silicon. There will always be an effect you want that the GPU can't handle. And then, the GPU is built to render video fast, and trades off many aspects of the rendering algorithm that we really want when we render to film.

    You will, however, see all of the studios buy arrays of GPUs for making rushes. These are less-than-full-quality playbacks that they use to review the animator's work-in-progress before final rendering. If we got some really fast programmable gate-arrays, or GPUs with documented and programmable microcode, we could use them as a GPU is used, but in a way that might support the highest-quality rendering.

    Pixar tried to make high-speed hardware for years, and we always found it to be a losing game. I wrote microcode for one of these beasts, a parallel bitslice engine that inspired today's MMX instructions. We could not keep up with the development of vanilla CPUs, and the CPUs ended up being more cost effective.

    Thanks

    Bruce

    • Rushes... we do those... (it's nice to see that your studio isn't completely out in left field)

      But when done, the system takes over the GL display and the frame sections are copied off of the GL buffer on to disk... at least in Maya... if I remember correctly.

      For the most part, we will do those for animation checks, but every night the animator will still have a flip rendered of their work at that stage. The nice thing about only doing one movie at a time is that all the renderboxes are dedicated to what stage of production you are in, so the artists can get actual renders back instead of hardware approximations.

      Also due to the way we do mouths, we need flips to see mouth animation on the veggie characters.

      -Tim
  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Thursday June 27, 2002 @04:40PM (#3781816) Journal
    How would real-time animation be different from puppetry? Modern puppets often have more than one person controlling them, and the controls are arcane to say the least (each finger might control a different part of a face, say). In my experience with puppeteers and animators, I have found that you can teach any competent visual artist animation -- some will be better than others, no doubt -- but puppetry is a much rarer talent.

    Real-time rendering of CG puppets has been done by Brad de Graf, now at Dotcomix [dotcomix.com] and several other people over the years; but it's never been easy or particularly successful.

    Real-time capture of data for later non-real-time rendering is much more common. Graham Walters and I did the Waldo puppet [umbc.edu] for The Jim Henson Hour back in 1988. One might also consider the motion-capture technology now widely used in visual effects production as a type of whole-body puppetry -- the robots in the latest Star Wars movies are animated by having people perform the parts, and then capturing that motion.

    There may be a future in multi-track puppetry, where you lay down a track at a time, each pass recording a few more parameters until you get the whole sequence done (see the sketch below). This would of course be analogous to multi-track audio recording. But recording a whole complex character in real time would mimic puppetry, with all of its limitations and flaws, but more expensively.

    thad
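    As a toy sketch of that multi-track idea (the track names and capture callbacks here are purely illustrative, not any real tool's API):

```python
# Overdub-style recording of animation parameters: each pass lays down one named
# parameter track over the same timeline, like tracks in a multi-track audio recorder.
from typing import Callable, Dict, List

class PerformanceTake:
    def __init__(self, frames: int):
        self.frames = frames
        self.tracks: Dict[str, List[float]] = {}   # parameter name -> per-frame values

    def record_pass(self, name: str, capture: Callable[[int], float]) -> None:
        """Record one more parameter while (conceptually) playing back earlier tracks."""
        self.tracks[name] = [capture(frame) for frame in range(self.frames)]

    def pose_at(self, frame: int) -> Dict[str, float]:
        """The combined pose for a frame, built from every track recorded so far."""
        return {name: values[frame] for name, values in self.tracks.items()}

take = PerformanceTake(frames=240)                     # 10 seconds at 24 fps
take.record_pass("jaw", lambda f: (f % 24) / 24.0)     # pass 1: jaw open/close cycle
take.record_pass("left_brow", lambda f: 0.2)           # pass 2: hold the brow
print(take.pose_at(30))
```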
  • Rustboy (Score:2, Interesting)

    There is a project to make a movie using standard render tools and such: a single guy trying to make a quality feature film using a regular computer, not a render farm. Check out his site for more info: http://www.rustboy.com [rustboy.com]
  • Dear god... it sounds like they're simply describing Evercrack except with realistic graphics and a good speech synthesizer.

    It's the people acting out parts and taking quests that could be entertaining, if viewed from afar. Also, since the world is so big, you could have an interactive component for the "viewer" to play god and jump about, watching what everybody is doing....
  • by FreeUser ( 11483 ) on Thursday June 27, 2002 @04:55PM (#3781906)
    I've seen a plethora of posts that basically argue "today's tech can't do it, so this is a stupid discussion."

    Remarkable.

    Technically savvy people, of all people, should realize that just because Farscape-style special effects cannot be done in realtime today with today's low-end consumer GPUs doesn't mean the concept of 'live performance animation' as such is flawed at all.

    First, much lower-quality 'live performance' animation is possible with today's consumer hardware, and the improv aspect alone makes it an art form worth pursuing in and of itself. The possibility of algorithmic and technical enhancements that could be driven, or at least explored, by such an art form makes it a worthwhile endeavor as well.

    Second, in another 5 or 10 years (at most) it will almost certainly be possible to do live-performance, Farscape-quality digital animation (assuming the technological development of the computer hasn't been brought to a standstill through stupid legal 'innovations' like DRM and Palladium). While movie makers would likely simply add this to their set of tools and not replace post-production entirely, the ability to create 'live theatre' digital productions and interactive, perhaps even immersive, two- or multi-way environments, if not completely synthetic realities, is an intriguing one, to say the least, and certainly a worthwhile endeavor whether or not Hollywood can make use of the technique in their movie productions. Indeed, such systems could well render the movie as obsolete as the live stage play is today: in other words, no longer the main popular attraction, but a continuing art form valid in its own right, if no longer the center of public attention.

    8 years ago I was at the U of Illinois' virtual reality lab and had an opportunity to play around with some of the simulations they run, including one that allows the viewer to explore a three-dimensional (immersive) grey-scale view of the mega-structure of galaxies in the universe (to study large-scale structures such as strings of galaxies, etc.).

    8 years later I can explore the universe in living color on my GNU/Linux box running Celestia, in 1920x1200 24-bit color, in realtime. While it isn't immersive 3D VR just yet, it is much higher resolution and full color, and while I can't explore the farthest reaches of the universe, I can explore the immediate galactic neighborhood in incredible detail (much greater than the old simulation allowed). All of this on a $400 Nvidia card, running a free operating system on commodity hardware.

    So, in other words, dismissing this possibility simply because you can't do it with perfect, photo-realistic effects today shows a remarkable lack of vision, and a blindness to similar leaps in technology that we've all been taking for granted for the last decade or two. We will be able to do this sort of thing, photorealistically, much sooner than most people probably realize, and the art form can be pursued long before the final polish is available.
    • Just plain old realism. You can always say, 'in the future, X technology will be cheaper and faster than today's Y technology.' C'mon, it doesn't take a genius to say that.

      The thing is that the original poster has a tone of "we've been doing real-time animation for quite some time" and asks why movie studios are not going in that direction. The answer to that is pretty obvious too.
  • I was thinking that with real time CGI characters, there could be a call-in TV show where viewers could speak with their favorite characters, sort of like "President Clinton Answers Childrens' Questions". It's probably already possible with puppets, but perhaps with CGI it could be more elaborate.
  • Unfortunately, all these /. people don't have any imagination. "Oh, that's a stupid idea, why would you ever want to render a movie in real time?"

    The point isn't to render a movie, the point is to use your computer like a canvas to paint on. Only instead of making a picture, you make an animation. Maybe you use one computer (or many) to control it and then feed all the control data to a main computer system that renders it in real time for the audience to see it. Maybe you've got 10 people controling monsters, beasts, and other imaginary characters, with people doing their voices (and probably also controlling the facial animations at the same time. Think like how TV is done, with make camera men , a control booth that splices all the sound together from different sources, the guy who's job it is to overlay different titles on the screen and do transistions between show segments. Just replace the 'real' life actors with computer generated ones.

    It would be easier to do a cartoon style show because people prefer actual actors to computer generated ones.
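
    To make the 'control booth' idea concrete, here's a rough Python sketch of what the plumbing might look like. Everything here (the message format, the port, the field names) is invented for illustration; a real setup would use whatever networking and puppeteering hooks the engine actually provides:

        # Sketch only: a puppeteer station streams timestamped control messages
        # (bone values, one-shot triggers) over UDP to the render machine.
        # The port, field names and format are invented for this example.
        import json
        import socket
        import time

        RENDER_HOST = ("127.0.0.1", 9999)   # hypothetical render machine

        def send_pose(sock, character, bones, triggers=()):
            """One control update for a single character (puppeteer side)."""
            msg = {
                "t": time.time(),           # timestamp, so the renderer can interpolate
                "character": character,     # which puppet this update drives
                "bones": bones,             # e.g. {"head_yaw": 0.3, "jaw": 0.8}
                "triggers": list(triggers), # one-shot events: sound cues, expressions
            }
            sock.sendto(json.dumps(msg).encode(), RENDER_HOST)

        def render_loop():
            """Render side: keep only the latest update per character each frame."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(RENDER_HOST)
            sock.setblocking(False)
            latest = {}
            while True:
                try:
                    while True:                       # drain whatever arrived this frame
                        data, _ = sock.recvfrom(65536)
                        msg = json.loads(data)
                        latest[msg["character"]] = msg
                except BlockingIOError:
                    pass
                # hand `latest` to the real-time engine here (engine-specific)
                time.sleep(1 / 60)                    # stand-in for a 60 fps frame tick

    Each station only streams small state updates; the render machine keeps the most recent pose per character, so a dropped packet costs a moment of stale motion rather than a broken performance.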
  • We've had the tech for years with video games
    Yes, we have. And it's gotten to the point where games whose cut-scenes are generated with the game's own graphics engine look MUCH better than pre-rendered ones. Even when they hire actors and film an elaborate mini-movie for the cut-scenes, the compression needed to stuff it onto part of a CD totally ruins it. That, and the "switch" from the game's beautiful in-game graphics to the pre-rendered footage, really takes away from the effect.

    Are you listening, game developers?

    ANIMATE YOUR CUT-SCENES WITH THE GAME'S OWN GRAPHICS ENGINE WHENEVER POSSIBLE!

    but the art form hasn't really been tried
    Sure it has. The demo scene has been around for decades. First they were doing 3D w/o any graphics hardware assistance at all on 286s, then 386s, 486s, 586s, Amigas, etc. Nowadays the demo scene seems much smaller, but they do use 3D graphics cards to make much more elaborate demos. Funny, however -- they don't *seem* that much more impressive than they did. (I've probably just been jaded by modern games. And I'm probably not the only one, which might explain the smaller demo scene.)
  • This takes me back (Score:2, Informative)

    by SandSpider ( 60727 )
    This reminds me of the first time I went to SIGGRAPH, the big convention for computer art geeks. It was really cool, since there were lots of high-end toys to play with. It was about 6 years ago, and VR was really big at the time. Everyone had some sort of poor headset display that would make you sick or give you a headache. Many people had special "3D input devices," like a mouse with a stick at the end that you could draw NURBS with in real time, or somesuch.

    Anyway, the other really big thing was the motion-captured, live 3D actors. They'd project an avatar of someone up onto a big screen and have them try to hold conversations with you and the like. It was actually kind of annoying.

    =Brian
  • Isn't there some Simpsons gag (maybe the one with "Poochie") about how they had to give up doing the animation at the same time as the voices because the animators' hands were getting too tired?
  • Even if you can get decent rendering in real-time (doubtful), you still have to do physics and collision detection. That means no inverse kinematics, cloth simulation, fluid mechanics, or realistic lighting. All you'll be left with is shaded, textured triangles. Hmm... I guess you could make Tron.
  • A lot of the "it can't be done yet, but maybe in X years" answers are missing the fundamental truth of computing... the more power you get, the [even] more power you need.

    On a simple level, look at how staggeringly more powerful your PC is than the one you used to run WordPerfect 5.1 for DOS on. "Imagine how quickly your word processor will run in the exciting future when PCs break the 1 GHz mark." What, your word processor got slower, because it's now bundled with a million other features that might only be used by 1% of users 1% of the time, but are still considered essential enough for everyone to have a [relatively] recent version?

    Now the same's true for 3D graphics. You could probably render all of Tron in real time now. You just wouldn't want to, because Luxo Jr. came along and raised the bar, then Toy Story, Shrek, Final Fantasy and so on. While modern hardware can do the slow tasks of ten years ago in real time, today's slow tasks are even slower than the ones from ten years ago, as we come up with new challenges like making milk and the surface of skin handle lighting properly.

    So, don't get your hopes up for doing the latest movie on your PC in ten years' time: the movie guys will always push for better algorithms and new techniques, and will need more power to run them. What you will see, and what you're already seeing, is having close to the power of the renderers of maybe five to ten years ago on your home PC. That being the case, you'll probably see Matrix-quality games in another five years or so.

    As others have pointed out, digital real time puppetry exists already, just not of the same quality as the pre-rendered stuff. But that's another topic.

  • by Vito ( 117562 ) on Thursday June 27, 2002 @05:55PM (#3782319) Homepage

    The person who posted the question misunderstands the purpose of Machinima, and as a result all the higher-moderated comments lambasting the idiocy of "replacing a render farm with a video card" are missing the intent of the question.

    Machinima means real-time 3D movies. "Real-time 3D" in this case means using a rendering engine capable of putting the 3D content on the screen fast enough to appear animated. This usually means a game engine, like Quake, Unreal or Lithtech. I think the Machinima.com website blows (terrible design and layout, no helpful information, not even decent forums), so I won't link to it.

    Those two points are about as much as machinima creators will agree on. The "point" of Machinima as a form of filmmaking can be one or more of the following (depending on whom you ask):

    • Low financial barrier to entry. If you want to make a sci-fi action flick, fire up Quake 1 (not for the GPL, but for the QuakeC portable scripting language that Quake 2 lacked), download any free/Free level editors, content packages, and demo recorders, make your levels (locations), your models (actors), and either script them (program their movements in QuakeC) or improv them (have other people join you in multiplayer mode and play the parts), and then record a demo (roll tape), edit the demo (post-production), and distribute it along with any additional content files necessary for others to watch the "movie." Total financial burden for obtaining new tools and equipment: $0. Some engines (or third-party tools) can even dump your frames to disk for when you want to press the animation to a VCD or DVD to show your friends.
    • Low cost for repeat distribution. Distributing a movie usually means hundreds of megs of DivX;-) or MPEG content. Even if it's animated and done in Flash, it's still positively enormous. With a machinima movie, the only thing you "really" have to distribute is the "script," which is usually a "demo recording" depending on the engine, but can take the form of an actual textual script as well. The game engine takes care of the rest. If you're using all the stock content from a video game, then that's really all you need; if you've created your own levels, characters, art and audio, then you obviously have to distribute that as well. But consider a serial or episodic feature, where it's a new 22-minute fruitastic movie every Saturday morning on CranberryAvengers.com: once viewers have downloaded the 20MB "player," which is your engine plus base character and art content, each new episode is only a few megs or less, consisting solely of the new script and compressed audio, and perhaps a new character or location. (A sketch of what such an episode file might look like follows this list.)
    • Getting rid of the actors entirely. Mark Hamill once said about George Lucas, "I have a sneaking suspicion that if there were a way to make movies without actors, George would do it." Machinima lets you do that. An animator can script a character's mood and movements, a programmer defines states for these, and you "write" your "script" as code or as level design. Some engines let you set up cutscenes or other types of scripted sequences, where you define places for characters to move to, animations and sounds they should play on their way and once they get there, etc. Others require you to do it in code, but in general, you can leave the blocking and "acting" to the engine. You tell the characters where you want them to go, what you want them to say, how you want them to say it, and then just tweak the end result. And you can do it all in real-time.
    • Focus on the actors entirely. And now we get to the original poster's question. As mentioned above, the original, easiest way to do machinima wasn't to program new characters to perform actions, it was to have a bunch of your friends get online with you in multiplayer mode and have them act out the sequences and trigger sound playbacks. This is what the poster is referring to. Imagine artistic productions, from comedy to live drama to ballet, using motion capture stages instead of a "stage," and a projection screen instead of letting the audience see the performers. What you get is essentially live animation. A small theatre company wouldn't have to build inadequate sets; they could partner with a local college's computer graphics program and design anything they desired. Their actors could look like anything the director wanted, allowing you to focus on the person's acting ability instead. You could even separate the voice actor from the physical actor. Your budget and carpentry skill suddenly stop being the limit, since your hardware costs are one-time; not only do you get to reuse everything you ever make, but there are innumerable low-cost stock 3D object libraries available, and countless kids making "mods" for games who'd build models for free in exchange for real-world exposure.
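
    As a back-of-the-envelope illustration of the "only ship the script" point in the second bullet: an episode could be nothing more than a list of timed events referencing stock assets the viewer already has. The file format and event names below are made up for this sketch; real machinima usually rides on a game engine's own demo-recording format:

        # Sketch of the "ship only the script" idea: an episode is a list of
        # timed events that reference stock assets the viewer already has.
        # The file format and event names here are invented for illustration.
        import json

        episode = [
            {"t": 0.0, "actor": "cranberry_avenger", "cmd": "spawn", "at": [0, 0, 0]},
            {"t": 0.5, "actor": "cranberry_avenger", "cmd": "anim",  "name": "walk"},
            {"t": 2.0, "actor": "cranberry_avenger", "cmd": "say",   "audio": "ep02_line01.ogg"},
            {"t": 4.0, "actor": "villain",           "cmd": "spawn", "at": [5, 0, 0]},
        ]

        def save_episode(path, events):
            with open(path, "w") as f:
                json.dump(events, f)

        def play_episode(path, engine_dispatch):
            """Replay events in time order; `engine_dispatch` stands in for whatever
            hook the engine exposes for spawning actors, animations and audio."""
            with open(path) as f:
                events = json.load(f)
            for ev in sorted(events, key=lambda e: e["t"]):
                engine_dispatch(ev)   # the engine supplies all geometry, textures, voices

        save_episode("episode02.json", episode)
        play_episode("episode02.json", engine_dispatch=print)  # stand-in "engine"

    The whole episode is a few kilobytes of data plus the compressed audio; the engine supplies everything heavy.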

    Machinima is about all of these things, and perhaps others too. Right now, it's a niche art form, due to the high technical barrier to entry (catch #1) and the large amount of custom content a budding director typically wants but, as a technical person, often can't provide himself (catch #2). This kills a machinima film in two ways: an artist would rather model his own models for his own traditional 3D movies, and no one wants to download 500MB of new content to watch a single 500K movie.

    I think a live (improv, as the poster put it) animation production might be very interesting. It could also suck. Machinima, like hammers, guns and DeCSS, is just a tool. What matters is what you do with it.

  • Digital Puppetry (Score:2, Interesting)

    by Bob19971 ( 111037 )
    Here is a site that uses Quake for digital puppetry. It's pretty good too.

    http://www.illclan.com/

    -B
  • A Beowulf cluster of GeForce 4s? Not likely.

    OpenGL uses a client-server rendering model, but I don't think you can spread the rendering of a single frame over a bunch of clients. It's gonna take more than one computer to render even one frame of the next Pixar movie (whatever that may be).

    The point of GPUs is to offload work from the CPU so that it is free to do things like AI and physics. In animated movies you don't really care where the calculations take place, as long as they finish relatively quickly.
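
    For what it's worth, offline rendering sidesteps the "one frame across many machines" problem by parallelizing across frames instead, since every frame is independent. A minimal sketch of that idea (the renderer call is just a stand-in):

        # Rough sketch of why render farms parallelize so well offline: frames are
        # independent, so you hand them out to workers and collect the images later.
        # `render_frame` is a placeholder for the real (software or GPU) renderer.
        from multiprocessing import Pool

        def render_frame(frame_number):
            # a real worker would invoke the renderer and write e.g. frame_0001.png
            return f"frame_{frame_number:04d}.png"

        if __name__ == "__main__":
            with Pool(processes=8) as pool:                  # 8 workers standing in for farm nodes
                images = pool.map(render_frame, range(240))  # 10 seconds at 24 fps
            print(f"rendered {len(images)} frames")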

  • This is already happening (and has been for years) with video games. I know what you're thinking, "It's not the same!" and you're right. But go play Metal Gear Solid 2; the game honestly plays like an interactive movie. Games will only get more cinematic (MGS2's credits are filled with Hollywood talent) as companies like Capcom, Konami and Square find better uses for the hardware. This style of game hasn't really caught on with PCs, but I have a feeling it may only be a matter of time.
  • I am very impressed with the results that have been coming out of NVIDIA and Stanford, such as their work on ray-tracing and global illumination (!) on commodity graphics cards.

    The one thing, however, that I see blocking the use of GPUs for general-purpose high-quality rendering is sampling (the technique of avoiding aliasing by low-pass filtering the scene at various stages of rendering). All of the GPUs I have seen are limited to dumb box filtering of texture and pixel samples (i.e., calculate the color at several points inside a region and average the results). The best software renderers do a much more careful job of suppressing high frequencies while keeping the good low frequencies (e.g., using a several-pixel-wide Gaussian or windowed-sinc filter). While these methods are more computationally expensive than the box, they are much better low-pass filters. It makes good sense to choose them for final high-quality rendering.
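
    To illustrate the difference, here's a toy comparison of box versus Gaussian filtering over one pixel's 4x4 supersamples. It's restricted to a single pixel's footprint for brevity (the renderers discussed here use filters several pixels wide), so treat it as the shape of the idea rather than a real reconstruction filter:

        # Toy comparison of the two strategies described above, on one output
        # pixel with 4x4 supersampling: a box filter weights every sample equally,
        # a Gaussian weights samples near the pixel center more heavily.
        import numpy as np

        samples = np.random.rand(4, 4)       # stand-in for 4x4 subpixel color samples

        # Box filter: plain average.
        box_value = samples.mean()

        # Gaussian filter: weight by distance from the pixel center.
        offsets = (np.arange(4) + 0.5) / 4 - 0.5      # subpixel offsets in [-0.5, 0.5)
        xx, yy = np.meshgrid(offsets, offsets)
        sigma = 0.5
        weights = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        weights /= weights.sum()                      # normalize so weights sum to 1
        gaussian_value = (samples * weights).sum()

        print(box_value, gaussian_value)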

    High-depth (10-16 bits/component) framebuffers are another necessity, but I hear they will be available in hardware very soon...
  • by John Carmack ( 101025 ) on Thursday June 27, 2002 @10:51PM (#3784210)
    There are some colorful comments here about how studios will never-ever-ever replace tools like renderman on render farms with hardware accelerated rendering. These comments are wrong.

    The current generation of cards do not have the necessary flexibility, but cards released before the end of the year will be able to do floating point calculations, which is the last gating factor. Peercy's (IMHO seminal) paper showed that given dependent texture reads and floating point pixels, you can implement renderman shaders on real time rendering hardware by decomposing it into lots of passes. It may take hundreds of rendering passes in some cases, meaning that it won't be real time, but it can be done, and will be vastly faster than doing it all in software. It doesn't get you absolutely every last picky detail, but most users will take a couple orders of magnitude improvement in price performance and cycle time over getting to specify, say, the exact filter kernel jitter points.
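
    As a cartoon of the decomposition idea (this is not Peercy's actual mapping, just the principle): a shading expression too complex for one hardware pass can be split into simple terms, with the framebuffer acting as the accumulator between passes.

        # Illustration only (not Peercy et al.'s actual mapping): a shading
        # expression too complex for one pass is split into simple terms, with
        # the framebuffer acting as the accumulator between passes.
        import numpy as np

        H, W = 4, 4
        diffuse  = np.random.rand(H, W)
        specular = np.random.rand(H, W)
        ao       = np.random.rand(H, W)   # ambient occlusion term

        def render_pass(framebuffer, term, blend):
            """One 'pass': fold a single shading term into the framebuffer
            with an additive or multiplicative blend mode."""
            return framebuffer + term if blend == "add" else framebuffer * term

        # Target expression: (diffuse + specular) * ao, evaluated in three passes.
        fb = np.zeros((H, W))
        fb = render_pass(fb, diffuse,  "add")
        fb = render_pass(fb, specular, "add")
        fb = render_pass(fb, ao,       "mul")

        # Direct evaluation, to check the passes agree.
        assert np.allclose(fb, (diffuse + specular) * ao)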

    There will always be some market for the finest possible rendering, using ray tracing, global illumination, etc in a software renderer. This is analogous to the remaining market for vector supercomputers. For some applications, it is still the right thing if you can afford it. The bulk of the frames will migrate to the cheaper platforms.

    Note that this doesn't mean that technical directors at the film studios will have to learn a new language -- there will be translators from existing languages. Instead of sending their RIB code to the render farm, they will send it to a program that decomposes it for hardware acceleration, and get back image files just like everyone is used to.

    Multi-chip and multi-card solutions are also coming, meaning that you will be able to fit more frame-rendering power in a single tower case than Pixar's entire rendering farm. Next year.

    I had originally estimated that it would take a few years for the tools to mature to the point that they would actually be used in production work, but some companies have done some very smart things, and I expect that production frames will be rendered on PC graphics cards before the end of next year. It will be for TV first, but it will show up in film eventually.

    John Carmack
  • Cel animation has been around for a long, long time. Both it and movies are still popular.

    The motion picture has not replaced the stage.
    The television has not replaced film.
    The record has not replaced concerts.

    In fact, I don't know of any new artistic form that has replaced another. Computer generated characters are different from live actors, and always will be.
  • We're doing it already. It's called actors. They should've gotten a clue and shot Final Fantasy with the actual actors they were paying for, instead of buying expensive voices and using some nameless schmuck to generate an insipid, generic motion capture file. It sure would have cost them a whole lot less.

    On the other hand, if you don't have a decent script, dialogue, and direction (i.e., ATOTC), then even with the best of digital and a cast of good actors you might as well save your money and go home...
  • This has been debated at SIGGRAPH for the past couple of years. The game boards (e.g., Nvidia GeForce) do not provide full-featured rendering. For example, movie makers prefer 48-bit full color, but the game boards offer only 8-16 bit color.
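
    A quick way to see why the extra bit depth matters: quantize a smooth gradient at 8 and at 16 bits per channel and compare the worst-case error (rough sketch; the numbers in the comments are approximate):

        # Quantize a smooth gradient at 8 and 16 bits per channel and compare
        # the worst-case rounding error.
        import numpy as np

        gradient = np.linspace(0.0, 1.0, 10000)        # ideal smooth ramp in [0, 1]

        def quantize(values, bits):
            levels = 2**bits - 1
            return np.round(values * levels) / levels

        err8  = np.abs(quantize(gradient, 8)  - gradient).max()   # ~0.002: visible banding
        err16 = np.abs(quantize(gradient, 16) - gradient).max()   # ~0.000008: imperceptible
        print(err8, err16)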

"Here's something to think about: How come you never see a headline like `Psychic Wins Lottery.'" -- Comedian Jay Leno

Working...