Graphics Software

Software for the Realtime 3D Modeler? 204

Milo_Mindbender asks: "I've been involved in doing a number of games and other realtime 3d apps and I always run into the same problem: the 3d modelers that most artists use (MAX, Maya, Softimage, Lightwave...etc) are all heavily biased towards doing non-realtime rendering using raytracing or some other technique. While they can be used to make models for realtime 3D hardware, a very large number of their features don't map well onto realtime 3D hardware. For example many of the procedural shaders used by these packages map very poorly to a hardware shader's abilities, and similarly, if you want to use a hardware shader on some polygons, most modelers give you no way to see the effect while modeling."

"There are other problems too: modelers that have no concept of polygon strips/fans or that make it very hard to avoid generating polygons that will never be seen (the inside surface of a pipe for example). Even if you have the target 3D hardware on the modeling machine, it's rare to have the modeling windows look anything like the finished product. I'm wondering if anyone has run across a good solution to this. Possiblly a modeling package more geared to hardware capabilities, or some way of adapting an existing modeler to make it more hardware friendly by blocking or modifying features that 3d hardware can't handle. It would seem such a package could be cheaper too, since it wouldn't have to support as many fancy features."

This discussion has been archived. No new comments can be posted.

  • MultiGen (Score:5, Informative)

    by billd ( 11997 ) on Thursday July 18, 2002 @12:42AM (#3906681) Journal
    Take a look at MultiGen & SGI's Performer
    • Yeah, if you want to make a game that is about 5 years behind in terms of graphics.

      Multigen Creator can do multitexturing, but it doesn't give the user any way to set the blend level between the layers. Use two textures and get a 50% blend. Use 3 and get 33%, etc... blech.

      Detail texture application didn't work right in 2.05 (when I got away from it).

      It doesn't do keyframe animation. That's a show stopper right there.

      It doesn't support hardware shaders. It's only OpenGL based, so vertex and pixel shader playback would be done completely via vendor specific extensions to OpenGL.

      The list goes on and on.

      Creator tried to bust into games. Unfortunately for Multigen two things happened:
      1) The ability to do amazing graphics on PCs exploded very quickly in the timeframe between the original GeForce card and the GF3 card.
      2) The market for PC-based military flight simulators disappeared. Since Creator was (and still is) marketed as a military and civil engineering vis-sim tool, it is best suited for those types of products. You'll never make a Quake III, Tribes II, NWN, or GTA3 with it.
      • Not only that, Creator is really expensive (17kish) per license. Their other modeling product, Multigen II is around 25k a license so that really hits the pocket book hard.
  • Blender? (Score:1, Insightful)

    by Anonymous Coward
    How does Blender stack up? Aren't they going open source?
    • Re:Blender? (Score:3, Informative)

      Blender is pretty good if you can get used to the crappy UI.

      They might be going Open Source, but they have set some conditions, like needing to raise 100k Euro (roughly $95k US) in user donations before they open up the source code... so it's not a sure thing.

    • Re:Blender? (Score:2, Informative)

      by acasto ( 591344 )
      Information on Blender's current situation can be found at the blender3d website, or at elysiun. As for the money needed to free the sources, this doesn't have to come solely from user donations. Ton has quite a few ideas in mind, and Siggraph is coming up soon. As for its realtime and gaming capabilities, they are extremely promising. What will be interesting, though, is to see how the development of the game engine is handled upon open sourcing. There seems to be a definite split between those who are into gaming and those who are just into modeling/animation. Another interesting point: they had Blender running on an iPaq before NaN went under. That opens up a future for Blender in mobile applications.
    • Re:Blender? (Score:1, Interesting)

      by Anonymous Coward
      Blender doesn't even have undo/redo :-)

      Don't even get me started on the lack of features in every other aspect. I still think it's amazing how people seriously compare simple open source hacks with commercial packages that are in a whole different league. Photoshop vs Gimp, Blender vs Maya, ReiserFS vs Veritas, PostgreSQL vs Oracle.
      • The lack of features is only relevant to those who lack either the skills or the imagination (or both) to compensate. The market is too saturated with people who claim to be professionals, yet are only professionals when using expensive proprietary software. Sure, they make life easier, speed things up, and are just great to have. But if you must use them (I'm not talking about pro full-length movies here) because that's all you can use, then you are not a professional.

        • A simple fact: If the product doesn't cater to the needs of the professional user, it won't be used by professionals. Maya does, Blender doesn't. There is a great deal of tedium that can be done manually, but with little justification that it should be - in many cases (but not all), this has less to do with one's skill, and more to do with one's desire to produce something within a reasonable time frame and within reasonable cost constraints.
    • FWIW, there is a petition you can sign if you want to see the source code to Blender opened up. It's here. [petitiononline.com]
  • by flewp ( 458359 ) on Thursday July 18, 2002 @12:49AM (#3906702)
    Are you looking for a modeller that will render as you work in realtime, or have a gamelike renderer that will render out in real time?

    It does sound like you're looking for something more in-tune with hardware realtime rendering, am I correct in thinking this?

    Most programs have an OpenGL/D3D/etc realtime modelling mode, so I guess what I'm asking is, are you looking for something that can give you an accurate representation of what it will look like when rendered in game, using a card's built-in shaders? I guess I'm just getting confused on what you mean by mapping. Are you talking vertex/polygon level procedural mapping, or texture mapping, or what?

    I'm also asking these questions so I can gauge what a low-poly/gaming modeller is looking to accomplish. All my 3d renderings are done in trueSpace, using raytracing or a hybrid of radiosity/raytracing.
    • If you're looking for a cheap solution, try mosh (http://www.jimbomania.com/mosh.html). It uses blocky OpenGL primitives (glutSolidCube() & Tetrahedron & Sphere()) in pseudo-OpenGL code, saved as text. You will have the disadvantage of using an unfinished piece of software, but the advantage of heavy influence over the future design of the code/interface.
    • I think I should also mention that if you're looking to find out how well your models perform in a game situation, the only reasonable way to tell is to actually have them in the "game" and its engine. I don't think any 3d app can reasonably mimic the performance hit a game's render engine is going to take unless it can actually mimic or run the actual game engine.
  • by Anonvmous Coward ( 589068 ) on Thursday July 18, 2002 @12:50AM (#3906704)
    Both LW and MAX have gotten better in ways that you've described. MAX previews motion blur in OpenGL (which, as some of you know, is VERY expensive to render...) and LW now does lens flares, hypervoxels, and fog in OpenGL as well.

    LW's VIPER is pretty interesting too. You do a test render of your object and the resulting image is stored in a buffer. Then, let's say you want to play with the surface texture on it. Fire up VIPER and it shows you a low-res version of the image you just rendered. When you modify the surface, it updates that image preview, but it only re-renders the particular surface that you're playing with. It's much faster feedback for getting a feel for procedurals than a full-bore render.

    Is it real time? Ehh.. no. It's close. 2-3 seconds maybe? I agree with the author, though, that more could be done. Why couldn't they convert a procedural into a texture and present that in OpenGL?

    The good news for the author is that the industry (not just Newtek and Discreet) sees the value in having more real-time feedback. The next couple of major releases for the main packages out there will be very exciting for just that reason.
  • quake? (Score:3, Interesting)

    by GoatPigSheep ( 525460 ) on Thursday July 18, 2002 @12:51AM (#3906705) Homepage Journal
    Nobody has converted Quake into some sort of modelling program yet?
    • The last time I looked, Quake was using binary space partitions, portals, etc. in the rendering process.

      These are very quick for realtime play, because the world is broken up into subspaces and placed in a binary tree when the world is compiled before it ships with the game.
      All of this breaking up takes a hell of a long time; it's also a slow process to merge the subspace trees whenever a space changes, which is why it's all precomputed and shipped with the game.

      So the Quake engine isn't good for dynamic real-time modelling, but it is good for realtime play with a fairly static world.
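
      For what it's worth, the core of the BSP idea described above is small: each node splits space with a plane, and the camera position is classified against that plane to decide traversal order. A hypothetical C++ sketch (nothing to do with Quake's actual source); the expensive part is building the tree offline, not walking it:

```cpp
#include <memory>

// A splitting plane: dot(normal, p) + d = 0
struct Plane {
    float nx, ny, nz, d;
    float classify(float x, float y, float z) const {
        return nx * x + ny * y + nz * z + d;   // > 0: front, < 0: back
    }
};

// One BSP node; leaves would hold the pre-built polygon lists.
struct BspNode {
    Plane split;
    std::unique_ptr<BspNode> front, back;
    // ... leaf data (polygons, visibility info, etc.) omitted
};

// Back-to-front traversal relative to the camera: visit the far side
// first, then the near side. The tree itself never changes at runtime,
// which is exactly why a static world suits the approach so well.
template <typename Visit>
void walk(const BspNode* node, float cx, float cy, float cz, Visit visit) {
    if (!node) return;
    bool cameraInFront = node->split.classify(cx, cy, cz) >= 0.0f;
    walk(cameraInFront ? node->back.get() : node->front.get(), cx, cy, cz, visit);
    visit(node);
    walk(cameraInFront ? node->front.get() : node->back.get(), cx, cy, cz, visit);
}
```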

  • trueSpace 6 (Score:3, Informative)

    by lunadude ( 449261 ) on Thursday July 18, 2002 @12:55AM (#3906717) Homepage
    The new version (6) of Caligari's trueSpace has "texture baking". It can translate procedural textures into mapped UV surfaces. Pretty nifty set of tools too. Windows only.
  • by kbonin ( 58917 ) on Thursday July 18, 2002 @12:55AM (#3906723)
    The problem is that there is far more money to be made selling expensive seats to well-funded studios, movie houses, and advertising agencies than in selling to game companies. Most game companies are cheap, and full of people who are biased against anything not written in-house.

    I write engines now, but I spent a good 10 years as lead programmer of tools groups at various game companies (I distributed the first public reverse-engineering of the 3ds file format, I think for version 2; I had fun explaining that to a room full of lawyers... :)

    I'm still dealing with these issues today. My current employer is starting a major project (an MMORPG), and we spent a decent part of the last year researching art tools. We ended up picking Maya, as it's got a great C++ API that exposes darn near everything, as well as a great C++-ish scripting language for the technically inclined artists to use. (Disclaimer: I have no financial interest in Maya, etc...)

    The problem is that it takes years to tune a decent editor - I started out in CAD/CAM/CAE and even I know better than to try to write an editor unless I've got a few man-years to dedicate to infrastructure.

    Unless the budget absolutely cannot handle it, I'd recommend taking Max or Maya and extending them with your own tools. Maybe even make some of them open source, like Pierre Terdiman did with his Flexporter [codercorner.com] system for Max, which saves a good man-year of 3ds Max exporter work!

    • Most game companies are cheap, and full of people who are biased against anything not written in-house.

      You're obviously much more knowledgeable in this area, as I have never done anything in the 3d game area, but most of the requests I see for low poly 3d modellers for games ask/demand that the user is proficient in Max.
      • most of the requests I see for low poly 3d modellers for games ask/demand that the user is proficient in Max.

        Which is interesting, given the likelihood of the average hobbyist/graduate owning a paid license for MAX is somewhere around... oh, let's see... uh... ZERO.

        And, like all technical skills, proficiency in any other 3D package is less than useless. So all the nine-hour-a-day Blender people needn't bother.

        • Some universities do offer MAX experience - but on a CV it cannot compare to actual commercial experience. But then, how many people can afford a full version of Visual Studio for learning D3D, or, even less likely, a PS2 Tool for learning PS2 development? As far as I can tell, the only way to get into PS2 coding is to have a good games CV already and get lucky - at least in my experience, and anyone who has worked with it will know that it takes a fair while.

          I was lucky enough to go to a uni that had Max, Visual Studio (with all the extras) and a room full of Yarozes (yes, I know it's outdated, but it's still a good start towards Sony hardware - and it was 3 years ago).

          Of course there are always other ways to do these things - like the number of people with warez MAX/Maya/Visual Studio Enterprise or a homegrown PS/DC dev system (I haven't seen a homegrown PS2 dev system yet).
      • You're obviously much more knowledgeable in this area, as I have never done anything in the 3d game area, but most of the requests I see for low poly 3d modellers for games ask/demand that the user is proficient in Max.

        I would like to throw in my tidbit here and mention that Max sucks for modeling.

        I do not know if they have gotten rid of that stupid separate rotate view tool yet, though; I certainly hope so. The second/third/fourth/fifth/sixth/seventh/eighth/ninth mouse button is there for a reason. . . .

        (yes my mouse really does have 9 buttons on it. :-D The higher end CAD mice have well over 16, heh)

        Rhino3d is my preferred modeling tool.

        Max does rock for setting stuff up though, and doing animations in it is pretty darn intuitive, but ick, the interface . . . . ::shudders::

        Oh well, most of the major packages DO have a cruddy interface, heh, and tend to degrade the quality of a modeler's work ;-D

      • heh - so much for my doc being "public". I did it while I was at SSI ('91-92), gave it to people at tons of other companies (games & later VR), looks like it never made it out anywhere Google is indexing.

        I'll dig up the old docs for integration into what's still public, if anyone is interested. I looked over the docs Google found, and I had a bunch more chunks documented, especially animation and all the material data...
  • Try these (Score:3, Insightful)

    by Anonymous Coward on Thursday July 18, 2002 @12:57AM (#3906729)
    Take a look at gmax [discreet.com] and Milkshape3D [swissquake.ch]
    • Milkshape rocks! (Score:2, Informative)

      by Klowner ( 145731 )
      As a proud registered user of Milkshape, I'm very happy with it. It can even export models in ASCII format for very simple loading into homebrew apps. The ms3d model format also saves surface specularity, ambient color, diffuse color, etc., and has smoothing groups.


      Klowner
  • by Anonymous Coward
    If you are looking for ease of poly visualization Multigen's Creator or Performer are definitely my favorites, with some neat features like the ability to "shrink the edges" of the polygons to look inside your model and catch those annoying hidden faces...
  • Blender (Score:5, Informative)

    by Picass0 ( 147474 ) on Thursday July 18, 2002 @01:02AM (#3906745) Homepage Journal
    Blender is in the process of becoming open source, and would make an awesome tool for game modeling. Blender has a built-in game engine and modeling tools.

    Visit Blender3d [blender3d.com] or elYsiun [elysiun.com] and help open the source on this program. It could become THE gaming modeller for Linux.
    • Yeah, blender's a right b*tch until you get used to it...
  • *If* you are just talking about modeling, then there are some possibilities. www.wings3d.com is a free version of Nendo and I *think* it's open source. It is written in Erlang, however. If you are talking about the full range of 3D asset generation (models, textures, animation, etc) then there isn't much you can do. As for Max, Lightwave (and all the others) becoming more game-friendly... Doubt it. Those companies are all trying to make their 3D app "all things to all people". That way lies madness.
  • Do a search for Gmax. It's built for game design, and kicks ass.
    • Did you even read what he's asking? He's saying that Max etc. don't meet his needs. Gmax is just crippled Max, so why would he care?

  • http://www.equinox3d.com

    It lets you write your own OpenGL renderer for a whole 3D window, or just a specific Geometry.
  • by Dr. Bent ( 533421 ) <ben&int,com> on Thursday July 18, 2002 @01:08AM (#3906763) Homepage
    3DFilm [radtime.com] does realtime modeling AND video compositing. I use it a ton....
  • Multigen [multigen.com] is sadly the only solution I know of (used, for example, in flight simulation), with Vega or Creator.

    But it exists only on Windows NT/2k or IRIX; there is no real equivalent for Linux or any free platform that I know of.
    I would also be very interested to know if there is other good realtime software.
  • whats the problem? (Score:3, Informative)

    by dpotter28 ( 588834 ) on Thursday July 18, 2002 @01:19AM (#3906787)
    I work mainly in film and commercials, but I do work on a few games per year (modeling/texturing/animating characters). I don't see what's wrong with the current tools. 3dsmax supports both OpenGL and D3D in the viewports.
    As for your problem with procedurals... Well, procedural textures are, for the most part, 3D textures. Which means you do not need UV coordinates to map an object with a procedural. You can, either by script or plugin, "bake" the procedural into a bitmap and use that.
    You also have options to send data in any format you choose to another viewer. There are tons of options available. I don't see why choosing a more limiting system would make life easier. As projects change, you may very well like the versatility your "fancy" application affords you. :)
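
    To make the "bake the procedural into a bitmap" idea above concrete, here is a minimal, hypothetical C++ sketch (the checker function and raw pixel buffer are stand-ins, not any package's API): evaluate the procedural at each texel's UV position and store the result as an ordinary texture.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Stand-in for whatever procedural the artist authored (a checker here).
static float proceduralAt(float u, float v) {
    int cu = static_cast<int>(std::floor(u * 8.0f));
    int cv = static_cast<int>(std::floor(v * 8.0f));
    return ((cu + cv) & 1) ? 1.0f : 0.25f;
}

// Evaluate the procedural over UV space into an 8-bit grayscale bitmap.
// A real baker would sample in the object's actual UV layout and handle
// filtering, padding, and color, but the principle is the same.
std::vector<std::uint8_t> bake(int width, int height) {
    std::vector<std::uint8_t> pixels(width * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float u = (x + 0.5f) / width;
            float v = (y + 0.5f) / height;
            pixels[y * width + x] =
                static_cast<std::uint8_t>(proceduralAt(u, v) * 255.0f);
        }
    }
    return pixels;   // write out as TGA/TIFF/etc. with your image library of choice
}
```
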
  • by leabre ( 304234 )
    If you can go the Maya route, they have the Maya Real-Time SDK (which is basically a tool for converting Maya worlds into a game). I think Maya RT-SDK is a game engine. Anyway, last I checked, about a year ago, it cost $250,000 (not sure if it was per-seat or site)...

    Thanks,
    Me
  • Max4 shaders (Score:1, Informative)

    by Anonymous Coward

    many of the procedural shaders used by these packages map very poorly to a hardware shader's abilities

    Didn't Max4 add the ability to execute DirectX 8.1 shaders for its real-time rendering so artists could preview exactly what they were getting?

  • Softimage XSI allows you to see OpenGL shaders in realtime. This may not be useful for a game being developed in Direct3D I suppose.

    If you have some time (say, 3 months), you can get the free Softimage XSI Experience CD set from www.softimage.com.
  • Have patience (Score:4, Insightful)

    by yoDon ( 123073 ) on Thursday July 18, 2002 @01:41AM (#3906844)
    Hmmm... let's look at the phrase "hardware capabilities"

    "Hardware capabilities" doesn't have a single definition. Are you talking hardware like the Radeon 8500? (which is pretty sweet and fairly common among gamers) or are you talking hardware like the graphics chip in the Intel 810 motherboard set? (which is pretty weak but extremely common among the general public). Believe it or not, a huge number of games are still spec'd to run on low-end chipsets like the i810 because thats what consumers own. Sure, everyone you know has an 8500 or a GeForce4, but you're not a normal person if you're here reading slashdot. Normal people have old machines with slow graphics cards. A modeler that was designed for 8500 cards would be next to useless on a project aimed at i810's, and vice versa.

    More importantly, "hardware capabilities" doesn't have a static definition. Graphics hardware is subject to Moore's law (except that the doubling time is even shorter than for CPUs). In the 12-18 months it would take you to write a high quality hardware-oriented modeling tool, graphics chips would run through at least two, possibly 3 generations. Do you design for the current hot chips, since those will be fairly common by the time your software is ready? Or do you try to guess what hardware will do in 18 months even though only a few hardcore gamers will own cards with those features when your tool is released?

    Let's assume you've figured out whether you are going for serious gamer hardware or mass market hardware, and you correctly predicted what year you were targeting and guessed the feature set perfectly. Let's even assume you've written it and finished debugging it. Six months later (when the next generation of chips arrives), do you (a) throw away all your work or (b) invest another 6 months completely revising it because NVidia just announced some whizbang new feature that your user interface can't support?

    Chances are, whatever you decide, pretty soon you'll find yourself doing exactly what Alias|Wavefront and Discreet and NewTek and etc. are trying to do: build the very best modeler you can, and let the hardware vendors catch up. You don't need that many more doubling times before all those slick features are "hardware capabilities."

    Have patience.
  • by array_one ( 582818 ) on Thursday July 18, 2002 @01:50AM (#3906869)
    Doesn't Nvidia's Cg fix most of your problems? They (Nvidia) are creating plugins for Maya and 3dsmax so that one can see the final product in the viewport. If you don't want to wait for Nvidia, try out Softimage|XSI, as it comes with some very robust tools for creating shaders for use in video games (yes, it really looks like it would in your game). Check this out for more info: http://www.softimage.com/Products/Xsi/v2/features.htm#rts
  • ZBrush 1.5 came out at Macworld today (PC also): www.pixologic.com. The modeling is similar to 'painting' on a sphere using a push/pull depth technique, but about 100 times easier with 100 times the features (texturing and UV mapping included). The new version supports putting out a low, medium or high polygon count model in a number of formats. I do not work for Pixologic; I just have the demo. Download 1.23b and you'll be totally amazed. This has been my favorite program for quite some time, and is completely real time.
  • by shplorb ( 24647 )
    3ds max r4 has plugins available that allow you to create D3D8 pixel shaders on materials and view them in viewports. I presume that's what you're after?

    Really though, I think your question is silly because you want a procedural modeller, not a polygon-based one like the versions of 3ds prior to it becoming 3ds max. Procedural modellers allow you to go back through the stack and modify parameters, rather than deleting and recreating to change something.

    Because of the procedural nature of these modellers, the polygon geometry that the renderer uses has to be regenerated every time you make a change, which is slower than manipulating polygons directly (there is a rough sketch of this after this comment).

    Other modellers probably have the same shader previewing functionality now, or the ability to create plugins to allow them to. I can't speak for them though, because I'm only familiar with 3ds max.
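
    A rough sketch of the stack re-evaluation described above, in hypothetical C++ (not the Max SDK): the model is a base mesh plus a list of operations, so changing any parameter means re-running the whole chain, which is why it feels slower than editing polygons in place.

```cpp
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mesh { std::vector<Vec3> verts; /* faces omitted for brevity */ };

// Each stack entry is just a function from mesh to mesh
// (think Bend, Taper, MeshSmooth...).
using Modifier = std::function<Mesh(const Mesh&)>;

struct ModifierStack {
    Mesh base;
    std::vector<Modifier> stack;

    // Changing any parameter anywhere in the stack invalidates the result,
    // so the whole chain is re-evaluated from the base mesh.
    Mesh evaluate() const {
        Mesh m = base;
        for (const Modifier& mod : stack) m = mod(m);
        return m;
    }
};
```
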
    • Also, if what you really want is to do your final renders with hardware, then I have two things to say:

      1. Write a render plugin to do it.
      2. 'Scuse the French, but you're fucking nuts unless you have a card that does 64-128 bit colour rendering like modellers do. Cards are also designed for realtime rendering, and as such are full of tricks and hacks to get the speed at the sacrifice of quality.
      • Also, if what you really want is to do your final renders with hardware, then I have two things to say:


        1. Write a render plugin to do it.
        This is exactly what we do for our games; we have a bespoke realtime rendering solution with a novel idea of materials (similar in some ways to a RenderMan shader system, though much more limited). This means a material plugin needs to be written for our artists just in order for the materials to be edited in situ. As a side-effect it's usually relatively simple to get the materials rendering as they would in the game in the render window.
  • depends on the future of game content tools.

    Traditional content development is a long arduous pipeline. Skilled artists trained extensively in high end proprietary software packages create fat content chunks that get ground through this pipeline and end up in a game, presented via the engine, as polygons with textures or music or what have you. It's expensive, wasteful, and a prohibitive barrier to entry for indie developers.

    Ultimately, the only part of the pipeline meaningful for a gamer is the output - what's in the games. But, the more flexibility and power a content pipeline is imbued with, the bigger the payoff is for gamers and developers.

    So, the question then is how do we get from the grinding fixed pipeline to something elegant and useful. The best possible future is for content creation to change in a few major ways: to be immediate and in realtime, to be in the game engine itself, and to be mostly procedural to satisfy these constraints. This isn't simple, but here's a quick outline.

    Part one requires the pipeline be as short as possible. Instant game feedback means no extra turnaround time for revisions.

    Part two is to have our realtime zero-length pipeline in the game engine itself, so the game, which is the game content, is a self supporting environment. The engine becomes not only the output method, but the development host. The best way to change a game world is obviously from within the game itself (sound familiar? that's how the real world works).

    Part three is the magic that makes this possible. Procedural content capability changes the entire outlook of what is possible to do in a game engine. Procedural content is more flexible (since it's programmable), more lightweight (since the descriptions are effectively code - even if you model them through a different interface), more distributable, more scalable, more unique, more immediate ... the list goes on.

    The next John Carmack won't be a game engine guy, he'll be a game CONTENT engine guy.

    And that, temporarily, is the end of this story. Game content creation is basically in the dark ages because these parts haven't fallen into place yet. Nobody has this together yet. Someday this will be real, and we'll look back and think of current content as being like coding assembly instead of using high level languages.

    When game content is capable of really expressing what's in gamers' imaginations and dreams, well, then we'll have quite a party on our hands.
    • Now let's discuss the (theory of the) technology briefly.

      Most current content authoring tools are specialized data manipulation facilities. "Isn't everything on a computer data?" Right, but manipulating what is essentially the output data directly, even from a high level, is more than a little stupid. It's what's done because it has always been deemed necessary, but it's not very smart.

      To start working with procedural content, you need content tools that are about modeling processes. Now, not everything can be a runtime process (yet) because things like radiosity computation are freaking slow. But, and this is important, fun game content that's interactive and interesting is several orders of magnitude more important than having the prettiest, most "realistic" (don't make me laugh) game content. This should probably be enshrined as BIG IMPORTANT RULE #1 or some such.

      Anyway, modeling the processes that 'create' the content is not entirely different from modeling the physical processes of how that content came to exist in the game world. It really is a form of simulation that, while not necessarily anything like the 'real' physical processes that shape our world, will give life to games that just can't be manufactured by hand.

      The computational process of simulating the modeling processes will be the next big thing.
  • by Anonymous Coward
    Hi,

    The purpose of software like Maya, LightWave, POV, Cinema 4D, 3DS, ... is to render a scene (not yet in real time) that looks as good as possible to the eye (with very complex methods, like radiosity, not implemented in 3D hardware). It is used, for instance, to make special effects in movies.

    Using Maya as a modeler to export objects for a game you are writing (or whatever) is convenient because you do not have to write a modeler by yourself, but do not expect an "Unreal engine"'s quality result in the modeler.
    Anyway, most of those products support OpenGL. That means that while modeling, you are able to have a look at your object rendered in OpenGL using your 3D hardware (so you can expect to have lighting, shading, mapping, etc...).

    I do not know if people at id Software or Epic are using Maya for their needs, or if they have their own modeler.
  • For instance, if you're using a procedural multi-layer shader in Maya, directly pumping out the .ma file with a separate shader isn't a good idea. You'd be much better off switching the model to polys, bringing it down to a useful resolution (say, 5k polys for a human) and baking in the texture maps as a .tif (or whatever format) image file.

    Save the whole kludge as a .x file, and you're done. Then again, this approach takes skill and time.

  • Search for Wings 3D. It's an open source polygon modeller that works similarly to Nendo. However, it is written in a language called Erlang. It runs on Windows, but you have to download the Erlang interpreter from Ericsson.

    This is all you need for making the actual models. OpenGL support, fast modelling, mouse emulation for the popular 3d apps, and features are being added every day. Exports/imports 3ds, obj, and vrml.

    Then use max or something to UV map, and you are set.
  • Try Animation Master (Score:1, Interesting)

    by Anonymous Coward
    Try Animation Master by Martin Hash (www.hash.com). It uses a different approach to the whole process (patches instead of polys) and it's only about $300 (there's a renderfarm option too). The learning curve was a little steep, but I had no previous experience in any other package.
  • Milkshape [swissquake.ch] is a shareware modeler that is designed specifically for making models for games/realtime applications. It imports and exports all major game model formats (md3, md2, etc.) as well as the professional model formats (3ds, lwa, etc.). It has support for bone-based animation and plugins. I'm currently using it for my own engine and it's served me quite well.
  • Actually the Real Time tools in Soft XSI are very nice. Maya tends to steal XSI's thunder in terms of gaming, but hopefully that will change as people realize it's a big step away from the older Softimage. Anyway, for modeling it has the standard polygon tools and refinement/simplification operators that any other good 3D app would have. The nice thing about the XSI realtime tools is that you can design shaders for realtime engines. You can apply the Real Time shaders to your actual model and see it rendered in the OpenGL viewports. Effects and lighting can be modified on the fly and it can all be viewed in the RT shader view. You can simulate the final look of your scene for whatever engine is compatible with the XSI Realtime output files. Again, I have no idea which engines are taking them at the moment, but I hope it becomes a bit more common, as it's a very nice toolset. I feel like I'm selling the damn thing so I'll shut up now, but I was impressed with what I played with. I just happen to like XSI's workflow, so anything that makes it more popular helps my investment of time in it ;)
  • A fractal based renderer (that's practical) that can be adapted for viewport previews. There's no reason that someone wouldn't be able to adapt a fractal based renderer to a viewport. The only problem is... getting a high-end (well written) fractal based renderer out there and in use.
  • See the Gamasutra postmortem [gamasutra.com] for more details.

    I spent a while trying to convince Maya to support the particular type of higher-order surfaces we needed for our game. Due to bugs in the Maya API though this couldn't be done - so we decided to write our own modeller.

  • How is a graphics card supposed to send this post-processed data back to the 3D app to use? The whole point of hardware acceleration is to allow the GPU to process the data more efficiently and quickly than the CPU would, right? So how is the GPU going to give this data back? Most, if not all, current graphics cards do not send any post-processed data back to the system; therefore they send nothing back to the apps, they just pipe it straight out to the monitor and move on to the next frame...

    I'm not an expert in this, but I thought I'd throw the idea out there so maybe someone smarter than myself could expand on it.
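
    For what it's worth, the framebuffer can be read back, it is just slow. A minimal sketch of what that looks like in plain OpenGL (context creation and error handling omitted); glReadPixels is a standard call, and the synchronous round trip it forces is exactly why engines avoid doing this every frame, but it is fine for tools and previews.

```cpp
#include <GL/gl.h>   // on Windows, include <windows.h> before this header
#include <vector>

// Read the current framebuffer back into CPU memory after rendering.
// This stalls the pipeline while the copy happens, which is why it is
// kept out of the inner rendering loop.
std::vector<unsigned char> readBackFrame(int width, int height) {
    std::vector<unsigned char> rgba(width * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    return rgba;   // rows come back bottom-up in OpenGL's convention
}
```
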
  • alias|wavefront's gallery is currently showcasing (read: showing movies) about how games developers have used Maya:
    http://www.aliaswavefront.com/en/WhatWeDo/maya/see/gallery/gallery.shtml

    this includes:
    tekken 3 (namco)
    gran turismo 3 (polyphony)
    crossfire (EA)

    certainly worth a look
  • Hi,

    We had nearly all the problems you are describing, so we decided to write a built-in viewer/exporter for 3DSMax (3.1). It was quite a difficult task (the docs suck, IMHO) and it required nearly one man-year of work, but I think the result was worth it.

    It uses our game engine (InVivo) to display a realtime view of your model. This view displays the materials as defined in our Shader Editor, so we specify all our shaders (shading, z-buffer writing, back-buffer blending, alpha test, stencil buffer, ambient, diffuse, specular, emissive) in Max and artists can view the result as if the model were in our game engine in realtime. Then we just had to plug in our exporter and you can export your scene. We've also added animation and a particle plug-in we developed to Max, and now artists don't have to exit Max to do all the game assets.

  • You have some of the possibilities you are asking about in 3dsmax as it is. If you want to have the very exact procedural textures in Max that you have in your game, you can always code a material plugin yourself (the SDK is bundled with every copy of Max) and use it in 3dsmax as a material. Then you can render a texture from your material to get a snapshot (you can render animated textures as AVIs too) to get an idea of how it will look. Of course you will never get the EXACT view you will get from your own 3D system, but that would be hard. Maybe you can make a preview plugin or something along those lines, to use your own real-time renderer in Max. And if you look at gmax you have the core framework of the 3D package and can build any plugins you want for it. You can also bundle your plugins and gmax with your games for the players to have mod capabilities. As for the optimization issues you are talking about: you can always use the Optimize modifier, and of course you have to think for yourself about what is going to be visible or not. I think you would get pretty pissed if Max deleted faces/polygons that were supposed to be available because it thought they weren't? =)

  • It's not true. In both 3dsmax and softimage|xsi you can use the game engines' shaders: http://www.discreet.com/products/3dsmax/3dsmax_features.html ("Next generation game development environment with support for Direct3D, multi textures per face, opacity mapping, true transparency and pixel/vertex shaders like reflection maps and bump maps") and http://www.softimage.com/Products/Xsi/v2/features.htm#rts
  • Seems to be getting a lot of support lately, and it's fairly easy to model something and put it into a game that supports it.

    http://www.discreet.com/products/gmax/

    You also might want to look at Cg from Nvidia if you want more control over how stuff looks when rendered, although it's not a modeling program. From what I understand, it would give people better control of the hardware, and it probably makes it fairly easy to use the pixel and vertex shaders.

    I hope it helps..
  • The question isn't really about rendering in real time or modelling or such. It's more about seeing it as it is supposed to look, OpenGL or not.

    There is a major problem here though... the max renderer is not the renderman renderer is not the maya renderer is not the lightwave renderer is not the game's render engine.

    Basically, none of the render engines are really comparable as they all do different things differently and to different degrees of accuracy (such as how RenderMan doesn't do any of the nifty lighting effects found in Brazil).

    So my answer is this... have your engine programmer write a plugin for whatever renderer you do use, so that rather than rendering from the program, it renders using the game engine. This sounds kinda obvious, but I'm not sure how difficult this may prove to be. But this will solve the problem of not being able to see how it is supposed to be in game (without having to export the blasted thing every time you want to load it in the game).
    • Another, though clumsy, way might be to have your modeler running and an in-house viewer/renderer running alongside it. It might not be possible to write a plugin for your modeler that uses the exact same code as the final rendering engine.

      Unfortunately this is one case where experience is invaluable. Someone who has a ton of experience will know as they are modeling how it will look in final form. It's a hard skill to develop and requires deeper knowledge than just x colors look different in the console vs monitor.

  • Check out Wings3D - it's a wonderful dedicated low-poly modeler, inspired by Nendo (the little brother of Mirai, which is the tool of choice for many professional game modelers).

    Wings3D is Free Software, and it's already surpassed Nendo in many areas. The user experience is unmatched - closer to modeling with clay than I've ever got on a computer before.

    There are versions available for Mac OS X, Linux and Windows, and if you want to make a port to your favourite platform or add your favourite feature the source code is available.

    Download Wings3D immediately, you'll never look back [wings3d.com].

    --
  • I have used Paradigm MultiGen Creator for building visual databases for realtime systems, and I am currently using it to build exercise areas for a Polaris ship bridge simulator. I am not sure if it is the fix for your troubles (I know for certain that it is definitely not cheap...). However, it is a modeller that definitely focuses on realtime rather than fancy rendering effects. Check out www.multigen-paradigm.com [multigen-paradigm.com]
  • Ended up picking Maya, as it's got a great C++ API that exposes darn near everything

    I'm not a game programmer nor a graphic artist. Can someone explain to me why I'd want an API library for the tool that designs my models? Once my model is in my MMORPG, is it going to call a Maya library function in-game?
    • No, it's so you can expand the tool to fit your needs. Both Max and Maya ship with C++ SDKs and scripting languages.

      We're using the Max SDK and MaxScript to create a suite of tools that run within Max that are specifically tuned for our game development path.

      Why re-invent the wheel (polygonal modeling, vertex animation, texture UV application, etc...) when an off the shelf product can provide that functionality? Then, you use the provided APIs to extend that tool to fit your needs. Both Discreet and Alias/Wavefront market their products to game developers as a sort of graphic-design middleware. It's a good solution.
  • There's little substitute for having models visible in your game engine. Even if the modeler matched the capabilities of your game, you're likely to have slight differences in your shading and lighting models, in your camera rules, and so on.

    It's not that tough to attach a real-time exporter to most 3D modeling packages. Most of the major game development houses have some facility for displaying what's in a 3DS/Maya scene in a linked up game, whether it's running on an XBox, a GameCube or under Windows.

    On top of that, you pretty much need to write your own tools to help artists spot abuse. Add a wireframe mode to your game, and tag all double-sided or currently-reversed polygons with a dot in the middle. Something that simple can point out poly overpopulation or wasted polys pretty quickly.
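
    A hypothetical sketch of the kind of lint pass described above: in a consistently wound, closed mesh every directed edge should appear exactly once (its twin runs the other way on the neighbouring face), so any directed edge that shows up twice points at a reversed or duplicated triangle the viewer can then mark with a dot.

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Tri { std::uint32_t a, b, c; };

// Flag triangles that share a directed edge with another triangle.
// In a well-formed mesh that never happens, so a hit usually means a
// reversed or duplicated face worth drawing attention to in wireframe mode.
std::vector<bool> flagSuspectTris(const std::vector<Tri>& tris) {
    std::map<std::pair<std::uint32_t, std::uint32_t>, int> edgeOwner;
    std::vector<bool> suspect(tris.size(), false);
    for (int i = 0; i < static_cast<int>(tris.size()); ++i) {
        const Tri& t = tris[i];
        std::pair<std::uint32_t, std::uint32_t> edges[3] = {
            {t.a, t.b}, {t.b, t.c}, {t.c, t.a}};
        for (const auto& e : edges) {
            auto it = edgeOwner.find(e);
            if (it != edgeOwner.end()) {           // same winding seen twice
                suspect[i] = true;
                suspect[it->second] = true;
            } else {
                edgeOwner[e] = i;
            }
        }
    }
    return suspect;
}
```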

  • Check out Carbon Graphics Geo [carbongraphics.com].
  • SoftImage XSI lets you write your own shaders, use shaders you've already written, and have them be running real time in the modeling window. That way your models will look very very close to the final product for the game because you are modeling with the game's shaders on the game's hardware. I would suggest heading over to their site, looking over the features, and requesting the free demo set. It even has video tutorials!
  • This is a decent modeler focused on low poly models and texture editing:

    http://www.swissquake.ch/chumbalum-soft/
  • Possibly a modeling package more geared to hardware capabilities, or some way of adapting an existing modeler to make it more hardware friendly by blocking or modifying features that 3d hardware can't handle. It would seem such a package could be cheaper too, since it wouldn't have to support as many fancy features.


    Both Max and Maya can be adapted to do some or all of what you want. It is fairly easy to build your game engine into Max or Maya so that it runs with the model in a preview window at all times. In the past year or two, Game Developer Magazine has run articles on doing both things. You might be able to find the articles at gamasutra.com (some articles from GDM end up being published there as well).

    Along with this, you can add tools for carefully controlling the blending and hardware shading as data attached either by uv maps or stored per vertex.

    As to fans and strips, it definitely would be possible to make an exporter that would automatically do this, but not necessarily in an optimized manner. I'm fairly confident that it would be possible to write plugins for both that add fans and strips as a new object type, but I'm not sure if it would be worth the effort. You also might be able to do it by storing extra information along with each triangle saying what strip or fan it belongs to.

    The great thing about Max and Maya isn't their built-in tools; it is how programmable they are. They are best used as the foundation for writing your own production pipeline: you can do everything from integrating your own asset management system, to adding modelling plugins to help, to storing all sorts of additional information per vertex or per triangle (or per whatever object type), and then integrating whatever animation tools you want, whatever special effects you need, and whatever exporting, rendering, and post-processing you might need. There really is pretty much no end to what can be done with those programs.
  • Derivative's Touch Software [derivativeinc.com] is exactly what you're asking for. It's a real-time, full-featured 3D modelling and animation program tied in with compositing and a pile of other goodies. All aspects are modifiable in real-time and you can create your own slider interfaces to control exactly the parameters you choose. Their software is based on Side Effects [sidefx.com]' well-known high-end 3D effects package, Houdini. You can play around with the synths on Derivative's website by downloading the player software, or get the designer software for a free 30-day trial to see how it works.
  • I'm afraid I'm a raytracing freak. I would rather have hardware that does raytracing freaking fast than keep up with this business of trying to make non-raytracing look better. It seems that the better long-term solution is hardware-accelerated raytracing rather than trying to make better software for cobbling together models for non-raytracing systems. (Realtime caustics! Realtime radiosity! Well, I can always dream...)

  • try: http://www.derivativeinc.com/ [derivativeinc.com]

    they're the guys that INVENTED procedural real-time 3D graphics...

    Derivative is dedicated to advance the way people make art. Derivative produces innovative tools for designing and performing interactive 3D artworks and live visuals.

    Touch 101, the Derivative product family, is a new artform which enables you to create interactive 3D visuals for the web, interactive art installations and live performance. Touch is a unified content development environment that combines 3D modeling, animation, MIDI sequencing, QuickTime mixing and more.

    regards,

    j.

  • Here are some clarifications regarding my problems with "traditional" modeling packages that I am looking for solutions to.

    1. EXPENSIVE--I realize they are selling to a small market, but it is very pricey to give a large team of artists their own copies of Max, Maya or whatever. It would be nice if these companies at least offered a version without the film-quality renderer, which game artists rarely use (ok, cutscenes maybe, but other than that who needs it?)

    2. Lets you do things that don't translate to hardware well:
    - non-power-of-two textures (no way to force the modeler to reformat the textures to a sensible size)
    - inefficient primitives (no way to remove interior polygons in many cases)
    - no way to lock out features that hardware can't reproduce, like some procedural textures.

    3. Poor support for things hardware DOES support.
    - Triangle strips/fans, no way to generate, see or edit them IN the modeler, particularly editing-in points to corner-turn strips.
    - Hardware shaders, no way to see them in the preview windows
    - Poor support for baking lighting info into textures/vertices
    - Hard to preview art in different rendering modes (bilinear, trilinear, AA, anisotropic filtering...etc) to determine what looks best.

    4. Poor capabilities for optimizing models.
    - need tools for removing invisible (interior) polygons
    - need tools for turning texture res up and down to quickly find the resolution that looks the best and resampling/filtering textures.
    - better support for generating Levels of Detail
    - No good way to track the texture/polygon budget of the model you're working on.
    - Poor support for making multiple versions of the same model (ie: when you want to make variants for cards that do rendering/shading in different ways, or want to do high/low performance versions of a model...etc.)

    I certainly don't expect the modeler companies to produce something where the preview output will be an exact match for my engine. But it would be nice if they had a lot more "hardware friendly" modes so you don't have to spend a lot of time training artists what features of their tools work and what don't, or teaching them to "guess" what a particular effect will look like when done on the hardware.

    I've had to write a LOT of different kinds of exporters/postprocessors to do these kinds of things and it's quite a bit more difficult to do, for example, triangle strip/fan generation in an exporter than to have the modeler generate properly stripped/fanned primitives in the first place (ie: make a cylinder as one strip and two fans, PLEASE; there is a rough index-generation sketch for that case after this comment).

    Sure, you can fix some of these problems with plugins, post processors, exporters...etc, but almost all of them are ten times easier to fix if you do it in the modeler itself.
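
    To illustrate the cylinder case mentioned above: emitting it as one strip and two fans is mostly an index-ordering exercise, which is why it is frustrating when a modeler will not do it. A hypothetical C++ sketch that just builds the index lists (vertex layout assumed: two rings of `segments` points plus two cap centres; winding conventions may need flipping for a given engine):

```cpp
#include <cstdint>
#include <vector>

// Index lists for a cylinder built from two rings of `segments` vertices.
// Assumed vertex layout: bottom ring = [0, segments), top ring =
// [segments, 2*segments), bottom centre = 2*segments, top centre = 2*segments + 1.
struct CylinderIndices {
    std::vector<std::uint32_t> sideStrip;  // one triangle strip
    std::vector<std::uint32_t> topFan;     // one triangle fan
    std::vector<std::uint32_t> bottomFan;  // one triangle fan
};

CylinderIndices buildCylinder(std::uint32_t segments) {
    CylinderIndices out;
    const std::uint32_t bottomCentre = 2 * segments;
    const std::uint32_t topCentre = 2 * segments + 1;

    // Side: zig-zag between the two rings and wrap back to the start.
    for (std::uint32_t i = 0; i <= segments; ++i) {
        std::uint32_t k = i % segments;
        out.sideStrip.push_back(k);             // bottom ring vertex
        out.sideStrip.push_back(segments + k);  // top ring vertex
    }

    // Caps: fan around each centre vertex; the bottom cap walks the ring
    // in reverse so both caps face outward.
    out.topFan.push_back(topCentre);
    for (std::uint32_t i = 0; i <= segments; ++i)
        out.topFan.push_back(segments + (i % segments));

    out.bottomFan.push_back(bottomCentre);
    for (std::uint32_t i = 0; i <= segments; ++i)
        out.bottomFan.push_back((segments - i) % segments);

    return out;
}
```
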
  • MindsEye [sourceforge.net], while not ready for prime time yet, looks like it could be very nice. They look like they need developers, however. So if anyone is talented in that area, you might want to go help them out.