Designing Computer Animation Software?

reversedNormal asks: "I would like to write a full-fledged 3D animation software package from scratch. Yes, I know, a VERY daunting and time-consuming task. But I have a very good understanding of 3D mathematics, physics, and computer graphics in general, plus a solid foundation in computer programming. To give you an idea, this package will be similar to Maya, 3DS Max, etc. in many respects. The question is, what is the best programming paradigm to use for such a project? I have all of the major concepts and relationships in mind, but refuse to write one line of code until I have a good design plan. How does a professional programmer approach this design task? Ultimately I would like to be able to tie it into any number of different operating systems, graphics APIs (OpenGL, DirectX, etc.), and so on. What are some good ways to do this?"

"Essentially I want the core of the software to be written in Standard C++, and then be able to tie into the Win32 API, or X, or QuickDraw (etc.) for display and input. The main concepts, such as procedural 2D and 3D geometry, 3D geometric transformation, polygon modeling, NURBS modeling, subdivision modeling, keyframe based animation of parameters, motion capture control of parameters, physics-based animation, sound synthesis, texture-mapping, lighthing, rendering, and so on are generally abstract ideas that do not need to depend on (but can certainly take advantage of) any particular API or environment. Of course, the idea is to eventually visualize the abstraction onto the screen, allowing the user to interact via the 2D pointer and keyboard input, and ultimately rasterize it, which will mean turning to various operating system standards. It will also be open as a plugin host and have a built in scripting language. Any design suggestions? I know that this is probably the most intelligent audience to communicate with, and any feedback would certainly be appreciated"

  • by Dwonis ( 52652 ) on Friday October 04, 2002 @08:48PM (#4391255)
    Slashdot is probably not the best place to get advice. It's great for ideas, but I would definitely recommend posting to a few newsgroups and getting involved in a few mailing lists as well. My experience is that the /. crowd has a lot of great ideas, but those ideas are usually not backed by a lot of practical experience.

    Of course, I'm giving you advice on Slashdot, so what do I know? :-P

    • by NFW ( 560362 ) on Friday October 04, 2002 @10:04PM (#4391583) Homepage
      [...] those ideas are usually not backed by a lot of practical experience.

      True, but if he's willing to separate the wheat from the chaff, there are probably enough people here who know what they're talking about that asking here will not have been a waste of his time.

      Especially if he's not discouraged by the e-holes ridiculing him for thinking big. While it's true that he probably won't realize all of his goals, before he's done he will have learned a lot and had a lot of fun. What else matters?

      Anyhow, I have a bit of experience [natew.com] (and some of it with a not-completely-unrelated project [natew.com]), so I thought I'd chime in.

      First, not coding yet is a good idea, and one that's lost on a lot of people. Think first, design, plan, write down your designs and plans (the very act of writing forces you to think about them more), and re-read them to think about them some more. Better yet, find some like-minded people to critique your designs and plans. They'll see things you won't.

      Changing designs is easy and painless when you've only invested a couple paragraphs. It's a huge pain in the ass when you've invested hours or weeks or months.

      I used to work for a manager who believed that with a good design document, you could hire a semi-talented high school student to do the coding. I think that's design documentation beyond the point where diminishing returns sets in, but on the other hand, I also believe that if you know what it is you're going to create, you can't write too much design documentation. XP and "agile programming" are great for situations in which the client changes the requirements regularly, but if you have a clear picture of what you're creating, it's worth spending lots of time on documentation. In my experience it saves far more time than it costs.

      Design the user interface, and write that down, in detail.

      Do a high-level design of the whole system - what are the objects, what are their responsibilities, and how do they communicate?

      For each class, do a detailed design. How does it carry out its responsibilities?

      Then re-read the whole thing and look for issues that you didn't see when you started. Have a teammate reread the whole thing and look for issues. Look for assumptions you didn't know you had. Look for objects that have been tasked with doing things that they can't do with the information or interfaces they have available.

      Then figure out a game plan, a timeline, that will get you a minimal application with at least some usable functionality. That gives you a gratifying achievable goal to shoot for, and it gives you something functional to (hopefully) help keep you inspired.

      Good luck.

      • with a good design document, you could hire a semi-talented high school student to do the coding

        Yes, and you will end up with software that looks like it was coded by a semi-talented high school student.

        Design the user interface, and write that down, in detail. Do a high-level design of the whole system - what are the objects, what are their responsibilities, and how do they communicate? For each class, do a detailed design. How does it carry out its responsibilities?

        And what you will get out of that process is a professional Windows-style application--a big, monolithic piece of software with lots of buttons. It's not very UNIX-like, and many UNIX programmers don't consider the product very high quality. But, I suppose, to each their own.

      • First, not coding yet is a good idea

        In this case, I disagree. Having a resume remotely similar to yours, plus a master's degree in informatics with a partial focus on software engineering, there's one thing I've experienced:

        The only way to learn how to write software is by writing software.

        Apparently, the guy who asked the original question does not have a lot of experience in software development and now asks how to bypass this learning process. My advice to him: You can't. As someone else in this thread said correctly: You can't refuse to learn the piano and demand a record deal first.

        There IS NO working theory on software development. The holy grail hasn't been found yet. (There's a reason why commercial software development rarely is more organized than private hacking.)

        There are a number of methods and tools that are good and helpful, but some of them come and go like fashions (remember? "OOP is going to revolutionize software development!") and some of them are highly effective for one developer and highly distracting for others (e.g. Extreme Programming).

        I'd recommend that you start planning a rough outline of what you have in mind and then start coding. It's silly to expect that planning can replace experience.
        • by richie2000 ( 159732 ) <rickard.olsson@gmail.com> on Saturday October 05, 2002 @06:27AM (#4392650) Homepage Journal
          the guy who asked the original question does not have a lot of experience in software development

          I didn't interpret it like that at all, he claims to have:

          a solid foundation in computer programming.

          He just wants more input on this particular task, probably since he has never put all of his experience in 3D graphics, maths and programming together in one single big-ass project before and wants to minimize the number of false starts. That's my take on his request anyway, I think it's actually a little skimpy to give any really solid advice on. One thing I'd say is to not go it alone. Very few people have the necessary skillset and experience in everything from project management, coding, 3D graphics, software development, documentation and the rest to be able to pull something like this off on their own. He might have, but odds are he hasn't.

          You can't refuse to learn the piano and demand a record deal first.

          Sure you can, if you have nice tits or are willing to undergo minor surgery. :-)

          There's a reason why commercial software development rarely is more organized than private hacking.

          Oh, it can be a LOT more organized. It might just not always help. :-) Microsoft, to name one, has very organized software development methods and employs lots of testers, internal quality tests, code audits and whatnot, but still manages to miss out on the basic design - the very area you seem to play down.

          It's silly to expect that planning can replace experience.

          And it's silly to expect that coding skillz and experience can replace a good design.

      • by PingoSvin ( 17682 ) <pjunold.daimi@au@dk> on Saturday October 05, 2002 @05:12AM (#4392555) Homepage
        Hey, with all due respect, that kind of software development died along with the dinosaurs. It's got waterfall written all over it.

        Design the user interface, and write that down, in detail.
        How about drawing a quick sketch, hack together a quick prototype, realising that in just a week you'll have a much better idea of the system you're writing.

        Do a high-level design of the whole system - what are the objects, what are their responsibilities, and how to they communicate?
        How about skipping that part entirely, realising that you're not going to get the architecture right upfront anyway. The architecture is going to evolve through a number of refactorings, not through some superhuman design process.

        For each class, do a detailed design. How does it carry out its responsibilities?
        How about realising that would be a complete waste of time, as you'll once again be much smarter after just a week of codewriting.
        • Yes, waterfall-style planning (actually, it's "Vattenfall"-style planning, as in the company "Vattenfall", but that's another story) has been abandoned for being too inflexible. When new requirements pop up, that kind of planning requires you to rewind to the requirements phase, which is Bad and not very much in line with how reality works.

          However, your argument is equally out of touch with reality, just from the other side. Have you ever written a spec? Have you ever made a design? I have, on some projects, and I have not, on others. I have been a professional designer-coder for 18 years, and I've seen projects without management crash and burn. I think the best way to sum it up is the old military adage:

          "No battle is ever won according to plan, but no battle was ever won without a plan, either."

          Let's begin from the top. Code is emotional. You don't throw away code. You rewrite it, you re-encapsulate it, you tweak it. But you never throw away perfectly working code. It's your baby, damnit, and you're proud of it.

          So what if it doesn't solve the right problem? Well, that's what you find out after you've coded for some two weeks and start to see how things fit together. You're now stuck with two weeks' worth of coding that WILL make it into your final product, relevant or not.

          OR... you could plan for two days and discover that already. And you could make classes that fit better together from the start.

          It's true that you get started quicker if you don't plan ahead. It's pretty much like orienteering and running away in some direction (hey, it's about running, right?) without looking at the map and planning your route first.

          Wrong. Coding is not about writing code for its own sake. It is about solving a specific problem. Unless you understand the problem before you start coding, you are going to solve a different one.

          The statement "you'll be much smarter after just [one] week of codewriting" smells of elitism and being so out of touch that I don't know where to begin. Yes, you will know more about your product. You know why? Because YOU THINK ABOUT THE DESIGN as you code!

          Except now you're producing code that you wrote before you knew which problem you were solving. Back to square one.
  • by cscx ( 541332 ) on Friday October 04, 2002 @08:48PM (#4391256) Homepage
    Step 1: Create Sourceforge account.
    Step 2: Place project into "Planning" phase.
    Step 3: Wait 3 months.
    Step 4: Purchase 3D Studio Max using the money you've been saving for 3 months.
  • by youBastrd ( 602151 ) on Friday October 04, 2002 @08:48PM (#4391259) Homepage
    Seems to me you need some sort of mechanical device, perhaps one useful for motion. You should try a roundish shape, some research has proven useful in this area. However, you should not take advantage of this research, rather, reinvent it.
    • you need some sort of mechanical device, perhaps one useful for motion. You should try a roundish shape

      That's called a wheel, and it's been patented (PDF) [ipmenu.com].

      • No problem. Open source cars will simply use triangular wheels. The ride may be a little bumpy, but it is worth it to send a message to those greedy bastards.

        Wheel show them!

        BTW, why can't those slimeballs patent spam and popup ads, and then hunt down the spammers and whack them in the wallet? That is how the patent system is *supposed* to work in my way of thinking.

        On a serious note (and to avoid "offtopic" mods), I am surprised that 3D software packages are not riddled with patent disputes. Some of the technologies involved are not trivial. It is a good thing that in the 70's and early 80's it was tradition to share such info.

    • by timeOday ( 582209 )
      That's my approach to the piano. So many other people already play, why bother reinventing the wheel by starting with chopsticks? I'm not playing a note until I get a record deal.
  • First things first. (Score:2, Interesting)

    by shufler ( 262955 )
    You might want to write pseudocode before you touch that first line of code. For something this large, jumping right in will leave you frustrated and you will surely abandon the project.

    This is something that cannot be stressed enough. Every single detail should be planned out before you begin to code.

    The SDLC is your friend.
    • by Peaker ( 72084 )
      I disagree [slashdot.org].

      Only the top level design should be completed in order to start working. There is no way in hell a programmer can think of every single detail of a large software project, even after writing it, let alone prior to writing it.

      Perhaps you mean that he shouldn't start if he doesn't have any idea of what interactions and modules he has, and what the interfaces between those modules should look like at all.

      In that, I agree. But I've made The Last Detail mistake in the past, and it has prevented the success of some of my attempted software projects. I have done much better in projects where I jumped right into the water. I usually got it right; when I didn't, I had a much better clue how to get it right the second time. The time it takes to make two attempts at a software problem is much shorter than the time spent trying to think of all of those details in the abstract.
      • SDLC or Systems Development Life Cycle.

        Decide first if it is worth your while to write it.
        Get someone to fund it.
        Find out what your intended users need.
        Use IPO (input, process, output) charts to help you get an idea of what the main program process must do. Break it down then into logical component parts and do IPOs for them too. When you get down to the simplest level that can be broken down, then you are ready to begin designing your data structures and relationships - use DFDs and ERDs. Use a top-down hierarchical DFD to define the scope of your system. Don't try to do too much or you will kill your project. A tight, refined scope is better than an ambiguous "it's gonna do everything!"

        After the data structures are done, then do the pseudocode for manipulating the data. About this time, you want to begin work on defining an easy to use user interface - let the users mull it over while you pseudocode.

        Review, refine, tighten the scope if necessary. Reevaluate feasibility. Get some help, but not too much. Use Gantt and PERT charts to plan phases of implementation. Make sure you have adequate hardware and software to support the system. Don't lock yourself into any proprietary toolkit that may carry license fees or limit your potential userbase.

        Choose a platform, coding model, naming convention, and language that best suits the app's purpose. Use a source control system such as CVS. Create a working prototype of the user interface the users agreed on. Let them play with it and give you feedback.

        Review, refine, tighten the scope etc... etc... etc..

        Fill out the prototype with some real code and throw it at the users again.

        Review, refine... you get the picture.

        Somewhere along the line, you will begin to get a working system. Now you must support it or find/train someone to. Remember your documentation. Did you fully comment your code?

        Was it all worth it?
        • by Peaker ( 72084 )
          I believe this method of software creation is good for the Cathedral model, and with well-funded programmers who program for money, it can get done. But keep in mind that since it's not incremental, it's not exciting. DFDs on paper, ERDs, are all boring when you get down to them. Unless you finish it all very quickly, you're going to be stuck in very long design phases. Unless you're an extremely good designer, you're likely to hit some problems in the actual implementation that may require refining the design.

          Code is getting cheaper and cheaper to write. A rapid prototype of how even a large program should generally work or look can be created in a very-high-level language in just hours. So if you're one of those who can do the designing while coding, it's probably most efficient to do it that way, as you can easily throw away the code if you see that the design sucks (much easier to see once there's real code in front of you).

          Also, I don't see how multiple contributors can't fit into this well-framed process. I would also like to disagree with the requirement of pseudo-code, as today's high-level languages, such as Python, are pretty much as high-level as pseudo code and correctly described sometimes as "Running Pseudo Code". This also means that their code, written correctly, is _truly_ self-documenting, truly. Rarely is there a weird piece of code that requires extra documentation. I'm talking about the what and the how, and not the general software architecture, which should be documented separately.

          In summary, I think that your software creation model is too "formal" in the sense that it will not excite programmers, and will make it very difficult to get contributors from all around. Excited programmers are better programmers. Also, I think it's a bit presumptuous of you to think you can know, in detail, the exact best way for the program to be designed, and thus it's probably best to write it piece by piece and see where usability's taking you. An assumption of mine is that most useful features of a program are suggested at its usage stage, and not its design stage - this means you'd better minimize the design stage and get to the usage stage as soon as possible. Do you disagree with this assumption?
          • I believe this method of software creation is good for the Cathedral model, and with well-funded programmers who program for money, it can get done. But keep in mind, that since its not incremental, its not exciting. DFD's on paper, ERD's, are all boring, when you get down to them.

            All this formality, boring as it is, is necessary if you want a cohesive, functional product and not a pile of hacks. Further, this model is quite compatible with incremental delivery; it's just a matter of determining your feature set and building to support it at each stage.

            Also, I don't see how multiple contributors can't fit into this well-framed process.

            They certainly can, but they should be strictly limited in the initial stages. Too many designers will kill your design clarity.

            An assumption of mine, is that most useful features of a program are suggested at its usage stage, and not its design stage - This means you better minimize the design stage, and get to the usage stage as soon as possible. Do you disagree with this assumption?

            Vehemently. Useful features and usage models are best worked out at the design stage, which necessarily includes a prototype (Maya, in this case). The worst thing that can happen is to get to the usage stage (near the end of the dev cycle) and find out that your core assumptions were wrong. Better to find that out up front.

            As far as the specific question being asked here, I think this guy's got no chance, and he'd be better off thinking about what unique features he wants and then go write a plugin.

          • >I believe this method of software creation is
            >good for the Cathedral model, and with
            >well-funded programmers who program for money, it
            >can get done. But keep in mind, that since its
            >not incremental, its not exciting. DFD's on
            >paper, ERD's, are all boring, when you get down
            >to them.

            The SDLC was originally designed in the cathedral, yes. However, that doesn't mean it's not effective. Better to design your house and then build it than the other way around.

            I don't find DFDs and ERDs boring - rather, I find them useful and time saving tools for designing software. Some of my best programming has been done AWAY FROM THE COMPUTER.

            >Unless you finish it all very quickly, you're
            >going to be stuck in very long design phases.
            >Unless you're an extremely good designer, you're
            >likely to hit some problems in the actual
            >implementation that may require refining the
            >design.

            You can use a hybrid of the RAD method and the SDLC as I suggested. The RAD part is building a UI for the users to toy with while designing and building the core of the app.

            >Also, I don't see how multiple contributors can't
            >fit into this well-framed process.

            Let each contributor have a group of related DFD/Pseudocode blocks along with the ERDs to clarify their data relationships.

            >I would also like to disagree with the
            >requirement of pseudo-code, as today's high-level
            >languages, such as Python, are pretty much as
            >high-level as pseudo code and correctly described
            >sometimes as "Running Pseudo Code". This also
            >means that their code, written correctly, is
            >_truly_ self-documenting, truly. Rarely is there
            >a weird piece of code that requires extra
            >documentation. I'm talking about the what and the
            >how, and not the general software architecture,
            >which should be documented separately.

            Well, as you say, Python can be a form of runnable pseudocode. I'm not a python user but I'd assume C would run faster under most circumstances? We pick the best language for the job. C will not be the *best* under every circumstance. Anyway, nothing wrong with using Python for a pseudocode RAD tool - so much the better! It would dovetail nicely with the UI RAD idea.

            >In summary, I think that your software creation
            >model is too "formal" in the sense that it will
            >not excite programmers, and will be very
            >difficult to get contributors from all around.
            >Excited programmers are better programmers.

            Better is subjective. I don't know about you but... I kinda like knowing what I'm to program before I'm required to program it and find out it's not exactly what the user wanted. An excited programmer, in my myopic view, is one who has the tools before him/her, clear program specs to work from, good pay and benefits, and the skills to do the job. In the opensource world, pay and benefits can translate to notoriety and peer recognition, as well as goodwill that can open doors.

            >Also, I think its a bit presumptious of you to
            >think you can know, to detail, the exact best way
            >for the program to be designed, and thus its
            >probably best to write it piece by piece, and see
            >where usability's taking you.

            I agree. You will not know the exact best way for a program to be designed... that is precisely WHY I gather information during the design stage. During the design stage, you are dividing the program up into these little logical parts that you write piece by piece.

            >An assumption of mine, is that most useful
            >features of a program are suggested at its usage
            >stage, and not its design stage - This means you
            >better minimize the design stage, and get to the
            >usage stage as soon as possible. Do you disagree
            >with this assumption?

            I agree that ideas happen during the usage stage. Sometimes you get deluged with too many ideas and never actually get a running program. You have to decide what is a need and what is a want - this requires analysis. You can also get so wrapped up in the UI that you lose your focus on the actual problem. That said, the UI is important! That is what the users will see and interact with. It should be as efficient and enjoyable as possible. Minimize keystrokes and mouse clicks to get the job done. Watch the users use it - see what they have to do to use it. Make it easier!

            It's the Systems Development Life *Cycle*, which means that it is iterative. RAD is another tool that can fit right into the SDLC. No reason why you can't have your cake and eat it too.

            The SDLC is iterative. This means that ideas come in from the users, they get evaluated and prioritized, coders are then selected (or in the case of opensource, they select themselves) and the coding is done. Rolled out to the users for feedback again and again.
      • Hell yeah. I spent the day trying to write a unit that could detect an internet connection (LAN or DUN) and if one wasn't present, start up the default DUN, and it had to work on ALL versions of Windows, and also handle Windows doing an AutoDial during the LAN detection phase.

        I did an initial flow chart of how the detection should work. Of course, after spending the day running a floppy from machine to machine to test the unit, and numerous hours in the newsgroups, it finally works nearly perfectly. Of course, now I have to create a new flowchart to match what the unit ACTUALLY does. At least I can keep two of the processes in the diagram (the terminators for "connection good" and "no connection").
    • by scott1853 ( 194884 ) on Friday October 04, 2002 @09:48PM (#4391530)
      Pfff. What are you, a professional?

      Real developers jump in and just write.

      Then when you're done, you write it from scratch again after seeing all the mistakes you made the first time.

      Then you write it a third time and add comments since you can't remember what the hell you were thinking at 3:00AM on the last rewrite.

      You think I'm wrong? Look at Windows CE.
      • I cannot believe that this was modded as Funny. I think Insightful or Informative would be better. I cannot begin to count the number of times I have gone back and rewritten something just because it was not right enough.

        But then again I am not a professional software developer and don't have a due date to meet.

    • Agree on not jumping straight in. Disagree on pseudo-code. That is a waste of time. Properly written code looks like pseudo-code anyway.
  • by Anonymous Coward on Friday October 04, 2002 @08:54PM (#4391287)
    I know that this is probably the most intelligent audience to communicate with, and any feedback would certainly be appreciated.

    You're new around here, aren't you?
  • team work (Score:2, Insightful)

    by murat ( 262137 )
    I think the first thing you should do is get/recruit yourself some programmers and designers and put together a good team.
  • Scripting Language (Score:4, Insightful)

    by Hal_9000@!!!@ ( 152225 ) <slashdot@not-real.org> on Friday October 04, 2002 @08:55PM (#4391296) Homepage Journal
    For your built-in scripting language, may I suggest you *not* invent your own, especially for a small project. If it were me, I'd create a Perl module (probably a class of them) and use those for the scripting. That way your program has much greater power than it would with a custom language (think web-based 3D apps) plus it reduces learning curves. Think AutoCad/Lisp.

    If you're going to enter the big, bad world of 3D, the only way you're going to get noticed is if you can offer something really special. And not having to retrain all your programmers in a new language is something special. Being able to give an artist a copy of "Learning Perl" and having them go to town is a lot better than trying to give them some documentation written by a programmer at the last minute.
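
    If the Perl route is taken, the usual approach is to embed a Perl interpreter in the application and expose the scene objects to it; user scripts then run inside the host. A rough sketch along the lines of the standard perlembed recipe (the embedded script here is just a placeholder):

      #include <EXTERN.h>
      #include <perl.h>

      static PerlInterpreter* my_perl;

      int main(int argc, char** argv, char** env) {
          PERL_SYS_INIT3(&argc, &argv, &env);
          my_perl = perl_alloc();
          perl_construct(my_perl);

          // Parse an empty program; real scripts are fed in later via eval_pv().
          char* embedding[] = { const_cast<char*>(""), const_cast<char*>("-e"),
                                const_cast<char*>("0") };
          perl_parse(my_perl, nullptr, 3, embedding, nullptr);
          perl_run(my_perl);

          // A user script; a real host would expose XS subs that drive the scene.
          eval_pv("print qq{scaling all selected objects by 2\\n};", TRUE);

          perl_destruct(my_perl);
          perl_free(my_perl);
          PERL_SYS_TERM();
          return 0;
      }

    Build flags typically come from perl -MExtUtils::Embed -e ccopts -e ldopts.
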
    • by tc ( 93768 )
      Scripting in Perl is fine, and I agree with the advice about not inventing Yet Another Scripting Language, but you definitely need to also allow people to supply binary plugins developed in their language of choice (typically C/C++). You're going to be chewing a lot of data in many cases - trust me, using an interpreted language to do non-trivial things to million-poly datasets puts a real kink in your productivity.
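
      For the binary-plugin side, a common pattern is to load shared libraries that export a single C-linkage factory function. A sketch using POSIX dlopen, with an invented plugin ABI (ModelerPlugin, createPlugin) purely for illustration:

        #include <dlfcn.h>
        #include <cstdio>

        struct ModelerPlugin {
            const char* name;
            // e.g. a deformer working directly on vertex arrays for speed
            void (*deform)(float* vertices, unsigned vertexCount);
        };

        // Plugins export one C-linkage factory to sidestep C++ ABI issues.
        using CreateFn = ModelerPlugin* (*)();

        ModelerPlugin* loadPlugin(const char* path) {
            void* handle = dlopen(path, RTLD_NOW);
            if (!handle) { std::fprintf(stderr, "%s\n", dlerror()); return nullptr; }
            auto create = reinterpret_cast<CreateFn>(dlsym(handle, "createPlugin"));
            if (!create) { std::fprintf(stderr, "%s\n", dlerror()); return nullptr; }
            return create();  // native code can now chew million-poly datasets at full speed
        }
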
    • by null-und-eins ( 162254 ) on Friday October 04, 2002 @09:16PM (#4391408) Homepage
      Lua (http://www.lua.org/) is a small, fast, extensible language that is designed to be embedded into an application. It has already become a favourite among game designers. The idea is that you extend it with new datatypes in C, such that the objects in your application become scriptable. Think TCL, just better. For a performance comparison, see http://www.bagley.org/~doug/shootout/craps.shtml. It beats both Perl and Python.
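
      Embedding Lua really is only a handful of calls from the standard Lua C API; the exported host function below (move_object) is made up just to show the round trip from script to host:

        extern "C" {
        #include <lua.h>
        #include <lauxlib.h>
        #include <lualib.h>
        }
        #include <cstdio>

        // Host function callable from scripts: move_object(id, x, y, z)
        static int move_object(lua_State* L) {
            int id   = static_cast<int>(luaL_checkinteger(L, 1));
            double x = luaL_checknumber(L, 2);
            double y = luaL_checknumber(L, 3);
            double z = luaL_checknumber(L, 4);
            std::printf("move object %d to (%g, %g, %g)\n", id, x, y, z);
            return 0;  // number of values returned to Lua
        }

        int main() {
            lua_State* L = luaL_newstate();
            luaL_openlibs(L);
            lua_register(L, "move_object", move_object);

            // A user script animating an object procedurally.
            if (luaL_dostring(L, "for f = 1, 3 do move_object(7, f * 0.5, 0, 0) end") != 0)
                std::fprintf(stderr, "%s\n", lua_tostring(L, -1));

            lua_close(L);
            return 0;
        }
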
    • > it reduces learning curves. Think AutoCad/Lisp.

      That is only true if you assume someone is already familiar with Perl and procedural, structured programming. For a lay person, Lisp is as great and easy a programming language as one could wish for, and far more powerful too.

      All things considered, the original argument by RMS is still valid. If one is going to need a language, why not learn the most powerful language in the world, which also happens to be among the oldest and easiest to read? Lisp all the way...

    • That's an excellent idea, and I was about to suggest the same thing. However, in this case I'd argue for Python. I'm not a huge Python junkie, and I actually use Perl for most things, but if you want it to be easy for non-hackers to script and extend you should choose a language with cleaner syntax. At the level you probably want, Python really will look like pseudocode. Perl just seems like a bad idea for this because the syntax is so... different. You really don't want people taking advantage of all the shortcuts it offers.

      I'm suggesting this partly because there's a really excellent molecular graphics package out there called PyMOL [sf.net]. It's got an OpenGL core written in C, but the application itself is essentially one giant Python module, with higher-level commands and APIs written directly in Python. It also has a simplified command language - so you can extend it three different ways, and control it two different ways. Unfortunately the code is not very well documented and it's essentially a one-man job - rather difficult for me to contribute. However, the product creates beautiful images and the overall design concept is very sound. I know that several other similar programs use the same model, one using Tcl instead.

      These aren't going to be directly useful for the task under discussion, but they are very similar - the more sophisticated molecular graphics programs really are like 3D modellers. It'd be a good idea to check them out, since they're more accessible (i.e. open-source) and higher-quality than most regular 3D products. They're also all cross-platform.
  • Was not this recently purchased by the people and GPLd? Rip them off. It's cheap and easy. Or better yet, forget it and just help them?
  • by kbroom ( 258296 ) on Friday October 04, 2002 @08:56PM (#4391298) Homepage
    Now that the Blender Foundation [blender.nl] have collected all the money (100k.. wow) to buy the blender source from NaN, they will be releasing the source under the GPL very soon (paid members pre-release due tomorrow).

    Blender is a full fledged 3d program with some animation capabilities. Maybe looking at their design will give you some good ideas.
    • You say you want to have a solid design before starting to code? In that case you don't need to look at source code just yet. But by all means steal ideas during the design phase: scripting languages, ideas for the user interface.

      Get the things you are trying to build down on a conceptual level first... start from the basics: what is an object, a path, how will I represent them, how will I describe them, manipulate them, etc. Looking at the work of others will give you ideas for this, but try to think the basics up yourself first. Having a solid foundation of the concepts will make or break the expandability and flexibility of your program later. You will have changes of mind and new ideas during the detailed design and implementation phases, but if you thought your concepts through well enough, the ideas that govern the structure of your design and code will not change, not by much. The difference between good and bad design on conceptual level is like having a framework for your house made of steel girders, strong yet open to changes, or having it made out of ground chicken poo, doing the job at a glance but the slightest shock could send it crashing.

      Do not look at code until you are wrapping up design, and perhaps not even then. Working bottom-up on this scale will be the kiss of death for your project.
  • by youBastrd ( 602151 ) on Friday October 04, 2002 @08:57PM (#4391301) Homepage
    3DS MAX and Maya pretty much do everything under the sun. If they can't do it natively, third-party plugins are a good way to go. If you need some functionality that's not there, write the plugin - surely you've got the skills to do that. These products are very mature already, never mind their popularity and the amount of training users have invested in them.

    You've got an uphill climb if you want to write this thing from scratch.
  • You can read more about it at http://www.blender3d.com/ [blender3d.com]. Here's a brief synopsis of their goals:

    Goal 1
    Make the sources free

    Goal 2
    Establish artist/coder services

    Goal 3
    Make Blender a better product, and promote free access to 3D technology in general

    So, not to totally discourage you, but perhaps you could simply learn how the code works for this project (which is very mature and powerful) and then contribute to it.

    Good luck regardless of whether you start your own project or learn about Blender and help those folks out. Most importantly, have fun!
  • by SIGFPE ( 97527 ) on Friday October 04, 2002 @08:58PM (#4391310) Homepage
    So get some people to help you.


    Here are some suggestions:

    • Make sure the system is fully scriptable. And try to use a real programming language, not something like Mel in Maya.
    • One thing Maya gets right is its use of cached lazy evaluation and a pull-through flowgraph model. If you want any degree of sophistication, I think this is essential, and your first task ought to be designing the architecture of your flowgraph processing (a minimal sketch of the idea follows this list).
    • Make everything procedural. You should be able to put expressions anywhere in a scene.
    • Think hard about an API for plugin writers to make it easy for others to extend it. You're going to need all the help you can get.
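
    To illustrate the pull-through, cached-lazy-evaluation idea mentioned above (the sketch promised in the list), here is a bare-bones dirty-flag node graph. It is the generic textbook idea only; the names are invented and nothing here reflects how Maya actually implements its dependency graph:

      #include <cstdio>
      #include <functional>
      #include <vector>

      class Node {
      public:
          explicit Node(std::function<double(const std::vector<double>&)> fn)
              : compute_(std::move(fn)) {}

          void addInput(Node* in) {
              inputs_.push_back(in);
              in->dependents_.push_back(this);
          }

          // Pull model: recompute only when someone asks for the value
          // and something upstream has changed since the last evaluation.
          double value() {
              if (dirty_) {
                  std::vector<double> args;
                  for (Node* in : inputs_) args.push_back(in->value());
                  cached_ = compute_(args);
                  dirty_ = false;
              }
              return cached_;
          }

          // Invalidation pushes dirty flags downstream but does no work
          // until a value is actually pulled again.
          void invalidate() {
              if (dirty_) return;
              dirty_ = true;
              for (Node* d : dependents_) d->invalidate();
          }

          // For leaf/parameter nodes (time, a slider, a keyframed channel).
          void set(double v) {
              compute_ = [v](const std::vector<double>&) { return v; };
              invalidate();
          }

      private:
          std::function<double(const std::vector<double>&)> compute_;
          std::vector<Node*> inputs_;
          std::vector<Node*> dependents_;
          double cached_ = 0.0;
          bool dirty_ = true;
      };

      int main() {
          Node time([](const std::vector<double>&) { return 0.0; });          // leaf parameter
          Node xpos([](const std::vector<double>& a) { return a[0] * 2.0; }); // expression node
          xpos.addInput(&time);

          time.set(1.5);
          std::printf("x = %g\n", xpos.value());  // recomputed: prints 3
          std::printf("x = %g\n", xpos.value());  // cached: no recomputation
          return 0;
      }
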
    • Disclaimer: I work for Alias|Wavefront.

      Just a word of advice when designing this... be careful how you copy Maya's dependency graph stuff. While it is very cool, Alias|Wavefront in general, and Kevin Picott specifically, hold several patents that describe the dependency graph's implementation.

      That being said, I think that Maya is one of the most elegantly designed pieces of software that I have had the pleasure of working with. It would be a good place to start for ideas.
  • by coljac ( 154587 ) on Friday October 04, 2002 @08:58PM (#4391312) Homepage
    My intuition says that there are years of work in this project. And, you ask, how does a professional programmer approach such a task? This kind of project is difficult and expensive even for teams of professionals with a lot of money behind them. They start with whiteboards, and use cases, and specs, etc.

    If I were you, the first thing I would do is identify a very small subset of the functionality - say, the ability to parse and view a .mdl file - and try that first. Perhaps there's a need out there for a model viewer. A project of that scope has a chance of completion, and if you're still enthusiastic at that point you can expand the scope of your app, building on the code you already have.

    Again, I'm sure you're smart and understand coding and the right physics, but the one thing the experience of a professional programmer would give you is a sense of the scope of this task.

  • probability (Score:2, Redundant)

    by khuber ( 5664 )
    Probability this goes anywhere: 0.

    Anyone who had the drive to complete something like this would already be coding.

    Silly.

    -Kevin

  • by Schnapple ( 262314 ) <tomkiddNO@SPAMgmail.com> on Friday October 04, 2002 @09:00PM (#4391322) Homepage
    Although your question was very specific in some respects, it was vague in others. Possibly purposely. Still, I have a few questions of my own for the original poster:
    1. Is your intent commercial software, free software or other?
    2. If your intent is free software then are you thinking Open Source?
    3. If your intent is commercial software, then why do you think this product would be any better than the other commercial packages out there?
    4. What is the overall goal for this - professional quality animation? Movie or TV quality work? Video Game design?
    5. Are you working alone?
    6. How is it that you will have the time to devote to this? What makes you think you will finish?
    7. And finally, if it turns out that you are an individual from a commercial organization willing to undertake such a tremendous task in a crowded field with such strong players, why do you think Slashdot will be a good place to get meaningful advice?
    Don't get me wrong - I'm not trying to slam you or your idea or anything, but these are the questions that popped into my head when I read this. I know history is filled with projects like this, but for every Linus Torvalds who sits down and makes his own OS (and yes I have read the GNU/Linux FAQ) there are thousands that get 10% in and say screw it.
  • Check out OGRE ... (Score:5, Informative)

    by Scotch Game ( 442068 ) on Friday October 04, 2002 @09:00PM (#4391324)
    Steve Streeting [sourceforge.net] had a similar concept in mind when he implemented his OGRE [sourceforge.net] 3D Engine. He also has designed his engine so that it is written in C++, has a modular plug-in architecture that enables extensibility without recompilation (for certain portions of it, obviously), offers multiple 3D API support and builds both with MSVC++ 6 & 7 and also gcc 3+. The MS builds require STLport [stlport.org], an open-source replacement STL that's more compliant than Microsoft's -- ha, imagine that ... -- but that's along the lines of what you're talking about.

    He's got a number of interesting design ideas and, from what I understand, is fairly accessible.

    Also, and let me offer this, I have no idea about your programming skill and knowledge other than what you've claimed, but please ignore whatever posts come up that try to tell you how incredibly difficult this all is and how you're just better off joining an open source project or buying a package and saving yourself the hassle. If you want to do it, can really do it, and enjoy doing it then, not meaning to quote Nike's marketing department or anything, but: Just do it.
    • by p3d0 ( 42270 ) on Friday October 04, 2002 @10:38PM (#4391677)
      Unless I'm greatly mistaken, OGRE is nothing like a 3D modeller. It is a 3D realtime (ie. game) rendering engine.
      • He's on the right track - the difference isn't so great.

        A good (or even decent, these days) realtime 3D engine is going to support skeletal animation, so you're halfway to modeling movable meshes.

        There are two real differences: one of them is simply a matter of what data you put in, and the other is quickly becoming an actual part of realtime engines (I suspect in either the next release of a "major" game, or the one after that).

        The first is simply that if you add vertices and pretty textures and procedurally generated effects and other eye candy to the point where you're only getting a fraction of a frame per second, it's no longer realtime even if you're still using the same engine, right? ;)

        The second big thing is doing *both* halves of the IK function per frame. Currently in game development, the first half of IK work (placing the targets for the movement and recording keyframes) is done in a 3d modeling program, and the second half (interpolating between keyframes to actually generate animation) is done in the realtime engine.
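
        In code, that second half can be surprisingly small. A minimal sketch of sampling a keyframed channel at an arbitrary time (linear interpolation only; a real package would add spline tangents, easing curves, and quaternion slerp for rotations):

          #include <algorithm>
          #include <vector>

          struct Keyframe {
              double time;
              double value;
          };

          // keys must be sorted by time and non-empty
          double sample(const std::vector<Keyframe>& keys, double t) {
              if (t <= keys.front().time) return keys.front().value;
              if (t >= keys.back().time)  return keys.back().value;

              auto hi = std::lower_bound(keys.begin(), keys.end(), t,
                  [](const Keyframe& k, double time) { return k.time < time; });
              auto lo = hi - 1;

              double u = (t - lo->time) / (hi->time - lo->time);  // 0..1 between the two keys
              return lo->value + u * (hi->value - lo->value);
          }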

        UT2003's "ragdoll" system is an example of doing both halves of IK in realtime (or per frame, if you have enough data that you're not running in realtime - see above). A 3D modeling program would allow you to set an arbitrary situation, put force in the anchor points, and generate an animation.

        In the near (2-4 years) future, models for realtime engines won't require keyframes to make the skeletal animation work. You'll just drop in a model with some metadata attached to it that says "this is a hand" "this is a foot" "this is a knee" and the program knows that to take a step, it should move the foot forward a certain amount - the reason engines will move in this direction is because that will allow their animations to respond to the environment better, cleaner, and without extra work on the artists' part (in fact, less work on the artists' part). For instance, placing one foot on a stair and the other foot on the step below, or being able to block a punch thrown at the character's face without "teleporting" both characters into the appropriate positions for the pregenerated animation (or worse, allowing one character's arm to clip through the other's).

        So, to make a long point short - there's not too much difference between making a realtime engine and a non-realtime engine. Any optimizations that make it run faster are still good, as long as you're not sacrificing quality to do so. You want the physics to work realistically regardless (for your personal value of realistic, of course). So what's the diff?
  • by egg troll ( 515396 ) on Friday October 04, 2002 @09:01PM (#4391330) Homepage Journal
    As much as FSF advocates are pained to admit it, C# is going to become the de facto programming language in the next few years. By writing your program in C#, you'll be the first 3D animation package to use this, and take advantage of the power of .NET. Since there are already several packages similar to yours, you have to do things like this to make your project stand out.

    Good luck!
    • by NFW ( 560362 ) on Friday October 04, 2002 @09:43PM (#4391509) Homepage
      Three weeks ago I would have laughed at this suggestion. But, three weeks ago I'd never messed with C# and the .Net API. I figured they were a necessary evil though, and set about learning them.

      Today I have a nifty little directed-graph editor with cut/copy/paste, a palette of nodes to be drag-dropped onto the graph, a property window for selected objects, and multi-level undo/redo. I've written 4-5 such things in the past using C++ (I have this digraph fetish, you see) but I never got near as much done in three weeks. The timeline really impressed me.

      Other environments may be just as effective, of course. I've only dabbled with java and smalltalk, so I'm not in a position to compare. I just know C# and .Net make for a pretty productive platform.

      And no, I don't work for MS. In fact I've loathed them since they bundled their email application with their (monopoly-holding) operating system, thus both tying and dumping, and thus putting a previous employer of mine out of business. That's a pervasive rant though, so I'll stop here. :-)

      Anyhow, in spite of its birthplace, C# and .Net will be the foundation for my next couple of personal projects, and possibly for many more, until something better comes along. I really like what I've seen so far.

      The lack of multiple inheritance bugs me, but it's less of a problem than I'd expected, and it also presents an interesting challenge. [sourceforge.net]

    • Heh heh. Newbs are so funny. First it's VB. Then it's Java. Now it's C#. Meanwhile, everyone is STILL USING C.
      • > Heh heh. Newbs are so funny. First it's VB. Then it's Java. Now it's C#. Meanwhile, everyone is STILL USING C.

        Uh, in what world is everyone using _C_?
        C is probably less popular than VB, Java, or perhaps even C#.

        Anyways, most people who bash C# are people who haven't used it. I know because I did until a week ago, when I got a copy of VS.NET. I was thinking of checking out managed C++, but I gave C# a try. I have to say it's a very pleasant language to work with. Anyone who's done any coding in C++, Java, Delphi, or even VB should pick it up quite fast.

        And also, VS.NET is a great IDE to work with.
  • by TheAwfulTruth ( 325623 ) on Friday October 04, 2002 @09:04PM (#4391345) Homepage
    The first thing to do is define your target audience. There are already so many 3d modeling/animation programs out there, what is it you are trying to do by making a new one?

    There is a reason why people are willing to pay hundreds and thousands for Maya and 3ds. They are THE standard and they work great. And if that doesn't matter to you then Martin Hash's Animation Master is an amazingly powerful set of programs for dirt cheap.

    So if it isn't to be either of those then what? The first 3d program with a truly easy to use interface? (That may not even be possible, but it would be a godsend)

    Before thinking about programming "para-dig-ums", I'd concentrate on what the "product" (Free or not) really is. Believe it or not, designing the code framework for the internals and drawing the 3d elements on screen is the EASY part. Getting a good, no make that excellent, "User Interaction" going on what is likely the most difficult thing anyone does on a computer is far more work.
  • by yorgasor ( 109984 ) <ron@tr[ ]chs.net ['ite' in gap]> on Friday October 04, 2002 @09:04PM (#4391349) Homepage
    I think this is one of those situations where if you have to ask slashdot, you're not up to the challenge ;)
  • For what purpose? (Score:5, Insightful)

    by tc ( 93768 ) on Friday October 04, 2002 @09:04PM (#4391351)
    A fully-featured 3D animation package is pretty damn huge. What is your intended purpose in building your own, rather than using an existing package? I assume that it is simply for fun, or perhaps you have a more ambitious goal of creating an open source 3D modelling package that might be a replacement for Max, Maya et al?

    If you are intending a serious replacement for professional packages, perhaps you need to talk to some of the users of those packages. I'm sure some game developers (such as myself) and animation folks lurk on Slashdot, but to get really great feedback you really ought to go to a more special-purpose forum.

    That said, some things I'd consider if you're planning a truly professional quality package are:

    - Support and documentation. Especially really great documentation and samples. Plan a lot of time on this. Getting this piece right will pay for a lot of fuckups on the rest of the design.

    - Extensibility. Every pro user I know uses an array of in-house extensions, for everything from custom data format importers and exporters to plugins for procedural geometry, custom shaders, special lighting models, and a whole slew of other things. Make everything scriptable, overrideable, and customisable. Consider writing the bulk of your standard features using the same toolkit people will use to write plugins, because then they serve as sample apps.

    - Consider providing compatibility modes for people migrating from other pro packages. Artists get very set in their ways. Unless you have a truly revolutionary and more productive UI, follow some of the existing conventions, or at least make it an option.

    - Provide a batch processing mode, so that offline tools can invoke the power of your package without firing up the whole damn UI. In the games business, we have a lot of build processes running on our artwork from assorted batch files, Perl scripts, and what have you. I'm sure the same is true in other pro environments too.
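
    As a sketch of what that batch entry point might look like, here is one way to keep the UI optional; the flag names (--batch, --script) and the two core functions are invented stand-ins for whatever the real package exposes:

      #include <cstdio>
      #include <cstring>

      // Stand-ins for the UI-free core; a real build would link the core library.
      bool runScriptFile(const char* path) { std::printf("running %s headless\n", path); return true; }
      int  runInteractiveUi(int, char**)   { std::printf("starting interactive UI\n"); return 0; }

      int main(int argc, char** argv) {
          const char* script = nullptr;
          bool batch = false;
          for (int i = 1; i < argc; ++i) {
              if (std::strcmp(argv[i], "--batch") == 0) batch = true;
              else if (std::strcmp(argv[i], "--script") == 0 && i + 1 < argc) script = argv[++i];
          }
          if (batch) {
              // No window, no event loop: build scripts and render farms call this path.
              if (!script || !runScriptFile(script)) {
                  std::fprintf(stderr, "batch mode needs a valid --script file\n");
                  return 1;
              }
              return 0;
          }
          return runInteractiveUi(argc, argv);  // normal interactive session
      }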

  • OpenGL sites (Score:2, Informative)

    by willpost ( 449227 )
    Take a look at this website.
    It has plenty of tutorials and downloads on OpenGL.
    There's also a large message forum.
    http://nehe.gamedev.net/ [gamedev.net]

    This one is a reference to the OpenGL Commands
    http://www.eecs.tulane.edu/www/graphics/doc/OpenGL-Man-Pages/index.html [tulane.edu]
  • Incremental work. (Score:5, Insightful)

    by Peaker ( 72084 ) <gnupeaker @ y a h oo.com> on Friday October 04, 2002 @09:14PM (#4391401) Homepage
    An important thing to remember, especially for motivational projects (projects that require a lot of motivation to keep going), is to write code incrementally.

    One of the best methods I know to write code incrementally is to rapidly model it in a Rapid Development language such as Python.

    Since I get excited by seeing results quickly, I'd probably start by deciding on a GUI toolkit, and find some Python bindings for it. Perhaps an OpenGL GUI of my own; in any case, that's where I'd start. Whatever excites you the most (perhaps rendering ray-traced images of simple objects excites you, and you can start there) is where you should start. As long as you're excited about what you're doing, you can easily keep on going.

    Then, when you have a GUI (or a simple renderer for that matter), you need to generate "stubs" for other components. There are various meanings of the word "stub" flying around, so I'll explain mine: a simple replacement for the interface of a software component that is trivial to implement, lacks any of the functionality, and is intended to be rewritten later.

    This enables you to work on an exciting skeleton of your program that lacks almost all of the functionality, but already gives you - and perhaps others who share your interests - something very exciting to work with.
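
    To make the stub idea concrete, here is a tiny example with invented names: the interface is real enough for the rest of the skeleton to compile and run against, while the implementation does nothing useful yet and is meant to be rewritten later:

      #include <string>

      class SoundSynth {                          // the component's interface
      public:
          virtual ~SoundSynth() = default;
          virtual void play(const std::string& clip) = 0;
          virtual void stop() = 0;
      };

      class StubSoundSynth : public SoundSynth {  // trivial stand-in
      public:
          void play(const std::string&) override {}  // silently does nothing
          void stop() override {}
      };

      // The skeleton of the program can already be written, run, and shown off
      // against SoundSynth; swapping in a real synthesizer later changes no callers.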

    Notice this is the same method used in the early development of Linux.

    Linus provided a very simple skeleton of a kernel, with either stubs or extremely naive implementations of almost all kernel subsystems. This is much better than the alternative of trying to create the 2.4.19 kernel, component by component, from scratch.
    Linux 2.4.19 shares very little in design and in code with Linux 0.1, and the actual implementation decisions of Linux 0.1 don't really matter at all now.

    This is why I emphasize that you should start before you know exactly where it's going, because there's a good chance you'll be stuck planning it forever if you try to get it all right the first time. If you don't bind yourself to backwards compatibility, it doesn't really make a difference what kind of design error you make now; it can be corrected with time and with rewrites. Don't worry - rewrites are much shorter than the original designs and writes, as they come after a lot of experience, and can often reuse most of the code.

    Keep excited, start coding. Whenever there are tidbits of work you don't like doing, but must, keep in mind what the great, cool, exciting things that depend on it will look like.
    Don't code without design, but do code what little parts you know the design of already.
    • > One of the best methods I know to write code incrementally is to rapidly model it in a Rapid Development language such as Python.
      Since I get excited by seeing results quickly, I'd probably start by deciding on a GUI toolkit, and find some Python bindings for it.

      Or you could just do the whole thing in a RAD environment (Delphi, Kylix, VB.NET, C#.NET). The last one is particularly good for new development on Windows.
    • You know, this is pretty close to what is being called extreme programming. The idea is to get something working quickly, and write lots of tests for each feature/function as you add it. Make the regression tests part of the build process so you are always keeping things in a working state as you go. Expect to throw stuff out and redo it instead of doing extensive rework.

      It's never apparent what the best final structure is when you start. Allow the design to evolve. I think you still want to do a fair amount of high level or top down design at the start, but don't worry about getting everything perfect. Part of this model is also working in pairs or teams.
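
      A minimal, framework-free flavour of "a test per feature, run by the build" might look like this; the function under test (lerp) is just a stand-in:

        #include <cassert>
        #include <cstdio>

        double lerp(double a, double b, double t) { return a + t * (b - a); }

        static void test_lerp() {
            assert(lerp(0.0, 10.0, 0.0) == 0.0);
            assert(lerp(0.0, 10.0, 1.0) == 10.0);
            assert(lerp(2.0, 4.0, 0.5) == 3.0);
        }

        int main() {
            test_lerp();              // add one such function per feature as it lands
            std::puts("all tests passed");
            return 0;                 // a non-zero exit would fail the build script
        }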

  • Emacs (Score:4, Funny)

    by cscx ( 541332 ) on Friday October 04, 2002 @09:14PM (#4391403) Homepage
    Doesn't Emacs already have this functionality built-in?
  • by Laplace ( 143876 ) on Friday October 04, 2002 @09:15PM (#4391405)
    The posts here remind me of a story I once heard. There was a bucket full of lobsters. The lobsters hated being in the bucket, and were all trying to get out. Every time it looked like one lobster was about to pull itself over the edge, the others would grab ahold of it in the hopes of being dragged out too. Instead of finding their freedom, they would pull the lobster back down and they would all be back where they started.

    Why are you being a bunch of lobsters, Slashdotters? Why can't you support this guy and move him along towards his dream? Trolling and cynicism: is that what we have all come to?

    On the other hand, the guy does sound like a fucking idiot.
    • The lobsters represent, in general, the human condition: people see life as a zero-sum game, and any gain for anyone else is a loss for them. That sort of mentality is far too common.

      Mind you I think this ask slashdot is insane: That would be an absolutely massive project.
      • When RMS created the GPL and started emacs and gcc, it looked pretty insane too. Same with Linux. Don't tell him he can't do it, just give him a shove and maybe he'll land on the outside of the bucket.

        Come to think of it, the whole thing is pretty insane, but it's a wonderful insanity, and it just might work.

    • Why can't you support this guy and move him along towards his dream? Trolling and cynicism: is that what we have all come to?

      What if the opposite happened? What if the author went ahead and started a huge project without exploring the scene (pun intended)? Five years later he/she releases it, only to find out that nobody is interested because there are better choices out there.

      I would rather be heartbroken up front than after wasting 5 years.

      He/she is still free to ignore our advice. Are you saying that knowledge is dangerous here?

      I suppose we would also have talked Linus out of his little OS project :-)
    • by Brian_Ellenberger ( 308720 ) on Friday October 04, 2002 @11:28PM (#4391867)
      Mr. Linus, it is my understanding that you intend to write your own operating system from scratch. I just want you to know your "Linux" kernel is a stupid idea. Why don't you just buy Unix from one of the many vendors out there? It is a waste of your time and resources to try and reinvent the wheel.

      There are many things I have written that have been "reinventing the wheel", from a merge sort to converting a Windows BMP to JPEG. But I learned a ton from doing it. Heck, if he just wants to write something to learn more about 3D modeling, more power to him. And you never know - in 5 years we may be raving about a new open source 3D modeler giving 3DSMax a run for its money...

      Brian Ellenberger
  • I would like to write a full fledged 3d-Animation Software package from scratch. ... The question is, what is the best programming paradigm to use for such a project?

    Ask Larry Wall nicely. Maybe he'll squeeze some NURBS and Inverse Kinematics support into Perl 6.
  • Just some suggestions off the top of my head: it seems like that's what you're going to do by writing it in C++, so maybe I'm just repeating what you already know, but C++ lends itself naturally to such things... start with an abstract object/shape class and go from there.

    You should have some sort of independent 3-D rendering layer if you want API independence (a rough sketch follows at the end of this comment). Make it extensible, and don't forget about hardware shaders for quick visualization of textures, etc. - something many commercial modellers don't support yet. Have some sort of translation layer to translate from your own material model to whatever hardware shading language is in vogue now.

    Use embedded Tcl/Tk for UI scripting, to allow maximum flexibility
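
    Here is that sketch - purely an illustration; the Renderer and Vertex names are invented, not from any real package, and the OpenGL/DirectX/etc. backends would live behind the interface:

    #include <cstddef>

    struct Vertex { float x, y, z, u, v; };   // position plus texture coordinates

    class Renderer {
    public:
        virtual ~Renderer() = default;
        virtual void beginFrame() = 0;
        virtual void drawTriangles(const Vertex* verts, std::size_t count,
                                   int materialId) = 0;
        virtual void endFrame() = 0;
        // The material-to-shader translation layer mentioned above would hang off this too.
    };

    // class GLRenderer : public Renderer { ... };   // one backend per graphics API
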
  • break it up (Score:3, Insightful)

    by g4dget ( 579145 ) on Friday October 04, 2002 @09:26PM (#4391440)
    How does a professional programmer approach this design task? Ultimately I would like to be able to tie it into any number of different operating systems, graphics API's (OpenGL, DirectX, etc..), and so on. What are some good ways to do this?"

    Professional programmers generally work as part of larger teams with lots of division of labor. Many such teams have dedicated designers. I seriously doubt that a professional programmer would attempt a project of this magnitude on his own. That doesn't mean it can't be done, but you are asking the wrong question.

    Another question to ask is: who is this software for? Is it for your own edification? Do you want to write a book about it? Do you want it to become a larger project with more participants? Answers to those questions should determine how you structure the project, how you design it, etc.

    My general advice would be: break the problem up into lots of smaller, independently useful programs. That way, you'll have something to motivate you and to show for your work. Don't create ambitious, general class hierarchies--that is the best way of killing even a large project, let alone a one person project.

  • At least until you understand the tools relevant to your design well enough to decide for yourself.

    Go away and build something (or multiple things) small, but related, using technologies and methodologies that might be useful. Learn what works for you and what doesn't. Repeat a few times, until you know what's going to work for your project.

    To give an analogy, look at what John Carmack is doing at Armadillo Aerospace [armadilloaerospace.com]. When he started out, he and his team didn't know enough about rocketry to decide what technologies they were going to use in their ultimate goal (a vehicle to win the X Prize for a private space shot). What did they do? They've experimented with a variety of things on test rigs and as part of complete craft, and discovered what works and what doesn't.

  • A Few Tips.. (Score:3, Informative)

    by Shelrem ( 34273 ) on Friday October 04, 2002 @10:18PM (#4391625)
    I've used a couple of the big-name packages in this area (Maya, Lightwave, Renderman (prman & bmrt)), but I'm primarily a programmer. Being a programmer of 3D applications at that, I have a few suggestions as to how to go about it:

    First, encapsulate the system-specific stuff, preferably through pre-existing libraries where available. You can encapsulate the 3D renderer as well, though I'd suggest just picking one (*cough* OpenGL *cough*) and doing it well, at least at first, without worrying about wrapping it up. Next, I'd design the entire interface in said 3D rendering context or in other windows popped up from it, both so that you don't have to worry about GUI consistency across platforms, and so that it goes fast with fewer big library dependencies. There are a couple of cross-platform libraries that do GUIs for OpenGL out there.

    Now, if you've used Maya much, you'll know that it's basically a big programming environment with a few graphics hooks. The rest is scripted. It's truly amazing, and I think this is quite vital. I'd suggest using SWIG or Boost::Python to do Python interfacing to your compiled code, and use Python to build the interface and implement a lot of the details (some tools, basic relationships, ...); a minimal sketch of what that binding looks like follows below. This doesn't mean you won't also want some simple command language as well, but for the heavier-duty stuff, I think Python's your language - then again, it's really personal preference. I suggest you go with something that's clean and robust and has good, easy C or C++ language bindings.

    Don't worry about a rendering engine; just get it to work with Renderman (prman, entropy, bmrt, etc.). Most renderers in commercial software fall short of those anyway.

    Oh, and try to get the groundwork in there quickly, then do RAD with Python, replacing stuff as needed for performance.

    So, to recap: incremental development, scriptability, OpenGL everything for display, scriptability, Renderman export, and above all, scriptability. Especially scriptability that's easy for artist-types to use.
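
    Here is roughly what exposing a class to Python with Boost::Python looks like - a minimal sketch only; the Mesh class and its members are made up for illustration (the real thing would be your own scene/geometry types):

    #include <boost/python.hpp>
    #include <cstddef>
    #include <vector>

    // A stand-in for one of your real modelling classes.
    class Mesh {
    public:
        void addVertex(double x, double y, double z) { verts_.push_back({x, y, z}); }
        std::size_t vertexCount() const { return verts_.size(); }
    private:
        struct V { double x, y, z; };
        std::vector<V> verts_;
    };

    // Compiled into a shared library, this becomes "import modeler" in Python.
    BOOST_PYTHON_MODULE(modeler)
    {
        using namespace boost::python;
        class_<Mesh>("Mesh")
            .def("add_vertex",   &Mesh::addVertex)
            .def("vertex_count", &Mesh::vertexCount);
    }

    A tool script can then just import modeler, construct a Mesh, and call add_vertex() on it - the artists never see a line of C++.
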
  • Something like:
    Software Engineering 201 - Large Project Design. Fall 2002
    McMartin/Dhawan, T-Th 1-3pm, Rm A104.
    Students will produce a detailed project plan for a large graphical software package, with emphasis on design, resource requirements and critical path.
  • I will advise you on the things your 3D Experiment Project should have:

    1. A plug-in architecture, based on a very simple-to-use API (see the sketch after this list). This way you concentrate on the basics, and hopefully everyone else can then develop plug-ins to extend the functionality. This will save you a lot of time.

    2. Provide a tutorial to show how simple it is to develop a plug-in for it.

    3. Use XML-based formats for your files.

    4. Plan from the start for the program to be distributable, so one can render on multiple nodes on a network.

    5. Check out the TrueSpace [caligari.com] user interface. TrueSpace's output is not as refined as the big guys', but the 3D interface is bar none the easiest to use (and it also supports traditional 4-screen views). You can download a demo from there to check it out yourself.

    6. Make it run on Linux, Mac OS/X, and Windows 2000/XP.

    7. This is a long shot if you don't use Java, but if you program to the Java 3D API you automatically support OpenGL, Direct 3D, QuickDraw, etc, saving yourself a step in the process.

    8. Include a scripting language for ALL internal functions and user interface commands and menus. That way a hard-core programmer has access to the low-level stuff, and a casual programmer can create simple scripts to automate a series of keystrokes and menu commands. Javascript could be great for this, or maybe some XML-based language?

    9. Plan to include support for 3D glasses. They truly make modelling and animating a lot easier.

    10. Include a utility to import/export to at least one well-known format, so people can get started right away experimenting with their 3D objects and scenes (Lightwave, 3DS MAX, Maya, etc).
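
    As promised in point 1, here is a rough sketch of a deliberately tiny plug-in API. All of the names are invented for illustration; the point is just that the host only ever sees one small interface plus one exported factory function:

    #include <string>

    class Scene;   // the host's scene graph, defined elsewhere

    class Plugin {
    public:
        virtual ~Plugin() = default;
        virtual std::string name() const = 0;
        // Called by the host when the user invokes the plug-in.
        virtual void execute(Scene& scene) = 0;
    };

    // Each plug-in shared library exports exactly this one factory function;
    // the host dlopen()s / LoadLibrary()s the file and looks the symbol up.
    extern "C" Plugin* create_plugin();

    The tutorial from point 2 then only has to show how to implement name() and execute() and export create_plugin().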

  • Cool idea, but... (Score:4, Insightful)

    by p3d0 ( 42270 ) on Friday October 04, 2002 @10:31PM (#4391658)
    Hi. I'd like to build myself a television. I know all the features I want to have (colour, brightness, and contrast controls; coax, RCA, and SVGA inputs; 16:9 aspect ratio; light weight; 35" diagonal). I'm wondering, what approach should I use to build my TV? How do the pros do it?

    I hate to be a downer, but that's way too big a question, and too fundamental. It's a catch-22: the fact that you are asking this question indicates you probably won't be able to accomplish the project.

    If what you want to do is try your hand at designing a 3D modeller, I'd say you should fork or join (no pun intended) an open-source project. If you don't like some of their design decisions, then redesign those parts.

    OK. Having said all that, I'm actually going to try to answer this question as best I can off the top of my head. Beware: this is a brain dump, and that's how it will read...

    Start with the interfaces. They are everything. Without good interfaces, you find that the development time for a project with n lines of code will grow as n^2. With good interfaces, it's more like nlogn. I don't know much about 3D modellers, but I bet it will get big enough that this will matter. If your brain is too small to design all these interfaces at once, try to design as many as you can, and then start writing prototype implementations, but be ready to chuck them when you figure out their weaknesses; after all, that is why you are writing them.

    For each interface, ask not what facility that interface provides, but rather what information it hides. That is, what changes could occur behind the scenes without requiring corresponding changes to the caller of that interface? If you can't describe in one simple sentence (with no "ands" or "ors" in it) what an interface is hiding, then it's no good, and you need to take another stab at it. (Of course, I didn't think up this information hiding thing myself [acm.org].)

    As you design your interfaces, identify those that are truly fundamental (ask yourself: would every conceivable 3D renderer need to be able to do this?), and separate them from the others that contain some of your own personal choices. The former are your base interfaces that should (in theory) converge toward the ideal design, such that you feel less and less need to change them as development progresses. The simplicity and stability of these interfaces will determine the flexibility of your design. Their header files should be physically segregated from those of the other less-fundamental interfaces.

    Then, remember to think big and code small. By that, I mean you should brainstorm while writing your interfaces, and design them so they could accommodate every plausible implementation; then, implement them in the simplest, most straightforward way you can. Churn out those prototype implementations with a focus on the shortest path toward correctness. Worry about everything else later; thanks to the flexibility of your interfaces, you can change any of the implementations later. This approach prevents premature optimization, and keeps you from writing lots of intricate code you don't need.

    Recognize when you have opposing forces on each side of one of your interfaces (ie. the caller and the implementor), and split that interface into two. That way you can give both the caller and the implementor an interface they like. (That's described in my thesis [toronto.edu]--chapter 4--and the PowerPoint slides [toronto.edu] on my web site [toronto.edu].)

    When you don't know how you want to do something, see if you can make an interface that hides that decision. That way you don't need to think about it now; punt the decision until you have enough information to make a good choice. If there's no obvious "best" implementation, then that may be something you'll want to change later anyway, and you'll be glad you made an interface to hide it from the rest of the system.
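
    To make the "hide the decision" point concrete, here is a toy example (all names invented): suppose you have not yet decided whether scene files will be XML, binary, or something else entirely. One interface can hide exactly that decision:

    #include <string>

    class Scene;   // defined elsewhere

    class SceneStore {
    public:
        virtual ~SceneStore() = default;
        virtual bool save(const Scene& scene, const std::string& path) = 0;
        virtual bool load(Scene& scene, const std::string& path) = 0;
    };

    // XmlSceneStore, BinarySceneStore, etc. can be written (and swapped) later
    // without touching any of the code that calls save() and load().

    The one simple sentence it hides: "how scenes are stored on disk."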

    I have only just barely scratched the surface here. This is a truly vast question you have asked.

    Good luck with the project.

  • Before I took my first hang gliding lesson in the late 70's I went down to the beach and watched seagulls soar along the cliffs for several hours.

    When I went to my first lesson, the instructor asked the group if anyone had prepared for it. I told them what I had done. The other students laughed out loud. The instructor gave me his best 'you are the only chucklehead here who even has a clue' look.

    Hope this helps
  • A more doable approach to getting into 3D animation software design would be to design something to complement an existing program.

    For example, Weta [wetafx.co.nz] designed an AI program to create the hordes of characters in the battle scenes of LOTR. Their program (plug-in?) worked with Maya (the industry standard) and far surpasses previous attempts. Compare the armies in LOTR to the robot army in Star Wars. If you look real close at Star Wars you can see multiple robots that are duplicates, but in LOTR every character in the army behaves independently because they are all AI.

    This guy could work on something like that, which would improve on the already impressive Maya. Or you could just contribute to an open source project...
  • new kind of spline (Score:2, Insightful)

    by f00zbll ( 526151 )
    If you're going to do it, then why not do something totally revolutionary? Why just redo what already exists? Try to think of a totally new paradigm for 3D modeling and animation. You mentioned NURBS, polygons and a bunch of other stuff. All of the various techniques have weaknesses. For example, take Hash splines in Hash Animation Master. Hash can create a surface with 3 or 5 points, whereas NURBS have to have 4 points. A 4-point surface makes subdivision calculations easy at render time, but it increases the complexity of a model and requires more memory.

    If you can find a completely new way of representing a 3D object that is both more flexible to model and animate than current techniques, then you've got something really worth doing. I'm not smart enough to think of a totally new way to represent a 3D object, but if someone could, it would change the 3D application world.

  • Despite the multitude of responses you've already received, I'll throw my two cents in.

    A while back I wrote a raytracer. After I had it doing primitives, texture mapping, etc., it occurred to me that maybe I should just go whole hog and write a 3D modeller. Well, I changed my mind due to time considerations, but maybe I can help you a little:

    If you know the mathematics behind it, implement the different rendering engines you want - raytracing, radiosity, photon mapping, NPR, etc. - with the primitives you want to support (spheres, planes, triangles, NURBS, etc.). Doing this lets you do what has already been mapped out for you by mathematics. Just implement them; the things I listed above really don't take that long if you have some solid time to dedicate and a firm understanding of the math (the little ray-sphere test below is typical of the kind of code involved).
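
    Here is that ray-sphere test - just a sketch, with a made-up Vec3 type, and it assumes the ray direction has already been normalized:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3 sub(const Vec3& a, const Vec3& b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // Returns true and the distance t along the ray to the nearest hit, if any.
    bool intersectSphere(const Vec3& origin, const Vec3& dir,
                         const Vec3& center, double radius, double& t) {
        Vec3 oc = sub(origin, center);
        double b = 2.0 * dot(oc, dir);                  // dir assumed unit length
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - 4.0 * c;
        if (disc < 0.0) return false;                   // ray misses the sphere
        t = (-b - std::sqrt(disc)) / 2.0;               // nearer root
        if (t < 0.0) t = (-b + std::sqrt(disc)) / 2.0;  // origin may be inside the sphere
        return t >= 0.0;
    }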

    Also make sure you have good references to back up that firm understanding. If you haven't already, check out:
    I found this [amazon.com] book useful for basic CG related math.

    This [amazon.com] is a good one on radiosity.

    And this [amazon.com] one photon mapping.

    Here [amazon.com], and here [amazon.com] are useful as well.
    For particle streams and the like, see the papers residing somewhere on Pixar's servers.

    The problem comes when choosing APIs, GUIs, etc. I would suggest going with something like OpenGL with GLUT. Most of the 3D modellers out there use OpenGL and it has good cross-platform support. You can then use OGL not only to display rendered scenes (I wrote mine out to PNG - I was lazy), but also use GLUT's wonderful main loop to write your UI. Mind you, this can be a pain, but it means you can make your interface fully scriptable and skinnable on-the-fly, using discrete objects to make up the whole. The other choice is to rely on any of the many well-supported UI toolkits out there with OGL support. Just watch the platform (in)dependence if that matters to you.

    Overall, take the project in stages, ideally from the best defined (math) to the least (UI). Make sure each chunk is highly modular so it's easy to alter or replace.

    I'm not sure if this will help, but I wish the best of luck to you. Remember to start a sourceforge project. You might find you can get some help.
  • (* I have all of the major concepts, and relationships in mind, but refuse to write one line of code until I have a good design plan. How does a professional programmer approach this design task? *)

    You will get a jillion different answers to this question. There are very few agreed-upon metrics for deciding which methodology is the best.

    It probably depends on the personality of the coders/maintainers and on the domain (graphics in this case).

    One size does not fit all. It is sometimes said that you don't get a decent design until the 3rd try.

    On another note, as others have noted, there are plenty of *existing* open-source projects that need some improvements. To be practical, you should look at improving existing ones instead of starting from scratch (unless it is a purely personal quest).

    Start by fixing Blender's goofy interface :-)
  • by oquigley ( 572410 ) on Friday October 04, 2002 @11:05PM (#4391774)
    I'm not a programmer, but I am a professional user of 3D tools.
    I've noticed that the huge advances in 3D modeling & animation packages that we saw in the late '90's, with the release of Maya, Max 1-3, Lightwave, Soft and the like seem to have come to a stop.
    The most recent releases of all of these seem to be converging on the same feature sets. They're all making dull, incremental progress. At the moment, I'm wondering whether it's even worth the hassle to upgrade from Max 4 to Max 5. The only thing it really seems to offer is built in global illumination rendering, which has been available as a plugin for a while.

    I'm wondering what the next revolution in 3D authoring tools is going to be. I can't imagine that we'll be going down this path of diminishing returns forever.

    One possibility seems to be true WYSIWYG realtime rendering using the coming generation of floating point accurate 3D cards. Another seems to be automation of character animation (embedding simple AI into the skeletons)...

    I'd question if it's worth the bother to simply replicate the existing functionality of mature, static programs. If it's a new project, you could rethink what a 3D package is supposed to do and make a real leap.
    What do you all think it would take to refresh the 3D tools world?

  • I would suggest that you use OpenGL; it is widely used and better for cross-platform development than DirectX is.
  • by composer777 ( 175489 ) on Friday October 04, 2002 @11:17PM (#4391827)
    Ok, I thought I would try to make this stand out a bit, since I specialize in 3D graphics for a living. I am by no means the top guy at my company, but I do have experience in design implementation, as well as having read the OpenGL reference guide several times. Let's start with general advice:
    1. Keep all your rendering loops tight. Avoid doing any extraneous operations such as caching. Use arrays instead of linked lists (this keeps the data inside your cache). Avoid recursion unless you can be sure that your compiler is not pushing and popping the function stack (some compilers are smart and will not create a stack frame unless you pass data as a parameter).
    2. Try to perform as much work as possible at startup or while the user is editing. Remember, you want to make as many operations as possible a once-only thing. The last thing you want to do is put a bunch of crap in your rendering loop.
    3. Take advantage of caching on your graphics card by using display lists and vertex buffers (see the sketch after this list). On NVIDIA cards this alone can speed up your application by 3x. Only use immediate mode rendering when necessary. Keep in mind that most graphics cards use extra memory when you put the data inside a display list, so there are times when display lists can be slower.
    4. Perform depth sorting for proper rendering of alpha-blended objects. (This is something we failed to do in the initial design of our application, which was written in 1992, before alpha blending was a widely used feature.)
    5. Try to keep interface code generic, and try to make rendering code specific. It's always a tradeoff between readability and performance.
    6. Learn assembly, not because you're going to use it that much, but so that you can spot areas of slowdown. Learning which operations are expensive is crucial. Function calls, random memory access, and pointer dereferences can all slow your program down.
    7. As mentioned in point 6, optimize your access to memory and pay attention to byte alignment, which will allow you to pack more data into the cache. Also look into AMD's and Intel's articles on optimizing for performance. The most crucial aspect is how you access memory. There are new instructions which allow you to load data from memory into cache before it's used. This can often speed computations up significantly in real-time applications. There are also many other tips, but I'll leave it up to you to go to AMD's and Intel's websites and download the white papers.
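
    To illustrate point 3, here is roughly what caching geometry in an OpenGL 1.x display list looks like (drawMeshImmediate() is just a stand-in for your real geometry code):

    #include <GL/gl.h>

    GLuint meshList = 0;

    // Your usual immediate-mode drawing, shown here with a single triangle.
    void drawMeshImmediate() {
        glBegin(GL_TRIANGLES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
        glEnd();
    }

    // Done once, at load/edit time -- never inside the render loop.
    void cacheMesh() {
        meshList = glGenLists(1);
        glNewList(meshList, GL_COMPILE);
        drawMeshImmediate();
        glEndList();
    }

    // Done every frame; the driver can keep the compiled list on the card.
    void drawMesh() {
        glCallList(meshList);
    }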

    You mention animation, the project that I worked on for the last year tackled this problem:

    The project was to integrate animation into an application that was not designed to do this, and to make it generic enough so that the user could animate anything. Here are some simple concepts to get you started in designing an app that allows users to animate in an intuitive manner:
    1. Timelines - A timeline is a graphical way of representing time. You can use something that looks similar to a ruler, with time usually marked off in one-second units.
    2. Keyframes - These are points on the timeline that are specified by the user. Keyframes always have a time associated with them. If I want to animate positional data, then that keyframe will have a time as well as data about the X,Y,Z position of that object. When the user hits play, the application will interpolate between points on the timeline.

    Here's where C++ comes in handy. You can make both timelines and keyframes a class (a rough sketch follows below). Then, let's say I want to animate clouds; I can simply create a class called cloud timeline that contains cloud keyframes. When the user clicks a keyframe, an interface opens up that allows him to edit that data, which in the case of a cloud might be both transparency and position. Then when the user hits play, both position and transparency are animated according to the values of the keyframes given. The neat thing is that a cloud timeline can be derived from a position timeline, which means that you only have to do the work of creating an interface for animating position and orientation once.

    Next, it is important to remember that timelines are a property of some object within the scene. I would say that you can also keep object data organized in a generic manner. I would recommend using a scene graph. So, what the user would see is a scene represented by a tree, with the root node being the terrain and child nodes being objects on that terrain. You can also pull some neat tricks with scene graphs, such as nested transforms. This would allow an object such as a car to have wheels that are child nodes of that car. In this way, you could create a timeline for the entire car, and then the wheels could have their own timelines which would animate their rotation. The wheels would not know anything about the fact that they are moving along with the car. There are of course other ways of animating, such as writing your own scripting language, which I have never done. I have written a VRML parser, however, and I can tell you that learning both Bison and Lex is important if you want to implement a language. There are other types of parsers, but using these compiler tools tends to be more straightforward. At the least, it would be good to pick up a book on language design and construction. The book I studied in college was "Compiler Construction: Principles and Practice" by Kenneth C. Louden, but there are others that may be better. Anyway, that's enough rambling, and since most on Slashdot are pedantic, please forgive any technical errors; it's Friday night and I wrote this in about 20 minutes.
    • Keep all your rendering loops tight. [etc.]

      Premature optimization is the root of all evil. This is not the place to start worrying about it; get a working program first, and then worry about all the deep optimization. Even most of the upper-level optimization - data structures and such - you can rewrite once you know where you're going.

      Here's where C++ comes in handy. You can make both timelines and keyframes a class.

      You mean object orientation, right? The same OO you can do in Simula, Smalltalk, Ada, Java, and dozens of other languages.
  • I have all of the major concepts, and relationships in mind, but refuse to write one line of code until I have a good design plan.
    You sound like a manager who has just completed a software methodology course and now wants to force all of the staff to use what he's learned in class. Bah! At least I, and probably others, think in code and use an editor as a scratchpad to come up with a design.

    There are things you sometimes don't or can't see until you try to code it. Often, it's the case that it's difficult or impossible to do something a certain way due to constraints of the language.

    Iterative design and prototyping are, IMHO, much better than the old "design, then code" method.

  • Why go for C++ and all its complications? Objective-C is cleaner, and if you use the GNUstep framework it is easily portable to Mac OS X.

    Obviously, if you want to Do The Right Thing you might want to use a really relational database and code in Lisp, but I do not know if that is practical already. At least the Lisp part is for sure, and one can always go to SQL as a second-best and stopgap to a real relational system.

  • If you want to write something like 3DSMAX, Lightwave, Maya or XSI alone, you're either a genius or completely on crack. If you're a genius and know how to make such an app, I don't understand how you could not have the intelligence or knowledge to code it the right way with the right tools for the job.

    I don't want to sound like a flame, but comments like this always crack me up. If you want to see the biggest success of a "little people's job", look at LightWave 3D: 2 guys made that software in the beginning, one guy on the modeler, one guy on the layout. Today they have a lot of people working with them, because at some point, if you want to have features, one or two people can't cut it anymore. Even if you know your 3D, your maths, and all, you'll always end up not knowing that one little thing, and the research it requires steals valuable time...

    Look at another example, project:messiah, supposed to be the best thing out there, with the best renderer, the best animation software and all, all in one package... guess what? They are late, and they have a pissed-off userbase. While I have a lot of respect for that company - they did a lot in the 3D scene - they've hit a reality in the programming world that doesn't always apply in the 3D production world: you don't always meet your deadlines or objectives in time. So, one year past the "release announcement", they are still late. Not because they suck or have no talent (god, they have LOADS of talent there, and they've proven they can do the work with the previous character animation plugin they made for Lightwave), but because doing everything from scratch is more work than you think. Plus, reinventing the wheel is kinda useless.

    If I were you, I'd use my skills to write a new breed of plugin. There are always things that people would like implemented - new paradigms, new concepts. This year, stuff like sub-surface scattering and 3D hair were the big hits. Be ahead of the curve: read some theoretical SIGGRAPH papers, do some plugins, get known, and probably a 3D software company will notice you and buy your stuff to integrate into their package, and there you will be able to make a difference. I can name you people who became millionaires that way.

    Again, I'm wondering if this story isn't just a way to generate traffic and make people talk. Maya has an ARMY of developers, Discreet (3dsmax) has a lot of people, Lightwave fewer, but still - it's NOT a one-man job. By the time you finish the basics, you'll have 10,000s of features to catch up on, and probably lots of debugging too... anyways...
  • My best advice for those developing 3D animation software: do 3D animation. Then write the tools you need to do it better.

    It may not be that difficult to reverse-engineer an existing program (in the sense of reverse-engineering its general architecture, not necessarily duplicating it bit for bit). I find that the public documentation on Maya and Lightwave is detailed enough to get a very good idea of how their internals are structured.
  • by splattertrousers ( 35245 ) on Saturday October 05, 2002 @12:35AM (#4392079) Homepage
    1. Think of the most important feature that you can describe in one sentence and that you estimate will take a small amount of time (say, 4 hours).
    2. Write a test for it.
    3. Write the code to make the test pass.
    4. Refactor.

    Repeat steps 1-4 until finished.
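
    For example, a first cycle might look like this - the "feature" here is a hypothetical lerp() helper for keyframe interpolation, and the test is deliberately tiny:

    #include <cassert>
    #include <cmath>

    // Step 3: the simplest code that makes the test pass.
    double lerp(double a, double b, double t) {
        return a + t * (b - a);
    }

    // Step 2: the test, written before lerp() existed.
    int main() {
        assert(std::fabs(lerp(0.0, 10.0, 0.5) - 5.0) < 1e-9);
        assert(std::fabs(lerp(2.0,  4.0, 0.0) - 2.0) < 1e-9);
        assert(std::fabs(lerp(2.0,  4.0, 1.0) - 4.0) < 1e-9);
        return 0;
    }

    Step 4 (refactoring) then happens with the test as a safety net; wire it into the build so it runs every time.
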
  • Common Lisp (Score:2, Interesting)

    by voodoo1man ( 594237 )
    Before you mod me down, check out what these guys [izware.com] have done [izware.com]. (The site hasn't been updated in a while, due to company problems and one of the main coders being on hiatus or something. FYI, his name is Larry Malone; he has been doing this ever since he modelled the sailing ships in Tron using custom-developed software at III [ohio-state.edu], and has been writing graphics Lisp software at Symbolics [uni-hamburg.de] and ever since.) Well, enough of the history lesson.

    Common Lisp has a lot of benefits for this type of work. Since it is completely dynamic (ie - everything runs in an image with which you can interact, add code/compile and debug, all at run-time), the plug-in/scripting is taken care of from the start, and can have the full syntax of CL and access to any of the main program's features you choose to give it. CL will probably give you the most results per line/minute of code because of this dynamism.

    Most CL implementations have pretty good foreign function interfaces for C and C++ libraries (Franz's Allegro CL even provides support for run-time Java objects.)

    CL's performance is on par with C++ in general, and lags only in one major area - FP operations require "boxing" overhead when the symbol pointing to the numbers is dynamically typed (most compilers optimize statically typed declarations pretty well - which makes most of the overhead go away.)

    Of course, before you go off on your great quest, you should probably read what some of the other posters have suggested. Writing graphics software like the type you describe is an incredible amount of work (I gave up my uber-Scheme system after 100 lines and settled for writing smaller utilities and plugins), and many have tried and miserably failed before.

  • Actually, at one time I had a semi-working 3d modeller, based on Visual Basic and DirectX 5. One could create simple objects or design faces by vertex, and then create objects from the faces. I seem to remember in the end I had gotten a working gl modeller, although I think the texturing had issues.
    Direct3d is not incredibly difficult to work with. Although I swear that it was easier back in the days when one would import library functions from custom TLB's and there were a crapload of samples online, since the current Microsoft SDK examples suck.

    I don't know if OpenGL is any more difficult; I imagine a skilled C++ programmer could probably come up with something quite a lot better than I did in about 3 months.

    My last recommendation, check for similar projects, and release your project as open source. It might net you more assistance, and it would probably make a lot of slashdotters happy as well.

    I'd give you code for the old VB modeller, but looking back at it, it's so bad that even I can barely figure out what I was doing. I figure that by the time you're 90% done you'll look back and say "wow, that first month of code is absolute shit, maybe I should rewrite it." :-)
  • by PotatoHead ( 12771 ) <doug.opengeek@org> on Saturday October 05, 2002 @01:55AM (#4392253) Homepage Journal
    I decided that I was going to create a viewer for CAD models using OpenGL. Take my home page link to see what the end result was.

    You should set some realistic goals early on. I have read some good comments about planning. --Do this, they are telling you the right thing to do!

    I wrote the core of what I thought would be capable of becoming the viewer. Turns out that I was right, but I ended up with a viewer that will need heavy rework before it can be built upon. It actually works well and does the job it needs to do, but not in an elegant and extendable way. Better planning and research, on my part, would have helped out a lot. Of course, now I can comment on such things because I see the value --even if it was the hard way :)

    Take that plan and break it into a couple of realistic initial projects that both accomplish something and contribute to your end goal.

    I also read a couple of comments to the effect of "If you are asking on /. you likely do not have the ability to complete the project." --Ignore these. If you have the time and interest, you will have a lot of fun and learn things that would be hard to learn otherwise. This is worth doing IMHO.

    After putting my early revision of the viewer up on SourceForge, I have received comments and e-mail from people wanting to help me code better (thanks Thierry!) and from people letting me know how they use the program. One person, after some conversation, went through the code line by line with suggestions and advice that helped me improve the program quite a bit. This whole experience was good for me.

    I have some general comments about this sort of program as well, based on a few years' AE experience with them. (I have worked with MAYA, ProEngineer, I-DEAS, StudioTools.) Watching users learn and use these tools has shown me a lot. Spend some time with these users and watch them work. Consider what is done well and what could use improvement. This will help your planning more than you know.

    Go with OpenGL as the foundation for your display. It will keep your project cross platform. OpenGL is mature and very well documented so leverage that.

    Think long and hard about your interface. What actions will one need to accomplish at a particular time? Think about different workflows and allow for them. Some users like to free form draw, others draw then size then draw --that sort of thing is important. Consider MAYA, it presents both methods at all times.

    Part of the overall success of MAYA also lies in the fact that it is as much of a platform as it is an authoring tool. I believe this is key for this type of application. Since you really do not know how people will create with your tool, allow for that in the core design.

    Think about the command structure as well. Let's say you have a function that will sweep a curve along a path (which is a very common function). Instead of creating many sweep commands, build one that handles sweeps in general (a rough sketch follows below). Every package gets feature creep; make sure yours puts it off as long as possible.

    Most of the programs I mentioned above have good workflow built in that is ruined by lots of odd commands filling gaps in the feature set - gaps that a well-developed core command set would have addressed if the designers had better understood how people would be doing things.
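
    To illustrate, a single general sweep command might look something like this (all of the type names are invented; Curve and Mesh stand for whatever your geometry kernel provides):

    class Curve;   // profile or path curve from the geometry kernel
    class Mesh;    // tessellated result

    struct SweepOptions {
        int  segments   = 16;     // tessellation steps along the path
        bool capEnds    = true;
        bool closedPath = false;  // a closed circular path gives you a lathe
    };

    // One command covers extrude, lathe, and rail sweeps -- they only differ
    // in the path curve and the options passed in.
    class SweepCommand {
    public:
        SweepCommand(const Curve& profile, const Curve& path, SweepOptions opts)
            : profile_(profile), path_(path), opts_(opts) {}

        Mesh* execute() const;   // implemented against the geometry kernel

    private:
        const Curve& profile_;
        const Curve& path_;
        SweepOptions opts_;
    };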

    Get copies of these and learn them enough to understand how things are accomplished and build from there. Many of the common mistakes have already been made for you, take advantage of that.

    Build upon some of the more documented file formats out there. Your project will be used more if it is data-friendly. While you are at it, make your file format a smart one. Document it well and be sure it can grow with your project in a sensible way. Given all the inexpensive storage today, do not be afraid to store lots of smart data that is easy to work from.

    The most difficult part of this project is likely to be the geometry kernel. There are kernels out there that you can build upon. Most of these have many man years invested in them. You would be wise to do your homework here and take advantage of one of these. This will also help greatly with the data elements I mentioned earlier. ACIS, Parasolid, Hoops and others like them are what you should be looking for.

    Invest a lot of time in good graphical feedback to the user when you get to that point. If things are modal, indicate that mode onscreen in a way that does not distract from the task at hand. Things like direction, spatial location, surface normals, control points and such should all be distinct and clear for easy manipulation.

    Go have a bunch of fun, learn stuff, live to tell the tale. --Remember to go outside once in a while though!

  • We had to write a modeller and a raytracer for the 3D graphics class at UCSB. The modeller was just a single window with no pull-down menus or text or anything fancy like that. You would click the 'n' key to start a new polygon, then you would click around on the screen a few times to create the polygon. We only supported polygons. You could select polygons with the mouse and use 'r' to rotate them around. The drawing to the screen was via OpenGL.
    The modeller was simple, but it worked. If I recall, you hit 't' to do a raytrace of the current scene. It would pop up another window of a hard-coded size and do the raytrace for each pixel (i.e., traverse across the data set figuring out what to draw). By the end of the quarter (it was a 10-week course), it did lighting with the blinn-hill ambient, specular, highlight model, and it had different reflectivities and transparency. Plus, in the modeller, you could save and load models and reparent objects to create a hierarchy.

    Anyway, it was a good way to learn a lot of 3D graphics stuff.

  • by ikekrull ( 59661 ) on Saturday October 05, 2002 @03:40AM (#4392433) Homepage
    Linux already has good tools for modelling, rendering, and reasonable tools for output.

    Also, trying to reinvent the wheel is a waste of time; there are a jillion different frameworks, engines, modellers and renderers out there for Linux, none of them complete enough to produce professional, day-to-day 3D animation work.

    Blender is the most complete of the free packages, and it really is an extremely good piece of software, despite the annoying lack of 'Undo'.

    Blender has some good animation facilities, but I really think it would be worthwhile to write a separate module that specialises in character animation. This would be a godsend to people who are trying to do complex animation with Linux, without paying for Maya etc.

    I suggest you take Blender and build a module into/around Blender's workflow to bring professional-level character animation tools to Linux. Use Blender as a modeller, as a 3D format, and as a scene-integration tool, and build us a set of professional non-linear character animation tools that integrates well with the best (soon-to-be) open-source 3D package in the world.

    Look at Project:Messiah, a character-animation addon for Lightwave3D for a good example of how a great character animation tool works, and also at Hash's Animation Master, as those tools are really, really good too.

    This would fill an existing hole in the toolset available on Linux, reuse work already done by the community, stand a better chance of getting to a usable stage quickly, and probably give you a chance to think about doing a 'ground-up rebuild' from the perspective of the most 'demanding' end users of your software - the character animators.

    With Blender, you also have a huge community of artists who will thoroughly test your package, and provide suggestions and help to make it the best it can be.
