
Rendering Processors: AR350 vs AMD vs P4?

landrau asks: "I'm planning on building a render farm and was wondering whether anyone would know the pros and cons of the AR350 processor, used in The Renderdrive, as opposed to building a renderfarm with an AMD or P4 processor." While the Renderdrive looks like a real rendering workhorse that can produce some gorgeous results (see images in page header), does it justify its lofty pricetag of £6950 (over $12,300USD)?

  • Alphastations (Score:4, Interesting)

    by madaxe42 ( 690151 ) on Friday April 23, 2004 @08:03PM (#8955801) Homepage
    I've been using 7 AlphaStation 255s as a render farm for the last few years - works rather nicely, just using standard Linux job distribution/allocation, and rendering all sorts of things. Also, they're dirt cheap!
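
    (A minimal sketch of this kind of per-frame job distribution, assuming passwordless SSH to seven nodes and a hypothetical command-line renderer invoked as "render <scene> <frame>"; the host names, paths, and command are illustrative, not from the post.)

        #!/usr/bin/env python
        # Minimal per-frame render-farm dispatcher (illustrative sketch).
        # Assumes passwordless SSH to each node and a hypothetical command-line
        # renderer invoked as: render <scene> <frame>.
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        HOSTS = ["node%02d" % n for n in range(1, 8)]   # seven render nodes
        SCENE = "/shared/scenes/shot01.scn"             # scene file on shared storage
        FRAMES = range(1, 101)

        def render_frame(job):
            host, frame = job
            # One frame per job; frames run in parallel across all nodes.
            return subprocess.call(["ssh", host, "render", SCENE, str(frame)])

        # Deal frames out round-robin so every node stays busy.
        jobs = [(HOSTS[i % len(HOSTS)], frame) for i, frame in enumerate(FRAMES)]
        with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
            codes = list(pool.map(render_frame, jobs))

        failed = [frame for (_, frame), code in zip(jobs, codes) if code != 0]
        print("failed frames:", failed)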
  • Included renderer? (Score:5, Insightful)

    by BrynM ( 217883 ) * on Friday April 23, 2004 @08:07PM (#8955827) Homepage Journal
    From my perusal of their site, it would seem to me that one of the main advantages is having a standard and fully featured renderer come with the boxes. The renderer acts like a plugin for both Max and Maya (look in the download section) and can do caustics, reflections/refraction, radiosity and such. Having that stuff out of the box is so much better than pounding out a Renderman install. And the render queue is native via a plugin to boot! This may be one of the easier solutions for setup I've seen yet if all they claim is true.

    Be sure to count the price of your rendering software into your comparison. The price of Renderman and its associated support could well make up the difference in your hardware costs. Don't forget to include the price of your install time (man-hours) as well.

  • Vanilla hardware (Score:5, Informative)

    by tolldog ( 1571 ) on Friday April 23, 2004 @08:12PM (#8955860) Homepage Journal
    I have used P3- and P4-based systems (and SGIs before that) and have been happy with the speed-to-dollar ratio.

    I have never tested or looked at the RenderDrive; the price seemed a tad high.

    I would rather be able to do several frames at a time than one frame really fast.

    I imagine the AMD64 based solutions will be nice farm boxes as well. Rendering is so IO intensive, having a wider, faster memory bus has to help.

    -Tim
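
    (A toy calculation of the throughput-versus-latency trade-off described above; all of the numbers are made up for illustration.)

        # Toy turnaround comparison: one fast box vs. several cheaper boxes
        # rendering frames in parallel.  All numbers are made up.
        import math

        frames         = 100
        fast_box_time  = 4.0    # minutes per frame on one expensive machine
        cheap_box_time = 10.0   # minutes per frame on a commodity box
        cheap_boxes    = 8

        one_fast_box = frames * fast_box_time                              # 400 min, serial
        farm = math.ceil(frames / float(cheap_boxes)) * cheap_box_time     # 130 min, parallel
        print("single fast box: %g min, farm of %d: %g min"
              % (one_fast_box, cheap_boxes, farm))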
    • "I imagine the AMD64 based solutions will be nice farm boxes as well. Rendering is so IO intensive, having a wider, faster memory bus has to help."

      Depends on what he's using to render with, really. Frankly, I don't think AMD's 64-bit processor will be all that interesting to use for rendering until the renderer is optimized for it. You don't really get a speed boost from using 32-bit code on a 64-bit processor.
      • Re:Vanilla hardware (Score:5, Informative)

        by tolldog ( 1571 ) on Friday April 23, 2004 @08:36PM (#8955989) Homepage Journal
        I am talking purely about the memory bus and the speed gains from that, not the 32bit vs 64bit nature.

        The AMD64 systems have faster memory access than the Pentium-based systems. It's comparable to the bus on Apple's G5 (AMD and Apple worked together on it).

        -Tim
        • "I am talking purely about the memory bus and the speed gains from that, not the 32bit vs 64bit nature."

          I apologize; I was running two ideas together and communicated them poorly. I don't think the memory bus makes that significant a difference with rendering. If it did, there would (potentially...) be a larger difference between the P4s and the Athlons when rendering. True, lots of data has to be pulled from RAM to the processor to perform calculations on, but the big bottleneck seems to be in
          • From talking with friends, no renderer is really cache efficient. They need memory and lots of it, but the on chip cache doesn't do much to help.

            The rest of it depends on what other bottlenecks you have... obviously. Also scene size and complexity. I know that having as much stuff as close to the processor as possible always pays off. I am also going by what I have been told, so I could be off.

            The other big bonus for the AMD64 systems is more memory. I hate hitting swap.

            It's been a while since I have wo
  • by NanoGator ( 522640 ) on Friday April 23, 2004 @08:20PM (#8955903) Homepage Journal
    I personally wouldn't go the RenderDrive route. I don't have first-hand experience with it, but I have heard from other artists that it imposes some rules about what you can do with it. Its core functionality is supposed to be pretty darned good, but if you try to step outside the bounds of the renderer, you're in trouble.

    You should examine, though, what your needs are and think about whether the limitations of the RD would really be a BFD for you.

    So... unless the RD is really what you're after, that leaves AMD and Intel. Frankly, this is a tough call. The deciding factor may very well be the renderer you use. AMD's done a real nice job of keeping the render-speed-per-dollar ratio nice and affordable. Intel, however, has a few tricks under the hood that some 3D apps make really good use of. Hyperthreading really muddies the waters as well. For longer, more detailed scenes, I've seen a good deal of benefit from using Hyperthreading on a P4 via Lightwave. For smaller scenes, though, the overhead of setting up multiple threads can often defeat the purpose of using HT.

    Yeah, I know, not a very helpful answer. I think if you went AMD, you'd see a price savings and not lose a whole heck of a lot of performance. At least that's the direction I'd go. However, I wouldn't buy either until I'd taken a typical scene from my 3D app and benchmarked it on each of the two processors. I mean do this first-hand; don't just read the benchmark sites, they can be very misleading.

    Fun stuff. Truth be told, though, I think you'll go with either one and find times where you ache for the other. Grass is always greener?
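
    (A minimal sketch of the kind of first-hand benchmark suggested above, assuming a hypothetical command-line renderer with a --threads flag; substitute whatever your 3D app's batch renderer actually takes.)

        # Time the same typical scene at several thread counts to see first-hand
        # whether Hyperthreading / extra threads help or hurt on a given box.
        # The "render" command and its --threads flag are hypothetical placeholders.
        import subprocess, time

        SCENE = "/shared/scenes/typical_shot.scn"
        FRAME = "1"

        for threads in (1, 2, 4):
            start = time.time()
            subprocess.check_call(["render", "--threads", str(threads), SCENE, FRAME])
            print("threads=%d  wall time=%.1f s" % (threads, time.time() - start))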
  • AR350 (Score:3, Interesting)

    by psyconaut ( 228947 ) on Friday April 23, 2004 @08:22PM (#8955918)
    I somehow doubt that this company actually designed and fabbed a 'real' chip....I somewhat suspect that the rendering chip might actually be an FPGA. Anyone know for sure?

    -psy
    • Re:AR350 (Score:3, Insightful)

      by GoRK ( 10018 )
      A logic argument --

      If they are getting that kind of speed out of an FPGA, they would be complete idiots not to develop it on into silicon. The speed advantage alone would probably put them on top of the market instantly. Heck, they could probably even offer realtime rendering at broadcast TV resolutions. Also keep in mind that FPGA's with enough gates to actually do this kind of thing cost a heck of a lot more than 12K.

      Not that this kind of thing isn't coming.. I would assume that realti
      • Re:AR350 (Score:4, Informative)

        by psyconaut ( 228947 ) on Friday April 23, 2004 @08:40PM (#8956011)
        "FPGA's with enough gates to actually do this kind of thing cost a heck of a lot more than 12K"

        *cough*

        Actually, the advantage of using FPGAs over ASICs would pertain to reconfigurable computing.

        Send me a link to an FPGA device that costs more than 12K :-p I design FPGA stuff and am curious to see which vendor is offering devices in that price range ;-)

        -psy
        • Well, send me US$ 12,000 in cash, and I'll send you a Xilinx Spartan II.

          If you ask politely I'll even throw in a Digilab DI01 for free.

          But you have to pay shipping and handling.

          Deal? ;-)
    • Yes they did.. (Score:5, Informative)

      by elrond1999 ( 88166 ) on Friday April 23, 2004 @09:25PM (#8956277)
      "The AR350 uses a 0.22-micron line size and is delivered in an SFBGA package" 0.22 is pretty decent. And its a programmable processor, so its more like an ASIP than an ASIC.
      From the product description ("The Ray Tracing Graphics Processor"):

      "The AR350 is the company's second-generation ray tracing processor. The AR350 features a memory manager to access local DRAM, an on-chip data cache to reduce the bandwidth required of the host bus and two 3D rendering cores. Each rendering core performs both the geometry and the shading operations of the ray tracing algorithm. The geometry co-processor of each core is capable of performing a ray / triangle intersection calculation every processor cycle. The shading co-processor is completely end-user programmable through the RenderMan Shading Language. A simple interface between AR350s produces a scalable architecture with good processor linearity. The AR350 uses a 0.22-micron line size and is delivered in an SFBGA package for integration into the heart of ART VPS rendering devices. The AR350 delivers four times the rendering speed of the AR250.

      The first ray tracing chip - the AR250 - represented a new class of graphics processor - the photorealistic rendering chip. Unlike other graphics chips that use simple 'painter' algorithms to generate images, ART's processors use the physically-based ray tracing algorithm to generate images of stunning quality. The AR250 was the first processor to use ART's dedicated ray tracing architecture, giving unrivaled rendering performance."
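
      (For context on the operation each geometry co-processor performs once per cycle, here is a plain-software version of the standard Möller-Trumbore ray/triangle intersection test; it illustrates the algorithm only, not ART's hardware implementation.)

          # Standard Moller-Trumbore ray/triangle intersection -- the operation the
          # AR350's geometry co-processor is said to perform once per clock cycle.
          # Pure-software illustration, not ART's implementation.

          def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
          def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
          def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

          def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
              """Return the hit distance t, or None if the ray misses the triangle."""
              e1, e2 = sub(v1, v0), sub(v2, v0)
              p = cross(direction, e2)
              det = dot(e1, p)
              if abs(det) < eps:                 # ray parallel to triangle plane
                  return None
              inv_det = 1.0 / det
              s = sub(origin, v0)
              u = dot(s, p) * inv_det
              if u < 0.0 or u > 1.0:
                  return None
              q = cross(s, e1)
              v = dot(direction, q) * inv_det
              if v < 0.0 or u + v > 1.0:
                  return None
              t = dot(e2, q) * inv_det
              return t if t > eps else None

          # Example: a ray straight down the Z axis hitting a triangle at z=5.
          print(ray_hits_triangle((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                                  (-1.0, -1.0, 5.0), (1.0, -1.0, 5.0), (0.0, 1.0, 5.0)))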
  • by bergeron76 ( 176351 ) * on Friday April 23, 2004 @09:32PM (#8956311) Homepage
    While the RenderDrive looks like a real rendering workhorse that can produce some gorgeous results (see images in page header), does it justify its lofty price tag of £6950 (over $12,300 USD)?

    A mirror of these spectacular images can be seen here:

    http://www.dashpc.com/renderdrive_mirror.png

  • by (H)elix1 ( 231155 ) <slashdot.helix@nOSPaM.gmail.com> on Friday April 23, 2004 @10:45PM (#8956680) Homepage Journal
    Just got back from a couple of days in Vegas. NAB had tons of rendering demos and benchmarks. One of the more interesting things (I thought) was Nvidia letting you leverage the GPUs in their Quadro line of cards for rendering. (The demo I saw used Linux with Python and C++ bindings doing Maya stuff, though they said Win32 was supported as well.)

    Otherwise, lots of people were showing software that farmed out rendering to clusters of commodity blade servers. The dual-CPU 1U x86-64 box was a screamer, though not as compact as some of the other arrangements I saw.

    Better shop around a bit more...
  • Go General Purpose (Score:5, Interesting)

    by XenonOfArcticus ( 53312 ) on Saturday April 24, 2004 @12:00AM (#8956991) Homepage
    In my ten years in the computer graphics business, I can't tell you how many specialized hardware rendering platforms I've seen come and go. RenderDrive has hung on longer than others, but I still feel that nothing beats the bang/buck of a stable of nice commodity hardware.

    A lot depends on what application you plan on running. Each app has its own approach to distributed processing, and its support (or lack thereof) for any given hardware is critical.

    I would lean towards AMD 64-bit CPUs at this time. Some renderers are optimized for the P4, but the AMD chips seem to run P4 code quite well, and they run all other x86 code wonderfully.

    You can rack up a bunch of commodity boxes for a great price, and render to your heart's content on them. In some cases, depending on support from your rendering software vendor, you might even be able to run Linux on them.

    I will put in a plug here for an open-source program I created, SuperConductor (http://super-conductor.org/ [super-conductor.org]), which is a multi-application, portable render farm controller. It's written in Qt 3 and runs on Linux and Windows right now (no Mac Qt dev kit to try it on). It currently supports my rendering software (World Construction Set/Visual Nature Studio [3dnature.com]) but is designed to be extensible to other renderers. We could use someone to add support for Maya, POV-Ray, or other apps. The freshest source (a complete rewrite) is in SourceForge CVS right now!
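
    (A rough sketch of what "extensible to other renderers" can look like in a farm controller; the command names and flags below are hypothetical placeholders, not SuperConductor's actual plugin API.)

        # Illustrative sketch of a per-renderer adapter layer for a farm controller.
        # The command names and flags are hypothetical placeholders.

        def renderer_a_cmd(scene, frame, out_dir):
            # Hypothetical renderer A: renderA <scene> <frame> <output dir>
            return ["renderA", scene, str(frame), out_dir]

        def renderer_b_cmd(scene, frame, out_dir):
            # Hypothetical renderer B: different flag style, same job description
            return ["renderB", "-f", str(frame), "-o", out_dir, scene]

        ADAPTERS = {"rendererA": renderer_a_cmd, "rendererB": renderer_b_cmd}

        def build_command(renderer, scene, frame, out_dir="/shared/out"):
            """Translate a (renderer, scene, frame) job into a command line."""
            return ADAPTERS[renderer](scene, frame, out_dir)

        print(build_command("rendererB", "/shared/scenes/test.scn", 42))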
  • Gelato (Score:5, Informative)

    by dFaust ( 546790 ) on Saturday April 24, 2004 @01:54AM (#8957390)
    Another possible consideration would be nVidia's newly announced Gelato [nvidia.com]: $2750 per license, plus the cost of a good Quadro card to make it worthwhile. It remains to be seen what kind of performance and quality this will offer, but it's certainly something to keep an eye on.

    On another note, I haven't been keeping up with my 3D like I used to, but some software, such as RenderMan, can do distributed rendering on a single frame and then automagically merge the results. I don't think Brazil offers this yet (I could be wrong), but they're working on it under the name of Banshee (see the bottom of the page [splutterfish.com]). If your renderer of choice offers such a feature, you could build some serious distributed rendering capacity for $12k.
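
    (A minimal sketch of the single-frame split-and-merge idea, assuming nodes reachable over SSH with shared storage and a hypothetical renderer that accepts a crop region; the flags, paths, and host names are illustrative.)

        # Split one frame into horizontal strips, render each strip on a different
        # node, then stitch the strips back together.  The "render --crop" flag and
        # host names are hypothetical; stitching uses the Python Imaging Library.
        import subprocess
        from concurrent.futures import ThreadPoolExecutor
        from PIL import Image

        HOSTS = ["node01", "node02", "node03", "node04"]
        SCENE = "/shared/scenes/hero_frame.scn"
        WIDTH, HEIGHT = 1920, 1080
        STRIP = HEIGHT // len(HOSTS)

        def render_strip(job):
            index, host = job
            out = "/shared/out/strip_%d.png" % index
            # Hypothetical crop syntax: left,top,right,bottom in pixels.
            crop = "0,%d,%d,%d" % (index * STRIP, WIDTH, (index + 1) * STRIP)
            subprocess.check_call(["ssh", host, "render", "--crop", crop, SCENE, out])
            return out

        with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
            strips = list(pool.map(render_strip, enumerate(HOSTS)))

        # Paste the strips back together into the final frame.
        frame = Image.new("RGB", (WIDTH, HEIGHT))
        for index, path in enumerate(strips):
            frame.paste(Image.open(path), (0, index * STRIP))
        frame.save("/shared/out/final_frame.png")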

    • If you are rendering more than single frames for lighting tests, you don't really need a renderer that can do subsections... I am all for having large groups of machines so that the whole shot can be rendered at once, so that in the time it takes to do one frame, the whole shot is turned around. It's true this doesn't scale as well under light load, but it allows for much easier render management and more freedom in the choice of renderer.

      -Tim
  • G5s (Score:5, Interesting)

    by thomas536 ( 464403 ) on Saturday April 24, 2004 @04:22PM (#8960726)
    Where do the G5s line up in the comparison?
