Renderfarm Setup Tips? 253
"In the hardware side, we still haven't made a choice between using AMD's Opteron or Apple's Xserve G5 (they have some very nice and price convenient cluster nodes which seem to be ideal for this kind of job), with Linux. As for the networking between them, is Gigaethernet enough or should we be going for Fiber? The software used to manage the render queues is another important point as well: I've been looking into Rush, and even though it's a commercial package, it works on all of the platforms we currently use (W2k/XP, Irix, OS X and Linux). But then there is also Dr. Queue, which is open source and is supported on at least the *NIX members of the aforementioned OS's. Other options include RenderPal and Pixar's RenderMan, but I would prefer an F/OSS alternative. Finally, it's worth noting that we'll be using the renderfarm for Maya and Adobe AfterEffects."
Cinelerra (Score:5, Informative)
Re:Cinelerra (Score:3, Informative)
Re:Dearest AC, (Score:2)
Maybe you could follow your own advice.
Shoot man, sometimes I think I'm the only one with a sense of humor, but seeing how somebody modded grandparent Funny, I guess I'm not truly alone.
Unless you were trying to be funny also... in which case render me some of those chill-pill thingys.
BTW: I didn't post grandparent. I'm too lazy to check that Anonymously box.
Re:Cinelerra (Score:2, Insightful)
The render nodes only run CLI tools and do not require local storage. A bootable Knoppix CD could be made to create a temporary render node. Imagine using 10 computers in your office to render video in the off hours, then rebooting into Windoze in the morning without the users ever knowing.
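For the curious, the off-hours render-node idea above can be sketched as a tiny work loop. Everything here is hypothetical (the paths, the renderer call itself); the one real trick is claiming frames with mkdir, which is atomic even over NFS, so ten rebooted office PCs can share one queue without stepping on each other.

```shell
#!/bin/sh
# Sketch of a diskless render-node loop (the renderer call is a
# hypothetical placeholder). All nodes mount one shared spool; a frame
# is claimed by creating a lock directory, which is atomic even over
# NFS, so no two nodes ever render the same frame.
SPOOL=$(mktemp -d)                 # stand-in for an NFS mount like /mnt/farm/spool
OUT=$SPOOL/frames; mkdir -p "$OUT"

for f in 0001 0002 0003; do : > "$SPOOL/$f.job"; done   # seed a demo queue

for job in "$SPOOL"/*.job; do
    lock="${job%.job}.lock"
    if mkdir "$lock" 2>/dev/null; then        # we won the claim; other nodes skip it
        frame=$(basename "$job" .job)
        # render_cli "$job" "$OUT/$frame.tif"   # real CLI renderer would run here
        : > "$OUT/$frame.done"
    fi
done
echo "rendered $(ls "$OUT" | grep -c done) frames"
```

Booted from the same Knoppix CD, every machine runs an identical copy of this loop against the same spool directory.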
Simple (Score:2, Funny)
(a) Wait for the next new processor technology to hit Slashdot.
(b) Build a Beowulf cluster of those.
node deployment: g4u! (Score:5, Informative)
I've used g4u to set up a ~50 node video rendering cluster; see my webpage on the Regensburg Marathon Cluster [feyrer.de].
Enjoy!
- Hubert
Re:node deployment: g4u! (Score:2)
One of these days I think I should make a "g4u testimonials" page... care to drop me a mail describing what you do with g4u? (See the g4u homepage for my email address.)
- Hubert
I'm a Machead, but... (Score:5, Insightful)
Macs are very nice hardware, but you really don't need that for rendering. For workstations they make sense, but for rendering you really want to have a lot of fast computers rather than nice computers.
Re:I'm a Machead, but... (Score:3, Interesting)
I'm not a Machead, I'm an x86head. Always will be.
Re:I'm a Machead, but... (Score:3, Informative)
Re:I'm a Machead, but... (Score:2)
Re:I'm a Machead, but... (Score:2)
Rendering is ridiculously parallel, so if one computer messes up, goes down, explodes, whatever, the work it was doing can be picked up by another computer. Chances are you won't lose more than one frame on one pass of your render. Some other node can redo whatever was lost; it's not a big deal at all.
So having lots of unreliable but fast computers is more effective than having fewer reliable high quality ones. Two cheap PCs can do more wo
Re:I'm a Machead, but... (Score:3, Informative)
So for say 10 computers:
Cluster node version of Xserve 10 @ $2,999.00 = $29,990*
Shake 1 @ $2,999 = $2,999
Total = $32,989
Now the Linux version will cost:
Shake 1 @ $4,999 = $4,999
Render nodes 9 @ = $13,491
Total costs software = $18,490
This leaves you with $14,499 to buy 10 x86 boxes or
Re:I'm a Machead, but... (Score:3, Insightful)
The question with building a headless Intel render farm is going to be licensing. Great, you can build cheap machines, but are you going to have to buy extra licenses just because you chose to go cross-platform? It's not just the main rende
Why run Linux? (Score:3, Insightful)
Re:Why run Linux? (Score:4, Informative)
Because of its unnecessary flashiness? OS X is notoriously bloated. For the command line junkies among us, Linux fits the bill.
Re:Why run Linux? (Score:2, Informative)
Darwin (Score:4, Insightful)
As other respondents have suggested, I guess it would come down to which OS supports the entire collection of desired applications for the job.
Re:Why run Linux? (Score:2)
Re:Why run Linux? (Score:2)
For true 64 bitness, launch every Linux! (Score:3, Informative)
can't resist (Score:2)
All your Linux are belong to us!
Software? (Score:5, Insightful)
I'm sure a demo can be arranged.
I wouldn't go blindly marching in the direction of FOSS, especially for something valuable enough to set up a renderfarm for.
Most importantly, find out what the people who will be using the software like and dislike about each package, and what works for them. If it saves you $30 per hour times 5 people, software and hardware costs become insignificant after one work week.
The biggest renderfarm in the world is useless if your people can't use it. Always remember that software is only as good as its ability to meet the goals of the organization it supports.
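A quick sanity check on the labor math above (the $30/hour and 5-person figures are from the post; the 40-hour week is an assumption):

```shell
# Labor-savings arithmetic from the comment above.
rate=30        # dollars saved per hour, per person (from the post)
people=5       # from the post
hours=40       # assumed standard work week
weekly_savings=$((rate * people * hours))
echo "saves \$$weekly_savings per week"
```

That works out to $6,000 a week, which indeed dwarfs a per-seat license fee after only a few weeks.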
Our experiences (Score:5, Informative)
We evaluated several render-queue management systems and decided on Rush. The most persuasive arguments for Rush were the very good experiences we heard from other users and the simplicity of extending it to manage a variety of different tasks. I have to add Hammerhead to the list of happy customers. It did everything we could have hoped for; in particular, it handled the inevitable crashing of machines pretty well.
While it's true that Rush is a proprietary, gotta-pay-for-it system, a robust render queue management system pays for itself very quickly by keeping your renderfarm productive. Perhaps a render queue manager is overkill when you have just 6 or 8 systems, but once you get up to 30 or 40 it is essential.
Our experience is all under Linux, but if you're going to be running After Effects that means that you're not going to be running Linux -- so there's not too much more I can help you with there. We did find that the dual Opterons worked much more efficiently than dual Xeons in multiprocessor rendering -- don't know about the Xserves, though. We were running mostly Maya, RenderMan, Shake, and our own in-house tools on the farm.
This farm is unfortunately powered down now that Riddick is done -- if you need some dual opterons, let me know at thad@hammerhead.com
I'm not an animator... (Score:2)
Re:Our experiences (Score:4, Interesting)
1.) The words 'dual' and 'Opteron' both surprised us. We were kind of under the impression that maybe single proc machines would be better for a render farm. We were really curious why dual was chosen over single. Did the extra cost end up being worth it?
2.) You mentioned that Opteron was more efficient than Xeons. I just had to ask: Was the particular software you were using particularly tuned to Opteron (i.e. 64-bit?) or was the 32-bit side of it just pleasant to work with? Any more insight you can share with me about the use of Opteron would be most helpful.
3.) Did you guys end up buying a bunch of machines from a place like IBM or something, or was it more like "we bought the components and assembled ourselves..?" If it's the former, how'd you like the service?
4.) Any regrets or things you'd do differently next time around?
5.) Why are you getting rid of the machines used for Riddick? Or did I read that wrong?
Appreciated,
NanoG
Re:Our experiences (Score:5, Informative)
1.) The words 'dual' and 'Opteron' both surprised us. We were kind of under the impression that maybe single proc machines would be better for a render farm. We were really curious why dual was chosen over single. Did the extra cost end up being worth it?
The computers are relatively cheap -- it's render licenses (especially for RenderMan) that are expensive. With the newest version of RenderMan, Pixar has deigned to let us use the two processors of a dual-processor machine with a single license. This would lower the cost of rendering by about 60% if the machine rendered twice as fast with dual processors. In fact, for RenderMan, the Opterons were indeed almost twice as fast, while the Xeons were only about 50% faster.
Our other big rendering application was Shake, and it also allowed the use of two processors with one license.
2.) You mentioned that Opteron was more efficient than Xeons. I just had to ask: Was the particular software you were using particularly tuned to Opteron (i.e. 64-bit?) or was the 32-bit side of it just pleasant to work with? Any more insight you can share with me about the use of Opteron would be most helpful.
Yes and no. The Opterons are the first AMD machines to implement the SSE2 instructions, which are heavily used by RenderMan. Also, the HyperTransport communication between processors on the Opteron is light-years beyond the interprocessor communication on Athlons and Xeons. On the other hand, there is absolutely no advantage in the 64-bitness of the Opterons -- we were running a 32-bit Linux (RedHat 9), and we weren't using more than 4 GB of memory on any of the boxes.
3.) Did you guys end up buying a bunch of machines from a place like IBM or something, or was it more like "we bought the components and assembled ourselves..?" If it's the former, how'd you like the service?
We hired a beige-box manufacturer. We specced it out to various places, and PC Mall built them for the best price. If I had to do it over again, I'm not sure that I wouldn't go with IBM -- while they cost a lot more, I expect that they'd build more solid systems.
4.) Any regrets or things you'd do differently next time around?
We bought minitower machines instead of the more trendy, space- and power-efficient 1U or blade machines. We did that so that we could potentially use the new Gelato renderer from NVidia -- that software uses the current NVidia high-performance graphics cards as an external array processor, giving significantly better render performance.
As we didn't end up using Gelato, that was perhaps a mistake. We ended up power- and HVAC-constrained in the end, as happens with almost every renderfarm I've heard of.
5.) Why are you getting rid of the machines used for Riddick? Or did I read that wrong?
No, you read it right. Hammerhead is a small company, typically working on just one show at a time. We don't see a use for the machines for another nine months or so, as we begin development of the next project -- and it just isn't right to leave all that compute horsepower idle.
Thad Beier
Hammerhead Productions
Re:Our experiences (Score:2)
I just wanted to thank you for both your insight, and in taking the time to respond. You gave us some stuff to chew on here.
Have a good day!
NG
Re:Our experiences (Score:2)
Also, in the interest of understanding how much it costs to set up a significant render farm, how much does this sort of thing cost? Is it all in the PCs, or would the backplane infrastructure cost surprise us a lot?
Really I guess what I'm asking is if you could do a cost-per-node breakdown to
Re:Our experiences (Score:5, Interesting)
These particular machines were just used for The Chronicles of Riddick. Computer technology advances so fast, prices drop so quickly, and movie post-production schedules are so long (six to nine months, typically) that we rarely use any particular machines for more than a couple of films.
Also, in the interest of understanding how much it costs to set up a significant render farm, how much does this sort of thing cost? Is it all in the PCs, or would the backplane infrastructure cost surprise us a lot?
In fact the dominant cost, at least for us, is not the render boxes themselves. The network is a significant expense, as is the data server system. An even larger expense, though, is the licenses for the rendering software. Top-of-the-line rendering systems like RenderMan (for 3D) and Shake (for 2D) cost thousands of dollars per node. And then there are significant infrastructure costs in just electrical wiring and cooling.
At least in the 10-to-50 server range, I would say that the costs are pretty linear. As you get bigger than that, you can start to see some economies of scale.
At some point, it becomes profitable to start developing in-house software tools instead of buying licenses. Digital Domain's Nuke system was originally developed as a renderer for Flame, for example, so that the expensive Flame machines could be used for the interactive work and the batch rendering could happen on commodity hardware. For Riddick, we developed our own smoke-rendering system rather than use RenderMan, to free up render licenses for other parts of the movie.
I'm afraid that an explicit cost-per-node breakdown would get into details that we keep confidential, but this should give you an overview of our situation.
Thad Beier
Hammerhead Productions
p.s we don't do Videos, we make Films.
Re:Our experiences (Score:2)
My apologies. I interchange the two words entirely too frequently... old habits die pretty hard.
Re:Our experiences (Score:5, Interesting)
What did you think of the freeware options, e.g. Aqsis [aqsis.com]?
Re:Our experiences (Score:3, Interesting)
We did try a couple of other rendering alternatives. We h
Re:Our experiences (Score:3, Informative)
The big difference between Pentia and Op
Re:Our experiences (Score:3, Informative)
Yes, that is correct.
" 2), head over to blanos.com, and check out his benchmarks."
Good idea!
"but I didn't see any P4 Xeon 2p results for the radiosity test scene"
Heh, I just did that this morning. I ran the skull radiosity test with 8 threads. It was on a dual P4 Xeon 2.4 GHz, 533 MHz bus, with Hyper-Threading enabled: 119 s. (I'll try to remember to log that at Blanos, time permitting...) That was running LW8, not sure if that make a
Re:Our experiences (Score:2)
Re:Our experiences (Score:2, Informative)
www.zoorender.com has a limited/single benchmark for Maya and Mental Ray on a variety of hardware. Pay attention to their comments about how the May
Re:Our experiences (Score:2)
I'd add that Greg Ercolano is a very smart guy with a lot of experience in render queue management - he wrote (in the '90s) the Race render queue management system used at Digital Domain. Productive use of computing resources was perhaps more important
Re:Our experiences (Score:2)
I work at . We used to use Rush. It was a bit clunky, but it got the job done. Unfortunately, once the # of employees got up into the hundreds and the # of render slaves got >500, Rush falls over. It just doesn't scale, and its central database can't keep up with the dispatch rate and query rate from the clients. Eventually it can't keep the farm full.
The end result was we wrote our own distributor. It's a pretty sophisticated package that can distribute pretty much any batch processing
Go with the G5's - your work is the important item (Score:4, Insightful)
That's only if you desire maximum ease of use with minimum setup and running hassles. The same ease of use the regular G5s have is built into all their server stuff too. I'm sure the Linux dudes will have something to say about that.....
I would take a really hard look at the ready-made bioinformatics cluster they have all set up: just load yer software as needed and off you go. But that's me. Some people seem to like futzing with computers..... After 20+ years doing that at work, I just wanna do what I wanna do when I wanna do it. Apple makes that easy.
I get paid to deal with headaches, I'm not gonna deal with them at home too.
Oops. (Score:5, Funny)
Re:Oops. (Score:2)
Either that or a very tragic example of caffeine deprivation.
XGRID (Score:4, Informative)
GroupShares.com [groupshares.com]
Re:XGRID (Score:3, Informative)
If you're considering Apple G5s, either in workstations or in Xserves, take a look at Apple's mailing list [apple.com] for help and resources. Folks there have been working on clustering and Xgrid on the Mac for a while now.
Do it properly (Score:2, Informative)
Don't bother with Intel/Linux, with dodgy hardware and the frequently-changing Linux code. Pay the money, get decent hardware with a support contract and a steady, stable, tried and trusted OS.
Apple *may* be an appropriate choice, now that Pixar have ported RenderMan to OS X, but I don't like the i
Re:Do it properly (Score:2, Informative)
Render nodes get input via simple render scripts; output frames get written to the file server one by one every X seconds as they get rendered. Textures are shared, but it's never "massive" and never "thrown at them" (the compute nodes).
The I/O loading is concentrated on the file server.
> Don't bother with Intel/Linux, with dodgy hardware and the frequently-changing Linux code.
So HP, IBM and other Int
Re:Do it properly (Score:2)
ILM Linux Switch [linuxjournal.com]
ILM Computers [linuxjournal.com]
More on ILM Linux switch [linuxjournal.com]
pvm is the way (Score:3, Informative)
Re:pvm is the way (Score:2)
My $0.02.. (Score:3, Interesting)
Peace
Re:My $0.02.. (Score:2, Funny)
Re:My $0.02.. (Score:3, Interesting)
I'm porting all of our animation code from Linux to OS X as well -- more or less as an exercise in code portability -- and it's going pretty well. OS X 10.3 is dramatically more standards- (or at least Linux-) compliant than the earlier OS X versions were. Almost every program I have compiles with virtually no changes.
OT Question for midifarm (Score:2)
Is there any cheap solution that will let me spread this work out? I don't use AE at all. My shop is incredibly cheap so it would have to be a free or nearly free solution. I know that's asking a lot. It just isn't worth it to get more macs and put FCP express on them. That's the only thing I can think of.
Any tips would be greatly appreciated.
Re:OT Question for midifarm (Score:2)
Peace
Light us up (Score:5, Funny)
Light us up the bomb!
I had a related question (Score:5, Interesting)
Re:I had a related question (Score:2)
Re:I had a related question (Score:3, Interesting)
I don't think a bunch of symlinks is ugly at all. If it works well - who cares?
Are you having trouble with any of your symlinked directory structures? In my little FreeBSD world, I've never noticed any problem at all, except for a few utilities that are aware of symlinks and will let you choose whether to traverse them or not -- like rsync.
Re:I had a related question (Score:3, Informative)
You'll get your performance through the SAN by utilizing high performance FCAL disks and multiple HBAs to your servers. You can have the load balanced across the HBAs to give you the bandwidth tha
Re:I had a related question (Score:2, Interesting)
All the SAN solutions out there are a bit wonky. Not saying they won't work, they're just all wonky. If Apple's Xsan actually performs in the real world the way they claim (and the way it was working at NAB), then it could be the be-all-end-all. It'll be cross-platform and all, with file-level locking. Whether it really works or
Solution (Score:2)
Try a cluster file system [polyserve.com].
"Filers" create "hot spots" whereby often-accessed directories/files create IO bottlenecks.
I think you can use this CFS to create a directory tree with over 200TB of data (/home/lun1, /home/lun2, .../home/lun255). You can't "tie" them together like with LVM but you do get huge throughput as opposed to a single-host bottleneck with a volume manager and/or clustered NAS filers with the hot spot problem.
Re:I had a related question (Score:2, Informative)
If you could rebuild everything from the ground up (and had tons of money to throw at it), you'd most likely want to build a system based on a very expensive vendor solution [ibm.com].
Assuming that you can't do that, your best bet would be to go with
Re:I had a related question (Score:2)
We have a bunch of netapp/bluearc filers and a single symlink tree that distributes data across them, so it's NFS. Render nodes run Linux. I have no experience with SAN. Can you point me to more info?
Re:I had a related question (Score:2)
Re:I had a related question (Score:2, Funny)
Easy! Just use Gnome 2.6 - it has super-duper spatial browser behavior [osnews.com]. All your troubles will be solved.
Sun Grid Engine, at least for Maya (Score:5, Insightful)
It's since been taken down in favor of running Alfred (because I no longer use Maya's builtin renderer, we've moved on to MTOR and PRMan), but I still have all of the files and scripts for it. If anyone's interested, I'd be happy to share: sabretooth@gmail.com [mailto]
Grid Engine (Score:2)
it supports
Apple Mac OS X
Compaq Tru64 Unix 5.0, 5.1
Hewlett Packard HP-UX 11.x
IBM AIX 4.3, 5.1
Linux x86, kernel 2.4, glibc >= 2.2
Linux AMD64 (Opteron), kernel 2.4, glibc >= 2.2
Silicon Graphics IRIX 6.5
Sun Microsystems Solaris (Sparc) 7 and higher 32-bit
Sun Microsystems Solaris (Sparc) 7 and higher 64-bit
Sun
Re:Sun Grid Engine - its good (Score:2)
Re:Sun Grid Engine, at least for Maya (Score:2)
High Performance Computing Perspective (Score:5, Interesting)
There is a fiber interface (Myrinet) to each node used by the MPI crowd, but our rendering group doesn't use it; they seem content with the performance of Ethernet over copper. Your needs may be different, of course, but latency isn't really an issue for rendering, and copper should provide all the bandwidth you need.
I'm not knowledgeable regarding all the software packages you list there, but I'm wondering if any of them would really take advantage of a 64-bit kernel (either on Opteron or G5/PPC970). Of course you can put a PPC version of Linux on the Xserve, but not without sacrificing nearly all Apple-provided management. If you expand the cluster to a large number of nodes, or even if you keep a small number of nodes but place it in a remote location, an Xserve running Linux would be painful to manage (no remote power-off/-on, remote console problems). The Xserve is shiny and has the requisite blue LEDs, but an AMD or Intel box (from the right vendor) would be much easier to manage remotely.
Server Install image (Score:2)
It seems to me that fiber is a waste of money; you can implement GigE over copper. I would think that most of the data transfer is going to be the scene data out, and then an image transferred back.
The company i used to work for was developing a global illumination raytracer and we create
PCI-Express (Score:3, Interesting)
Seriously - this is going to become a huge issue, as more rendering is pushed out to the stream processor that is the GPU.
Re:PCI-Express (Score:2)
Render managers (Score:3, Interesting)
dynebolic (Score:2, Informative)
I can't say that it is great because I haven't been able to do much of the above. Maybe you will have better luck.
That's easy! (Score:2)
We used to run Lightwave for work-related projects, but we only had a few machines allocated to the media lab. So after hours, we'd sneak around the cube farms and pop a client boot disk in every machine we could find.
Re:That's easy! (Score:2)
Depends on your workstations (Score:5, Informative)
I've set up and administered a number of farms over the years (doing it as I type; it's... what I do). One thing you really want to do, certainly with Maya's renderer, is to use the same OS and platform on your farm as on your user workstations. There can be subtle or even obvious differences in the render output between OSes, and since you'll have enough issues to deal with, you'll want to keep cross-platform incompatibilities out of the mix. Please, trust me on this. I've had to deal with Maya Irix/Win2k/Linux differences in the past.
As for queueing software, give Condor [wisc.edu] a look-see. Free and functional. I reverse-engineered a Perl version of it before they made their source available, and my version has run quite successfully at several animation studios and an effects house over the years. It's a well-architected system for distributed computing.
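For a feel of what driving Condor looks like, a submit description for a frame-per-job render might look roughly like this. The render_frame.sh wrapper and the 240-frame count are made-up placeholders; the submit-file keywords themselves are standard Condor:

```shell
# Generate a hypothetical Condor submit file: one queued job per frame,
# with $(Process) (0..239) standing in for the frame number.
cd "$(mktemp -d)"
mkdir logs
cat > render.sub <<'EOF'
universe   = vanilla
executable = render_frame.sh
arguments  = $(Process)
output     = logs/out.$(Process)
error      = logs/err.$(Process)
log        = render.log
queue 240
EOF
# condor_submit render.sub   # queues 240 jobs; Condor matches them to idle nodes
```

From there, Condor handles the matchmaking, restarts on dead machines, and per-job logging for you.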
Feel free to contact me if you've got any other render system or management questions. I'm always interested in seeing how other studios approach the challenge.
Figure out what rendering engine first! (Score:2)
First, figure that out, then go to the forums for that engine and ask your question there.
I just built an 8-node renderfarm for using Vray, which means 3D Studio, which means Win2k and it was a piece of cake.
6-8 nodes is tiny. Figure your use, then ask on the right forum. Many, many people have done this already.
What kind of rendering? (Score:3, Interesting)
If you're at a university and you're doing some sort of bioinformatics visualization, use whatever the researchers are most comfortable with. The odds are good that this is whatever the CS department was teaching on 5 years ago. Probably Suns or Windows machines. Slave... errr, grad student labor is cheap, so use an OSS scheduling and job management system if you can.
At most other places, a similar rule applies: use whatever the users are most comfortable with. If you're using Mac workstations and software, then it may make sense to go with a G5 rendering farm. If you're using Windows... well, okay. Windows render farms still suck, but at least buy PCs to leave your options open. Unless you're a really large organization (that is, the sort that doesn't have to resort to Ask Slashdot for research), you probably want to use products that come with support contracts. $20k/year is a pretty good deal when compared to keeping a full-time support person for the same task.
Deadline Render Queue (beta) (Score:5, Interesting)
We did this because we primarily use Discreet's 3dsmax [discreet.com] (with Brazil [splutterfish.com] and V-Ray [choasgroup.com]) and Eyeon's Digital Fusion [eyeon.com]. We have found that most existing render farm solutions do not support these two packages very well -- thus we decided to develop our own custom solution. We also support After Effects [adobe.com], Alias|Maya [alias.com], AIR [sitexgraphics.com] and other RenderMan [pixar.com] compliant rendering packages.
Of interest to the general Slashdot crowd may be that this Deadline Render Management Solution is based on the open source (BSD License) Exocortex C# library originally released with this C# 3D Engine [exocortex.org]. Deadline is built with C# in the hopes that using Mono [go-mono.com] we will be able to start supporting Linux with minimal extra effort.
I'll be reading all the posts on this Slashdot thread, but I would also appreciate any direct feedback on our current beta product. We also found solutions such as Rush and Smedge to be less than user-friendly in many respects; thus we have tried as best we could to improve support for 3D packages that are not well served by most render farm management solutions -- except for Discreet's Backburner (which we found not all that scalable).
Re:Deadline Render Queue (beta) (Score:2, Interesting)
We have our own in-house solution as well: Assburner. Assburner is GPLed and we hope to do a public release in the next month or two. Along with our production trackin
Re:Deadline Render Queue (beta) (Score:2, Informative)
Few Comments.. (Score:2)
First off, the G5 Xserves are very fast machines, and the cluster node is, of course, designed for just this sort of thing in the minimal amount of space. You should be aware that they are LOUD as hell, though... louder than the G4 Xserves, which sound like a plane taking off. That said, plan on hiding them away from your workspace, because even one of them in the same room with you will drive you nuts. This is no different from a 1U server from Dell/whoever, though.
You'll only really need to buy a single n
Some Tips (Score:5, Insightful)
Whichever processors you go with, make sure the entire farm uses the same type. Otherwise peculiar rendering differences might occur, in things like particles, hair/fur and fluids.
I suggest going with the Opterons just for the PC compatibility. While the CG industry is becoming more diverse hardware-wise, it is still dominated by PCs and, to a much lesser extent, SGI boxes (5 years ago it was all SGI). Using PCs keeps your options open. Perhaps someday you will find 3ds max and its included distributed rendering software more suitable for a task, and that can only be used with PCs. Same goes for the Mental Ray and Brazil renderers and the Combustion compositing software. Macs just have not been widely used in the 3D graphics industry, so the vast majority of 3D content creation software is PC and SGI only (Maya Unlimited is only available on PC and SGI, while a lower-end version is on Mac). And VirtualPC cannot be used to emulate 3D hardware acceleration (and it shouldn't be used for anything processor-intensive anyway), though this only applies to the hardware-rendered viewports in the apps. Having only Macs would be risky, and could limit your capabilities significantly.
Pixar's PRMan (PhotoRealistic RenderMan) is a full-blown renderer, not just something to help distribute render jobs. It is generally considered the best in the industry, though Mental Ray and Brazil have gained significant followings. For a cheap but effective render queueing system, check out Smedge [uberware.net]. Smedge was used by Manex Visual Effects for handling some of the effects shots in the Matrix trilogy. If you're running the Linux version of Maya (x86 only), it is not too difficult to distribute the render tasks yourself using shell scripts and the command-line renderer.
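A rough sketch of that hand-rolled approach: split the frame range evenly across nodes and fire off one Maya Render per node. The node names, paths, and scene file are hypothetical; -s/-e/-rd are Maya's standard start-frame/end-frame/output-dir flags, and the ssh line is commented out so the script only prints its plan.

```shell
#!/bin/sh
# Hand-rolled frame distribution for Maya's command-line renderer.
# Paths, node names, and the scene are hypothetical placeholders.
SCENE=/mnt/projects/shot42/scene.mb
OUTDIR=/mnt/renders/shot42
NODES="node01 node02 node03 node04"
START=1; END=240

count=$(echo $NODES | wc -w)
chunk=$(( (END - START + 1 + count - 1) / count ))   # frames per node, rounded up

s=$START
for node in $NODES; do
    e=$((s + chunk - 1))
    if [ "$e" -gt "$END" ]; then e=$END; fi
    echo "$node: frames $s-$e"
    # ssh "$node" "Render -s $s -e $e -rd $OUTDIR $SCENE"   # real dispatch
    s=$((e + 1))
done
```

With 240 frames over 4 nodes, each node gets a contiguous 60-frame chunk; a fancier version would hand out smaller chunks so a dead node costs less.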
Gigabit Ethernet should be fine; the bottleneck will be in the actual image processing, not data transfer rates. 100 Mbit Ethernet might even get the job done, though I'd use gigabit for the added speed when copying large files. YMMV, of course.
Overall, I'd try to create a very flexible system: one that will definitely support the newest CG software down the road, and one that ensures compatibility with everything, for those always-short deadlines. Good luck with your rendering.
Mixing layers here... (Score:2)
Well, quit mixing your media here. What do you mean GbE is enough? It runs over both copper and fiber. There are other layer-2 protocols that run over both copper and fiber (although over different cables). For example, what SAN are you going to use? iSCSI, FC, SRP, NFS?
What networking are you going to use ? GbE, TokenRing, FDDI, ???
What HPC/Interconnect are you going to use Infiniband, Myrinet, VI, ???
Some of these are the same networks, some of
Re:Mixing layers here... (Score:2)
nVidia Gelato (Score:3, Informative)
It runs under Linux, and "will function with whatever [render farm] management system you currently use."
To reiterate, it's a SOFTWARE renderer, that is hardware accelerated by using the video card as a co-processor.
IMP (Score:2)
Job opening FYI (Score:2)
Just thought this was an appropriate discussion. Also, maybe a "slashjobs" would be an interesting addition to this site.
Advice gleaned from years of bitter experience (Score:3, Informative)
Regarding networking: you have to look carefully at the way the farm will be used. If you are doing any kind of compositing (which requires high I/O rates), you'll benefit from gigabit ethernet. You'll also benefit from gigabit if you have exceptionally short render times (less than 30 minutes per frame), since in this case I/O is a significant fraction of each frame's render cycle. But the longer your per-frame render time, the less necessary gigabit is. We've always used 100base and it still serves us well. Fiber is expensive and provides nothing you'll need that copper can't provide.
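To put rough numbers on that 30-minute rule of thumb (the 50 MB frame size is an assumed figure, and the throughputs are nominal wire speeds ignoring protocol overhead):

```shell
# How long does moving one frame take, compared to rendering it?
frame_mb=50                        # assumed frame size in MB
render_s=1800                      # a 30-minute frame
fe_ms=$((frame_mb * 1000 / 12))    # 100 Mbit/s is roughly 12 MB/s
ge_ms=$((frame_mb * 1000 / 120))   # 1 Gbit/s is roughly 120 MB/s
echo "100base: ${fe_ms} ms to move a frame vs ${render_s} s to render it"
echo "gigE:    ${ge_ms} ms to move the same frame"
```

At half-hour render times, even 100base transfer is a fraction of a percent of the cycle; only as renders get very short (or compositing pushes many frames per minute) does the gigabit upgrade start to pay.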
The individual machines should have identical configurations and be interchangeable. Your goal is to not care when an individual machine dies. In light of this, there should be no local storage of data. You can save money on support if you buy spares instead of service contracts. Warranties also work, but the big manufacturers give their worst service to warranty-only customers.
Don't wire anything but ethernet to the machines. KVM wiring is expensive and unnecessary. Each machine should run unattended until it dies; when it does, you can wheel over a monitor and keyboard to diagnose it.
Opterons are fast, compatible, cool, low-power, and cheap. Xserves are nice, but we've found that Darwin doesn't integrate well into a pure Unix environment. You'd also be locking yourself into a single manufacturer.
Linux is cheap and effective, and easier to configure correctly as a server OS than as a desktop OS. There is so much commercial software available for it now that there is little reason to consider Windows or a commercial Unix. We haven't found Linux support from the big manufacturers to be all that great; if you use Linux, assume that you will have to solve most problems on your own.
Re:Slave Labour (Score:2)
Interesting post, but it doesn't belong here. He's talking about rendering images, not meat.
Re:Slave Labour (Score:2)
I didn't miss the point. Gibson's rendering farm is for a series of high-quality video clips from "footage" that appears on the internet. It's the prisoners' job to hand-render each frame. Sorry if I failed to make that clear, but I was trying to avoid divulging too much information about the plot.
I'm happy to admit that I haven't the first clue what goes into the rendering process, but Gibson gave the impression that it is a highly labour-intensive process.
Re:Small cluster (Score:5, Insightful)
This attitude bothers me, and not for the first time here on Slashdot. How the hell do you think those experts got to be experts? Do you think they just *poofed* into being with all their knowledge and skills already existent? No, at some point, they started with little or no knowledge of the subject and gradually accumulated enough knowledge and experience to become experts!
Sheeesh! If everybody listened to this advice they never would do anything new or different for fear of coming up with some sub-optimal solution.
Re:Small cluster (Score:4, Insightful)
Not a dig, just a remark on human nature
Re:Small cluster (Score:2)
Building a renderfarm has been done before and is a well-documented process. Choosing the software that you're going to be using should be based on the work you're going to perform -- not some random
Re:Small cluster (Score:2)
And besides that, a smart person could learn a lot from doing the requirements gathering and then working closely with an SI to define and implement a solution.
Re:using google (Score:2, Interesting)
I don't see the problem with asking here. You can actually get a lot more insight from a lot of different backgrounds in one place. Yeah, you have to weed through the junk to find the gems, but moderation helps some with that.
You elitist pigs are starting to bug me. We've all needed help from time to time, yourself included, I'm sure.
Re:gigabit ethernet likely overkill (Score:2, Informative)
Re:Why ask slashdot? (Score:2)
You know, the guys that did the graphics for The Chronicles of Riddick?
Yeah. Open mouth, insert foot. You got it.