Databases Networking Programming Software

Choosing Interconnects for Grid Databases?

cybrthng asks: "With all of the choices from Gig-E, 10 Gig-E, SCI/InfiniBand and other connections for grid database applications, which one is actually worth the money and which ones are overkill or underperforming? In a Real Application Cluster (RAC), latency can be an issue with cached memory and latches going over the interconnect. I don't want to recommend an architecture that doesn't achieve the desired results, but on the flip side, I don't necessarily want overkill. Sun has recommended SCI, Oracle has said Gig-E, and other vendors have said 10 Gig-E. It seems sales commissions drive much of what people recommend, so I'm interested in any real-world experience you may have. Obviously, Gig-E is more affordable from a hardware perspective, but does this come at a cost of application availability and performance for the end users? What have been your successes or failures with grid interconnects?"
  • Gigabit Ethernet (Score:1, Informative)

    by Anonymous Coward on Wednesday October 12, 2005 @12:53PM (#13774473)
    Switched gigabit Ethernet is going to offer the best performance. Gigabit Ethernet is also cheap. InfiniBand is too expensive and underperforms. Fibre Channel is way too expensive and is no faster than Gig-E. 10 Gbps Ethernet is only good on dedicated switches because a PC cannot drive it; most PCs can't even drive 1 Gbps Ethernet.
  • Gig-E (Score:5, Informative)

    by tedhiltonhead ( 654502 ) on Wednesday October 12, 2005 @12:55PM (#13774496)
    We use Gig-E for our 2-node Oracle 9i RAC cluster. We have each NIC plugged into a different switch in our 3-switch fabric (which we'd have anyway). This way, if a switch or one of the node's interfaces dies, the other node's link doesn't drop. On Linux, the ethX interface sometimes disappears when the network link goes down, which can confuse Oracle. To my knowledge, this is the Oracle-preferred method for RAC interconnect.
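
    A minimal sketch of a link-state watchdog for the interconnect NIC, assuming a Linux kernel that exposes /sys/class/net/&lt;iface&gt;/operstate; the interface name and poll interval below are placeholders, not details of the setup described above:

```python
#!/usr/bin/env python3
"""Watch a RAC interconnect NIC's link state on Linux (illustrative sketch)."""
import time

INTERCONNECT_IFACE = "eth1"   # hypothetical name of the private interconnect NIC
POLL_SECONDS = 10             # arbitrary poll interval

def link_state(iface: str) -> str:
    # operstate reads "up", "down", or "unknown" depending on driver support
    with open(f"/sys/class/net/{iface}/operstate") as f:
        return f.read().strip()

if __name__ == "__main__":
    while True:
        state = link_state(INTERCONNECT_IFACE)
        if state != "up":
            # In practice this is where you'd alert an admin or kick off a failover check
            print(f"WARNING: {INTERCONNECT_IFACE} operstate is '{state}'")
        time.sleep(POLL_SECONDS)
```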
  • GigE (Score:5, Informative)

    by Yonder Way ( 603108 ) on Wednesday October 12, 2005 @01:03PM (#13774577)
    Until a couple of months ago I was the Sr Linux Cluster admin for the research division of a major pharma company. Our cluster did just fine with GigE interconnectivity, without bottlenecking.

    Make sure you tune your cluster subnet: adjust window sizes, use jumbo frames, etc. Just the jump from a 1500-byte MTU to jumbo frames made a huge difference in performance, so spending a couple of days just tuning the network will make all the difference in the world.
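
    A minimal sketch of that kind of tuning on Linux, assuming jumbo-capable switches and root access; the interface name, MTU, and buffer sizes below are illustrative placeholders, not recommendations:

```python
#!/usr/bin/env python3
"""Apply jumbo frames and larger TCP windows on a cluster-facing NIC (sketch)."""
import subprocess

IFACE = "eth1"          # hypothetical cluster-facing interface
JUMBO_MTU = "9000"      # common jumbo-frame MTU; the switch must support it too

SYSCTLS = {
    # maximum socket buffer sizes (bytes)
    "/proc/sys/net/core/rmem_max": "16777216",
    "/proc/sys/net/core/wmem_max": "16777216",
    # min / default / max TCP window sizes (bytes)
    "/proc/sys/net/ipv4/tcp_rmem": "4096 87380 16777216",
    "/proc/sys/net/ipv4/tcp_wmem": "4096 65536 16777216",
}

def main() -> None:
    # Raise the MTU on the cluster interface (requires root)
    subprocess.run(["ip", "link", "set", "dev", IFACE, "mtu", JUMBO_MTU], check=True)
    # Apply the TCP buffer settings
    for path, value in SYSCTLS.items():
        with open(path, "w") as f:
            f.write(value)

if __name__ == "__main__":
    main()
```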
  • by TTK Ciar ( 698795 ) on Wednesday October 12, 2005 @01:11PM (#13774639) Homepage Journal

    In my own experience, fully switched Gig-E was sufficient for operating a high-performance distributed database. The bottlenecks were in filesystem and hard-drive tuning parameters and in memory pool sizes. But that was also a few years ago, when the machines were a lot less powerful than they are now (though hard drive performance has not improved by all that much).

    Today, high-end machines have no trouble maxing out a single Gig-E interface, but unless you go with PCI Express or a similarly capable I/O bus, they might not be able to take advantage of more. That caveat aside, if Gig-E proved insufficient for my application today, I would add one or two more Gig-E interfaces to each node. There is software (for Linux at least; I'm not sure about other OSes) that allows for efficient load balancing across multiple network interfaces. 10 Gig-E is not really appropriate, IMO, for node interconnect, because it needs to transmit very large packets to perform well. A good message-passing interface will cram multiple messages into each packet to maximize performance (for some definition of performance -- throughput vs. latency), but as packet size increases you'll run into latency and scheduling issues. 10 Gig-E is more appropriate for connecting Gig-E switches within a cluster.
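
    One such load-balancing mechanism on Linux is the bonding driver. A minimal sketch of aggregating two extra Gig-E NICs with it, assuming a reasonably recent iproute2 (`ip link ... type bond`) and a switch configured to match; the interface names, bonding mode, and address are placeholders:

```python
#!/usr/bin/env python3
"""Aggregate two Gig-E NICs with the Linux bonding driver via iproute2 (sketch)."""
import subprocess

SLAVES = ["eth1", "eth2"]   # hypothetical additional Gig-E NICs
BOND = "bond0"
MODE = "802.3ad"            # LACP; balance-xor or balance-rr are alternatives
ADDRESS = "10.0.0.11/24"    # placeholder interconnect address

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def main() -> None:
    run("ip", "link", "add", BOND, "type", "bond", "mode", MODE)
    for iface in SLAVES:
        run("ip", "link", "set", iface, "down")        # slaves must be down to enslave
        run("ip", "link", "set", iface, "master", BOND)
    run("ip", "addr", "add", ADDRESS, "dev", BOND)
    run("ip", "link", "set", BOND, "up")

if __name__ == "__main__":
    main()
```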

    The clincher, though, is that this all depends on the details of your application. One user has already suggested you hire a professional network engineer to analyze your problem and come up with an appropriate solution. Without knowing more, it's quite possible that single Gig-E is best for you, or 10Gig-E, or Infiniband.

    If you're going to be frugal, or if you want to develop expertise in-house, then an alternative is to build a small network (say, eight machines) with single channel Gig-E, set up your software, and stress-test the hell out of it while looking for bottlenecks. After some parameter-tweaking it should be pretty obvious to you where your bottlenecks lie, and you can decide where to go from there. After experimentally settling on an interconnect, and having gotten some insights into the problem, you can build your "real" network of a hundred or however many machines. As you scale up, new problems will reveal themselves, so incorporating nodes a hundred at a time with stress-testing in between is probably a good idea.
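
    A minimal sketch of the kind of stress test described above, assuming plain TCP over the interconnect: run it as `server` on one node and point another node's `client` at it to get rough round-trip latency and bulk throughput numbers. The port, message sizes, and iteration counts are arbitrary illustrative values:

```python
#!/usr/bin/env python3
"""Rough interconnect latency/throughput probe between two nodes (sketch)."""
import socket
import sys
import time

PORT = 5050
PING_COUNT = 1000                   # small round trips for latency
BULK_BYTES = 256 * 1024 * 1024      # 256 MiB one-way transfer for throughput
CHUNK = 64 * 1024

def server() -> None:
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                if len(data) <= 64:          # echo small ping messages, sink bulk data
                    conn.sendall(data)

def client(host: str) -> None:
    with socket.create_connection((host, PORT)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        # Latency: time small request/response round trips
        start = time.time()
        for _ in range(PING_COUNT):
            sock.sendall(b"x")
            sock.recv(64)
        rtt_us = (time.time() - start) / PING_COUNT * 1e6
        # Throughput: push a large buffer one way
        buf = b"\0" * CHUNK
        sent = 0
        start = time.time()
        while sent < BULK_BYTES:
            sock.sendall(buf)
            sent += len(buf)
        mbps = sent * 8 / (time.time() - start) / 1e6
        print(f"avg round trip: {rtt_us:.1f} us, bulk throughput: {mbps:.0f} Mbit/s")

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: stress.py server | stress.py client <host>")
```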

    -- TTK

  • Multiple Networks (Score:5, Informative)

    by neomage86 ( 690331 ) on Wednesday October 12, 2005 @01:47PM (#13774926)
    I have worked with some bioinformatics clusters, and each machine usually was on two separate networks.

    One was a high-latency, high-bandwidth switched network (I recommend GigE since it has good price/performance) and one was a low-latency, low-bandwidth network just for passing messages between CPUs. The application should be able to pass off throughput-intensive stuff (file transfers and the like) to the high-latency network, and keep the low-latency network clear for inter-CPU communication.

    The low-latency network depends on your precise application. I've seen everything from a hypercube topology with GigE (for example, with 16 machines in the grid you need 4 GigE connections per computer for the hypercube; it always seemed to me that doing the routing in software would add a lot of latency, but people smarter than me tell me it's low latency, so it's worth looking into). Personally, I just use a 100 Mbit line with a hub (I tried a switch, but it actually introduced more latency at less than 10% saturation, since few collisions were taking place anyway) for the low-latency connect. The 100 Mbit line is never close to saturated for my application, but it really depends on what you need.

    The big thing is to make sure your software is smart enough to understand what the two networks are for, and not try to pass a 5 GB file over your low-latency network. Oh, and I definitely agree: if you are dealing with more than $10K-$20K, it's definitely worth it to find a consultant in that field to at least help with the design, if not the implementation.
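
    A minimal sketch of what "smart enough to understand what the two networks are for" can look like in practice: bind the local end of each socket to this node's address on the network the traffic should use (with each network on its own subnet, the kernel then routes out the matching NIC). The subnets, peer addresses, and ports below are made-up examples:

```python
#!/usr/bin/env python3
"""Pin bulk and low-latency traffic to different networks by source address (sketch)."""
import socket

# Hypothetical addressing: one NIC per network on every node
BULK_LOCAL_IP = "10.1.0.5"   # this node's address on the high-bandwidth GigE network
FAST_LOCAL_IP = "10.2.0.5"   # this node's address on the low-latency network

def connect_via(local_ip, peer):
    """Open a TCP connection whose source address pins it to one network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((local_ip, 0))             # port 0 = let the kernel pick the source port
    sock.connect(peer)
    return sock

if __name__ == "__main__":
    # Bulk data (file transfers) goes over the high-bandwidth network...
    bulk = connect_via(BULK_LOCAL_IP, ("10.1.0.9", 9000))
    # ...while small control messages stay on the low-latency network.
    ctrl = connect_via(FAST_LOCAL_IP, ("10.2.0.9", 9001))
    ctrl.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send small messages immediately
    bulk.close()
    ctrl.close()
```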
