Hardware

High Sustained HD Transfer Rates on a Budget?

aibrahim asks: "I need to be able to sustain at least a 23MB/s (that is MEGABYTES per second) transfer rate over the course of about 2 hours. I'd like to get 60MB/s sustained. I also have to be able to perform seeks quickly because I am going to need random access. Can ATA RAID arrays really do it? What would be the difference between using ATA/66 and ATA/100 arrays? What other budget-conscious ways are there to get it done? How about sharing this fast storage across a network? For some context, the application is non-linear editing of uncompressed standard definition television, multiple streams if possible."
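
For scale, here's a minimal back-of-the-envelope sketch in Python. The frame geometry below (720x486 NTSC, 8-bit 4:2:2, 29.97fps) is an assumption the question doesn't spell out, but it shows why uncompressed SD lands in the same ballpark as the stated 23MB/s floor:

    # Rough bandwidth/storage math for one uncompressed SD stream.
    # Frame size, sampling, and frame rate are illustrative assumptions;
    # the question doesn't state the exact capture format.
    WIDTH, HEIGHT = 720, 486     # NTSC active picture
    BYTES_PER_PIXEL = 2          # 8-bit 4:2:2 sampling
    FPS = 30000 / 1001           # 29.97 frames/s

    frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
    stream_mb_s = frame_bytes * FPS / 1e6          # ~21 MB/s per stream
    two_hours_gb = stream_mb_s * 2 * 3600 / 1000   # ~151 GB for 2 hours

    print(f"{stream_mb_s:.1f} MB/s per stream, {two_hours_gb:.0f} GB for 2 hours")

Under those assumptions, multiple simultaneous streams scale linearly, so the 60MB/s goal corresponds to roughly three such streams.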
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    I have the Adaptec IDE raid card (AAA-UDMA) with two IBM 37.5 gig hard drives in a stripe set, and it can sustain 27 MB/s read. (It contains footage for a DV feature; it's about 80% full.)

    The new IBM 75 gig IDE drives are very fast; you might look at those.

    The total amount of data you are looking at is about 170 gig by my calculations (a quick check of that math follows below). So you'll need three of the IBM 75 gig drives, plus the Adaptec card (which can support up to four). That'll give you 225 gig at a cost of about $1600. Or you could try to use WinNT/Win2k software striping. This setup might even get you up near saturating the PCI bus, at about 90MB/s.

    Avoid Promise cards at all costs; I have lost a lot of time and data trying to use them.

    You might check out Storage Review [storagereview.com] as a good place for disk info.

    If you're really on a budget, you shouldn't be using uncompressed video. It's much harder to work with, eats a lot more storage, and isn't any better looking to 99% of the population than DV.

    Marshall Spight
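
    As a quick check of the capacity math above, using the question's 23MB/s figure (the 75 gig drive size and ~$1600 price are the commenter's numbers, not verified):

        # Capacity check: 23 MB/s sustained for 2 hours on 75 GB drives.
        import math

        rate_mb_s = 23
        total_gb = rate_mb_s * 2 * 3600 / 1000      # ~166 GB ("about 170 gig")

        drive_gb = 75
        drives = math.ceil(total_gb / drive_gb)     # 3 drives striped
        print(total_gb, drives, drives * drive_gb)  # 165.6 3 225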

  • by Anonymous Coward

    If the original seeker really, really does mean "random access", then they're shit outta luck. Our servers where I work use the hottest, most expensive Cheetah SCSI drives for extremely random-access reads and writes. We're bottlenecked on the rate at which the drive arm can bounce around on the platter. During peak activity, we're only seeing about 10MB/s. To get any better performance, you'll either need to restructure your database so that you see sequential access more often, or you're going to have to spend some major bucks on a SAN product or buy enough RAM to cache a sizeable fraction of your read/write requests.

    -- Guges --
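
    To put numbers on the seek-bound scenario described above, here's a minimal model. The seek time, rotational latency, media rate, and request size are illustrative assumptions, not measurements from the poster's servers:

        # Random access is seek-bound: every request pays a seek plus
        # rotational latency before any data moves. Figures below are
        # illustrative assumptions for a ~10,000 RPM drive.
        avg_seek_ms = 5.0              # average seek
        rotational_latency_ms = 3.0    # half a revolution at 10k RPM
        media_rate_mb_s = 30.0         # sequential media transfer rate
        request_mb = 64 / 1024         # 64 KB random requests

        transfer_ms = request_mb / media_rate_mb_s * 1000
        per_request_ms = avg_seek_ms + rotational_latency_ms + transfer_ms
        throughput = request_mb / (per_request_ms / 1000)
        print(f"~{throughput:.1f} MB/s")   # ~6 MB/s: the arm, not the interface, limits you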

  • Video streams shouldn't be random. This may not always hold true, but most times a single stream should be laid down contiguously.

    Running more than one stream off the same set of disks will cause the access patterns to go to crap & the throughput to go out the window. Doing multiple streams will probably need a stripe set per stream.

    Editing compressed video streams is fairly I/O intensive. Trying to deal with uncompressed streams is *not* going to be cheap.
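
    A toy model of why interleaving streams on one stripe set hurts; the sequential rate, seek cost, and chunk size below are illustrative assumptions:

        # Per-stream rate when N streams share one disk/stripe set: each
        # switch between streams costs a seek, so per-stream throughput
        # collapses. Numbers are illustrative assumptions.
        def per_stream_mb_s(streams, chunk_mb=1.0, seq_mb_s=35.0, seek_ms=8.0):
            read_s = chunk_mb / seq_mb_s
            seek_s = 0.0 if streams == 1 else seek_ms / 1000
            return (chunk_mb / (read_s + seek_s)) / streams

        for n in (1, 2, 3):
            print(n, "stream(s):", round(per_stream_mb_s(n), 1), "MB/s each")
        # 1: 35.0, 2: ~13.7, 3: ~9.1 -- hence a stripe set per stream.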
  • Actually I got way over 90MB/s SUSTAINED. Yup, you read that right.
    Config:
    Sun A5200 Fibre Channel disk array
    18GB x 8 FC drives
    Sun A5000-series machine with 8 x 450MHz UltraSPARC-IIs, 2 x 256MB-cache FCAL cards
    4 x Gigabit Ethernet cards (1000BaseT)
    Cost: if you have to ask... you can't afford it.
  • Please note the error in my last sentence... Gigabit Ethernet can do 20MB/s, 100BaseT can do 20Mb/s... darn caps (and stupid me for mixing them up).

    --
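
    Since the Mb/MB mix-up above is an easy one to make, here's a quick conversion of theoretical line rates. Real-world throughput on 2000-era hardware sat well below these wire rates, which is why the observed 20MB/s figure for gigabit isn't unreasonable:

        # Megabits to megabytes: divide by 8. These are wire rates only;
        # protocol overhead and host limits eat a lot of it in practice.
        for name, mbit in (("100BaseT", 100), ("Gigabit Ethernet", 1000)):
            print(f"{name}: {mbit} Mb/s = {mbit / 8:.1f} MB/s theoretical")
        # 100BaseT: 12.5 MB/s, Gigabit Ethernet: 125 MB/s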
  • I've seen those same drives on pricewatch before... every time I think "damn, I've gotta get an FC SCSI HBA..." - then reality hits again.

    Too bad, huh?
    --
  • Don't forget... no matter which route you go, you need to cram as much RAM into that machine as you can, and try to tweak your system to do as much pre-buffering as possible. Just because your choice of drives might be able to sustain, say, 35 meg/sec, it doesn't mean that they will ALWAYS sustain that. Remember the old "t-cal" (thermal calibration) delays and stuff? At some point your drive(s) are going to get distracted for some reason or another, and having a huge RAM buffer is the only good way to get around that.

    I know cost is an issue, so I say all this assuming you don't want to buy an expensive caching RAID adapter.
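
    A rough way to size that buffer; the stall length and safety factor are illustrative assumptions (recalibration pauses varied a lot by drive):

        # Buffer needed to ride out a drive stall without dropping frames:
        # enough data to keep feeding the consumer for the whole pause.
        consume_mb_s = 23        # rate from the original question
        worst_stall_s = 0.5      # assumed worst-case recalibration/retry pause
        safety_factor = 4        # headroom for back-to-back hiccups

        buffer_mb = consume_mb_s * worst_stall_s * safety_factor
        print(f"~{buffer_mb:.0f} MB of read-ahead buffer")   # ~46 MB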
  • Possibly slightly off-topic, but there's a great article explaining the different RAID levels on Ars Technica here [arstechnica.com]

    Remember, not all RAID levels are created equal.

  • I need to be able to sustain at least a 23MB/s (That is MEGABYTES per second) transfer rate over the course of about 2 hours.

    23 megs a second is a lot of bandwidth, whether we're dealing with a network or not. I'm curious as to what the intended use of this will be.


    =================================
  • Maybe you need to look at getting a good RAID card. I have a Perc 3 DI attached to a few 10K RPM drives, and it is supposed to get up to 80+ MB/s throughput. Check them out. You can buy them from Dell.com
  • "cheap, fast, good. pick any two."

    If you go with IDE (and software RAID) you've got cheap and fast down.

    SCSI RAID would, of course, be "fast" and "good".

    - A.P.
    --


    "One World, one Web, one Program" - Microsoft promotional ad

  • First off, AVOID:

    • The Promise FastTrak cards
    • Mainboard BIOS RAID (same thing as the crappy Promise cards)
    • More than 2 ATA drives in software RAID, if you need your CPU at all (too much CPU utilization).

    Additionally, what you need to consider:

    • PCI throughput -- try putting your controller on its own PCI channel and go for 66MHz (or PCI64, but you'll need Linux 2.4 for 64-bit PCI). You'll need a more costly mainboard to do this; ones based on the ServerWorks (fka RCC) ServerSet chipsets come highly recommended. Otherwise you'll easily saturate a traditional mainboard with a single PCI bus at an I/O rate that high (see the bus comparison below).
    • SCSI, since you're going to need more than 2 striped drives -- try ~8 Ultra160 drives striped over 2-4 channels on a 66MHz PCI64 card. The new Mylex eXtremeRAID 2000 is such a 4-channel Ultra160 board with a powerful 233MHz StrongARM at the core (compared to the wimpy 66-100MHz i960 co-processors on other SCSI RAID controllers).
    • If cost is a factor, look into a "real" co-processing ATA RAID controller that acts like a SCSI disk/target, like those from 3Ware [3ware.com] (which has full Linux support). 3Ware's new 6000-series has Ultra66 support and claims >100MB/s read speeds and up to 84MB/s write speeds (I'm sure that's the 8-channel board with 8 disks striped ;-).

    -- Bryan "TheBS" Smith
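
    To make the PCI-throughput bullet concrete, here's a small bus-headroom comparison. The 2x multiplier is an assumption that each byte crosses the bus twice (controller to RAM, then RAM to the display/capture card):

        # How much of each PCI flavor one 60 MB/s stream consumes,
        # assuming the data crosses the bus twice (in and back out).
        buses = {
            "PCI 32-bit/33MHz": 132,   # MB/s peak
            "PCI 64-bit/33MHz": 264,
            "PCI 64-bit/66MHz": 528,
        }
        bus_traffic = 60 * 2
        for name, peak in buses.items():
            print(f"{name}: {bus_traffic / peak:.0%} of peak")
        # ~91%, ~45%, ~23% -- plain 32-bit/33MHz PCI has no headroom left.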

  • According to this site http://www.iol.unh.edu/training/fc/fc_tutorial.html Fibre Channel (spelled Fiber in other places) could offer, in theory, speeds of over 4 gigabits a second. I remember an ad in some computer mag offering 2Gb/s Fibre Channel hard drives... BTW, at these speeds real-time TV-to-MPEG writing could be accomplished with no loss, that is, if you could get a card to do some real-time encoding.
  • FC is dope... but not cheap. (I work for a fc consulting company right now).

    I remember checking out the prices at Pricewatch for 9.1GB, 7200RPM FC drives -- I don't know how they are so cheap, but some of them cost less than $100 with shipping.

    When you start talking about HBAs (host bus adapters), that's where the money comes in... they are around 1,000 bucks. Plus, you've got to get Fibre Channel hubs to connect the disks, etc... However, the speeds are incredible. Plus, a single card can access something like 16*8 different disks...

    driver support is also a problem... I don't know if linux currently has any good HBA drivers yet...

    Damn, this is an incoherent post. Main point: FC is incredible and incredibly expensive.


    willis.

  • ...here [medea.com] and their videoraid and videorack products. Don't know if they are within your budget, but they specialise in this area.

    bakes
    --
  • by toofast ( 20646 ) on Tuesday August 01, 2000 @10:08AM (#888170)
    On my Athlon 600 system w/ one UDMA/66 drive, hdparm reports 18MB/sec (bah, what's it worth). But after a battery of tests, this same (one) HD was able to sustain 9MB/sec with mixed small/medium/large files, using the standard UDMA controller (a crude read-timing sketch follows at the end of this comment).

    One lovely alternative is using UDMA/66 hardware RAID, such as the Promise RAID controller. Throw in four drives @ RAID 0 and I'm sure you'll get (at least) 35MB/sec sustained.

    Don't forget that 60MB/sec sustained is half the PCI bus theoretical bandwidth, so probably a 64-bit PCI solution would be best.
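
    If you want a sanity check beyond hdparm, here's a minimal sequential-read timing sketch. The path is a placeholder; point it at a file several gigabytes long so caching doesn't flatter the result:

        # Crude sustained-read benchmark (rough stand-in for hdparm -t).
        # TEST_PATH is a placeholder -- use a multi-gigabyte test file.
        import os, time

        TEST_PATH = "/path/to/large_test_file"   # placeholder
        CHUNK = 1024 * 1024                      # 1 MB reads

        fd = os.open(TEST_PATH, os.O_RDONLY)
        total, start = 0, time.time()
        while True:
            buf = os.read(fd, CHUNK)
            if not buf:
                break
            total += len(buf)
        os.close(fd)
        print(f"{total / 1e6 / (time.time() - start):.1f} MB/s sustained read")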
  • by bluGill ( 862 ) on Tuesday August 01, 2000 @10:06AM (#888171)

    In theory a normal PCI bus can reach 132MB/s. However, you're not only reading (or writing?) that data from your hard drive; I'm assuming you also need to put it onto your display.

    Don't forget cache issues: you DMA that into memory, then read it out to the processor. Can your memory handle that kind of access? You're putting a lot of stress on the memory bus, especially if your main code doesn't fit into the processor cache (or isn't optimized to fit well). Sure, the latest gigahertz CPUs can deal with the data just fine, but typically PCs can't keep up with the data flow.

    For fast disks, SCSI rules. While ATA now allows tagged queueing, AFAIK nothing implements tagged queueing in ATA disks, while SCSI does this as a matter of course. Meaning that you will want to select disks based on that feature.

    Remember, your application is time critical. If a frame is late it matters.

    Now can ATA disks keep up? I don't know. Are scsi disks going to be better? Probably. Is the difference enough to matter? Maybe.

    In any case, no single disk can keep up with your requirements. What you need is RAID 0+1, so that data can always be read from two disks; in a good implementation you read from whichever drive is less busy at the moment. Unfortunately, your write costs go up as you add more drives to make the reading faster. If you can put data on a different disk, so that you never read from the same one you write to, you will have better luck.
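
    A minimal sketch of the "read from whichever mirror is less busy" idea above; this illustrates the scheduling policy only, not how any particular RAID driver implements it:

        # Toy read scheduler for a mirrored pair: send each read to the
        # copy with the least outstanding work. Writes would go to both.
        class Mirror:
            def __init__(self, name):
                self.name = name
                self.queued_bytes = 0        # outstanding work on this copy

            def submit(self, nbytes):
                self.queued_bytes += nbytes
                return f"read {nbytes} B from {self.name}"

        def issue_read(mirrors, nbytes):
            target = min(mirrors, key=lambda m: m.queued_bytes)
            return target.submit(nbytes)

        disks = [Mirror("disk0"), Mirror("disk1")]
        for size in (65536, 65536, 131072):
            print(issue_read(disks, size))   # reads alternate across the copies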

  • by Tower ( 37395 ) on Tuesday August 01, 2000 @09:55AM (#888172)
    You'll have to go with a decent SCSI solution on this one... You can't depend on ATA/66 or ATA/100 for any sustained transfer at all. The protocol doesn't support disconnect, so one drive can hold up the channel while you wait for another. Physically, ATA/66 *cannot* sustain 60MB/s.

    With 'older' UW SCSI hardware (two 1997/1998 9GB 7200RPM IBM drives) I can sustain ~12MB/s from each, and if I add in my 10krpm drive, I can sustain a total that essentially maxes out my 40MB/s UW SCSI link. If you *need* to keep near 60MB/s, U2W is really your only cost-effective choice. Get 4 drives and a card (a rough sizing sketch follows at the end of this comment)... yeah, it'll run you a little $$$, but you'll actually have the performance (striping the disk set, of course).

    If you have a dedicated 100BaseT Ethernet link, you might be able to get 20MB/s but not 60... certainly not onto the same system as the drives (PCI 32b/33MHz is ~132MB/s max).

    Best of luck.
    --
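
    Following on from the per-drive rates above, here's a rough way to size the stripe set. The per-drive sustained rate and the derating factor are assumptions, so plug in numbers measured from your own drives:

        # How many striped drives to hit a target sustained rate?
        import math

        target_mb_s = 60
        per_drive_mb_s = 18       # assumed sustained rate of one U2W drive
        scaling = 0.85            # striping rarely scales perfectly

        drives = math.ceil(target_mb_s / (per_drive_mb_s * scaling))
        print(f"{drives} drives striped")   # 4 at these assumptions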
