Hardware

What Sustained Disk Transfer Rates Do You Get?

Mr. Jackson asks: "What kind of disk transfer rates (MB/s) do people get in the real world when moving around large (100s MB) files? Either every machine in our building is mis-configured, or our notions about what we were getting are way off. I've tested half a dozen machines, mostly Win2k, some Linux, by just copying a large file and timing it with a watch. 8 MB/s seems to be about average for inter-disk copies. RAID 1 (stripped) got as high as 12 MB/s after fiddling with cache settings. RAID 5 was as low as 2 MB/s. We all thought the numbers should have been around 30 MB/s."
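For the Linux boxes, something like the following is a more precise way to time a copy than a wristwatch (a rough sketch; use a file bigger than RAM so the buffer cache doesn't flatter the numbers):

    time sh -c 'dd if=/dev/zero of=bigfile bs=1024k count=500 && sync'   # write ~500MB and flush it
    time cp bigfile /dev/null                                            # read it back through the filesystem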
This discussion has been archived. No new comments can be posted.

  • We were shocked at some network file transfer speeds here -- if you are doing network copies and are shocked at how slow it is, make sure that your switch and your NIC agree about whether or not your connection is half- or full-duplex.

    Makes a HUGE difference.
  • RAID 1 is mirrored not striped. RAID 0 is striped.
    • RAID 1 is mirrored not striped. RAID 0 is striped.

      To be fair, he didn't say striped, he said stripped.. so I imagine that he ran strip(1) over the file before he sent it.. if it was an executable, it would have made the file smaller, and thus transfer faster :o)
  • Mega-what? (Score:4, Insightful)

    by itwerx ( 165526 ) on Tuesday August 06, 2002 @02:23PM (#4019804) Homepage
    What you're describing sounds about right, actually.
    Be sure you're keeping Mega-Bytes and Mega-Bits straight!
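    A quick sanity check, numbers purely for illustration:

    echo 'scale=1; 100/8' | bc    # 100 Mbit/s (Fast Ethernet) is only 12.5 MBytes/s
    echo '8*8' | bc               # and an 8 MByte/s copy is 64 Mbit/s on the wire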
  • ...basically. I've found that sustained data rates are about 1/3 'maximum' rated speed on IDE disks with DMA enabled, and 1/2 rated speed on the various SCSI busses.

    Call me cynical, but all those rated speeds are just so much fiction. Disk bus technologies can be compared to each other in a 'soft' manner, but comparisons to absolute and consistent hard numbers are just not realistic.
  • Sequential reads on my drives can go from 25-65MB/sec (maxtor and cheetah); if the file is heavily fragmented I've seen it drop as low as 5-10MB/sec. Not so bad on the 15Krpm cheetah because the access rate helps, but on the lowly 5400rpm maxtor fragmentation destroys read/write speeds.
    • How are you measuring those transfer speeds? I can't believe that you're getting 65MB/sec on that cheetah without heavy cache usage, or with a very specific I/O operation. I definitely don't believe that you're getting 25MB/sec from the 5400 RPM drive unless you're doing a single 4k read and timing that.
      • http://www.seagate.com/cda/products/discsales/enterprise/tech/0,1084,337,00.html

        There is no such thing as a true sustained data rate; it will always peak at the outer side of the platter. Using HDtach 2.61 the maximum is around 65, which slowly decreases to the low 40's on the inside. My Maxtor 80GB (98196H8) drive gets 30MB/sec on the outer side of the platter (I might have highballed that number); on the inside it goes to about 19MB/sec.

        Keep in mind that as platter density increases, and speed stays constant, transfer rate goes up. So a 5400rpm 80GB drive with 4 20GB platters will be slower than a 5400rpm drive with 1 80GB platter.
        • a 5400rpm 80GB drive with 4 20GB platters will be slower than a 5400rpm drive with 1 80GB platter.
          Really? I'd have thought that writing the data four times faster would more than make up the difference (they do write all eight heads at once, don't they?).

          • they do write all eight heads at once, don't they?

            Nope. Actually there were some drives which did this a long time ago, but nothing recently.

            The difficulty is that the platters expand due to heat, so it isn't possible to follow "parallel" tracks on different platters unless you put a separate actuator onto each arm; at that point, you've duplicated so much electronics that you might as well just get a second drive.
        • I don't doubt your numbers necessarily, but I want to know how you got them. I have a hard time believing that they are representative of real world performance. Nobody uses their disk as a contiguous string of bits, they put filesystems on them. My 7000 RPM disk is capable of giving me 30MB per second, but I never actually see transfer rates like that in practice. Granted I am using 18GB disks, but I really see like 10-15MB/second on average depending on the block size. Are you doing raw I/O, or is there a filesystem and a commonly used utility involved here?
          • This is almost identical to kernel scheduling problems, but much slower.

            Context switching time vs process time vs smoothness

            In response to your question, the raw numbers are from HDtach (raw read), but real world numbers aren't far off. This of course depends on what you are doing. If I'm reading large (GBs) defragmented files, it's very close - e.g. VirtualDub, when the processing it's doing is less than 100% CPU utilization, leaving it free to read as much data as it can get. Copying across drives is only marginally slower. If I am copying many small files or heavily fragmented files, it's way off, but this isn't necessarily due to the drive itself. The filesystem is inefficient (NTFS), sending the head all over the place instead of sorting the requests and reading the disc sequentially in an optimal manner, even if the data then comes back in a different order than you asked for it. I believe ext3 does this now; I don't know if it also does it for fragmented files.

            Mr. Jackson's CPUs might be loaded with overhead from the filesystem, and/or the systems are just trying to serve too many files at once with too small a block size. For example, if you wanted to perfectly load balance 4 read requests for 4 different files, you'd do 4 accesses per cycle of 1 bit each. That perfect load balancing is not necessary, and this is why a large block size is important: for 4 simultaneous 100MB files that's about 3.2 billion accesses, versus only about 12.5 thousand accesses with a 32k block size. That's a lot of access time saved. I bet he just needs to do some tweaking. How long can an application go without data? How much data can you read within that time? Use a huge block size; if not a hardware block, modify the software so it makes sure to read X amount of sequential data before moving on to the next request. Not very smooth, but lots of "process time".
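            To check that arithmetic yourself (treating 100MB as 10^8 bytes, purely illustrative):

            echo '4 * 100000000 * 8' | bc       # interleaving 4 x 100MB one bit at a time: 3.2 billion accesses
            echo '4 * 100000000 / 32768' | bc   # with a 32k block size instead: only ~12,000 accesses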

            • You *do* realize that a lot of the data involved has to be coming from a cache to get those kind of numbers? That this is *never* going to be the case when copying, say, a movie file?
              • No.. these are hdtach numbers, which bypasses the cache.. The from-media rates are actually higher, keep reading - I posted a URL for the cheetah datasheet.
                • Just because the drive reports that the cache is off doesn't mean that it is. As a matter of fact, there was a bit of a problem when drive manufacturers tried pulling this with write caching instead of just read caching a while back, resulting in potentially lost data.
        • Do hard drives write data from the outside in to take advantage of the fact that they're faster on the outer edges? I know CD-ROMs don't work like that (except for some console drive?).

          I ask since all these defragging programs could in effect be moving all your data to the "slow" sections of the disk and (practically) everyone thinks it's a good thing.

          (Obviously there are other advantages to defragging, but wouldn't it be possible to stick everything at the outer edge of the disk and take advantage of the higher transfer rates?)
  • by tps12 ( 105590 )
    A gentleman does not discuss his sustained disk transfer rates in public. I assure you, however, that they are adequate for my purposes.
  • Some real numbers.. (Score:3, Informative)

    by Outland Traveller ( 12138 ) on Tuesday August 06, 2002 @02:38PM (#4019928)
    I've benchmarked our various disk subsystems heavily.

    Once you exhaust the on-disk cache and the filesystem cache, the raw disk access speeds are visible. Here's what I've found for Seagate ATA-IV 80GB drives:

    Sustained Sequential read: 40MB/sec
    Sustained Sequential write: 17MB/sec

    That was benchmarked on a 2GHz dual Xeon system under Linux with nothing else running, and IDE tuned optimally. So, real life results are going to be worse.

    It would not surprise me to see most consumer level systems with sustained speeds on a single disk under 10MB/sec. Most systems that use IDE drives don't have the DMA/ATA mode settings tuned aggressively.

    Most systems with RAIDs have crappy implementations. Get a hardware RAID controller with its own processor and a large-bandwidth backend bus (ie, SCSI-160 or higher) and with lots of onboard battery-backed memory so you can safely turn on write caching.

    • I have to agree with Outland Traveller (because this is pretty much how I did it when building my home box)

      >Get a hardware RAID controller with its own processor and a large-bandwidth backend bus (ie, SCSI-160 or higher) and with lots of onboard battery-backed memory so you can safely turn on write caching.

      Personally I make the following recommendations:
      1. Make sure the RAID card supports write-back caching using on-card RAM, and that you can increase the amount of RAM yourself.
      The Adaptec AAA-130 and AAA-131 cards I played with were only configurable as write-through (all the RAM was read only), which sucked.
      The American Megatrends MegaRAID card that Dell OEMs (the PERC/2 cards, I believe) and whatever card Compaq was using as their RAID card on their 6500 machines both have settings to do write-back. Makes a BIG difference.
      2. Put as much memory in it as it will hold. There is no substitute for cubic inches.
      3. RAID 1 is mirroring (slow but safe), RAID 0 is striping (fast but no redundancy). I recommend having good backups of your data, and going RAID 0.

      And if you want to go really, really, really fast you can load up your machine with RAM and shadow the entire drive in a hybrid ramdrive / readwrite cache - http://www.superspeed.com - look for the SuperSpeed product. Doesn't matter what kind of hard drives you use as it mirrors the entire partition in a ramdrive of the same size, writes data back the same way a write-back cache works.

      • RAID intricacies (Score:5, Informative)

        by photon317 ( 208409 ) on Tuesday August 06, 2002 @03:54PM (#4020498)

        On RAID technologies, speaking in general terms assuming vendors do a good job of implementing it, here's a summary:

        RAID 0: Pure striping, maximum performance, no redundancy. Cost is the same as concatenating disks to get the space you need.

        RAID 1: Pure Mirroring, full redundancy - reads can be as fast as a stripe of the same width as the number of mirrors (2-way stripe, 2-way mirror, same read speed, etc) if they do round-robin reading. Writes happen in parallel, and can be slower unless you've got the headroom and the disk spindle is the only write bottleneck. Cost is double a simple concat or stripe.

        RAID 2-4: Sometimes used for very special purposes, but generally ignored by all because one of the other RAID levels does the same thing better. I've seen RAID-3 recently; there are occasionally valid uses for it, for maybe 0.01% of people out there.

        RAID 5: You get some data redundancy to survive a single disk failure, but you don't pay the double disk cost of full mirroring. It's an N+1 type of configuration. Speed is generally the slowest compared to everything else.

        Now on top of those very basic things, there are other factors. Because RAID-5 is cheapest disk-wise, and (IMHO) because it has the highest number of the well-standardized RAID levels, RAID-5 is very popular. To make up for RAID-5's abysmal performance, people use hardware RAID-5 accelerators with cache and whatnot. The problem there is that the controller can add significant cost (in some cases enough to have paid for a full mirror in plainly controlled disks), and that the RAID controller itself can become a single point of failure.

        At my office (where a lot of bad decisions get made every day and I have to eat it) they built a Veritas cluster of Sun machines around a SAN. The idea was that no node was a single point of failure because of clustering (with Veritas allowing all nodes to reach the SAN storage). However, the SAN storage was a big fat RAID-5 array with redundant controllers/disks/yadda/yadda. Of course, as much as the vendor tries to bury it in the fine print, the RAID-5 hardware is a single point of failure. Sure enough, our very reputable vendor's "redundant" hardware RAID-5 controller did fully fail once, knocking our data offline for hours.

        For the same cost as the expensive RAID-5 array and the disks in it, we could have bought two independent JBOD arrays (just a bunch of disks, no raid controller), placed them on the redundant SAN, with the redundant clustered machines doing software mirroring to the disks, and been truly free from single points of failure (assuming we do all the details right - that the mirrors are always across separate arrays, and that the arrays are on separate power, etc).

        I've spent a lot of time on these problems, and it is my strong belief that the optimal solution for almost all normal situations where you want high availability is to do software mirror/stripe (1+0). Be careful that there is a difference between 1+0 and 0+1 when the 0 part's stripe is more than two disks wide... Consider two JBOD arrays of 5x 36G disks each...

        In 0+1, you first stripe each array into a 180G stripe, then mirror the two together. When your first disk fails, nothing so much as hiccups. However, of your remaining 9 disks, if any of the 5 disks in the array opposite the one that had the first failure fails, you will lose data. Thus there's a 5/9 chance that the second disk failure causes data loss.

        In 1+0, you first mirror each disk from the first array with its partner in the second array. You then take your 5 36G mirrors and stripe them together for your 180G. Again, first failure, no hiccups. If a second disk fails, in order to cause data loss it must be the partner of the first failed disk - any of the other disks can fail and you still lose nothing. So the chances of data loss on a second disk failure are now 1/9 instead of 5/9.

        • PLEASE mod this one UP. It is one of the better replies I have seen on /. in the recent past. It is accurate, both theoretically and practically, and well written.

          Congrats Photon!!!
        • Re:RAID intricacies (Score:2, Interesting)

          by lewiscr ( 3314 )
          If you want more info, I googled a good site. The explanations/advantages/disadvantages are mediocre, but the diagrams of disk blocks are worth 1000 words.

          RAID Info [acnc.com]

          It took me a while to figure out, but the numbers ("0 1 2 3 4 5 6 7 10 53 0+1") in the upper right hand corner are links to different RAID level explanations.

          It even explained RAID 2, which I haven't seen before.

    • Replying to my own post with extra information for the benchmark-interested..

      The benchmarks were performed with IOzone on top of an ext3 file system. Write caching for the IDE drive was DISABLED, which adds latency to write calls and is somewhat responsible for the lower scores.

      I've learned the hard way that leaving the write caching on (the default setting!) on an IDE drive can hose even a journaled filesystem during a system hang or sudden power loss event. Does anyone know if there is any way around this problem, other than disabling write caching?
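    For reference, the disabling itself is just one hdparm flag on Linux IDE (a sketch, worth trying on a non-critical box first):

    hdparm -W0 /dev/hda                          # turn off the drive's own write cache
    hdparm -I /dev/hda | grep -i 'write cache'   # verify under the enabled-features list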
  • Massive databases or large files:
    On my fairly new Dell Latitude C800 (30G OEM IDE drive, PIII/1GHz laptop) I have seen that sequential database reads with a little data crunching runs around 16 megabytes per second.
    Change that to read/write access (roughly 50/50) and it drops to 1.5MB/s read, 1.5MB/s write (total, 3MB/s).

    On my desktop, with two IBM 9.1G u160 SCSI drives in a RAID 0 array using an American Megatrends MegaRAID card (428) and 32M of RAM for read/write cache, the sequential read access only peaks around 10MB/s, but in read/write access it is something like 3MB/s read, 3MB/s write for 6MB/s combined.

    The SCSI drives were rated u160, but my card was only a 20 (68-pin U/W; hell, I forget what the 428 is rated for, but I think 20), so even in a RAID 0 array it wasn't going to go any faster than 10MB/s peak sustained read.

    If the file sizes were less than 16M, the writeback cache on the SCSI RAID array skewed the benchmarks bigtime, access times were almost as fast as ramdrive. Goes REAL FAST.

    On a regular IDE drive, I would be insanely happy with anything better than 20MB/s unless you were doing some serious transaction based computing.

    If you have to get a stopwatch out to decide if one is faster than the other ... they are the same speed.
    • by gbnewby ( 74175 ) on Tuesday August 06, 2002 @09:03PM (#4022553) Homepage
      This topic is near and dear to me....truly "news for nerds, stuff that matters."

      My application is information retrieval [unc.edu]; I'm using some software that utilizes BerkeleyDB files at the back end. I spent the last week trying to figure out why I wasn't getting better throughput, and eventually figured out it's related to BerkeleyDB's handling of lots of tree duplicate pages. But that's not why I wanted to post.

      One thing people didn't mention: The file system. The file system can make a big difference. For larger files, think about ext2 or XFS. For lots of small files, think ReiserFS. ext3 does journaling and is supposed to have comparable throughput. There's a lot of information out there about filesystems, including a filesystem HOWTO at ibiblio.org. Pick the right filesystem for your application.

      Here's what I found. I was copying an 8GB file back and forth (this was one of my DB files; yes, it was sparse, I used "cp --sparse=always"). This was on a Dell 530 with dual 1.7GHz Xeons, 2GB of PC800 RAM, an Adaptec 39160 controller (U160 SCSI) and JBOD (just a bunch of disks=no raid). Linux kernel is 2.4.18-64GB-SMP on a SUSE 8.0 distribution. The experiments were between different drives on separate channels on the same controller. The drives are 73GB 10KRPM Cheetahs.

      I copied the 8GB file and a few other multi-gig files, and used "vmstat" to track progress. This is NOT the way to benchmark for files of just a few meg or even a few hundred meg, because it only samples every few seconds. But for long-running processes, I would "vmstat 10 10000" (resample every 10 seconds; 10000 times) and watch as the files copied in the background on a quiescent system. The "bi" column is blocks in (typically 4KB blocks, but you can tune this on your system); "bo" is blocks out.

      I did XFS to ext2 and back again. I also copied off a ReiserFS drive.

      Both XFS and ext2 were comparable for reading & writing. They peaked at about 35,000 bi or bo. 35,000 * 4096 bytes per block =~ 143MB/second. In other words, I was getting close to the max transfer rate for the SCSI bus (160MB/sec per channel). Long-term average was closer to 25K blocks or ~100MB/second.
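      If you don't want to do the block math by hand, a quick awk filter over vmstat does it (a sketch; check which columns bi/bo land in on your version of vmstat, they're 9 and 10 on mine, and adjust the 4096 if your block size differs):

      vmstat 10 | awk 'NR>2 { printf "read: %.1f MB/s  write: %.1f MB/s\n", $9*4096/1000000, $10*4096/1000000 }'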

      With a ReiserFS, either for reading or writing, the pattern was that it could peak at ~18K blocks bi or bo, but generally was far lower, on the order of 3000-8000 (i.e., sustained rates of about 12-35MB/sec). What seemed to be happening was the other drive (XFS or ext2) would outpace the ReiserFS' ability to read or write, then wait. If you read the ReiserFS info, they admit this is part of the design (ReiserFS is *great* for loads of small files, really really great). For longer files, they end up needing to basically chain it across a lot of blocks in their B-tree.

      I know the question was about IDE, not SCSI, but I'm sure that the filesystem matters for IDE as well, especially if very large files are involved. If you're working with large files and are willing to lose a percentage to block roundoffs, some filesystems let you choose a block size > 4096 (though I think Linux ends up chunking in 4K blocks anyway).

  • Fun I/O realities. (Score:3, Interesting)

    by ivan256 ( 17499 ) on Tuesday August 06, 2002 @02:43PM (#4019972)
    From your average on-board IDE controller without any special configuration, the numbers you're seeing look about correct. The fastest you can really expect to get with any consistency is like 15MB/sec, and that's with tuned interfaces AND tuned I/O. With a high quality IDE controller, or a reasonable SCSI controller, and fast discs (10,000RPM) you can get 50-75% better than that. The fastest I/O I've seen in Linux was with 2 gigabit Fibre Channel, and an array of 15 striped 15,000 RPM disks. I managed about 120MB/sec, and that was only with certain block sizes. The average was still in the 60MB/sec range.

    Bottom line, with a 7000 RPM IDE disk, and a regular cp command using a 4kB or so block size, you're probably not going to get better than 10MB/second. Disks are just too slow.
    • Not to flame but.... (Score:2, Informative)

      by Mad Quacker ( 3327 )
      Are you kidding me? That's why we have ATA/133 coming out, because IDE drives are getting that fast, oh wait, that must be 133Mbits/sec (*sarcasm*) (yes people have told me this)

      Try turning on DMA, you absolutely _need_ DMA turned on for modern drives, PIO Mode 4 maxes out at 16MB/sec with 100% cpu utilization, PIO Mode 5 isn't official and will most likely break your hardware. After you turn on DMA you can set your interface speed at 16/33/66/100/133MBYTES/sec.

      I hate to repeat myself but here are the Specs for ST318452LW, Cheetah x15

      Internal Transfer Rate (min) 548 Mbits/sec
      Internal Transfer Rate (max) 706 Mbits/sec
      Formatted Int Transfer Rate (min) 51.8 MBytes/sec
      Formatted Int Transfer Rate (max) 68.1 MBytes/sec
      External (I/O) Transfer Rate (max) 160 MBytes/sec
      Avg Formatted Transfer Rate 61 MBytes/sec
      • by ivan256 ( 17499 )
        I don't need a lecture about how to configure my system to use DMA, I write I/O device drivers for a living, and I'm fairly sure I know how to use them.

        Those specs you give are great, but the ST318452LW is a 15,000 RPM SCSI disk, not a 7000 RPM ATA-133 disk. Throw a filesystem on there, and do I/O in 4kB or smaller chunks, and you'll see 35MB/sec, which is exactly in line with the numbers I gave in my post. Sure you can get 61MB/sec average with that disk if the only thing you ever do is something like "dd if=/dev/sda of=/dev/null bs=4096k", but that's not a real world type use of a disk, is it?

        Now, take your 160MB/sec interface, and make it 133MB/second, make the spin rate of your disk half of that, and decrease your bit density, and you'll take another 60% off that speed. We're back down in the 10-15MB/second range that I was mentioning. This isn't rocket science.
      • Ahem "Mad Quacker" but ATA/133 has a theoretical burst limit of 133 MB/sec. That is extremely theoretical, and since most consumer boards are not built to maximize performance on the interface, the problems are compounded. A consumer-grade 7200RPM disk connected to a consumer-grade controller built in to a consumer-grade motherboard on a home-built computer with regular ribbons is extremely unlikely to be capable of reaching a sustained rate above 15MB/sec, even if only one disk is connected to the controller and the controller is entirely alone on its PCI bus (not a likely situation).

        Besides, the ST318452LW disk that you quote is a Ultra-160 disk with a 15,000RPM spindle - it's not even roughly comparable to a 7200 RPM IDE disk. Even there you'd have to have everything tuned to perfection, with a very capable SCSI card and good distribution of devices on your PCI bus to get 61MB/sec in realistic random writes of varying sizes in a linux or windoze OS.

        Remember, there's a huge gap between the theoretical limits and the realistic expectations of hardware.
        • extremely unlikely to be capable of reaching a sustained rate above 15MB/sec,...

          Am I reading this correctly? I bought a 160GB drive from CompUSA a few weeks ago that included an ATA133 card for my old P120 throwaway Compaq. I'm getting a sustained 20MB/sec copying /dev/zero into a file. Is this the kind of transfer rate you are talking about? My 6 month old Toshiba laptop has exactly the same performance. Both are running an untuned Gentoo install, but both Red Hat and Mandrake with a simple hdparm tweak did the 20MB/sec too.
          • Right. Copying /dev/zero into a file is not a fair test of performance since (depending on how you're doing it) it's almost nothing like regular day-to-day operations. Are you using dd? You'll see very different results on random read/write performance by users in various tasks - I'm talking sustained performance on randomly sized reads and writes from random types of user data.
           • (a) They're talking about megabytes (MB) not megabits (Mb), and (b) if you're doing straight file writes they're going to RAM and will be written from the buffer cache later.
        • Odd.

          On a ~4-year-old K6-2 350 with 64 megs of RAM, running FreeBSD and default IDE parameters, I get 19 megabyte-per-second I/O on the 30 gigabyte, 5400rpm (pre-Quantum) Maxtor hard drive it has, according to bonnie with a 500 megabyte test file.

          bonnie, for the unaware, is a benchmark written specifically to eliminate an operating system's buffer/cache from the equation, and does well at this task as long as you specify a test size which is significantly larger than system RAM (or whatever else the machine can use for caching).

          Therefore, a 500 meg test on a 64 meg PC is a fine measure of sustained throughput. It also conducts its testing at the filesystem level, and thus presents a valid measure of real-world performance in a cross-platform fashion - including disk fragmentation and other factors that really do slow things down in real life.

          We'd all be doing ourselves a favor if we used bonnie for such discussions of this sort. It's free, C, and easy to compile wherever things can be compiled.
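          Invoking it is about as simple as this (a sketch from memory; -s is the test size in MB and should be several times your RAM, -d is the scratch directory, -m just labels the report):

          bonnie -d /tmp -s 500 -m `hostname`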

          No idea what transfer mode things are happening at, except that it can't possibly be any faster than DMA33, as DMA66 and its funky 80-wire cable hadn't yet reared its ugly head toward the world.

          Cables are plain old 40-conductor jobs which came with the generic, bargain-hunter motherboard.

          Don't tell me I'm just lucky, here. Or that this is an isolated event.

          And don't tell me there's such a thing as a professional-grade IDE anything. It's all trash. But it's fast, cheap, and demonstrably has been for years.
    • I'm not sure why a regular cp command would be using a 4kiB block size. I doubt that it does on FreeBSD.

      For kicks, I just did some large file (around 450M) copies, some to /dev/null (to measure read throughput) and some to new files (to measure read+write+file allocation). I get a smidge over 24MB/s for reads, and a smidge under 20MB/s for copies. This is with the system "cp", through the file system (FFS+softupdates) on a P3-500 box. The filesystem is striped using vinum under FreeBSD 4-stable over a pair of Seagate 7200RPM 80G IDE drives (cheap). The drives are ATA-100, but the motherboard only knows the UDMA33 style. The drives are each on the separate controllers of a i440BX chipset. I.e., very standard motherboard, new drives, no special tuning done to the filesystem.

      Trying again with dd instead of cp, and bs=4k, the read rate drops to 18MB/s and the copy rate drops to about 18MB/s too, so don't do that...
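      (For reference, the tests above boil down to something like this, with bigfile being the ~450MB test file:)

      time cp bigfile /dev/null                # read throughput through the filesystem
      time cp bigfile bigfile.copy             # read + write + block allocation
      time dd if=bigfile of=/dev/null bs=4k    # same read forced into 4k requests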
  • Something around 10MB/sec sounds about right for application-level I/O with a good ATA100 drive using DMA, at least from my experience with Windows. If you bypass all the application layers (file system with potential for fragmentation, disk cache with possibly non-optimal prefetch) and go directly to the device you can get better numbers, but few apps can do that.

    • I use Linux on a T-bird 900MHz with an Asus A7V. My 2 hard drives are connected to the ATA-100 IDE. One hard drive is ATA100 7200RPM, and the other is a 7200RPM 15GB ATA66. For typical use (applications, file saving and such), average transfer is between 5-20MB/s, according to my xosview. When I've copied large files, I've seen a peak at about 70MB/s, but only momentarily throughout the transfer. (source and destination on the ATA100)
  • Fast! (Score:2, Funny)

    My sustained data rate goes up to 11.
  • I have two WD200BD (Western Digital EIDE Protege 20GB) drives which I bought for cheap two years ago. Each drive is 20GB, turns at 7200RPMs, and has a 2MB cache.

    I used an Abit KT7-RAID (HighPoint HPT370 RAID controller) motherboard and set the drives up in a RAID-0 array. I was running in PIO (eww) until I ran some benchmarks which rated my drives at around 4-5MB/sec - I changed to UDMA 5 (ATA100) and I got the following results with Nbench.

    Disk Performance, MBytes/sec
    File size: 100.0 MBytes

    thread: 0
    write: 37.97
    read: 25.94
  • by Drew M. ( 5831 )
    /sbin/hdparm -d1 -c3 /dev/hd[abcdefgh] is generally safe for most IDE chipsets and could very easily double or triple your transfer rate.

    I get about 35MB/s copying between my IDE IBM 60G drives
  • If you're looking for nothing but speed, try solid state...I have seen file servers that use solid state. Yea, I guess you could try and find the fastest hard drive on the planet, but if you really NEED higher transfer rates, nothing beats a drive from one of the manufacturers like Curtis [mncurtis.com] or Imperial [imperialtech.com].

    But, most of these solutions are expensive...you could try to keep your costs down by just adding more memory to your systems and using a chunk of it for a RAM Drive (a 1GB RAM Drive on a modern system should be more than possible).

    Once you've experienced what it's like running your OS off of a RAM Drive, you'll never want to run off of physical media again.

    Ok, so that's what's good about it, now what's bad...well, you should still keep backups on physical media if you use the RAM Disk method...You should also purchase a UPS with power management features...And if you want a better solution, go with a much more expensive Solid State Drive.
    • Gee, thanks for the tip. So when someone is in the market for a sporty car, you can chime in and say "why don't you just try a Ferrari."
      • Really? Well, my intention was to point out the use of a RAM Drive. And since RAM is very cheap now (under $100 for 512M of Mid-Range DDR) and since some modern motherboards will take over 2GB of RAM, it's not too hard to figure this one out...

        However I actually like the Amiga's RAM drive the most...besides the RAD drive (a RAM drive that survives reboot and can even be booted from)...the Amiga's RAM drive dynamically allocated the required amount of RAM to the drive...never too little...never too much...So it just made sense to keep the RAM drive mounted, because if you weren't using it, it was barely taking up anything.

        Should we start naming the things that we can do with a 1GB RAM Drive?

        - Downloads don't fragment the Hard Drive
        - You can burn ISOs directly from RAM
        - You can put your entire OS on your RAM Drive
        - Use it for log files and caches
        - The uses are endless

        Consider the cost of 1GB of RAM (~$200) and think of it as investing in a drive that is faster than any drive even under development...not even SCSI RAID can come close to the speed that you'll see from a RAM drive...
  • by ScottG ( 30650 ) on Tuesday August 06, 2002 @03:15PM (#4020211)
    At least on IDE drives, using the hdparm tool can greatly improve performance of modern drives. I found my throughput went from 3 MB/sec to 22 MB/sec with just a few tweaks.

    Most distros use very conservative settings for the IDE interfaces which will work with just about any old drives, but do not take advantage of more modern hardware. hdparm allows you to activate those advanced features.

    There is a nice write-up about using hdparm here: http://www.oreillynet.com/pub/a/linux/2000/06/29/hdparm.html [oreillynet.com]

    Of course, all this only applies to Linux boxes.
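    The short version of that write-up, as I remember it (read the man page first; an over-aggressive transfer mode setting can hang a box):

    hdparm -Tt /dev/hda            # baseline: cached and raw device read timings
    hdparm -d1 -c1 -u1 /dev/hda    # enable DMA, 32-bit I/O and interrupt unmasking
    hdparm -Tt /dev/hda            # measure again and compare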

  • The bottleneck is the design of the mechanical disk. You can minimize the bottleneck by having more disk spindles handle the I/O.

    As you've found out it does matter which RAID scheme you use. RAID 0+1 will outperform RAID 5 substantially.

    Think spindles. Because each disk has only one spindle, the disk head can only be over one given track at any instant. If you want a head to be nearer to where your data is stored, you want to have more heads. With RAID 1 your read or write request can be handled by more than one disk spindle. That gives you the best performance.

    To get more spindles, use as many disks as practical. I've had some long conversations with my co-workers about how, now that disks are really cheap, it doesn't matter that RAID 1 "wastes" half the disks. It does matter that disk I/O is a bottleneck, and more disks will help ease that bottleneck.

    References:
    "In general, when cost is no object, RAID 1 or RAID 0/1 provides the best overall performance. Since striping spreads the I/O load across multiple disks, RAID 0/1 has the best overall performance characteristics of any RAID option. However, if you know ahead of time that the proportion of writes to disk is low, you can fall back on a less expensive RAID 5 configuration. In addition, if there is adequate battery-backed cache memory in the configuration, you may be able to support a moderate amount of disk writes under RAID 5. But even with large amounts of cache, a heavy write-oriented workload is likely to cause performance problems under RAID 5."

    "To optimize your file layout, follow these...rules:

    1. Use RAID
    2. The more disks, the better"
    http://www.swynk.com/friends/israel/optimaldisk.asp [swynk.com]

    "If your SQL Server is experiencing I/O bottlenecks, consider these possible solutions: Add more physical drives to the current arrays. This helps to boost both read and write access times. But don't add more drives to the array than your I/O controller can support.
  • I used to have this line in my rc.local:

    hdparm -m16 -X66 -d1 -c3 -u1 /dev/hda 1>/dev/null 2>/dev/null
    hdparm -m16 -X66 -d1 -c3 -u1 /dev/hdb 1>/dev/null 2>/dev/null

    You can see what that means with `man hdparm`.
    That command would speed up my hard drive from 3MB/s to a 20MB/s transfer rate.

    The new KT333 chipset (UATA 133) I use doesn't need hdparm to set the device mode (make sure you use 2.4.19), and transfer between two hard drives on different controllers is about 20MB/s. The lowest I got was 13MB/s...
    Hope this helps.
    • Definitely use 2.4.19 if you're running the onboard IDE on a recent Athlon chipset. 2.4.18 on my Sis735 motherboard had lots of CRC errors when I enabled DMA and it reverted to PIO4 and a pitiful 4MB/s.
  • We just built a 2 TB fileserver using two 3Ware 7850 controllers, with eight 160 G Maxtor drives per controller. Each controller has RAID 5 across all its drives. We split each RAID 5 partition into "inner" and "outer" partitions, and striped inner-to-inner and outer-to-outer using software RAID 0. Bonnie++ benchmarks show the "outer" array is getting > 241 MB/sec sustained read, and > 81 MB/sec sustained write.

    Click here [vanderbilt.edu] for the Bonnie++ results
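    For anyone wanting to reproduce the striping layer, it's just ordinary Linux software RAID 0 over the two cards' block devices; a hypothetical sketch (device names made up here, and raidtools' /etc/raidtab works just as well as mdadm):

    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=64 /dev/sda3 /dev/sdb3   # one "outer" partition from each 3ware card
    mke2fs -j /dev/md0                                                                  # ext3 on top of the stripe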

  • IDE vs. SCSI (Score:2, Offtopic)

    by mbyte ( 65875 )
    Bonnie++ Tests with U160-scsi and IDE:

    IDE promise ATA hardware raid, 2x 80 gb maxtor:
    Version 1.02c ------Sequential Output------ --Sequential Input- --Random-
    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
    cotopaxi 1G 16356 99 34258 41 9183 7 14602 89 50924 16 351.6 1

    that's 50 MB/sec.

    now 3x fujitsu 15k rpm scsi-u160 drives, running software raid5:

    Version 1.02b ------Sequential Output------ --Sequential Input- --Random-
    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
    stromboli 3600M 9777 84 24733 68 18517 68 11916 98 53297 65 378.1 3

    Again, around 50MB/sec ... but I don't need to tell you that the 2x80 GB Maxtors were quite a lot cheaper ;)

  • Linux: Install hdparm & smartctl (apt-get install smartsuite for the Debheads out there)

    I get 40 MB/sec sustained uncached sequential read on Maxtor D7xxX drives and about 20 MB/sec writing.

    Windows 2000: Windows 2000 Ultra DMA doesn't actually work right in pre-SP2 environments, most of the time. You can see if your drives are in DMA mode in Device Manager. Note that DMA / PCI bus mastering involves a *LOT* of overhead. For example, striping across onboard controllers and external controllers will create a nightmare for the bus controller. Even striping two Maxtor or Seagate drives on different PCI slots will hurt your bandwidth. I think this will probably all be covered reasonably, though, in other replies.

    My question is this: on 35 or 40 servers with a myriad of drive brands, types, and interfaces (SCSI and IDE) I seem to get a lot of errors. Even pre-built new-in-box $10,000 Dell servers generate lots of errors that seem to be non-critical. I do not understand:

    #smartctl -v /dev/hda
    Vendor Specific SMART Attributes with Thresholds:
    Revision Number: 10
    Attribute                      Flag   Value Worst Threshold Raw Value
    (  1) Raw Read Error Rate      0x000e 116   062   025       101838729
    (  5) Reallocated Sector Ct    0x0033 100   100   036       0
    (  7) Seek Error Rate          0x000f 056   037   030       127734127
    (194) Temperature              0x0022 056   065   000       59
    (195) Hardware ECC Recovered   0x001a 072   063   000       2448078
    (198) Offline Uncorrectable    0x0010 100   100   000       0
    (199) UDMA CRC Error Count     0x003e 200   200   000       0

    I'm not sure I understand. No UDMA CRC errors, but hardware ECC recovered?? Is this normal??? At least there have been no reallocations. But anyway, I frequently copy 100 MB files between drives (on different controllers) at a rate of about 60-70 MB/sec. In a dual CPU setup or P4 setup (with PCI IRQ transform) I get *much* better. I can literally copy 160 MB files from one RAID 0 set to another in about 3-4 seconds with some of those Maxtor drives. The PCI bus maxes out at about 120 MB/sec. And in, say, a network environment, you'd have the disk controller and network card fighting over PCI bus mastering -- creating a lot of overhead.
  • I've seen 2 MB/s RAID5 writes on a Compaq SmartArray-SL, and I've seen 475 MB/s for RAID5 writes on an AlphaServer ES45 with four LP9000 2Gb/s fibre cards attached to a Compaq Enterprise Virtual Array...

    It also depends on the speed of your disks. Are they antiquated 7200 rpm disks, or are they newer 10000 or 15000 rpm disks?

    And of course if you are doing RAID in software...that's a whole different animal.
    • I would give my left nut for a tutorial on how to get Fibre Channel drives working. I have seen the drives themselves for sale dirt cheap but where do you get the cards, which cards to get, how do you connect them up, what cables do you use, are the cables fiber optic looking things that need special handling (cut and polish and glue like fiber optic networks) or are they plug and go, what is the topology of the drive layout in a multi-drive setup, what's this I hear about needing a 'hub' and what is the 'hub' ...

      Anybody have a newbie's primer on how to go from zero to operational with Fibre Channel drives?
  • Okay folks, this isn't a 'whose wanker is bigger' discussion. Understand that the difference between real life performance and peak theoretical performance is sort of like the difference between measuring horsepower at the REAR TIRE and measuring horsepower at the PISTON RINGS.

    I am guessing the original author doesn't spend most of his work day copying massive files on a recently defragmented hard drive to /dev/null. Safe bet? So let's quit using that as a benchmark.

    How about popping open perfmon (or whatever the *nix equiv is) and then starting whatever you do in real life - sequential read on a database or make updates to the database or compile a program or whatever floats your boat ... and then go back and look over the graph to see what the transfer rates were. Now we are talking about practical performance, typical of what the original guy wanted to compare to.
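    On the *nix side, iostat from the sysstat package is roughly the perfmon equivalent, assuming your copy knows the extended-stats flag:

    iostat -x 10     # extended per-device stats every 10 seconds while your real workload runs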

    Can we all get unrealistic numbers if we hand tune the drives (defrag em, move the test files to the outer tracks, kill all other processes, etc..)? Hell yea. On my SCSI RAID box with 32M of r/w cache anything under 16 megs moves at some insane speed like 400 megabytes per second. Is this real world performance? Depends on what I am doing (for surfing the web, all of the files are smaller than 16M so yea, it is real world performance) Am I ready to stand up and say my drives move data at 400 MB/s? D'oh (no.)

    If you are going to benchmark, use some real life numbers. We don't know you. We don't care. But give us valid numbers we can use to validate the real life data we are getting.

    Glonoinha.
    If you have to get a stopwatch to see one machine is faster, they are the same speed.
  • time dd if=/dev/zero of=largefile2 bs=1024k count=1024
    1024+0 records in
    1024+0 records out

    real 0m10.015s
    user 0m0.010s
    sys 0m7.810s


    This gives me about 107MBytes/second for writes.
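    (To rule out the buffer cache flattering that number, the same test can be re-run with a trailing sync inside the timing, e.g.:)

    time sh -c 'dd if=/dev/zero of=largefile2 bs=1024k count=1024 && sync'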

    The RAID-1 system drive is significantly slower.

    - A.P.
  • I was just testing a bunch of SAN gear from 5 different makers... Running raid0 gave about 110MB/s sustained (at best) with an IO mix of 2/3 reads with 50/50 random/sequential mix. I believe in that case it was my 1Gb FC connection that was the bottleneck. Other raid configurations gave anywhere from 60-80 MB/s sustained with the same IO mix. Vendor names have been withheld to protect the guilty.

    • Running raid0 gave about 110MB/s sustained (at best) with an IO mix of 2/3 reads with 50/50 random/sequential mix. I believe in that case it was my 1Gb FC connection that was the bottleneck.

      It would have been. I've seen similar performance over a single FC channel (through a Brocade switch) to a Hitachi SAN. You'll need more FC cards if you really want to do performance testing. (The added failure protection is nice too.)

      - A.P.
  • One of the hidden advantages people don't notice in a multi-disk array is its ability to handle parallel reads / writes.

    My single 40 Gig IDE drive has a sustained xfer rate (according to hdparm -t) of about 20 MByte/sec. My 4x2Gig SCSI 7200RPM software RAID-5 array has about an 11 MByte/sec xfer rate (they're pretty old Seagate Barracudas on a 2940 SCSI card).

    I use postgresql's pgbench tool, and with only one or two simultaneous connections, the 40 Gig IDE spanks the RAID-5 pretty badly. But somewhere around 10 simultaneous connections, the RAID-5 passes the single disk IDE in performance and never loses to it as the numbers climb.
    So, when people say RAID-5 performance is abysmal, they often are only looking at its ability to handle one data stream. The real beauty of RAID in general is its ability to spread those accesses out across many platters.
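    For anyone who wants to repeat the comparison, the pgbench runs look roughly like this (a sketch; "bench" is a throwaway database, scale factor and client counts to taste):

    pgbench -i -s 10 bench         # build the benchmark tables at scale factor 10
    pgbench -c 2 -t 1000 bench     # a couple of clients: the single IDE disk wins
    pgbench -c 10 -t 1000 bench    # around 10 clients and up: the RAID-5 pulls ahead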
  • ~60MB/second sustained on a 6th-generation Cheetah. They're the fastest, IMHO (and most expensive).
  • You can get the rated performance off modern disks - I do it for a living, building video servers. But it doesn't come easy - there are a lot of traps which you have to be sure not to fall into.

    First thing is you have to have double-buffered commands - and I don't think ATA/IDE can do that. SCSI can - you issue command 1, and while the disk is getting its act together, you issue command 2 before the data for command 1 arrives - and command 3, and command 4... To my surprise, performance increased beyond just double buffering.

    Some people will tell you that this is not necessary - the disk reads ahead in case you want the next block. Yes, but usually only for transfers of 64Kb or less.

    Fragmentation is a real killer. Avg access time, say, 5 millisec. At 50 Mbyte/sec, that is time enough to transfer 250Kb for each and every discontinuity.

    You basically need to adopt a double-buffered, streaming approach throughout the process - and I bet that somewhere in the OS, something doesn't. And that one bottleneck can kill you.
  • Reading a 233MB file that happened to be on my disk: 233MB in 7.7s (about 30MB/s). I hadn't accessed this file in days, so it was not in buffer cache before doing this test.

    Copying the file: (Reading it all, and writing it): 233MB in 26.0s (about 9MB/s). This time it might have been in buffer cache, I ran this test right after the previous.

    Seems you have a batch of lousy disks. This is an IDE disk, "cat /proc/ide/hda/model" gives me "WDC WD400BB-32CLB0". The CPU is a 1.6GHz pentium 4.
  • I use IDE disks for video editing, and if I don't have appropriate data rates I get errors. So I am very meticulous. I do a mix of uncompressed video and DV25 work. Uncompressed video requires 23Mbytes/sec (NTSC); DV25 only requires 3.5 Mbytes a second, per stream. Frequently in editing or compositing I will use up to 9 data streams.

    My IDE drives can all sustain over 19.5 Mbytes/sec for reads in single configuration.

    I benchmark my systems with Matrox Disk Benchmark, provided with Matrox video editing equipment. Matrox Disk Benchmark writes random data to the disk. Typically the generated data set runs very large, usually in the tens of gigabytes. With such large data sets any cache is completely negated, be they OS or hardware.

    I ran a 16.5GB test for my reply, and just for giggles kept an instance of Winamp running as well as playing two separate full resolution NTSC video streams and surfing the web while the test ran.

    Here are the results for a Maxtor 160GB DiamondMax D540X 4G160J8 drive in a single configuration. The test platform is a dual Pentium 3 750 with 512MB RAM and Windows 2000 service pack 2. At the time of the test the drive had 53.22GBytes of 152.66 Gbytes free. Data is in MB/sec. The three measured values are: minimum/average/peak.

    Single Write
    8.43/30.28/39.96

    Single Read
    19.56/27.12/39.96

    Dual Read
    8.43/11.89/16.87

    I expect that the number that should interest you most is the average (middle number), though for my application the minimum data rate is critical.

    Aside from defragmenting drives regularly I do no special maintenance.

    I hope this information is of utility.
