Linux Software

Minimum Seek Hard Disk Drivers for Unix? 58

Jonathan Andrews asks: "I remember back in the old days reading about a filesystem/device driver that had almost no seeks of the physical disk. It worked by scanning the heads of the disk from track 0 to the end and back again in a constant motion. Disk reads and writes were cached so that they got written to disk only when the heads were over that part of the platter. My question is simple: now that disks are IDE, have lots of heads, and, even worse, differing Heads/Cylinder/Sector translation schemes, is this type of system even possible? Would you have to fight the disk cache on the drive? I seem to recall it giving real throughput advantages: if the cache was large enough to hold one sweep's worth of data, then the cache almost never blocked, and disk reads/writes sustained maximum throughput all the time. Best of all, it gets rid of that blasted seeking, chewing, seeking noise!"
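What's described here is essentially the classic elevator (SCAN) scheduling discipline. A minimal sketch of the ordering it produces, assuming requests are plain cylinder numbers (real IDE drives hide geometry behind LBA, so this is illustrative only):

```python
def scan_order(requests, head, direction=1):
    """Order requests elevator-style (SCAN): serve everything at or
    beyond the head in the current sweep direction, in passing order,
    then reverse and pick up the rest on the way back."""
    ahead = sorted((r for r in requests if (r - head) * direction >= 0),
                   key=lambda r: (r - head) * direction)
    behind = sorted((r for r in requests if (r - head) * direction < 0),
                    key=lambda r: (head - r) * direction)
    return ahead + behind

# Head at cylinder 50 sweeping upward: 60 and 90 are served in passing,
# then the arm reverses for 30 and 10.
print(scan_order([90, 30, 60, 10], head=50))  # [60, 90, 30, 10]
```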
This discussion has been archived. No new comments can be posted.

  • maybe for write... (Score:3, Interesting)

    by 3-State Bit ( 225583 ) on Monday March 03, 2003 @08:22PM (#5428687)
    Disk reads and writes were cached so that they got written to disk only when the heads were on that part of the platter
That would work fine for writes -- but we already have write-behind cache. We also have read-ahead cache, so that once you've seeked to the proper location, the first read will result in that whole general section being read, in anticipation of future reads from that area -- if it turns out not to be necessary, it'll eventually be overwritten by future read-ahead caching.

The problem with what you're proposing, of course, is that there's still the initial seek time to that location.

    Why would you defer your read until you got to where you were going "naturally", instead of doing so immediately? It would increase the total time until read.

For example, suppose you are trying to read some data that's almost at the outer edge of the platter, but you issue your request immediately after the read arm has hit that edge and started moving inward, having already passed the data you need. At this point, a simple seek would be almost instantaneous, since you could just move back to where you needed to be -- but under your "continual motion" scheme, you would have to wait until the arm travelled all the way to the inside of the platter, then all the way back to where it needed to be.

Of course, your plan would work fine if you had a cache the size of the whole damn platter your reads were coming from -- then you could continuously read the whole platter in one sweeping motion, and write back to it only when necessary. This is not, however, what I think you meant.
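    The worst case in that example can be put in rough numbers. A quick back-of-the-envelope sketch (the figures are made up for illustration, not taken from any real drive):

```python
# Made-up figures: a 10,000-cylinder drive whose arm crosses one
# cylinder per microsecond, i.e. a full-stroke sweep takes ~10 ms.
cyls = 10_000
us_per_cyl = 1.0

# The arm is at cylinder 100 moving inward; the target (cylinder 0)
# has just been passed.
head, target = 100, 0
direct = (head - target) * us_per_cyl                   # just reverse now
sweep = ((cyls - head) + (cyls - target)) * us_per_cyl  # finish the sweep first

print(direct / 1000, "ms vs", sweep / 1000, "ms")  # 0.1 ms vs 19.9 ms
```

    A ~200x penalty for insisting on finishing the sweep, which is why real elevator schedulers serve requests greedily within each sweep rather than deferring them.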

    So take-home lesson: We already have more than adequate write caches (dangerously so -- sometimes power loss means that megabytes and megabytes of data that have been reported as written to disk are only waiting to be written to disk, and if you don't power up the hard-drive before the battery runs out protecting the cache, you risk corrupting your data.)

    As for "read-behind caches" (i.e. reads of data based on requests you're going to receive, not based on requests you've already received) -- those aren't really feasible.

    Note: feel free to correct me, I'm no hard drive expert.
  • by Splork ( 13498 ) on Monday March 03, 2003 @09:52PM (#5429428) Homepage
    you think computers made entirely of commodity components are going to start adding additional ram and batteries to the mix as a standard feature?

    hahaha. tell another one.
  • by ComputerSlicer23 ( 516509 ) on Monday March 03, 2003 @10:28PM (#5429681)
    No, I don't believe battery-backed RAM will be the standard RAM in a machine. However, I won't be surprised to see cards come out that have it (any high-end SCSI card has it now). I wouldn't be surprised at all to see servers released with 128MB of battery-backed RAM. That's what makes high-end SANs and network storage work now. It'll migrate to lower-end systems.

    There was a story on Slashdot about exactly that type of card being sold by a company. Google for them; they aren't that hard to find.

    They will become a standard component in machines when the time is right. I'd pay $1,000 for a 512MB one any day of the week if it had a driver under Linux that made it look like a reliable block device. No questions asked. Having a pair of those and putting a copy of Oracle's redo logs on them would probably double the speed of my Oracle database -- and I just paid $15K to have 3 guys come in and get me a 20% increase in speed. If you could make a pair of highly reliable versions of those, I bet I could sell $1 million worth of them at 80% profit in less than 3 months, as soon as word got out about what they can do for database and filesystem performance. I don't have the personal capital to do so, or the technical skills to pull it off; I'm a programmer. And just as soon as I figured it out, somebody in Taiwan would put me out of business in a week, because that is the land of faster, better, cheaper for computer components.

    High-end permanent storage that has no seek time will become a standard feature on high-end servers just as soon as journalling filesystems become capable of putting their journals on separate devices. Right now ext3 can do that; I'm not sure about the others. Right now the only other real use is for an Oracle database. Everyone's current opinion is to just throw more disks at it. It's cute, but someday they'll figure out that it's highly cost-effective to have filesystems do their fsync-heavy work against an SRAM-based device. It's just another layer of caching, except the cache is permanent storage.
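    ext3's external-journal support looks roughly like this (device names are hypothetical; this assumes a battery-backed RAM card that shows up as a block device):

```shell
# Hypothetical devices: /dev/nvram0 is the battery-backed RAM card,
# /dev/sda1 is the data disk.
mke2fs -O journal_dev /dev/nvram0           # format the card as an external journal
mke2fs -j -J device=/dev/nvram0 /dev/sda1   # create ext3 using that journal
mount -t ext3 /dev/sda1 /data               # journal commits now hit the RAM card
```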

    Eventually, they'll become incredibly cheap. Battery-backed RAM or SRAM isn't that expensive, and in volume the price would come down. It's a lot more useful than those damn USB drives that hold 64MB of data, and those things sell in volume. I'm not sure they'll be built into the motherboard, but I'd be surprised if they aren't available for sale within 2 years as an expansion card.

    Kirby

  • WAFL from NetApp? (Score:2, Interesting)

    by reuel ( 166318 ) on Tuesday March 04, 2003 @04:23PM (#5435535)
    Perhaps you are thinking of the Write Anywhere File Layout (WAFL) from Network Appliance. The situation is different for a network storage box because of the large write-through caches on all the clients: most of the traffic the box sees is writes. That, combined with a striped RAID level 5 array where small writes are really expensive, leads you to the battery-backed RAM and WAFL file system described in a paper by Hitz, et al: http://www.netapp.com/tech_library/3002.html
