RAID2 Over TCP/IP?

Cheeze asks: "I was wondering if there are any working implementations of RAID2 over TCP/IP. This would be a logical solution for high availability and data redundancy. The ability to have two identical, separate machines with mirrored data stored on them would almost remove the risk of a hardware failure. I haven't heard or seen any documentation on this, but it should be relatively easy. On a high-bandwidth (>=100Mbps) private switched network, there should be no problem keeping the network bandwidth up to par with hard drive transfer speeds. A software solution would be practical, but a hardware solution would be optimal. Any ideas or possible future projects on this topic?" Is such a thing practical, or even possible?
This discussion has been archived. No new comments can be posted.

  • Perhaps I misunderstand the question, but isn't this what NAS is all about?
  • Please see http://www2.linuxjournal.com/cgi-bin/frames.pl/index.html

    From the article - A network block device (NBD) driver makes a remote resource look like a local device in Linux, allowing a cheap and safe real-time mirror to be constructed.
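    The NBD approach from the article can be sketched with a Linux toolchain -- a minimal outline, not a tested recipe, assuming nbd-server, nbd-client, and mdadm are installed, and with hypothetical host, port, and device names:

```shell
# On the standby machine: export a spare partition over TCP.
# (Port 2000 and /dev/sdb1 are example values; adjust to your hardware.)
nbd-server 2000 /dev/sdb1

# On the primary machine: attach the remote export as a local block device.
nbd-client standby-host 2000 /dev/nbd0

# Mirror a local partition against the network device. Marking the NBD half
# --write-mostly keeps reads on the fast local disk; writes go to both.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/sda1 --write-mostly /dev/nbd0
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/mirror
```

    If the standby dies, the md driver simply degrades the mirror and the primary keeps running; re-adding the repaired half later triggers a resync.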
  • I hadn't heard about NAS. I read up on it a little, and it is close to what I was asking for. A software-based solution would be best for testing purposes. Basically, I want the ability to have two separate servers running exactly the same software. This would allow one machine to suffer a hardware failure while keeping all data mirrored on a secondary live backup system. That system would be able to serve as a full permanent replacement, and would have all of the hardware of the original main server.
  • At least for databases, a conceptually similar scheme exists. Most real database systems (Sybase, Oracle, Informix, etc.) can replicate all their data to a standby server. If the primary server fails, clients will automatically switch to the standby server without data loss.

    This can also be accomplished with shared disks and OS failover solutions like IBM's HACMP, but I think you were looking for something that didn't require shared hardware.

    I don't know of anything similar for filesystems; databases naturally have a transaction model which provides atomic updates, guaranteeing against data loss. Implementing something similar at the filesystem level would probably take some serious kernel hacking.

    On the other hand, distributed filesystems like Coda, GFS, AFS, etc. might have failover capabilities.
  • If you've got a Solaris or NT box, Veritas has its Storage Replicator and Replication Exec [veritas.com] products.
  • Whoever said that databases can do this with standby servers was right on point. I would encourage you to think of this in terms of generic state management. When you have non-disjoint partitions (replicas), standard state-management theory says you have two possible types of consistency: eager and lazy.

    The fact is, most DB standby solutions are lazily consistent. Things like Wolfpack clusters or shared EMC volumes on fibre could almost be considered eagerly consistent, and DB techniques that do two-phase commit across all volumes are also eager replication.

    The main issue, though, is that eager consistency is really unnecessary in most cases and defeats scalability. By letting your data consistency be just a tiny bit lazy, you gain huge scalability -- purely eager consistency will always be EXPENSIVE. This is what IBM figured out at least 40 years ago, and no researcher denies it. Vendors will take advantage of you by telling you that you NEED their particular eager-consistency scheme, but in reality mainframes and all other systems do fine with designs that use queued operations and other asynchronous techniques. Spend $50 on a book that teaches you how to design for lazy consistency and you will save thousands or millions on hardware and customized proprietary solutions.
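    As a concrete (if crude) illustration of lazy consistency, a periodic one-way push to the standby is the classic queued/asynchronous design -- a sketch assuming rsync over a trusted network, with hypothetical paths and host names:

```shell
# Hypothetical crontab entry on the primary: replay changes to the standby
# once a minute. Anything written between runs can be lost on failure --
# that window is exactly the "lazy" trade-off that buys scalability.
* * * * *  rsync -a --delete /data/ standby-host:/data/
```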
  • Comment removed based on user account deletion
  • Technically you can do that with the standard Linux/*BSD provisions.

    I don't know how usable it would be over even a pipe as fat as that, but you could technically use a dual-volume replicated RAID, with each machine obtaining one volume from the other.

    Make a file as large as the desired dynamic portion of the filesystem on each machine, and export it via NFS/SMB/etc. Each machine mounts the other's export, attaches both the foreign and local files to /dev/loop devices, and uses them to form the RAID.

    On drive failure, the remaining machine shouldn't skip a beat, aside from an initial timeout. The kernel should handle it gracefully (but cluelessly).

    Now how you would go about automating and easing the 'server repaired' condition is left for a reader exercise.
    (read: I don't have a clue how you'd do it. )
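    The dual-volume scheme above can be sketched as follows -- a rough, untested outline of the steps on one of the two machines, assuming NFS, loop devices, and mdadm, with hypothetical paths, sizes, and host names:

```shell
# Create a backing file the size of the mirrored area (4 GB here), then
# export /export to the peer via NFS (e.g. rw,no_root_squash in /etc/exports).
dd if=/dev/zero of=/export/mirror.img bs=1M count=4096

# Mount the peer's matching export and attach both files as loop devices.
mount peer-host:/export /mnt/peer
losetup /dev/loop0 /export/mirror.img     # local half
losetup /dev/loop1 /mnt/peer/mirror.img   # remote half

# Assemble the two loop devices into a RAID1 mirror and use it normally.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/mirror
```

    When the peer goes away, md should drop /dev/loop1 from the array after the NFS timeout and carry on in degraded mode, as described above.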
  • How does the NBD compare to, say, NFS or SMB in terms of sustained throughput, reliable and coherent error handling, and ease of use?
