Distributed Data Storage on a LAN?

AgentSmith2 asks: "I have 8 computers at my house on a LAN. I make backups of important files, but not very often. If I could create a virtual RAID by storing data on multiple disks on my network, I could protect myself from the most common form of data failure - a disk crash. I am looking for a solution that will let me mount the distributed storage as a shared drive on my Windows and Linux computers. Then when data is written, it is redundantly stored on all the machines that I have designated as my virtual RAID. And if I lose one of the disks that comprise the RAID, the image would automatically reconstruct itself when I add a replacement system to the virtual RAID. Basically, I'm looking to emulate the features of high-end RAIDs, but with multiple PCs instead of multiple disks within a single RAID subsystem. Are there any existing technologies that will let me do this?"
  • Win2k (Score:5, Informative)

    by SuiteSisterMary ( 123932 ) <slebrunNO@SPAMgmail.com> on Wednesday October 29, 2003 @05:16PM (#7341344) Journal
    I believe that Windows 2000's Distributed File System allows you to do just this.
  • rdist would work... (Score:5, Informative)

    by ZenShadow ( 101870 ) * on Wednesday October 29, 2003 @05:17PM (#7341353) Homepage
    The obvious answer for this is nbd, as pointed out in another post -- but I would have concerns about speed with that kind of setup. I'd be interested in hearing reports on that.

    But if you don't want to get into nbd, can tolerate delayed writes to your virtualized disks, and all you want is the network equivalent of RAID level 1, then you could always just set up an rdist script that synchronizes your local data disk with a remote repository (or eight) every so often... (a minimal sketch follows below)

    --ZS
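
    For the curious, such an rdist setup could look roughly like this - hostnames and paths are placeholders, not from the post, and rdist traditionally wants the action line indented with a tab. A Distfile along these lines:

    HOSTS = ( mirror1 mirror2 )
    FILES = ( /data )
    ${FILES} -> ${HOSTS}
            install ;

    # Then push local changes to every host in HOSTS, e.g. from cron:
    rdist -f Distfile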
  • InterMezzo (Score:1, Informative)

    by Anonymous Coward on Wednesday October 29, 2003 @05:18PM (#7341373)

    Sounds like Coda or InterMezzo [inter-mezzo.org] would fit the bill, but they won't address non-Linux systems directly. You'd have to export the InterMezzo file systems with Samba and mount them on the MS Win boxes.

  • AFS (Score:4, Informative)

    by Reeses ( 5069 ) on Wednesday October 29, 2003 @05:18PM (#7341374)
    It's called the Andrew File System.

    http://www.psc.edu/general/filesys/afs/afs.html

    There's another alternative with a different name, but I forget what it's called.
  • Intermezzo (Score:5, Informative)

    by mikeee ( 137160 ) on Wednesday October 29, 2003 @05:19PM (#7341389)
    Intermezzo [inter-mezzo.org] is designed for this and a bit more - if one of the machines is a laptop you can take it away and work on it, and it'll resync when you get back.

    It isn't particularly high-performance, from what I know, and may be more complexity than you need.
  • by Trolling4Dollars ( 627073 ) on Wednesday October 29, 2003 @05:21PM (#7341412) Journal
    I imagine you'll need gigabit ethernet or multiple NICs in bonded mode. Then you have the performance of each individual system to take into account, especially if one of the systems is heavily used. I would recommend getting one BIG HONKIN' SERVER and putting it in a central location. Give it gigabit and let everything else connect to it at 100. Then, make sure it has a hardware RAID controller. Use SAMBA for the cross-platform connectivity you desire, and voila! Protected data with redundancy and high-speed performance. If you go with remote display (RDP with Windows Terminal Server or X with *nix) then you have an even better approach, as all the data will exist on the secure RAID box.

    I get what you mean though... it's a nice idea, but it would be costly to implement vs. what I suggested above.

    When I went to see a presentation on HP's SAN solutions last year, I was very impressed with the ideas they had. One big hardware box with multiple disks that are controlled by the hardware. They are then presented to any systems over a fiber link as any number of drives you wish, for any OS. Finally, their "snapshot" ability (also called Business Copy) was pretty impressive. All they would do is quiesce the data bus, then create a bunch of pointers to the original data. As data is altered on the "copy" (just the pointers, not a real copy), the real data is then copied to the "copy" with changes put in place. I imagine something similar could be accomplished with CVS...
  • by JumboMessiah ( 316083 ) on Wednesday October 29, 2003 @05:22PM (#7341418)
    A perfect solution would be a form of network block device that mounts distributed NBD shares. The Linux DRBD Project [drbd.org] has this capability. From their website, "You could see it as a network raid-1".
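    A rough sketch of bringing up such a mirror, assuming a resource named r0 has already been defined in /etc/drbd.conf on both hosts (the drbdadm subcommands below come from later DRBD releases; early versions drove drbdsetup directly):

    drbdadm create-md r0      # initialize DRBD metadata for the resource
    drbdadm up r0             # attach the backing disk and connect to the peer
    drbdadm primary r0        # on ONE node only: make this side writable
    mkfs.ext3 /dev/drbd0      # then treat it like any local block device
    mount /dev/drbd0 /data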
  • Try Rsync or DRBD (Score:4, Informative)

    by oscarm ( 184497 ) on Wednesday October 29, 2003 @05:23PM (#7341436) Homepage

    See http://drbd.cubit.at/ [cubit.at]. DRBD is described as RAID1 over a network.

    "Drbd takes over the data, writes it to the local disk and sends it to the other host. On the other host, it takes it to the disk there."

    Rsync with a cron script would work too. I think there is a recipe in the Linux Hacks books that does something like what you are looking for: #292 [oreilly.com].
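
    For the cron approach, a one-line crontab entry is enough (the destination host and paths here are placeholders):

    # mirror /data to a second box every night at 3:00
    0 3 * * * rsync -az --delete /data/ backuphost:/backup/data/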

  • by DrSkwid ( 118965 ) on Wednesday October 29, 2003 @05:24PM (#7341438) Journal
    http://plan9.bell-labs.com/sys/doc/venti/venti.html [bell-labs.com]

    Abstract

    This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block's contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems.

  • hyper scsi (Score:2, Informative)

    by blaze-x ( 304666 ) on Wednesday October 29, 2003 @05:26PM (#7341461)
    from the website:

    HyperSCSI is a networking protocol designed for the transmission of SCSI commands and data across a network. To put this in "ordinary" terms, it can allow one to connect to and use SCSI and SCSI-based devices (like IDE, USB, Fibre Channel) over a network as if it was directly attached locally.

    http://nst.dsi.a-star.edu.sg/mcsa/hyperscsi/ [a-star.edu.sg]
  • by backtick ( 2376 ) * on Wednesday October 29, 2003 @05:26PM (#7341463) Homepage Journal
    NBD *is* part of the standard Linux kernel. It's built right in: /usr/src/linux-2.4/Documentation/nbd.txt

    If you're curious about using the enhanced NBD w/ failover and HA, you can read about it at:

    http://www.it.uc3m.es/~ptb/nbd/#How_to_make_ENBD_work_with_heartbeat
  • Rsync and Ssh (Score:5, Informative)

    by PureFiction ( 10256 ) on Wednesday October 29, 2003 @05:32PM (#7341521)
    This is the way I do it, and although a little clunky, it allows me to keep remote backups of certain directories on three different servers.

    First, set up ssh to use pubkey authentication instead of interactive passwords. You can read the man pages for details, but it basically boils down to running ssh-keygen on the trusted source:

    ssh-keygen -t dsa -f ~/.ssh/id_dsa

    Then copy or append the newly created ~/.ssh/id_dsa.pub to each remote host's /home/user/.ssh/authorized_keys file.
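
    One way to do that in a single step (remote user and host are placeholders):

    cat ~/.ssh/id_dsa.pub | ssh user@remotehost 'cat >> ~/.ssh/authorized_keys'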

    Now you can run rsync with ssh as the transport (instead of rsh), either by exporting:

    export RSYNC_RSH=ssh

    or by passing --rsh=ssh on the command line.

    So to sync directories you could use a find command to update regularly:

    while true; do
        # anything changed since the last sync?
        find . -follow -cnewer .last-sync | grep '.' 1>/dev/null 2>/dev/null
        if (( $? == 0 )) ; then
            rsync -rz --delete . destination:/some/path/
            touch .last-sync
        fi
        sleep 60
    done

    Obviously this is pretty hackish and could be improved. But the point is that with ssh and rsync you could do automatic mirroring of specific filesystems or directories to remote locations securely.
  • Unison? (Score:1, Informative)

    by Anonymous Coward on Wednesday October 29, 2003 @05:33PM (#7341539)
    Not yet seen reference to unison:

    http://www.cis.upenn.edu/~bcpierce/unison/

    They say: "Unison is a file-synchronization tool for Unix and Windows. (It also works on OSX to some extent, but it does not yet deal with 'resource forks' correctly; more information on OSX usage can be found on the unison-users mailing list archives.) It allows two replicas of a collection of files and directories to be stored on different hosts (or different disks on the same host), modified separately, and then brought up to date by propagating the changes in each replica to the other."
  • Re:NBD Does this (Score:5, Informative)

    by dbarclay10 ( 70443 ) on Wednesday October 29, 2003 @05:37PM (#7341575)

    Just to clarify what this guy is saying:

    1) Make all your machines NBD servers. NBD for Linux [sourceforge.net], NBD for Windows [vanheusden.com]. NBD stands for "network block device" and allows a client to use a server's block device.
    2) Set up a master client/server (using Linux or something else with a decent software RAID stack). This machine will be the only NBD *client*, and it will use all the NBD block devices exported by the rest of your network.
    3) On the master set up in 2), create a Linux MD RAID array on top of all the NBD devices that are available.
    4) Create a filesystem on the brand-spanking-new multi-machine RAID array.
    5) Export it back to the other machines via Samba or NFS or AFS or what have you.

    Why does only one machine (the "master server") access the NBD devices, you ask? Because for a given block device, there can only be one client accessing it safely. Thus, if you want to make the RAID array available to anything other than the machine which is *running* the array off the NBD devices, you need to use something which allows concurrent access; something like NFS, Samba, or AFS.

    Hope that clears it up a bit.
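
    A condensed sketch of steps 1-5, with placeholder IPs, ports, and device names (old-style nbd-server/nbd-client invocations; the device nodes may be /dev/nb* on some kernels):

    # On each storage machine: export a spare partition over NBD
    nbd-server 2000 /dev/hda3

    # On the single master client: attach each remote block device
    nbd-client 192.168.1.11 2000 /dev/nbd0
    nbd-client 192.168.1.12 2000 /dev/nbd1
    nbd-client 192.168.1.13 2000 /dev/nbd2

    # Build a software RAID5 across the network block devices
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/nbd0 /dev/nbd1 /dev/nbd2

    # Filesystem, mount, then re-export via Samba/NFS/AFS
    mkfs.ext3 /dev/md0
    mount /dev/md0 /export/raid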

  • by Cranston Snord ( 314056 ) on Wednesday October 29, 2003 @05:46PM (#7341650) Homepage
    Instead of xcopy, try RoboCopy, included in the Windows NT/2k/XP/2k3 Resource Kit available here. [microsoft.com] It gives you almost as much control as rsync, including directory synchronization, touch control, ageing, network failure support, and more. I use this at work to move copies of live production data to backup servers located offsite via VPN without any issues. More information on syntax can be found here. [ss64.com]
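
    A typical mirroring invocation looks something like this (the paths are placeholders):

    robocopy C:\data \\backupbox\data /MIR /Z /R:3 /W:10

    /MIR mirrors the directory tree (including deletions), /Z copies in restartable mode for flaky links, and /R and /W control the retry count and the wait between retries.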

  • by richoid ( 180354 ) on Wednesday October 29, 2003 @05:51PM (#7341700) Homepage
    http://www.parl.clemson.edu/pvfs/

    "The goal of the Parallel Virtual File System (PVFS) Project is to explore the design, implementation, and uses of parallel I/O. PVFS serves as both a platform for parallel I/O research as well as a production file system for the cluster computing community. PVFS is currently targeted at clusters of workstations, or Beowulfs."

    "In order to provide high-performance access to data stored on the file system by many clients, PVFS spreads data out across multiple cluster nodes, which we call I/O nodes. By spreading data across multiple I/O nodes, applications have multiple paths to data through the network and multiple disks on which data is stored. This eliminates single bottlenecks in the I/O path and thus increases the total potential bandwidth for multiple clients, or aggregate bandwidth."

    Or there are many others to choose from; Google for clustered filesystems:

    http://www.yolinux.com/TUTORIALS/LinuxClustersAndFileSystems.html

  • Slow? (Score:2, Informative)

    by cerebralsugar ( 203167 ) on Wednesday October 29, 2003 @05:54PM (#7341724)
    I'll certainly agree that this is a cool idea. I have a few systems at my place and it would be neat to make a single filesystem spanning all the storage on the network.

    However, while small files would be fine, I would think the speed of the network would make for some fairly slow storage on a 100 Mbit network.

    Add more users saving files across the network to the equation and things would get out of hand fast.

    I guess I would just buy a Serial ATA RAID motherboard (the Intel D865GBFLK is one I have been thinking about), and just do 1:1 mirroring. Cheaper than SCSI, and pretty darn fast.
  • Raid != Backup (Score:3, Informative)

    by Alan ( 347 ) <arcterex@NOspAm.ufies.org> on Wednesday October 29, 2003 @05:55PM (#7341737) Homepage
    Don't forget that RAID only protects you from hardware failures, it doesn't prevent you from doing an "rm -rf important_file" :)

    Personally I have a server with a RAID 5 array that is shared via SAMBA to Windows and Linux clients, which works fine, though I may adjust this if good suggestions are made here. The only real issue would be disk space, and all my computers now have 120GB+ hard drives or RAID arrays....
  • by AllDigital ( 682202 ) * on Wednesday October 29, 2003 @05:58PM (#7341754)
    Groove Workspace is a collaborative environment, but it does have a component that allows you to share an archive of files.

    Worth considering because:
    - Files are encrypted and sent in an encrypted format.
    - Files placed in the shared space are mirrored on all systems that are members of the workspace.
    - The software is free for non-commercial use.
    - Lots of other interesting features to play with.
    - You can even mirror with a machine across the Internet.

    Limited by:
    - The speed of your connection.
    - Windows users only.

    Go check it out at http://groove.net/

    Does anyone know if there are efforts in the open source community similar to...or designed to enhance this product?
  • by Ron Harwood ( 136613 ) <harwoodr@NOSPAm.linux.ca> on Wednesday October 29, 2003 @05:58PM (#7341756) Homepage Journal
    Obvious link [drbd.org].
  • Re:Intermezzo (Score:5, Informative)

    by laursen ( 36210 ) <laursenNO@SPAMnetgroup.dk> on Wednesday October 29, 2003 @06:12PM (#7341882) Homepage
    Intermezzo is designed for this and a bit more - if one of the machines is a laptop you can take it away and work on it, and it'll resync when you get back.

    We have looked at various distributed filesystems for use in a clustered setup of webservers. We wanted to remove the single point of failure from a central NFS server - Intermezzo was one of the filesystems we had a look at.

    The idea behind Intermezzo is fairly simple and the documentation is good. The Intermezzo system looked like an ideal solution for our setup (Coda and OpenAFS are far too complex for use as a distributed filesystem on a closed internal net).

    We tested the system but sadly it's not really production stable and I can't advise that you use it.

    If you are looking for a SAFE solution then Intermezzo is not for you - you will just end up with garbled data, deadlocks and tons of wasted time ...

    My 2 cents.

  • by steveha ( 103154 ) on Wednesday October 29, 2003 @06:18PM (#7341928) Homepage
    0) Mirroring (RAID 1) takes double the disk space, but you could use RAID 5 instead. A 4-disk RAID 5 consumes only 4/3 as much raw disk as the usable space you get.

    1) You could make a partition that is 10% of your disk, make another identical one on another disk, and mirror those. Then put your 10% critical data in there.

    2) Do what I do: set up a RAID server, and keep all critical data on that. This is good if you have a home network with multiple computers. It also makes data sharing easy among the computers.

    steveha
  • by LookSharp ( 3864 ) on Wednesday October 29, 2003 @06:19PM (#7341938)
    ...as much as I dislike replying to T4D, he brings up an interesting scenario to counter your suggestion of using multiple machines.

    I took a spare machine, added a 3ware 6800 ATA RAID controller ($130 on eBay), and installed eight 120GB Maxtor hard drives ($1200 when I bought them last year) and put them in eight Genica hot-swap trays ($60). For about $1500, I now have an 800GB formatted RAID5 array. (Had to throw in a dedicated 400W Antec power supply for HDs.) In a year, two of the drives have flunked, and the replacement drives have rebuilt beautifully.

    If you're going to lose the site, you're going to lose your data in either case. All you protect against with the network situation is the complete loss of one machine. Protect your server as much as possible and put your data on it.

    Just make sure you keep the "most precious" data offsite, on tape or a sneaker-net external hard drive, in case the pop-tart that got stuck in your toaster burns down your house. (This apparently happens about 30 times a year, by the way, including to one of my co-workers :)

  • Re:Intermezzo (Score:2, Informative)

    by laursen ( 36210 ) <laursenNO@SPAMnetgroup.dk> on Wednesday October 29, 2003 @06:21PM (#7341954) Homepage
    We bought a large StorageTek RAID (2 x RAID 5) and used NFS.

    NFS is a proven filesystem and it has been tested for years. It's compatible with all major UNIX flavors and BSD/Linux systems.
  • Re:NBD Does this (Score:3, Informative)

    by caluml ( 551744 ) <slashdot@spamgoe ... minus herbivore> on Wednesday October 29, 2003 @06:23PM (#7341967) Homepage
    Hmm. How stable is it? From /usr/src/linux/Documentation/nbd.txt:

    Note: Network Block Device is now experimental, which approximately
    means, that it works on my computer, and it worked on one of school
    computers.

    That doesn't sound very promising to me. Usually stuff that's been in the kernel since 2.1 days is rock solid.

    Isn't AFS/Coda more like what the guy wants (excluding Windows-ability, although I seem to remember there being an AFS client for Windows)?
  • Re:AFS (Score:5, Informative)

    by Strange Ranger ( 454494 ) on Wednesday October 29, 2003 @06:24PM (#7341980)
    from karmak.org
    AFS is based on a distributed file system originally developed under a different name in the mid-1980's at the Information Technology Center of Carnegie-Mellon University (CMU). It was first publicly described in a paper in 1985, and soon afterwards was renamed to the "Andrew File System" in honor of the patrons of CMU, Andrew Carnegie and Andrew Mellon. As interest in AFS grew, CMU spawned the Transarc Company to develop and market AFS. Once Transarc was formed and AFS became a product, the "Andrew" was dropped to indicate that AFS had gone beyond the Andrew research project and had become a supported, product-quality filesystem. However, there were a number of existing cells that rooted their filesystem as /afs. At the time, changing the root of the filesystem was a non-trivial undertaking. So, to save the early AFS sites from having to rename their filesystem, AFS remained as the name and filesystem root. In the late 1990's Transarc was acquired by IBM, who subsequently re-released AFS under an open source license. This code became the foundation for OpenAFS, which is currently under active development.
    It's still running and running well at CMU (AFAIK - as of late 90's). Every student gets an "Andrew" ID. Actually the very first networked computer I ever logged into (other than dialing a bbs) was a 'node' on Andrew, in 1988. Very very cool at the time, and still is.
  • Re:AFS (Score:3, Informative)

    by Umrick ( 151871 ) on Wednesday October 29, 2003 @06:31PM (#7342045) Homepage
    Never mind that AFS has been in production for literally years, serving terabytes of data to 10,000+ clients (across several AFS installations).

    The Windows client did have some notable slowness issues; performance with Linux is excellent, and it scales much better than NFS. Clients are available for a large number of OSes. One caveat: AFS needs the clocks on all machines to agree - it doesn't matter if it's the right time, just the same time. So set up NTP on one machine as a primary, and the others can use ntpdate to set the time once a day.
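
    For example, a daily root crontab entry on each client would do ("timeserver" is a placeholder for your NTP primary):

    # set the clock once a day at 04:00
    0 4 * * * /usr/sbin/ntpdate timeserver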

    AFS started around 1986 as a commercial offering; IBM made it open source in 2001. It can be a serious pain to set up at first, and the documents are indeed very outdated. Other limitations: no support for >2 GB files. You can have read-only duplicates of data on multiple machines. Administration can be a dream once it's running.

    You will need to have ext2 partitions available for storage (OpenAFS uses its own transaction system, and you WILL have race conditions if you put it on a journalling filesystem).

    Also note that as of right now, 2.6 kernels are not supported, though 2.4/2.2 are fine.

    www.openafs.org

    Coda, which was a start at an open-source answer to AFS way back when, has even more out-of-date documentation, has never been used in production (that I know of), and basically is not nearly as ready for prime time as OpenAFS.

    www.coda.org
  • Yes. (Score:3, Informative)

    by Ayanami Rei ( 621112 ) * <rayanami AT gmail DOT com> on Wednesday October 29, 2003 @06:38PM (#7342105) Journal
    Software RAID/LVM can detect which volumes go where by magic numbers written to them when you format them. But you still have to set up all the remote NBDs correctly on a new machine, and you need the old setup file from the old machine that tells it what block devices/partitions to use.

    NOTE!

    You shouldn't leave any NBD-exported volumes on the new master. Make it into a physical, local volume, but reference it in the "same place" in your RAID configuration.
  • by steveha ( 103154 ) on Wednesday October 29, 2003 @06:46PM (#7342171) Homepage
    No need for an "honest-to-dog hardware RAID". Linux software RAID is simply great.

    Set up a server with multiple hard disks in a Linux software RAID, and run Samba and NFS on that. The Linux software RAID HOWTO explains all you need to know.

    steveha
  • Re:Rsync and Ssh (Score:4, Informative)

    by adamfranco ( 600246 ) <adam@NoSPAm.adamfranco.com> on Wednesday October 29, 2003 @06:52PM (#7342220) Homepage
    Here is a nice page [mikerubel.org] that explains how to do this. Even better, it shows how to do nice incremental backups using only slightly more space than the source (for the differing file versions). This makes for a pretty cheap and easy backup solution.
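
    The core trick from that page, in sketch form (directory names are placeholders, and the daily.* directories are assumed to already exist): rotate the old snapshots, hard-link the newest one, then rsync over it.

    rm -rf /backup/daily.3
    mv /backup/daily.2 /backup/daily.3
    mv /backup/daily.1 /backup/daily.2
    cp -al /backup/daily.0 /backup/daily.1   # hard-link copy: nearly free
    rsync -a --delete /home/ /backup/daily.0/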
  • Re:Rsync and Ssh (Score:3, Informative)

    by strudeau ( 96760 ) on Wednesday October 29, 2003 @06:58PM (#7342265) Homepage
    The original poster, I think, wants something that also works on Windows.

    Rsync and ssh can work with Windows using Cygwin. See this document [unimelb.edu.au] for example.

  • by dbarclay10 ( 70443 ) on Wednesday October 29, 2003 @07:00PM (#7342278)
    First off, you aren't going to be able to use this like a real RAID array (a drive can die and you keep on working). The latency and bandwidth of any network that could be reasonably implemented in your home is going to prevent your system from acting like a real RAID array.

    I'm currently running some benchmarks on an XFS filesystem built upon a Linux MD RAID1 array, which is in turn built upon a local disk and a remote disk (which is at the end of a switched 100 Mbit network; the NBD server itself has an 8-year-old drive and a controller which doesn't do DMA).

    [ dbharris@willow: ~/ ]$ cat /proc/mdstat
    Personalities : [raid0] [raid1]
    md1 : active raid1 nbd0[1] dm-5[0]
    1888192 blocks [2/2] [UU]

    It takes approximately 10 minutes for a 1.8G array to sync. That's respectable. It's not blazing fast, but it's respectable.

    The bonnie++ scores are:

    willow,1G,5086,31,4766,2,2873,1,6377,27,8655,2,158.7,1,16,878,18,+++++,+++,766,14,880,18,+++++,+++,595,13

    Which isn't amazing, but quite respectable, especially given that this type of thing wouldn't be used for mass storage of ISOs or whatever, but for people's "My Documents" folders and their $HOMEs. It's notable that a fully local array I have, made up of an old SCSI controller and some old SCSI disks, is about half this speed as far as the filesystem goes, and about a tenth the speed as far as syncing goes.

    So, I believe that your assertion of "you aren't going to be able to use this like a real RAID array" is quite incorrect. Especially given that my network isn't particularly fast, my NICs aren't particularly fast, and the remote disk I'm using is dog slow. Replace the NICs with parts that aren't pieces of crap, use Gig-E, and use controllers/drives that aren't 7-8 years old, and you'll get very respectable performance - ESPECIALLY given that the intention isn't to store everything on it, just people's individual files.

    P.S.: Yes. I'm repeating myself. I know this. It's deliberate :)

  • Check out HiveCache (Score:3, Informative)

    by Jim McCoy ( 3961 ) on Wednesday October 29, 2003 @07:17PM (#7342393) Homepage
    HiveCache [hivecache.com] is a distributed RAID system similar to what you are asking for, albeit one that is pitched more at the enterprise backup environment than the home user. Strong security, error-correction and data replication, and multi-source data publication and retrieval to eliminate the network hotspots that might otherwise occur.

    While a pure Linux solution seems to score the most points here, this particular one lets you combine your Windows, OS X, and Linux systems into a single distributed storage mesh. There is safety in numbers, and the more systems you can add to these sorts of distributed storage systems, the more reliable they become.

    HiveCache is more of a backup solution, but I do know that it is possible to use it with a WebDAV front-end for archival storage and other interesting storage possibilities.

  • Re:Win2k (Score:1, Informative)

    by Anonymous Coward on Wednesday October 29, 2003 @07:21PM (#7342421)
    The distributed feature would be quite worthless if there wasn't some synchronization taking place to make sure the data was synched across all servers in the DFS namespace.


    DFS uses the File Replication Service (FRS) to ensure that all DFS replicas are synchronized. Clients connect to the closest available server (based on Active Directory Site information) and will automatically fall back to another server if one goes down.

    It's actually very easy to configure. Just fire up the DFS admin tool and add a new share. When you add a second replica the admin tool will ask you if you want to synchronize the replicas. Just click yes and everything will be configured automatically. The same is true if you add more replicas.
  • by angst_ridden_hipster ( 23104 ) on Wednesday October 29, 2003 @07:26PM (#7342456) Homepage Journal
    As I always chime in at this point:

    Use rdiff-backup!

    http://rdiff-backup.stanford.edu/

    Configurable, secure, distributed, versioning incremental backups.

    It's not a replacement for RAID, but is good for nightly inter-machine backups.

    There's also a related project where the far-end repository is encrypted, so you can have it on any public server without fear of having your data read by the wrong people.

    Very cool. It's saved my ass a few times.
  • Rsync & Rdiff-backup (Score:2, Informative)

    by hrath ( 5792 ) on Wednesday October 29, 2003 @07:47PM (#7342590)
    Check out http://rdiff-backup.stanford.edu/ for the wonderful rdiff-backup.

    With the combination of rsync, ssh & rdiff-backup I have set up a very reliable incremental network backup infrastructure, allowing me to go back to any previous version of a file.

    regards,

    Heiko
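
    For reference, basic rdiff-backup usage looks like this (hostnames and paths are placeholders):

    # mirror a directory to another host, keeping reverse diffs of old versions
    rdiff-backup /home/heiko backuphost::/backups/heiko

    # restore a file as it was three days ago
    rdiff-backup -r 3D backuphost::/backups/heiko/some/file /tmp/restored-file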
  • HyperSCSI (Score:2, Informative)

    by Nicson ( 49884 ) on Wednesday October 29, 2003 @09:24PM (#7343356)
    I'm surprised to see nobody has yet mentioned HyperSCSI [a-star.edu.sg], which is:
    - open source
    - based on raw Ethernet (supposedly faster than iSCSI or other TCP/IP-based schemes)
    - has a Win2K client

    Check it out; I've tested and used it for about a year and it works quite well!
    --
    Nicson
  • by trawg ( 308495 ) on Thursday October 30, 2003 @08:35AM (#7346255) Homepage
    not really relevant, but may still be of interest to some (just sounds so neat): "Since disk drives are cheap, backup should be cheap too. Of course it does not help to mirror your data by adding more disks to your own computer because a fire, flood, power surge, etc. could still wipe out your local data center. Instead, you should give your files to peers (and in return store their files) so that if a catastrophe strikes your area, you can recover data from surviving peers. The Distributed Internet Backup System (DIBS) is designed to implement this vision. "

    http://www.csua.berkeley.edu/~emin/source_code/dibs/

  • EtherDrive Storage (Score:2, Informative)

    by web_guy1000 ( 161569 ) on Thursday October 30, 2003 @10:56AM (#7347142)
    You might consider EtherDrive storage from www.coraid.com. I use it on Linux with software raid. Works like a champ.
