Sharing an IEEE 1394 Device Between Machines?
groovemaneuver asks: "A question was posted recently regarding sharing a SCSI disk between multiple machines. Firewire was mentioned as an alternative, but there wasn't much elaboration. Is there anyone out there using an IEEE 1394 solution for shared storage between two or more boxes? I've managed to dig up ads for a bunch of enclosures that feature multiple firewire ports, but nothing to indicate that it was possible to connect any of them to multiple machines. The only thing close that I've found was the SANCube, and aside from being fairly pricey (defeating my purpose for using firewire), it is only officially supported as a Mac/Win device."
Not sure if this is what you want (Score:4, Informative)
EXACTLY what he wants (Score:3, Informative)
It's my hope to use EVMS [sourceforge.net] as my stripe-manager on each side. It seems that this is one of the things EVMS was originally built for on AIX. I will treat this like RAID4, with all of the parity information on a single spindle.
The only problem I foresee with this is that - although FireWire supports "hot plugging" - replacing a failed drive will put a break in the loop, causing a different number of drives to appear as having failed on each side of the cluster. The long-term solution for this is to use ATA swappable trays in the front of a FireWire chassis designed for removable media.
It ain't my root filesystem, so one thing at a time!
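The RAID4 scheme described above - dedicated parity on one spindle - can be sketched in a few lines of Python. This is a toy illustration of the XOR math, not EVMS itself:

```python
# RAID4 as described above: N data spindles plus one dedicated parity
# spindle holding the XOR of the data stripes. Any single failed data
# disk is recoverable by XORing the survivors with the parity disk.
data_disks = [b"hello", b"world", b"12345"]   # equal-sized stripes

def xor_bytes(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

parity = xor_bytes(data_disks)                # written to the parity spindle

# Disk 1 fails; rebuild its contents from the survivors plus parity.
survivors = [data_disks[0], data_disks[2], parity]
rebuilt = xor_bytes(survivors)
print(rebuilt == data_disks[1])  # True
```

The same XOR property is why all the parity can live on one spindle: reconstruction only ever needs the surviving stripes and that one disk.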
Re:EXACTLY what he wants (Score:2, Insightful)
Re:EXACTLY what he wants (Score:2)
EVMS supports every imaginable volume configuration. Regular Linux software RAID and LVM sets can all be managed under EVMS control, or migrated to native EVMS if desired.
Dan Robbins (of Gentoo Linux fame) is running a good series on EVMS in IBM's Developer Network right now. Sorry, I don't have a URL handy...
Re:EXACTLY what he wants (Score:3, Insightful)
Use FireWire hubs to create a more tree- or star-like topology; that way each disk is on its own branch and unplugging it won't affect the others.
Re:EXACTLY what he wants (Score:2)
See how years and years of SCSI fogs your brain, and prevents you from seeing the obvious?
So, at least for certain applications, FireWire's advantages over SCSI aren't just price and speed. It has superior topological flexibility!
It's not easy (Score:2)
If you really want to share a drive between machines, and they are close enough for firewire, then you might as well use the tried and true method. First install the drive in one machine, then connect both machines together and transfer data over the network. If you really need it to be faster than that try gigabit ethernet.
Re:It's not easy (Score:3)
Just connect everything together, but mount it on only one machine at a time. Be sure to unmount it before mounting it on the other.
On a logical level, it's exactly the same as unplugging it from one machine and plugging it into the other.
You could also mount it r/o on more than one machine, but remember to unmount it from every machine before remounting r/w on any of them.
Of course, the real thing would be to use one of the few cluster filesystems out there: GFS, MFS, or the Oracle thing. Those supposedly solve the problem of keeping the directory structures consistent and in sync.
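The one-machine-at-a-time discipline looks roughly like this (device and mount point names are hypothetical; this is an ops sketch, not a tested recipe):

```sh
# On machine A: use the disk, then release it completely
mount /dev/sdb1 /mnt/shared        # hypothetical FireWire disk
# ... read and write files ...
umount /mnt/shared

# Only now, on machine B:
mount /dev/sdb1 /mnt/shared

# Read-only sharing: every machine may do this at the same time...
mount -o ro /dev/sdb1 /mnt/shared
# ...but ALL of them must umount before anyone remounts read-write.
```

The unmount step is what flushes cached metadata back to the disk, which is why skipping it is equivalent to yanking the cable mid-write.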
Data Integrity? (Score:3, Insightful)
-- iCEBaLM
Re:Data Integrity? (Score:5, Informative)
Well, d'oh, you use a file system that supports simultaneous accesses, don't you?
There are good reasons for wanting such a thing. For example: say you have a mission-critical database server. You want instant failover if anything goes wrong. Your database is being continuously modified, so merely duplicating it won't do.
One solution is to do something similar to the above. Have two database machines plugged into the same drive (in the real world, RAID drive array). The database software is intelligent enough to cope with simultaneous accesses. Now you can send a query to either server and they'll access the same data, at full hard disk speeds. Pull the plug on one server and the other just keeps rolling.
Why not use ethernet sharing? Because there's a single point of failure. Your drive is attached to a file server. Your file server is attached to your database servers. If your file server goes down, your database servers are cut off.
Solutions to this? Duplicate your file server. Broadcast your data to all file servers, all attached on some high-speed network. This'll work. Unfortunately, you've just reinvented, in a heavy and expensive way, having one disk attached to several machines at once...
You see, your file servers are duplicating all the functionality of a RAID array but with a lot more overhead. Your high-speed network is duplicating all the functionality of your Firewire or SCSI bus, again with more overhead. Your databases now have to send their file accesses over that network, which will be slow. There's overhead everywhere.
By simply using a drive (or drive array) attached to several servers, you get the same functionality, much cheaper, and with a much simpler setup. Remember, complex == unreliable. You can buy certified, five nines RAID arrays off the warehouse shelf and they will Just Work. You can buy high-speed SCSI cards with multi-initiator support (this is the magic phrase to Google for) and they will Just Work.
Of course, it's not simple. You need a piece of software known as a distributed lock manager to handle the atomicity issues. But you can buy them, and they will Just Work.
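The job a distributed lock manager does - serializing writers to the shared disk - can be illustrated with ordinary single-host file locking. This is only an analogy: `fcntl.flock` arbitrates between processes on one machine, where a real DLM coordinates nodes over a network.

```python
import fcntl
import os
import tempfile

# A toy stand-in for a DLM: two "nodes" contend for an exclusive
# lock on a shared resource; only one may hold it at a time.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()

fd_a = os.open(tmp.name, os.O_RDWR)   # "node A" opens the shared disk
fd_b = os.open(tmp.name, os.O_RDWR)   # "node B" opens the same disk

fcntl.flock(fd_a, fcntl.LOCK_EX)      # node A takes the exclusive lock
try:
    fcntl.flock(fd_b, fcntl.LOCK_EX | fcntl.LOCK_NB)
    contended = False                 # would mean B also got the lock
except BlockingIOError:
    contended = True                  # B must wait: writes stay atomic

fcntl.flock(fd_a, fcntl.LOCK_UN)      # node A releases the lock
fcntl.flock(fd_b, fcntl.LOCK_EX | fcntl.LOCK_NB)  # now B acquires it
print(contended)  # True: only one writer at a time while A held it
```

A real DLM adds the hard parts this sketch dodges: surviving node crashes, recovering orphaned locks, and doing all of it over the wire.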
This kind of setup has been around for years in the big-iron SCSI world. I haven't come across anything yet for Firewire, though. Personally, I'd be a bit dubious as to whether you're going to get anything fast enough or stable enough out of Firewire; high-performance SCSI beats Firewire into the ground, and all the kit is available off-the-shelf. But I'd be interested to see if anything comes up.
Re:Data Integrity? (Score:1)
Not by different *computers*. File allocation tables are normally cached in memory by the OS because they're small and it speeds up disk access. It's a race condition. Suppose you have this wondrous hard drive attached to two computers: power both machines up at the same time, they both copy the filesystem data into memory, and now they want to write... They both allocate the same disk blocks and start writing over each other, turning your data into nicely destroyed swiss cheese.
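The race described above is easy to simulate: give two "machines" their own cached copy of the on-disk free-block map and let each allocate independently.

```python
# Two "machines" each cache the on-disk free-block map at boot, then
# allocate blocks from their private copy -- the race the parent
# post describes, with no locking between them.
disk_free_map = [True] * 8          # True = block free on the disk

cache_a = list(disk_free_map)       # machine A boots, caches the map
cache_b = list(disk_free_map)       # machine B boots, caches the same map

block_a = cache_a.index(True)       # A picks the first free block
cache_a[block_a] = False            # ...and marks it used in ITS cache only

block_b = cache_b.index(True)       # B, unaware of A, picks the same block
cache_b[block_b] = False

print(block_a == block_b)  # True: both write the same block -- corruption
```

Each machine's cache is internally consistent, which is exactly why neither ever notices the conflict until the data is already shredded.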
Why not use ethernet sharing? Because there's a single point of failure. Your drive is attached to a file server. Your file server is attached to your database servers. If your file server goes down, your database servers are cut off.
Or you could get a network-attached storage (NAS) appliance...
-- iCEBaLM
Re:Data Integrity? (Score:3, Informative)
Re:Data Integrity? (Score:2, Informative)
Really bad idea (Score:1, Insightful)
Networking is a good idea because
It's scalable, as you can add more machines later with minimal fuss
It isn't any slower over ethernet (especially 100Mb and 1000Mb)
LAN parties!!!
You can easily set up multiple OSes to see the drive.
It's a bad idea because
It's not scalable (easily)
you risk data corruption (even if it works)
the drive can't handle more than 1 operation at a time
it's an IDE drive, so the drive follows the rules of IDE; the bus is FireWire, but the adapter handles the rules of that
Some OSes expect different things out of a filesystem. Windows expects it to be nice and neatly formatted in FAT32 or NTFS. Linux expects ext2 or ext3 (or any of a slew of others). Macs expect to see a partition table and that's it (that's why formatting a Mac HD takes only as long as hitting the "Initialize" button). Therefore it is not an easily cross-platform, scalable, data-safe method. Use the network option: it isn't noticeably slower, and it is cross-platform, very scalable, and very data-safe.
No, it's a fundamental idea. (Score:2)
This is the way people with real uptime requirements do it. I know, because I've got systems that do this, and I've interviewed more sysadmin candidates than I can count with the same experience.
As for networking, your points are all completely wrong and irrelevant for serious high-performance, high-availability applications. Ethernet has far too much overhead (at least with any commonly available file service protocols; iSCSI *might* change all that, but it remains to be seen), scalability in transactions per second IS LOWER over these protocols, and LAN parties aren't any kind of interest for people running serious databases.
As for why you think it's a bad idea, all of them are what the various software packages out there are for. I won't get into the IDE v. SCSI v. FireWire argument.
Please, stop assuming the desktop is where computing ends. It isn't.
Are you crazy? (Score:1)
Are you crazy? Use a network filesystem hosted on one machine, or network storage. Sharing the same physical drive between two concurrently-running operating systems will require special drivers and extra communication between the two (at best), and be totally unreliable at worst.
also see (Score:3, Insightful)
Also see Ask Slashdot: IEEE1394-based Storage Area Network? [slashdot.org]
Contrary to the consensus of the replies... (Score:2)
The software to manage multiple access via firewire was recently released under the GPL by Oracle [oracle.com].
Gimp (Score:1, Offtopic)
Not sure how this works with PC's.... (Score:1)
But we've hooked up 2 PowerBooks and a G4 Mac, and shared 2 FireWire HDs, 1 scanner, and 1 digital camera.
All the Macs see them, and you also see the other Macs' HDs on your connection, if you have file sharing on.
SanCUBE not really it either (Score:1)
It also doesn't have OS X support. Right now it's been turned into a JBOD for a single-user machine. If we need to share it, we still use the LAN, so if we were sharing data full time, we would just leave file sharing on, or hook it to a file server.
As others have said, get some form of NAS/file server.
it works like this. (Score:1)
Therefore, an IDE device on a FireWire chain functions properly and without any problems because computer 1 says "write this here" and it happens, and computer 2 says "write this here" and the drive says "there is data there, overwrite?" and then computer 2 says "naw, just write it over there instead." Get it?
Nice and easy. IDE drives are "smart": they are self-controlling, and they have a buffer to help keep things straight, arrange data for reading or writing, and improve performance. To show the difference between "smart" and "dumb" devices: a floppy is dumb, it only does EXACTLY what the floppy controller says to do; it can at most check to see if a disk is in or write-protected.