Hardware

Sharing an IEEE 1394 Device Between Machines? 28

groovemaneuver asks: "A question was posted recently regarding sharing a SCSI disk between multiple machines. Firewire was mentioned as an alternative, but there wasn't much elaboration. Is there anyone out there using an IEEE 1394 solution for shared storage between two or more boxes? I've managed to dig up ads for a bunch of enclosures that feature multiple firewire ports, but nothing to indicate that it was possible to connect any of them to multiple machines. The only thing close that I've found was the SANCube, and aside from being fairly pricey (defeating my purpose for using firewire), it is only officially supported as a Mac/Win device."
  • by Merlin42 ( 148225 ) on Wednesday November 13, 2002 @05:57PM (#4663639)
    this [slashdot.org] might be what you want... The documentation wasn't immediately clear to me, so I might be off the mark.
    • This is EXACTLY what he wants. I am building a VERY inexpensive solution with four 200GB ATA drives, on a shared IEEE 1394 loop between two Dell 2450s.

      It's my hope to use EVMS [sourceforge.net] as my stripe-manager on each side. It seems that this is one of the things EVMS was originally built for on AIX. I will treat this like RAID4, with all of the parity information on a single spindle (a toy parity sketch follows at the end of this comment).

      The only problem I foresee with this is that - although FireWire supports "hot plugging" - replacing a failed drive will put a break in the loop, causing a different number of drives to appear as having failed on each side of the cluster. The long-term solution for this is to use swappable ATA trays in the front of a FireWire chassis designed for removable media.

      It ain't my root filesystem, so one thing at a time!
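
      For the curious, here's a toy sketch of the RAID4 parity idea: the parity spindle just holds the XOR of the data spindles, so any single lost block can be rebuilt. (Illustrative Python only; EVMS does this internally, and none of these names are its actual API.)

          # RAID4-style parity: the parity block is the XOR of the data blocks.
          def parity(blocks):
              """XOR equal-length data blocks into one parity block."""
              out = bytearray(len(blocks[0]))
              for block in blocks:
                  for i, b in enumerate(block):
                      out[i] ^= b
              return bytes(out)

          def rebuild(survivors, parity_block):
              """Recover one lost data block: XOR the parity with the survivors."""
              return parity(survivors + [parity_block])

          data = [b"spindle1", b"spindle2", b"spindle3"]   # three data drives
          p = parity(data)                                  # the parity drive
          assert rebuild([data[0], data[2]], p) == data[1]  # drive 2 recovered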

      • Why not use 4 firewire ports on both machines and have one loop per drive? More cables, but as a bonus you'll get better throughput per disk. I couldn't tell at first glance at the site: does EVMS support software RAID5?
        • Yeah... Good Idea. I have limited slots for expansion on this 2U host... I'll try to find a 4-port FireWire adaptor that fits... Solve the swap/chain problem!

          EVMS supports every imaginable volume configuration. Regular Linux software RAID and LVM sets can all be managed under EVMS control, or migrated to native EVMS, if desired.

          Dan Robbins (of Gentoo Linux fame) is running a good series on EVMS on IBM's developerWorks right now. Sorry, I don't have a URL handy...

      • by Kz ( 4332 )
        To avoid breaking the FireWire chain (not a loop!), just don't make a chain.

        Use FireWire hubs to create a more tree- or star-like topology; that way each disk is on its own branch and unplugging it won't affect the others.
        • To avoid breaking the FireWire chain (not a loop!), just don't make a chain.

          Use FireWire hubs to create a more tree- or star-like topology; that way each disk is on its own branch and unplugging it won't affect the others.

          You're right, too.

          See how years and years of SCSI fog your brain, and prevent you from seeing the obvious?

          So, at least for certain applications, FireWire's advantages over SCSI aren't just price and speed. It has superior topological flexibility!

  • It's not easy to share a device between two computers, especially if it's firewire. What happens is that two independent machines try to control the same piece of hardware, and of course both of them will think the drive is messed up.
    If you really want to share a drive between machines, and they are close enough for firewire, then you might as well use the tried and true method: install the drive in one machine, connect both machines together, and transfer data over the network. If you really need it to be faster than that, try gigabit ethernet.
    • It's easy to do in a not-so-nice way.

      Just connect everything together, but mount it on only one machine at a time. Be sure to unmount it before mounting it on the other (a rough sketch of this handoff follows at the end of this comment).

      On a logical level, it's exactly the same as unplugging it from one machine and plugging it into the other.

      You could also mount it r/o on more than one machine, but remember to unmount it from everywhere before remounting it r/w anywhere.

      Of course, the real thing would be to use one of the few cluster filesystems out there: GFS, MFS, or the Oracle thing. Those supposedly solve the problem of keeping the directory structures consistent and in sync.
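
      For concreteness, here's a rough sketch of that one-owner-at-a-time handoff (assuming passwordless ssh, and that the disk shows up as /dev/sda1 on both hosts; the hostnames and paths are made up):

          # Unmount-before-mount handoff of a shared FireWire disk.
          # Assumes passwordless ssh; device, mountpoint and hostnames
          # are illustrative.
          import subprocess

          def run(host, args):
              subprocess.run(["ssh", host] + args, check=True)

          def hand_over(device, mountpoint, old_owner, new_owner):
              run(old_owner, ["umount", mountpoint])         # release on old owner
              run(new_owner, ["mount", device, mountpoint])  # acquire on new owner

          hand_over("/dev/sda1", "/mnt/shared", "hosta", "hostb")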
  • Data Integrity? (Score:3, Insightful)

    by iCEBaLM ( 34905 ) on Wednesday November 13, 2002 @06:05PM (#4663706)
    That's all I need, two computers with two different ideas of how the filesystem should look performing simultaneous reads/writes on the same disk, fubaring everything. Are you sure this is what you want? Why not just use simple ethernet sharing, NFS/Samba/whatever? I'm thinking it would be a lot more stable.

    -- iCEBaLM
    • Re:Data Integrity? (Score:5, Informative)

      by david.given ( 6740 ) <dg@cowlark.com> on Wednesday November 13, 2002 @06:27PM (#4663903) Homepage Journal
      That's all I need, two computers with two different ideas of how the filesystem should look performing simultaneous reads/writes on the same disk, fubaring everything. Are you sure this is what you want? Why not just use simple ethernet sharing, NFS/Samba/whatever? I'm thinking it would be a lot more stable.

      Well, d'oh, you use a file system that supports simultaneous accesses, don't you?

      There are good reasons for wanting such a thing. For example: say you have a mission-critical database server. You want instant failover if anything goes wrong. Your database is being continuously modified, so merely duplicating it won't do.

      One solution is to do something similar to the above. Have two database machines plugged into the same drive (in the real world, a RAID drive array). The database software is intelligent enough to cope with simultaneous accesses. Now you can send a query to either server and they'll access the same data, at full hard disk speeds. Pull the plug on one server and the other just keeps rolling.

      Why not use ethernet sharing? Because there's a single point of failure. Your drive is attached to a file server. Your file server is attached to your database servers. If your file server goes down, your database servers are cut off.

      Solutions to this? Duplicate your file server. Broadcast your data to all file servers, all attached on some high-speed network. This'll work. Unfortunately, you've just reinvented, in a heavy and expensive way, having one disk attached to several machines at once...

      You see, your file servers are duplicating all the functionality of a RAID array but with a lot more overhead. Your high-speed network is duplicating all the functionality of your Firewire or SCSI bus, again with more overhead. Your databases now have to send their file accesses over that network, which will be slow. There's overhead everywhere.

      By simply using a drive (or drive array) attached to several servers, you get the same functionality, much cheaper, and with a much simpler setup. Remember, complex == unreliable. You can buy certified, five nines RAID arrays off the warehouse shelf and they will Just Work. You can buy high-speed SCSI cards with multi-initiator support (this is the magic phrase to Google for) and they will Just Work.

      Of course, it's not simple. You need a piece of software known as a distributed lock manager to handle the atomicity issues (a toy sketch of the idea follows at the end of this comment). But you can buy them, and they will Just Work.

      This kind of setup has been around for years in the big iron SCSI world. I haven't come across anything yet for FireWire, though. Personally, I'd be a bit dubious as to whether you're going to get anything fast enough or stable enough over FireWire; high-performance SCSI beats FireWire into the ground, and all the kit is available off the shelf. But I'd be interested to see if anything comes up.
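
      To make the lock manager idea concrete, here's a toy in-process sketch of the core rule: shared (read) locks coexist, exclusive (write) locks don't. A real DLM runs over the network, with queues, callbacks and deadlock detection; none of this is any particular product's API.

          # Toy lock manager: lock a resource "shared" or "exclusive" before
          # touching it on disk. Shared locks stack; exclusive excludes all.
          import threading

          class LockManager:
              def __init__(self):
                  self._locks = {}                 # resource -> (mode, holders)
                  self._cv = threading.Condition()

              def acquire(self, resource, mode):
                  with self._cv:
                      while not self._compatible(resource, mode):
                          self._cv.wait()          # block until compatible
                      _, holders = self._locks.get(resource, (mode, 0))
                      self._locks[resource] = (mode, holders + 1)

              def release(self, resource):
                  with self._cv:
                      mode, holders = self._locks[resource]
                      if holders == 1:
                          del self._locks[resource]
                      else:
                          self._locks[resource] = (mode, holders - 1)
                      self._cv.notify_all()        # wake any waiters

              def _compatible(self, resource, mode):
                  if resource not in self._locks:
                      return True
                  held_mode, _ = self._locks[resource]
                  return mode == "shared" and held_mode == "shared"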

      • Well, d'oh, you use a file system that supports simultaneous accesses, don't you?

        Not by different *computers*. File allocation tables are normally cached in memory by the OS, because they're small and it speeds up disk access. It's a race condition. Suppose you have this wondrous hard drive attached to two computers: power both machines up at the same time, they both copy the file system data into memory, and now they want to write... They both allocate the same disk blocks and start writing over each other, turning your data into nicely destroyed swiss cheese (see the miniature demonstration below).

        Why not use ethernet sharing? Because there's a single point of failure. Your drive is attached to a file server. Your file server is attached to your database servers. If your file server goes down, your database servers are cut off.

        Or you could get a network-attached storage (NAS) appliance...
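
        The race is easy to show in miniature: give two "nodes" their own cached copy of the free-block map and let each allocate a block. (Toy Python; all names here are made up.)

            # Two nodes boot, each caches the on-disk free-block map, and
            # each allocates "the next free block" from its own stale copy.
            disk_free = [True] * 8                 # True = block free on "disk"

            class Node:
                def __init__(self):
                    self.cache = list(disk_free)   # private cached copy

                def allocate(self):
                    block = self.cache.index(True) # first free block *per cache*
                    self.cache[block] = False
                    disk_free[block] = False       # write-back to "disk"
                    return block

            a, b = Node(), Node()
            print(a.allocate(), b.allocate())      # both print 0: same block, clobbered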

        -- iCEBaLM

  • Really bad idea (Score:1, Insightful)

    by nocomment ( 239368 )
    I remember this discussion from before. This is a horrible idea. Maybe in the future there will be some kind of controller on the firewire drive that lets you do that, but for now, it's just an IDE drive with an adapter. I'd forget about this option until drives get smarter. What is the point of this, after all? If a machine is new enough to have firewire, it's new enough to have ethernet.

    Networking is a good idea because:

    It's scalable, as you can add more machines later with minimal fuss

    It isn't any slower over ethernet (especially 100Mb and 1000Mb)

    LAN parties!!!

    It's easy to set up multiple OSs to see the drive

    It's a bad idea because:

    It's not scalable (easily)

    you risk data corruption (even if it works)

    the drive can't handle more than 1 operation at a time

    it's an IDE drive, so the drive follows the rules of IDE; the bus is FireWire, but the adapter handles the rules of that

    some OSs expect different things out of a filesystem: Windows expects it to be nice and neatly formatted in FAT32 or NTFS; Linux expects ext2 or ext3 (or any of a slew of others); Macs expect to see a partition table and that's it (that's why formatting a Mac HD takes only as long as hitting the "Initialize" button). Therefore it is not an easily cross-platform, scalable, data-safe method. Use the network option: it isn't noticeably slower, and it is cross-platform, very scalable, and very data-safe.

    • You obviously have never done ANY sort of high availability clustering. This is an extraordinarily common and important concept that many people pay LOTS of money to get. Veritas Cluster File System (and the other components of the Veritas Cluster suite) do exactly this (and more) at a serious premium.

      This is the way people with real uptime requirements do it. I know, because I've got systems that do this, and I've interviewed I can't tell you how many sysadmin candidates with the same experience.

      As for networking, your points are all completely wrong and irrelevant for serious high performance, high availability applications. Ethernet has far too much overhead (at least with any commonly available file service protocols; iSCSI *might* change all that, but it remains to be seen), scalability in transactions per second IS LOWER over these protocols, and LAN parties aren't any kind of interest for people running serious databases.

      As for why you think it's a bad idea: all of those problems are exactly what the various software packages out there are for. I won't get into the IDE v. SCSI v. FireWire argument.

      Please, stop assuming the desktop is where computing ends. It isn't.

  • Are you crazy? Use a network filesystem hosted on one machine, or network storage. Sharing the same physical drive between two concurrently-running operating systems will require special drivers and extra communication between the two (at best), and be totally unreliable at worst.

  • also see (Score:3, Insightful)

    by rakerman ( 409507 ) on Wednesday November 13, 2002 @07:57PM (#4664637) Homepage Journal
    The short answer is: to do this you need both machines to agree on file locking and such, so they don't trash one another's files. This is not something you get built into FireWire, nor into most operating systems.

    Also see Ask Slashdot: IEEE1394-based Storage Area Network? [slashdot.org]

  • This is a *good idea* (tm). It means that if one machine goes down, the storage/service based on the storage is still available.

    The software to manage multiple access via firewire was recently released under the GPL by Oracle [oracle.com].


  • We've hooked up 2 PowerBooks and a G4 Mac, and shared 2 FireWire HDs, 1 scanner, and 1 digital camera.

    All the Macs see it, and you also see the other Macs' HDs over your connection, if you have file sharing on.
  • We have a SANCube here at work. It's not really full sharing of a device either. Only one computer can have read/write access at a time, and this has to be switched manually via software. The others connected can have read access. It's prone to problems, and the only way to fix file system problems is a reformat - a serious PITA if it's nearly full.

    It also doesn't have OS X support. Right now it's been turned into a JBOD for a single-user machine. If we need to share it, we still use the LAN; if we were sharing data full time, we would just leave file sharing on, or hook it to a file server.

    As others have said, get some form of NAS/file server.
  • In IDE, the controller and computer do not tell the drive how to write data, they just tell it what to write. They say: burn this data to the MBR, then burn this here and that there. They do not "control" the device, they "instruct" it. Get it?

    Therefore, an IDE device on a firewire chain functions properly and without any problems, because computer 1 says "write this here" and it happens, and computer 2 says "write this here" and the drive says "there is data there, overwrite?" and then computer 2 says "naw, just write it over there instead." Get it?

    Nice and easy. IDE drives are "smart": they are self-controlling, and they have a buffer to help keep things straight, arrange data for reading or writing, and improve performance. To show the difference between "smart" and "dumb" devices: a floppy is dumb, it does EXACTLY what the floppy controller says to do, and it can at most check to see if a disk is in or write-protected.

"Gravitation cannot be held responsible for people falling in love." -- Albert Einstein

Working...