Hardware

IEEE1394-based Storage Area Network?

Hank asks: "I work for Hewlett-Packard and just recently installed my first SAN at a customer site. It was much fun; I was blown away by the ease of the storage device management and the allocation of storage space across the systems. Being a professional environment, it was highly available, ran over FibreChannel through switched fabric, and cost upwards of US$250k -- not really affordable for most households. At roughly the same time I started looking at IEEE 1394 cards for some video editing, and an idea came up: Would it be possible to build a low-cost SAN based on FireWire cards, hubs and devices? What would storage device management look like (the (de-)allocating of LUNs / slices / partitions)? What about support for multiple OSes on the SAN? How about this: would it be possible to create a Linux-based disk array with an IEEE1394 interface (old P200, crammed with disks, software RAID, lots of RAM for caching, FireWire interface, looking/acting like a single disk to the outside world, storage device management via web frontend)?"
  • If you're setting up a single PC with lots of storage, then it's not a "storage area network", it's just a "plain old file server". The difference between the two is that SAN sounds more professional to PHB types, and it is generally seen as a turn-key solution for storage. You just plug in some Cat-5 and give it a password, TADA. I wouldn't be surprised to find out they're just linux boxen with custom software for storage management. That's the big plus of SAN devices: easy to install and use. You don't need a linux admin to set up a Maxtor MaxAttach (sp?) rig, you just need any old NT/Novell twit who at least knows how many megs are in a gig.
    • by BitGeek ( 19506 ) on Wednesday September 25, 2002 @03:56PM (#4330326) Homepage

      You're missing the point: use FireWire and you get FireWire's high performance. Cat5 and you're back into ethernet space and packets.

      Firewire supports sustained high bandwidth transfers between multiple drives and multiple computers.

      I mean, if you don't need the performance of a SAN, then sure, use Cat5 and you have a fileserver.

      But if you're looking for something between FCAL and Ethernet, then Firewire is likely a great midrange choice.

      • Granted, firewire is cheap, but it still has distance limitations. You also need those special cables.

        I've thought about this some, and was thinking iSCSI as an option.
        If performance is REALLY an issue, I suppose you could invest in GigE.

        As for the SAN / NAS issues, what we are seeing in the industry is that people want / need both. Some vendors are starting to deliver devices that do both in one box. Raw disk for databases and such, and network file systems for other tasks.

        Frankly for home systems, NAS should be just fine.

        • I've been using a 12 foot FireWire cable for a while now with no trouble.

          Firewire2 will allow optical fiber as an alternative which has essentially an unlimited run length. Well, unlimited inside of an office, not kilometers.

          I think that even Gigabit Ethernet will not provide the same performance as Firewire. Effectively you can get 500Mbps over GigE and that's awfully close to current Firewire's 400Mbps -- except that Firewire will sustain that transfer rate, and I'm not sure any ethernet can. But I could be wrong.

          There are tradeoffs for both. If you want to link a cluster of machines and drives at high performance without going to FCAL, Firewire is a great way to go. If you want to link a lot of computers to the storage, then Gigabit ethernet is probably the way to go -- but that ethernet could terminate in a server that has a Firewire network behind it linking a bunch of drives.

          • Actually, Gigabit Ethernet should sustain something around 400Mbits/sec just like Firewire I. The difference is that next year we're still going to have Gigabit Ethernet but we're much more likely to have 800Mbits/sec sustained Firewire transfers. For some use cases, that's going to be a key feature.
            • Or I could have 10Gig ethernet and wait a half decade for you to catch up. Well not me personally since it costs well over $1,000/port. But I will use it for the backbone of my iSCSI SAN.
              • I hadn't kept up with the 802.3ae working group. You're correct there. However on pricing, try $80k [infoworld.com] per port.

                Ugh!

                I would expect Firewire-2 to be somewhere in the sub $1 per port range. At that price differential, I would expect Firewire to win out for quite some time.
                • I could be wrong but I believe the 10Gb module has more than one port. Plus a module would only cost me about $12,000 because of internal discounts.
                  • Ok, let's take your figures, instead of $80,000:$1 it's $12,000:$1. Somehow I think that 10Gb Ethernet is going to stay out of the disk array field for quite some time while Firewire has realistic possibilities in the relatively near future (i.e. as soon as somebody releases the necessary drivers).

            • Gbit Ethernet can sustain more than 500Mbit/sec, but usually doesn't get the chance to do so. The problem is running the TCP/IP protocol over it. TCP/IP runs, over dodgy links, the breadth of the world. To achieve this it has things like internal checksums, time-to-live and so on -- which require significant intervention at the driver level to manage. And then you probably have timeout and retry systems to manage as well.

              Firewire, Fibre Channel, SCSI etc. don't have this flexibility. They assume that the device you are talking to is on this bus/ring and that messages will, give or take parity errors (which can be reported), get through reliably. Therefore they can have much lighter protocols. You can use what, in Ethernet terms, is a MAC address, not an IP address.

              The firewire suggestion at the head is perfectly sensible. But if you were to go into the ethernet driver at the packet level, not at the IP level, I reckon you could get 90% of theoretical bandwidth out of an ethernet-based connection. We currently get over 500Mbit/sec using UDP connections. Drop a layer of protocol and it could be even faster.
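
              To put a number on the header overhead alone, here's a rough Python sketch of per-frame efficiency on standard 1500-byte frames (textbook header sizes; it deliberately ignores the per-packet CPU and driver costs, which are the real reason you see ~500Mbit/sec rather than ~950):

                # Rough goodput of Gigabit Ethernet for bulk transfers, comparing raw
                # Ethernet frames against TCP/IP on top. Illustrative only.
                LINE_RATE_MBPS = 1000             # nominal GigE line rate
                MTU = 1500                        # standard Ethernet payload
                WIRE_OVERHEAD = 8 + 14 + 4 + 12   # preamble+SFD, MAC header, FCS, inter-frame gap
                IP_TCP_HEADERS = 20 + 20          # IPv4 + TCP headers, no options

                def goodput(header_bytes):
                    payload = MTU - header_bytes
                    on_wire = MTU + WIRE_OVERHEAD
                    return LINE_RATE_MBPS * payload / on_wire

                print("raw Ethernet frames:  %.0f Mbit/s" % goodput(0))               # ~975
                print("TCP/IP over Ethernet: %.0f Mbit/s" % goodput(IP_TCP_HEADERS))  # ~949

              The headers themselves only cost a few percent; it's the checksumming, retries and per-packet driver work that eat the rest.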

              • I thought CSMA/CD was what caused most of the drop-off in efficiency. How are you going to stop that?
                • CSMA/CD only causes a lot of slowdown when you have a lot of collisions - i.e. when you have a lot of small packets fighting for the bandwidth. Which is another of the consequences of TCP/IP - flow control packets etc., and obviously occurs in a "workstation" environment with many independent systems talking to servers. In a SAN-type network using direct Ethernet packets, you would normally send bulk data in large packets which would swamp the bus for a while, then back off. But, like a scsi bus, you would tend to get one logical transaction loading the net for a while, then backing off while it seeks to the next logical block. And SAN-type applications tend to have a few heavy disk users, rather than hundreds of light users.
        • ugh, and watch your bandwidth go down as your load goes up? no thanks...
      • And more importantly, with ethernet, nobody makes anything like Hubzilla [charismac.com].
    • by rpresser ( 610529 )

      Um, no. You've just described NAS [everything2.com] - Network Attached Storage. Shared storage from NAS devices appears as NFS (or Samba, Mac, or whatever) and you can mount it on any client.

      A SAN [everything2.com] - Storage Area Network - is when you have lots of RAID storage being shared by several servers. Each server believes it is directly attached to a physical disk, when actually it's just getting one or more slices of the pooled RAID units.

      • SAN? Ok. I know where the area is in a LAN, and WAN. Are we talking about peripherals here? Is my KVM the middle of my MAN (Monitor Area Network)?

        Please someone just try to sell me a SAN.

        -Felddy
    • You're confusing SAN (storage area network) with NAS (network-attached storage). Understandable mistake, but a mistake nonetheless.
  • Or clustering ... (Score:2, Interesting)

    by TRS-80 ( 15569 )
    A cool thing to do with a firewire SAN would be clustering, ala TruCluster, which presents a single filesystem across many machines (with kernel hacks to allow different files for different machines, eg hostname etc.).
    • Actually, that's not "a cool thing to do" with a SAN. That's the sole purpose of a SAN: to let multiple computers talk to the same set of storage devices. Depending on how your SAN is set up, you may use software to connect (logically) only one computer to each storage LUN, or you may be able to have multiple computers talking to the same storage LUN through some kind of arbitrated filesystem like CXFS.
        • Umm, don't be a dumbass and go off half-cocked. Clustering is not the only use for a SAN; in fact I run both NAS boxes and a SAN that have nothing to do with clustering. We use both to give our boxes managed, expandable storage at a cost much less than direct attached storage, and with the ability to expand far beyond what any case can hold and even beyond what most boxes could talk to directly using scsi.
        • Uh-huh. Sounds to me like you're using your SAN "to let multiple computers talk to the same set of storage devices." In your case, you may be using a shared filesystem, or you may use LUN mapping to assign a unit of storage carved out of a central system to each computer on the SAN. In either case, this is a good example of what I was talking about.

          SANs for storage consolidation, good. SANs for shared access to read-only data, good. SANs for shared access to read-write data, bad.
  • FireCube? (Score:2, Informative)

    by questionlp ( 58365 )
    I remember seeing a while back that a company had a storage device for Macs that allowed several users to attach to the device using FireWire and the supplied software to access the drives in the unit. I can't remember who made it or what it was called... but a Google search can probably bring up a couple of hits.

    I'm not sure if I have seen any PC-oriented FireWire SAN solutions though as FireWire hasn't really been something you would see in a lot of computers until recently.

    I did find a couple when doing a search for "FireWire Network Storage":

    http://www.adept.net.au/1394/nas.shtml [adept.net.au]
    http://www.networkcomputing.com/1118/1118sp3.html [networkcomputing.com] (this is probably what I was thinking of)
    http://www.turnover.com/news/mdm/firenas.html [turnover.com]

  • by Yohahn ( 8680 )
    USB 2.0 could also be used.
    I think it would be cheaper.


    • USB is really slow. USB2 I mean. Its theoretical top performance is 480mbps, but Firewire is actually 400mbps-- sustained.

      You cannot sustain 480mbps over USB2. It's really a very slow protocol. Especially if you plug a 12mbps device into it.

      Even if you don't it really isn't up to speed for talking to more than one disk drive.

      • Are you sure that you aren't just hitting a bottleneck in a USB2 device running connected to a PCI slot? (PCI is the bottleneck)
        • Are you sure that you aren't just hitting a bottleneck in a USB2 device running connected to a PCI slot? (PCI is the bottleneck)

          Not likely (unless the USB controller chip is real crap, which does happen).

          USB2's bandwidth is 480 Megabits which equates to 60 Megabytes per second.

          PCI, in its slowest incarnation (32-bit, 33MHz, the most common flavour) does 132 Megabytes per second, aka 1.056 Gigabits/sec.
          So the culprit is hardly PCI (unless some other PCI card is hogging the bus).
          • These are all theoretical numbers, try it out and measure it.
          • those specifications are worthless. the only thing you should be interested in when measuring bandwidth is sustained throughput. a real-life 32-bit PCI bus (not just the slot, but the entire bus, which may have as many as 6 devices on it) has a sustained throughput of about 80 MB per second. even 64-bit 66MHz PCI 'only' does about 220MB/sec, depending on the implementation.
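
            just to put the numbers side by side, here's a quick sketch -- theoretical peaks versus ballpark sustained figures (the sustained values are rough guesses in the spirit of the above, not benchmarks):

              # Theoretical peak vs rough sustained throughput, in MB/s.
              # "Sustained" values are ballpark assumptions, not measurements.
              buses = {
                  #                   (peak, sustained)
                  "USB 2.0":          (480 / 8,  30),   # spec 480 Mbit/s; real transfers run well below it
                  "FireWire 400":     (400 / 8,  40),   # tends to hold close to its rated speed
                  "PCI 32-bit/33MHz": (132,      80),   # whole shared bus, per the figure above
              }
              for name, (peak, sustained) in buses.items():
                  print("%-17s peak %5.1f MB/s, sustained ~%d MB/s" % (name, peak, sustained))
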
    • I paid $19.95 for my last FireWire card, and that was about a year ago. Looking at Pricewatch, the cheapest card I can definitely say is USB2 is $16.85, vs $18.25 for a FireWire card that comes with a 6ft cable and Ulead software (which sucks, btw, but at least it's free).

      Not much of a price difference, especially when you consider FireWire's considerable performance advantage, but that's already been discussed by other responders.

    • IIRC., on a given USB chain/bus/tree/thingie/etc., there must be one *and only* one master device (generally a computer) that controls it and manages the transfer of data between devices, while other devices act as dumb peripherals waiting for the master to do something with them. Firewire, OTOH., resembles a peer-to-peer network, in that each device can be an intelligent controller and can initiate transfers to/from other devices on its own. Thus, Firewire is ideally better suited to building a SAN than USB. I'm not saying that a USB-based SAN would be impossible to build, but that it'd require some serious hacking in order to coax it into something more useful than a very fast serial port.

      • On the other hand, there are 1 chip usb-client solutions. Point me at a 1 chip firewire solution and I'll believe that firewire could do it cheaper and better.

        The engineering would be cheaper for usb2.
  • i know that it isn't ieee 1394, but if you want SAN capability hosted by an off the shelf linux box, you may want to take a look at some early [uml.edu] implementations [sourceforge.net] of the draft iSCSI spec [ietf.org]. basically, it'll let you present scsi devices over IP, giving you a SAN over any IP network (preventing you from dropping $$$ on fibre channel infrastructure).
    --------
    • The only downside of iSCSI is the fact that Gigabit Ethernet will give you a maximum burst rate of 125MBytes/sec (+/- 5MB/s), but that is without the overhead of the TCP/IP protocol and the fact that you may have other devices sharing the bandwidth on the same PCI bus. Of course, good Gig Ethernet NICs and switches aren't as expensive as FC switches and host bus adapters. 10Gig Ethernet will definitely provide a lot of available bandwidth for iSCSI, but 10Gig Ethernet NICs and capable switches aren't exactly cheap.

      Fibre Channel is expensive, but at least you get more of the 1Gb/s or 2Gb/s bandwidth than you would with Ethernet + TCP/IP overhead.

      Now what would be nice is a "SAN" or shared storage unit that can support multiple Serial Attached SCSI channels :) Of course, SA-SCSI isn't available just yet.
      • understood about throughput issues (even w/ gbit ethernet), but as the question was for a home setup, he obviously isn't going to be dropping the quid on a new brocade switch and an emc symmetrix. iscsi can run _right_now_ over hardware he probably already has.

        just not ieee 1394 that i know of :)
        • That I definitely understand as I also fall into that category... somewhat :)

          I think iSCSI would be a nice solution for those who can't spend a whole lot of greens or quids... even if he needed special NICs (I think those NICs offload some of the processing onto a chip on the NIC rather than pelt the system processor), they aren't as expensive as 2Gb/s Fibre Channel adapters.
  • by Gruturo ( 141223 ) on Wednesday September 25, 2002 @04:18PM (#4330514)
    As others already pointed out, you are confusing the NAS and SAN concepts (but is it your fault? look at stuff like EMC Celerra HighRoad and then you'll be confused :-) )

    Anyway,
    Want to exploit 1394 (heck, we can finally call it Firewire!) to mount a disk? You just need a 1394 enclosure for your regular IDE disks. Example [1394store.com].

    Want to exploit 1394 to access a network share via SMB/NFS? You can, with IP-over-1394 (works on Apple, Linux, Win ME and XP. Not on 2000).
    You just load the correct modules and it shows up like a network interface.
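
    If the eth1394 driver from the Linux1394 project is built for your kernel, the whole dance is roughly the sketch below (interface name and addresses are just examples, and a plain modprobe/ifconfig by hand works just as well):

      # Minimal sketch: load the IP-over-1394 driver and configure the
      # network interface it creates. Names and addresses are examples only.
      import subprocess

      subprocess.run(["modprobe", "eth1394"], check=True)   # IP-over-1394 module from the linux1394 stack
      subprocess.run(["ifconfig", "eth1", "10.0.0.1", "netmask", "255.255.255.0", "up"], check=True)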

    Just my 0.02.

    I am not associated with the linked shop, I just happen to be a happy customer of theirs. Their Fire-I webcam is really cool (640x480x30fps) and it's amazing how well it can focus on extremely near objects, it's almost a microscope. I put it in contact with the screen and was able to focus on single pixels.... now that's a nice way to really study ClearType :-)

  • A true SAN is probably overkill for household use unless your hobbies include rendering and editing Pixar-like shorts with your wife/girlfriend/dog/hamster working in tandem on one or more other workstations. That's about the only thing I can think of that a home user could use that kind of storage and speed for. Or you want to build some kind of 4-PC DIY TiVo with shared storage. Of course I don't know why you would.

    I'm agreeing with Billco. If you've got a Switched 100Mb Ethernet LAN in your house (Since you're toying with building a DIY SAN, I'm sure you do), just build a fileserver. The cost, effort and extra cable spaghetti just don't seem to be worth it. If you build a server, it can do a hell of a lot more than just locally share files too. (DHCP, LDAP, E-mail, HTTP.... ) And considering what you'd spend on a SAN implementation, you could get a pretty nice server for your home.

    As questionlp pointed out, if you've got Macs, the SANcube [sancube.com] is in a price range that's manageable for the hard core (employed) geek.

    Remember, use the right tool for the job. Don't kill flies with a bazooka.
  • My "fileserver" (also DHCP server) is a Pentium 166 with 32 megs memory and an old 10 mbit 3com509b card. Basically, $25 on ebay. A floppy drive was used to install linux (net install of Debian), then removed. There is no cd drive, and no video card in the machine. A 2 gig HDD is used for booting, and for most of the system files, and an 80 gig HDD is used for storage. (It was big at the time).

    Running Samba, I can saturate my 10 mbit network with the machine. With tests done on a 100 mbit network, I reach about 30% use. However, the bottleneck is not the CPU or memory, it seems to be the onboard IDE. With a PCI ATA 100 card, performance should go up.

    All in all, it's a nice machine. Since it's a desktop, it fits nicely under the printer it shares. An SSH server allows me to securely log in, change any system settings, and do updates. It's quiet, cheap, and effective. With only a power cable, an ethernet cable, and the printer cable, it's neat. And did I mention upgradeable?

    A hardware RAID-IDE card should cost me about $250. [Haven't tried software RAID on a P166 and I have no urge to.] That shouldn't put any load on the CPU, and would provide redundancy. Getty on a serial port would be nice as well. If I want to, I can also swap the drive for something bigger without worrying about the system supporting it. With ext3, it handles power outages well.

    It works.
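
    If you want to see whether it's the IDE chain or the wire that's the limit, a dumb sequential-read timing like this tells you quickly (the path is just an example; use a file bigger than your RAM so the cache doesn't flatter the result):

      # Times a sequential read in 1 MB chunks and prints MB/s.
      # If this comes out well above ~12 MB/s, a 100 Mbit network,
      # not the disk, is the next bottleneck.
      import sys, time

      path = sys.argv[1] if len(sys.argv) > 1 else "/storage/bigfile"  # example path
      chunk, total = 1024 * 1024, 0
      start = time.time()
      with open(path, "rb") as f:
          while True:
              data = f.read(chunk)
              if not data:
                  break
              total += len(data)
      elapsed = time.time() - start
      print("%.0f MB in %.1f s = %.1f MB/s" % (total / 1e6, elapsed, total / 1e6 / elapsed))

    (With only 32 megs of RAM, the page cache won't inflate the number much anyway.)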

    • by Anonymous Coward
      You stupid twat. Do you have the slightest fucking idea what a SAN is? Do you? You built a fucking file server, and from the sound of it a shitty one at that. (Only 30% of wire speed on a 100BASE-T network? Asshole, I can do 100% of wire speed between two fucking Mac laptops.)

      I don't ever want to hear you say "I did something similar" again unless you have some tiny, microscopic clue about the subject of conversation.

      You are hereby banned from posting to Slashdot for twenty-four hours. It's early autumn in the northern hemisphere and early spring in the southern; there is no habitable point on Earth where the weather is not absolutely beautiful right now. Go outside and exercise something other than your wanking hand for a while.
      • Well, either you're a troll or an idiot. Let's assume you are the latter.

        From the article -
        How about this: would it be possible to create a Linux-based disk-array with an IEEE1394 interface (Old P200, crammed with disks, software RAID, lots of RAM for caching, Firewire interface, looking/acting like a single disk to the outside world, storage device mgmt via web-frontend)?

        Your reading comprehension is horrible. So is your technical knowledge. Let me educate you.

        First of all, this guy wants to use a pentium 200 as the basis of the system. This places technological limitations on the system. For example, some pentium chipsets have a caching problem with anything over 64 Megs of memory. The kernel can work around this (basically by using anything above 64M as a swap file) but there are limitations to performance. There is also a limitation of how much memory you can put in the old pentium motherboards. If you're looking at a pentium-based solution, you're looking at something that's cheap and not appropriate for heavy loads.

        Now what disks are you going to put in this cheap system? SCSI? Only if you have the brains of the anonymous coward that I'm replying to. On pricewatch, a 146 GB SCSI drive is just under a grand. A 120 GB IDE drive is about $150.

        You talk about getting 100% speed between two laptops. Interesting. Not sure how that applies, since I'm willing to bet money that my packets traveled just as fast. If you're talking about bandwidth, we have a different problem. In a sustained read from a device, the limiting factors will be the HDD speed, IDE bus speed, and ethernet card. Let's look at the HDD speed. Tom's Hardware benchmarked a recent 120 GB HDD at between 20 - 40 mbytes/second. So, the hard drive should be able to saturate a 100 mbit/second ethernet network in a sustained read. Unfortunately, when you run a new HDD over a 5-year-old IDE bus, the performance goes to hell. Say hello to 3 mbytes/second. If you don't realize that there will be a performance difference between a new Mac laptop and some hardware that is a half decade old, then you're naive. No matter who you call 'Asshole', you won't get 100mbit/s out of a 5-year-old IDE bus.

        So, how do we fix this? By either putting in a new IDE card, or RAID. Let's buy a nice card that does everything in hardware. The ATA-100 specification gives us more than enough speed to match the hard drive. A RAID card allows us to combine hard drives and increase the maximum data transfer rate. However, we have a 132MB/s limit on the PCI bus (32-bit, 33MHz bus). This should be fast enough to max out an ethernet network (even gigabit) or firewire.

        Up to now, this looks like building a bloody file server, doesn't it? Figuring out bandwidth and speeds of the components. *Slap* There goes your 'similar' complaint. In fact, some ethernet-based NAS are nothing more than x86 hardware and a custom BSD system. (Yes, I know SANs and NASes are different, but the storage media tends to be the same.)

        You can't replace the PCI bus, so if you're using a pentium-based system, you're limited to 132MB/s, at peak efficiency. In practice, you'll get less than this. If the hard drive will be handling non-sequential reads (which it probably will), expect another drop.
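
        To put the whole chain in one place, here's a toy calculation using the rough figures above (not benchmarks) -- whole-path throughput is just the minimum of the stages:

          # Whole-path throughput is governed by the slowest stage in the chain.
          # Figures in MB/s are the rough ones discussed above.
          def bottleneck(stages):
              return min(stages.items(), key=lambda kv: kv[1])

          stock    = {"disk media": 20, "IDE path": 3,   "PCI 32/33": 132, "100Mbit Ethernet": 12.5}
          upgraded = {"disk media": 20, "IDE path": 100, "PCI 32/33": 132, "100Mbit Ethernet": 12.5}  # ATA-100 card

          for label, stages in (("stock P200", stock), ("with ATA-100 card", upgraded)):
              where, rate = bottleneck(stages)
              print("%-18s ~%.1f MB/s, limited by %s" % (label, rate, where))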

        Basically though, building a NAS is nothing more than building a file server, then instead of running ftp/smb/nfs/afs or the like, hunting down or hacking something to provide NAS over firewire. From a hardware perspective, this project is easy. Software might prove a challenge.

        As for the weather outside, not all of us live between 45N and 45S. Some parts of the world are rather chilly this time of year.

        And that, sir, is why you are an idiot.

  • Don't use firewire. (Score:3, Informative)

    by stienman ( 51024 ) <adavis@@@ubasics...com> on Wednesday September 25, 2002 @04:37PM (#4330709) Homepage Journal
    The upcoming serial ATA standard will give you better performance at a lower cost. A firewire drive is large, expensive, and consumes slightly more power. All you gain over current IDE technology is hot swap, and that will be solved with serial ATA.

    But what you are really after are the tools to manage such a beast. The physical implementation shouldn't matter to the developers - all the software needs to know is that storage exists that the user needs to use, and how to read from and write to said storage. It shouldn't matter whether it's an IDE drive, a firewire drive, a USB drive, a SCSI drive, a 1000-tape library, or any combination of storage devices - which, IMHO, will be a great differentiating feature from commercial packages.
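
    Something like this is all the management layer really needs to see -- a sketch of the idea only, with made-up names, not any particular package's API:

      # The management software only needs a uniform block interface;
      # whether the backing store is IDE, FireWire, USB, SCSI or a tape
      # library is the driver's problem. Purely illustrative.
      from abc import ABC, abstractmethod

      class BlockStore(ABC):
          @abstractmethod
          def size_bytes(self) -> int: ...

          @abstractmethod
          def read(self, offset: int, length: int) -> bytes: ...

          @abstractmethod
          def write(self, offset: int, data: bytes) -> None: ...

      class FileBackedStore(BlockStore):
          """Anything that shows up as a file or block device node can back this."""
          def __init__(self, path: str, size: int):
              self._path, self._size = path, size

          def size_bytes(self) -> int:
              return self._size

          def read(self, offset: int, length: int) -> bytes:
              with open(self._path, "rb") as f:
                  f.seek(offset)
                  return f.read(length)

          def write(self, offset: int, data: bytes) -> None:
              with open(self._path, "r+b") as f:
                  f.seek(offset)
                  f.write(data)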

    Yes, the free SAN package handles your old room size tape robot as well as this rack of serial ATA drives, and will treat them accordingly - near line storage in the tapes (semi archive), on line storage in the HD, and off line (off site) over the WAN link to the storage cluster at your other shop. If you need an extra terabyte just go to officemax and plug in a firewire drive until the tech comes out and adds more serial ata devices to your drive chain.

    Of course, you could buy the SAN package available from x, or y, but you'll pay dearly for it, and you can't add storage to it yourself. Oh, and it only works with their hardware.

    -Adam
    • Adam, you've confused a SAN with an HSM (hierarchical storage management) system. They're not the same thing. In fact, they're really kind of incompatible ideas. In a SAN, you have a number of computers talking directly to a number of storage devices, without any arbitration in between. In other words, it's like talking to a file server, only without the file server*.

      An HSM system, on the other hand, uses software running on a file server to consolidate several different kinds of storage devices into one logical filesystem. As you write to the filesystem (over the LAN), the server puts the data on disks. When the disks start to get full, the server begins, in the background, moving data from the disks to an automated tape library, gradually freeing up disk storage as it goes. This happens without the client's knowledge; it looks like the server just has a whole lot of disk space available. When the client requests a file that's not on disk, the server stalls for a bit while it retrieves the data off of tape, then it returns the data to the client. So in an HSM system, client-to-server writes are really fast, but reads can be really, really slow.

      Since a SAN depends on directly attaching computers to storage without a server in between, and HSM depends on having a server there to manage the different types of storage devices, they're kind of incompatible ideas.
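
      To make the write-fast/read-maybe-slow asymmetry concrete, HSM behavior boils down to something like this toy model (capacities and policy here are invented purely for illustration):

        # Toy HSM: writes always land on disk; when disk passes a high-water
        # mark, cold files migrate to tape in the background; reading a
        # migrated file stalls while it is recalled. Illustrative only.
        DISK_HIGH_WATER = 80     # arbitrary units

        disk, tape = {}, {}

        def write(name, size):
            disk[name] = size                        # client sees a fast write
            while sum(disk.values()) > DISK_HIGH_WATER:
                victim = next(iter(disk))            # oldest file (insertion order)
                tape[victim] = disk.pop(victim)      # background migration to tape

        def read(name):
            if name in tape:                         # recall: client waits on the tape robot
                disk[name] = tape.pop(name)
            return disk[name]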

      * Reminds me of the old Einstein quote about radio. "You see, wire telegraph is a kind of a very, very long cat. You pull his tail in New York and his head is meowing in Los Angeles. Do you understand this? And radio operates exactly the same way: you send signals here, they receive them there. The only difference is that there is no cat."
    • Just one point -- Serial ATA is going to run in parallel with vanilla ATA for a while, while vendors skim money off high-end users. It's not going to be "SCSI at ATA prices", at least not right away.
      • It never will be scsi at ata prices. Serial ATA will reduce pin count on chipsets (lower cost), reduce cable clutter and cooling issues (lower cost) and is still significantly faster than any physical hard drive out there right now anyway.

        It isn't meant to replace scsi, it's meant to replace parallel ATA.

        -Adam
    • you forgot to mention something he'd gain by using firewire - it's available now. he won't have to wait a year before native serial ata drives and controllers are out, and he can use 1.0 drivers with this new hardware...
  • by rakerman ( 409507 ) on Wednesday September 25, 2002 @05:31PM (#4331141) Homepage Journal
    Ok, it's not clear from your posting exactly what you want.

    Do you want NAS?
    That's Network Attached Storage. Currently almost entirely Ethernet based. You get a box with some disks and software, and it sits on the Ether looking like a fileserver, maybe just a CIFS server for Windows boxes, more likely both CIFS and NFS to support Windows and UNIX.

    Do you want a SAN?
    That's a Storage Area Network.
    A bunch of disk boxes connected together with a switched Fibre Channel network. Servers connect by Fibre Channel directly into the network.

    Do you want a NAShead on a SAN?
    A NAS device acts as a front-end to the SAN, so you have an Ethernet file-sharing frontend onto a Fibre Channel storage network backend.

    The problem with implementing any of these is they're about more than a transport medium. A NAS is more than Ethernet. A SAN is more than Fibre Channel. Those media mostly just pump the data around. It's a ton of software that handles the sharing of files.

    So sure, you can string a bunch of disks and CD burners and whatnot together with FireWire. No problem. I do it myself. "FireWire" disks are almost entirely just an enclosure with a normal ATA disk inside and an ATA-to-FireWire bridge. Adds a small cost onto the price of a regular IDE drive, that's it. You can buy the enclosures yourself and do it quite cheaply.

    However, the operating systems that you connect to the FireWire are going to have no freaking idea about filesharing. If you try to connect more than one host, it won't know what to do.

    What you need is FireWire ***PLUS*** filesharing software.

    Unibrain makes something they call FireNAS

    http://www.unibrain.com/home/

    That's about the closest thing in existence to what you describe.

    If you're wanting to use IP-over-1394 (RFC 2734), be aware that Microsoft's stack is the main working one. The Linux stack is in beta and Apple has no plans to implement IP-over-FireWire at all.

    You can find more info on IEEE-1394 at

    http://www.cs.dal.ca/~akerman/gradproject/project-links.html#IEEE1394
    Also check out the Linux 1394 project

    http://linux1394.sourceforge.net/
  • As others have pointed out, what you're talking about is a fileserver, not a SAN.

    However, interest in cheap SANs is rising, and I suspect it won't be long before a couple of projects start up to build these, then they get polished, then corporate types get interested in the big cost savings, and they start using these. It'd be particularly cool if Linux beat Windows to the gun here.

    Before you scoff, remember that that's what happened with the advent of clustering cheap PCs -- the custom supercomputer is nearly a dead beast now.

    There are enormous profits on SANs, so an open-source project could do wonders here.
  • Oracle has a modified ieee1394 kernel module that would allow multiple hosts to use FireWire-attached drives just like shared SCSI. They did it for cheaply testing their cluster file system, but, hey, if it works ...
  • Speaking of this sort of thing, can anyone point me to a FireWire external disk box that holds more than one drive? I'm thinking something along the lines of this [sun.com] only using 1394 instead of fibre or SCSI.
    • places like firewire depot sell firewire raid enclosures, is that what you're asking about? it might be cheaper to get a mini-tower, some firewire bridge boards, and make your own disk box.
  • I appreciate everyone's comments about SAN and NAS, and apologize for not having made my point clearer.

    I'm not interested in NAS / Fileserver / anything running over Ethernet, simply coz it's a no-brainer to set them up. The whole post to ask-slashdot was probably more theoretical than anything else, to discuss what started as a crazy idea with fellow geeks.

    I know I can get a Firewire-IDE enclosure, but the question is, what happens if it hangs off a Firewire hub together with two computers? Will both be able to see it? How do you partition it (just normal fdisk I assume)? What if the two computers have different OS's? Then of course you've got to make sure that no two computers mount the same partition...

    Then I started taking it further: The device used at the customer site was a real disk array with RAID5DP and lots of cache. Would it be possible to build a low-cost disk array e.g. using linux - very much similar to the SanCUBE. Then you could do much more than with just a FireWire-IDE disk; think of "LUN security" - ensure that computer X only sees the partitions that it's supposed to see...
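
    The "LUN security" part could start out as nothing fancier than a masking table that the array checks before exporting anything -- a sketch below (hostnames and LUN numbers are made up):

      # Toy LUN-masking table for the hypothetical Linux "array": each host
      # only ever sees the LUNs assigned to it. Names/numbers are invented.
      lun_map = {
          "workstation-x": {0, 1},   # gets LUN 0 and LUN 1
          "workstation-y": {2},      # gets LUN 2 only
      }

      def visible_luns(host):
          return lun_map.get(host, set())   # unknown hosts see nothing

      assert visible_luns("workstation-x") == {0, 1}
      assert 0 not in visible_luns("workstation-y")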

    Again, thanks for all the comments!
    • FireWire is not designed for such use. FireWire is optimized for large data transfers and the typical SAN transaction is under 2 kb in size. USB handles small transfers better, but requires significant amounts of CPU overhead that would not make it practical for anything like a SAN.

      I like the idea of a "cheap" SAN, but I think it still needs to perform well. I would have my doubts about doing anything like this with FireWire or USB.

      Now if only Xiotech would make an IDE version of their SAN!
    • if your 'sanbox' could run in target mode, and keep track of file locking issues, this might work. the only computers i know of that can operate in target mode with firewire are macs, and i don't think they can be used simultaneously by more than one machine when in target mode.
      I think you'd still need some sort of agent on the client (device driver, whatever) to communicate with the management device (or OS running on your single device) to get the right partition, or at least keep track of file-locking issues on a shared partition.
    • Hi,

      I was having the same idea as you. Since Firewire can support multiple masters (unlike USB) and there is no difference in hardware between client and master, it is technically possible to use a PCI host adapter, given the correct driver software, to emulate a Firewire external disk (or many disks) to another computer. Apple calls it "target mode" for their notebooks.

      Now from what I heard and tried, this is not possible given the usual Firewire external disks, as those are "bridges" from IDE to Firewire, not full Firewire controllers. I was not able to see the disk on a second computer connected by the second Firewire port of the disk.

      However since you want to emulate the disk using a full-blown computer, it's up to the software to do that. Some people have pointed to the Oracle project handling this (or somewhat handling this; I cannot find out, since their "create a new account" page throws a JSP error...). It might not be exactly what you need, but might be a good starting point.

      For myself, I came to the conclusion that it's not as useful as I thought at first, since you cannot boot from Firewire (Macs can, I know, but my computers are no Macs). And for me, a NAS featuring RAID, LVM, and xfs/ext3 supports what I mainly want: resizing partitions/space in a flexible way for the client PCs.

      But having a SAN on top of Firewire would be nice, and would get useful as soon as you have bootable Firewire cards. I anyhow wonder why no company makes those. You can boot off USB (onboard USB), Ethernet, SCSI, floppy, IDE, but even with onboard Firewire controllers you cannot boot from them. This would enable you to have a diskless PC running usual OSes, not just special ones which can boot via NFS (dunno about Win2k).

      Harald

  • Here's a blurb from Oracle's Linux Page [oracle.com] about some patches they've done to linux for low-cost firewire SANs:
    Firewire Patches fixes some issues with Firewire on Linux and enables shared disk on top of firewire drivers. Firewire allows developers to easily and cheaply build a clustered system on a shared disk, which is useful for testing clustered applications and checking out the advanced features of Oracle's Real Application Clusters technology. The Firewire cards needed to build a cluster can cost as little as 10% as much as the required FiberChannel hardware.
  • Drop what you're doing and go to your local book monger. Go get the ORA book "Using SANs and NAS". Read the descriptions of each.

    Then come back here and ask that question without laughing hilariously.
  • Wow you are ignorant. The purpose of the SAN is certainly not ease of installation. And a Maxtor MaxAttach certainly is not a SAN! It's a NAS! Get it straight!

    A SAN as it is defined today, MUST be created using FC loops or fabric. There is no other topology (unless it could be done with firewire).

    Also, ease of *management* not installation, is the bonus of the SAN. Also, SANs use expensive FC HBA's which have almost 0% cpu utilization even when streaming data at 2Gb/s. Ever checked the CPU utilization on a P200 when streaming data over TCP/IP at 100Mb/s? Or even a P3-733? No comparison to fibre channel. nada.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...