IEEE1394-based Storage Area Network?
Hank asks: "I work for Hewlett-Packard and just recently installed my first SAN at a customer site. It was great fun; I was blown away by the ease of the storage device management and the allocation of storage space across the systems. Being a professional environment, it was highly available, ran over Fibre Channel through a switched fabric, and cost upwards of US$250k -- not really affordable for most households. At roughly the same time I started looking at IEEE 1394 cards for some video editing, and an idea came up: Would it be possible to build a low-cost SAN based on FireWire cards, hubs and devices? What would storage device management look like (the (de-)allocating of LUNs / slices / partitions)? What about support for multiple OS's on the SAN? How about this: would it be possible to create a Linux-based disk-array with an IEEE1394 interface (Old P200, crammed with disks, software RAID, lots of RAM for caching, Firewire interface, looking/acting like a single disk to the outside world, storage device mgmt via web-frontend)?"
Lose the buzzwords (Score:1, Offtopic)
Re:Lose the buzzwords (Score:5, Insightful)
You're missing the point: using FireWire, you get FireWire's high performance. Use Cat5 and you're back into Ethernet space, with packets and protocol overhead.
Firewire supports sustained high bandwidth transfers between multiple drives and multiple computers.
I mean, if you don't need the performance of a SAN, then sure, use Cat5 and you have a fileserver.
But if you're looking for something between FCAL and Ethernet, then Firewire is likely a great midrange choice.
Re:Lose the buzzwords (Score:2)
I've thought about this some, and was thinking iSCSI as an option.
If performance is REALLY an issue, I suppose you could invest in GigE.
As for the SAN / NAS issues, what we are seeing in the industry is that people want / need both. Some vendors are starting to deliver devices that do both in one box. Raw disk for databases and such, and network file systems for other tasks.
Frankly for home systems, NAS should be just fine.
Re:Lose the buzzwords (Score:2)
I've been using a 12-foot FireWire cable for a while now with no trouble.
FireWire 2 will allow optical fiber as an alternative, which has essentially unlimited run length. Well, unlimited inside an office, not kilometers.
I think that even Gigabit Ethernet will not provide the same performance as FireWire. Effectively you can get 500Mbps over GigE, and that's awfully close to current FireWire's 400Mbps -- except that FireWire will sustain that transfer rate, and I'm not sure any Ethernet can. But I could be wrong.
There are tradeoffs for both. If you want to link a cluster of machines and drives at high performance without going to FCAL, FireWire is a great way to go. If you want to link a lot of computers to the storage, then Gigabit Ethernet is probably the way to go -- but that Ethernet could terminate in a server that has a FireWire network behind it linking a bunch of drives.
Re:Lose the buzzwords (Score:2)
Re:Lose the buzzwords (Score:2)
Re:Lose the buzzwords (Score:2)
Ugh!
I would expect Firewire-2 to be somewhere in the sub $1 per port range. At that price differential, I would expect Firewire to win out for quite some time.
Re:Lose the buzzwords (Score:1)
Re:Lose the buzzwords (Score:2)
Re:Lose the buzzwords (Score:1)
FireWire, Fibre Channel, SCSI, etc. don't have this flexibility. They assume that the device you are talking to is on this bus/ring and that messages will, give or take parity errors (which can be reported), get through reliably. Therefore they can have much lighter protocols. You can use what, in Ethernet terms, is a MAC address, not an IP address.
The FireWire suggestion at the head is perfectly sensible. But if you were to go into the Ethernet driver at the packet level, not at the IP level, I reckon you could get 90% of theoretical bandwidth out of an Ethernet-based connection. We currently get over 500Mbit/sec using UDP connections. Drop a layer of protocol and it could be even faster.
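As a rough illustration, here is a minimal sketch of the kind of UDP throughput probe being described; the address, port, and payload size are made-up values, not anything from the setup above:

    # Minimal UDP send-side throughput probe (illustrative sketch).
    # HOST, PORT, and the 1400-byte payload are hypothetical.
    import socket, time

    HOST, PORT = "192.168.1.2", 9999   # hypothetical receiver
    PAYLOAD = b"x" * 1400              # stays under a typical Ethernet MTU
    SECONDS = 5

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    start = time.time()
    while time.time() - start < SECONDS:
        sock.sendto(PAYLOAD, (HOST, PORT))
        sent += len(PAYLOAD)
    elapsed = time.time() - start
    print(f"offered load: {sent * 8 / elapsed / 1e6:.0f} Mbit/s")

Measure on the receiving end to see what actually survives the trip; the gap between the two numbers is the protocol overhead being discussed.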
Re:Lose the buzzwords (Score:2)
Re:Lose the buzzwords (Score:1)
Re:Lose the buzzwords (Score:1)
Re:Lose the buzzwords (Score:2)
Re:Lose the buzzwords (Score:2, Informative)
Um, no. You've just described NAS [everything2.com] - Network Attached Storage. Shared storage from NAS devices appears as NFS (or Samba, Mac, or whatever) and you can mount it on any client.
A SAN [everything2.com] - Storage Area Network - is when you have lots of RAID storage being shared by several servers. Each server believes it is directly attached to a physical disk, when actually it's just getting one or more slices of the pooled RAID units.
Re:Lose the buzzwords (Score:2)
A storage area network consists of storage devices directly attached to computers, via a star, bus, or fabric topology. Computer-to-computer networks are not storage area networks. They are simply LANs, local area networks.
What you described-- a server with filesystem sharing software-- is technically just a file server. A file server that comes with software and storage preconfigured, all in one box, is called a filer or a NAS appliance. Since what you described isn't preconfigured, it's just a file server.
Right now, the vast majority of storage area networks use fibre channel as the physical interconnect. Fibre channel was designed to be switchable, in fact; you can connect multiple storage devices-- devices, not servers; things like RAIDs and tape libraries-- to multiple servers via a switch, and those things can all talk to one another via the fabric.
Re:Lose the buzzwords (Score:1)
I want to do this as well, for a small network of video editing machines, but I have not found any project beyond the planning stage. It seems that anyone who needs to do SAN has a crapload of money to spend.
Re:Lose the buzzwords (Score:2)
And what you're talking about isn't such a great idea, either -- no offense. Leaving aside for a moment the technical challenges -- how do you turn a Linux system into a FireWire target, anyway? -- there would be serious cache coherency issues. How would you remotely invalidate a filesystem buffer cache, or a cached inode?
This issue is far more complex than you think it is.
Re:Lose the buzzwords (Score:1)
There is a standard for FireWire disks: SBP-2. You implement that as the disk (the target), not as the host.
> This issue is far more complex than you think it is.
Well, I do think it is very complex; that is why I have not done it yet and am not selling terabyte IDE arrays with FireWire ports that can be used as a SAN.
> -- there would he serious cache coherency issues. How would you remotely invalidate a filesystem buffer cache, or a cached inode?
There are patches by Oracle that allow Linux systems to share a common FireWire disk. How is it handled in a normal SAN environment? Do it the same way. Big SAN arrays have internal cache, so I think they like the OSes to do as little caching as possible.
Re:Lose the buzzwords (Score:2)
Now you're starting to understand. Shared-access SANs are highly proprietary things requiring complex multi-node-aware filesystems, like Centravision or CXFS. It is not an easy thing. In most situations, it simply doesn't work at all.
Big SAN arrays have internal cache so I think they like for the OSes to do as little cacheing as possible.
First of all, there's no such thing as a "SAN array." There are disk arrays and storage systems that can be used on a SAN, but there's nothing special about them. They're just RAIDs, essentially, albeit sometimes with a few more bells and whistles.
And as for the caching thing, every operating system uses cached I/O for practically everything. (Direct, or unbuffered, I/O can be used in some situations where the data can be handled more efficiently by the application than by the OS; these situations are rare.) So let's say you and I are hooked up to the same hard drive. I open a file. Then you open the same file. I seek into the file and start reading. Let's say I read 1 MB of data into memory. Then you seek into the file and start writing. You write over the same blocks that I just read. I have no way of knowing this, of course, so I just keep doing what I'm doing, oblivious to the fact that the data I'm caching is out of date.
It gets worse. Say I decide to unlink the file... while you're in the middle of writing it. If we were talking about two applications on the same computer, I'd get an error back from the OS saying that you can't unlink an open file. (Or something, depending on the environment.) But since we're talking about two programs running on two different computers, I get no such error. Can't get one, in fact, unless my OS is keeping track of which files are currently open on your computer, and vice versa. Suddenly a normal filesystem won't work any more. We need something new, like CXFS or Centravision.
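To make that concrete, here's a toy sketch -- not real filesystem code -- of two hosts with independent caches talking to one shared disk:

    # Two hosts, each with its own block cache, sharing one "disk".
    # Pure illustration of the stale-cache problem described above.
    disk = {0: "old data"}                     # shared drive: block -> contents

    class Host:
        def __init__(self, name):
            self.name, self.cache = name, {}
        def read(self, block):
            if block not in self.cache:        # miss: fetch from disk
                self.cache[block] = disk[block]
            return self.cache[block]           # hit: possibly stale!
        def write(self, block, data):
            self.cache[block] = data
            disk[block] = data                 # write-through to disk

    a, b = Host("A"), Host("B")
    print(a.read(0))         # A caches "old data"
    b.write(0, "new data")   # B rewrites the block on disk
    print(a.read(0))         # A still sees "old data" -- nobody told A's cache

Nothing in the protocol tells host A that its cached copy is now garbage; that's the coherency problem in a nutshell.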
By this time, of course, we've given up on the whole damned thing and gone back to network-attached storage with Gigabit Ethernet interconnect. The last straw was the fact that my reading a file and your reading a file sent the shared drive into conniptions as the heads skittered all over the place. Disk contention is a bitch.
Shared-access SANs are incredibly complex. And, in general, they suck.
Re:Lose the buzzwords (Score:1)
Re:Lose the buzzwords (Score:2)
As for NFS-style locking, there is no server on which to run the lock daemon. If there were a server, we'd be talking about NAS instead of a SAN, which is a different thing altogether. With a SAN, there's nothing but computers and disks, and the disks are not smart. You can't do file locking at the disk level. You have to have a mechanism through which the two computers can communicate with each other directly... which puts us back into special filesystem territory yet again.
I'm not quite sure why, but it's clear that we're not communicating well here. What can I say to make this more clear to you?
Re:Lose the buzzwords (Score:1)
There is a "server" but it is emulating a disk with a standard file system.
Or with the lock daemon on the "server".
You try to open a file that is already open for writing, and it fails. If I am reading a file and some other system tries to open it for writing, that fails too.
You don't just open a file then decide what you are going to do with it. That is why the open function has arguments.
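A minimal single-machine sketch of that idea, using POSIX advisory locks -- note it only works because one kernel sees every open, which is exactly the objection raised below:

    # Locking decided at open time, on one machine.
    import fcntl, os

    fd = os.open("shared.dat", os.O_RDWR | os.O_CREAT)
    try:
        # Ask for an exclusive lock; fail immediately if someone holds one.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        os.write(fd, b"safe to write")
    except BlockingIOError:
        print("file is busy -- open-for-write refused")
    finally:
        os.close(fd)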
Now, I never said it was easy (though you make it sound impossible) or that I was going to waste my time trying to do it. It is just what I would like, and I am not the only one who wants something like this. But most of the uber-hackers out there today work for companies that want to make money off the big boys, or off people who can afford the current solutions -- not the little guys.
Re:Lose the buzzwords (Score:2)
Meanwhile, I come along and unlink the file.
In a single-access system, this isn't a problem. If I unlink the file while you've got it open, nothing really happens until you close it. After you close it, the file disappears and its space is reclaimed. This is because the kernel keeps track of who's got which files open, and prevents one process from pulling the rug-- so to speak-- out from under another process.
(This is also the source of the age-old trick of opening a file and immediately unlinking it. It's a good way of handling the automatic reclamation of temporary files.)
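A minimal sketch of that trick -- standard POSIX behavior on a single host, which is exactly the point:

    # Open-then-unlink: the name disappears, but the open descriptor
    # keeps the file's blocks alive until the last close.
    import os

    fd = os.open("scratch.tmp", os.O_RDWR | os.O_CREAT)
    os.unlink("scratch.tmp")       # no directory entry anymore
    os.write(fd, b"temporary data")
    os.lseek(fd, 0, os.SEEK_SET)
    print(os.read(fd, 14))         # b'temporary data' -- still readable
    os.close(fd)                   # now the kernel reclaims the space

This works only because one kernel tracks every open file. Split the opens across two kernels and the bookkeeping evaporates.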
But if we've got two systems, each with a totally independent kernel, talking to the same disk, the problem suddenly gets a lot harder. When I unlink the file, my kernel will look at its tables and conclude that nobody else has the file open, so it will immediately try to reclaim the blocks on the disk. This will send your application, which is actively trying to read from the file, into shitfits.
No amount of lock files would save you in this situation. All I did was type "rm somefile." The rm command doesn't check for the presence of a lock file. Unless you want to rewrite all the file utilities, lock files aren't going to do the job.
(And extended attributes, of course, aren't portable. They're also tied to the inode, which raises its own set of problems.)
I just think you're grossly underestimating the complexity of this issue. If all you want is to share files across a network, then just use NAS. It'll work better and your life will be easier.
Re:Lose the buzzwords (Score:1)
Guys, I think you're both absolutely right about the complexity of simultaneous access to shared disks -- something has to control that, and it's NOT the disk array. At my customer site I was using HP's high-availability cluster software, MC/ServiceGuard, and the operating system's logical volume manager controls the shared access: you prepare the volume group as shared, then mount it in 'exclusive' mode, where it somehow keeps track of who's mounting it, so that no simultaneous access is possible. (MC/SG is for high availability only, not for parallel performance like Oracle RAC.)
Let's take a step back. Let's assume I don't want too much smarts, and that I will look after the mounting so that no two SAN nodes will attempt to mount the same volume / partition. That should make it easier, right? You mentioned something about reversing the IEEE1394 stack - can you be a bit more precise?
E.g. the home built disk array is partitioned into two slices ("1" and "2"). Computer A mounts slice 1, computer B mounts slice 2.
BTW: The disk array I used has a great feature -- you associate LUNs (space allocated on the disk array) with the World Wide Name of the FC HBA (like a MAC address in Fibre Channel land). It's called "LUN Security"... Do FireWire controllers have an identifier of their own?
Re:Lose the buzzwords (Score:2)
Your HP solution sounds pretty much like IRIS FailSafe, from SGI. The storage device is shared between two (or more) hosts, but only one host has access to it at any given time. When that host fails, the other host automatically mounts the storage device and takes up the failed node's responsibilities. Shared storage is a requirement for that kind of configuration; in the old days, we used to do it with SCSI.
I have no idea whether FireWire can support multiple hosts on a chain. Obviously it can handle more than one target, but I don't know about more than one initiator.
Also, the "LUN security" feature you talked about is sometimes called "LUN masking" or "LUN mapping," depending on how you do it. You can map a LUN to a switch port-- that's LUN mapping. Or you can set it up so that a port specifically doesn't see certain LUNs; that's LUN masking. I haven't worked with fibre channel on Windows in an age, but back when I did we had to use LUN mapping/LUN masking. At that time, the Windows host would try to send a device reset command to every LUN when it initialized its HBA. This was, of course, a very bad thing. So you had to set up your switch so that the Windows machines only saw the LUNs that they would be mounting. It was a little bit of a pain in the rear to set up, especially if the switch only supported LUN masking and not LUN mapping. If you add LUNs to your fabric, you have to go in and adjust all of your masks to mask the new LUNs out. On the other hand, if you use LUN mapping, the mapped ports will just ignore the new LUNs unless you specifically tell them not to.
Re:Lose the buzzwords (Score:1)
Re:Lose the buzzwords (Score:1)
Please someone just try to sell me a SAN.
-Felddy
Re:Lose the buzzwords (Score:2)
Re:Lose the buzzwords (Score:1)
Or clustering ... (Score:2, Interesting)
Re:Or clustering ... (Score:2)
Re:Or clustering ... (Score:2)
Re:Or clustering ... (Score:2)
SANs for storage consolidation, good. SANs for shared access to read-only data, good. SANs for shared access to read-write data, bad.
FireCube? (Score:2, Informative)
I'm not sure if I have seen any PC-oriented FireWire SAN solutions, though, as FireWire hasn't really been something you would see in a lot of computers until recently.
I did find a couple when doing a search for "FireWire Network Storage":
http://www.adept.net.au/1394/nas.shtml [adept.net.au]
http://www.networkcomputing.com/1118/1118sp3.html [networkcomputing.com] (this is probably what I was thinking of)
http://www.turnover.com/news/mdm/firenas.html [turnover.com]
Re:SANcube (Score:1)
Re:FireCube? (Score:1)
USB2 (Score:2)
I think it would be cheaper.
Re:USB2 (Score:2)
USB is really slow. USB2, I mean. Its theoretical top speed is 480Mbps, but FireWire's 400Mbps is actually sustained.
You cannot sustain 480Mbps over USB2. It's really a very slow protocol, especially if you plug a 12Mbps device into it.
Even if you don't, it really isn't up to speed for talking to more than one disk drive.
Re:USB2 (Score:2)
Re:USB2 (Score:2)
Not likely (unless the USB controller chip is real crap, which does happen).
USB2's bandwidth is 480 Megabits, which equates to 60 Megabytes per second.
PCI, in its slowest incarnation (32-bit, 33MHz, the most common flavour), does 132 Megabytes per second, aka 1.056 Gigabits/sec.
So the culprit is hardly PCI (unless some other PCI card is hogging the bus).
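The arithmetic, spelled out (same numbers as above):

    # Bus bandwidth sanity check.
    usb2_mbit = 480
    usb2_mbyte = usb2_mbit / 8        # 60.0 MB/s
    pci_mbyte = 4 * 33                # 4 bytes wide * 33 MHz = 132 MB/s
    pci_gbit = pci_mbyte * 8 / 1000   # 1.056 Gbit/s
    print(usb2_mbyte, pci_mbyte, pci_gbit)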
Re:USB2 (Score:2)
Re:USB2 (Score:2)
Re:USB2 (Score:2)
Re:USB2 (Score:2)
Re:USB2 (Score:2)
Re:USB2 (Score:2)
Not much of a price difference, especially when you consider FireWire's considerable performance advantage, but that's already been discussed by other responders.
Re:USB2 (Score:2)
IIRC, on a given USB chain/bus/tree/thingie/etc., there must be one *and only one* master device (generally a computer) that controls it and manages the transfer of data between devices, while other devices act as dumb peripherals waiting for the master to do something with them. FireWire, OTOH, resembles a peer-to-peer network, in that each device can be an intelligent controller and can initiate transfers to/from other devices on its own. Thus, FireWire is ideally better suited to building a SAN than USB. I'm not saying that a USB-based SAN would be impossible to build, but that it'd require some serious hacking in order to coax it into something more useful than a very fast serial port.
Re:USB2 (Score:2)
The engineering would be cheaper for USB2.
how about iSCSI over ethernet? (Score:2)
Re:how about iSCSI over ethernet? (Score:1)
Fibre Channel is expensive, but at least you get more of the 1Gb/s or 2Gb/s bandwidth than you would with Ethernet + TCP/IP overhead.
Now what would be nice is a "SAN" or shared storage unit that can support multiple Serial Attached SCSI channels.
Re:how about iSCSI over ethernet? (Score:1)
Just not IEEE 1394, that I know of.
Re:how about iSCSI over ethernet? (Score:1)
I think iSCSI would be a nice solution for those who can't spend a whole lot of greens or quids... even if you needed special NICs (I think those NICs offload some of the processing onto a chip on the NIC rather than pelt the system processor), they aren't as expensive as 2Gb/s Fibre Channel adapters.
Re:how about iSCSI over ethernet? (Score:1)
You haven't been very clear, but you can do both (Score:3, Informative)
Anyway,
Want to exploit 1394 (heck, we can finally call it Firewire!) to mount a disk? You just need a 1394 enclosure for your regular IDE disks. Example [1394store.com].
Want to exploit 1394 to access a network share via SMB/NFS? You can, with IP-over-1394 (works on Apple, Linux, Win ME, and XP; not on 2000).
You just load the correct modules and it shows up like a network interface.
Just my 0.02.
I am not associated with the linked shop, I just happen to be a happy customer of theirs. Their Fire-I webcam is really cool (640x480x30fps) and it's amazing how well it can focus on extremely near objects, it's almost a microscope. I put it in contact with the screen and was able to focus on single pixels.... now that's a nice way to really study ClearType
The linked example is quite expensive. (Score:2)
The linked example is quite expensive. It is better to buy an empty firewire enclosure and put a 120GB WD drive in it.
Re:You haven't been very clear, but you can do bot (Score:2)
Overkill for household use. (Score:1)
I'm agreeing with Billco. If you've got a Switched 100Mb Ethernet LAN in your house (Since you're toying with building a DIY SAN, I'm sure you do), just build a fileserver. The cost, effort and extra cable spaghetti just don't seem to be worth it. If you build a server, it can do a hell of a lot more than just locally share files too. (DHCP, LDAP, E-mail, HTTP.... ) And considering what you'd spend on a SAN implementation, you could get a pretty nice server for your home.
As questionlp pointed out, if you've got Macs, the SANcube [sancube.com] is in a price range that's manageable for the hard core (employed) geek.
Remember, use the right tool for the job. Don't kill flies with a bazooka.
Re:Overkill for household use. (Score:2)
Come on, dude, don't be silly. Everybody knows hamsters are no good at animation.
Re:Overkill for household use. (Score:1)
You need to upgrade your hamster. Mine is great at animation. He's working on this awesome shot right now with some alien warships bombing the Earth from orbit. Great stuff.
Ah yes, hamster upgrades. (Score:1)
You forgot to provide a link for hamster upgrades. I hope they are downloadable. Can an upgraded hamster pull a bulldozer out of a muddy hole?
Re:Overkill for household use. (Score:1)
I did something similar (Score:1, Offtopic)
My "fileserver" (also DHCP server) is a Pentium 166 with 32 megs memory and an old 10 mbit 3com509b card. Basically, $25 on ebay. A floppy drive was used to install linux (net install of Debian), then removed. There is no cd drive, and no video card in the machine. A 2 gig HDD is used for booting, and for most of the system files, and an 80 gig HDD is used for storage. (It was big at the time).
Running Samba, I can saturate my 10 mbit network with the machine. With tests done on a 100 mbit network, I reach about 30% use. However, the bottleneck is not the CPU or memory, it seems to be the onboard IDE. With a PCI ATA 100 card, performance should go up.
All in all, it's a nice machine. Since it's a desktop, it fits nicely under the printer it shares. An SSH server allows me to securely log in, change any system settings, and do updates. It's quiet, cheap, and effective. With only a power cable, an Ethernet cable, and the printer cable, it's neat. And did I mention upgradeable?
A hardware RAID-IDE card should cost me about $250. [I haven't tried software RAID on a P166 and I have no urge to.] That shouldn't put any load on the CPU, and would provide redundancy. A getty on a serial port would be nice as well. If I want to, I can also swap the drive for something bigger without worrying about the system supporting it. With ext3, it handles power outages well.
It works.
Re:I did something similar (Score:1, Funny)
I don't ever want to hear you say "I did something similar" again unless you have some tiny, microscopic clue about the subject of conversation.
You are hereby banned from posting to Slashdot for twenty-four hours. It's early autumn in the northern hemisphere and early spring in the southern; there is no habitable point on Earth where the weather is not absolutely beautiful right now. Go outside and exercise something other than your wanking hand for a while.
Re:I did something similar (Score:1)
Well, either you're a troll or an idiot. Let's assume you are the latter.
From the article -
How about this: would it be possible to create a Linux-based disk-array with an IEEE1394 interface (Old P200, crammed with disks, software RAID, lots of RAM for caching, Firewire interface, looking/acting like a single disk to the outside world, storage device mgmt via web-frontend)?
Your reading comprehension is horrible. So is your technical knowledge. Let me educate you.
First of all, this guy wants to use a pentium 200 as the basis of the system. This places technological limitations on the system. For example, some pentium chipsets have a caching problem with anything over 64 Megs of memory. The kernel can work around this (basically by using anything above 64M as a swap file) but there are limitations to performance. There is also a limitation of how much memory you can put in the old pentium motherboards. If you're looking at a pentium-based solution, you're looking at something that's cheap and not appropriate for heavy loads.
Now what disks are you going to put in this cheap system? SCSI? Only if you have the brains of the anonymous coward that I'm replying to. On Pricewatch, a 146 GB SCSI drive is just under a grand. A 120 GB IDE drive is about $150.
You talk about getting 100% speed between two laptops. Interesting. Not sure how that applies, since I'm willing to bet money that my packets traveled just as fast. If you're talking about bandwidth, we have a different problem. In a sustained read from a device, the limiting factors will be the HDD speed, IDE bus speed, and Ethernet card. Let's look at the HDD speed. Tom's Hardware benchmarked a recent 120 GB HDD at between 20 and 40 Mbytes/second. So the hard drive should be able to saturate a 100Mbit/second Ethernet network in a sustained read. Unfortunately, when you run a new HDD over a five-year-old IDE bus, the performance goes to hell. Say hello to 3 Mbytes/second. If you don't realize that there will be a performance difference between a new Mac laptop and some hardware that is half a decade old, then you're naive. No matter who you call 'asshole', you won't get 100Mbit/s out of a five-year-old IDE bus.
So, how do we fix this? By putting in either a new IDE card or RAID. Let's buy a nice card that does everything in hardware. The ATA-100 specification gives us more than enough speed to match the hard drive. A RAID card allows us to combine hard drives and increase the maximum data transfer rate. However, we have a 132MB/s limit on the PCI bus (32-bit, 33MHz). This should be fast enough to max out an Ethernet network (even gigabit) or FireWire.
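To spell out the bottleneck argument with the rough figures quoted above (a sketch, nothing more):

    # A data path is only as fast as its slowest link (MB/s).
    links = {
        "disk sustained read": 30,    # 20-40 MB/s; take the middle
        "old onboard IDE":      3,    # the half-decade-old bus
        "ATA-100 card":       100,
        "PCI 32-bit/33MHz":   132,
        "100Mbit Ethernet":  12.5,
    }

    def throughput(path):
        return min(links[hop] for hop in path)

    old = ["disk sustained read", "old onboard IDE",
           "PCI 32-bit/33MHz", "100Mbit Ethernet"]
    new = ["disk sustained read", "ATA-100 card",
           "PCI 32-bit/33MHz", "100Mbit Ethernet"]
    print(throughput(old))   # 3    -- the old IDE bus is the choke point
    print(throughput(new))   # 12.5 -- now Ethernet is the limit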
Up to now, this looks like building a bloody file server, doesn't it? Figuring out the bandwidth and speeds of the components. *Slap* There goes your 'similar' complaint. In fact, some Ethernet-based NAS boxes are nothing more than x86 hardware and a custom BSD system. (Yes, I know SANs and NASes are different, but the storage media tend to be the same.)
You can't replace the PCI bus, so if you're using a Pentium-based system, you're limited to 132MB/s at peak efficiency. In practice, you'll get less than this. If the hard drive will be handling non-sequential reads (which it probably will), expect another drop.
Basically, though, building a NAS is nothing more than building a file server, and then, instead of running ftp/smb/nfs/afs or the like, hunting down or hacking something to provide NAS over FireWire. From a hardware perspective, this project is easy. Software might prove a challenge.
As for the weather outside, not all of us live between 45N and 45S. Some parts of the world are rather chilly this time of year.
And that, sir, is why you are an idiot.
Don't use firewire. (Score:3, Informative)
But what you are really after are the tools to manage such a beast. The physical implementation shouldn't matter to the developers -- all the software needs to know is that storage exists that the user needs to use, and how to read from and write to said storage. It shouldn't matter whether it's an IDE drive, a FireWire drive, a USB drive, a SCSI drive, a 1000-tape library, or any combination of storage devices -- which, IMHO, will be a great differentiating feature from commercial packages.
Yes, the free SAN package handles your old room size tape robot as well as this rack of serial ATA drives, and will treat them accordingly - near line storage in the tapes (semi archive), on line storage in the HD, and off line (off site) over the WAN link to the storage cluster at your other shop. If you need an extra terabyte just go to officemax and plug in a firewire drive until the tech comes out and adds more serial ata devices to your drive chain.
Of course, you could buy the SAN package available from x, or y, but you'll pay dearly for it, and you can't add storage to it yourself. Oh, and it only works with their hardware.
-Adam
Re:Don't use firewire. (Score:2)
An HSM system, on the other hand, uses software running on a file server to consolidate several different kinds of storage devices into one logical filesystem. As you write to the filesystem (over the LAN), the server puts the data on disks. When the disks start to get full, the server begins, in the background, moving data from the disks to an automated tape library, gradually freeing up disk storage as it goes. This happens without the client's knowledge; it looks like the server just has a whole lot of disk space available. When the client requests a file that's not on disk, the server stalls for a bit while it retrieves the data off of tape, then it returns the data to the client. So in an HSM system, client-to-server writes are really fast, but reads can be really, really slow.
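A toy sketch of that behavior -- nothing here is real HSM software, and the capacity and delay are invented for illustration:

    # Writes land on disk (fast); a background policy migrates old files
    # to tape; reads of migrated files stall while the tape is fetched.
    import time

    disk, tape = {}, {}
    DISK_CAPACITY = 3   # files, for illustration

    def write(name, data):
        disk[name] = data                    # always fast
        if len(disk) > DISK_CAPACITY:
            victim = next(iter(disk))        # oldest file on disk
            tape[victim] = disk.pop(victim)  # migrate it to the library

    def read(name):
        if name in disk:
            return disk[name]                # fast path
        time.sleep(2)                        # robot fetches the tape...
        disk[name] = tape.pop(name)          # ...and stages it back
        return disk[name]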
Since a SAN depends on directly attaching computers to storage without a server in between, and HSM depends on having a server there to manage the different types of storage devices, they're kind of incompatible ideas.
* Reminds me of the old Einstein quote about radio. "You see, wire telegraph is a kind of a very, very long cat. You pull his tail in New York and his head is meowing in Los Angeles. Do you understand this? And radio operates exactly the same way: you send signals here, they receive them there. The only difference is that there is no cat."
Re:Don't use firewire. (Score:2)
SAN and NAS are fundamentally different implementations. They have the same basic purpose: computer A and computer B need to access the same data on the same storage device at the same time. But that's where the similarities end.
NAS is strictly a client-server system. All the clients talk to the server, but not to each other. Clients make read requests to the server, which queues and handles the requests, caching the data along the way. The server handles things like file permissions, access control, locking, and synchronization issues. The server also arbitrates contention situations, by putting I/O requests into a queue.
Shared-access SANs are completely different. In a shared-access SAN there is no server, which means there's no central arbiter of things like permissions, access control, locking, synchronization, and request queuing. Instead, each client computer simply talks directly to the disks. In theory, taking out the middle man this way decreases latency and increases bandwidth, but in practice contention issues arise that eliminate any gains. Since there's no arbiter for things like permissions and access control, the clients all have to talk to one another somehow; that's where cluster-aware filesystems like CXFS or Centravision come in. These filesystems are complex in ways that most people fail to realize, and they are highly prone to failure. In particular, an election-based system like CXFS doesn't tolerate the coming and going of nodes to and from the cluster very well. At any time, any node can be the control node, and if it disappears, the filesystem can become wedged for a time until a new election occurs. So these types of filesystems work best in tightly coupled groups of systems, like highly available clusters or parallel processing clusters.
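The contrast in one toy sketch (pure illustration, not real NAS or SAN code):

    # NAS: one server serializes every request. Shared SAN: clients hit
    # the disk directly, and nothing orders their writes.
    import queue, threading

    disk = {"file": "v0"}

    # --- NAS model: a single server thread arbitrates ---
    requests = queue.Queue()

    def nas_server():
        while True:
            client, value = requests.get()
            disk["file"] = value     # one writer at a time, in queue order
            requests.task_done()

    threading.Thread(target=nas_server, daemon=True).start()
    requests.put(("A", "v1"))
    requests.put(("B", "v2"))
    requests.join()
    print(disk["file"])              # deterministic: "v2"

    # --- shared-SAN model: clients write directly, unordered ---
    for v in ("v1", "v2"):
        threading.Thread(target=lambda x=v: disk.update(file=x)).start()
    # final contents depend on thread scheduling -- no arbiter, no order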
SANs and NAS are only similar on the surface. Beneath the surface, they're very, very different.
Serial ATA not the answer (Score:2)
Re:Serial ATA not the answer (Score:2)
It isn't meant to replace SCSI; it's meant to replace parallel ATA.
-Adam
Re:Don't use firewire. (Score:1)
terminology, technology (Score:3, Informative)
Do you want NAS?
That's Network Attached Storage. Currently almost entirely Ethernet based. You get a box with some disks and software, and it sits on the Ether looking like a fileserver, maybe just a CIFS server for Windows boxes, more likely both CIFS and NFS to support Windows and UNIX.
Do you want a SAN?
That's a Storage Area Network.
A bunch of disk boxes connected together with a switched Fibre Channel network. Servers connect by Fibre Channel directly into the network.
Do you want a NAShead on a SAN?
A NAS device acts as a front-end to the SAN, so you have an Ethernet file-sharing frontend onto a Fibre Channel storage network backend.
The problem with implementing any of these is they're about more than a transport medium. A NAS is more than Ethernet. A SAN is more than Fibre Channel. Those media mostly just pump the data around. It's a ton of software that handles the sharing of files.
So sure, you can string a bunch of disks and CD burners and whatnot together with FireWire. No problem. I do it myself. "FireWire" disks are almost entirely just an enclosure with a normal ATA disk inside and an ATA-to-FireWire bridge. Adds a small cost onto the price of a regular IDE drive, that's it. You can buy the enclosures yourself and do it quite cheaply.
However, the operating systems that you connect to the FireWire are going to have no freaking idea about filesharing. If you try to connect more than one host, it won't know what to do.
What you need is FireWire ***PLUS*** filesharing software.
Unibrain makes something they call FireNAS
http://www.unibrain.com/home/
That's about the closest thing in existence to what you describe.
If you're wanting to use IP-over-1394 (RFC 2734), be aware that Microsoft's stack is the main working one. The Linux stack is in beta and Apple has no plans to implement IP-over-FireWire at all.
You can find more info on IEEE-1394 at
http://www.cs.dal.ca/~akerman/gradproject/proje
Also check out the Linux 1394 project
http://linux1394.sourceforge.net/
Cheap SANs (Score:2)
However, interest in cheap SANs is rising, and I suspect it won't be long before a couple of projects start up to build these; then they get polished, then corporate types get interested in the big cost savings, and they start using them. It'd be particularly cool if Linux beat Windows to the punch here.
Before you scoff, remember that that's what happened with the advent of clustering cheap PCs -- the custom supercomputer is nearly a dead beast now.
There are enormous profits on SANs, so an open-source project could do wonders here.
SAN (as opposed to NAS) is possible with FireWire (Score:1)
Multi-Drive Enclosures? (Score:1)
Re:Multi-Drive Enclosures? (Score:1)
A SAN is what I'm after... (Score:1)
I appreciate everyone's comments about SAN and NAS, and apologize for not having made my point clearer.
I'm not interested in NAS / Fileserver / anything running over Ethernet, simply coz it's a no-brainer to set them up. The whole post to ask-slashdot was probably more theoretical than anything else, to discuss what started as a crazy idea with fellow geeks.
I know I can get a Firewire-IDE enclosure, but the question is, what happens if it hangs off a Firewire hub together with two computers? Will both be able to see it? How do you partition it (just normal fdisk I assume)? What if the two computers have different OS's? Then of course you've got to make sure that no two computers mount the same partition...
Then I started taking it further: The device used at the customer site was a real disk array with RAID5DP and lots of cache. Would it be possible to build a low-cost disk array e.g. using linux - very much similar to the SanCUBE. Then you could do much more than with just a FireWire-IDE disk; think of "LUN security" - ensure that computer X only sees the partitions that it's supposed to see...
Again, thanks for all the comments!
Re:A SAN is what I'm after... (Score:1)
I like the idea of a "cheap" SAN, but I think it still needs to perform well. I would have my doubts about doing anything like this with FireWire or USB.
Now if only Xiotech would make an IDE version of their SAN!
Re:A SAN is what I'm after... (Score:1)
I think you'd still need some sort of agent on the client (device driver, whatever) to communicate with the management device (or OS running on your single device) to get the right partition, or at least keep track of file-locking issues on a shared partition.
Re:A SAN is what I'm after... (Score:1)
Hi,
I was mulling over the same idea you have. Since FireWire can support multiple masters (unlike USB), and technically there is no difference in hardware between client and master, it is technically possible to use a PCI host adapter, given the correct driver software, to emulate a FireWire external disk (or many disks) to another computer. Apple calls it "target mode" on their notebooks.
Now from what I heard and tried, this is not possible given the usual Firewire external disks, as those are "bridges" from IDE to Firewire, not full Firewire controllers. I was not able to see the disk on a second computer connected by the second Firewire port of the disk.
However, since you want to emulate the disk using a full-blown computer, it's up to the software to do that. Some people have pointed to the Oracle project handling this (or somewhat handling this; I cannot find out, since their "create a new account" page throws a JSP error). It might not be exactly what you need, but it might be a good starting point.
For myself, I came to the conclusion that it's not as useful as I thought at first, since you cannot boot from FireWire (Macs can, I know, but my computers are not Macs). And for me, a NAS featuring RAID, LVM, and xfs/ext3 does what I mainly want: resizing partitions/space in a flexible way for the client PCs.
But having a SAN on top of FireWire would be nice, and it would become useful as soon as you have bootable FireWire cards. I wonder why no company makes those. You can boot off USB (onboard USB), Ethernet, SCSI, floppy, and IDE, but even with onboard FireWire controllers you cannot boot from them. This would let you have a diskless PC running ordinary OSes, not just special ones that can boot via NFS (dunno about Win2k).
Harald
Oracle worked on this already (Score:2)
Make it STOP! (Score:2)
Then come back here and ask that question without laughing hilariously.
Re:Make it STOP! (Score:2)
RE: Lose the Buzzwords (Score:1)
A SAN as it is defined today, MUST be created using FC loops or fabric. There is no other topology (unless it could be done with firewire).
Also, ease of *management*, not installation, is the bonus of the SAN. And SANs use expensive FC HBAs which have almost 0% CPU utilization even when streaming data at 2Gb/s. Ever checked the CPU utilization on a P200 when streaming data over TCP/IP at 100Mb/s? Or even a P3-733? No comparison to Fibre Channel. Nada.