Sharing a SCSI Drive Between Two Boxes Using Linux? 112
yppasswd asks: "I'm looking for a (cheap) solution for filesystem sharing between two linux servers and, since the target is just redundancy, I've come to the following idea: two SCSI controllers, one per machine, with different IDs (say 7 and 6) sharing the same disk. Only one of them would mount the disk, the other is just ready in case of failure. I've googled this around, and I've found many different opinions (Yes, no, perhaps, don't do it or it'll explode,...) but nobody saying 'Ok, I've tried this and here is what happened...'. Suggestions are welcome, but keep in mind that many other solutions (Fiber Channel, SSA, NFS mounts, various network filesystems) were already rejected because they were either too expensive, unreliable or not supported under Linux."
Two boxes?? (Score:2)
Call me ignorant, but I don't understand.
Re:Two boxes?? (Score:2)
I believe large systems separate the data onto a whole separate server and have the processing done on redundant machines.
I.e. you connect to the website; depending on load you're passed to one of multiple webservers, the webservers connect to an internal fileserver for pages and then pass it on to you. So if one of the webservers goes down, the others can keep going without any loss of content.
Tho in large systems they probably also have redundant data servers..
I'm guessing he's looking for a way to do this on the small scale.
But really, what the hell do I know?
DRBD network RAID system anyone? (Score:2, Interesting)
I'm building a heartbeat [linux-ha.org] cluster to serve WebGUI [webgui.nl] pages and files via samba [samba.org].
This is going to be presented at a congress of the Netherlands Network User Group [www.ngn.nl] on November 13th (a mostly Novell and Microsoft NT association).
I have been looking for a solution to mirror files between the two cluster nodes. SCSI is just too expensive for this, since low cost is one of the requirements. I've been trying to compile DRBD [slackworks.com] on my gentoo [gentoo.org] 1.3 systems but the 2.4 kernel isn't supported by the default DRBD distribution yet.
Does anyone know about any other projects like these that actually work?
Re:Two boxes?? (Score:1)
I don't see why you couldn't just dup the hardware, disk and all, and just have a hot-spare server ready to go if you need it.
.
Re:Two boxes?? (Score:1)
You need a pigtail (Score:2, Informative)
Re:You need a pigtail (Score:4, Informative)
External SCSI Cables for the RS/6000 [ibm.com]
Re:You need a pigtail - or two SCSI cables (Score:1)
IEEE 1394a (Score:2, Interesting)
Re:IEEE 1394a (Score:1)
If I'm wrong, please correct me, as I'd really like to get something like this going. Maybe what's needed is some sort of Firewire SAN switch. Hmmm....
Re:IEEE 1394a (Score:1)
Re:IEEE 1394a (Score:1)
Is there currently support for this sort of setup in Linux/BSD?
Personally, I have not seen a 1394 enclosure w/ more than one port with the exception of the SANCube [sancube.com]. But this seems to be a Mac-centric device.
I suppose I should stop posting and just do some research...
Re:IEEE 1394a (Score:1)
http://www.attotech.com/fcaccelware.h
Re:IEEE 1394a (Score:1)
OK, so as soon as I finished posting, I did a search for firewire enclosures. Wouldn't it figure? Every single freaking box had two ports. That'll teach me to post without researching...
So if it's mostly a software issue, would something like a firewire RAID box and GFS/OpenGFS work?
Re:IEEE 1394a (Multiple-host File Systems) (Score:1)
Re:IEEE 1394a (Multiple-host File Systems) (Score:1)
Re:IEEE 1394a (Multiple-host File Systems) (Score:1)
From what I've understood from the other posts on this topic, the biggest problem with the scenario you mention seems to be when a primary machine hangs but mysteriously recovers.
When the primary hangs, the secondary takes over (figuring that the primary is down) and mounts the drive. But when the primary recovers from the hang, it still has the drive mounted, so both systems have the drive mounted.
I believe they call that a "Bad Thing" (or massive FS corruption -- your choice).
There seems to be a bunch of ways around this scenario -- GFS/openGFS, STONITH switches, etc. -- but this is why it's not such a good idea to just let the secondary take over without being totally sure that the primary is DEAD.
Re:IEEE 1394a (Score:2)
The connection isn't the problem in SCSIland anyway, the problem is actually software-related. Dual-attach is easy, you just connect to both ends and use different SCSI IDs on each controller. If you want to access narrow SCSI devices, make sure all HAs use an ID of 7 or less.
Re:IEEE 1394a (Score:1)
my 2 cents (Score:2, Interesting)
Re:my 2 cents (Score:1)
I personally think the guy with the pigtail idea is on the right track....
Check the Adaptec line (Score:3, Informative)
The way the manual reads, it seems it should work in all supported OS's, but I cannot confirm that.
Rob
Re:Check the Adaptec line (Score:1, Funny)
tuba!
Re:Check the Adaptec line (Score:2)
-s
Re:Check the Adaptec line (Score:2)
A lot of SCSI cards support this, and it does work with almost any OS. The problem is that "works" means that both hosts can see and access the block device. This doesn't in any way provide any of the synchronization necessary to support a shared filesystem. For that, you need software like Global FileSystem. Apparently GFS was a GPL project, then it went commercial with Sistina, but there's now an OpenGFS project that's picking up from the last GPL release and trying to make it work well.
In any case, the hardware is easy; lots of hardware supports multiple hosts hitting a block device. The hard part is some sort of shared filesystem and/or block-level locking.
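To make the point concrete, here is a minimal sketch of the kind of coordination a shared filesystem needs, using flock(1) to serialize two writers on ONE host (paths are placeholders, and a plain file stands in for the block device). Advisory locks like this do not propagate across a shared SCSI bus between two machines, which is exactly the gap GFS-style lock managers fill:

```shell
#!/bin/sh
# Serialize access to one shared "device" with an advisory lock.
# flock(1) only coordinates processes on a single host -- two boxes
# on a shared SCSI bus get no such protection.
LOCK=/tmp/shared.lock          # placeholder lock file
DATA=/tmp/shared-blockdev      # plain file standing in for the device

(
    flock -x 9                 # block until we hold the exclusive lock
    echo "node-a was here" > "$DATA"
    sync                       # push the bits toward the platter
) 9> "$LOCK"
```

Run two copies of that subshell concurrently and the writes never interleave; run the same thing from two hosts against a shared disk and the lock is meaningless.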
Comment removed (Score:3, Informative)
Re:Bad Idea... Need good software. (Score:2)
Basically the servers monitor each other, and if the server that has mounted the drive goes down, the 2nd server picks it back up. (Oh, you only have to buy one server. Mirroring licenses are built into the product)
We streamed a video off the disk, then downed the server, and after a couple seconds the video picked right up where it had paused... Very cool.
Of course, that's actually while working on a third workstation....
I know this isn't helpful to the topic (Linux solution needed), but many people don't know it's possible.
Reliability of the disk (Score:2)
The way I see it, the only thing this avoids is kernel failure. If the server fails, you're better off having something to restart it and a single box. If the *disk* fails (IMHO, by far the most likely, unless you're running a pretty flaky bit of server software), you're out of luck either way.
It seems like it might be a better idea to get two drives and one server (or two servers with two drives).
Good to see a good "Ask Slashdot", too.
Re:Reliability of the disk (Score:2)
I don't think this proposal avoids even that. If Server 1 and Server 2 are connected to Disk 1, and Server 1 goes belly-up, there is bound to be information in RAM cache that Server 1 didn't get to write back to Disk 1 before it went down, even if it syncs every millisecond, which would be horrible performance-wise.
So when S2 detects the crash of S1, D1 is unclean and an FSCK is required before D1 can be cleanly remounted. That's going to take a while.
So, the common case is software crashes, and the uncommon case is disk failure. This solution doesn't seem to save you much, if anything, in the common case, and saves you nothing at all in the catastrophic case. I think you'd be better off with one server which can be quickly rebooted, easily debugged.
Re:Reliability of the disk (Score:2)
Journaled filesystem, or, better yet, a filesystem that doesn't report a 'successful write' until the bits are on the hard disk.
Similar to an ACID database: transaction logs vs. data files.
Try it, the only way. (Score:3, Informative)
You have to try it yourself. SCSI supports it, and technically nothing can be SCSI-compliant unless it works this way, but in practice... that is something else. I won't be at all surprised if one device fails to work that way, but a different one from the same manufacturer does. So test your setup before you go to production.
I've met people who claim to have done this, and even gone so far as to use half the disk on one computer, half on the other (separate partitions), but those stories start to get into friend-of-a-friend territory, so I wouldn't put much faith in my claim that it has been done.
SCSI cabling is still something of a black art, but use good cables, no pigtails, good termination, and you should be fine. There should be no need to watch for same-length cables; just get the termination right and follow the rules. Note that I said should: SCSI cables are still mystical enough that I wouldn't call you a fool for following rules that appear technically bogus.
Re:Try it, the only way. (Score:2, Insightful)
No, it isn't.
It's all normal signal theory.
Re:Try it, the only way. (Score:2)
In theory there is no difference between theory and reality. In reality there is.
SCSI cabling is much better than it used to be, and it should obey all the laws of physics (unfortunately we do not know all the laws of physics, though we should know enough to solve this).
Hook it up. Should work. (Score:2)
Not all controllers or drives may be very excited about this setup, but I believe the standard says it should work. I know I've read about people doing it before (not sure about OS or hardware tho). Plug and chug. You should be able to find some combination that works, and since you aren't trying to mount at the same time from 2 machines - no problem.
You may even be able to mount different disks to different machines on the same chain - share a scanner, tape drive, cdrom, or Zip drive even. Just give it a shot man....
Re:Hook it up. Should work. (Score:2)
We're just not quite there yet (Score:2, Interesting)
In the linux world take a look at GFS.
http://www.sistina.com/products_gfs.htm
The hardware they use to make it work will probably support what you're trying to do. Your typical off the shelf (At Frys) SCSI controller won't do the trick.
For what you're trying to do I highly recommend you work out some kind of sync between two networked machines with separate storage. If you're running a database it gets really fun. HINT for MySQL, script the replay of the SQL "update" log on the hot standby machine.
Good luck. My company just spent 150k+ on a sun/veritas solution to do exactly this. Our storage is all SAN.
--Chris
Re:We're just not quite there yet (Score:2)
No offence, but your company spent too much. Your typical off-the-shelf PCI SCSI adapter, in fact ANY SCSI adapter that can set its own ID, works in a multi-host setup. If somebody from Sun or Veritas told you otherwise, they were lying. There are multiple companies (including the one I work for [missioncriticallinux.com]) that make software to manage the setup under linux. In fact our software is included in Debian 3.0 and RedHat Advanced Server, so you don't even need to spend any money on software unless you want bells and whistles (graphical setup, support, NFS lock maintenance across failover...). It even comes with scripts to interoperate with your favorite database server.
Re:We're just not quite there yet (Score:2)
While I'm sure you already know this, there's a big difference between having a "working" multi-initiator setup, and having one that doesn't corrupt your data. It's fairly easy to get it working in an active/passive setup. But to get both nodes actively accessing the same SCSI devices requires a little more care. DG (now EMC) CLARiiONs were great at this, even if they were somewhat pricey...
Re:We're just not quite there yet (Score:2)
Agreed that once you want active/active you need to be a little more careful, but the cheaper SCSI adapters are actually more likely to work in these cases, because they have no cache on them. Once you're using RAID boxes or host based RAID, multi-initiator SCSI becomes a much harder problem. Almost every vendor that has redundant RAID controllers in their storage box does it correctly (Clariion included), but you really need to be careful with PCI RAID adapters. Most of them won't work as expected no matter what the configuration.
Re:We're just not quite there yet (Score:1)
Doh Doh Doh doh Doh Doh doh!
Sure it will (Score:2)
Dammit! People need to stop ignoring Novell.
Building a Poor Man's SCSI-Based Cluster Hardware System [novell.com]
There's much more information buried on their site, of course it applies to NetWare, but just because you don't have a Linux answer, doesn't mean it doesn't exist at all.
I've done it although with different systems (Score:5, Informative)
I guess I'm saying that I don't see why it wouldn't work on today's GNU/Linux systems.
An easy part and a hard part (Score:2)
Keep in mind that although the electrical connections are OK (so long as only one thing is talking at a time on the SCSI bus), the filesystem is a different matter entirely: without some sort of distributed lock manager, your data WILL get horked. Generally DLMs are part of larger packages like GFS, AFS/DFS, Coda, or Veritas ClusterFS. Tivoli's SANergy is probably the closest thing to a standalone product to do this, although there are others - I haven't looked at the market in nearly a year.
Filesystem consistency may be a serious enough problem to keep this approach from even being valuable for backup servers: if one server goes down unexpectedly, it leaves the disk in a corrupted state, which must first be fixed with fsck or the like. If you have to wait for that anyway, then there's not a whole lot of advantage to all that extra cabling and the weirdness that accompanies SCSI cable lengths.
Generally, the three best solutions today for this sort of thing are: 1) cheap and easy: use external RAID boxes and just switch them over physically to a backup server if required; 2) use iSCSI or other Storage over IP (SoIP) (or NAS, if you don't need performance) to allow disks to be easily reconnected; or 3) buy a fully virtualized SAN-type solution (which may be SCSI, Fibre Channel, or SoIP) that will allow you to re-connect everything in software - some of these can work with distributed lock managers.
If you really want to do this sort of thing, do it right: check out FalconStor or DataCore, or HPAQ's VaporStor, I mean, VersaStor...
Re:An easy part and a hard part (Score:3, Funny)
Great word, man. I'm gonna add that to my lexicon :D
Re:An easy part and a hard part (Score:2)
A truly nasty bug, and one that continues to bite those that bounce back and forth between various Mozilla/Netscape derivatives foolishly thinking they can use the same profiles. Sounds reasonable enough, but it can't be done reliably today...
Re:Ahork (Score:2)
Also - horking ugly - ugly enough to make you want to puke.
Been used in Canada for over 40 years.
network drive? (Score:2)
Why don't you just find an extra computer and make an NFS server? The reason that you are not finding much information on sharing a SCSI drive is that there are a lot of better ways to do it. What sort of speed are you looking for? A 100Mbps network can deliver data comparable to having the drive attached locally, and you won't need an incredibly fast computer to serve it.
Re:network drive? (Score:2)
Also 100 megabit/second ethernet does not give speed comparable to having the drive attached locally... unless the drive only sends data at 10 megabytes/second (which is kinda slow these days)
It *will* work, if... (Score:2)
The difficulty you will have will be the software. You sound like you're not planning to have the same drive mounted on both systems at the same time, and that's good, and since you're using a Unix it sounds relatively simple to make sure that a drive is fully dismounted from one box before you mount it on the other. But very very bad things happen if, by some chance, both boxes do decide to mount a filesystem at the same time. If you have any sort of automatic failover between systems you have to be really really certain that the other box won't spring back to life and start writing to the filesystem while the other guy has it mounted. Supposedly reliable "failover" systems have this happen all the time if not designed correctly - remember, 99% of your failures will be software failures, not hardware failures, so if you design a hardware failover system without taking into account the flaky custom-written software you're making a mistake.
How to do the failover... (Score:3, Interesting)
On the backup machine, write a script that repeatedly does the following actions:
1) mounts filesystem on shared disk read-only
2) if the mount fails because of an inconsistency, skip to 9
3) checks the mtime of the heartbeat file (e.g. "/.watchdog")
4) determines if "too long" a period has gone by since that
time... if not, go to 8
5) remounts the filesystem read-write
6) creates a file called "/.failover"
7) starts the application assuming the other computer has died, stops this script loop
8) umounts the filesystem
9) sleep for a short period of time
10) go back to 1
The main machine does the following things in a loop:
1) Update the mtime of the heartbeat file
2) sleep for a short time (shorter than the one in the above loop)
3) Check for the existence of the "/.failover" file, and stop if it exists
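The two loops above can be sketched as shell functions. Everything here is a placeholder (device, mountpoint, file names, timeouts, init script), and a real setup should fence the dead primary (remote power switch) before the read-write remount:

```shell
#!/bin/sh
# Sketch of the backup-node and primary-node loops described above.
# DEV, MNT, the file names, and the service name are all assumptions.
DEV=/dev/sdb1
MNT=/mnt/shared
WATCHDOG=$MNT/.watchdog
FAILOVER=$MNT/.failover
TIMEOUT=30      # seconds of silence before we assume the primary died
POLL=10

heartbeat_stale() {    # true if the heartbeat file is old or missing
    beat=$(stat -c %Y "$1" 2>/dev/null) || return 0
    [ $(( $(date +%s) - beat )) -gt "$TIMEOUT" ]
}

backup_loop() {
    while :; do
        mount -o ro "$DEV" "$MNT" || { sleep "$POLL"; continue; } # steps 1-2
        if heartbeat_stale "$WATCHDOG"; then                      # steps 3-4
            mount -o remount,rw "$DEV" "$MNT"                     # step 5
            : > "$FAILOVER"                                       # step 6
            /etc/init.d/myapp start                               # step 7
            return
        fi
        umount "$MNT"                                             # step 8
        sleep "$POLL"                                             # steps 9-10
    done
}

primary_loop() {
    while [ ! -e "$FAILOVER" ]; do  # step 3: stand down if usurped
        touch "$WATCHDOG"           # step 1: refresh the heartbeat
        sleep $(( POLL / 2 ))       # step 2: beat faster than the backup polls
    done
}
```

The staleness check is the only part worth trusting as-is; the mount/remount choreography is exactly where the zombie-primary problem discussed elsewhere in this thread bites.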
Now, a better idea might be something like this:
Create a small partition on the disk (1 cylinder) in addition to the shared partition.
Have the main machine write timestamps directly into that partition (date +%s > the raw device); the backup machine would read that directly rather than trying to synchronize on a file (whose mtime will only be updated when the main machine's buffer cache is flushed to disk).
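A minimal version of that raw-partition trick, with a plain file standing in for the dedicated 1-cylinder partition (on the real thing you'd point HB at the block device, e.g. /dev/sdb2, and add iflag=direct on the read to dodge the local read cache as well):

```shell
#!/bin/sh
# Heartbeat via a raw block, bypassing file mtimes entirely.
# HB is a placeholder; use the real raw partition in production.
HB=/tmp/heartbeat-dev

write_beat() {
    # oflag=sync forces the write to disk instead of lingering in the
    # buffer cache; conv=sync pads the timestamp out to a full block.
    date +%s | dd of="$HB" bs=512 count=1 conv=sync oflag=sync 2>/dev/null
}

read_beat() {
    # read one block back and strip the NUL padding
    dd if="$HB" bs=512 count=1 2>/dev/null | tr -d '\0'
}
```

The primary runs write_beat from its loop; the backup compares read_beat against its own clock, which is why the poster's ntpd-style time sync matters.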
Also, you may want to consider some way to avoid needing a script loop on the host machine; a custom device driver that fits into Linux's watchdog timer framework is probably better.
Re:How to do the failover... (Score:1)
I'm still trying to get it to work. The idea is to have two mail/dns/dhcp servers, one a failover for the other. The problem is indeed in the details. The filesystem cache gets in the way every time: you can't reliably check the mtime of, say, /data/.watchdog from the other machine. I've tried mount flags, I've tried tune2fs tricks, nothing seems to work.
I still hack on the setup from time to time. Maybe I should _NOT_ use EXT3, but then journaling the metadata is EXACTLY what I want, so that in a catastrophe all I need to do is clear out the journal log before remounting rw. Maybe I should just adopt the tried and true serial line heartbeat monitors.... Even with that, the fun starts when the other machine decides to come back online, and steals both the drive and the IP back.
Slashdot is spooky sometimes, I was thinking of working on the problem today, and then saw this thread. b)
Re:How to do the failover... (Score:1)
Re:How to do the failover... (Score:2)
Actually:
5a) kills power to the primary machine using a serial/networked power bar to avoid any possibility of the other computer doing something like trying to mount the FS
5b) remounts the filesystem read-write
Slightly cleaner.
Why you'd do this (Score:5, Insightful)
Re:Why you'd do this (Score:2)
Avoiding Single Point of Failure (Score:2)
Consider a situation where you have (a crude ASCII graph slashdot's lameness filter does not let me pass thru, depicting ~)
where n,m,o,p,q are integers bigger than one.
Each of the above is independently connected to each device in the next group.
Now take away all but one machine from each group (stupid luser access, sudden administrator movement, coffee pourance, spontaneous smoke escapitation event, divine intervention, anything you can come up with as long as it is considered a Fatal Failure on behalf of the concerned device). Does the system fail?
(Examining other setups of similar reliability is left as an exercise for the reader, except for that one who's already fed up with my style of writing.)
This is what the original question is about. I find it quite interesting that such a setup apparently could be achieved with commodity hardware and Free software.
Of course you also need off-this-machinery-and-preferably-off-the-continent backup. Backup without the former, however, does not HA make.
Re:Why you'd do this (Score:2)
Multi initiated SCSI Array? (Score:2)
What is the point? (Score:1)
A hell of a lot of good your two redundant servers will do you if your hard drive decides to take the day off.
Depending on the data shared, it may be safer to replicate and set up some sort of load balancer.
Re:What is the point? (Score:2)
Actually, what's most likely to fail is the software, but NIC failures, accidental cable pulls, and other hardware failures do happen. The trick is setting it up so there is no single point of failure. There are lots of papers available on the web that describe how to do this, and many of them talk about how to do it cheaply. You can set up two systems with no single point of failure (redundant shared SCSI drive, host-based RAID, dual NICs in each system, remote power control) and automatic failover for under $2500.
Been there, done that. (Score:1)
1 SCSI bus, and 5 devices shared between the two systems. (Tapes, disks, CDs, etc., with each system using a different SCSI ID.)
Of course, the systems had distributed locking (also done over the SCSI bus) allowing full access to all the devices at the (nearly) same time.
In terms of hardware, the only things you need to watch for are exceeding the allowed bus length and stray extra terminators along the bus.
But all of this is moot, as disk prices have fallen so fast and so far that it doesn't make much sense to worry about all the operational problems you will have to solve. This was a reasonable solution when a 1 gig drive was a few thousand dollars, but today a multi-gig disk can be had for pocket change.
You probably would be better off using some form of shadowing disk software between the two systems. Backup, operational simplicity and support are a lot more important today than the cost of just one extra drive.
Re:Been there, done that. (Score:1)
Possible, Easy, Reliable, and -FREE-! (Score:5, Informative)
If you use debian, installation is as easy as apt-get install kimberlite. If you want to use it as an NFS server, you'll need to buy the commercial version for full support, but it's not very expensive.
Ignore the people in this thread who are talking out of their asses and saying multi-host scsi doesn't work well. They just didn't know how to set it up right or have never actually tried it. It's very common, and people have been using it for decades.
Re:Possible, Easy, Reliable, and -FREE-! (Score:1)
As someone who once worked for an also-ran in the Linux HA field, I can back this up. My impression is that the MCLinux people know what they're doing (although I haven't tried the product). I believe they recommend using remotely controllable power switches so that one server can kill the power to the other in the event of failover, ensuring that a dual-mounted filesystem cannot occur; this is a sign that they've thought about the overall solution.
If you're going to implement a manual solution (and you're careful) then you don't need to worry about this (but HA vendor manuals are still useful for the hardware setup details). If you decide to employ failover software, get a good book on HA (Marcus/Stern is recommended) because there certainly used to be a lot of FUD amongst vendors.
Yes, you want a journalling filesystem, otherwise recovery times could be horrendous.
Ade_
/
Re:Possible, Easy, Reliable, and -FREE-! (Score:2)
Because we're talking about failure situations, there is no guarantee that a failed node will be well behaved. There absolutely MUST be some sort of I/O barrier preventing the failed node from corrupting your data. Other vendors use SCSI reservations, which can be just as effective, but is more difficult to work with. Bottom line: use either the power switch option, or the reservation option, but do something or you'll be sorry.
consistency checks will take too long (Score:1, Interesting)
If availability is your goal, this is not the way to get it. Get a good journalling file system, hardware RAID and then just replace a drive if it fails. You will find that bringing your one server back online will be faster than managing the switchover when a primary server fails. If you're running a database, the issue will be even more pronounced as any switchover will require that the server perform a consistency check on the database as well.
Does anyone else remember... (Score:2)
Re:Does anyone else remember... (Score:1)
Don Lancaster, I do remember. And I remember him talking a lot about how his setup was an Apple II and a Laserwriter ;)
You can still find him at http://www.tinaja.com/ [tinaja.com], wacky as ever.
Be careful who you listen to (Score:5, Informative)
As someone else said, you want to look at "multi-initiator" support. Since there's not much point to using SCSI if you can't interleave requests, you're going to be talking about "split transactions", where the initiator arbitrates for the bus, selects a target, and sends a command and possibly data (write case) over the bus and then disconnects. Later, the target arbitrates for the bus, selects the initiator (hopefully the same one that sent the request), and sends data (read case) and status back. IIRC, SCSI-I didn't support tagged queuing and out-of-order returns, but later versions do. This has got to be negotiated just like the synchronous transfer rate. I can think of lots of ways that this could be screwed up (typically in firmware) and never affect the single-initiator case, so as I said, you have to test.
If the drive fully and correctly supports the spec, it should respond correctly to requests from any initiator and keep everything straight when it agrees to handle tagged queuing. That means you should be able to use different parts of the disk for a filesystem on each host, as long as you keep everything straight. You can even have one host write and another read, or use some blocks on the disk to coordinate dynamic sharing, but all of that gets complicated quickly, so unless this is what you really want, it won't be worth it.
A couple of comments implied that some music systems do this sort of thing, maybe between the sound recording system and a computer mixer/processor system. Doing this can't break the drive, but it certainly could hose up the format enough to make it unusable without a reformat (if you break the usage rules, that is).
As to cables and such, SCSI is a bus, although you are allowed short taps from the bus to the drives/controllers (the maximum is in the spec). If you have some sort of 'Y' cable that connects a host in two directions, you can't have more than one device inside the host (i.e. no drives inside the case connected to the internal port of the controller), and the internal cable has to be short enough (and of course no termination inside either). External drives and multi-drive modules will almost always have two connectors for both ends of the bus, so just chain all the drives together and put the hosts on each end. Now you just have to be sure the total cable length is within spec (6 meters, I think).
The final topic is why do it in the first place. Keep in mind that drives and power supplies are your most likely failure points in any case, so you want to mirror, or raid. Mirroring with one drive in each box (or many pairs split between the boxes) would reduce the single points of failure pretty well. You could even have both boxes active and mirroring to different pairs sharing the load until there is a failure, then switch over. Manual switch over is probably safest and cheapest, just shutdown the broken system (If not already hard crashed), and mount the other filesystems on the still working box. If you have confidence in your monitoring system, you could script this on certain events.
It looked like some comments had good links to some multi-initiator stuff, or just google that as suggested (it helps when you know what to ask for), YMMV. Oh, one more thing to worry about: terminator power. Usually the controller supplies it to the bus, but it is very bad for more than one device or initiator to supply it. Of course, you also have to worry about still having it at both ends even if one of the machines is off or dead.
I wouldn't recommend it. (Score:2)
Secondly, when the primary machine goes down, it may take cached disk information with it, so your secondary system will need to perform a fsck before mounting the drive, and the lag time probably won't help your situation.
What I would recommend is two separate systems, each with its own IDE (or SCSI) drive, and a gigabit network adapter in each machine. (I'd recommend using this as a secondary to your uplink ports, to make security easier and keep the bandwidth open.) Have the primary machine mount the secondary's drive over the network and mirror everything as it's written.
Set the ID of the SCSI cards to be different (Score:2)
The essential trick that you may not think of yourself is to set the SCSI ID of the 2 SCSI host adapters to _different_ SCSI IDs. Most people forget this. Remember, the PCI SCSI you use takes 1 SCSI ID in the chain, even if it's on the motherboard. So if you connect 2 PC to the same SCSI chain, the ID of each PC's SCSI adapter needs to be different, otherwise it's no different than having two hard disks both set to ID 3.
2nd, make sure you terminate both ends and put both PCs inside the termination.
So your chain should look like this:
T-P7-6-5-4-3-2-1-P0-T
Where T is a Terminator, a number is a SCSI ID, and a P designates the SCSI adapter in a PC.
Good luck, and make sure you have enough goats!
Re:Set the ID of the SCSI cards to be different (Score:1)
.
removable disk? (Score:1)
That or a full blown SAN. but I'm all-or-nothing like that
Been there, done that, hated it. (Score:2)
In our case, we used Fibre Channel, but SCSI doesn't see anything interesting about controller vs device, so you should be able to have multiple machines connected to one SCSI chain. Machines at the end of a chain should be properly terminated.
We also used 'canned' failover software. It basically had a committed channel between the two boxes where they talked to each other and figured out who was up and who was 'active' (kinda like the protocol used by timed, the BSD daemon that predates ntpd). If the 'active' box died, then the backup box would take over as the server -- this included stealing the MAC and IP addresses and the disks.
Obviously, if the backup machine thought that the primary was dead when it wasn't, then all hell would break loose (yes, I had it happen to me).
Should you accept this mission, a journaling FS is obviously the better idea (faster fsck before restarting the disks), and you REALLY want to make sure that the other machine is really down before the backup system grabs hold of the disks. IMHO, you're better off erring on the side of caution... Far easier to recover from the backup machine backing off from failover than trying to figure out what got destroyed by both machines writing to the same disks.
My best suggestion is to find some hardware hack to allow the two machines to pull each other's reset lines low. That way you can avoid the pathological case where the primary machine stalls long enough for the secondary to think it's dead, then comes back to life thinking that it's still primary (zombie servers -- appropriate for Halloween night, don't you think?)... Instant toasted disks.
Beyond making sure you don't end up with zombie servers, there shouldn't be anything special for Linux to do... Just FSCK the disks and mount them.
Re:Been there, done that, hated it. (Score:1)
Shared SCSI bus pitfalls (Score:1)
have done it on both Sun/sparc & Linux/x86 machines. There are a
number of things to watch out for when trying to do this...
A SCSI bus is just that - a bus, which needs to be terminated at both
ends. Each device on the bus must have a separate address. This
includes the controller board - sometimes called the SCSI initiator.
As supplied by the manufacturer a controller will normally be set to
the highest numbered address on the bus - 7 for a narrow (8 bit) bus,
15 for a wide (16 bit) bus. When connecting two controllers to one
bus, you must change the address of one of the controllers.
Things to check include:
Can the initiator ID be changed on the controllers you are using? (It can on the Adaptec 2940; I don't know about other boards.)
Can the controller & device driver cope with unexpected events on the
bus? E.g. if one machine does a bus reset (perhaps during a reboot),
does the other machine carry on?
Are both ends of the bus properly terminated? If one machine is
powered off, will it fail to correctly terminate its end of the bus?
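The addressing rules above are easy to sanity-check on paper. A toy sketch (the ID numbers are hypothetical) that just verifies every device and both initiators have a unique ID within the bus width:

```python
def bus_plan_ok(ids, width=8):
    """True if all IDs on the bus (disks AND both initiators) are unique
    and fit the bus width: 0-7 for narrow, 0-15 for wide."""
    return len(ids) == len(set(ids)) and all(0 <= i < width for i in ids)

# One controller moved down to ID 6, the other left at 7, disk at 0: fine.
print(bus_plan_ok([7, 6, 0]))              # True
# Both controllers left at the factory default of 7: address clash.
print(bus_plan_ok([7, 7, 0]))              # False
# ID 15 is only legal on a wide (16-bit) bus.
print(bus_plan_ok([15, 6, 0], width=16))   # True
```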
It is possible for both machines to access the disc, and indeed having
different partitions mounted on different machines will work, though
throughput may be poor (think of what happens to the seek scheduling
algorithms when another machine is also accessing the disc). I am not
aware of any filesystem which will cope with two machines accessing it
at the same time. Trying to do this is a great way to get a corrupt
filesystem.
It is possible to unmount a filesystem from one machine, & then mount
it on the other. When doing this be very careful that the disc &
filesystem caching doesn't mess things up. It's not just a matter of
flushing the write cache on unmount - a read cache which persists
through unmount then mount will also cause problems. If this cached
data is wrong because another machine has changed what is really on
the disc, filesystem corruption can result - I have seen this happen.
Good luck !
Paradox (Score:2)
Before you spend a single dollar, ask yourself: if your system is important enough to require fault tolerance, why can't you spend money to get a professional solution? If your system isn't important enough to spend money on, then ordinary bidirectional file replication should be good enough for you. You could do it with rsync and ntpd in a few minutes, for free.
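As a rough stand-in for what a cron'd `rsync -a --delete src/ dst/` does, here is a minimal one-way mirror sketch (real rsync also handles subdirectories, permissions, and partial transfers; this only covers flat directories of files):

```python
import os
import shutil

def mirror(src, dst):
    """Copy every regular file from src to dst (preserving mtimes) and
    remove files from dst that no longer exist in src."""
    os.makedirs(dst, exist_ok=True)
    src_files = {n for n in os.listdir(src)
                 if os.path.isfile(os.path.join(src, n))}
    for name in src_files:
        # copy2 preserves timestamps, roughly like rsync -t
        shutil.copy2(os.path.join(src, name), os.path.join(dst, name))
    for name in os.listdir(dst):
        if name not in src_files:
            os.remove(os.path.join(dst, name))   # like rsync --delete
```

Run something like this from cron on each box -- with ntpd keeping the clocks sane so timestamps are comparable -- and you have the free replication the parent describes.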
You will need GFS (Score:3, Informative)
If you're going to share a disk/fs between multiple machines, you will need a filesystem capable of performing proper file locking in order to avoid data corruption and race conditions.
Global File System (aka GFS) [sistina.com] can do this. I believe that it was originally developed under an OSS license, but eventually went commercial [lwn.net]. There are rumors of a GNU/GPL GFS (called OpenGFS [sourceforge.net]) but I don't have many details as to the maturity of the project, or any experience with it at all.
I found GFS's learning curve to be pretty steep, but if I was able to set it up, I'm sure that you can work through it.
Lastly, I have only used GFS with a SAN cluster, connecting multiple machines via fabric fibre channel (you might want to consider using a third box as a RAID host). I know that you are using a very different solution than I did, on a different budget -- so YMMV.
I hope that this is helpful to you.
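For contrast, here's what single-host advisory locking looks like with flock(2) on Linux; the point of a cluster filesystem like GFS is precisely that it extends this kind of exclusion across machines, which a plain local filesystem cannot:

```python
import fcntl
import os
import tempfile

# Two independent open()s of the same file get separate open-file
# descriptions, so the second exclusive lock attempt fails while the
# first lock is held -- same as two processes contending for it.
fd, path = tempfile.mkstemp()
os.close(fd)

f1 = open(path, "w")
fcntl.flock(f1, fcntl.LOCK_EX)                  # first opener takes the lock

f2 = open(path, "w")
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_got_lock = True
except OSError:
    second_got_lock = False                     # blocked, as it should be

fcntl.flock(f1, fcntl.LOCK_UN)
print(second_got_lock)   # False
```

flock only arbitrates between processes on one kernel; two boxes on a shared SCSI bus never see each other's locks, which is why you need GFS (or equivalent) before letting both mount the disk.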
I've done it lots: It works (Score:1)
Now we only mount one at a time using FailSafe to detect failure and handle fail-over.
If you really want reliability, though, you have to put the external storage behind an external (redundant, of course) RAID controller(s). Or just buy a Compaq cl380. They run Linux just great and everything is all set up.
For testing the software, etc., use ieee1394 because it is MUCH less expensive than SCSI.
My use for this... (Score:2)
The real question is what you're trying to do... (Score:1)
The problem I have with this scenario is that server hardware problems are much less likely to occur than a disk problem, and I think you'd be *much* better served by using RAID and mirroring your disks than worrying about a non-disk problem in your server. Using RAID you come close to solving the problem of the disk being a single point of failure, and given that disk problems are more prevalent, it's generally a better choice than redundant machines.
That's a smaller cost than buying a second server, and I suspect it'll give you better results, if for no other reason than what you're suggesting is a fairly non-standard configuration. (Regardless of whether or not it is supported.)
That doesn't solve the possible problems, but it at least places the most likely single point of failure in the right place. To really solve the problem is to make totally redundant systems, but that gets much more expensive.
Sean.
ICP Vortex (Score:1)
Man the level of bad advice (Score:1)
Use STONITH (Score:1)
A WTI RPS-10M is an ideal unit for this: it is a power switch controllable with a serial port. You can even chain up to 10 of these together with phone lines and control them with one serial port.
There is lots of info on STONITH on the Linux-HA site.
High-Availability File Server with heartbeat (Score:1)
--
euph
www.euphnet.com [euphnet.com]
NFS (Score:2)
Re:NFS (Score:1)
Although you'll likely only notice these things in disk-intensive and/or HPTC applications, it's always a Good Idea (tm) when choosing connectivity options to keep any protocol overhead to a minimum.
Linux-HA (Score:1)
Lots of information on using shared storage with a bias toward setting up highly available clusters.
Oracle Real Application Clusters (Score:1)
This is a technology that allows you to set up several commodity intel boxes (or solaris, or whatever) as a cluster, with a shared storage device to hold the data files. The clever bit is that it appears to all intents and purposes to be a single instance of the database, meaning apps don't have to be rewritten to take advantage of clustering.
The kicker though is trying to source a shared storage unit for less than £50k. All quotes from Dell (our supplier) are for fibre-channel devices that cost a fortune, but I know deep down that we can accomplish this with a SCSI unit with simultaneous connections to each server. The Oracle RAC software takes care of the synchronisation between writes to the disks, so things shouldn't get out of sync.
I'd be interested to hear if anybody has been able to source a shared storage SCSI unit, and in particular which brand etc. I'm trying to set up a low cost RAC cluster using Dell PCs, SuSE SLES-7 and the Oracle software, and I need the storage solution to be cheap as well.
Re:Oracle Real Application Clusters (Score:1)
IEEE1394/FireWire (Score:1)
Done it, but wouldn't recommend it (Score:2)
Or, if you're after a cheap solution for failover (and it sounds like you'll be doing a manual failover) I'd just use external devices plugged into a SCSI card, and if you need to failover, manually unplug the disk from one machine and attach it to the other and boot it up. Not quite "hot standby", but quite warm...
highly dangerous without clustering software (Score:2)
Imagine the following scenario--
- the node "owning" the disk hangs
- the backup node takes over the connection and starts working
Damn! didn't mean to hit submit (Score:2)
- node A, which owns the drive, hangs
- node B takes over and starts writing
- node A recovers from its hang, thinks it still owns the disk, and goes back to writing happily
- you experience massive data corruption and get fired
- the economy sucks so you can't find a new job with "corrupted critical files because of my cheap-ass attempt to share a SCSI drive" on your resume
- your significant other leaves you, because he/she doesn't want to date an unemployed loser
- you spend the rest of your life alone and friendless, wishing you had heeded my sage advice
Clustering packages typically include features to STONITH (Shoot The Other Node In The Head) to prevent problems like this one. Red Hat Advanced Server and Kimberlite (on which RHAS clustering is based) include a number of other nice manageability features as well.
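A toy model of why fencing matters -- purely illustrative, not how any real cluster package is implemented:

```python
# Toy split-brain demo: without fencing, a node that was merely hung wakes
# up and writes alongside the backup; with STONITH, it is powered off first.
class Node:
    def __init__(self):
        self.powered = True
        self.mounted = False

def fail_over(hung_primary, backup, stonith):
    if stonith:
        hung_primary.powered = False   # shoot the other node in the head
        hung_primary.mounted = False   # a powered-off box can't write
    backup.mounted = True              # backup grabs the disk

# Without fencing: the primary was only stalled, comes back, both write.
a, b = Node(), Node()
a.mounted = True                       # primary owns the disk, then hangs
fail_over(a, b, stonith=False)
print(a.mounted and b.mounted)         # True -> split brain, toasted disks

# With STONITH: the primary is powered off before the backup mounts.
a, b = Node(), Node()
a.mounted = True
fail_over(a, b, stonith=True)
print(b.mounted and not a.mounted)     # True -> exactly one owner
```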
Good luck!
--JRZ
We did this (Score:1)
We had 4 Intel 286 multibus boxes, each containing 6 graphics display controllers, connected to two disk servers, each of which had (IIRC) four SCSI drives. The data on both disk systems was identical. When all went well, any of the graphics systems requested data and either disk system would respond, providing faster response. If either one failed, the other one did all the work.
We wrote our own SCSI drivers, and had the data (mapping vector and image data) striped physically on the disks to optimize sequential fetching when a user 'panned' across the map space.
Unfortunately I can't provide much technical detail, in part because it's been a long time. It's doubtful if it would be useful anyway as SCSI has grown and changed a bit since then! However I believe it now allows multiple masters on the bus, which is the key to doing this stuff. I think we faked it somehow, I don't recall how. A big question is whether Linux drivers have this kind of capability.
The primitiveness of Xenix provided an interesting advantage - we had a Unix development and operating environment, but had a very primitive timesharing method. This allowed us to 'take control' of the machine for significant amounts of time giving us a kind of pseudo-realtime capability for talking to the disks. This is similar in concept to some of the present-day Linux Realtime distributions(?)
For those who are curious, this was all done for a 911 Emergency Response mapping system at Fairfax County, Virginia, delivered in 1985 or 1986. Given a street address, we could figure out the location and present a 1Kx1K 8-bit color display of the incident area based on maps our system had scanned and vectorized, with several additional layers (the 8 bits were used as bit planes) for additional data such as police locations, within 7 seconds guaranteed with all 24 terminals in use.
In practice response was typically 1.5 seconds. Not bad for a bunch of 286's with 2MB RAM - and umpty $K worth of display hardware - I think the boards were about $4000 each.
Striping the data, which was organized in 256 or 512 pixel (I forget which) square patches, along with some fancy paging of these tiles into the 2Kx2K frame buffers, allowed the user to 'pan' vertically and horizontally across the entire 1400 sq. miles of the county, seamlessly.
Of course, once we had built and delivered the system, I was unable to convince the chairman of the company to attempt to sell this to anyone else and $700,000 worth of development time was essentially tossed and an entire market ignored. I left the company a few months later.
Note to cliff: (Score:2)
This one is an extreme case in point.
This is NOT off-topic, nor is it flame bait. Too many "ask slashdot" topics are themselves redundant.
no (Score:2, Interesting)
Re:no (Score:1)
Horse pucky (Score:2, Informative)
(Or the gentleman is painfully ignorant).
Having done this myself in the real world I can say with complete authority that one should definitely use cards which support this configuration (e.g. Adaptec's). The reason being that these cards will actively negotiate which one has access to a given device at any particular time.
If you don't have cards that support this (which I didn't, so I found out the hard way) the SCSI devices will get confused and hang if they're accessed by both cards at the same time. Interestingly enough it did work, I just had to be careful what I did on the two machines.
(Better just to get the right cards and not have to worry about it constantly).
Re:Cable length. (Score:1)
Some drives, at least IBM SSA-attached ones, have 4 connections to each drive, bus A and B, port 1 and 2, I think was how they were named. These were usually attached multipath, so there was an SSA adapter on each end of the bus, in the same system. The second bus was either to a second pair of adapters in the same system, or in another system (or were unused). Serious redundancy, and with 4 paths to each physical disk, bus waits really weren't significant on reasonable-sized arrays. Anyway, as I said, you can do just the next level down with commodity SCSI hardware.
There are ways to have independent systems share physical disk concurrently, but last time I worked on one (mid-y2k), it was pretty kludgy and unreliable.