
Server Redundancy for a Small Business? 81

Posted by Cliff
from the disaster-preparation dept.
SadPenguin asks: "I am currently working for a small company of about 15 people, each with one or two workstation/laptop machines. We are looking for a new server solution, as our last one crashed, and lacking any server redundancy, we nearly lost all of our data since our last backup (it was only a few days, but an important few). What kind of server (and redundancy) solution would be appropriate for a company of my size? Most advertisements are for large-scale enterprise serving solutions, but these are costly and excessive for my situation. I'm sure that there is a simple redundant-server technology out there that is a bit less costly, but won't result in any downtime in the event of a motherboard component failing (like we faced this time, when our mysterious surface-soldered VRM failed). So what do you use? What should I use?"
  • by chuckcolby (170019) * <chuck AT rnoc DOT net> on Friday June 04, 2004 @03:49PM (#9338451) Homepage
    Excellent question!

    I actually run a computer consulting firm specializing in small businesses. I'll outline some of the more common recommendations - with what I think is the most important first.

    From my experience, the best approach is to layer your defenses. I'd REALLY recommend a UPS (I generally assume this is purchased with a server, but it isn't always) at very least. Your local power company is only required to provide you with something CLOSE to 120v. They generally can't keep it consistent enough for power supplies (and electronic componentry in general). Protect your investment, UPSes are generally relatively cheap.

    The fact that you've got a backup solution is good, but (as you've seen) not enough. Evaluate it, and see if it's consistent with best practices - i.e., is it a tape (or optical) backup system that is done in rotation and taken offsite by somebody in the company? If not, set that in motion first.
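    To make the rotation concrete, here is a minimal sketch of a full-plus-incremental scheme using GNU tar's --listed-incremental mode. All paths are temp stand-ins, and a real setup would write to tape or removable media that gets taken offsite, not to a local directory:

    ```shell
    #!/bin/sh
    # Sketch only: full + incremental dumps via GNU tar's snapshot file.
    # Temp directories stand in for the real data and the tape/offsite media.
    set -e
    DATA=$(mktemp -d)            # stand-in for the directory being protected
    DEST=$(mktemp -d)            # stand-in for the backup target
    SNAR="$DEST/state.snar"      # tar records file states here between runs

    echo "invoice 1" > "$DATA/invoice.txt"

    # Level 0 (the weekly "full"): no state file yet, so everything is dumped.
    tar --listed-incremental="$SNAR" -czf "$DEST/full.tar.gz" -C "$DATA" .

    # Something changes before the next night...
    sleep 1
    echo "invoice 1 (rev A)" > "$DATA/invoice.txt"

    # Level 1 (the nightly "incremental"): only changes since the full dump.
    tar --listed-incremental="$SNAR" -czf "$DEST/incr.tar.gz" -C "$DATA" .
    ```

    Deleting the state file before a run forces the next dump back to a full; in a weekly rotation that is exactly what the weekend job would do.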

    Next, some sort of drive redundancy is in order. At the very least, mirror your drives. I generally recommend RAID5 (or one of its variants), but in very small companies RAID5 is either not required, not affordable, or both. IMO, the jury's still out on the long-term viability of IDE RAID, but I think it looks promising.

    Finally, redundant power supplies and NICs (for those of us that are REALLY paranoid ;) ). I've had a couple of servers' power supplies die on me, but the server kept right on ticking thanks to a redundant unit.

    If it's affordable to your company, consider hot-swappable server components, as well. This significantly reduces downtime to your coworkers... and expense to your company.

    Hope this helps. Good luck!

    Oh yeah, FP ;)

    • "Oh yeah, FP ;) "

      Excellent one as well... anything else that follows will largely be redundant...

    • These are all very good solutions. More ideas include high-availability network tools such as UCARP [ucarp.org] or HA-Linux. Many of these depend on specific details of your setup, and these recommendations are definitely biased towards Unix/Linux systems. In my experience, having backup systems for your important services is crucial; having employee downtime as a result of system failure is a nightmare for any company, especially a small business.

      A mysteriously soldered VRM sounds a bit odd; you might think about going wi

    • Is there any sort of RAID 5 available in the range of $1000 to $2000? I have a small law firm, and would love to have the redundancy capability offered by RAID 5. I would think there is a market for a stand-alone FireWire box that I could pick up. The box could either come with 3, 4, or 5 hard drives, or allow me to pick up my choice of hard drives separately and just plug them in.
      • A quick Google turned up the following sites:

        PC Pitstop [pc-pitstop.com]
        Cooldrives.com [cooldrives.com]
        Adaptec [adaptec.com] (DuraStor line... a bit beyond your stated price range, though)

        Hope this helps!

      • RAID 5 is supported natively by Linux, so all you'd need are the separate drives to add to the virtual array. Just make sure to ALWAYS have every disk available. Rebuilding an array isn't that fast.

        As for a standalone box, I don't know of anything that's sub-SCSI, but I imagine someone sells something similar. It'd be really slow, though. FireWire can average out around 14MB/s, and FireWire 2 is still missing critical mass. If in doubt, get a 4-channel FireWire card and 4 drives and tape them to each other
          • Adaptec and Escalade both make IDE/SATA RAID cards. They're not the cheap $100 ones like Promise makes, but they're much cheaper than the SCSI options and use standard IDE drives. As a matter of fact, most of the midrange "enterprise" data tank solutions utilize a big box of 8-10 disks set up this way with some extra "sauce" in the mix.
            • Obviously, but the parent wanted an EXTERNAL option. He's looking for a chassis that has multiple drives inside, where the RAID is done outside the machine (or inside Linux's subsystems, as in my example).

              If you look in the external storage solutions at Adaptec you'll find that there are NO SATA/IDE interfaces. As for internal, you can use SATA hard drives in the setup, but no matter how you slice it, a 12-drive storage array is not a viable option for:
            Small businesses (5k for the chassis, n for
      • Re:RAID 5? (Score:3, Interesting)

        by pbox (146337)
        I need to point out that your selection criteria should include multiple firewire ports, and firewire controllers on both the drive and the server end. Should add only marginal cost to your setup.

        I have a Maxtor FW single-HDD backup solution, but I definitely would not recommend that particular one for a constant-on situation (for lack of ventilation). It seems that when the drive does its temp calibration, the FW interface hiccups and the ongoing transfer gets interrupted. All is well after disconnectio
    • by llefler (184847) on Friday June 04, 2004 @05:33PM (#9339806)
      While your suggestions are good, some of them might be a little expensive for a company this size, depending on what kind of business they are.

      The first red flag I saw was that although they had backups, they were three days old. If the data is worth saving, it's worth doing it right. Full backups on the weekends and incrementals nightly.
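      That schedule drops straight into cron; the script path here is a made-up stand-in for whatever backup command is actually used:

      ```
      # crontab sketch (hypothetical script path): full dump late Saturday
      # night, incrementals every other night of the week.
      30 23 * * 6    /usr/local/sbin/do-backup full
      30 23 * * 0-5  /usr/local/sbin/do-backup incr
      ```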

      Ok, the redundant stuff... power supplies, hot swap drives, RAID5. You're approaching a $10k configuration. That, BTW, would have still gone down because they had a motherboard failure. And since they needed backups, their drives were corrupt, so the RAID probably would have been too.

      Really though, this whole question is about designing their new server without any idea of the load required. Based on the info that is available, I think I would lean towards purchasing two servers. Make them a little smaller than what you would purchase if you only had one, and divide the load between them. If one fails, you can temporarily transfer to the remaining one until you can get it fixed. You could even go so far as to move drives and RAM temporarily if necessary. Just make sure the equipment is server rated. For example, my Dell PowerEdge 400SC servers are rebadged desktop machines; my Compaq ProLiant 800s are definitely not. Even good equipment is getting pretty cheap if you have reasonable requirements.

      Above that: daily backups. The UPS equipment like you suggested; just keep in mind that UPSes are consumables. And possibly IDE RAID-1. Drives are cheap, and 15 users shouldn't need the performance of SCSI.
      • Forget incrementals: if the data is worth backing up, it's worth backing up correctly. Your main expenses will be in the drive/changer and manpower. Tapes are kind of expensive, but not as expensive as losing your data; 90% of businesses that suffer a catastrophic loss of data go out of business within 5 years. As for server solutions, a pair of 2U Dells with RAID5 and redundant PSUs can be had for under $10K, which is cheap unless this is an unprofitable company. I have quite a few companies with 35-50 employe
        • If I remember correctly, the survey that I read was 90% of small businesses....

          And $10k is a huge investment for a company of 15 employees if they aren't technology based. Most would start to squeal long before you hit $5000. Sometimes you just have to be happy that the 'server' isn't the owner's PC.
      • We have a server here with dual-200GB drives in RAID-1. It's primarily used to backup several offsite servers on a nightly schedule. Assuming that there was space elsewhere in the building, putting another server in there with RAID-1 drives and doing networked backup should be fine.

        With 'nix, even software RAID-1 works well. RAID-5 is also a choice. Doesn't take an insanely fast CPU (or a monitor, for that matter)... so you can manage a multidrive backup machine for under $2000 or even under $1000.
        • We have a server here with dual-200GB drives in RAID-1. It's primarily used to backup several offsite servers on a nightly schedule. Assuming that there was space elsewhere in the building, putting another server in there with RAID-1 drives and doing networked backup should be fine.

          Let's be clear about what you have here... You've taken a box, stuck a pair of IDE drives in it and called it a server. While not necessarily a bad solution, it's not in-line with the post I replied to that suggested redundant
          • Actually, the server is running a PROMISE raid controller, and dual 2200 CPUs, etc. The point was that one can just as easily make a cheap "backup machine" that will handle offsite backups and/or be able to swap in the event of an emergency. Other servers still do the primary work, but this one makes sure that if one of them goes down - the data is still around.
  • It all really depends on how much money you want to spend. You could roll your own dual-Opteron server and throw in a bunch of small (20-40 GB) hard drives and RAID 5 'em. That would be my solution. It would cost you like 2 grand if you do your homework and get a good hardware RAID card. 3ware makes good stuff that's compatible with Linux.
  • Daily backups (Score:5, Insightful)

    by chrismcc@netus.com (24157) <chrismcc AT gmail DOT com> on Friday June 04, 2004 @03:53PM (#9338503) Homepage
    >> we nearly lost all of our data since our last backup (it was only a few days, but an important few)

    Daily backups !

    general recommendations:

    quality server (Dell/HP/etc)
    NO ide drives!
    SCSI in software raid5
    minimum software install (e.g. no compilers)

    get second 'devel' server to test/compile software before using on production server
    If it is not broken, don't fix it; as in, screw with the devel server, not the production one.
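    For the software-RAID5 piece, the Linux-side setup is roughly the following mdadm outline. Device names and mount points are placeholders, and this needs root and real disks, so treat it as a sketch rather than something to paste:

    ```
    # create a 3-disk RAID5 array out of three SCSI disks (placeholders)
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    mkfs.ext3 /dev/md0                      # or your filesystem of choice
    mount /dev/md0 /srv/data

    # after a disk failure: mark it out, swap the hardware, re-add
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
    mdadm /dev/md0 --add /dev/sdb1          # array rebuilds in the background
    ```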

    • Forget software raid. The extra money you spend on hardware raid will be immediately recovered the very first time you have a drive fail.
      • Forget hardware raid. The extra time you spend on software raid will be immediately recovered the very first time you have a hardware controller fail and corrupt all the drives at once.
        • Re:Daily backups (Score:3, Interesting)

          by itwerx (165526)
          Heh, touche'! :)
          But I've been in the industry for over fifteen years with thousands of clients and the last time I had a hardware raid do that was almost six years ago.
          Software raid on the other hand inevitably takes more time/effort/energy to recover from failures (especially if you're so foolish as to use what's provided by Win2K!).
          Hardware hot-swap RAID is easy, just change drives and nobody knows anything happened.
          Software RAID usually requires at least a reboot if not fiddling with syste
          • My basic rule of thumb is this:

            If they can afford to get a top end RAID5 setup from a quality vendor, it's the better choice. You can be relatively assured that when that raid controller dies in a few years, you can get another card that can import your config and recover your data.

            If you are trying to do things on the cheap, and cannot get the top of the line RAID card, software RAID provides the hardware independence to upgrade cards and drives as needed, with what ever is cheapest at the time they fail
    • SATA is the only way to go.

      SCSI is stupidly expensive -- I can build a 750GB RAID 5 (4x250GB) for the same price as a 140GB SCSI drive. Both of these solutions are roughly $1000 CAD. Even throwing in a 3ware SATA controller is still cheaper than doing a software RAID 1.

      I also think that everyone is going a little overboard -- I'm pretty sure the original poster does not need redundant servers running Linux HA plus development and production servers. They can probably afford downtime over hot-swappable memo
    • >> we nearly lost all of our data since our last backup (it was only a few days, but an important few)

      Daily backups !


      Why not hourly backups? Hourly incrementals + daily full backups?

      Alex

  • by Bistronaut (267467) on Friday June 04, 2004 @03:54PM (#9338523) Homepage Journal

    I work for a small company that only has three full-time employees (including me). I use two Debian boxes (cheap-o machines that are just retired desktops with some big cheap IDE hard drives in them) running Samba. I use the rsync mirroring technique I found here [mikerubel.org].

    One box is the "live" server and the other mirrors the live server every night. If the main server dies (which happened once - power supply failure), I can "promote" the backup server by changing one line in its Samba configuration. As a bonus, the backup server keeps "snapshots" back a week or two.
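    A minimal sketch of that rotate-and-mirror idea, after Mike Rubel's hard-link technique. The temp directories here are stand-ins for the live share and the mirror's disk, and it assumes GNU cp (for -al) and rsync are available:

    ```shell
    #!/bin/sh
    # Sketch of nightly hard-link snapshots (Mike Rubel style).
    set -e
    LIVE=$(mktemp -d)        # stand-in for the live server's share
    BACKUP=$(mktemp -d)      # stand-in for the mirror machine's disk

    snapshot() {
        # Age the snapshots, hard-link the newest (cheap: blocks are shared),
        # then let rsync rewrite only the files that actually changed.
        rm -rf "$BACKUP/snap.2"
        if [ -d "$BACKUP/snap.1" ]; then mv "$BACKUP/snap.1" "$BACKUP/snap.2"; fi
        if [ -d "$BACKUP/snap.0" ]; then cp -al "$BACKUP/snap.0" "$BACKUP/snap.1"; fi
        rsync -a --delete "$LIVE/" "$BACKUP/snap.0/"
    }

    echo "v1" > "$LIVE/report.txt"
    snapshot                 # night 1
    echo "v2" > "$LIVE/report.txt"
    snapshot                 # night 2: snap.1 still holds last night's v1
    ```

    Because rsync replaces a changed file rather than editing it in place, the hard-linked copy in snap.1 keeps the old contents while snap.0 tracks the live data.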

    • I use the rsync solution for my home computer (although to an external hard drive, not a second computer) and love it. As a matter of fact, I had an unplanned test last night and it rebooted without a hitch.

      If you really wanted to save some more money you could use an external drive to rsync to, although you would have to get your server fixed before you could copy the rsync'd files back over.

    • For an incredible implementation of the aforementioned Mike Rubel rsync trick, check out rsnapshot [rsnapshot.org]. It is Perl-based, quite robust in the error-checking arena, and very configurable. Plus, the creator is very open to suggestions and is quick with the updates. There is even a deb repository:

      deb http://debian.scubaninja.com/apt/ binary/



      -Shane

  • I do three types of redundancy/backup at my sites:

    * Mirrored Raid in all servers
    * A regular workstation with a good, large hard drive that copies the server data to itself nightly
    * A DVD-RW backup made nightly on yet another workstation, with at least one off site - 5 discs, one each weeknight, replaced a few times a year.

    In most cases the server RAID (cheap ATA Promise controllers) takes care of 90% of the problems - only one HD goes bad at a time, lightning strikes rarely take out the hard drives at all, never mind both hard drives, etc. Even if it dies, it's unlikely that the problem affected the HD backup on the other workstation, and it definitely didn't affect the DVD-RW.

    However, whenever you get a catastrophic failure in any component in the server, replace the entire thing. If the MB or power supply fails, copy the data to new hard drives, and use the old ones in less critical applications, etc.

    Much cheaper than an 'enterprise' solution, and it should be because your application doesn't require such a solution. Use large tape drives in place of the dvd-rw if you must back up a huge amount of data on a nightly basis.

    This sort of solution is very tolerant of cheap hardware, so replacing the server later may not be such a major cost.

    -Adam
  • applications (Score:4, Insightful)

    by perlchild (582235) on Friday June 04, 2004 @04:01PM (#9338591)
    Depending on the applications you need to have redundant, you might be able to just use a CompactPCI server with redundant hardware in it (this technology, while expensive, even allows removal of failed CPUs during operation of the machine; it was developed for telecom carriers). That would protect you from component failures, but not from power outages without redundant power, nor from OS failures.
    This is a hard problem(NP-Hard perhaps, I'm not sure), and you need to have a:

    List of applications you want to protect

    Budgeted amount

    What threats you are trying to protect from

    What kind of failures you will tolerate (do you need 99.9% uptime? or better? worse?)
    You could, for simple applications like web service, stand up a pair of Linux machines, gimmick some replication between the two, and hope nothing goes wrong, if you have a very low budget; you'd probably spend a fair amount of work later debugging "synchronisation problems". But for redundant storage, the OpenSSI project [openssi.org] is working on highly-available single-image clusters for Linux, in an open-source model; they might be your first place to look. It's not, however, something for the unprepared to do, nor is it something that I'd recommend if you do other tasks for this company; maintaining such a beast will require a significant implementation investment. The good news is that once everything works to your satisfaction, you can probably take a 4-week vacation somewhere with golden beaches and much sun, and let it take care of itself. I can't stress this enough: this is a hard problem. If you really want to do this right, you'll want to surround yourself with qualified people with experience in this field; it's non-trivial, and mistakes can lead to severe data loss.

  • Our backup system (Score:5, Informative)

    by fava (513118) on Friday June 04, 2004 @04:02PM (#9338608)
    At my place of work (18 people) I have set up a spare low-end machine (P233) with an 80GB drive as a backup file server. During the day, every 15 minutes, everything that has changed is copied to the backup server. The backup fileserver is configured as read-only so a user cannot accidentally change anything.

    If the main fileserver goes down I simply change the configuration to read/write and change the file mapping on the users' machines, and they continue to work. The whole process takes about 10 minutes to reconfigure the server and a couple of minutes per user machine.

    As a bonus, I don't delete the intermediate versions of changed files as I update the server. Instead I compress them under unique filenames, so I can recover a fairly complete history of any given file. I have yet to fill up the 80GB drive, so I haven't needed to delete any backups. When the backup drive is full I will start deleting some of the older versions; I should have room for about 6 to 9 months of backups at 15-minute intervals.
    • Yes, this is exactly what I was thinking. The advantage is obvious: you can get by with very cheap hardware, and failover time is on the order of minutes (although manual intervention is required). The intermediate versions are also useful, I'd imagine; it might be worth looking at the schemes that use rsync with hard links for backups, which do a similar job.

      One thing to add: I really hope you're not relying on the backup machine as your sole source of backups; if you lose the site (fire/flood), you lose all y

      • I also rsync to an offsite server at my home 2 times a day, the company pays for my broadband connection. Rsync has been configured to delete any remote files that are no longer on the server but will only delete 50 files per session.

        As well, I copy all files that have changed during the day to a dated directory, and I periodically burn them to a CD. I end up with about one CD a week, going back over 2 years.

        All together I have 5 levels of backup.
        1) Onsite mirror updated every 15 minutes with incremental version
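        The capped-delete behaviour maps directly onto rsync's --max-delete option; the host and paths below are invented:

        ```
        # cron sketch: push the fileserver to an offsite box twice a day,
        # mirroring deletions but never removing more than 50 remote files
        # per session. Host and paths are placeholders.
        0 6,18 * * *  rsync -az --delete --max-delete=50 /srv/data/ backup@offsite.example.com:/srv/mirror/
        ```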
        • If you intend to survive, that's the way to do it.
          You need multiple backups, and they need to be cheap.

          First rule of backups is that when you need them, something is not the way it should be and any scheme that assumes everything is normal is quite likely to fail. This means you want any failures of the backup systems to be as independent as possible of failures in the main systems.

          Second rule of backups is that every backup except the one that matters was a complete waste of time. Backups need to be chea
    • Instead I compress them under unique filenames.

      What's your method for this? I recall something along those lines from the last time I read the rsync man page/docs, but I'm wondering how you go about it.

      • I don't use rsync for the backing up over the local network. I maintain a list of files on the server and compare file modify times using Perl. If the file does not exist on the backup server, it is simply copied. If there is an existing file, then the existing file is renamed before the new file is copied. The unique filename is simply the file renamed with a Unix time value (i.e. seconds since Jan 1, 1970) while maintaining the file extension. The actual compression is a cron job that runs overnight.

        For exam
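        The rename-then-copy step can be sketched in plain shell (the Perl version would do the same); the directories here are temp stand-ins:

        ```shell
        #!/bin/sh
        # Sketch: before refreshing a backup copy, rename the old one with a
        # Unix timestamp while keeping its extension (e.g. memo.1086372000.txt).
        set -e
        SRC=$(mktemp -d)          # stand-in for the live fileserver
        DST=$(mktemp -d)          # stand-in for the backup server

        version_copy() {
            f="$1"
            if [ -f "$DST/$f" ] && [ "$SRC/$f" -nt "$DST/$f" ]; then
                base="${f%.*}"; ext="${f##*.}"
                mv "$DST/$f" "$DST/$base.$(date +%s).$ext"
            fi
            if [ ! -f "$DST/$f" ]; then
                cp -p "$SRC/$f" "$DST/$f"    # -p keeps the modify time
            fi
        }

        echo "draft" > "$SRC/memo.txt"
        version_copy memo.txt     # first run: plain copy
        sleep 1                   # make sure the next write is visibly newer
        echo "final" > "$SRC/memo.txt"
        version_copy memo.txt     # second run: old copy renamed, new one copied
        ```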
  • by zaqattack911 (532040) on Friday June 04, 2004 @04:13PM (#9338736) Journal
    I've been a system admin for a production webserver for a few years now, and I can tell you this.

    99.9% of the time when I've had to retrieve data from backup, it was because of human error. I.e., someone deleted something they shouldn't have, or they moved the wrong directory to the wrong place, or an error was made during a software upgrade, etc.

    The rest is due to random hardware failure, which would be a reason for using RAID. But pouring thousands into redundant servers and disks is overkill for a biz your size.

    If someone accidentally wipes out a folder or data, your RAID disks won't be any help.

    Love,
    Zaq
    • by MarkGriz (520778) on Friday June 04, 2004 @04:39PM (#9339098)
      "But pouring thousands into redundant servers and disks, is overkill for a biz your size."

      I think it's a mistake to make a blanket statement that a RAID array is overkill for a small business. My company is similar in size (18 employees) and a RAID is absolutely essential for us from a downtime perspective. We simply can't afford to be down because a drive crashed.

      Sure, backups are essential for the lost/deleted file, but a RAID (or at least a mirrored drive) keeps your server up and running. Not everyone needs that type of reliability, but if you figure the cost of recovering from a failed hard drive (even in a small company), the additional cost of a RAID upfront is well worth the investment.
      • I think it's a mistake to make a blanket statement that a RAID array is overkill for a small business. My company is similar in size (18 employees) and a RAID is absolutely essential for us from a downtime perspective. We simply can't afford to be down because a drive crashed.

        Exactly. RAID is all about buying time when a hard drive fails. My personal server ate its OS drive, and from a user's perspective, you would never know it. Being lazy, I waited several months before I replaced it. OTOH, at work, I hav
    • FreeBSD and a few other OS's support filesystem checkpoints, which effectively let you keep multiple versions of a filesystem on the same disk. They're actually used for background fsck in fBSD (create snapshot => fsck snapshot, leaving the rest of the filesystem live), but could also be used for keeping filesystem checkpoints about in the event of screwups like this.

      With WinXP SP2, it also seems you can use WebDAV shares like normal fileshares -- an interesting project would be interfacing this to Sub

  • Daily backups, #1

    What kind of server though?
    Mail? SQL? Files?
  • If you can do it, the best way to handle this is clusters with an external RAID 5 device that is a shared resource between the two (or more) servers.
    Set them up with a shared hardware RAID 5 device.
    There is only one active node in the cluster at a time; if that one fails, the second one assumes the identity. Works great, never fails!
    We are a bit larger so we use EMC Symmetrix, however a smaller shop could probably do a low end EMC Clariion CX200 or the like.
  • I'm a sysadmin for a tier 2 automotive company in Michigan with about 35 client machines.

    The two main servers are Xeons with RAID 5, redundant PSUs, etc. One server runs the domain and acts as a file server while the other runs the manufacturing software suite (heavy database workload). All the data is very important, but I rarely have a problem with lost data, unless some schmuck overwrites a file or something stupid like that.

    The backup solution I implemented was a Debian box that runs rsync every night ba
    • if someone looses something

      Karma be damned, you gotta learn the difference between loose and lose. Look, you lose your virginity to a loose MILF. Get it? DAMN, DOOD, every day. Idiots!
  • by nenam (613985) <`moc.liamg' `ta' `krukb.hcet'> on Friday June 04, 2004 @04:27PM (#9338920) Homepage
    We just finished building a 2.5 TB (terabyte) server for less than $5,000. You could probably spend even less than that, since we spent about $1,000 on two fiber-optic cards. We have two 6-channel 3ware RAID cards and twelve 250GB ATA-133 Maxtors hooked up to a 520-watt power supply, plus another 520-watt power supply acting as redundant power (we did that mod in-house). 2.5 TB is probably more than you guys will need unless you are doing some advertising or something like that, so you could probably go for 1 TB, which will cut your costs down even more. All in all you could probably get it done for about $3,000; not too shabby for 16 people. Our server backs up my whole college.
  • An option (Score:3, Insightful)

    by Halvard (102061) on Friday June 04, 2004 @04:36PM (#9339064)

    I too have long experience doing small business consulting and in some other areas. One thing you could do is use RAID-1 with a spare drive. That way if you lose one, you aren't screwed. You also could have a couple of spare drives in hot-swap carriers. Pull a drive every night and have a duplicate of your server. If your server fails, fire up the duplicate server, pop in your known-good pull, and boot.

    Depending on the OS, you don't even have to have exactly the same hardware if you use a more generic kernel build; you can list a different NIC for the spare server in the modules conf file, assuming you aren't compiling them into the kernel.

    Continue with good backups made to another machine, to tape/CD/hard drive, or off-site. This way, even if your good pulled drive is a little out of date, you can bring the data current in short order.

    You don't mention the server's OS or your budget, but I'll assume that since you've got two machines per desk times 15, you can afford a spare server. The OS affects cost, but still, if you are doubling up on hardware on desktops, you can afford to do this or most any of the other solutions offered.

    Of course, you get what you pay for and if the experience is lacking in house, hire a knowledgeable consultant or company you trust to do it for you.

  • Cheap Redundancy (Score:5, Informative)

    by Zambarra (696249) on Friday June 04, 2004 @04:37PM (#9339077)
    A relatively cheap setup for data/service redundancy for a small business:

    * two identical servers, running linux (of course).
    * heartbeat
    * drbd
    * two UPS

    Notes, Ins, Outs and What Have You's

    service redundancy

    heartbeat is used to make 2 servers look as if they were one. if one of the servers dies, heartbeat makes sure the other assumes the IP address and has all the relevant services started.

    data redundancy

    drbd is a network block device. again, it looks like one device, but when data is written to it, it's actually being written to 2 separate locations. if one box goes down, heartbeat makes sure drbd makes the other box primary.

    hardware

    these two call for a dedicated network and serial connection, so 2 NICs and a serial port per box.
    definitely a raid array of some sort.

    see drbd.org for more details.

    this is not a 100% foolproof setup, but it's cheap and covers most of the bases.
    of course, it requires a linux dude to get it all to work.
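    for the curious, a rough sketch of the two config files involved (heartbeat 1.x style); the hostnames, IPs, devices and the ext3 choice are all made up:

    ```
    # /etc/drbd.conf (fragment): resource r0 mirrors one partition
    # between the boxes over the dedicated link. placeholders throughout.
    resource r0 {
      protocol C;
      on node1 { device /dev/drbd0; disk /dev/sda2; address 10.0.0.1:7788; meta-disk internal; }
      on node2 { device /dev/drbd0; disk /dev/sda2; address 10.0.0.2:7788; meta-disk internal; }
    }

    # /etc/ha.d/haresources (one line): node1 normally owns the shared IP,
    # the drbd resource, its filesystem, and samba.
    node1 10.0.0.100 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 samba
    ```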
    • of course, it requires a linux dude to get it all to work.

      I'm a linux dude, I'll do it (I already have for a company's mail servers). But I'm in the UK...
    • If you're going to go the redundant server route, then you should also make sure that each of your servers/UPSes is on a different electrical circuit.
    • I'm setting up drbd and heartbeat on a couple servers that will go into my colo cabinet this month. Everything the parent mentioned is positive (and true), so I'll provide a little balance by mentioning a few of the negatives.

      First, it's not trivial to set up. If configuring one server is 1 unit, configuring redundancy is (1+n)^2 units of work where 'n' is the number of services that need to fail over. Maybe that's a little high; ((1+n)^2)-n might be closer.

      If the machine is internal-only (no public IP),
    • Good suggestions, except for "two identical servers", which is a very bad idea. Identical servers will fail at nearly identical points in time.

      You'll know what I'm talking about when you have a backup server die the day after you switched over to it when the primary failed.

      Diversify at least the mainboards, power supplies, and hard disks.

      A little addition for the UPS's: plug them into different power outlets on preferably different circuit breakers (if unknown, try opposite ends of the room). No need
  • by sydb (176695) * <michael AT wd21 DOT co DOT uk> on Friday June 04, 2004 @04:40PM (#9339116)
    you may benefit from a combination of heartbeat [linux-ha.org] and DRBD [drbd.org], which respectively provide IP address/service failover and a network (no special hardware required) data replication solution.

    If you have appropriate hardware you might also appreciate Stonith [linux-ha.org], which provides forced-shutdown of a failed node (in the case that the failed node won't release the IP address, and hence you would otherwise have problems switching service).

    If you're in the UK then give me a shout and I'll set it up for you (for a reasonable fee)! My contact details are available on my web site.
  • If you're already making regular daily backups, and are only worried about the in-between-backups window, run RAID on your server: the level you want is the one that mirrors your data on two disks (that's RAID 1), not the one that speeds up disk access by splitting your data between disks (RAID 0).
  • by BigGerman (541312) on Friday June 04, 2004 @05:24PM (#9339705)
    .. get people into the habit of running a CVS or Subversion client on "their documents" folders. Tortoise integrates right into Windows Explorer. Advantages: file versioning, the ability to work offline and still sync with the server later, etc.
    If people actually work with plain-text docs, they would love how CVS etc. will merge multiple users' changes.
    Of course you would back up your CVS server, but in case of a crash, chances are that very important file can be found on the desktop of the user who edited it last. Much better than relying on a network drive and then finding it's just not there.
  • Rsync (Score:4, Informative)

    by peterdaly (123554) <{petedaly} {at} {ix.netcom.com}> on Friday June 04, 2004 @06:17PM (#9340198)
    It's already been mentioned a little, but a second server kept up to date with rsync may be a cheap way to go depending on how big your server is. While I don't know how much data you are talking about, I would expect rsync could sync a few times a day easily via a cron job.

    I would suggest springing for an extra $90 to get two gigabit Ethernet cards and a crossover cable, for a dedicated rsync connection that doesn't compete with office traffic.

    Using rsync as a basis, the solution can be made as low-tech and simple, or as automated and complex, as you feel is needed.

    -Pete
    Do woodworking? 50 Router Bits [starvingmind.net]
    • I'm probably off my rocker, but I thought gigabit had some auto-sensing capability (auto-MDIX), which meant that you didn't need crossover cables; you could use any old cable between two cards and it would auto-detect and cross itself over.
  • One of the things that I think people underestimate is the importance of version control. Far too often data loss is due to somebody accidentally deleting a file they spent a week working on. With version control you should be able to revert and get back most of the work.

    The other is redundant hardware. As people point out, RAID etc. only protects the components that are actually redundant; if the controller, motherboard, or some other non-redundant part goes bad, you are screwed. The best solution is two server
  • by Peartree (199737) <idl3mind@gmailNETBSD.com minus bsd> on Friday June 04, 2004 @08:05PM (#9341100) Homepage
    If you are using Windows 2000/2003, an easy redundant file-serving solution is to set up DFS (Distributed File System). Just a tip: don't set up a domain-wide share for a file server that gets a lot of updates. Using DFS like that can create an administrative nightmare (a last-writer-wins situation), and you would be restoring files from tape a lot. You would want to use a domain-wide share if you have a lot of read-only files (like installation files, PDF image archives, etc.) and you need a high-availability solution. Anyhoo, if your first server crashes, temporarily redirect your users to the second server, either via DNS or by just renaming the servers. DFS doesn't replicate printers, so you would have to install a new printer twice, once on each server. Shouldn't be too much of a problem if you only have 15 users.

    If you are using Linux/UNIX/*BSD, you could use Rsync [samba.org]. There was a great article explaining Rsync usage in the June '04 print edition of SysAdmin [sysadminmag.com].
  • If you can't see the business opportunity in small, cheap business-server distro solutions, then you must be blind.

    1. Do the things mentioned in other posts.
    2. Distributed OS.
    3. Offer offsite backups
    4. Profit!
    • Really, you're right, this screams for a Knoppix- or Mepis-type treatment! Start with a live CD with all your common apps preconfigured [Apache, PHP, Perl, Samba, etc.] as well as several options for hardware configuration, and then boot and go.

      Obviously, you'd have to limit your hardware configurations somewhat due to constraints, but that would be a good learning experience for why you needed the hardware and what each redundancy was buying you.

    • Re:If you dont see (Score:3, Informative)

      by hirschma (187820)
      While this _should_ be a great business opportunity, I think you'd find that small businesses pose some interesting challenges:

      * Small business owners are CHEAP. They don't want to spend a nickel on something that isn't an immediate problem.

      * They don't see the value in disaster recovery until they experience the disaster.

      * They are hard to sell and market to.

      * They often use horrible niche-market server based solutions that are Windows only.

      I spent a few weeks talking to various business owners about
      • That was a nice price you offered. However, many people who aren't into computers wouldn't understand you, so you would need to use some FUD, like many companies did with the Y2K bug.
        • Tried the FUD angle. This was shortly after 9/11, and the question was: what would you do if all of your data vanished? If your office was destroyed?

          Of course, the FUD angle is: what would you do if your server was eaten by worms/viruses?

          Again, it is a great idea, but one that would be very problematic to actually sell.

          Jonathan
          • Well, as long as they have your business card in their Rolodex, they won't call some very expensive Oracle guy after the shit hits the fan and end up paying Oracle consultants many times your rate for mere advice.

            This reluctance makes me think of my car insurance. I've had it, and my driving license, for almost ten years and never had an accident, but I still pay about $725 a year on average (it's more expensive in the beginning).

            The chances of me getting into an accident now are almost zero.
  • I'm a sysadmin at a small mission in Uganda. We landed in some hot soup after blowing up the server a couple of times. We now use two junk computers running SuSE 8.0 with Samba, and have rdiff-backup [rdiff-backup.stanford.edu] make a differential backup every night. Works wonderfully! It also allows you to repair a blunder from some days back. Check it out.
  • Similar company size, about 20 employees:
    we have a nice server with five 36GB drives running RAID 5, and another old system with two 120GB IDE drives running software RAID 1 (Red Hat); this machine rsyncs with the main server every hour... It's been fine for 2.5 years now. Lost a drive once in the RAID 5, replaced it, and everything came back up fine...
  • A lot of posts seem to surround getting a large, professional server machine with redundant everything. Those are expensive and still have points of failure.

    I would suggest buying a number of the inexpensive Wal-Mart PCs and clustering them redundantly. Keep spares around for emergencies: switches, NICs, drives, etc.

    This is a more technically complicated environment, because you have to worry about data consistency between computers, but these Wal-Mart PCs are disposable and can work independen
  • Be sure to establish the nightly backups:
    • Automated -- no one needs to press the button
    • Nightly -- no more than a day's work lost. Done at night (or rather, whenever the business is closed), backups are most likely to be self-consistent.
    • Remote (!) -- no tapes to shuffle, nor to lose to the same fire or flood that gets your server. Find a similar office and exchange backups with each other, or pay one of the many commercial providers in the area.
    • Encrypted, so you don't worry about the other guys poking through your data.
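    A minimal sketch of the "automated, nightly" part, assuming plain tar and a destination directory that stands in for the offsite copy (both paths are hypothetical); encryption and the actual remote transfer would be layered on top:

```shell
#!/bin/sh
# nightly_backup SRC DEST -- archive SRC into a date-stamped tarball in DEST.
# In a real setup DEST would be a remote mount, or the staging area for an
# encrypted offsite transfer; run this from cron after hours.
nightly_backup() {
    src=$1
    dest=$2
    stamp=$(date +%Y-%m-%d)
    mkdir -p "$dest" || return 1
    # -C keeps paths in the archive relative, so a restore can go anywhere
    tar czf "$dest/backup-$stamp.tar.gz" -C "$(dirname "$src")" "$(basename "$src")"
}
```

    A matching crontab line (e.g. `0 2 * * * root /usr/local/sbin/nightly-backup.sh`) satisfies the "automated" and "nightly" bullets; the "remote" and "encrypted" ones are about where the tarball goes next.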
  • One thing that everyone seems to be missing is the question of how important the data is to you. If the loss of a server (for an hour/day/etc.) is going to cost you $10,000 (purely an example figure), then you could probably justify putting around $10,000 into a nice top-of-the-line server (you'd still have to skimp on things at that price, but it gives you an idea). If, on the other hand, having the server down for a day or the data loss you experienced costs your company only a couple hu
  • Why not use something designed to run "forever," like a nice old Ultra 10 or any other Sun machine? Unlike the majority of choices in x86 land, these computers are actually made to be servers that can't afford to not function.
  • We've (er, I've) struggled with how best to handle offsite disaster recovery (e.g. the building goes up in smoke, or "bad guys" break in and steal everything). Overall storage of about 40 gigs in a four-person business, with me as the CIO/CEO/etc.

    Initially, we mirrored a Snap drive to a remote site via rsync, but dropped that when we downsized. We've used Backup Exec to a 30GB tape, but that's finicky -- tapes seem to go south for no discernible reason. Currently experimenting with DVD, but it takes lots of di

  • I actually found a company that specializes in a Linux box that would be ideal for this problem.

    http://www.pugservers.com/
