
Syncing Options for Computer Lab Machines? 60

sirfunk asks: "I'm going to begin helping maintain the computer labs around my university campus, and I was wondering what hints and tips the Slashdot community had for maintaining computer lab networks. We need a solution where we can keep a remote image on a server, and the computers will update to it on bootup. We also need them to be able to update even if Windows is severely messed up (so if Windows dies, just reboot it). I know there are commercial solutions like Deep Freeze, but I was hoping someone knew of a creative open-source alternative. I'd love it if we could run these as dumb terminals with *nix; however, that won't be an option for the general public. One idea I had was to make the machines boot into a Linux partition that would rsync a FAT filesystem (the update) and then reboot to that FAT filesystem. Getting it to boot into Linux first and then Windows next might be tricky. I would love to hear everyone's ideas on this topic. If you have any ideas that would run cross-platform (Mac/Windows), that would be great, too."
This discussion has been archived. No new comments can be posted.
  • For far less than the price of a real desktop, you can get a Windows Thin Client [windowsfordevices.com] that will work and play well with your NT servers.

    For a lab, you may even be able to get volume pricing.
    • $350 for a thin client plus $200 for the RDP license (1 CAL and 1 RDP CAL). Plus they still modify files on the computer. Now it just takes one talented induhvidual to screw up the server.
    • Even cheaper -- install redhat or some other linux and have it start rdesktop as the window manager -- you'll get a windows login every time you hit ctrl-alt-backspace ...
  • Install the OS - then lock the door. Problem solved.
  • Re-Imaging (Score:3, Informative)

    by NeonSpirit ( 530024 ) <`mjhodge' `at' `gmail.com'> on Tuesday October 28, 2003 @07:20AM (#7327038) Homepage
    If I understand the situation correctly, you want to re-image each machine on boot. I have looked at this, and a complete XP Pro image on a gigabit network takes anything from 20-45 mins. This is using a product called Altiris Deployment Server [altiris.com], which uses PXE under the covers. If this is acceptable, then I'm sure you could do your own PXE solution with a Linux DHCP and TFTP server. You can download a free 30-day eval to see how it works and "clone" the procedure.
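
    If you do roll your own PXE setup, the DHCP side is mostly a matter of pointing clients at a TFTP server. A minimal ISC dhcpd.conf fragment might look like this (the addresses and boot filename are hypothetical):

    ```
    subnet 10.0.0.0 netmask 255.255.255.0 {
      range 10.0.0.100 10.0.0.200;
      next-server 10.0.0.1;        # TFTP server holding the boot files
      filename "pxelinux.0";       # PXE bootloader fetched over TFTP
    }
    ```

    The TFTP server then only needs to serve pxelinux.0 (or a similar PXE bootloader) plus whatever kernel and imaging environment you point it at.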

  • It's morning in Japan and it seems like America is still asleep and in a good mood, so I'll search for you...
    I thought I read once that ghost creates its own partition and then boots to that and downloads the image. So booting a minimal install of linux mightn't be much different...so.... Ghost for Unix [feyrer.de]
    Something called system imager [systemimager.org]
    A thread about ghost alternatives for linux [cantech.net.au]
    cluster cloner [sourceforge.net]
    tired of a href'n:
  • Altiris (Score:5, Informative)

    by MImeKillEr ( 445828 ) on Tuesday October 28, 2003 @08:03AM (#7327138) Homepage Journal
    When I worked in support (last gig was supporting internal classrooms) we used Altiris LabExpert. They've changed the name to Application Management [altiris.com], but this may be what you want. It's not open-source, but comparing this program's prices to the other similar ones on the market, we saved a TON of money (one vendor wanted nearly $150K for all the computers we were going to use this on. I think we spent $7K at each site for a total of $28K)

    It has server and client modules. The clients sync with the server on reboot. If there are jobs in the queue, the server pushes the jobs, they're applied, and the machine is rebooted.

    To create jobs, you make a baseline of an OS, install the application, and then run the baseline app again. The application examines the entire disk as well as the registry and notes changes. You build a package containing just the changes.

    You can even turn the packages into self-extracting .EXEs, burn to CD and deliver that way.

  • easyeverything (Score:1, Interesting)

    by Anonymous Coward
    The UK's easyEverything [easyeverything.com] (or easyInternetCafe, or whatever they call themselves) runs large internet cafes (up to 1000 PCs -- I think the one in Times Square in NY is the biggest), and every time a user logs out of a PC it reboots and re-images itself. They use a commercial product [rembo.com] for that, though. And it's Windows 95 (!) so I'm sure they have a pretty tiny image.
  • Don't Knock It (Score:5, Informative)

    by yancey ( 136972 ) on Tuesday October 28, 2003 @08:25AM (#7327194)

    It seems like you are pro-open-source, but don't dismiss the commercial products completely. Novell's ZENworks for Desktops (ZfD) product is quite simply amazing! It also happens to do exactly what you're talking about.

    Does it require Novell servers? No, it does not. You can read more from the ZENworks documentation [novell.com] at Novell's website. Read the ZENworks 4 docs. ZENworks 6 is a bundle of ZENworks 4 for Desktops and ZENworks for Servers and ZENworks for Handhelds.

    I once read about a university (I think in the UK) that managed 30,000 Windows desktops with only six people! Also, the largest companies on the planet tend to favor ZENworks for Desktops over SMS [ntbugtraq.com] for deploying patches.

    My computer support group uses ZfD to manage about 1,500 computers whose configurations vary widely, from P2-400s to P4-3.06 GHz boxes running anything from Win98 to WinXP. About 400 machines are in labs, but the rest are faculty or staff desktops. ZfD is extremely flexible. ZfD has an imaging solution, but is not limited to that.

    ZfD imaging boots up a Linux agent first, either from the hard disk or by booting it over the network from the ZfD server or from a bootable CD-ROM. This agent checks Novell eDirectory to see what it should do (store an image of this workstation on the server, install an image onto the workstation, or other tasks). Once the image has been transferred, the computer reboots into Windows. Each time the computer boots, ZfD will check to see if it should perform an imaging task; if not, then it just boots Windows. ZfD can also add software to the base image on-the-fly!

    Alternately, you can automate an install of Windows (just the base OS, with patches). Then install the ZfD agent and let it install all the other software for you. This solution is the ultimate in flexibility, but requires you to have a pretty intimate knowledge of how Windows and ZENworks function, like what registry entries are dangerous to deploy to other workstations.

    A combination of imaging and software deployment is an excellent way to get a workstation installed quickly and have a large selection of software available. You can deploy a small image (Windows, ZfD agent) and allow the ZfD agent to install other software as needed by the users. For example, ZfD can put items on the Start menu and when the user clicks on that item for the first time, ZfD installs the software. Rarely does one need to reboot.

    ZENworks is probably the best solution available for managing large numbers of Windows desktops. It is powerful and flexible. Like many powerful tools, it is also a double-edged sword. It can easily deploy a patch and fix thousands of workstations, but if you deploy the wrong registry entry, you can just as easily break thousands of workstations. This is why you have to know Windows inside and out.

    Finally, Novell has really good discounts for education. If you don't already have it available to you, check into it.

  • Unison (Score:3, Informative)

    by jungd ( 223367 ) * on Tuesday October 28, 2003 @09:28AM (#7327452)

    Check out Unison [upenn.edu]. Not sure if it is exactly what you want, but it is a nice cross-platform filesystem sync tool I use.

    • yeah... the problem using this for his solution is that if windows gets to acting strange, unison (is there a windows port?) may not work properly. Also, I believe windows is pretty strict about overwriting system-critical files while they are in use, so unison would fail for a full system sync after booting into windows.

      also, unison can be a little tedious when trying to merge thousands of files... without an interactive session it is verrry difficult :(
  • http://www.infrastructures.org
    has documentation on the theory behind keeping multiple systems up to date. Most of their work has been Unix oriented, but the concepts they have developed are broadly applicable.

  • why image? (Score:3, Interesting)

    by gizmo_mathboy ( 43426 ) on Tuesday October 28, 2003 @09:38AM (#7327516)
    Actually, how "close" are the images, network-wise? As another has noted, it will take a long time to do the image.

    In my labs we just deploy the machine and update the software remotely as needed. Sure, we should redeploy once or twice a year to clear out the cruft that builds up over a semester. But I think it beats re-imaging on every boot.

    A good question is how much are you imaging? That could save some time.

    Of course, that's just my opinion I could be wrong.
  • I'd love if we could run these as dumb terminals with *nix, however that won't be an option for the general public.

    Why not? What are these machines doing that makes Windows absolutely irreplaceable? Decide what apps will be running on these machines. Since they're university computers, they probably won't be running games. Exchanging Office documents? If everybody in the university uses OpenOffice, that limits the requirements for MS Office to out-of-uni work. A few, limited-access, machines could be

  • by pbulteel73 ( 559845 ) on Tuesday October 28, 2003 @10:05AM (#7327672) Homepage
    You could always have the image saved on a 2nd hidden partition and recover from there. That would make it a LOT faster than trying to go through the network. The LG Internet fridge recovers its Win98 partition and resets itself by doing this. (No, I don't have one -- they're $8000.)

    I don't know what tools they use for this, but dd should work. This is also how some companies used to store the recovery information for their desktops. If you used your rescue CD, it would recover from that hidden partition.

    Anyway, just a thought...

    • That's what we do at URI. Most of our lab workstations had 10GB local drives, and our image was about 2GB in Ghost high-compression. We stored the image on a second partition, and each morning at 7AM they'd wake up and, if their time was right, they'd reimage.

      Pushing updates was hard because they wouldn't let me have any server access. I had to do it manually. If I DID have server access though I'd store the image on the server, have a cronjob MD5 it, and the workstations would compare MD5sums with the serve
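
      The checksum-comparison idea above can be sketched in a few lines of shell (the paths and server mount point are hypothetical):

      ```shell
      #!/bin/sh
      # Decide whether the workstation's cached image is stale by comparing
      # its checksum file against the one published by the server.
      # (A server-side cron job would regenerate the published file with:
      #   md5sum lab.img > lab.img.md5)

      needs_update() {
          # $1: checksum file for the locally applied image
          # $2: checksum file exported by the server
          # Succeeds (returns 0) when the checksums differ or can't be read.
          ! cmp -s "$1" "$2"
      }

      if needs_update /var/lib/lab/lab.img.md5 /mnt/server/lab.img.md5; then
          echo "image changed: re-image on next boot"
      fi
      ```

      On a match the machine would just boot Windows; on a mismatch it would schedule the re-image partition for the next boot.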
  • by cloudmaster ( 10662 ) on Tuesday October 28, 2003 @10:11AM (#7327700) Homepage Journal
    You can install rsync for windows, which is easily done using cygwin. Write a little shell script (since you're pro *nix) and set it up to run on boot. That oughtta be fairly easy.

    When I was working computer labs, my preferred solution was linux + vmware, BTW. The machines ran linux (with everything mounted read-only - I'd netboot if I did it again), started up X, and then fired up a VMWare instance that ran full screen. The virtual disk image was on a remote machine (though it could just as easily be pushed to the client machines when it was updated), and was opened read-only on the clients. If anything happened, they'd just "restart", which just threw away any local changes that they'd made. It was great for the net admin classes, as we could give the users full control of the windows machine without worry of them actually screwing anything up. Also, you can update the install at any time by simply opening the disk image with "save changes" enabled. If you set the file system permissions so that normal users can't write to the image even if they do manage to change the vmware settings, you're pretty well set.

    Granted, it costs some money, but it works real well - if you don't need direct hardware access to devices not supported by the host OS. That's the VMWare solution's catch - not all hardware is perfectly supported by linux, and using Win32 as a host is rather pointless. :(
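
    For what it's worth, the rsync-on-boot part of the first suggestion can be very small. A sketch (the server name, module, and paths are hypothetical; assumes rsync from cygwin or a native port):

    ```shell
    #!/bin/sh
    # Mirror the master software tree onto the local disk at boot.
    # Usage: mirror-image.sh <source> <dest>
    # e.g.  mirror-image.sh rsync://labserver/labimage/ /cygdrive/c/labimage/
    # ("labserver" and both paths are hypothetical.)

    mirror_image() {
        # -a preserves permissions/times; --delete makes the destination
        # an exact mirror, wiping anything students left behind.
        rsync -a --delete "$1" "$2"
    }

    if [ $# -eq 2 ]; then
        mirror_image "$1" "$2"
    fi
    ```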
    • There's a native rsync for Windows which doesn't need cygwin. (Google for it.)
    • rsync is great but it looks like he is going for more a of a PXE or NetBoot solution as pointed out by his post:

      computers will update to that on bootup and be able to update, even if Windows is severely messed up

      rsync is only usable from userland, but I suppose if you had no other solution, you could :

      1) install an initrd on the box,
      2) boot into that,
      3) rsync the image to a different partition on disk, and
      4) pivot_root into that partition.
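
      Roughly, steps 1-4 inside the initrd's init script might look like this (a non-runnable sketch; the device names and rsync source are hypothetical, and a real initrd would also need to load disk and network drivers first):

      ```
      #!/bin/sh
      # Sketch of an initrd /linuxrc implementing steps 1-4 above.
      mount -t proc proc /proc
      mount /dev/hda3 /mnt/image          # partition holding the OS image

      # step 3: pull the current master image down from the server
      rsync -a --delete rsync://labserver/image/ /mnt/image/

      # step 4: make the freshly synced partition the new root
      # (the old-root directory must exist under the new root)
      cd /mnt/image
      pivot_root . old-root
      exec chroot . /sbin/init
      ```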

  • Have you looked at Partition Image [partimage.org]? The NTFS support is still 'experimental', but it can load images over a network from a server. I don't know if it can boot them or not, but it's open-source, so I'm sure you can get some kind of help from the developers toward adding that sort of capability yourself. Then, you'd just need to make a set of bootable CDs that run the partimage client and automatically rewrite the hard drive with the correct image. Shoot -- if you put 2GB of RAM in them, would it be possible
    • Right now, I have a "partimage" solution we use to reinstall our PC rooms (115 PC's right now) in a similar way to what's asked for in the originating post.

      Complete picture:
      + PC boots, loads linux from network (PXE boot)
      + Linux does an fdisk, start partimage and restores original image

      Overnight reinstalls are necessary 'cause we want to give students total freedom on the machine

      Two problems:
      1) found no way to boot windows from a running Linux so far. Temporary solution is a reboot, having the DHCP server
  • Windows 9x gives the user root. Anything you do can be bypassed since anyone can write the partition table to wipe out linux and change the default boot partition to the windows one. The only way I know of to get around this is booting off a chip in your network card; your server can either load linux, or tell it to boot off the windows partition, depending on some scripts you set up. Due to the time it takes to re-install, I recommend doing this nightly; if you notice a machine that someone has screwed up

    • Where I am we have Win98 machines for the Users. (Not my choice. Setup predates my employment here.)

      Yes it's a commercial solution, but we use DeepFreeze to keep the machines locked down. It's very very hard to screw up one of the machines here, and if the worst comes to the worst we can re-clone a machine as a last resort.

      The only drawback to this is that if it isn't set up for remote administration, it's a real bugger to install any legitimate upgrades. So minor changes (like adding one shortcut to the
  • - 2 equal partitions on clients
    - use cygwin's rsync to auto update the passive partition
    - move folder "os.old" to "os" when rsync complete
    - round robin boot between the partitions

    this may be a terrible idea, have never tried it
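
    Untested as the parent says, but the partition flip-flop could be sketched like this (the partition labels and state file are hypothetical):

    ```shell
    #!/bin/sh
    # Round-robin sketch: boot from one of two identical OS partitions
    # while the passive one gets re-synced, then swap roles next boot.
    # Labels and the state file are hypothetical.

    other_partition() {
        # Given the active partition label, echo the passive one.
        if [ "$1" = "os-a" ]; then echo "os-b"; else echo "os-a"; fi
    }

    ACTIVE=$(cat /etc/active-partition 2>/dev/null || echo os-a)
    PASSIVE=$(other_partition "$ACTIVE")

    # Here you would rsync the master image onto $PASSIVE, move
    # "os.old" to "os" when the sync completes, and point the boot
    # loader at $PASSIVE for the next boot.
    echo "active=$ACTIVE passive=$PASSIVE"
    ```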
  • Here at Ohio State, we use a free program called RevRdist [purdue.edu] to keep the Mac machines up to date.
  • At the Uni I used to work at we used a product called Rembo [rembo.com] and it worked well. It uses multicast to reimage (amongst other useful things), so reimaging an entire lab doesn't bring your network to its knees.
  • Make sure you contact whoever handles your networking so you can properly configure multicast on whatever app you use (Ghost, etc.). If not, you're almost bound to kill your upstream router, especially if it's older. It happens at the U where I work fairly frequently. CPU load goes high on a certain router, you check it, and it's just flooded with multicast from incorrectly configured applications.
  • If windows is absolutely irreplaceable, I found the easiest solution was to buy a Linux VMWare license for each machine. Install Win32 in the VMWare environment. Save a snapshot (which is just a large regular Linux file). Copy the snapshot to a server. Restoring the Windows environment is as simple as restarting VMWare from the snapshot. Costs about $300 per machine.
  • SystemImager v3.0.1 (Score:2, Informative)

    by bastion ( 444000 )

    SystemImager makes it easy to do automated installs (clones), software distribution, content or data distribution, configuration changes, and operating system updates to your network of Linux machines. You can even update from one Linux release version to another!

    It can also be used to ensure safe production deployments. By saving your current production image before updating to your new production image, you have a highly reliable contingency mechanism. If the new production e
  • Don't do this (Score:3, Interesting)

    by CompVisGuy ( 587118 ) on Tuesday October 28, 2003 @12:16PM (#7328894)
    When I was an undergrad, we had machines that were managed like this.

    There were two different setups, and I can't tell you what software they used to achieve them, but I can tell you what happened from a user's perspective.

    In the first setup (a small lab -- about 20 machines), the machines were set up to automatically replace their installation of Windows once a week at a "convenient time". The problem was, this time was convenient for the sysadmins rather than the users. So, when working on a project out of scheduled lab times, I would often have to wait about 30 mins to start work while the machine got a fresh copy of Windows. This was even worse if there was more than one person trying to use the machine, as the network would slow down.

    The obvious solution to the above problem is to change the time to something like 3am. However, in these days of devastating Windows worms, I don't think it's an option to install a new image once a week. Also, many university computer facilities are open 24/7; you often get students who like to work antisocial hours, so choosing a convenient time is pretty difficult.

    The second setup was a more campus-wide solution. I'm not sure how they achieved it, but it seemed that each machine maintained a log of which files were changed while a particular user was logged on. When they logged off, the machine simply returned the disk to the state it had been in before.

    There are many problems with doing what you suggest:

    + User ignorance: naive users are used to saving their stuff to C:. If you then overwrite the disk, they will complain about your policy eating their homework.

    + If you have one 'master' disk image, how do you manage the different drivers required for different hardware? It's impossible to maintain a large number of systems with exactly the same hardware (when you consider component failures etc).

    I would suggest the following: Use the permissions and management facilities of the OS to prevent users installing their own software or writing to the C: drive etc. Really lock them down. Give each user networked disk space which only they can write to. Make sure that you have an automated way to roll out patches, and keep on top of things. Make sure your virus protection is tip-top. Try to reduce the possibility of students infecting systems via removable media (I'd outlaw floppy disks, but students still use these!).

    Further, for each "group" who need to work together (e.g. small groups of final-year students who are working on a particular project), provide a "transfer" area which they can all read and write. For users who need to install their own software (e.g. computer science researchers), establish a small team of sysadmins at their location and let them do their own thing -- just make sure they are sufficiently safe behind a firewall so they can't easily shoot themselves in the foot and your managed main network is safe from any of their screw-ups.
    • The second setup was a more campus-wide solution. I'm not sure how they achieved it, but it seemed that each machine maintained a log of which files were changed while a particular user was logged on. When they logged off, the machine simply returned the disk to the state it had been in before.

      Perhaps you're referring to a Centurion Lock [centuriontech.com]?
    • regarding groups, it would be even nicer if there was a way for students to create groups themselves, so they don't need to bother the sysadmin, or wait for the sysadmin.

      since group projects are pretty big in undergrad these days, it'd be nice if the students could easily have group storage without having to do it on their own machines (since school-run servers tend to be more reliable, easier to connect to, and on a faster connection than student-run ones)

    • Try to reduce the possibility of students infecting systems via removable media (I'd outlaw floppy disks, but students still use these!).

      I work at a college, too, and I can't tell you how many times students have walked in with a floppy disk (sometimes physically damaged, ie cracked in half) with the only copy of their paper that's due in an hour.

      We've had pretty good luck using BadCopy (from JufSoft) to recover the disks, but sometimes they are too far gone, and students can't understand how it could

  • Set up Lilo with two targets: Linux and Winders.

    Make Linux the default target to boot to.

    When you're inside of Linux, and you want to set it so it boots Windows for the next boot, and only the next boot, then you do a

    lilo -R windows ; shutdown -r now

    The next boot will be into Windows. The boot after that will be back into Linux.

    Seems like you could set things up very easily to do what you want.

  • i just set up a 29-node XP lab using PowerQuest's Deploy Center, which is basically DriveImage on steroids: create an image of the OS using boot disks, saving the image to a net server; make a boot disk hardcoded to grab and download the image from the server; run on all clients. the problem we ran into is that the network here is a 100Mbit full-duplex fiber backbone with 10Mbit full-duplex UNSWITCHED horizontal runs. the lab workroom was wired into the MDF directly, but all boxes were on a 10Mbit line; a 6.2GB image took 9 hours when deplo
  • Check PC-Rdist [pyzzo.com] out. We used them in about five labs to sync about 200-300 PCs running from Windows 95 to 98 to 2000 to XP. It's really fast, works extremely well, and has a lot of options that will let you customize how it runs. For example, if they're computers for students and students are prone to accidentally leaving their files on the PC, you can set it so when it runs it will save all .DOC files less than 1 MB (or whatever size) in a particular folder of your choosing, and after a week of being there
  • OS X Server (Score:4, Interesting)

    by Johnny Mnemonic ( 176043 ) <mdinsmore@@@gmail...com> on Tuesday October 28, 2003 @01:22PM (#7329601) Homepage Journal

    This probably won't be able to apply to you, but it's worth knowing: Mac OS X Server can do this out of the box (to Mac clients). Apple calls it "NetBoot", and it's been available since at least 2000; I believe the tech came from NeXT originally.

    Under OS 9 and 10.3 it allows for clients-without-drives as they get all their OS etc from the server down the wire (10.1, .2 required a HD, but only for swap), which is useful in some secure installations. Read more about it here [apple.com].
    • I'm pro-apple and ex-next, but netbooting is hardly NeXT or Apple specific. Just about any unix variant will netboot. I've netbooted nextstep, solaris, and linux.

      Just google for netboot bootp and tftp.

      Can't see where anyone netboots windows, though...
      • Any Mac will smoothly netboot and it's easy to configure. That is not my impression of x86 *nix. To be fair, it's not the OS -- it's OpenFirmware vs. PC BIOS (both for variety and because the PC BIOS is often less powerful)

      • I stand corrected. Interesting, though, that one can netboot OS 9 clients--I guess it's because you've got a Unix Server doing the heavy lifting.
  • Some network cards support using a Boot Prom, you could boot off a server and copy an image down to the client at that point.
    Not so hot in terms of network traffic at 8am, and god-forbid your user saves locally and the machine locks, or your server gets compromised (shudder) but maybe an option if you can get around those hurdles.
    Just a suggestion anyways. :)
  • by korpiq ( 8532 ) <-.@korpiq[ ]i.fi ['.ik' in gap]> on Tuesday October 28, 2003 @02:28PM (#7330343) Homepage
    I'd put something like this into a script (/etc/init.d/restore_windows):

        lilo -R windows
        cd $WINDIR
        shutdown -r now

    Is that too simplistic? man lilo for the -R switch.
  • Just break down and pay for Norton Ghost or a similar program. That way when it doesn't work, you can make them fix it.
    • Norton Ghost may be one of the better products on the market today, but it is also the cause of many headaches in my job as a lab manager. Sometimes it works great, and other times it refuses to work. Basically, the moral of this story is that just because you spend money on it does not mean it will always work. However, for restoring the condition of the system on every boot, we are using DeepFreeze. It is one of the best investments we ever made.
  • Already did this. (Score:3, Informative)

    by transiit ( 33489 ) on Tuesday October 28, 2003 @04:20PM (#7331670) Homepage Journal
    I helped a guy set up this exact FAT32 + rsync setup.

    We used Smart Boot Manager [sourceforge.net] and set up scheduled reboots.

    Works like a charm. Note that it not only cleans up the machines at the end of each day, it will also allow you to patch your master image and push that out to the network. (even a one-day lag is still faster than going from machine to machine patching or ghosting)

    Watch out for oddities such as the Daylight to Standard time switch, though.

    • Mod the parent up!

      This is exactly the perfect solution (at least this solves my problem perfectly).
    • Watch out for oddities such as the Daylight to Standard time switch, though.

      This is a bitch with DeepFreeze too. (Though a worthwhile one compared to the havoc Students/Tutors can cause on the PCs).
      I was just lucky this week that it coincided with both a no-lessons week and the latest Virus Signature update.

      • Here is what we do about the DST problem:

        1) Machines are set to completely ignore DST updates
        2) The samba login scripts has the time sync upon log in, every time.

        That keeps the clocks right, and the dialogs down.
  • Check out Frisbee for fast disk imaging.

    From the abstract:

    Both researchers and operators of production systems are frequently faced with the need to manipulate entire disk images. Convenient and fast tools for saving, transferring, and installing entire disk images make disaster recovery, operating system installation, and many other tasks significantly easier. In a research environment, making such tools available to users greatly encourages experimentation.

    We present Frisbee, a system for saving, tran

  • pc-rdist (Score:2, Informative)

    by tangsc ( 161284 )
    We did such a thing to manage 3 computer labs for the college of engineering at a large university. (They deployed it to a couple more labs after I graduated). We used a program called PCRdist. (http://www.pyzzo.com/). It is based off a unix app called rdist. It was great. We used it to manage the different desktops, deploy applications, etc.

    A reply to someone's comment about work space: when you set up applications, just make sure their default save location is in such a directory (also, use NTFS to enfo
  • I've done it.

    Google for JO.SYS and download the free one some guy wrote. Configure JO.SYS to boot to the hard drive after a 1 second delay. Google for and download int19.com (it makes a PC warm reboot). Put both files on a floppy. Rename JO.SYS to JO.BAK. Configure autoexec.bat on the floppy to do your thing (re-image with Ghost, whatever) and then rename JO.BAK to JO.SYS and then call int19.com.

    Finally, configure some kind of startup script on the hard drive to rename a:\JO.SYS to a:\JO.BAK.

    Now, every t
  • Use grub with a UMSDOS boot partition. Have the Windows image copy a grub.conf into place when it boots, such that the default boot partition is the Linux UMSDOS partition. Have the Linux partition copy a grub.conf into place when it boots, such that the default boot partition is the Windows partition.
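
    Concretely, that flip-flop amounts to keeping two grub.conf variants around and copying the right one into place at each boot. The variant the Linux side installs might look like this (grub-legacy syntax; the partition layout and file name are hypothetical):

    ```
    # grub.conf.boot-windows -- copied into place by the Linux side,
    # so the *next* boot defaults to Windows
    default=0
    timeout=2

    title Windows
            rootnoverify (hd0,0)
            chainloader +1

    title Linux (UMSDOS maintenance partition)
            root (hd0,1)
            kernel /boot/vmlinuz root=/dev/hda2
    ```

    The Windows side would keep a mirror-image grub.conf.boot-linux (default pointing at the Linux entry) and copy it over grub.conf from a startup script; since UMSDOS lives on a FAT filesystem, Windows can write to it directly.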
  • What I've seen... (Score:2, Interesting)

    by DaracMarjal ( 513394 )
    The University of York used to do something like this. The computers would network-boot to a small menu system (probably in DOS or something). You could either choose to boot Windows (whereby the hard disk was chainloaded) or rebuild the PC.

    Rebuilding the PC downloaded an image from a central server and re-imaged C:

    If, however, the menu system noticed the time was after 1:00am and the PC hadn't been rebuilt for 24 hours, it would force a rebuild, cleaning up any leftover problems.

    The system was enforced by removin
  • There are two options. Norton Ghost works well for images but takes about 20 minutes and doesn't work on Mac OS. Not sure about Unix/Linux.

    However, there is hardware out there that will remove any changes to the system on next boot, even if the HD is formatted. The solution I know of is called ZeroCard. Set the computer up, install the PCI card, then set it up with a password. When you want to change it, do so, restart, and boot either off disk or hold down a key and enter the password. (I'm a bit sketchy on th
  • Google 'Frisbee' from the Univ. of Utah. Complete re-imaging in about 30 seconds! It was originally developed to rebuild a cluster used to test network protocols.
