Syncing Options for Computer Lab Machines?
sirfunk asks: "I'm going to begin helping maintain the computer labs around my university campus, and I was wondering what hints and tips the Slashdot community has for maintaining computer lab networks. We need a solution where we can keep a remote image on a server, and the computers will update to that on bootup. We also need them to be able to update even if Windows is severely messed up (so if Windows dies, just reboot it). I know there are commercial solutions like Deep Freeze, but I was hoping someone knew of a creative Open Source alternative. I'd love it if we could run these as dumb terminals with *nix; however, that won't be an option for the general public. One idea I had was to make the machines boot into a Linux partition that would rsync a FAT filesystem (the update) and then reboot to that FAT filesystem. Getting it to boot into Linux first and then into Windows might be tricky. I would love to hear everyone's ideas on this topic. If you have any ideas that would run cross-platform (Mac/Windows), that would be great, too."
You only need RDP terminals (Score:2, Interesting)
For a lab, you may even be able to get volume pricing.
Re:You only need RDP terminals (Score:3, Informative)
Re:You only need RDP terminals (Score:3, Insightful)
The problem is the users... (Score:1)
Re-Imaging (Score:3, Informative)
Re:No way in hell. (Score:1)
some goooooogling (Score:1)
I thought I read once that Ghost creates its own partition, boots to that, and downloads the image. So booting a minimal install of Linux mightn't be much different... Ghost for Unix [feyrer.de]
Something called system imager [systemimager.org]
A thread about ghost alternatives for linux [cantech.net.au]
cluster cloner [sourceforge.net]
tired of a href'n:
http://www.microwerks.net/~hugo/about/about.html
http://www.jpartner.com/documentati
Altiris (Score:5, Informative)
It has server and client modules. The clients sync with the server on reboot. If there are jobs in the queue, the server pushes them, they're applied, and the machine reboots.
To create jobs, you make a baseline of an OS, install the application, and then run the baseline app again. The application examines the entire disk as well as the registry and notes changes. You build a package containing just the changes.
You can even turn the packages into self-extracting
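For flavor, here's a minimal shell sketch of that baseline-diff idea; it's not the Altiris tool itself (which also diffs the registry), and the paths are made up:
# take a "baseline" timestamp, install the app, then package
# only the files that changed (registry diffing not shown)
touch /tmp/baseline.marker
# ... install the application here ...
find / -xdev -type f -newer /tmp/baseline.marker > /tmp/changed.txt
tar czf app-package.tar.gz -T /tmp/changed.txt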
easyeverything (Score:1, Interesting)
Don't Knock It (Score:5, Informative)
It seems like you are pro-open-source, but don't dismiss the commercial products completely. Novell's ZENworks for Desktops (ZfD) product is quite simply amazing! It also happens to do exactly what you're talking about.
Does it require Novell servers? No, it does not. You can read more from the ZENworks documentation [novell.com] at Novell's website. Read the ZENworks 4 docs. ZENworks 6 is a bundle of ZENworks 4 for Desktops and ZENworks for Servers and ZENworks for Handhelds.
I once read about a university (I think in the UK) that managed 30,000 Windows desktops with only six people! Also, the largest companies on the planet tend to favor ZENworks for Desktops over SMS [ntbugtraq.com] for deploying patches.
My computer support group uses ZfD to manage about 1,500 computers whose configurations vary widely, from P2-400s to P4-3.06 GHz boxes running anything from Win98 to WinXP. About 400 machines are in labs; the rest are faculty or staff desktops. ZfD is extremely flexible: it has an imaging solution, but it is not limited to that.
ZfD imaging boots up a Linux agent first, either from the hard disk or by booting it over the network from the ZfD server or from a bootable CD-ROM. This agent checks Novell eDirectory to see what it should do (store an image of this workstation on the server, install an image onto the workstation, or other tasks). Once the image has been transferred, the computer reboots into Windows. Each time the computer boots, ZfD will check to see if it should perform an imaging task; if not, then it just boots Windows. ZfD can also add software to the base image on-the-fly!
Alternately, you can automate an install of Windows (just the base OS, with patches), then install the ZfD agent and let it install all the other software for you. This solution is the ultimate in flexibility, but it requires a pretty intimate knowledge of how Windows and ZENworks function, like which registry entries are dangerous to deploy to other workstations.
A combination of imaging and software deployment is an excellent way to get a workstation installed quickly and have a large selection of software available. You can deploy a small image (Windows, ZfD agent) and allow the ZfD agent to install other software as needed by the users. For example, ZfD can put items on the Start menu and when the user clicks on that item for the first time, ZfD installs the software. Rarely does one need to reboot.
ZENworks is probably the best solution available for managing large numbers of Windows desktops. It is powerful and flexible. Like many powerful tools, it is also a double-edged sword. It can easily deploy a patch and fix thousands of workstations, but if you deploy the wrong registry entry, you can just as easily break thousands of workstations. This is why you have to know Windows inside and out.
Finally, Novell has really good discounts for education. If you don't already have it available to you, check into it.
Unison (Score:3, Informative)
Check out Unison [upenn.edu]. Not sure if it is exactly what you want, but it is a nice cross-platform filesystem sync tool I use.
Re:Unison (Score:2)
also, unison can be a little tedious when trying to merge thousands of files... without an interactive session it is verrry difficult
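If you do script it, Unison's batch mode skips the interactive prompts; a minimal sketch, with a hypothetical host and paths:
# sync the lab image tree against the server without prompting
unison /var/lab-image ssh://imageserver//srv/lab-image -batch -times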
infrastructures.org (Score:1)
has documentation on the theory behind keeping multiple systems up to date. Most of their work has been Unix oriented, but the concepts they have developed are broadly applicable.
-ghostis
why image? (Score:3, Interesting)
In my labs we just deploy the machines and update the software remotely as needed. Sure, we should redeploy once or twice a year to clear out the cruft that builds up over a semester. But I think it beats re-imaging on every boot.
A good question is how much you're imaging; trimming the image down could save some time.
Of course, that's just my opinion I could be wrong.
What are these machines doing? (Score:2, Interesting)
Why not? What are these machines doing that makes Windows absolutely irreplaceable? Decide what apps will be running on these machines. Since they're university computers, they probably won't be running games. Exchanging Office documents? If everybody in the university uses OpenOffice, that limits the requirement for MS Office to out-of-uni work. A few, limited-access, machines could be
How about an image on a 2nd partition? (Score:4, Informative)
I don't know what tools they use for this, but dd should work. This is also how some companies used to keep the recovery information for their desktops: if you used your rescue CD, it would recover from that hidden partition.
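A minimal sketch of that, with purely hypothetical device names (hda1 = Windows, hda2 = hidden image partition); note dd copies the raw partition, so the two must be the same size:
# blast the saved image back over the Windows partition, then reboot
dd if=/dev/hda2 of=/dev/hda1 bs=1M
reboot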
Anyway, just a thought...
-P
Re:How about an image on a 2nd partition? (Score:2)
Pushing updates was hard because they wouldn't let me have any server access; I had to do it manually. If I DID have server access, though, I'd store the image on the server, have a cronjob MD5 it, and the workstations would compare MD5sums with the server.
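Something like this sketch, say (the hostname, paths, and rsync module are all invented):
# server-side crontab entry: checksum the master image nightly
0 3 * * * md5sum /srv/images/lab.img > /srv/images/lab.img.md5
# client-side boot script: re-sync only when the checksum differs
srv=$(wget -qO- http://imageserver/images/lab.img.md5 | awk '{print $1}')
loc=$(md5sum /cache/lab.img | awk '{print $1}')
[ "$srv" = "$loc" ] || rsync -a imageserver::images/lab.img /cache/lab.img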
rsync doesn't need *nix (Score:4, Informative)
When I was working computer labs, my preferred solution was linux + vmware, BTW. The machines ran linux (with everything mounted read-only - I'd netboot if I did it again), started up X, and then fired up a VMWare instance that ran full screen. The virtual disk image was on a remote machine (though it could just as easily be pushed to the client machines when it was updated), and was opened read-only on the clients. If anything happened, they'd just "restart", which just threw away any local changes that they'd made. It was great for the net admin classes, as we could give the users full control of the windows machine without worry of them actually screwing anything up. Also, you can update the install at any time by simply opening the disk image with "save changes" enabled. If you set the file system permissions so that normal users can't write to the image even if they do manage to change the vmware settings, you're pretty well set.
Granted, it costs some money, but it works real well - if you don't need direct hardware access to devices not supported by the host OS. That's the VMWare solution's catch - not all hardware is perfectly supported by linux, and using Win32 as a host is rather pointless.
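The "throw away changes on restart" part comes from VMware's non-persistent disk mode; the exact .vmx key below is from memory, so treat it as an assumption and check your docs, and the paths are hypothetical:
# discard all guest disk writes at power-off, and make sure normal
# users can't write the image even if they edit the VM settings
cat >> /vm/lab-windows.vmx <<'EOF'
ide0:0.mode = "independent-nonpersistent"
EOF
chmod 444 /vm/lab-windows.vmdk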
Re:rsync doesn't need *nix (Score:2)
Re:rsync doesn't need *nix (Score:2)
computers will update to that on bootup and be able to update, even if Windows is severely messed up
rsync is only usable from userland, but I suppose if you had no other solution, you could
1) install an initrd on the box,
2) boot into that,
3) rsync the image to a different partition on disk, and
4) pivot_root into that partition (sketched below).
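A rough sketch of steps 3 and 4 from inside the initrd (busybox shell assumed; the server name, rsync module, and device names are all hypothetical):
# step 3: sync the image onto the scratch partition
mkdir -p /mnt/newroot
mount /dev/hda3 /mnt/newroot
rsync -a --delete imageserver::rootimage/ /mnt/newroot/
# step 4: make the freshly-synced tree the new root and boot it
cd /mnt/newroot
mkdir -p old_root
pivot_root . old_root
exec chroot . /sbin/init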
Not Windows, but Linux... (Score:2)
Have you looked at Partition Image [partimage.org]? The NTFS support is still 'experimental', but it can load images over a network from a server. I don't know if it can boot them or not, but it's open-source, so I'm sure you can get some kind of help from the developers toward adding that sort of capability yourself. Then, you'd just need to make a set of bootable CDs that run the partimage client and automatically rewrite the hard drive with the correct image. Shoot -- if you put 2GB of RAM in them, would it be possible
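For reference, a scripted restore with the partimage client looks roughly like this (the batch flag and all names are from memory / hypothetical):
# mount the image share and restore the NTFS partition; -b = batch mode
mkdir -p /mnt/images
mount imageserver:/srv/images /mnt/images
partimage -b restore /dev/hda1 /mnt/images/lab-ntfs.partimg.000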
Re:Not Windows, but Linux... (Partimage(d)) (Score:2, Interesting)
Complete picture:
+ PC boots, loads linux from network (PXE boot)
+ Linux does an fdisk, starts partimage, and restores the original image
Overnight reinstalls are necessary 'cause we want to give students total freedom on the machine
Two problems:
1) Found no way to boot Windows from a running Linux so far. Temporary solution is a reboot, having the DHCP server
Nightly, network boot, or NT (Score:2)
Windows 9x gives the user root. Anything you do can be bypassed, since anyone can rewrite the partition table to wipe out Linux and change the default boot partition to the Windows one. The only way I know of to get around this is booting off a chip in your network card: your server can either load Linux or tell the machine to boot off the Windows partition, depending on some scripts you set up. Due to the time it takes to re-install, I recommend doing this nightly; if you notice a machine that someone has screwed up
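That "chip in your network card" is PXE; a minimal pxelinux sketch of the server-side toggle (paths hypothetical, syntax from memory):
# your server scripts rewrite this file: DEFAULT reimage on rebuild
# nights, DEFAULT windows the rest of the time
cat > /tftpboot/pxelinux.cfg/default <<'EOF'
DEFAULT windows
LABEL windows
  LOCALBOOT 0
LABEL reimage
  KERNEL vmlinuz
  APPEND initrd=initrd.img
EOF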
Re:Nightly, network boot, or NT (Score:1)
Yes it's a commercial solution, but we use DeepFreeze to keep the machines locked down. It's very very hard to screw up one of the machines here, and if the worst comes to the worst we can re-clone a machine as a last resort.
The only drawback to this is that if it isn't set up for remote administration, it's a real bugger to install any legitimate upgrades. So minor changes (like adding one shortcut to the
roll-your-own idea with rsync (Score:1)
- use Cygwin's rsync to auto-update the passive partition
- move folder "os.old" to "os" when the rsync completes
- round-robin boot between the partitions
this may be a terrible idea, have never tried it; rough sketch of the update half below
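Something like this, run from Cygwin on the active partition (drive letter, server, and module names are invented):
# D: holds the passive copy; sync it, then swap it into place
rsync -a --delete imageserver::winimage/ /cygdrive/d/os.old/
mv /cygdrive/d/os /cygdrive/d/os.stale
mv /cygdrive/d/os.old /cygdrive/d/os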
A free solution for the Mac (Score:1)
Rembo (Score:2)
Watch out with Multicast (Score:1)
Linux + VMWare + Windows (Score:2)
SystemImager v3.0.1 (Score:2, Informative)
SystemImager makes it easy to do automated installs (clones), software distribution, content or data distribution, configuration changes, and operating system updates to your network of Linux machines. You can even update from one Linux release version to another!
It can also be used to ensure safe production deployments. By saving your current production image before updating to your new production image, you have a highly reliable contingency mechanism. If the new production e
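Client-side, pulling a newer image onto a running machine is something like this; the command name and flags are from the SystemImager 3.x docs as I remember them, so verify before trusting:
# bring this client up to date against the named image
updateclient -server imageserver -image lab_image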
Don't do this (Score:3, Interesting)
There were two different setups, and I can't tell you what software they used to achieve them, but I can tell you what happened from a user's perspective.
In the first setup (a small lab of about 20 machines), the machines were set up to automatically replace their installation of Windows once a week at a "convenient time". The problem was, this time was convenient for the sysadmins rather than the users. So, when working on a project outside scheduled lab times, I would often have to wait about 30 minutes to start work while the machine got a fresh copy of Windows. This was even worse if there was more than one person trying to use the machines, as the network would slow down.
The obvious solution to the above problem is to change the time to something like 3am. However, in these days of devastating Windows worms, I don't think it's an option to install a new image once a week. Also, many university computer facilities are open 24/7; you often get students who like to work antisocial hours, so choosing a convenient time is pretty difficult.
The second setup was a more campus-wide solution. I'm not sure how they achieved it, but it seemed that each machine maintained a log of which files were changed while a particular user was logged on. When they logged off, the machine simply returned the disk to the state it had been in before.
There are many problems with doing what you suggest:
+ User ignorance: naive users are used to saving their stuff to C:. If you then overwrite the disk, they will complain about your policy eating their homework.
+ If you have one 'master' disk image, how do you manage the different drivers required for different hardware? It's impossible to maintain a large number of systems with exactly the same hardware (when you consider component failures etc).
I would suggest the following: Use the permissions and management facilities of the OS to prevent users installing their own software or writing to the C: drive etc. Really lock them down. Give each user networked disk space which only they can write to. Make sure that you have an automated way to roll out patches, and keep on top of things. Make sure your virus protection is tip-top. Try to reduce the possibility of students infecting systems via removable media (I'd outlaw floppy disks, but students still use these!).
Further, for each "group" that needs to work together (e.g. small groups of final-year students who are working on a particular project), provide a "transfer" area which they can all read and write. For users who need to install their own software (e.g. computer science researchers), establish a small team of sysadmins at their location and let them do their own thing; just make sure they are sufficiently safe behind a firewall, so they can't easily shoot themselves in the foot and your managed main network is safe from any of their screw-ups.
Re:Don't do this (Score:1)
Perhaps you're referring to a Centurion Lock [centuriontech.com]?
Re:Don't do this (Score:2)
Since group projects are pretty big in undergrad these days, it'd be nice if the students could easily have group storage without having to do it on their own machines (since school-run servers tend to be more reliable, easier to connect to, and on a faster connection than student-run ones).
How true... (Score:2)
Try to reduce the possibility of students infecting systems via removable media (I'd outlaw floppy disks, but students still use these!).
I work at a college, too, and I can't tell you how many times students have walked in with a floppy disk (sometimes physically damaged, i.e. cracked in half) holding the only copy of a paper that's due in an hour.
We've had pretty good luck using BadCopy (from JufSoft) to recover the disks, but sometimes they are too far gone, and students can't understand how it could
Not tricky to implement your dual boot solution (Score:2, Informative)
Make Linux the default target to boot to.
When you're inside Linux and you want it to boot Windows for the next boot, and only the next boot, you do a
lilo -R windows ; shutdown -r now
The next boot will be into Windows. The boot after that will be back into Linux.
Seems like you could set things up very easily to do what you want.
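For context, a minimal lilo.conf this assumes, where "windows" is the label that -R refers to (device names hypothetical); lilo has to be re-run after any config change, hence the && below:
# default target is linux; "lilo -R windows" overrides it once
cat > /etc/lilo.conf <<'EOF' && lilo
boot=/dev/hda
default=linux
image=/boot/vmlinuz
  label=linux
  root=/dev/hda2
  read-only
other=/dev/hda1
  label=windows
EOF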
PQ DeployCenter (Score:1)
PC-Rdist (Score:2)
OS X Server (Score:4, Interesting)
This probably won't apply to you, but it's worth knowing: Mac OS X Server can do this out of the box (to Mac clients). Apple calls it "NetBoot", and it's been available since at least 2000; I believe the tech came from NeXT originally.
Under OS 9 and 10.3 it allows for clients-without-drives as they get all their OS etc from the server down the wire (10.1,
Re:OS X Server (Score:2)
Just google for netboot bootp and tftp.
Can't see where anyone netboots windows, though...
Re:OS X Server (Score:2)
Re:OS X Server (Score:2)
I stand corrected. Interesting, though, that one can netboot OS 9 clients--I guess it's because you've got a Unix Server doing the heavy lifting.
Boot Prom? (Score:1)
Not so hot in terms of network traffic at 8am, and god-forbid your user saves locally and the machine locks, or your server gets compromised (shudder) but maybe an option if you can get around those hurdles.
Just a suggestion anyways.
lilo -R boots to other OS once (Score:3, Informative)
Is that too simplistic? man lilo for the -R switch.
pay for it.. (Score:2)
Re:pay for it.. (Score:1)
Already did this. (Score:3, Informative)
We used Smart Boot Manager [sourceforge.net] and set up scheduled reboots.
Works like a charm. Note that it not only cleans up the machines at the end of each day, it also allows you to patch your master image and push that out to the network (even a one-day lag is still faster than going from machine to machine patching or ghosting).
Watch out for oddities such as the Daylight to Standard time switch, though.
-transiit
Re:Already did this. (Score:1)
This is exactly the perfect solution (at least this solves my problem perfectly).
Re:Already did this. (Score:1)
This is a bitch with DeepFreeze too. (Though a worthwhile one compared to the havoc Students/Tutors can cause on the PCs).
I was just lucky this week that it coincided with both a no-lessons week and the latest Virus Signature update.
Re:Already did this. (Score:2)
1) Machines are set to completely ignore DST updates
2) The Samba login scripts do a time sync upon login, every time.
That keeps the clocks right, and the dialogs down.
Frisbee (Score:2)
From the abstract:
pc-rdist (Score:2, Informative)
A reply to someone's comment about work space: when you set up applications, just make sure their default save location is in such a directory. (Also, use NTFS to enfo
The every-other-reboot thing (Score:1)
Google for JO.SYS and download the free one some guy wrote. Configure JO.SYS to boot to the hard drive after a one-second delay. Google for and download int19.com (it makes a PC warm-reboot). Put both files on a floppy. Rename JO.SYS to JO.BAK. Configure autoexec.bat on the floppy to do your thing (re-image with Ghost, whatever), then rename JO.BAK back to JO.SYS and call int19.com.
Finally, configure some kind of startup script on the hard drive to rename a:\JO.SYS to a:\JO.BAK.
Now, every t
seems simple (Score:2)
Have the Windows image copy a grub.conf into place when it boots, such that the default boot partition is the Linux UMSDOS partition.
Have the Linux partition copy a grub.conf into place when it boots, such that the default boot partition is the Windows partition.
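A sketch of that ping-pong with hypothetical paths; each side stages the other's config, so every boot flips the default:
# Linux side, run from its boot scripts: update Windows, then point
# grub back at Windows for the next boot
rsync -a imageserver::winimage/ /mnt/windows/
cp /boot/grub/grub.conf.windows /boot/grub/grub.conf
reboot
# the Windows side does the mirror image from its startup script:
# copy grub.conf.linux over grub.conf so the boot after next re-images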
What I've seen... (Score:2, Interesting)
Rebuilding the PC downloaded an image from a central server and re-imaged C:
If, however, the menu system noticed the time was after 1:00am and the PC hadn't been rebuilt for 24 hours, it would force a rebuild, cleaning up any leftover problems.
The system was enforced by removin
Automatic Ghosting. (Score:1)
However, there is hardware out there that will remove any changes to the system on the next boot, even if the HD is formatted. The solution I know of is called ZeroCard. Set the computer up, install the PCI card, then set it up with a password. When you want to change it, do so, restart, and either boot off the disk or hold down a key and enter the password. (I'm a bit sketchy on th