Virtualizing Workstations For Common Hardware?

An anonymous reader writes "We have approximately 20 workstations which all have different hardware specs. Every workstation has two monitors and generally runs either Ubuntu or Windows. I had started using Clonezilla to copy the installs so we could deploy new workstations quickly and easily when we have hardware failures or the like, but am struggling with Windows requiring new drivers to be installed for all new hardware. Obviously we could boot into Ubuntu and then load a Windows virtual machine on top of that, but I'd prefer not to have the added load of a full GUI underneath Windows — we want the maximum performance possible. And I don't think the multi-monitor support would work. Is it possible to have a very basic virtual machine beneath to provide hardware consistency whilst still allowing multi-monitor support? Does anyone have any experience with a technique like this?"
  • VMWare View (Score:3, Informative)

    by Anonymous Coward on Sunday April 18, 2010 @10:37PM (#31892042)

    VMWare View [vmware.com] is what you want.

  • Using Citrix. Not sure if this is what you are looking for or not...

  • yes (Score:5, Insightful)

    by girlintraining ( 1395911 ) on Sunday April 18, 2010 @10:37PM (#31892048)

    I do. The short answer: Don't.

    Just on interactivity alone, the response is slow: you spend extra seconds loading windows and menus, and after a while those extra seconds add up to real productivity loss. Virtualization belongs on servers and in labs, where interactivity is less important than raw horsepower. For a workstation, don't virtualize. It's painful.

    • Re:yes (Score:4, Informative)

      by MyLongNickName ( 822545 ) on Sunday April 18, 2010 @10:39PM (#31892056) Journal

      I am in a virtualized environment and it works fine. I guess it really depends on your situation.

      Most of my users are using basic business apps. For these things, Citrix XenApps (I think that is the name this week) works well.

      • Yup, it totally depends on the situation.

        At one employer, I had to occasionally run Windows apps, to appease the bosses. It was annoying, but I did it. For that, I had XP installed in a Virtualbox VM. It ran fine. I'd leave it minimized so it didn't bother me while I was doing real work. The hardware wasn't anything exciting. It was a $400 PC from CompUSA (single core AMD64, 2Gb RAM). Everything worked fine, including the occasional request to look at something in MSIE be

      • Am I right?

        • Am I right?

          I'm both a user and an IT professional. I'm a strong proponent of using the tools I make, and spending some time actually doing the job they were meant for before handing it back. People who are conventionally-schooled have preconceptions about how things "should" be, and when they get into the field you get ideas like this -- remote desktop for one application is not what the article is about. The article was talking about wholesale virtualization of the entire workstation, not just a single application.

          Th

          • Re: (Score:3, Insightful)

            by rhendershot ( 46429 )

            have preconceptions about how things "should" be, and when they get into the field you get ideas like this -- remote desktop for one application is not what the article is about. The article was talking about wholesale virtualization of the entire workstation, not just a single application.

            I think you're also missing the OP's real question. The way I read it is that s/he wants to set up each workstation with a simple virtualization layer upon which the choice of Windows or Ubuntu can be made at boot time.

    • Re: (Score:3, Informative)

      by itzdandy ( 183397 )

      I would argue just about every point here.

      modern hypervisors are quite fast. Most of the perceived slowdown is a result of using something like VNC to access the VM.

      A basic Linux install with KVM and the console glued to the VM would do it. Get serious and contribute some developer time, or put out some bounties, to get a Windows video driver appropriate for your needs.
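
A rough sketch of what the comment above describes (a bare Linux host whose console is handed straight to a KVM guest), assuming qemu-system-x86_64 is installed; the disk image path and VM sizing below are made-up placeholders:

```python
#!/usr/bin/env python3
"""Minimal sketch: boot a Windows guest under KVM with the console
"glued" to the local display, per the comment above. The image path,
memory size, and core count are illustrative assumptions."""
import subprocess

DISK_IMAGE = "/var/lib/vms/windows.qcow2"  # hypothetical guest image

cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",         # hardware virtualization (needs VT-x/AMD-V)
    "-cpu", "host",        # expose the host CPU model to the guest
    "-smp", "2",           # two virtual cores
    "-m", "4096",          # 4 GB of guest RAM
    "-drive", f"file={DISK_IMAGE},format=qcow2",
    "-full-screen",        # take over the local console; no desktop underneath
]

# This gives a single full-screen display; spanning two monitors is exactly
# the part the submitter is worried about and would need a multi-head-capable
# virtual GPU plus matching guest drivers.
subprocess.run(cmd, check=True)
```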

      • modern hypervisors are quite fast. Most of the perceived slowdown is a result of using something like VNC to access the VM.

        It's not about the damn hypervisor, it's about system overhead. Every thread you add means more shuffling in and out of the cpu stack. The more threads, the more accesses to (slower) main memory instead of level 2 or level 1 cache. It doesn't matter what operating system you use, or if it's virtualized or not -- modern systems can only handle so much concurrency gracefully. Exceed that limit and you incur performance penalties. And beyond a certain point, the system spends more of its time doing memory ops

    • I second the GP's response, with the added caveat that graphical performance is by far the slowest part of current virtualization methods. To put it in perspective, your GPU (even if it's a bargain basement integrated piece of junk) has a lot more (albeit narrowly focused) horsepower than your CPU does. Virtualizing the CPU is pretty much a solved problem with vmx/svm, while there's still no performant solution for virtualizing the GPU.

      • Well, then if you are worried about GPU virtualization, why not go with application virtualization?

        Of course, I have a hard time believing that 20 workstations are all that hard to maintain. Unless they are geographically dispersed, I am not sure virtualization is worth the effort.

        • Re: (Score:3, Insightful)

          by Mad Merlin ( 837387 )

          By application virtualization I assume you mean running a single application over the network as is possible with X11 (or many other solutions), instead of the whole desktop/machine. The problem with that is that it doesn't solve the problem outlined in TFS at all, as he wanted to eliminate having to deal with a grabbag of random hardware which Windows inevitably does not support (without special coaxing) every time a new machine comes through the door or some hardware explodes.

    • I do. The short answer: Don't.

      Just on interactivity alone, the response is slow: you spend extra seconds loading windows and menus, and after a while those extra seconds add up to real productivity loss. Virtualization belongs on servers and in labs, where interactivity is less important than raw horsepower. For a workstation, don't virtualize. It's painful.

      This is a surprising response. The rare times I've needed to work on Windows GUI projects, I've always virtualized with VirtualBox from an Ubuntu host and have never had any performance complaints at all. In fact, it was much faster than most Windows machines I've used because once I got the guest to a good state, I snapshotted it and rolled it back every time I shut the guest off. I would almost go so far as to say that the preferred way to run Windows is as a guest OS from linux where you roll back the

        You're doing this in a laboratory situation, not in the real world. Your approach will not work when you're talking about running a hundred, or a thousand, concurrent VMs on commodity hardware. Remote or local access is hardly the problem... it's all those concurrent threads gulping down bandwidth that could be used to do actual processing, instead of memory copies.

          You're doing this in a laboratory situation, not in the real world. Your approach will not work when you're talking about running a hundred, or a thousand, concurrent VMs on commodity hardware. Remote or local access is hardly the problem... it's all those concurrent threads gulping down bandwidth that could be used to do actual processing, instead of memory copies.

          I don't think the OP was talking about "hundreds or thousands of VMs on commodity hardware". He was talking about making 20+ workstations able to run both Linux and Windows in a sane way. In that case, if you have to do it, make Ubuntu the host and Windows the guest using VirtualBox. Performance is much, much better that way.
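
A minimal sketch of the snapshot-and-rollback workflow described in this sub-thread, using VBoxManage; the VM and snapshot names are assumptions:

```python
#!/usr/bin/env python3
"""Sketch: restore a Windows guest to a known-good snapshot before each
session, then boot it, as the comments above describe. Names are assumptions."""
import subprocess

VM = "win-guest"          # hypothetical VirtualBox VM name
SNAPSHOT = "clean-state"  # snapshot taken once the guest was fully set up

def vbox(*args: str) -> None:
    subprocess.run(["VBoxManage", *args], check=True)

# One-time step (run after configuring the guest to your liking):
# vbox("snapshot", VM, "take", SNAPSHOT)

# Every session: roll back to the clean snapshot, then start the guest.
vbox("snapshot", VM, "restore", SNAPSHOT)
vbox("startvm", VM, "--type", "gui")
```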

        • Re:yes (Score:4, Informative)

          by _Sprocket_ ( 42527 ) on Monday April 19, 2010 @12:42AM (#31892792)

          You're doing this in a laboratory situation, not in the realworld. Your approach will not work when you're talking about running a hundred, or a thousand, concurrent VMs on commodity hardware.

          Woah. Hold on. Who said anything about running hundreds, even thousands of concurrent VMs? I think the parent (and actually the subject) is talking about single local box, single VM.

          I've been doing the same thing for a few years now. I can't escape Windows apps so I run a VM to provide a Windows desktop. That's worked pretty well for me except for lately where performance has degraded - I suspect due to my using a real partition (which is no longer supported). Co-worker of mine does the same thing and has no issues whatsoever (which he points out when I grumble at my VM).

  • by couchslug ( 175151 ) on Sunday April 18, 2010 @10:38PM (#31892052)

    It's easy enough to slipstream (lots of) extra drivers and periodically update a master install .iso using tools such as nlite.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      It's easy enough to slipstream (lots of) extra drivers and periodically update a master install .iso using tools such as nlite.

      nlite is not for commercial use!

  • not a cure-all (Score:5, Interesting)

    by CAIMLAS ( 41445 ) on Sunday April 18, 2010 @10:40PM (#31892060)

    Virtualization is not a cure-all (and your approach is wrong, to boot).

    What you're looking to do is use the latest, greatest technology for profit(!!!). You're going about it wrong. There are plenty of other, better technologies to accomplish the same basic thing: proper system imaging/installation via something like an installation server.

    When you've got 20 workstations, you're at that cusp of continuing on the path you're on (and hopefully, resorting to a method of consistent repeatability) or deciding on a different approach - thin clients, perhaps. Or maybe virtualization is the right approach - but I can guarantee that there's likely no good reason to virtualize Windows on top of each of the 20 workstations that couldn't be solved with better design.

    Honestly, if you're one of multiple IT in a place with only 20 workstations, you're seriously over-staffed. Someone - if not you, someone else - is going to figure this out, and figure out a way to make themselves important and you redundant. Even with moderate consistency and controls, a single competent Administrator should be able to take care of 5 times as many workstations and a handful of servers without too much sweat.

    • Re:not a cure-all (Score:4, Insightful)

      by MyLongNickName ( 822545 ) on Sunday April 18, 2010 @11:07PM (#31892266) Journal

      Who said he has multiple IT people working? My guess is that it is a smaller shop and they have one or maybe two people doing double duty as IT admin/other duties. My guess could be wrong, but so could yours :)

      • by CAIMLAS ( 41445 )

        At any rate, virtualization at the workstation level to abstract the primary utility is the Wrong Approach.

        One thing I've learned is that simplicity is often better than complexity. KISS. This plan isn't simple: while it might save some time on deployment of a new system, it's needlessly complex and roughly doubles the maintenance involved. It also adds headaches due to Windows licensing (unless they're going to sysprep the machines).

        There are a lot of little "gotchas" which someone not up on such thin

    • Re: (Score:3, Insightful)

      by catmistake ( 814204 )

      Virtualization is not a cure-all

      I respectfully disagree. When it comes to MS Windows, if ever there was a cure-all, virtualization is it. Make a short list of the problems with Windows, and one way or another, virtualization can solve them. If you're clever enough, for instance, the ubiquitous need for virus protection can be eliminated by sandboxing (just think of the gazillions of proc cycles that could be saved). Virtualization can make Windows secure in a way it will never be when it runs on the bare iron. Once you have a virtualized s

    • by Z34107 ( 925136 )

      I'd be inclined to agree with your "proper system imaging/installation via something like an installation server" approach. If he has sufficient dollars, kitten's blood, and vespene gas, he wants to set up Windows Deployment Services (a role in Server 2008 R2). Windows 7 images are almost completely hardware agnostic - you can build them in a virtual machine and deploy them to real hardware if you want, as long as you stick the appropriate drivers on the server.

      XP is a different story; it's so hardware bound it'

    • by dbIII ( 701233 )

      Honestly, if you're one of multiple IT in a place with only 20 workstations, you're seriously over-staffed.

      That really depends on what the company does, so we can't judge this.
      I had some clueless fool say to me once, "Isn't it funny how even very small companies have an IT guy", when the company name implied data processing with clusters. By not knowing the circumstances and making blanket statements, we could look like nothing but clueless fools ourselves. Even a video game arcade has multiple staff because they

    • Honestly, if you're one of multiple IT in a place with only 20 workstations, you're seriously over-staffed.

      "Honestly", you're making a lot of assumptions and have invented a scenario where a)the story writer is IT AND b)The company is only 20 people AND c)They are overstaffed IT-wise. Do Slashdot posters ever listen to how stupid they sound?

      Maybe he's a developer or similar user at a small/startup company where they are the most technical people already. I was hired on to my first job because the en

      • by dmomo ( 256005 )

        >> I was hired on to my first job because the engineers were tired of playing tech support for the rest of the company.

        And Bless you for it! I'm an engineer, and I've fallen into that position before. I can program, but don't really know much about the Windows desktop post 98. And I know probably about as much about Windows networking. Still, I've answered many calls that prevented me from doing my actual job. Every time the solution was a mix of Google and Start->Control Panel. All to figure out

  • Xen? (Score:3, Interesting)

    by the_humeister ( 922869 ) on Sunday April 18, 2010 @10:40PM (#31892064)

    But only if you have hardware virtualization support.

    • Re: (Score:3, Informative)

      by CAIMLAS ( 41445 )

      Xen would be the way to do it, if you had servers. Running the display on the same system as the Xen system is, last I checked, not yet possible.

  • Maybe (Score:3, Funny)

    by MeNeXT ( 200840 ) on Sunday April 18, 2010 @10:40PM (#31892068)

    next year will be the year of the Windows workstation.... 8^)

  • by kaustik ( 574490 ) on Sunday April 18, 2010 @10:42PM (#31892078)
    NxTop is pretty cool. It is a hypervisor that installs directly onto the client hardware, allowing you to pull and boot pre-configured images over the network. The hypervisor removes the need for specialized drivers and supports dual monitors. It also has the advantage over VMware View of allowing the OS to sync for offline use if you would like to leave the office with a laptop. Sure, VMware has it as an "experimental" feature now, but it is production with these guys. They came and did a demo for us the other day; pretty cool stuff. I think it was affordable too. You can set policies for who gets what images, remotely disable a lost or stolen laptop, etc. Check this out: http://www.virtualcomputer.com/About/press/nxtop-pc-management-launch-massively-scalable-desktop-virtualization-for-mobile-pcs [virtualcomputer.com]
  • by SlamMan ( 221834 ) on Sunday April 18, 2010 @10:43PM (#31892082)
    You're just making it harder than it needs to be. Use Ghost, Acronis, KACE, or any of the other semi-hardware-agnostic imaging systems. Failing that, just take individual images of each piece of disparate hardware. It just takes a little one-time work for each piece of hardware, and a large disk drive.
    • by guruevi ( 827432 ) on Monday April 19, 2010 @08:58AM (#31894742)

      Yeah, I tried that before; it doesn't work all that well with Windows. With Linux and practically any other OS you can just deploy a generic image with a modular kernel on the system and it will generally work. Network adapters work with a generic driver, video cards work with a generic driver, USB ports work with a generic driver.

      With Windows, only the APIC or the boot-drive controller (PATA/SATA/SCSI) has to differ from the original host for the thing to blue screen, even when sysprepped. Even when the drivers are included, the image came from a system with the same APIC, and the system has been sysprepped, if the USB ports aren't the same as on the machine you made the image from, the system won't react to your input until all the USB hardware has been re-detected (which can take a while and sometimes requires a cold reboot, since you can't click on the dialogs). Whenever an update (especially a Service Pack) needs to be included in your image, all drivers have to be re-checked (manually) for all your different hardware to make sure none needs to be updated as well. Ideally you would have test systems, replicas of each piece of hardware you have, but even in small organizations this can add up to tens or hundreds of idle machines that you have to acquire and justify.

      I now know why large organizations standardize on a single vendor and can't offer their end-users any choices in hardware besides the amount of RAM and hard drive space. Windows is just plain bad to maintain even with experienced admins. I have virtualized practically all installations of it and even though it takes a slight performance hit, it's much easier to manage than trying to keep up with images for all the different hardware you can have in a single organization.

  • I think Ghost/Clonezilla is the way to go. You really shouldn't add extra layers of complexity for no reason.

    Do you really think switching to Linux will fix your driver problems? The real solution is to use the same hardware across the network.
    I mean, having a CD taped to the side of the case for machine-specific drivers might be a little low-tech, but it prevents confusion.

    • -1 for comprehension

      Linux DOES solve his driver problems. Everything works. Windows needs different driver sets and this is causing deployment issues.

      Of course, this is solved via extra tooling (I'm sure): "Ghost", "Clonezilla", "Sysprep", "Slipstreaming" (whatever that is; please don't comment - I don't administer Windows, and, really, don't care).

      He has realized that deploying a single Windows image would work, if rolled out onto a Linux base OS using virtualization. However, multiple monitor support i

  • by PenguSven ( 988769 ) on Sunday April 18, 2010 @10:45PM (#31892104)
    This was solved a long time ago. Sysprep allows you to bundle whatever drivers you want, and it will just load what it needs on first boot. Combine that with a network imaging solution (back when I worked in that area, we used ZENworks, but there are other options), and ideally network installs of software (i.e., the image should be a base OS and not much else) and you should have limited problems. A new machine type will require a new image, but you can just deploy the old one, add the new drivers, run sysprep and re-create the image. I never had to do mass-imaging of Linux machines, but surely you could take a similar approach for the Ubuntu images?
    • Re: (Score:3, Insightful)

      by QuantumRiff ( 120817 )

      In addition to sysprep, if you are running Vista or Windows 7, you can use the tool DISM.exe from the Windows Automated Installation Kit, to inject plug and play drivers into your offline image. You also might really, really want to look at the MDT 2010 tool from Microsoft. It does make deployments of windows easier when it comes to drivers.
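
A sketch of the offline driver injection step the parent mentions, run on a Windows admin machine with the WAIK/DISM available; every path below is an illustrative assumption:

```python
#!/usr/bin/env python3
"""Sketch: mount an offline Windows image, inject a tree of drivers with
DISM, and commit the result, as the parent comment describes. Paths are
assumptions; run from an elevated prompt with the WAIK installed."""
import subprocess

WIM = r"D:\images\install.wim"   # hypothetical captured image
MOUNT = r"C:\mount"              # empty directory to mount the image into
DRIVERS = r"D:\drivers"          # folder tree of .inf driver packages

def dism(*args: str) -> None:
    subprocess.run(["dism", *args], check=True)

dism("/Mount-Wim", f"/WimFile:{WIM}", "/Index:1", f"/MountDir:{MOUNT}")
dism(f"/Image:{MOUNT}", "/Add-Driver", f"/Driver:{DRIVERS}", "/Recurse")  # recurse the whole driver tree
dism("/Unmount-Wim", f"/MountDir:{MOUNT}", "/Commit")                     # write changes back into the .wim
```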

    • Sysprep is not a 100% thing, and some drivers have their own control panels / background apps that may or may not be loaded right after sysprep.

    • I never had to do mass-imaging of Linux machines, but surely you could take a similar approach for the Ubuntu images?

      Why bother? Unless your hardware OEMs refuse to cooperate with Linux, the drivers are either going to be present, or Ubuntu will download them after the first boot. Ubuntu may not be the geekiest distro around, but it does make things like that as easy and painless as possible.

      • Why bother?

        Well, that's great. It's been several years since I did more than deploy web apps to already-running Linux servers, so auto-downloading drivers is great... so long as it has a generic NIC controller, I guess.

  • A common solution is not always the right one. Many times it lacks the requisite low-level I/O needed to do the job right.

    Take, for instance, DDC/CI. I don't know what you're doing and that's fine, but in my line of work we have to talk to the monitor. You ain't doin' that on a virtual machine.

    Just because it's virtual doesn't mean it's better.

  • VMware view (Score:5, Informative)

    by dissy ( 172727 ) on Sunday April 18, 2010 @10:52PM (#31892146)

    It's not cheap so might not be a viable option for a smaller shop, but VMware has been making some very interesting strides in this area.

    Check out VMware View, which uses the PCoIP display protocol (yes, that is PC-over-IP: personal computer over internet protocol).

    http://www.vmware.com/products/view/ [vmware.com]
    http://www.vmware.com/resources/techresources/10083 [vmware.com]

    Put really simply, each real workstation is loaded with a minimal system and the vmware view clients.
    When a user goes to login to a computer on your network, after authentication their virtual workstation pops up (Be it windows or ubuntu) and lets them work.

    All of the actual 'workstations' being used are virtual machines, thus are the same unified image you are looking for with one set of drivers.

    While I have not tested it with a multi-monitor setup, they claim it is supported.

    The one main thing you do lose is full accelerated 3D support, and direct support for old eccentric hardware. (Think ISA card support and non-standard PCI interfaces)
    I can say USB support is simply amazing in how well it works.

    Clients can even play full interactive Flash media and video, and it runs well (as well as one would expect it to work on a native OS, anyway).

  • by Merc248 ( 1026032 ) on Sunday April 18, 2010 @10:52PM (#31892156) Homepage

    I used unattended [sf.net] on a FreeBSD box at one of my old jobs, since we had like five or so different models of computers. It works sort of like RIS, except it's easier to extend the system since it's all written in Perl and it's all open source. We dumped the contents of an XP disc on the server, then slipstreamed driver packs [driverpacks.net] into the disc directory structure; this catches almost everything but the most obscure hardware out there. Unattended allowed us to run post-install scripts, so we threw in a bunch of other software packages that would install after the OS was done installing, like Office 2007, Adobe suite, etc.

    This was substantially better than a disk image; we took care of all of the drivers in one fell swoop, so the only thing we used as a differentiator between computers was how the person used the computer (if it's a student lab computer, we loaded a bunch of stuff like Geometer's Sketchpad, InDesign, etc. If it was a faculty's laptop, we'd load software to operate stuff in the classroom.) We save space on the server, and we save time when it comes to putting together another "image" for a different use case.

    But as others said above, I wouldn't virtualize the workstation, even if it eases up on the IT dept. a little bit; just be smart about what deployment method you use. I wouldn't recommend using unattended if you had only about three different models; it's likely substantially easier to just use CloneZilla.

    Oh, and use a centralized software deployment system such as WPKG [wpkg.org]. Your disk images will go stale after a while, in which case you'll have to make sure that you can manage the packages installed on clients somehow.

  • by Anonymous Coward

    What you are looking for is called a type 1 or bare-metal client hypervisor. Bare-metal client hypervisors are a fairly new technology, with the leading ones (which are still in development) being from Citrix and VMware. They are XenClient and CVP; both are expected to be out later this year. Two of the smaller players in this field are Neocleus and Virtual Computer; both have a general-release product, however neither of them has been around long enough to be proven. Hope this helps; you might not have a the so

  • Existing deployment tools from Microsoft already do this. You need the WAIK, which is a free download from Microsoft.

    You need to create a generalized image. If you get all the required drivers for all your hardware into the driver store, the drivers will be found during install. You can also deploy from PXE boot using WDS with a generalized image...

    There are a few caveats around a few drivers that aren't designed properly for Sysprep, and applications that aren't designed with sysprep in mind, but otherw
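
For reference, a minimal sketch of the "generalize before capture" step this comment relies on; the unattend file path is an assumption:

```python
#!/usr/bin/env python3
"""Sketch: generalize a Windows 7 reference machine so its image can be
deployed to dissimilar hardware (e.g. via WDS), per the comment above.
The unattend path is a hypothetical placeholder."""
import subprocess

# /generalize strips machine-specific state, /oobe makes the next boot run
# out-of-box setup, and /shutdown powers the box off so it can be captured.
subprocess.run([
    r"C:\Windows\System32\Sysprep\sysprep.exe",
    "/generalize",
    "/oobe",
    "/shutdown",
    r"/unattend:C:\Windows\System32\Sysprep\unattend.xml",  # hypothetical answer file
], check=True)
```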

    • This is the correct answer. Use Clonezilla for the Linux installs and WDS for the Windows installs (or install a third party PXE server and use the same server for both). Forget virtualization unless you specifically need it to run applications or multiple simultaneous operating systems.

      WDS is how I reimage Windows PCs on my network, and to go from nothing to 100% reinstall is, start to finish, 1 keystroke, a standard login prompt, and two mouse clicks. Come back in a few minutes and you're booted into the

  • Shadowprotect HIR (Score:2, Informative)

    by ill1cit ( 730941 )
    Wow, such terrible advice from Slashdot. The easiest way to move a Windows OS from one machine to another when there are hardware differences is to get yourself a copy of ShadowProtect and use the HIR (hardware independent restore) option. Google it. Virtualising is not the best way by a long shot to do what you are trying to do.
  • Can I run a Windows 7 virtual image (Virtual Clonedrive) on an Ubuntu PC somehow? On a P4/2.6Ghz/1GB-RAM machine? Fast enough to run Visual Studio 2010 and test Silverlight apps? How?

    • by CAIMLAS ( 41445 )

      Windows 7 with VS2010 won't even run on that hardware; why would you expect it to work in a virtual environment? That's somewhat unreasonable.

      (Though, I will say from personal experience, it's amazing how much better a Windows VM will run on the same hardware it was on prior, but virtualized and with slightly less RAM due to system overhead. I did this once some time ago - w2k3 w/ VS2008, 400M RAM with the rest for the CentOS host. Disk access was, sadly and amazingly, faster and swapping wasn't half as pai

      • by nxtw ( 866177 )

        Disk access was, sadly and amazingly, faster and swapping wasn't half as painful as it was in "just" Windows

        I find this neither sad nor amazing. Host RAM can be used as disk cache.

  • Cut to the chase.

    You have client machines; not all are going to be the latest or the greatest in hypervisor tech (you do what you have to do to keep things afloat). Consider thin clients: there is a myriad of hardware offerings, fewer headaches, and better server hardware will keep you way ahead of the curve and lessen your footprint, exposure, and budget.
    The only caveat is if your clients run AutoCAD, heavy graphics-intensive programs, major databases, or programming workloads.

    Windows, UNIX or Linux - or all, "pick your

  • Yes, you can do it with VMware ESXi, if and only if the hardware supports it.
  • As others have mentioned, you don't want to go down the rabbit hole of virtualization just to manage 20 computers in an office.

    Altiris products are worth considering. The Client Management Suite is pretty terrific for managing lots of dissimilar clients. The rebranded "Backup Exec System Recovery Solution" doesn't do as much, but also works fine with different hardware clients. I haven't bought anything from them since Symantec bought them, but we loved Altiris before that.

    If that's too rich for your blo

  • Stop looking for a stove+fridge combination, buy a fridge and a stove.

    Seriously, you will spend less money and have a faster result by buying two machines, if you need both environments - unless you're talking about many hundreds (perhaps thousands) of machines, it's difficult to justify building a merged Windows / Ubuntu SOE in terms of delivery architecture. What would the merged SOE look like in terms of budget, after it's filtered through a bunch of consultants? Ubuntu doesn't take much in the way o

  • One answer is a terminal server. There are a couple of drawbacks and it's not a solution for every office, but the advantages are many:

    - Workstation drivers/quirks are far less trouble or made moot altogether
    - You can set security policies, do application installs, and just generally manage things from one place
    - Backing up everyone's data is way easier
    - Users can login and gain access to their files from any workstation (even allow VPN in)
    - A dead workstation can simply be swapped out with another with no h

  • I use Kubuntu for my workstations. I load them from a PXE boot server, which installs using a combo of a kickstart file and preseeding. The kickstart/preseed config looks to a local mirror to install from. That local mirror is run by apt-cacher-ng so it's always up to date. If you're trying to get maximum performance, don't bother trying to use a VM for Windows. I do that as well, but it's only to run legacy apps we don't have Linux versions of yet. I've gotten to around 95% Linux only use, so it's not a bi
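
A small sketch of the preseed piece this comment describes: pointing every install at a local apt-cacher-ng proxy so package downloads stay on the LAN. The hostnames and output path are assumptions:

```python
#!/usr/bin/env python3
"""Sketch: generate the mirror/proxy stanza of an Ubuntu preseed file so
PXE installs pull packages through a local apt-cacher-ng instance, as the
comment above describes. Host and path names are assumptions."""
from pathlib import Path

CACHER = "http://apt-cacher.example.lan:3142/"  # hypothetical apt-cacher-ng box

preseed = f"""\
# Stock archive, fetched through the local cache.
d-i mirror/http/hostname string archive.ubuntu.com
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string {CACHER}
"""

# Drop the fragment where the PXE/preseed server expects it (assumed path).
Path("/srv/tftp/preseed/mirror.cfg").write_text(preseed)
print(preseed, end="")
```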

  • Sounds like your problem is in having to support different drivers for every different piece of hardware.
    How the heck is virtualisation going to help you? Even if you have a virtual machine that emulates a standard hardware config, what are you going to run the virtualisation software on? You are still going to have different hardware to account for at some level. Whether it's a hypervisor or the OS itself, somewhere down the chain the differences in hardware are going to have to be accounted for.

    I don'
  • by Pav ( 4298 )

    OPSI allows Windows to build itself across the network via PXE, and allows deploying apps as well. It pitches itself as an option for non-identical hardware where cloning works poorly or fails. It's also OSS.

    As an aside, I'm attempting to combine OPSI with FAI and GOsa (an LDAP management platform) to manage workstations, servers, and services (such as Samba, DNS, DHCP, FTP, Asterisk, groupware (Kolab, Horde, SOGo, phpGroupWare... you can make it manage just about anything), Nagios, Netatalk (for Mac file/print)

  • At least on the hardware side, I see two potential issues. One is i586 vs. x86_64: you will have to make an image of each type. The other problem will be your IDE/SATA controller. The image will have all the IDE and SATA drivers you need; however, you may have to use PXE boot to cause it to rebuild the initrd for your IDE/SATA controller of choice. The OS should take over after that.

    If your package manager is worth its salt, you should be able to load a list of uniform install packages from a t

  • Desktop virtualisation is the hype word of the month. Don't get suckered into it until you understand the whole concept.
    It's financially feasible only after you have 200+ desktops which you turn into virtual machines. With fewer, it'll just cost more.

    Your case might be slightly different though... you want to virtualise the OS on the same hardware the user is currently using.
    Remember that a hypervisor adds overhead; you always lose performance.
    It also creates some funky clock skew issues sometimes, and your virtual
  • First of all, the easiest solution: buy consistent hardware. Get a bid from your systems vendor for a workstation build and a laptop build, and buy them in lots of 10 or so. (The bigger your order, the better rates you can get, natch.) You can either stick with the OS (and drivers) the vendor provides and just install your own crap on top, or make a Windows image with all of the necessary drivers slipstreamed in. This is how most companies do it: it's easier on you, and it's cheaper for the company.

    If that
