
Ask Slashdot: Networked Back-Up/Wipe Process? 253

An anonymous reader writes "I am required to back up and wipe several hundred computers. Currently, this involves booting up each machine, running a backup script, turning the machine off, booting off a pendrive, and running some software that writes 0s to the drive several times. I was wondering if there was a faster solution. Like a server on an isolated network with a switch where I could just connect the computers up, turn them on and get the server to back up the data and wipe the drives." How would you go about automating this process?
This discussion has been archived. No new comments can be posted.

  • Homebrew (Score:4, Informative)

    by Anrego ( 830717 ) * on Tuesday November 29, 2011 @12:40PM (#38203420)

    Don’t know of any off the shelf software that does this, but should be easy to homebrew if you have the available skill set.

    At the very simplest, you could probably build a custom livecd linux distro to automate the process after plugging in the machine and inserting the CD/pendrive. It’s not as complicated as it sounds if you base it off an existing livecd distro!

    More complex, you could do PXE if the boxes are capable/configured for it (if not, probably more effort to change the bios settings than it would be to plug in the CD).

    You’re probably content just with the backed up files, but I’ll also throw out there that I’ve found a very effective way to back up old machines/drives is to convert them into virtual disk files. Lets you boot up the old machine in a VM and poke around should the need arise. (disclaimer: I’m a dev not a sysadmin, so this is purely from “at home” experience).
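    For the disk-to-virtual-disk idea above, here's a minimal sketch. Device and image names are placeholders, and the demo runs against a scratch file rather than a real /dev/sdX so it's safe to try:

```shell
#!/bin/sh
# Sketch: image a whole drive to a raw file a VM can boot. DEV and OUT are
# placeholders; the demo uses a scratch file instead of a real /dev/sdX.
DEV=./fake-disk.bin
OUT=./old-machine.img

# create a small stand-in "disk" so this sketch is safely runnable
dd if=/dev/urandom of="$DEV" bs=1M count=4 2>/dev/null

# raw copy of the device; conv=sync,noerror pushes past bad sectors
dd if="$DEV" of="$OUT" bs=1M conv=sync,noerror 2>/dev/null

# a raw image boots directly in QEMU; to shrink one, qemu-img (if installed)
# can convert it: qemu-img convert -f raw -O qcow2 old-machine.img old.qcow2
cmp "$DEV" "$OUT" && echo "image matches device"
```

    A raw dd image can be attached to QEMU or VirtualBox as-is, which matches the "boot up the old machine in a VM" use described above.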

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      I kinda lean towards a linux PXE setup too.

      Debian FAI (Fully Automated Install) with all the needed setup, can run tasks and such, in a way that would work for you. It takes some setup (PXE/bootp/dhcp + NFS etc), but it's very capable, and might be practical if you need to do "thousands" of machines.

      • Just make sure to whitelist the MAC addresses when you do this, in case someone plugs something in later, then regrets it ;)

    • by charnov ( 183495 ) on Tuesday November 29, 2011 @12:54PM (#38203588) Homepage Journal

      Acronis or Ghost Enterprise can do this with every PC on a single network segment.

    • Correct answer on first post, excellent work!

      Turning Linux PCs into VMs is easy but for Windows computers it's a big PITA.

    • by Kamiza Ikioi ( 893310 ) on Tuesday November 29, 2011 @01:28PM (#38204048)

      FOG is a PXE cloning solution. http://www.fogproject.org/ [fogproject.org] Install FOG plus storage where you want the backups, set up the PXE IP on the network, and input all the MAC addresses you want backed up. Then use the web interface to clone them all. When you're done backing everything up, put a .img file of DBAN on the FOG server. http://www.dban.org/ [dban.org] Configure it in the FOG PXE boot menu, and make it an option but NOT the default. Add the appropriate startup flags for the level of wiping you want. Restart all the computers you want to wipe, and select the wipe option when the PXE boot menu comes up.

      I suggest you set that option with a password, since it will be available on all computers, not just the ones with registered MAC addresses: only the FOG boot authenticates against the MAC, not DBAN.
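      A hypothetical pxelinux menu entry for a DBAN option like that (file names must match whatever you copied from the DBAN image into your tftp root, and MENU PASSWD needs a menu module such as vesamenu.c32; wipe-method flags go inside the nuke= string per DBAN's docs):

```
LABEL dban
  MENU LABEL Wipe this machine (DBAN)
  MENU PASSWD changeme
  KERNEL dban.bzi
  APPEND nuke="dwipe" silent
```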

      • Have you used FOG at all? I'm in the middle of a project where I have to set up a PXE boot server to install Windows on a bunch of boxes. The rub of it is that I've already got a DHCP server, and I'm a bit wary of running the FOG installer, which is going to reconfigure my production server (which already has a TFTP server installed). I'm also a bit worried that when we switch over to VoIP phones I'll need that TFTP server to serve configuration to the phones.

        If you have any experience with these setups,

    • Always network boot first.

      3 pxeboot configs; backup, wipe, localboot
      2 corresponding tftp configs, which boot 2 different ramdisks.

      First is a backup image using the tools of your choice. Last thing the backup does is write a flag to shared storage which tells the boot server to switch a particular machine to wipe mode.

      Second boots the wipe image. When the wipe is complete, the pxeboot config is switched to localboot.

      Now you have a network of centrally managed systems you can manage by changing a couple of pxebo
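      A sketch of that mode switch, assuming pxelinux (directory layout and kernel names are illustrative, not a real deployment): pxelinux looks up a config file named after the client's MAC address, so changing a machine's mode is just swapping which template that file contains.

```shell
#!/bin/sh
# Sketch of the backup -> wipe -> localboot switching described above.
# Paths and kernel names are assumptions, not a real deployment.
TFTP=./tftpboot/pxelinux.cfg
mkdir -p "$TFTP"

cat > "$TFTP/mode-backup" <<'EOF'
DEFAULT backup
LABEL backup
  KERNEL vmlinuz-backup
  APPEND initrd=initrd-backup.img
EOF

cat > "$TFTP/mode-wipe" <<'EOF'
DEFAULT wipe
LABEL wipe
  KERNEL vmlinuz-wipe
  APPEND initrd=initrd-wipe.img
EOF

cat > "$TFTP/mode-localboot" <<'EOF'
DEFAULT local
LABEL local
  LOCALBOOT 0
EOF

# pxelinux reads pxelinux.cfg/01-<mac>, so per-machine mode = one file copy
set_mode() { cp "$TFTP/mode-$2" "$TFTP/01-$1"; }

# e.g. the backup job's last act: flip this machine into wipe mode
set_mode aa-bb-cc-dd-ee-ff wipe
```

      The backup job would call set_mode with "wipe" as its final step, and the wipe job with "localboot", giving the state machine described above.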

  • by Anonymous Coward on Tuesday November 29, 2011 @12:42PM (#38203438)

    Then don't automate it.

    • Re: (Score:3, Insightful)

      by Mythran ( 2502540 )
      That's just crap. "Let's be less efficient so we can get more money!" That's not the mindset devs or sysadmins should ever be in. I can't think of a career where less efficient just for greed is a good thing. Always strive to be better than what you are.
      • I can't think of a career where less efficient just for greed is a good thing. Always strive to be better than what you are.

        You, my son, will never have a career in politics...

      • Nobody gives a fuck if you live or die because you are an expendable sharecropper. Business and employee owe each other nothing not spelt out in contract or law.

        Get paid, show activity, and ensure you are essential.

    • You may get paid more for the job...
      However, you may lose the opportunity to get repeat business.
    • Re: (Score:3, Insightful)

      by hrvatska ( 790627 )
      Even if you're paid hourly, it's still worth it to automate. If you're conscientious, it will permit you to exceed expectations, which can be good for a raise or bonus. If all you care about is slacking off, automating will give you more time to slack off. Either way it would pay to automate.
      • Also, you define when the automation is "done". That means you can make your own job easier, then take your time learning other related things, experiment with optimizations, look into changing the requirements and experiment with how you might meet any new requirements. Once it's working, "done" just means you're bored with it and ready for the next project.
    • This attitude is why we cannot have nice things.

  • Well, you can back them up using Clonezilla; however, I've never used it that way before, so I don't know exactly how you'd automate it....

    but if you were to do that, you could then just write a bash script on the end which does the wipe with dd for you too. Job done :)
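    A toy version of that back-up-then-wipe chaining, run against a scratch file so it can't hurt anything (in real use, swap the file for /dev/sdX and the first dd for your Clonezilla step):

```shell
#!/bin/sh
# Back up, then wipe -- and never wipe if the backup step failed.
# DISK is a scratch file here; in real use it would be /dev/sdX.
DISK=./scratch-disk.bin
dd if=/dev/urandom of="$DISK" bs=1M count=2 2>/dev/null   # pretend disk

# 1. backup (stand-in for the Clonezilla step)
dd if="$DISK" of="$DISK.backup" bs=1M 2>/dev/null || exit 1

# 2. single zero pass over the "device"
dd if=/dev/zero of="$DISK" bs=1M count=2 conv=notrunc 2>/dev/null

# 3. verify no nonzero byte survives
if ! tr -d '\0' < "$DISK" | grep -q .; then echo "wiped clean"; fi
```

    The || exit 1 ordering is the point: the wipe only runs if the backup command succeeded.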

  • by Anonymous Coward on Tuesday November 29, 2011 @12:48PM (#38203502)

    Nobody has demonstrated the ability to recover data after that outside of a carefully controlled lab.

    • Multi-pass overwrite may not be necessary to comply with your policies, but if the boss thinks he heard something once that it's better and insists it be done, we do it.

    • by Anrego ( 830717 ) *

      If this is just for their own paranoia, then yeah, I agree.

      However they are probably trying to be compliant with some standard/requirement (the backup makes me think that).. in which case it is probably mandated that they have to use a tool from some approved list with a minimum number of wipes.

    • I say 2 passes with random data to be extra safe, if you want more than 2 passes, seek professional help - for your paranoia.

    • Nobody has demonstrated a proof of its infeasibility, either. That's not terribly reassuring to tell your boss: "Don't worry, this confidential information is PROBABLY safe, because some dude named Peter Gutmann says it is unlikely that someone can recover the data."

    • by blair1q ( 305137 )

      1. There are carefully controlled labs, and your competitor/enemy has them.

      2. Depending on his situation, it may not be legal to reuse the disks without doing all the writes. If you're involved in defense work and these guys [dss.mil] find out you've got one known improperly sanitized system and may have more, every box in your company can be carted off, generally to be returned to you with hard drives and flash memory (including any soldered to the motherboard) removed.

  • by BagOBones ( 574735 ) on Tuesday November 29, 2011 @12:49PM (#38203510)

    Microsoft User State Migration Tool + Microsoft Deployment ToolKit + Sdelete http://technet.microsoft.com/en-us/sysinternals/bb897443 [microsoft.com]

    You should be able to backup the profile, load the OS and run a zeroing delete on all "empty space" on the drive.

    • Microsoft's USMT isn't terribly good. It's much quicker and easier to simply use ERUNT on the user's hive and back up the %userprofile% Desktop, My Documents, Favorites, AppData, and LocalSettings\AppData folders. That's essentially what USMT does, except it takes about 3x longer to do so and sometimes manages to bork everything in the process.

  • by Oswald McWeany ( 2428506 ) on Tuesday November 29, 2011 @12:49PM (#38203514)

    There are two commonly used techniques to the wipe process.

    In Europe the preferred method is to fold the paper in half before wiping. In the US the preferred method is to scrunch up the paper in a ball before wiping.

    Check whether the PCs you are wiping did a number one or a number two. Male PCs do not need wiping for a number 1.

  • DBAN? (Score:3, Insightful)

    by Anonymous Coward on Tuesday November 29, 2011 @12:49PM (#38203518)

    As for a whole-problem solution, I think you will need to do a bit of DIY. But just a note on the wipe process: just writing 0 to the drive repeatedly will not ensure all the possibly sensitive data is non-recoverable; you really need to write random 1's and 0's at least 3 times to each bit of the drive. For that there is no better program than Darik's Boot and Nuke (DBAN), which I think is available as a liveCD and is included in several distros, including the Ultimate Boot CD (UBCD). That may be a good place to start for a single-boot backup-and-wipe solution, if you can write a shell script that can run from a pen drive while UBCD is in the CD bay.

    • Re:DBAN? (Score:5, Informative)

      by EdZ ( 755139 ) on Tuesday November 29, 2011 @01:18PM (#38203924)

      Just writing 0 to the drive repeatedly will not ensure all the possibly sensitive data is non-recoverable, you really need to write random 1's and 0's at least 3 times to each bit of the drive.

      This has not been true for a LONG time. Ever since the GMR head became widespread (first introduced in 1997), platter field densities became too high, and field strengths too low, to feasibly read any sort of residual field after a single pass. Never mind that even if you could read the residual domains, poring over a single 1 TB drive with an MFM would take literally billions of man-hours (8796093022208 bits * 1 bit every 10 seconds = 24433591728 hours, or 2.789 million years) to recreate even a rough guess of the bit layout, and that you would then need to align all the guessed layouts for each platter perfectly (think a few million possible combinations at least) before you could even start trying to pull data from the drive.

      Send the ATA SECURE ERASE command to the drive, then move on while the drive controller does its thing. It'll even erase sectors in the G-list, which DBAN will not.
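      For reference, the hdparm incantation for that looks roughly like the sketch below. The flags are real hdparm options, but treat this as a sketch: the guard refuses to touch anything that isn't a block device, and drives reporting "frozen" need a suspend/resume cycle before the erase will be accepted.

```shell
#!/bin/sh
# ATA Secure Erase via hdparm. The security-erase sequence requires setting a
# temporary drive password first; the firmware then erases everything,
# including remapped (G-list) sectors that software overwrites never see.
secure_erase() {
  dev=$1
  if [ ! -b "$dev" ]; then
    echo "refusing: $dev is not a block device"
    return 1
  fi
  hdparm -I "$dev" | grep -A8 'Security:'   # must say "not frozen"
  hdparm --user-master u --security-set-pass tmp "$dev"
  hdparm --user-master u --security-erase tmp "$dev"
}

# safe demo: the guard trips on a plain path, so no drive is touched
secure_erase ./not-a-device || echo "guard worked"
```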

      • Does anyone have any data to back this up? All I've ever heard on the subject is speculation on what is probably feasible, but no one seems to have gotten a lab like Kroll or the NSA to comment on it (except that the NSA seems to think that one pass ISN'T enough).

        Personally, I recommend a pseudorandom wipe for people who care to some degree, but I make it clear that there are very few guarantees in computing, and I'm also not dealing with confidential data.

      • Incidentally, the gross errors in your estimate are A) 10 seconds for one bit seems awfully high, and B) a devastating leak could be effected with the release of as little as 64,000 bits, or 8 kB, which even by your estimates is doable in a short period of time. It is not necessary to recover everything off of the entire disk in order to cause harm.

      • by blair1q ( 305137 )

        you don't need to inspect every location on the disk before you can start reconstructing it. the MBR is in a known location. its content is less random than you'd think. and the rest is hierarchical from there.

        you also don't need an MFM. just a sensitive head that tells you actual field strength instead of high-bit/low-bit values.

        in any case, overwriting disks is a start-and-walk-away process. you can always start enough of them that the first is done before the last begins. even better if you have a m

    • by jimicus ( 737525 )

      I've heard that one from a number of quarters.

      Closer examination almost invariably reveals that they're referring to the work of Gutmann. The thing is, Gutmann's work was entirely theoretical. I have yet to see any evidence that anyone in history has ever successfully recovered any data from a hard disk - any hard disk - that was entirely overwritten with 0's.

  • Use a screwdriver. (Score:5, Insightful)

    by Scioccoballante ( 1417005 ) on Tuesday November 29, 2011 @12:49PM (#38203520)
    Take the hard drives out of them, label them, and stick them in a closet.
    • I think this is my favorite answer to this. It may not be *the* answer, but I applaud your approach of "rethinking the problem".
    • by blair1q ( 305137 )

      Not very useful if your plan is to donate the obsolete computers to local schools and take a big tax break and get a lot of cred with the kids.

  • by TheCarp ( 96830 ) <sjc.carpanet@net> on Tuesday November 29, 2011 @12:50PM (#38203530) Homepage

    I would look at FAI or kickstart. For FAI, use a pretty early hook to back up and wipe; for kickstart, a %pre script.

    Of course, if you are working alone and don't know how to configure DHCP/NFS etc., it may take you a couple of days just to get the basic setup going, as they can be very finicky, but the quickstart guides out there should generally be able to get you going. If all goes well, you could be working on your scripting in a couple of hours; if not... well, I hate troubleshooting NFS. (And don't forget to check your iptables setup if you are having trouble getting it working... amazing how much better NFS works when its packets are not being dropped.)

    Overall, I like FAI better than kickstart, but that's probably because I have used it less, and those early stages (DHCP/NFS mount) are hard to troubleshoot with kickstart since stage2 (and thus a shell with which to troubleshoot) isn't available until that works... though you probably don't have the same constraints I do and can just switch USB keys and boot off a fully functional system to poke around.

  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Tuesday November 29, 2011 @12:53PM (#38203570) Homepage

    There isn't a whole lot to optimize in your process. Backups and wipes take time. One thing that could save you a step is to run the backup from the pen drive. That would allow you to script the entire process, such that you only need to boot off the pen drive, preferably have it cache itself into a ramdisk and start the script automatically, then move on to the next box. That would bring the whole process down to maybe 2 minutes per box.

    Having ghosted a bazillion machines this way, it's monotonous but if you create 4-5 of those pen drives, you can do a bunch in parallel.

    • Debian's debirf tool allows fairly painless building of custom bootable ISOs which boot to ramdisk. The ISOs can usually be run through isohybrid for pendrive booting, depending on the hardware and how fussy it is.
    • Backups should be completely automated, to the point where the sysadmin only has to verify that they were completed and that they are viable for restore. Having to do anything more than that is an indication that things aren't being done properly and that you're going to lose data at some point.

      Wipes OTOH can be automated, but it's going to depend how confident you are that you're wiping the correct machine and that the backups are completely current and haven't been corrupted.

  • Storage (Score:4, Informative)

    by vlm ( 69642 ) on Tuesday November 29, 2011 @12:58PM (#38203654)

    Everyone else (anyone else?) will answer the automation question, but if you've ever done a PXE-based Linux install, you're about 99% of the way there.

    The mystery I have, is where are you going to store "several hundred" drives worth of backups? And who or what is going to back up and maintain and store and recover the backups?

    I'm guessing the best answer is open all the boxes, remove the drives, install new blank drives, all done? Given the cost of storage and admin time, this might even be the cheapest solution.

    If this is a forensics issue, it's a heck of a lot simpler legally to stuff THE drive in an evidence bag and buy a new one, rather than try to explain how your image is a true image, crypto-signed so it wasn't altered after it was signed, except how do you prove it wasn't altered before it was signed, blah blah blah.

    Are you talking about backups where you only store relevant user "my documents" type data which might be practically nothing, or merely all files on a stereotypically mostly empty drive which would be at most a couple gigs, or a full bit for bit forensics dump of hundreds of 1 TB drives?

    There's a big difference between "it all fits on a single USB attached consumer grade 1 TB drive" and "We're gonna need multiple racks of multimillion dollar NAS to hold all the images".

    How valuable is the data? If it leaked, would you lose PCI / CC / HIPAA / SOX stuff and it's the end of the world, or at least your corporation and job, or is it just a university computer lab where the most valuable/sensitive thing is a couple rickroll videos and some lolcats?

    What do you intend to do, if anything, with the backups? The simplest / cheapest / most efficient way to store backups might involve just throwing the machines in a rented storage room. Climate controlled if possible. You can rent a heck of a lot of storage space for a long time for the cost of a couple hundred hours of admin time.

    Finally, what's your liability? If, for example, one doesn't boot due to hard drive failure or whatever, are you shipping it to one of those $10K data recovery places, in other words you actually care, or if you lose some, eh, whatever, it was just a "nice to have"? If you can lose one, can you lose all of them with the same "eh" attitude? If your liability is significantly lower than your costs, your best plan might be to skip the backup and destroy the drives.

    In summary the problem isn't how to "transfer" a couple hundred terabytes, that is a long solved question, no big deal. The unsolved problem is how to store / collate / search / backup / distribute / secure a couple hundred terabytes.

    • Re:Storage (Score:4, Informative)

      by vlm ( 69642 ) on Tuesday November 29, 2011 @01:12PM (#38203816)

      Whoops epic fail on my part, you have an endgame plan for the old machines, you are imaging their drives and wiping them, like today, or whenever you get off slashdot. That's just ducky.

      Now, what's your endgame plan for the images? Keep them forever? Or just next financial quarter/year? Or whatever the IRS interval is (7 years, I think)? Does the NAS / RAID / external USB drive holding them need to get copied and wiped? If you're doing the geographic diversity thing, who's securely disposing of the offsite backups?

    • Wish I had mod-points for this.. I approve of this higher-level thought process. However, the OP left out his actual role in this process.. it's possible he's being given a Divine Mandate of how it's supposed to be done (that is: network backup)

  • Live with the tedium of doing it manually. It sucks, but unless you are going to have to do this exact operation again in the future, don't bother automating it. Possibly the solution of taking out the hard drive, putting it in a drive dock on another computer, and letting that computer back up and wipe the drive might be slightly less tedious, depending on the situation.

    Because, if you listen to what you are asking, you are trying to set up an automated back-up and erase system. Unless you have a Lot

    • by vlm ( 69642 )

      Unless you have a Lot Of Time to Test this BEFORE HAND, you could easily end up with an automated screw-up-the-back-up and nuke-everything system

      This might be the best (only?) justification for buying non-free (as in beer and freedom) software I've ever seen: you can intentionally buy the cheapest, cruddiest, non-working commercial software out there, then when all the data is lost, you don't have to maintain, backup, search, restore and otherwise admin the images for eternity minus a day, and you can blame the commercial software provider instead of yourself... Everyone, especially in management, knows commercial software just doesn't work some

  • If you're doing this for secure disposal, there's a much easier solution:

    Pop the drives out and do your work via external slot-loading drive caddies. You can get rid of the big machines as usual and work your way through the drives as time permits between other tasks. If your software has command-line APIs, it should be pretty easy to set up scripts to do this.

    - or -

    Do the backup as a separate task. Deploy a dedicated backup tool (for de-duplication and compression) or use rsync. Then set up DHCP with NetBoot

  • Single server pxe boot into a live linux distro with clonezilla and your drive wiper of choice. Some simple scripting to get clonezilla to backup all drives to the server under the name gotten from a prompt and wipe when it's done. Throw the same bits on a USB drive if you want.

  • PXE booting is not difficult to set up and Clonezilla is dead simple to automate after that. DBAN also has instructions to PXE boot, but I've never used it that way. Extra points for setting it up to do both in 1 pass. Clonezilla also has the nice feature of verifying that you have a good backup.

  • For software backup, Norton Ghost enterprise is the way to go unless you have some solution you already are using / have to use...

    For wiping the hard disks, they used to make bench-top hardware boxes you could hook up 4 drives to directly and mass-copy them all 4 at a crack. You can use a clean formatted drive as a source and "duplicate" that to wipe the drives clean, 4 at a time simultaneously.

    I'm not sure if there are similar devices that do Ultra-ATA or SATA, but it might be worth looking into getting o

  • running some software that writes 0s to the drive

    That seems unwise. You're not really wiping the drive, just making it harder to read. Most modern wipe software overwrites the drive 7 times with random data.

  • by md65536 ( 670240 ) on Tuesday November 29, 2011 @01:13PM (#38203858)

    That will make the backup a lot easier.

  • The first question that pops into my mind is, what is determining this secure wipe procedure, and how secure does it really need to be? If you're looking to speed things up, you could wipe everything with zeros once instead of "several times". The difference in security is minimal.

    Aside from that, there are open source solutions that will image a drive and others that will wipe the disk. It shouldn't be too hard to chain them together, though I don't know of any pre-built solution. I'm stating the obvi

    • by vlm ( 69642 )

      You say chain them together, like 1 ms after backing up, you start wiping. I say, how long can you wait with the images and hardware in storage before wiping?

      At least back them all up, then wipe them all, as two separate processes? Whatever you do, don't manually start one process right after the other, because at least 1% of the time (several machines, in your case) you'll accidentally start the wipe before the backup. At least that'll compress pretty well if you're wiping with zeros.

      Wiping is faster and "what if" the

  • If the machines are Linux (or booted temporarily into Linux), use ssh (or rsh) to script most of what you're doing. Be sure to configure them to not require passwords for ssh. Then use rsync to back up, and remote ssh scripting to do the wipe on all machines. You can get smart with transferring scripts to the machine & running them with ssh scripting without doing anything manual.
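    A dry-run sketch of that fan-out (host names and paths are made up; by default it records the commands it would run in a plan file instead of executing them, so it's safe to try; clear RUN to make it real):

```shell
#!/bin/sh
# ssh+rsync backup fan-out, then a remote wipe, as described above.
# Dry run by default: RUN=log records each command in backup-wipe.plan.
log() { echo "$@" >> backup-wipe.plan; }
RUN=log                       # set RUN= (empty) to actually execute
HOSTS="host1 host2 host3"     # example host list
BACKUP_ROOT=/srv/backups

: > backup-wipe.plan
for h in $HOSTS; do
  $RUN mkdir -p "$BACKUP_ROOT/$h"
  # pull the filesystem; key-based ssh auth to root@host is assumed
  $RUN rsync -aHx --exclude=/proc --exclude=/sys "root@$h:/" "$BACKUP_ROOT/$h/"
  # wipe only after the rsync above has been verified
  $RUN ssh "root@$h" "dd if=/dev/zero of=/dev/sda bs=1M"
done
```

    In a real run you would want a verification step between the rsync and the remote dd, since the wipe is unrecoverable.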

    If the machines are Windows boxes, you might want to look at some remote access/backdoor solutions (of the "gray" hat vari

  • Most modern infrastructure management tools like Altiris can easily perform a pxe boot function. Set up a wipe job, link it to the MAC address and wipe it. Bonus points for having an auditable trail if that's required by your flavor of regulation.

  • of recovering any data after a successful single pass with

    dd if=/dev/zero of=/dev/sda bs=4k

    I'm just curious. I've read all the theoretical stuff, but wouldn't the drives have to be disassembled in a clean room and the platters installed on some machine that can read the faint magnetic residuals...

    Who has these facilities and machines, if anyone, beyond the alphabet-soup gangs?

  • If this is the only time you'll ever do it, a pen drive sounds good enough, although a CD image might be better since you could make a ton of them quickly.
    Otherwise, piecing together a PXE solution would be a waste of time, since you still have to plug the machines in, configure the BIOS for PXE, and unlock the BIOS if you're planning on donating the machines (the BIOS steps can be done with automated utils if you're using HP or Dell machines).
    If you can leave the machines where they are, and they're already u
  • What I would do is configure a laptop to run DRBL or Windows Deployment Services (WDS). Both will give you PXE boot options and can boot whatever Linux (DRBL) or WinPE (WDS) utilities you want to use. WDS is a part of Windows Server 2008 R2 and for what you are going to need it for, you shouldn't have to purchase a license since the evaluation period should be sufficient time for you to complete your process. My suggestion would be to customize a Windows PE image to run a backup utility to capture all th

    • I actually built a similar system, but you lost me at

      "run a backup utility... Norton Ghost to wipe the drive"

      ImageX is built into Windows (in the AIK) and it does a fantastic job of backups; it even does compression and single-instance storage to save time and space. To wipe the disks, you can run any number of free/cheap utilities (Active KillDisk?) or you can just run 'diskpart' with 'clean all' to write zeros (good enough for 99.5% of cases).

  • debian installer is very much misunderstood or at least underappreciated. i did a very very large (significant deviation / automated installation) system for automated customised installs of KDE desktop. some people said i would have been better off creating debian packages with postinst and preinst scripts, but i liked the convenience of being able to edit the shell scripts etc. etc. *without* having to run a debian package-create command. the results of the work are still here: http://lkcl.net/d-i/ [lkcl.net]

    anyw

    • by lkcl ( 517947 )

      ... it was very cool, especially combined with automated telnetting to KVM switches. at the HTTP console, just run a script that said "ok, power-cycle machine X, set it to PXE boot, rewrite the DHCP config (automatically), when it comes up it will load this OS" :)

  • How about removing the drives from the machines and doing more than one backup and wipe at a time? Linux dd doesn't have a problem doing the backups of anything as long as it is mounted, and wiping would be a lot faster and easier without all those reboots and hoops you have to jump through. That's how I would attack the problem. What sense does it make to boot and backup and reboot and wipe when the drives can be easily removed from the machines and wiped attached to a processing machine. Hell, you cou

    • Not sure why most of the previous commenters thought you were redeploying these machines. Sounds like they are being surplused to me if you're writing 0s to the drives.

    • "Linux dd doesn't have a problem doing the backups of anything as long as it is mounted"

      Linux DD will also save all your deleted data as gobbledygook and lead to GIANT image files. If anything, you want these backups at the file-level, not block-level. Bonus points if you can backup to something with deduplication or single-instance storage.

  • You can script as much of this as you want.

    1: boot a linux live image (CD, Thumbdrive, PXE)
    2: mkdir /mnt/backup
    3: mount //someserver/someshare /mnt/backup

    Copy the raw device to the network share. We'll use ddrescue rather than dd so that it finishes even if the HDD has issues. You'll also get a nice log of the issues.
    4: ddrescue /dev/sda /mnt/backup/someName-`date +%Y-%m-%d`.img /mnt/backup/someName-`date +%Y-%m-%d`.log

    Wipe the disk
    5: dd if=/dev/zero of=/dev/sda bs=1M

    If you would like t
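    The numbered steps above can be assembled into one script (someserver/someshare and /dev/sda are the placeholders from those steps; as a safety catch, this version only prints its plan unless WIPE_CONFIRM=yes is set in the environment):

```shell
#!/bin/sh
# Steps 1-5 above as one script. Prints the plan unless WIPE_CONFIRM=yes.
[ "$WIPE_CONFIRM" = yes ] && RUN= || RUN="echo PLAN:"
NAME=${1:-some-machine}          # pass a per-machine name as $1
STAMP=$(date +%Y-%m-%d)

$RUN mkdir -p /mnt/backup
$RUN mount //someserver/someshare /mnt/backup
# ddrescue pushes past bad sectors and logs them, unlike plain dd
$RUN ddrescue /dev/sda "/mnt/backup/$NAME-$STAMP.img" "/mnt/backup/$NAME-$STAMP.log"
$RUN dd if=/dev/zero of=/dev/sda bs=1M
$RUN umount /mnt/backup
```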
  • There are a bunch of good posts here, but most people are jumping to solution without knowing some pretty important requirements.

    1) How important are the backups? If your backup failure rate is 1%, is that OK? How many backups can fail affects how much effort you need to put into validating that you've got every backup. I'd suggest you use whatever you use for regular backups and automate the agent installation and removal - this way you can plug into your regula

  • Easy to do: on boot it runs a backup of the drive, i.e. makes an image of it, then when done: dd if=/dev/zero of=/dev/hda

    reboot for each step? why?

  • Here's what we do where I work:

    You'll need a Windows Server 2008 R2 with Windows Deployment Services role. You basically want to set up an isolated network with PXE booting, load a Windows PE disk into the PXE server. Modify the PE image to mount a drive off the server (to store your backups), then run a wipe script. As soon as the backup is done, you can actually fire up the next machine, you don't need to be 'connected' to wipe the disk.

    For our purposes, we use Active Killdisk to wipe, and ImageX to backu

  • 1) Get all the computers together in a room with a couple of monitors and a few keyboards.
    2) Plug monitor+keyboard into computer 1, start backup script
    3) Plug monitor+keyboard into computer 2, start backup script ... etc
    4) Coffee break.
    5) Plug monitor+keyboard into computer 1, start wipe
    6) Plug monitor+keyboard into computer 2, start wipe ... etc
    7) Coffee break.

    I did something similar in the past, and ran everything off a boot floppy. I didn't even need to plug the monitor in, just boot from floppy, hit en

  • Definitely PXE.

    However do not forget the TIME it takes to backup and wipe. I hope your disks are not too big! If they are, you might want to partition them to use only a small part of the disk.

    More to the point, what is the aim of your requirements? Maybe you would be better served by

    - diskless workstations

    - encrypted disks

    - encrypted partition with the key in the boot partition (wipe the 1 MB partition containing the key and bingo you've wiped your 3TB disk)
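    The key-wipe trick in miniature (a scratch file stands in for the key partition here; on a real LUKS setup, `cryptsetup luksErase` on the device, or zeroing the key partition as below, does the same job):

```shell
#!/bin/sh
# Wipe-the-key-not-the-disk: if the data partition is encrypted, destroying
# the small key area makes the whole disk unreadable in one fast pass.
# Scratch-file demo; a real target would be the key partition, e.g. /dev/sda1.
KEYPART=./fake-keypart.bin
dd if=/dev/urandom of="$KEYPART" bs=1M count=1 2>/dev/null   # pretend key area

# one pass over 1 MB instead of hours over 3 TB
dd if=/dev/zero of="$KEYPART" bs=1M count=1 conv=notrunc 2>/dev/null

tr -d '\0' < "$KEYPART" | grep -q . || echo "key destroyed"
```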
  • and running some software that writes 0s to the drive several times.

    This has been covered several times here and elsewhere, but you don't need to write the 0s more than a single time.
