Ask Slashdot: Free/Open Deduplication Software?
First time accepted submitter ltjohhed writes "We've been using deduplication products for backup purposes at my company for a couple of years now (DataDomain, NetApp etc.). Although they've fully satisfied customer needs in terms of functionality, they don't come cheap, whatever the brand. So we went looking for free dedup software. OpenSolaris, using ZFS dedup, was the first that came to mind, but OpenSolaris' future doesn't look all that bright. Another possibility might be utilizing LessFS, if it's fully ready. What's the Slashdotters' favourite dedup flavour? Is there any free dedup software out there that is ready for customer deployment?" Possibly helpful is this article about SDFS, which seems to be along the right lines; the changelog appears stagnant, although there's some active discussion.
I've wanted deduplication for a long time! (Score:4, Interesting)
That deduplication for NTFS is really interesting [fosketts.net], actually. It isn't licensed-in technology; it comes straight from Microsoft Research, and it has some clever aspects to it.
Some technical details about the deduplication process:
Microsoft Research spent two years experimenting with algorithms to find the “cheapest” in terms of overhead. The system selects a chunk size for each data set, typically between 32 KB and 128 KB, though smaller chunks can be created as well; Microsoft claims that most real-world use cases average about 80 KB. It scans all the data computing rolling “fingerprints” to find candidate split points, and selects the “best” on the fly for each file.
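The article doesn't say exactly what fingerprinting algorithm they use, but the description matches standard content-defined chunking. Here's a minimal Python sketch of that general technique, assuming a Rabin-Karp-style rolling hash; only the 32 KB/128 KB bounds come from the article, and the window size and split mask are my own illustrative guesses:

    # Generic content-defined chunking sketch (NOT Microsoft's actual,
    # unpublished algorithm). A rolling hash over a small sliding window
    # is computed at every byte; wherever the hash matches a bit mask,
    # the chunk is cut. Boundaries follow content, not offsets, so an
    # insertion early in a file only disturbs the chunks around it.

    MIN_CHUNK = 32 * 1024           # lower bound from the article
    MAX_CHUNK = 128 * 1024          # upper bound from the article
    WINDOW = 48                     # sliding-window size (assumed)
    MASK = (1 << 16) - 1            # ~64 KB mean gap between cuts (assumed)
    MOD = 1 << 32
    OUT = pow(31, WINDOW - 1, MOD)  # weight of the byte leaving the window

    def chunks(data: bytes):
        """Yield content-defined chunks of data."""
        start, h = 0, 0
        for i, b in enumerate(data):
            if i - start >= WINDOW:                     # slide the window:
                h = (h - data[i - WINDOW] * OUT) % MOD  # drop oldest byte
            h = (h * 31 + b) % MOD                      # take in new byte
            size = i - start + 1
            # cut on a fingerprint match, within the size bounds; these
            # parameters put the average chunk near the ~80 KB quoted above
            if (size >= MIN_CHUNK and (h & MASK) == MASK) or size >= MAX_CHUNK:
                yield data[start:i + 1]
                start, h = i + 1, 0
        if start < len(data):
            yield data[start:]                          # trailing remainder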
After data is deduplicated, Microsoft compresses the chunks and stores them in a special “chunk store” within NTFS. This is actually part of the System Volume store in the root of the volume, so dedupe is volume-level. The entire setup is self-describing, so a deduplicated NTFS volume can be read by another server without any external data.
There is some redundancy in the system as well. Any chunk that is referenced more than x times (100 by default) will be kept in a second location. All data in the filesystem is checksummed and will be proactively repaired. The same is done for the metadata. The deduplication service includes a scrubbing job as well as a file system optimization task to keep everything running smoothly.
Windows 8 deduplication cooperates with other elements of the operating system. The Windows caching layer is dedupe-aware, which greatly accelerates overall performance. Windows 8 also includes a new “express” library that makes compression “20 times faster”. Already-compressed file types are not re-compressed: zip files, Office 2007+ files, etc. are skipped by the compressor and just deduped.
New writes are not deduped; this is a post-process technology. The data deduplication service can be scheduled, or can run in “background mode” and wait for idle time. I/O impact is therefore between “none and 2x” depending on workload. Opening a file costs less than 3% extra I/O, and can be faster if it's cached. Copying a large file (e.g. a 10 GB VHD) can make some difference, since dedup adds extra disk seeks, but multiple concurrent copies that share data can actually improve performance.
The most interesting thing is that Microsoft Research says it barely affects performance at all. So when are we going to see Linux equivalents? Linux is falling behind on new technologies.
Dragonfly BSD's HAMMER... (Score:5, Interesting)
...includes dedupe.
There was a blog entry a while ago where someone on a machine with 256MB of RAM deduped 600GB down to 400GB, and performance was fine. That's quite unlike ZFS, which wants the entire dedup table in memory and requires gigs and gigs of RAM.
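For a sense of scale, here's the usual back-of-envelope for ZFS, assuming the commonly quoted approximation of roughly 320 bytes of in-core dedup table (DDT) per unique block; the real per-entry size varies by pool, so treat this as a rule of thumb, not a spec value:

    DDT_ENTRY_BYTES = 320                        # approx. in-core DDT entry

    def ddt_ram_gib(pool_bytes: int, avg_block: int = 64 * 1024) -> float:
        """Approximate RAM needed to hold the whole dedup table in core."""
        unique_blocks = pool_bytes / avg_block   # worst case: nothing dedupes
        return unique_blocks * DDT_ENTRY_BYTES / 2**30

    print(ddt_ram_gib(2**40))   # 1 TB of unique 64 KB blocks -> ~5.0 GiB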
BackupPC (Score:5, Interesting)
Check out BackupPC. Been using it for about 5 years at our company, admittedly a mostly Linux shop, with great results. Deduplication on a per-file basis, block-based transfers via the rsync protocol, and a good web-based UI (at least in terms of function). Thanks to deduplication we're getting about 10:1 storage reduction backing up servers and workstations: a total of 1.28 TB of backups in 130.88 GB of used space.
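BackupPC's actual pooling code is more elaborate (it also compresses pool files and handles hash collisions), but the core trick behind a figure like that is simple: keep one pooled copy per unique file content and hard-link every backup to it. A minimal sketch in Python; the pool path and function names are hypothetical:

    import hashlib, os, shutil

    POOL = "/var/backups/pool"     # hypothetical pool location

    def store(path: str) -> str:
        """Store path's contents in the pool; return the pooled copy."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        pooled = os.path.join(POOL, h.hexdigest())
        if not os.path.exists(pooled):          # first copy: pay full cost
            shutil.copy2(path, pooled)
        return pooled                           # duplicates cost one inode

    def backup(src: str, dst: str) -> None:
        """Record src in a backup tree as a hard link to its pooled copy."""
        os.makedirs(os.path.dirname(dst) or ".", exist_ok=True)
        os.link(store(src), dst)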
Backup or live FS? (Score:3, Interesting)
Your post doesn't make it clear whether you're looking for a free backup product to replace DataDomain, NetApp, etc., or whether you want dedup on live filesystems.
If it's a free backup product with deduplication you're after, look at BackupPC. Powerful and complex, but free. I've used it for years with good results.
write a script (Score:4, Interesting)
find . -type f -print0 | xargs -0 md5sum | sort
...and so on
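To flesh that out a little, here's a sketch of the same approach in Python: hash every file under a directory, then report any digest that appears more than once (hash collisions aside, those are your duplicates):

    import hashlib, os
    from collections import defaultdict

    def find_duplicates(root: str):
        """Map content digest -> paths, keeping only duplicated digests."""
        by_hash = defaultdict(list)
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                h = hashlib.md5()
                try:
                    with open(path, "rb") as f:
                        for block in iter(lambda: f.read(1 << 20), b""):
                            h.update(block)
                except OSError:
                    continue                    # unreadable file: skip it
                by_hash[h.hexdigest()].append(path)
        return {d: p for d, p in by_hash.items() if len(p) > 1}

    for digest, paths in find_duplicates(".").items():
        print(digest, *paths)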
Re:I've wanted deduplication for a long time! (Score:4, Interesting)
I have often wondered why someone doesn't use the rsync algorithm as a basis for this kind of chunking and deduplication. I imagine a FUSE-based filesystem that breaks the application-level files into checksummed pieces and stores both the file fragments and file descriptions into an underlying filesystem. Then it could reconstruct the application-level files on demand, using the description to draw out the right fragments.
From an academic point of view, it already solves the same problem and just needs some repackaging. It breaks arbitrary data into phrases to be identified by checksum and located in another existing corpus of data. It just needs a metadata model to record the structure of the file as composed of these canonical phrases, rather than performing the actual file reconstruction immediately as rsync does now.
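For reference, here's the weak rolling checksum from the rsync technical report, which is what makes testing every byte offset affordable; rsync pairs it with a strong hash to confirm candidate matches, and a dedup store would do the same. A minimal Python sketch:

    # rsync's weak checksum: two 16-bit running sums that can be "rolled"
    # one byte at a time, so each window position costs O(1) to test.

    M = 1 << 16

    def weak_checksum(block: bytes):
        """Full checksum of one window; returns the (a, b) components."""
        a = sum(block) % M
        b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
        return a, b

    def roll(a: int, b: int, out: int, inp: int, blocklen: int):
        """Slide the window one byte: drop byte out, take in byte inp."""
        a = (a - out + inp) % M
        b = (b - blocklen * out + a) % M
        return a, b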
From my cynical point of view, I realize someone may have patented the repackaging, in the same way that Apple seems to think they can re-patent every idea "on a smartphone".
Re:What is deduplication? (Score:2, Interesting)
Seriously, at this point on Slashdot it's been talked about enough that, unless you bought your UID from someone, you should be fully aware of what it is from here alone.
Re:OpenSolaris but not FreeBSD? (Score:5, Interesting)
I've got 8GB of RAM in the machine; RAM is so cheap now that it didn't seem worth skimping. It's a 1.6GHz AMD Fusion system. Over GigE, I was getting 40MB/s writes to the deduplicated filesystem, with one core at about 100% load.
ZFS definitely likes RAM. I'm not sure what the minimum requirements are, but the general recommendation is 'as much as you can afford'. I think 8GB of SO-DIMMs for the mini-ITX board cost about £40 and maxed out its memory, so it was a pretty obvious choice. I'm not sure what happens when the deduplication tables don't fit into RAM: whether it degrades performance or degrades deduplication efficiency. Having 8GB means that a lot of the time it can satisfy reads from RAM.
I'm using it over WiFi 99% of the time, so I'm not too bothered about the performance: it can easily saturate the WiFi link without any problems. The compression ratio is 1.11x. ZFS only shows the deduplication ratio for the entire pool, not for individual filesystems. That's currently 1.06x for my system, but that's with 1.43TB of data in total and only 266GB on the deduplicated filesystem; a 1.06x ratio over the whole pool works out to roughly 80GB saved, all of it attributable to that one filesystem, so dedup is saving about a third of its data. Roughly speaking, the extra space used by RAID-Z and the space saved by dedup seem to balance each other out, so (on my backup filesystem) I am using 1GB of hard disk space for every 1GB of data, and still have redundancy so one disk out of the three can fail without losing any data.
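Spelling out that "about a third" arithmetic (assuming, as implied, that all of the pool's dedup savings come from the one deduplicated filesystem):

    pool_gb = 1.43 * 1024                       # total pool data, in GB
    saved = pool_gb - pool_gb / 1.06            # pool-wide 1.06x dedup ratio
    print(round(saved), round(saved / 266, 2))  # ~83 GB saved, ~0.31 of 266 GB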
Time Machine on OS X does clever things like making a new copy of a 10GB file when 1MB of it has changed, and for that the deduplication on the NAS translates into a huge space saving. For things like DV footage, I don't bother with the dedup.
Re:Lessfs is slow on Atom (Score:4, Interesting)
You got 40MB/s writing to the memory cache, possibly, not to the ZFS store.
I did? That's interesting. I copied 500GB of data from an external FireWire disk attached to my laptop to the NAS via a GigE connection, yet the NAS only has 8GB of RAM. That's one hell of a compression algorithm they're using for the RAM cache...
Re:I've wanted deduplication for a long time! (Score:5, Interesting)
Also, perhaps the reason that Linux does not support on-the-fly filesystem compression is that it's a horrid idea and should never actually be used?
Ah, the "Terrible idea" objection.
This is a common objection to implementing ideas on Linux - so common, in fact, that it's successfully held Linux back for at least ten years.
Multi-master LDAP replication? Terrible idea [ietf.org]. It remained terrible for several years after literally every commercial LDAP server on the planet supported multi-master replication, and only became non-harmful when OpenLDAP added support in version 2.4.
Active Directory support? Such a terrible idea that it's held Samba development back by at least five years. Even now, with Windows Vista deprecating NT4-style policies and Windows 7 dropping NT4 domain support altogether (which is about all you get from Samba 3), Samba 4 is still considered alpha software.
Some sort of centralised groupware system that integrates email, address books, calendars, and task lists? Terrible idea. So much so that Exchange (despite being way too complicated for its own good) is still an extremely popular email solution, and the closest you can get to a viable F/OSS alternative either requires your users to completely rethink how they collaborate (yuck) or to buy the commercial version, simply because the free version lacks vital features.
Free clue to all naysayers who work on F/OSS projects: If you spent as long trying to think of ways to make something work as you do thinking of objections to existing implementations and explaining how you're right and everyone else is wrong, you wouldn't be ten years behind the times.