Auditing Large Unix File Systems? 21
jstockdale asks: "The recent article on perpendicular recording hard drive technology brought me, as a unix(tm) admin, to reflect on the management of data systems and file servers of capacities >1TB (which exist today and tomorrow will become commonplace). Since Google for once seems useless, what suggestions does the Slashdot crowd have on methods and software to audit changes, visualize file system usage, and in general to determine the qualitative and quantitative nature of the content of large unix file systems?"
Same (Score:1)
No, Different (Score:4, Insightful)
Also, who's to say file size increases with storage capacity? Perhaps at his site, the number of files increases with storage capacity while the file size stays statistically constant. MB for MB, traversing lots of little files is harder than traversing a few big files.
Re:Same (Score:3, Insightful)
OP : see also, ZTree (Score:2)
The arena gets larger and the things you are tracking get larger, but the ability to view branches of your directory structure and treat all the files on the entire set of drives as one coherent bunch (sort them all by size, date, extension for temporary files, etc.) will be invaluable.
If nothing exists that rep
I like treemaps (Score:5, Informative)
[umd.edu]
http://www.cs.umd.edu/hcil/treemap-history/inde
Hehe, it was originally made to see what was taking up all the room on an 80MB hard disk
There are various programs based on this concept, most working like "du" except that you get the results graphically: a large on-screen picture showing which directories and files take up the most space. It looks like a piece of Mondrian artwork, with the area of each rectangle proportional to the space taken, so it is easy to see at a glance what is hogging the disk. You can drill down, of course, by clicking to zoom in.
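The underlying pass those treemap tools make is the same "du"-style aggregation: walk the tree bottom-up and roll each directory's total into its parent. A minimal sketch (the function name and the top-ten report are my own illustration, not any particular tool's code):

```python
import os

def dir_sizes(root):
    """Walk `root` bottom-up and return {path: total_bytes},
    where each directory's total includes its subdirectories."""
    sizes = {}
    for dirpath, dirnames, filenames in os.walk(root, topdown=False):
        total = 0
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # file vanished or unreadable; skip it
        for name in dirnames:
            # Subdirectories were already visited (bottom-up walk).
            total += sizes.get(os.path.join(dirpath, name), 0)
        sizes[dirpath] = total
    return sizes

# Example usage: top ten directories under /export by total size
# (this ranked list is exactly what a treemap draws as rectangles):
# for path, size in sorted(dir_sizes("/export").items(), key=lambda kv: -kv[1])[:10]:
#     print(f"{size:12d}  {path}")
```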
A quick Google search revealed SequoiaView:
[win.tue.nl]
http://www.win.tue.nl/sequoiaview/
Unfortunately this only runs on Windows, but I'm sure there are similar Linux programs available.
Re:I like treemaps (Score:4, Interesting)
Screenshots [sourceforge.net]
Re:I like treemaps (Score:2)
Re:I like treemaps (Score:1)
1. tkdu
http://unpythonic.dhs.org/~jepler/tkdu/
tkinter (= python + tk)
gpl
2. treemap
http://www.cs.umd.edu/hcil/treemap
java
mentioned here [uni-bielefeld.de]
can't charge for redistribution
3. xdiskusage
http://xdiskusage.sourceforge.net/
gpl
*nix only
I tried treemap first, and it was REALLY slow and consumed tons of memory. I'm testing it under Windows, but will also use it for Linux. In fact, rerunning it on another smaller directory (though still gigabytes large with thousands of files) caused
Re:I like treemaps (Score:2, Interesting)
> are similar Linux programs available.
kdirstat [sourceforge.net].
The currently available devel version comes with treemap.
With TB-sized installations you will probably want some additional tools (or at least have kdirstat import a database built by a daily cronjob [1]); running a complete scan will take forever.
[1] Some coding involved
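For the "some coding involved" footnote, a minimal sketch of that nightly cron scan: walk the filesystem once, off-hours, and dump a flat (path, size, mtime) table that a viewer or report script can load instantly. This is a generic CSV dump of my own devising, not kdirstat's actual cache format:

```python
import csv
import os

def scan_to_csv(root, out_path):
    """Walk `root` once and record (path, bytes, mtime) per file,
    so the expensive traversal happens in a nightly cron job
    rather than interactively."""
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "bytes", "mtime"])
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                try:
                    st = os.lstat(full)
                except OSError:
                    continue  # file vanished mid-scan
                writer.writerow([full, st.st_size, int(st.st_mtime)])

# Example usage (from a crontab, hypothetical paths):
#   0 3 * * *  python scan.py  ->  scan_to_csv("/export", "/var/lib/scans/export.csv")
```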
xdu (Score:2)
Maybe the quota subsystem can be used for something...
- Hubert
Re:Rapid advances in technology (Score:1)
Damn right, wow. That's a lot of MP3's. What's the RIAA going to do then?
Tivoli (Score:3, Informative)
Huh? (Score:4, Insightful)
Our backend storage system for my project is 1TB, or at least very close to it. I don't manage the box, but I do work with it. It holds three things:
1) Its OS (small)
2) Its Oracle database files (300GB on disk, about 200GB used now)
3) Files. Word documents, CAD drawings, TIF, GIF, etc. A whole slew of them.
The admin knows what's using what. Under
When filesystems can actually hold metadata regarding their contents, then I'd give this question some thought. We could have a whole new set of Unix tools to modify our everything-is-a-file-with-badass-metadata system. Until then I don't see any way for filesystem maintenance to be a huge issue on these multi-TB systems. All you can really do with the FS is determine which system needs more space and order more disks. You can't trim or manage it with the FS.
I'm wrong a lot though, but that's my take on the "issue".
(Warning: Here there be Trolls!) (Score:2, Funny)
I am facing the same problem... (Score:4, Informative)
I have an 8.8TB (raw) EMC Symm here, and another one in Austin, TX. Then I have another 600GB or so in Sun JBOD. Most hosts are connected to the EMC over a 2Gb SAN using Brocades.
I just got this environment a couple weeks ago, and there was NO documentation. So figuring out disk usage over 15 systems has been a nightmare.
As much as I hate to admit it, EMC's ECC and Storage Scope have been a huge help. I could have done it using Veritas as well, but the EMC tools are nice.
And I'm soon going to add another 2TB SATA array over iSCSI, so then we'll see if ECC can really manage "other people's disks."
But welcome to the new, exciting field of Storage/SAN architect/admin. With arrays from HDS and EMC coming in 46TB flavors and more, resource management is a big job.
For instance, my counterpart (he handles Windows, I handle *nix) found that we had 1.2TB of BCVs (a third mirror, a weird EMC-ism) that had never been used!
So it's all about documentation. And do yourself a huge favor: come up with a clean, scalable disk/lun/volume/mountpoint naming convention. This can be critical. Not sure what other people do, but I'll have something like:
host 1: dg001-dg005
host 2: dg006-dg010
host 3: dg011-dg015
the disks are enclosure_name_id_lun#
volumes are v{dg}{v#} so first volume for disk group dg006 would be v1601.
Then mount points are based on either oracle{SID} or port, or app name or something logical.
Keep everything unique! Then you can move LUNs from host to host without having issues. This also allows you to generate usage reports and know, if v3206 is at 68%, exactly which host, disk group, volume, and app are involved.
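The payoff of a convention like that is that names become parseable. A sketch of the reverse mapping from a volume name back to disk group and host, assuming (hypothetically) two digits for the disk group and two for the volume number, and five disk groups per host as in the table above; the poster's own v1601 example suggests the exact digit split at his site may differ:

```python
def parse_volume(name):
    """Split a volume name like 'v0301' into (disk_group, volume_number),
    assuming two digits for each field (an assumption, not the poster's
    exact scheme)."""
    assert name.startswith("v") and len(name) == 5
    return int(name[1:3]), int(name[3:5])

def host_for_dg(dg, groups_per_host=5):
    """Map a disk group back to its host under the
    dg001-dg005 -> host 1, dg006-dg010 -> host 2, ... scheme."""
    return (dg - 1) // groups_per_host + 1

dg, vol = parse_volume("v0301")
print(f"v0301 -> dg{dg:03d} volume {vol:02d} on host {host_for_dg(dg)}")
# prints: v0301 -> dg003 volume 01 on host 1
```

With something like this, a usage report that flags "v3206 at 68%" can automatically name the host and disk group involved.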
Ugh, sorry, went off on a tangent, but I'm in this mess right now myself, so I have some very strong feelings about it right now =)
easy! use The Parity System (Score:5, Funny)
Now, here's the secret: take all these zeros and ones, and do a parity check on THEM. BLAM! Your entire array is now down to ONE status bit!!!
Now take a big crayon and write that status bit on a piece of your favorite color paper. Put it up in the machine room for all to see. Or just slip it in your drawer if you think that letting this kind of information out is a security leak. Your call.
Then, repeat the process once an hour or so. Today's arrays are so fast that it shouldn't take long. Each time you get the digit, the zero or the one, compare it to the last output. If it's changed (for example, from 1 to 0 or 0 to 1), then WHAMO, you've got SOMETHING going on, better check it out!!!
This "early warning system" gave me a "heads up" to some serious problems more than once. You might want to check it out; the so-called "storage experts" at EMC didn't even have a package to do this, so you might have to do a little coding in VB, but it's worth it!
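In the same tongue-in-cheek spirit, here's The Parity System in Python rather than VB (my own sketch; as a real change detector it will, of course, miss any change that flips an even number of bits):

```python
import functools
import os

def parity_bit(root):
    """XOR every byte under `root` down to a single status bit.
    (Tongue in cheek: compare this hour's bit with last hour's.)"""
    acc = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                with open(os.path.join(dirpath, name), "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 16), b""):
                        acc ^= functools.reduce(lambda a, b: a ^ b, chunk, 0)
            except OSError:
                pass  # unreadable file; the crayon forgives
    # Fold the XOR'd byte down to one bit: parity of its set bits.
    return bin(acc).count("1") & 1
```

Write the result on your favorite color paper, as directed.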
Granularity and documentation (Score:3, Interesting)
For example, on one recent project on which I worked, a PeopleSoft/Oracle environment was built on a pSeries system. There were instances for every conceivable piece of architecture which led to 50+ file systems. And we're not talking about 50+ file systems off of /, but file systems within file systems within file systems. This was good for separating data but made df an ugly mess.
Conversely, I worked on another project with a homebrew app designed to track tickets. Rather than using a database, the genius who designed the methodology stored every ticket as a 1024-byte file. This caused the system to eat up inodes even though the NBPI was set to 1024, and it added a fun feature: the ls command could not work. With over 200,000 files per file system (all in one directory), ls could not handle all of the files. The homebrew guy actually had to write an app to crack open the inode table just to list them. When you set up your environment, first consider what degree of granularity you need.
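You don't actually need to crack open the inode table for that: ls (and shell globbing) chokes because it buffers and sorts every name, but the underlying readdir(3) interface streams entries one at a time. A sketch (function name and the limit parameter are my own illustration):

```python
import os

def stream_entries(path, limit=None):
    """Yield directory entry names one at a time via os.scandir,
    never holding all 200,000 names in memory at once."""
    count = 0
    with os.scandir(path) as it:
        for entry in it:
            yield entry.name
            count += 1
            if limit is not None and count >= limit:
                break

# Example: peek at the first few names without reading the whole directory.
for name in stream_entries(".", limit=5):
    print(name)
```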
Next, document everything. Consider this situation: an HP Virtual Array with 100 LUNs, each cabled to two Brocade switches for redundancy, going to ten different systems. Would you know just by popping a cable what the effects would be? Documentation is key for managing large disk/file system environments. This also applies to naming file systems, logical volumes, volume groups, and any other part of your system.
commercial solution (Score:1)
Should be available for Linux among other OSes.