Organizing Data Across a Heterogeneous Net?
angst_ridden_hipster asks: "Like many people, I have a bunch of machines I use regularly. These include Linux machines, BSD machines, a Mac OS X machine, and a Windows machine. These machines are on a number of networks. All have internet connectivity. Some of them are always powered on. A few of them are not. Obviously, I have a bunch of accounts. And, it goes without saying, I have a bunch of data. What are the best approaches to sharing data? I want to be able to securely access my home data while at work, and from one machine to another, etc. Opening ssh terminals is the approach I have traditionally used, but I'm beginning to wonder if some mirroring software (e.g., Unison) might be in order. It'd provide the function of backups, as well as guaranteeing availability. Would it be wiser to tunnel nfs over ssh? Or is there some better option?
Assuming I actually start mirroring data across multiple machines, I'll need to organize it in a portable taxonomy. This is almost easy, since I use cygwin on the Windows machines, so I can assume a standard Unix-ish directory structure. But this gets more complicated when there are scripts or other code involved. What about application/platform-specific data? How do other people organize their data, anyway? Are there any useful standards? I'm hoping people will describe their approaches, and why they think they're (not) the best."
Accessing them (Score:1)
Database and rsync+ssh (Score:3, Informative)
Without knowing more about the type of data you're storing, I would recommend putting it in a database. I like PostgreSQL 7.x [postgresql.org] myself.
For the software, I would organize it in a directory structure and use rsync [anu.edu.au]+ssh [openssh.org] to mirror it as needed.
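For example (hostname and paths invented), a cron-able one-liner like this mirrors a tree over ssh:

# mirror ~/src to the backup box; -a preserves perms/times, -z compresses,
# --delete makes the copy a true mirror of the source
rsync -avz --delete -e ssh ~/src/ backupbox:/srv/mirror/src/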
For backup software, use Amanda [amanda.org].
For file sharing, use Samba [anu.edu.au].
'Nuff said.
Re:Database and rsync+ssh (Score:3, Informative)
Re:Database and rsync+ssh (Score:4, Insightful)
Unfortunately, much of the data I have is not sufficiently structured for an RDBMS. To be more specific, I have about 5 GB of digital photographs / scanned negatives, 1 GB of email archives, 1 GB of various and sundry text files, 100 MB of assorted MS Office-type documents, 100 MB of source code (only about half of which is in CVS), and 500 MB of web site source material (Photoshop files, HTML, etc).
So I figure that the filesystem is the best database for this kind of information. But I could well be wrong!
Re:Database and rsync+ssh (Score:4, Informative)
Well, there are several angles to look at. I'm going to hazard a few guesses at the situation, and hopefully I won't be too far off.
Accounts: You mentioned many accounts, so part of the problem could be (not saying that you don't know, just that I don't) different users on different boxes. It's initially easier to use groups to clear up these issues, and tackle account changes later. Create some extra users so that usernames match from box to box, and then group them together so they all can access the appropriate files. This still leaves room for account name matching later.
File System Uniformity: Some people will probably think this is an awful solution, but if you use a single directory (like /mnt) and mount/link everything with identical naming on each box, you won't have the location problems. Sure, it's cyclical to have / linked to /mnt/mylinuxbox on your linux box, but you will always know that your MP3s are in /mnt/mylinuxbox/mp3 (or wherever the hell they are).
Remote Access to your Filesystems: I'm not really qualified for this one, but the NFS/SSH combo is secure and well-tried. If you don't mind the at-home network traffic, you can make life easier by mounting everything on one computer, and then mounting that one remotely. Not recommended for heavy use, but it's easier than managing four connections.
Mirroring is OK if you have specific, regular downtime that the computers can spend, or you have an OC-3 from home to work and great drive access times. The problem mirroring can present is synchronization lag. Unless you specifically set up your mirroring to sync ASAP, what will you do if you make it home before your data does? Live access does two things: you only transfer the files you need, and you don't have to worry about sync'ing. Plus, what's the point of the Internet if not to make information available? : )
Organization: I've been re-organizing my files for years now, and the best thing I've done for most files is to just simplify. I used to make subdirectories for everything. Just recently I have realized the real intent of the "filing cabinet" metaphor...
Filing cabinets are only ever four layers deep: Department (what the cabinet is for - cabinets and drawers are physical limitations, not part of the concept), Group (hanging folders), Project (manila folders) and then files. Sure, you may end up with a lot of "Groups", but that is what alphabetization is for.
Mind you, I haven't managed to change over all of my filing systems to this format. It takes time to sit down and think about what should be where. But it seems (at least to me) like a good thought for personal file organization.
Good Luck.
Hey, that's my question (Score:3, Insightful)
For me, I'd just like a top down, real time view with convenient access of what I have - getting it anywhere and anytime isn't quite as crucial for me.
Maybe you could make a little daemon that monitors your data repositories at several sources and 'merges' the data listings at a central source for publishing to multiple sources again?
Re:Hey, that's my question (Score:2)
Pretty tough when you're confined to doing it all with one hand. With your level of insight, I'd have thought you could have figured that out!
not enough info (Score:2)
Re:not enough info (Score:2)
Sounds like an intriguing idea, something I'd almost consider trying to hack around with.
Re:not enough info (Score:2)
How much data? What I've got is about 5 GB of digital photographs / scanned negatives, 1 GB of email archives, 1 GB of various and sundry text files, 100 MB of assorted MS Office-type documents, 100 MB of source code (only about half of which is in CVS), and 500 MB of web site source material (Photoshop files, HTML, etc).
How much of it do I normally need at any given site at any given time? Not much. But when I need it, I want it available.
Common accounts? Yes, when I can manage it. Unfortunately, I don't have absolute control of all the machines, so I have to have "similar" accounts on some.
Use most? Depends on the task. I tend to pretty much round-robin.
What kind of data? See above.
Backup Medium? Hard drive in a spare Debian Linux box, using rdiff-backup.
Re:not enough info (Score:2)
That said, I would probably put as much stuff as possible on the Debian box. I assume you have total, or near total, control over this box. Set up methods of choice for access, and set up appropriate aliases for outside accounts.
Given the mix, you are going to be limited in protocol. If you can, I'd consider Amanda for backups (put server on box with tape, put clients on other machines).
Good luck getting useful answers.
Sharing data across machines and locations (Score:2, Funny)
What about Amoeba? (Score:2, Interesting)
I found this on google: Amoeba WWW Home Page [cs.vu.nl]
This seems to me to be a unique way of sharing data, since it isn't machine centric. Rather, it focuses on the user and the user's data. I have no experience with Amoeba, but on the face, it seems to answer this person's question.
My question is this: Why has interest in Amoeba dried up? (Or has it?) What with the proliferation of alternative OSes over the past few years, why hasn't Amoeba caught on?
Exchange Server (Score:1, Troll)
Re:Exchange Server (Score:2, Funny)
Re:Exchange Server (Score:3, Funny)
Re:Exchange Server (Score:2)
Use imap for email clients and keep your email on the fileserver.
Re:Exchange Server (Score:2)
I'm sure a whole bunch of slashweenies will now accuse me of voluntarily paying a Microsoft tax, but I can assure you that I've made lots more money out of Microsoft than I've ever paid them, and I get a jolly good ROI on the subscription.
Control of technology (Score:2)
I've long since given up reading the hardware specs for the processors I'm using and expecting to understand every wire on the circuit board and every byte of code in the PROM. (Yes, I used to do this.) It's just all too complicated, and one does wish to have some time left to use the stuff.
It all got too much for me when processors started caching stuff internally, so you could no longer see what they were doing by watching the data fetches with a logic analyser; it was at this point that you could no longer calculate how long a processor would take to do something, because the same instruction might take a different number of cycles depending on cache history; you had to just run the code several times and measure it.
So the fact that I don't have a copy of several million lines of source code that I have no desire at all to spend time reading doesn't bother me in the slightest.
Re:Control of technology (Score:2)
Ironically, your worst nightmare is Transmeta and Itanium - where the CPU modifies the code. You'll just have to face up to the fact that to model a mechanism this complicated, you'll have to write a computer program yourself to predict these complexities. Remember, chip and chipset manufacturers aren't breaking the laws of physics. How the heck do you get a logic probe onto the pin of a P4 without creating a pin-to-pin short anyway? I think it's possible if you use a hypodermic needle - not to inject yourself, but as a logic probe.
What I find funny is that you say you're a computer freak and yet on your CV you state you use Micro$oft Access and Microsoft C++ <Krusty the Clown> Bwa ha ha ha, huh, huhhhhhhhhhhh </Krusty the Clown>
Presumably your embedded development sparked your interest in knowing what the CPU does; well, you'd be pretty stupid to use a P4 in embedded RealTime situations, unless it's supposed to double as a hand-warmer. Make your own CPU using an Altera FPGA.
AFS? (Score:5, Informative)
AFS is an NFS-style implementation though, so you would have to save your files onto a special mount.
Re:AFS? (Score:3, Insightful)
Re:AFS? (Score:2, Informative)
Also, I was frustrated by the process of compiling OpenAFS for my Mandrake 8 box (GCC version crap), and if I ever try to mount AFS when anything is wrong with the network, I know I am in for a serious crash later on. Perhaps these are just my fault, of course.
Hope this helps.
Re:AFS? (Score:2)
I work in a multi-national team; we use AFS to share access to one filespace using AIX, Linux, and Windows. IMHO, it's simply brilliant.
The AFS transport protocol "Rx" is optimised to work well over a WAN. It's definitely *not* NFS, and it has a whole bunch of systems management tools.
A very neat thing about AFS is that it is scalable - it can be grown to meet your needs dynamically: you can add new servers to your AFS cell and move data between servers "live" with no outages.
You can also use RAM cache instead of disk cache for faster access to cached files.
Love it love it love it.
AFS is dependent on your network, which needs to be reliable and fast.
IBM sells the original Transarc version, and now you can also have access to the OpenAFS source ( http://www.openafs.org ).
"The universe is full of magical things patiently
waiting for our wits to grow sharper." --Eden Phillpots
Re:AFS? Not suitable (Score:3, Informative)
Re:AFS? Not suitable (Score:2)
Re:AFS? Not suitable (Score:2)
Instead, read this page:
http://www-personal.umich.edu/~srb/openafs/
i use... (Score:1)
Use the fish (Score:3, Interesting)
"kio_fish is a kioslave for KDE 2/3 that lets you view and manipulate your remote files using just a simple shell account and some standard unix commands on the remote machine. You get full filesystem access without setting up a server - no NFS, Samba,
It works through SSH, so everything is encrypted.
I use this with the konqueror file browser, but all KDE apps can transparently access files on remote hosts using this amazing utility, which required no special setup on either end, at least on my systems.
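For instance, a location bar URL along these lines (host and path made up) browses the remote box over ssh:

fish://user@somehost.example.com/home/user/photos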
Solved all my data sharing needs - and andromeda [turnstyle.com] solved the rest :)
It's called a server (Score:5, Insightful)
With the right kind of server, it can do AppleShare, NFS, and SMB, allowing all your other machines to mount the shares and make them appear as local drives. This keeps all your data in one place, allowing for easy backups, and also makes it easy to get at the same files from any computer.
My personal preference is a Linux computer with several cheap IDE drives, each on its own IDE controller (no slave drives). The drives are configured as software RAID 5 with ext3. Regular backups are set up through cron to a tape drive. Samba handles file sharing, printing, roaming profiles, and PDC duties for Windoze. Netatalk 1.6cvs handles file sharing duties for pre-OSX systems. NFS is used for file sharing to *nix systems. The only thing I'm missing is a NetInfo daemon for Linux so it can act as a complete configuration server for NeXTSTEP, OPENSTEP, and MacOS X systems.
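The tape side of that can be a single crontab line; roughly this (tape device and paths are whatever yours happen to be):

# 3am nightly: rewind the tape, then dump /home to it
0 3 * * * mt -f /dev/st0 rewind && tar cf /dev/st0 /home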
Server yes! And NetInfo vs. LDAP (Score:4, Informative)
Trying to maintain coherency of data via replication across multiple machines is begging for trouble -- this is a hard problem that to my knowledge has not been solved in a clean, cheap way.
If you want to use NetInfo for Mac OS X, create a new port from the Open Darwin [opendarwin.org] sources. There's a port of an old NetInfo server module for Linux floating around, but it's not what I'd call up to date.
A better choice would be to use OpenLDAP, as Mac OS X is designed to pull directory service info from an LDAP data source. Windows systems can also pull from LDAP, as can Linux and *BSD and Solaris and so on.
--Paul
Re:Server yes! And NetInfo vs. LDAP (Score:2)
Now what about when there's a laptop in the mix? It would be simple to flag specific files as "current" and have them copy over to the laptop regularly (use rsync and do an update just before leaving), but what about user accounts? How easy is it to have the laptop use a remote NetInfo or LDAP server when available, but a local one when on the road or plugged into a remote network? Obviously the local one would have to sync to the real one regularly as well.
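The "update just before leaving" step might be nothing more than this (server name and paths invented):

#!/bin/sh
# refresh the laptop's copy of the "current" set before hitting the road
rsync -avz -e ssh server.example.com:/home/me/current/ ~/current/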
Re:Server yes! And NetInfo vs. LDAP (Score:2)
--Paul
Re:It's called a server (Score:2)
The drives are configured as software RAID 5 and ext3.
Ext3 on a production server? You're braver than I... it's still an experimental FS (from 2.4.18 make config):
IDE will work fine for most small offices and you've got the "no slave devices" right for RAID. My personal preference for small offices is two large IDE disks in RAID1 with a tape backup.
Re:It's called a server (Score:2)
Alternatively...
I've been using Reiser on my production machines with not a single hiccup, and I know of many others who do the same. For that matter, ext3 is used (reliably) in a lot of places as well. rpmfind.net is one that comes to mind.
I know at least one guy who absolutely swears by XFS since it's not a "new" fs like Reiser and ext3 and has actually been used in production for years now. I'm thinking of giving it a try soon.
It's really hard to go wrong with any of the journaling filesystems available in Linux these days. The visible differences between them are fairly small, and which one you choose will depend mainly on whether you have any special needs. (For example, ext3 is forwards and backwards compatible with ext2, NFS is noted to be more cranky with some filesystems than others, etc.)
Re:It's called a server (Score:3, Informative)
First is a secure IMAP server for centralized email. This allows any SSL-enabled IMAP client to access your mailbox. Also, Squirrelmail [squirrelmail.org] running on an SSL web server can give you access to your centralized mail repository from any web browser.
SMB and NFS are the obvious choices for LAN-based access, but WAN access needs more care. I think that a VPN setup using CIPE [sites.inka.de] is a good approach. Once the CIPE links are built, you can use most services as if you were located on your wired LAN.
The other need might be for file access from "arbitrary" locations. In addition to the normal scp and sftp apps in OpenSSH, there is a nice SCP client for windows, WinSCP [winscp.vse.cz]. Lastly, if you have a SSL web server there already, Web-FTP [web-ftp.org] will give you access to your files via https.
This sounds like a lot. In the end, you would need to expose SSH, SSL IMAP, SSL Apache, and CIPE servers. I am midway through this deployment myself, but it has stalled a bit because one of my primary Internet access points started disallowing outgoing SSH.
I use WebDAV (Score:4, Informative)
On the other hand, if you have a computer that is always on, that can run Apache, you can have your own personal WebDAV [webdav.org] server instead. Simply install mod_dav, and access it through mod_ssl, and have a secure web-based filesystem.
Better than NFS, you can mount it on Windows (through web folders), Linux (through davfs) and Mac OS X (through the native DAV file system client that is designed to run with iDisk).
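For what it's worth, the mod_dav half is only a few lines of httpd.conf (paths here are illustrative):

# httpd.conf: mod_dav needs a lock database, then DAV is enabled per location
DAVLockDB /var/lock/apache/DAVLock
<Location /dav>
    DAV On
</Location>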
NOTE: I work for Xythos software, and we make an enterprise-level WebDAV server called the Xythos WebFile Server. It's significantly more expensive than free, and we run in-house copies of the product (y'know eat your own dogfood), so that's where I keep my shared data, but if I didn't, I'd have mod_dav running right now.
Re:I use WebDAV (Score:2)
I use vtun - vtun.sf.net.
I understand that openvpn.sf.net is nice, too.
Excellent Example of WebDAV (Score:2)
Re:I use WebDAV (Score:2)
Try CVS. (Score:1, Informative)
Re:Try CVS. (Score:2, Interesting)
Add a bit of clever scripting, and you might also handle whole dirs automagically (cvs works on individual files).
One word of caution: Be careful with binary files, and programs that restructure files, since that's not what cvs is made for (you can flag files as binary, though).
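For example, the -kb flag at add time tells cvs to skip keyword expansion and diff/merge on a file:

# mark an image as binary so cvs never tries to merge or expand keywords in it
cvs add -kb logo.png
cvs commit -m "add logo (binary)" logo.png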
Mirroring concept (Score:1)
No, really, it would not provide the "function of backups", because the purpose of backups isn't just recovery from hardware failure, but also from accidental deletions, incorrect changes to files, etc. If something like that happened, the mistake would simply be mirrored to the other machine. I'd hate to lose my data.
Oh no! The feds are coming! C:\>delete *.mp3
Ooops, it was just the cat wanting in. $%$#% mirror! I wish I had a backup.
Re:Mirroring concept (Score:2)
I was thinking more about the hardware failure issue.
How about webmail + a version control system (Score:1)
I don't know if this applies (Score:3, Interesting)
My solution was to write a series of little scripts to copy data from common share points on each machine to a large, central data store, and into a "backed-up" directory on the workstations. Presently my central data store is 600GB of IDE disks in a RAID1 array (10 disks total). If I lose the central fileserver, all my data - and the scripts needed to recreate the information in that 600GB - is sitting out on my workstations.
It's kind of a brute force approach, but it works OK. I'm not sure how well it would work for non-local systems, though.
I'm sure there are better ways to do what I do, too, but it's nice to have a single place to look for my MP3s or whatever, while knowing they're backed-up in multiple locations as well.
Re:I don't know if this applies (Score:3, Funny)
Your apartment looks like the one in "Pi" [imdb.com], doesn't it? Are any of these computers currently calculating a 216-digit number that you'll use to predict the stock market?
:-)
Re:I don't know if this applies (Score:2)
Your apartment looks like the one in "Pi", doesn't it?
Such an apartment would be incomplete without stimulant pills in the bathroom medicine cabinet to replace food in the refrigerator.
Oh, and don't forget the electric drill!
Re:I don't know if this applies (Score:2)
But yeah, with a full-size APC rack, a pair of IBM RAID cabinets, a Cisco 5005 and five machines in a walk-in closet, I can definitely get into the Pi sort of atmosphere.
Re:I don't know if this applies (Score:2)
It's a bit loud, but we got used to it..
Re:I don't know if this applies (Score:2)
And what appears to be either Beige G3s or 7200/7300/7500/7600s.
Damn man, what are you using those machines for?
(tempted, because my own college dorm room is starting to look like that, which is why I am probably going to have to move off campus junior year)
Re:I don't know if this applies (Score:2)
You're right. There's a G3 (AppleShare IP server) and a 7200 that used to be running NNTP software, but my upstream news server sucks, so now it just sits there.
Some of them are co-located, but others are web servers/etc... typical stuff.
Re:I don't know if this applies (Score:2)
I'd mod your comment +1 Funny if I hadn't already posted. I got the closet only after I promised to move all my stuff *OUT* of the living room.
Re:I don't know if this applies (Score:2)
cable MISmanagement is more fun.
Raid1? Are you on crack? (Score:2)
RAID5's overhead is a fraction of RAID1's, and as long as you don't have a lot of drives fail at once (which is rare, and RAID isn't a replacement for backups anyway), you're much better off.
Separate home and work! (Score:5, Insightful)
First of all, separate your home life and work life. Then separate the data. I understand that once in a while you need data from one place at the other, but avoid those situations.
At work: that is IS's problem. Store all work data on the work machines, and make IS do the backups. Use SSH, or other VPN when you want to work from home. Compile (or whatever) at work as much as possible. If you have data that you need on the road, get a laptop or PDA for work, and synchronize that when you are at work.
At home: set up a linux box (a 386 is enough, though you might want more) with a big disk, a UPS, and a network card. Put it in a closet or on a shelf. Install Samba and Netatalk; NFS comes built in (and though there are better options than NFS if you look, NFS is there). Use one login for all machines.
Laptops are a problem, because you often want to use them where you can't get to the network. The first solution to that problem is 802.11. Use it at home, and look for open access on the road. With a good VPN (ssh+nfs) you can get to your network server from many places. I manually synchronize only the files I need, but my laptop is rarely used outside of 802.11 areas; if you travel often, then you might need more. (Coda? AFS?)
Re:Separate home and work! (Score:3, Insightful)
For Pete's sake, this is a recipe for disaster! In the 5 or 6 companies I've worked for, every time the IS department managed someone else's data, they screwed it up! No one knows the value and purpose of your data better than you, so why on Earth would you allow someone who doesn't give a rip about it to manage it?
I would suggest using the IS department's resources and knowledge to help you manage your data yourself. Then, you have control of the backups, etc.
Re:Separate home and work! (Score:2)
I have roughly 10 computers in the house and 4 users.
I have one main Linux machine with several 40GB drives that basically holds everything for the Linux and Windows clients (some dual boot, some static) and the web server (another Linux machine).
The main Linux machine has Samba and NFS. All other Linux machines mount a single
The only thing I need to backup is
Mac n' Windows (Score:2, Informative)
I have a server on a public IP address that runs SAMBA, but only accepts connections from 'localhost'. From my Windows box and iBook (running OS X), I just do a bit of SSH tunneling, and I'm able to mount the machine from anywhere I happen to be.
As far as I can tell, it's reasonably secure, and it works just fine for general files.
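The tunnel itself is a one-liner; something like this (hostname invented) forwards the SMB port over ssh:

# forward local port 139 to the server's loopback-only Samba, then mount \\127.0.0.1\share
# (on Windows you may need to free local port 139 from the native SMB service first)
ssh -f -N -L 139:127.0.0.1:139 user@myserver.example.com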
I also have a CVS repository on the server for my development projects, but that doesn't work so well for binary files like images and Word documents.
One of my friends keeps his files synchronized via an htaccess-protected website which allows him to download and upload files. If you're interested, I'll see what I can do to track down his PHP script.
How do you define data? (Score:2)
Non-platform-specific data, like MP3s, I have migrated to my Linux machine, because I just installed a 120GB drive in it. I run GNUMP3d as an MP3 server. I run my own web server, and access it from anywhere if I need to (I use dyndns and dynamic-IP DSL). If I need to lock things down, I can permission the directories. If I need to get in for something from outside my network, I can quickly SSH in (using PuTTY), unlock them, get the data (HTTP, FTP) and then lock them back up.
I guess I don't see the real usefulness of having a completely seamless network. Different systems need different files, and anything platform-independent can go on a file server. At home, on an isolated network, it is pretty quick. Oh, and Samba. Gotta have Samba. If all your shared data is on one server, it is easier to back up also.
It is certainly possible that I am missing something, but I don't really see the point of what you are asking about.
How about a Fileserver (Score:2, Informative)
If you need access to the box from work, allow ONLY ssh/scp access from your work IP ranges through the firewall, and/or setup something that will do what you want.
You're not specifying what KIND of data, so it could be mp3's, text, whatever. The methods of accessing different types of data are many and varied, so be more specific - but, if you want to keep a data repository, you MUST have ONE and ONLY one source of data for ALL of your systems. If you don't, then you'll run into problems.
Hetero-Data (Score:2)
When an application needs the data, use the optical sensors mounted on your head to view the data, then translate it into the application using the computer's keyboard.
Patent Pending.
here's what I've got... (Score:5, Informative)
Consistent directory structures
Categorize your data
rsync / CVS
IMAP
Samba
Dump
Wiki
Hope this helps you out. I'll be interested to see what things other people do.
e.
Re:here's what I've got... (Score:3, Funny)
Wiki
This tool is just starting to be useful to me... I run it on a webserver, and use it for all the small text files that would otherwise litter my home dir. Little things like phone numbers, the name of that song, todo lists, whatever. I use the "pikipiki" version myself, as it's small and fast and feature-free.
I've got some feature free code for you:
int main() { return 0; }
Re:not bad. (Score:2)
first impression (Score:3, Insightful)
Now this is not totally fair, since it implies a pointy-haired-boss situation. All it really means is that you would have to have a better definition of the problem.
What it seems that you really need is an application - a database - that would constantly monitor in realtime the status and availability of your various resources. This would tie into your other data services so that when you do a query on "XP sourcecode", or whatever, one of the results you get is from this resource monitor database saying "the resource is offline" or "the data is available, but you don't have access rights", etc., depending on the resource status and other realtime situations.
It occurs to me that clever design of the database may allow the resource availability query to run in advance of the actual access of the data, so that you do not get a crash if a child record or whatever is unavailable.
Currently, I do not know of any tool that does this, although obviously this is not my area of expertise.
Re:first impression (Score:2)
So then the question becomes how workable is Active Directory? And what are alternatives that could be open source?
I have heard a certain amount of discontent with it.
Use what already works... (Score:2, Interesting)
Why not use Gnutella or a similar P2P system? There are clients for basically any OS out there, the files don't have to reside in a central location.
It works for the internet - Why not your own 'mini-internet'?
One modification you would want to make is to get it to produce a listing of all that you have.
Could you use SSH tunneling with a system like that?
huh? use a standard file server. (Score:2)
This way the data is in one spot, but it's much less vulnerable to hdd failure. Plus, since it's on a *nix machine, you can export it to your clientele.
don't use NFS (Score:5, Informative)
NFS is used very often to mount home directories. But what is stopping someone from unplugging the workstation, plugging in a linux laptop with the IP of the legitimate workstation, mounting the share, and running "su - user"? Voila - you now have all the user's files.
That's just the simplest way. The problem is that most NFS implementations don't have *any* authentication except IP authentication, so other DNS attacks would work as well.
I am surprised that the most widely used network file system implementation for Linux and most POSIX OSes has no real authentication. There *has* been authentication built into the protocol since version 3, but last time I checked, it was not supported on Linux. I was told by one guy working on the project that the problem was that there's no crypto in the kernel.
I used secure NFS on Solaris 8 for a while, but I constantly lost the mounts. That may be fixed by now, I don't know.
Use AFS, CVS, rsync, intermezzo, or something. But I would stay away from NFS.
It's called "The World Wide Web" (Score:2, Interesting)
Seriously -- run a webserver + WebDAV on each of your machines. Then you can read/write from anywhere, and with any platform.
Systems like YouServ/uServ [cmu.edu] provide a webserver, access control, and mirroring/replication support in a single package. This way, as long as at least some of your machines are online, the data from every machine remains accessible. Unfortunately the system is not available for general public use, but it may be open-sourced soon.
Re:It's called "The World Wide Web" (Score:2)
Back it all up using gnutella (Score:2)
That's what it's for, right?
Seriously, I think it would be great if there was a P2P backup system. Private files could be encrypted, and everything could be uploaded to multiple peers. Obviously some sort of trust system would have to be worked out, but it could work. Even if I just connected to myself and two or three real life friends with DSL connections, it'd be great to have my files accessible everywhere, almost all the time.
Shared filesystems & well named directories (Score:2)
The point is, everything is stored and "backed up" centrally, but accessed using a different mechanism depending on where I'm at when I need my data. Since I don't delete files accidentally, mirroring works fine for a backup - I'm really only concerned with drive failure.
I then structure the directories according to type of file. I've got a documents directory where I keep anything I create myself. Specific projects that require multiple files generally go under documents/projectname. I've got a music directory, and many subdirectories under it:
music/fullalbums/artistname/albumname/files.mp3
music/music
Etc. Then software. apps/isos. apps/windows. apps/linux. And so on, and so forth.
rsync + ssh + logout scripting + cron (Score:3, Interesting)
Set up your accounts to rsync-upload changes to whichever server is most secure when you log out, and use a cron job on that server to rsync-download to all the other servers nightly. You can make a tar backup part of the system also.
You will have to remember what's going on so you don't modify the same file differently on two different systems within 24 hours. If you want to overcome that shortcoming by making this work on an immediate sync basis rather than periodically, you'll need something like SGI's fam (included with recent linux distros) to trigger the updating processes.
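A minimal sketch of the two halves (hostnames and paths are placeholders):

# appended to ~/.bash_logout on each workstation: push changes to the trusted box
rsync -az -e ssh ~/work/ secureserver:/backup/work/

# crontab on the trusted box: fan the master copy back out nightly at 2am
0 2 * * * rsync -az -e ssh /backup/work/ otherbox:/home/me/work/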
You should already be 90% there if you have your ssh keys set up for passwordless login. Passwordless PKI logins are not significantly less secure than passworded logins in most situations (granted hostile system management can get you, but the BOFH can trojan your login anyway).
Lots of people use this technique to sync CVS trees over slow links. Rsync is very efficient for that kind of thing (large volume of files, low number of changed bytes).
Can there be only one? (Score:5, Informative)
First, I try to adhere loosely to the FHS [pathname.com] for ideas on overall organization. Even though it's mostly intended for POSIX systems, following their philosophy will really help you separate your data from your platform-dependent program files and libraries.
Most of my important stuff goes on the Linux server in /home (on an IDE software RAID1). However, I try to limit files in here to stuff that's absolutely essential, to keep the size down. I occasionally mirror this offsite to my friends' servers with rsync (with the private stuff pgp-encrypted). I try to make browser caches, etc., symlinks to dirs in /tmp. Try to keep only the stuff you created yourself in here.
I keep media and downloads on a plain partition under /home/ftp/pub (which is also symlinked from the http document root). That way, all my computers can easily get access to music and installers and junk.
Samba helps win32 boxes access the /home and /tmp directories.
NFS exports /home to the other UNIXen, as well as /usr for the other machines with the same CPU arch. It should be acceptable to export /usr/share to other UNIXen with different architectures.
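The matching /etc/exports entries are short; something along these lines (addresses made up):

# /etc/exports on the server: /home read-write for the LAN, /usr/share read-only
/home       192.168.1.0/255.255.255.0(rw)
/usr/share  192.168.1.0/255.255.255.0(ro)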
I'd like to set up Coda [cmu.edu], since it seems to support more kinds of clients than Intermezzo. These support disconnected operation and are good for laptops. For the meantime, I just use rsync to mirror home dirs onto my laptop (and just keep track of stuff that I change on the road manually :/ )
No thoughts on how to combine everything into a distributed FS so you could have parts of, say, a music archive living over several machines. There are several projects for Linux-only (PVFS) or Win32-only (more advanced network neighborhoods). I'd say your best bet for convenience is just to make sure everything is visible from your one server and re-export it from there (invest in a switch so it doesn't deadlock your network). Until better DFSes exist, though, I think you'll get better performance and less confusion from running everything on one beefed-up server with a RAID (or two if you want failover).
Unison works, perfectly. (Score:4, Insightful)
At one point, managing all my data (I would change a bit here, and a bit there, then try to copy and synchronize by hand) was manageable, but I got real tired of it real fast. I considered putting together a CVS server, and then synchronizing that way, but it's really overkill and not a very user-friendly solution anyway.
Enter Unison [upenn.edu]. Now I just have a few directories designated as shared, and they get synchronized by Unison automatically. At home, my data is on a FAT partition, which is accessible to both Linux and Win98.
The good thing about this is that since I synchronize with the laptop when I'm connected, I get to use my data even when I'm on the move - not so with NFS. And I get free backups as well - I have roughly 2 gigs of data, which would be a hassle to back up any other way. Besides, if I made tape backups, I would have to manually carry them off-site in case of a fire; now Unison takes care of backups to and from my remote machines.
inferno! (Score:2)
the connectivity and security are as versatile as unix pipes (or more so); also you can write programs for it that run without change (really!) on any supported platform ('cos it provides an OS level view of everything rather than trying to shoehorn itself into the parent environment like java).
the security model is public-key based and because it's end to end, you don't need to worry at all about little things like 802.11 insecurities...
plus it's all small, clean and beautiful as befits something coming from CSRG at bell labs.
centralize and distribute. (Score:2, Informative)
In your home directory, create a folder you are going to put your mount points in to mount the data stores you need.
On all the other systems, create a share that will contain the data you want to access "anywhere". On the central server, mount all of these shares in that mount-point folder. This may be nfs or cifs, as the architecture of the servers dictates.
As this is all mounted under your home directory, you can go to just about any system on the network and remotely mount all of your folders by mounting your home folder from your primary server.
To remotely access this storage center, use either nfs over ssh, or build appropriate links into your web pages and run a secure variant of apache.
I also recommend keeping your work data in a separate storage area from your personal/home data. You may recall that Northwest Airlines successfully sued to get the personal computers of flight attendants who they believed co-operatively negotiated a sick-out strike. Keeping your personal data completely separate would reduce the likelihood of losing your entire computer setup if someone at work files a complaint that they believe you are doing something wrong.
There are other advantages to this kind of a setup. By centralizing your data storage tree, it is easier to perform backups: you will only need to back up the one server's home directory, tracing into the peripheral servers. If you wish to set up a thin client in a bedroom, or someplace where you don't want a lot of fans going, this gives you a platform ready-made for your storage needs, as well as a reasonable terminal server. I think you get the idea.
-Rusty
My approach (Score:2)
Sharing data files is easy with the NFS/SAMBA combination - e.g. non-Windows machines mount my home directory as
Sharing software is less easy since none of the common UNIXy filesystem layouts really let you have binaries for multiple platforms available at once. There are unconventional layouts that do this, but you'll have to compile a lot of things yourself and mess with configure scripts a lot... I've given up on sharing binaries and libs; I just run Debian on as many of my systems as possible, and run a script now and then that ensures the same packages are installed on each machine.
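One way to script that package-list sync (hostname invented) is dpkg's selections interface:

# copy this box's package selections to another and install the difference (run as root there)
dpkg --get-selections | ssh otherbox 'dpkg --set-selections && apt-get dselect-upgrade'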
For remote work I use SSH to set up a VPN. However, unless I'm on a very low-latency connection, I find it difficult to use a shell remotely, much less NFS. I usually end up manually rsync'ing the files I need.
cvs & rsync (Score:4, Insightful)
This works well for me to keep about 30 accounts in sync; most of them just get a minimal checkout of my home directory (5 MB or so), while 3 or 4 get the whole home directory and rsynced files (5 GB). The CVS repository is about half a gigabyte in size these days.
Once something that allows proper file rename tracking, like Subversion, comes along, I plan to stop using rsync altogether, and just check all the files in.
As has been noted elsewhere in this thread, one of the key things is coming up with a consistent directory structure and sticking with it.
root directory: (Score:2)
/warez/
/mp3/
oh filter, why must thee filter my comment
I tried the same thing at school... (Score:2)
Being an engineer, I thought of a bunch of complicated, distributed ways of doing this, but settled on just leaving the data in one place and SSH'ing to that box.
The benefits of keeping it simple were:
1. No new work, which is good for the lazy^H^H^H^Hefficient among us.
2. Data coherency. If it's only ever in one place, it's hard to mess up.
3. Backups are easy, since you're only backing up one data set.
4. Did I mention no new work?
As much as data sharing on a heterogeneous network would have been nice (Linux box at home, Suns in the lab, Windows at my parents' place, iBook in my backpack), the marginal utility of that data sharing was low compared to the marginal cost of actually doing the work to make it happen.
My vote is for keeping the data in one place and remembering how much you love the terminal. Not a sexy solution, but it works.
RSYNC (Score:3, Informative)
Okay, you *could* use some form of networked file system, but a) your laptop and other machines would need to be connected to use it, b) I hope you are willing to fight to get a good implementation to work, and c) I hope you aren't playing with big files.
I use rsync. I have ~/Makefile, 'make sync' works wonders. Here's the contents:
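Roughly along these lines, anyway - the hostname and paths below are stand-ins, not my real ones (recipe lines are tab-indented, as make requires):

# ~/Makefile: -u (--update) skips anything newer on the receiving side,
# so running 'make sync' from either end converges both copies
sync:
	rsync -avzu -e ssh ~/data/ homebox:/export/data/
	rsync -avzu -e ssh homebox:/export/data/ ~/data/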
On the laptop:
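# same idea from the laptop's side: pull the fresh stuff first, then push
sync:
	rsync -avzu -e ssh homebox:/export/data/ ~/data/
	rsync -avzu -e ssh ~/data/ homebox:/export/data/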
Works like a charm
We need a HOWTO (Score:2, Interesting)
Re:We need a HOWTO (Score:2)
Re:We need a HOWTO (Score:2)
Sure, a HOWTO could be written, but you could also write a HOWTO about how to be an attorney or a plumber - it wouldn't do the field justice. You need, at the *very* least, a good solid book. Or a turnkey solution (wherein you are trading money for experience) like the netstorage boxes that you plug into the network and speak a half dozen protocols for all your systems to talk to.
--
Evan
Web Design (Score:3, Informative)
I do a lot of back-end web development. As such, I usually like to copy the entire site down to a local machine, work on the system, upload to a test machine, test, and then move to the production machine. Unison has made my job a lot easier than using a bunch of ssh scripts, since unison automatically checks for changes and only copies over files with changes.
A sample script is as follows:
From my local file system $HOME/web/(website) I execute the following script
unison -auto -batch include ssh://user@somehost.com//www/(website)/include
unison -auto -batch www ssh://user@somehost.com//www/(website)/www
This script pulls all my programming work in include, and the website-accessible files in www, to my local system... I then work on the files and upload using the following script:
unison -auto -batch include ssh://user@testhost.com//www/(website)/include
unison -auto -batch www ssh://user@testhost.com//www/(website)/www
I then check the coding on the test host, and when I get it to the point I want, I upload it to the production machine...
If I have problems on the test host, I can go in and remove all files on my development system and pull a fresh copy of files from the live site...
Since I don't need to program and compile on different systems - just upload to the test and production machines - it works well.
Recently I took a trip and did not have access to my local system. I was able to borrow a windows system, and after installing PuTTY, WinSCP and unison I was up and running within 10-15 minutes at the remote site, which allowed me to get back to work.
The problem with using a remote mounting system is that you have to maintain network connectivity while working on files, not always an option, plus you are working with the live production files...
So basically I use unison just like a cp command except that it does not copy files that already are synced between systems and it automatically keeps my permissions sync'd as well.
Hope that helps
Here's how I stay organized: (Score:5, Funny)
Segregate the data, manage each. (Score:4, Informative)
There are really three (or more) separate data issues that you have to deal with.
Like most, I have many accounts, and just manage them on the fly. My data is retrieved manually when I need it. SSH (and scp), VNC, etc. This usually does the job.
Not the easiest way to do it. Especially when I recently changed jobs and had to set up new data and profiles - I thought, there must be a better way to do it.
So, here's a breakdown of the problems, and suggested fixes.
Break it down into 3 separate sets of data:
1. Profile data - Your shell scripts,
2. Daily Documents - My Documents folder, data directory. Limit this to stuff you need in ALL locations (though you could have a personal and a work version...) and on a regular basis.
3. Archived files - Infrequently used, but you occasionally need to access them from various places.
Then, the problem becomes much simpler. Instead of a grand scheme to manage all three of these at once, you have three smaller, simpler problems.
Here's my suggestions:
1. Profile info - This wasn't originally my idea, but the best thing I've found is to use CVS to manage the files. You'll also have to set up your shell scripts to detect the OS / machine you are on and run OS- / machine-specific versions.
For example:
Detects OS, runs ~/.profile.d/linux, ~/.profile.d/win32, ~/.profile.d/macosx, etc.
Detects hostname, runs ~/.profile.d/hostname.
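In sh, that dispatch might look like this (the .profile.d layout is assumed):

# ~/.profile: pull in OS-specific and host-specific pieces if they exist
case "$(uname -s)" in
    Linux)   os=linux  ;;
    Darwin)  os=macosx ;;
    CYGWIN*) os=win32  ;;
esac
[ -r "$HOME/.profile.d/$os" ] && . "$HOME/.profile.d/$os"
[ -r "$HOME/.profile.d/$(hostname)" ] && . "$HOME/.profile.d/$(hostname)"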
Put core stuff in the
The rest, usually doesn't change.
Add it all to CVS on a personal server. Then just checkout to each account you have. cvs update will keep it up to date if you change the master copy. You might need a special
Then, you have the same profile files on all of your machines. Got a new
2. Daily use Documents. This is a mix. Perhaps you could use a separate CVS repository. Or, use rsync and rdiff-type backup sync programs. The key here is to keep this to a minimum. How much do you really need, and how much *must* be in sync between all your machines at all times? Again, this is fairly easy for a small number of documents, so don't let it get out of hand. If you don't use the file all the time, and don't need to maintain changes, then push it to archives.
This is the issue that most other posts address, so I won't get into too much detail. All those solutions are much easier with a small number of documents.
3. Archived files. This is probably what you were really asking about with regards to NFS and sharing files. These are the files you need every so often, stuff like your mp3 collection, downloaded software, extended (non category 2) documents, and the like.
For these, it depends on your setup and level of network access (the speed is important too). rsync might work if you need a locally cached copy, but this is much easier if you leave it in one place. Setup a gateway on your home network with IPSec or PPTP. Or, find WebDAV or some internet accessible filesystem you can use (NFS or SMB even, depends on your security needs). Then, connect to the central repository when you need these files.
This can be large, but keep it so that you don't need to synchronize frequently, and preferably only in one direction. You listen to your mp3's, but you don't change them frequently. Same with your downloaded tar/zip files of software you've collected. (Face it, having a single directory with cygwin, mozilla, etc - all the software you have installed at each location - is much easier than finding and downloading them all from their various sites each time.)
Or, for these files, if you really don't need them all the time, leave them on the central server, and scp them when you need them.
--
So, that pretty much covers it. I hope these suggestions are useful. There comes a time where managing it on the fly just gets too cumbersome. (You'll know that time - it usually happens right after you wipe out some vitally important data because you didn't synchronize the files.)
Beyond this, you can always add all kinds of stuff. Some examples: ACAP (a configuration file server, I use it with mulberry, my IMAP client. It lets me set preferences), Kerberos for common authentication, LDAP for an address book or netscape roaming profiles, the list goes on and on.
What would be nice is a set of scripts to help manage this.
Imagine, getting a new account and typing "pullprofile", and having your environment and data all retrieved, pulled from your central server. Then you could have login and logout scripts to synchronize the data, or just manually (possibly remotely if you forgot to sync before you left work) run them. A cron job to synchronize the big data store overnight.
I'll keep dreaming, and keep looking on freshmeat and sourceforge for a project like this. Maybe one day I'll get up the energy to start it myself, but don't count on it.
;-)
~Jonathan
AFS + kerberos (Score:2, Informative)
http://www.openafs.org [openafs.org]
http://www.pdc.kth.se/heimdal [pdc.kth.se]
Samba + VPN (Score:2, Informative)
1. Set up samba on the reliable (linux) machine, with proper tape backup, etc.
2. Firewalled the segment (which included their desktops) with a WatchGuard SOHO router (about $500 for 25-user support; runs linux)
3. Set up Mobile User VPN on the firewall, and any laptops that might travel out of the office.
Samba and SMB are not the world's fastest solutions, but it is nice to have directory browsing in winders and macos. Samba is easy to set up - my first install of a samba PDC took only about 3-4 hours (and I never touched it again). If you need real speed for transferring large files, you can always use SSH and SCP (putty and pscp for windows, niftytelnet for mac). Just always attempt to maintain a central data server, back it up as needed, and you'll be successful in clearing the data clutter.
KISS (Score:2)
My supervisor swears by one of these things... He used to have a complete mess of redundant files all over the place and could never remember which was the most current. Now it's easy. The VST drive is the definitive version.
Of course, there is an outside chance that you could lose the drive or the data be destroyed, so make a habit of backing up (using rsync or something similar) on a weekly, or even nightly, basis to a more secure machine (a desktop, for example). You could probably set up a nightly cron job to run that would check to see if the drive is connected and backup if it is. That way, backups for you would be as simple as connecting the drive when you get home...
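The cron job is only a few lines of shell (mount point and destination machine invented):

#!/bin/sh
# nightly: back up the portable drive, but only when it's actually plugged in
if [ -d /mnt/vst/data ]; then
    rsync -a /mnt/vst/data/ desktop.example.com:/backup/vst/
fi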
tar and ssh (Score:2)
Well, let's say you're working on a unix system and it crashes or loses its configuration, and the network underneath it gets reconfigured. I find the best solution for moving data and preserving permissions is a tar pipe through ssh or rsh. Cpio and other stuff might work better, since tar has problems with deep directories. But here's what I use:
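The exact command is a matter of taste, but the classic form of the pipe (host and paths invented) is:

# pack up the current tree, preserving permissions, and unpack it on the far side
tar cf - . | ssh user@otherhost 'cd /dest/dir && tar xpf -'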
But AFS looks cool. Does anyone know how secure it is?
SQL Database (Score:2)
in massive Oracle SQL databases.
Obviously, if you are asking this question, you don't need such a high-powered system as this (we have a big-iron Sun machine that does the serving).
However, buying a powerful Dell server and running Access on Win2K would give you a decent SQL system to work with.
Applets can be written for any platform, which will all use SQL and can then translate the results of the query into native stuff for the computer they're on.
Furthermore, look into Macromedia ColdFusion. CF can be used to quickly create web-based systems which interface with an SQL database ridiculously easily. (My department does just this.)
You can use a web app and a database to retrieve data and upload data, perform authentication, all sorts of great stuff.
My setup (Score:2)
Dev box (PC - W2K/Linux)
Server (PC - Linux)
Firewall (PC - Linux)
Laptop (PC - W2K/Linux)
GF's machine (old Mac)
PDA (Psion 5)
All my working data lives on the server and is available to the other machines via Samba, NFS or netatalk. Backup via DDS3 on the server using afbackup, "minimal restore information" encrypted and mailed to a free webmail account so I can get to it if, say, the server catches fire.
Laptop has a directory under my ~ called "mirrored" which contains my current working set of stuff from the server. This is synced using unison whenever I come back from / head off on a trip to the office (I work from home 3 days a week).
GF has a home dir on the server which is visible on the mac desktop and has been told "put stuff here, it gets backed up, put stuff anywhere else, it's your problem."
Dev box and laptop are dual-boot linux/W2K, with a VMWare install running inside linux set up to boot from the physical W2K install, which can see both ~ on the host machine and on the server (if connected).
PDA syncs with Outlook (no email, just calendar and tasks) on the dev box - VMWare or real hardware, works just the same and the data is visible in both as it's the same physical drive.
Everything works very smoothly, except for:
- Unison, which dies if it tries to use more than 64M of RAM to do the sync. This has only happened to me once, when trying to sync about 40-50,000 files in one go. For normal day-to-day jobs, I've never had a problem with it.
- The W2K VMWare session on the dev box losing the serial port occasionally, which means I need to reinstall the port or boot into native W2K before I can sync the PDA. Not really a problem as it only happens very occasionally.
How I do it (Score:3, Informative)
I've been trying to deal with the same problems as you for several years. I have a Mac running Mac OS X, a Windows PC, a Linux server, and a NeXT around my desk. I have two large hard drives. One is in the Mac, and that holds my home directory; the Linux machine has all my MP3s. My home is exported via NFS and is mounted on the Linux box and on the NeXT, so I always have live access to my files. The Windows box only does my TV program and Kazaa, so I'm content to simply have it use FTP to copy files back and forth (I haven't found a decent Windows NFS program).
It all gets the job done, and it all works smoothly. Printing is done by IP printing to my big 'ol LaserJet. All the mail is kept either on my server at school, or on the cyrus server on the Linux box. It's a delight =)
Re:How I do it (Score:2)
Re:Alternative to IMAP (Score:2)
Re:Alternative to IMAP (Score:2)
It's funny, but they support IMAP too. So does mutt, in fact. There's no reason to not use IMAP just because you only provide shell access... and to follow Ashley's line, if you have a laptop, then IMAP with locally cached messages gives you much better access to your mail if you travel.
And, if you ever provide mail for people in a different state or country, a mail system that's not dependent upon a constant and fast connection to your machine is pretty much necessary.
Take the taste test: consider setting up a super-small machine to host your mail for a little while, on IMAP; configure vm to use IMAP; go ahead and download the imap-utils package from UWash (it gives you things like icat, that cats messages from the server). See if you notice a real difference or not. A little Sparc IPX would be enough for this, with a tiny 3-4G drive. Just give it a try... heck, email me if you need help.