Data Storage Security

Experiences and Thoughts on SHFS? 43

eugene ts wong asks: "I was looking over SHFS, & I thought that it seems like a very good software package. If I understand it correctly, then it should be the de facto way to mount shares across a network. I never heard of it until today, though. What do all of you think of this? What kinds of experiences do you have? I am interested in hearing some of your stories. I heard that NFS isn't secure. How do the two compare? Would you recommend SHFS for small, medium & large businesses?"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Secure Sneaker Net
    Two feet
    Write Only Floppy Disk.
  • by johnjones ( 14274 ) on Monday April 19, 2004 @09:59PM (#8912272) Homepage Journal
    Or, more to the point, why do you think it's secure?

    It all comes down to trust...

    Do you trust the network you're plugged into?
    How about the people who are selling that VPN?

    I suggest that you have a look at IPSec.

    It works on WinXP, Linux, Solaris, and the BSDs. Then find a networked file system that is high performance.

    regards

    John Jones
  • by Padrino121 ( 320846 ) on Monday April 19, 2004 @10:02PM (#8912313)
    I wanted a transparent way to access my remote files over SSH since it's the only external access I trust and came upon SHFS a couple of weeks ago.

    It has worked out really nicely, and now I don't have to do the scp or SFTP dance all the time to edit files on a remote box.

    One thing I did come across during "make install" under 2.6: the .ko module built for 2.6 that the install process copies to your lib/modules directory didn't work. However, there was also a .o built for 2.6 that worked great after I copied it manually.
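For anyone hitting the same thing, the manual workaround looked roughly like this (the target path is an assumption based on a stock 2.6 module layout; adjust for your kernel and where the shfs build dropped the .o):

```shell
# The install copied a non-working shfs.ko; fall back to the .o built for 2.6.
cp shfs/Linux/shfs.o /lib/modules/$(uname -r)/kernel/fs/shfs/

depmod -a          # rebuild the module dependency map
modprobe shfs      # load the module manually
lsmod | grep shfs  # confirm it is loaded
```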
  • tried it (Score:5, Informative)

    by Satai ( 111172 ) * on Monday April 19, 2004 @10:08PM (#8912378)
    I tried it, and I found it to be a bit unreliable. This was last fall... Random accesses on files were slow, and it frequently hung, leaving me with orphaned partitions I couldn't umount. Otherwise it worked OK -- I mean, it was easy to configure and whatnot -- but performance-wise I found it lacking.
    • same here (Score:3, Interesting)

      by KnightStalker ( 1929 )
      I had the same results more than a year ago. Eventually I settled on Emacs + TRAMP, which does a suitable job of allowing me to edit remote files over SSH with little pain.
      Look up the lazy unmount option... and besides, NFS has this problem too.

      umount -fl /mnt/partition will work, even when the filesystem is having problems. It might not actually be removed from the kernel if you do it that way, but at least it won't bother you any more.
  • 3 week experience. (Score:5, Informative)

    by Anonymous Coward on Monday April 19, 2004 @10:20PM (#8912501)
    I have been using shfs for a few weeks now, and here are the pros and cons with my limited experience with it.

    Pros:
    (i) mounting remote filesystems over ssh is great, as you don't have to worry about opening up new ports.
    (ii) read-only performance is good (I haven't had any problems).

    Cons:
    (i) definitely *buggy* (do not even think of using this for mounting partitions w/ critical data). For example, I mounted it read-only and by mistake opened a file with vim. When I tried to :wq, vim refused to write (obviously!), and I just escaped with a :q!. Much to my chagrin, the file was gone. I later figured out that this was not a random bug; it was repeatable.
    (ii) write performance (across a 1Mbps DSL conn.) *sucks*!
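For reference, the basic usage behind all of the above is just this (the shfsmount/shfsumount invocations are from memory of the version I tested; check the man page for your release):

```shell
# Mount a remote directory over ssh -- read-only, to be safe given the bug above
shfsmount -o ro user@remote.example.com:/home/user /mnt/remote

# ...browse and read files as if they were local...

# Unmount when done
shfsumount /mnt/remote
```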
    • by gid ( 5195 )
      That's really too bad. I tried shfs probably at least 3 years ago. I had similar problems. I tried reporting bugs, went back and forth a bit, and tried new versions as they came out with supposed fixes that never quite resolved it. I eventually lost interest and ended up using something else. There were also some permission problems; it's been a while now, and I can't remember exactly what the problem was, but I remember being very frustrated and losing data (test data, mind you).
  • LUFS (Score:5, Interesting)

    by telemnar ( 68532 ) on Monday April 19, 2004 @10:21PM (#8912510)
    Sounds a lot like LUFS ( http://lufs.sf.net ), which lets you mount remote filesystems via SSH, FTP, and several other novel protocols.
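A LUFS mount looks something like this (hostnames are placeholders; the URL-style syntax is from memory, so check the LUFS docs):

```shell
# Mount a remote home directory over the sshfs backend via LUFS
lufsmount sshfs://user@remote.example.com/home/user /mnt/remote

# FTP works the same way, just with a different scheme
lufsmount ftp://ftp.example.com/pub /mnt/ftp

# Unmount as usual
umount /mnt/remote
```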
    • Re:LUFS (Score:4, Informative)

      by jefu ( 53450 ) on Monday April 19, 2004 @11:18PM (#8912936) Homepage Journal
      I've been using lufs across several machines (mostly for the sshfs filesystem) for a bit now with good results -- a couple of problems (keeping dates in sync for make has been an issue), but nothing insurmountable.

      Easy install, easy to use. Good stuff.

      • Re:LUFS (Score:2, Insightful)

        by dabrepus ( 135235 )
        A good way to keep dates in sync would be to use ntp, don't you think?

        Just a small tip :)
        • dates (Score:3, Informative)

          by jefu ( 53450 )
          I've been using ntp on all the machines, but it doesn't seem to help much; there is still enough drift to cause make to fail. I'm not using any of the machines as an ntp server, though I have been thinking about that. Would that help? Especially given that one group of machines is rather a longish network trip from the rest (though only about three blocks in Euclidean space).
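One quick way to see how far apart two boxes actually are is to query the offset directly, without setting anything (the hostname is a placeholder):

```shell
# Ask a peer what it thinks the time is, query-only (-q), no clock adjustment
ntpdate -q buildhost.example.com

# Or, if ntpd is running locally, show the measured offset/jitter of its peers
ntpq -p
```

If the reported offsets are well under a second, clock sync probably isn't what's breaking make, and the skew is happening somewhere else (e.g., in how the filesystem layer reports timestamps).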
  • by wan-fu ( 746576 ) on Monday April 19, 2004 @10:47PM (#8912729)
    I tried using shfs, but it didn't work very well with my system (YMMV; I'm running a Gentoo 2.6.3 kernel): frequent timeouts, and the program had problems unmounting shfs mounts. I recently switched to using the "FISH" feature in KDE (fish://username@host/path_to_stuff/), and that has worked fairly well for my purposes.
      If you're not forced to use it, switch to sftp://. It is a much better solution if your ssh target has an sftp backend. FISH uses basic shell tools at the remote end, like shfs does, and that's just too much complexity to be a good solution.
  • by Anonymous Coward on Tuesday April 20, 2004 @06:51AM (#8914749)
    I have been using it for months at home. It worked great on the home wireless network -- well, up until I switched to using my 15" PowerBook.

    Despite being quite excited about the possibilities, I'd never run this in a production environment. A lot of people run down NFS for being insecure and sucky on any number of levels. I have to say, we had a very active messaging system behind a very high-profile website [sprintpcs.com] use NFS for two years, due to a combination of stupid developers and a vendor going out of business. It NEVER broke. And we were churning hundreds of thousands of files over NFS per day.

    Eventually I had to stop bringing it up in meetings because it never broke. Of course YMMV; mine sure did.
  • by dwoolridge ( 69316 ) on Tuesday April 20, 2004 @07:05AM (#8914783) Journal
    From the main page for SFS (Self-certifying File System [fs.net]):
    SFS is a secure, global network file system with completely decentralized control. SFS lets you access your files from anywhere and share them with anyone, anywhere. Anyone can set up an SFS server, and any user can access any server from any client. SFS lets you share files across administrative realms without involving administrators or certification authorities.
  • Recommendations (Score:2, Informative)

    by winchester ( 265873 )
    I would most definitely not recommend SHFS for production use. The reason is very simple: it is unproven for production use. By unproven I mean lacking multi-year experience running it in a large-scale, mission-critical environment. Contrary to what you might think, your home setup is not a large-scale, mission-critical environment.

    The place where I work is a UNIX shop; we use NFS all the time, because it operates reliably between various UNIX flavours. Every vendor has a robust implementation. We shar

  • by danpritts ( 54685 ) on Tuesday April 20, 2004 @11:19AM (#8917078) Homepage
    I am not familiar with shfs other than a brief read of the website and this thread.

    w/r/t NFS security, NFSv4 should solve most if not all of the problems. Fundamentally, two things always bothered me about NFS security.

    RPC - NFS makes heavy use of Sun-style RPC, requiring you to use the RPC libraries and the portmapper. This stuff has a bad reputation for security problems (e.g., buffer overflows), there is a lot of it, and it runs on random ports, so it's difficult to filter/firewall/tunnel.

    no user credentials - NFS through v3 doesn't provide any user credentials: root on the client has access to all users' files on the mounted filesystem. There's no server-enforced security.

    NFSv4 [umich.edu] fixes the RPC/multiple ports problem.
    I don't know about the user credential problem, but I bet it fixes that too.

    On to the quick-and-dirty:
    In the past, I've set up a Samba server, used the Linux smbfs client to access it, and tunneled the whole business over SSH. It worked reliably, to the limited extent that I tested it (YMMV).
    I don't really remember how well it performed - it was more of a proof-of-concept for me.
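The proof-of-concept amounted to something like this (hostnames and the share name are made up, and smbfs mount options vary by version, so treat it as a sketch):

```shell
# Forward local port 1139 to the SMB port on the file server, over one ssh connection
# (-f: background after auth, -N: no remote command, -L: local port forward)
ssh -f -N -L 1139:localhost:139 user@fileserver.example.com

# Mount the tunneled share; all SMB traffic now rides inside the ssh tunnel
mount -t smbfs -o username=user,port=1139 //localhost/share /mnt/smb
```

The nice property is that only the single forwarded port ever crosses the network in the clear, and it carries nothing but encrypted ssh traffic.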
    • by phorm ( 591458 )
      It's a pain in the butt, but you can at least pin the ports for NFSv3 in order to firewall it:

      In your init:
      /sbin/rpc.mountd --port ${RPCMOUNTDPORT}
      /sbin/rpc.statd --port ${LISTEN_PORT}


      And in /etc/modutils/nfs (or whatever works on your distro to set module params):
      options lockd nlm_udpport=4001 nlm_tcpport=4001

      For the above, be sure to run update-modules afterwards on Debian. Then allow RPCMOUNTDPORT, LISTEN_PORT, and udp/tcp port 4001 (or whichever you choose) through the firewall.
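The matching firewall rules might look like this (the subnet is a placeholder, and 32765/32766 stand in for whatever you chose as RPCMOUNTDPORT and LISTEN_PORT):

```shell
# Allow NFS and its helpers only from the trusted subnet, now that the ports are pinned
iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 111  -j ACCEPT  # portmapper
iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 111  -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 2049 -j ACCEPT  # nfsd
iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 2049 -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 32765:32766 -j ACCEPT  # mountd/statd
iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 32765:32766 -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 4001 -j ACCEPT  # lockd
iptables -A INPUT -s 192.168.1.0/24 -p udp --dport 4001 -j ACCEPT
```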
    • Thanks for your comments. I especially appreciate the smbfs-ssh idea. Right now, we're running MS SFU, so we aren't really needing smbfs. However, I always want to know more options, because I tend to get into the habit of doing things only 1 way.
        It did seem kinda "dirty" to me to be using SMB to access a Linux server from a Linux client, but I really liked the ability to tunnel a single port (if I recall correctly) over ssh.

        One thing I remember, which I forgot to mention: smbfs didn't properly display Unix file modes across the connection. That wasn't surprising when I thought about it; presumably there's no way in this Windows-centric protocol to pass that info. I didn't investigate whether NT ACLs were somehow emulated, etc.
          It did seem kinda "dirty" to me to be using SMB to access a Linux server from a Linux client, but I really liked the ability to tunnel a single port (if I recall correctly) over ssh.

          I think that I know what you mean.

          I like using one "thing" to do as much as possible. So, when I found out that our customers could give us some data through ssh, I implemented it right away. I'm not sure if that's the best method, but I like the idea that we are supposed to be using ssh anyway, so we may as well use it as much

  • I've been using shfs to connect to my uni's ssh server to access my files from home for about six months now, with no problems at all (on both 2.4 and 2.6 kernels).

    Would highly recommend it.
  • If you only need to browse your shfs shares and read some data here and there, then go for it. But for heavier usage, and especially if you need to write data... NO WAY. Check out the shfs source code if you need to find out why. The write routine, erm... could be better.
