
Ask Slashdot: Which Encrypted Cloud Storage Provider?

An anonymous reader writes "Almost three years ago, I started looking for a cloud storage service. Encryption and the 'zero-knowledge' concept were not concerns at the time. Frankly, after two weeks of testing services, it boiled down to one service, which I then used for almost two years. It was perfect in the technical sense: it simply worked as advertised, and it was one of the cheapest for 500GB. But this year, I decided to switch to a service that would encrypt my files before they leave my machine. Some of these services call themselves 'zero-knowledge' services because (as they claim) clear text never leaves your host: they only receive encrypted data, and keys or passwords are never sent. I did all the testing I could with the free tiers of these services, and then chose one. After a while, when the load got higher (more files, more folders, more GB...), my horror story began. I started experiencing sync problems of all sorts. In fact, I paid for and tested another service, and both had the same sync issues. Worse, one of them could not even restore files correctly. I had to restore from my local backup more than once, and I ended up losing files for real. In your experience, which service (or services) can really handle more than a hundred files, kept in sync across 5+ hosts, without messing up (deleting, renaming, duplicating) files and folders?"
  • by symbolset ( 646467 ) * on Sunday November 03, 2013 @04:50AM (#45316381) Journal
    Build a couple of Backblaze boxes and work out a deal with some KC residents. That gets you 180TB of offsite storage with whatever software you want to layer on top of it.
    • Compared to what SANs cost, you might consider buying up some KC residential real estate. BTW: you're already late to this game, so expect to pay a premium.
  • I'm not sure if it is "zero-knowledge" or just "little-knowledge" (file meta-data might be transmitted; I honestly don't know), but I've had very good luck with Copy, which was created by Barracuda (the company that's always advertising in airports, for some reason). Check them out: https://www1.copy.com/home/ [copy.com]
  • by Anonymous Coward

    Bittorrent sync + local encryption. Leave a box, or several boxes, running in some datacentres somewhere without the encryption key and you have failover backups (and increased bandwidth).

  • Give it up. (Score:5, Insightful)

    by philip.paradis ( 2580427 ) on Sunday November 03, 2013 @04:58AM (#45316409)

    Write yourself a simple set of scripts that use rdiff-backup or rsnapshot to perform differential/incremental backups to an internal host, make a secondary mirror encrypted at a file level with GPG/PGP, and use rsync to sync the encrypted mirror to several offsite hosts. Done. If this level of security matters to you, do it yourself.
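
    A minimal sketch of that pipeline, assuming GNU tools; hosts, paths, and the GPG key ID are placeholders:

        #!/bin/sh
        # 1. Differential/incremental backup into a local archive
        #    (run this on your internal backup host; pulling over ssh works too).
        rdiff-backup /home/me /srv/backups/me

        # 2. GPG-encrypt files changed since the last run into a mirror tree.
        #    Only re-encrypt changed files: GPG output is nondeterministic,
        #    so re-encrypting everything would defeat rsync's delta transfer.
        STAMP=/srv/mirror-crypt/.stamp
        mkdir -p /srv/mirror-crypt
        [ -f "$STAMP" ] || touch -d '@0' "$STAMP"   # first run: everything is "newer"
        (cd /srv/backups/me && find . -type f -newer "$STAMP" -print) |
        while read -r f; do
            mkdir -p "/srv/mirror-crypt/$(dirname "$f")"
            gpg --batch --yes -r me@example.org \
                -o "/srv/mirror-crypt/$f.gpg" -e "/srv/backups/me/$f"
        done
        touch "$STAMP"

        # 3. rsync the encrypted mirror to several offsite hosts.
        for h in offsite1.example.net offsite2.example.net; do
            rsync -a --delete /srv/mirror-crypt/ "$h:mirror-crypt/"
        done

    Day-to-day restores come from the rdiff-backup archive; the offsite ciphertext copies are the disaster-recovery tier.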

    • Re:Give it up. (Score:5, Informative)

      by Rosyna ( 80334 ) on Sunday November 03, 2013 @05:32AM (#45316471) Homepage

      Indeed. Mostly give up the idea of having the host encrypt files for you. You never know if they have a backdoor of some sort. Find/write software (I use Arq) to encrypt files and then send the encrypted files to a host like Amazon S3. It's really the only way for the host to have the "zero-knowledge" you desire.
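
      For instance, a bare-bones version of that flow with stock tools (bucket name and key ID are placeholders; s3cmd is just one of several S3 clients):

          # bundle, encrypt locally, and ship only ciphertext to S3
          tar -cf docs.tar ~/Documents
          gpg --batch -r me@example.org -o docs.tar.gpg -e docs.tar
          s3cmd put docs.tar.gpg s3://my-backup-bucket/docs.tar.gpg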

      • by mpe ( 36238 )
        Indeed. Mostly give up the idea of having the host encrypt files for you. You never know if they have a backdoor of some sort.

        Even "pre-Snowden" relying on a remote service or software provided by that service to perform such "encryption" was a bad idea. Even without deliberate "backdoors" there are many ways in which such a system can can fail, especially if proprietary software in involved.
    • Comment removed based on user account deletion
    • Re:Give it up. (Score:4, Informative)

      by Sun ( 104778 ) on Sunday November 03, 2013 @07:42AM (#45316739) Homepage

      <plug>Or, better yet, use rsyncrypto [lingnu.com].</plug>

      The advantage is that incremental diffs don't accumulate on your computer, which would make your entire archive volatile (lose one rdiff and you lose everything after that point). You just sync like you always do.

      Theoretically, rsyncrypto is less secure. I am, of course, far from being objective about this point, but I believe this is not a practical weakness for most people, even with the renewed (justified) paranoia. Then again, the tradeoffs are clearly discussed on the project's site, so you are free to draw your own conclusions on the matter.

      Shachar
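
      For reference, basic usage is roughly as follows; the exact flags may differ, so check the project's documentation. A master RSA key pair is generated once, rsyncrypto derives a per-file symmetric key, and the encrypted tree is what you rsync (all paths are placeholders):

          # one-time: generate the master RSA key and certificate
          openssl req -nodes -newkey rsa:1536 -x509 \
              -keyout backup.key -out backup.crt

          # encrypt plaindir into cryptdir (per-file keys land in keydir)
          rsyncrypto -r plaindir cryptdir keydir backup.crt

          # ship only the ciphertext offsite
          rsync -a --delete cryptdir/ remote:backup/

          # decryption needs the private key
          rsyncrypto -d -r cryptdir plaindir keydir backup.key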

      • It sounds to me like the Rsyncrypto folks have no idea what they're doing:
        "Rsyncrypto is a modified encryption scheme. It is based on industry standard AES for symmetric encryption, as well as RSA for having different keys for each file while allowing a single key to decrypt all files. It even uses an encryption mode that is based on CBC.

        Rsyncrypto does, however, do one thing differently. It changes the encryption schema from plain CBC to a slightly modified version. This modification ensures that two almost identica

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          Uhm, that property is exactly what you DON'T want in an encryption algorithm. There's a reason we don't use ECB mode. And if you rely on compression for security, you're doing something wrong. Anyway, if you just want to be able to diff encrypted files, what's wrong with counter mode? No need to invent a new mode, right?

          I also don't understand why RSA is needed here. What's the point of asymmetric crypto when there's only one party involved?

          • Re:Give it up. (Score:4, Informative)

            by Sun ( 104778 ) on Sunday November 03, 2013 @03:56PM (#45319457) Homepage

            Uhm, that property is exactly what you DON'T want in an encryption algorithm. There's a reason we don't use ECB mode. And if you rely on compression for security, you're doing something wrong. Anyway, if you just want to be able to diff encrypted files, what's wrong with counter mode? No need to invent a new mode, right?

            I also don't understand why RSA is needed here. What's the point of asymmetric crypto when there's only one party involved?

            1. Rsyncrypto is very very very far from ECB. I am hard pressed (but open to counter examples) to find a real life file that exhibits cypher text repetitions due to plain text repetitions. This is not the case with ECB, as clearly evident from the ECB wikipedia page.
            2. Your statement about compression is strange. It is quite customary to compress before encrypting. Partly because compressing after encrypting makes no sense at all, but also because compression increases the bit entropy of the data, making known plain text attacks harder. It is true that rsyncrypto is more sensitive to such things than other algorithms. It is this little thing I like to call a "trade off". Anticipating your objection, ECB with compression is better than ECB without, but still pretty horrible. You will get repetitions the length of the compression blocks. Like I said above, this is not the case with rsyncrypto.
            3. RSA is needed because you do not want to encrypt all files involved using the same symmetric key, but you also don't want the secret your backup depends on to need constant updating. With this scheme, you only need to reliably and securely store one key (the RSA key), but each file is encrypted with a different key.

            Counter mode is horrible for this application, for two reasons:

            First, any change to the file that adds or removes even a single byte causes the entire cypher text to change from that point on. This makes it quite rsync unfriendly indeed. This is not the case with rsyncrypto.

            The more horrible reason, however, is that counter mode has zero resilience to key reuse. A simple XOR of the cypher texts from two encryption passes will cancel out the encryption, key and all, and leave you with a XOR of the plain texts.

            Shachar
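
            The key-reuse failure is easy to demonstrate with stock openssl and a throwaway key and counter (all values here are placeholders):

                # two same-length plaintexts, same key and same counter (IV)
                printf 'attack at dawn!!' > a.txt
                printf 'defend at dusk!!' > b.txt
                K=00112233445566778899aabbccddeeff
                IV=000102030405060708090a0b0c0d0e0f
                openssl enc -aes-128-ctr -K $K -iv $IV -in a.txt -out a.enc
                openssl enc -aes-128-ctr -K $K -iv $IV -in b.txt -out b.enc

                # XOR of the ciphertexts equals XOR of the plaintexts: the
                # keystream cancels out, so the key never mattered. Prints True.
                python3 -c "a=open('a.enc','rb').read(); b=open('b.enc','rb').read(); pa=open('a.txt','rb').read(); pb=open('b.txt','rb').read(); print(bytes(x^y for x,y in zip(a,b)) == bytes(x^y for x,y in zip(pa,pb)))"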

        • by Sun ( 104778 )

          I have posted an answer to the AC below (in this comment [slashdot.org]). On the off-chance that your use of ad hominem was not a sign of how you ordinarily conduct discussions, I hope you will find that the comment linked above provides an answer to why RSA is, in fact, needed.

          As for inventing our own crypto - you are more than welcome to offer a standard way that resolves the core need. The AC below you offered counter mode, and I hope I showed why "we" didn't think it was a good idea to use it (thus also refuting your claim t

          • > 3. RSA is needed because you do not want to encrypt all files involved using the same symmetric key, but you also don't want the secret your backup depends on to need constant updating. With this scheme, you only need to reliably and securely store one key (the RSA key), but each file is encrypted with a different key.

            a) You do not want to encrypt all files involved using the same symmetric key.
            Care to explain why? There is nothing wrong with using the same key for multiple files.
            Anyway, if the main ke

            • by Sun ( 104778 )

              I should point out that this comment, beginning to end, is criticism of rsyncrypto for following industry best practices, while previous comments (including this one [slashdot.org], from the same author) criticized it for not following best practices. I do wish you'd make up your mind :-)

              a) You do not want to encrypt all files involved using the same symmetric key.
              Care to explain why? There is nothing wrong with using the same key for multiple files.

              If two files are identical (or even start off identical), having them encrypted to different cypher texts is a nice bonus to have. This also makes an attacker's work more complicated, as there is no one jackpot to concentrate all effor

              • > The more important question, however, is why did the alleged "needless use of
                > RSA" trigger such a huge red flag for you? What is the reason we should not
                > use RSA?

                Because public-key cryptography, while being super awesome and very useful in
                many situations, is orders of magnitude slower than private-key
                cryptography (not only RSA, but everything that needs to deal with
                exponentiations and multiplications of 2048-bit numbers).
                When I hear about hard-disk I/O and synchronization, I think of
                encryption t

                • by Sun ( 104778 )

                  > The more important question, however, is why did the alleged "needless use of
                  > RSA" trigger such a huge red flag for you? What is the reason we should not
                  > use RSA?

                  Because public-key cryptography, while being super awesome and very useful in
                  many situations, is orders of magnitude slower than private-key
                  cryptography

                  Wow. All of this over performance? I was prepared to hear some convoluted explanation about security, but performance got you so angry?

                  Unless you are backing up 10,000 files with an average length of 10 bytes, the time for performing 10,000 RSA decryptions is negligible compared to the time it will take to actually encrypt said 10,000 files (not to mention storing them to disk and/or transmitting them over the network). I will gladly accept counter arguments if they are backed up by actual data.

                  For the record

                  • by Sun ( 104778 )

                    I forgot to add an important note. The test I ran was to encrypt the files using the public key. Since you are claiming that a symmetric key would have done just as well, it would make more sense to run the test with the private key. If you know RSA as well as you claim to, you should know that using the private key is even faster.

                    Shachar

              • A bit late to the party, but.. There are some things I'm curious about.

                If two files are identical (or even start off identical), having them encrypted to different cypher texts is a nice bonus to have.

                Wouldn't different IV (or nonce in CTR mode) effectively stop that potential problem?

                And from another of your posts that I was wondering about:

                Counter mode is horrible for this application, for two reasons:

                First, any change to the file that adds or removes even a single byte causes the entire cypher text to change from that point on. This makes it quite rsync unfriendly indeed. This is not the case with rsyncrypto.

                The more horrible reason, however, is that counter mode has zero resilience to key reuse. A simple XOR of the cypher texts from two encryption passes will cancel out the encryption, key and all, and leave you with a XOR of the plain texts.

                A bit change in CTR mode shouldn't alter anything past its block, from what I understand. There's no state that's carried from one block to the next (well, except the counter, but that's not affected by the block data).

                And again, wouldn't different IV / Nonce effectively stop that problem?

                • by Sun ( 104778 )

                  Wouldn't different IV (or nonce in CTR mode) effectively stop that potential problem?

                  It would solve that particular problem, yes. As rsyncrypto is currently implemented, the IV is not embedded in the file at all (except as part of the encrypted key) - hence my confusion. My other points about why not to use one symmetric key for the entire archive do still stand, however.

                  Counter mode is horrible for this application, for two reasons:

                  First, any change to the file that adds or removes even a single byte causes the entire cypher text to change from that point on. This makes it quite rsync unfriendly indeed. This is not the case with rsyncrypto.

                  The more horrible reason, however, is that counter mode has zero resilience to key reuse. A simple XOR of the cypher texts from two encryption passes will cancel out the encryption, key and all, and leave you with a XOR of the plain texts.

                  A bit change in CTR mode shouldn't alter anything past its block, from what I understand. There's no state that's carried from one block to the next (well, except the counter, but that's not affected by the block data).

                  And again, wouldn't different IV / Nonce effectively stop that problem?

                  No. A bit change won't propagate a change beyond that bit, but an attacker watching the two streams (before and after the update) will have complete knowledge of precisely which bits have changed and which remained the same. Reusing

                  • I see, thanks for the answer :)

                    At least you have thought about things, seem to have a good understanding of the mechanics, and have reasons for the changes. Which is more than can be said for too many of the people who develop crypto code.

                    As for the changes you've made, I haven't looked at them closely so I won't even try to discuss them :)

                    The only thing I can say about it is the basic gut feeling that any novel approach to crypto should be distrusted until verified by time and experienced people :)

    • I'm curious. I've always thought that encrypting a lot of files individually (as opposed to as a single block) would open you up to attacks based on the content of well-known files (example configuration files, etc.) that you may add to the lot. That is, if the attacker has knowledge of the content of a couple of files, could he derive the keys for decrypting the rest?

      • Not that I know anything about cracking encryption or have given this much thought, but wouldn't packing well-known and unknown files into single archives -- e.g. zip, tar, etc. -- prior to encrypting make known-content analysis pretty much impractical? ... for today's computers, anyway?

      • Knowing plaintext and ciphertext does not make retrieving the key easier for real cryptosystems.

    • by dyfet ( 154716 )

      Indeed, locally encrypting and then mirroring is a good solution. Another option is to use something like ecryptfs if one wants "live" usable files shared in a folder and synced over multiple machines. The service (Dropbox, gdrive, whomever) only sees the encrypted files, and is happy to mirror them without any awareness that they are encrypted at all. You only need to make sure not to pick an NSA-friendly cipher ;). You can then access your files on each machine directly through the ecryptfs mount point. ecry
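
      A minimal sketch of that setup (paths are placeholders; the mount prompts interactively for a passphrase and cipher options):

          # ciphertext lives inside the synced folder; plaintext appears at the mount point
          mkdir -p ~/Dropbox/.crypt ~/Private
          sudo mount -t ecryptfs ~/Dropbox/.crypt ~/Private
          # work in ~/Private; the sync client only ever sees ~/Dropbox/.crypt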

    • by gspear ( 1166721 )
      I rsync to an ext2 filesystem on a LUKS (cryptsetup) volume running on an S3-backed virtual device (using s3backer).
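
      Roughly, that stack goes together like this (bucket name, sizes, and paths are placeholders; check the s3backer docs for the exact flags):

          s3backer --blockSize=128k --size=100g mybucket /mnt/s3b   # exposes /mnt/s3b/file
          losetup /dev/loop0 /mnt/s3b/file            # treat that file as a block device
          cryptsetup luksFormat /dev/loop0            # first time only
          cryptsetup luksOpen /dev/loop0 s3crypt
          mkfs.ext2 /dev/mapper/s3crypt               # first time only
          mount /dev/mapper/s3crypt /mnt/backup
          rsync -a /home/me/ /mnt/backup/me/
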
    • Re:Give it up. (Score:4, Informative)

      by fnj ( 64210 ) on Sunday November 03, 2013 @01:11PM (#45318305)

      I'll go you one better than rsnapshot (and make no mistake, I think rsnapshot was an absolutely wonderful idea and a superb invention).

      Just use rsync to a zfs backup point. Take a zfs snapshot after each backup, or not; your call. Make zfs snapshots whenever you feel like it. There is no undue performance or storage problem with many, many snapshots. You could make one snapshot a day and have a simple cron job that deletes all the snapshots older than the last couple of weeks, except retains all the Sundays for a couple of months, all the first Sunday of the months for a couple of years, and all the first Sunday of the years forever. That would leave you with about 50 snapshots plus 1 for every year, which is very light. Or suit yourself with your own schedule.
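
      A minimal daily cron job for that scheme might look like this (pool/dataset names are placeholders, GNU date assumed; the Sunday-retention logic is left out):

          #!/bin/bash
          # rsync in, snapshot, prune snapshots older than 14 days
          rsync -a --delete /home/ /tank/backup/home/
          zfs snapshot tank/backup@daily-$(date +%F)
          cutoff=$(date -d '14 days ago' +%F)
          zfs list -H -t snapshot -o name | grep '^tank/backup@daily-' |
          while read -r snap; do
              # ISO dates sort lexically, so plain string comparison works
              [[ "${snap#tank/backup@daily-}" < "$cutoff" ]] && zfs destroy "$snap"
          done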

      Zfs snapshots are essentially instantaneous to make, and very quick to delete. Every single snapshot is a directly addressable representation of the entire store: every file. The differential mechanics are all handled by zfs internally. It's as if you are making a full (not differential) backup every day and somehow finding and financing a small city to store them all in. But your actual storage is only differentially larger than a single backup. OK, so far that's essentially what rsnapshot does, with a bunch of code.

      The advantage over rsnapshot is efficiency and simplicity. All those zillions of hard links behind rsnapshot's strategy are time consuming to create and delete.

      Obviously, either way you do have to be reasonably smart about database files, sparse files and open files.

      BTW, rsyncing an encrypted fs to a remote, well, err, it doesn't really work. Because normal encryption turns small localized file deltas into completely different file contents, turning every rsync in which a lot of large files are modestly changed into a huge data transfer. You can use rsyncrypto to try to work around this, at the cost of some of the security of the encryption.

      • by unrtst ( 777550 )

        +1 to parent.

        The advantage over rsnapshot is efficiency and simplicity. All those zillions of hard links behind rsnapshot's strategy are time consuming to create and delete.

        I love rsnapshot, but the zillions of hard links can indeed be difficult to work with. I recently had to copy the backup data to another server/disk. I initially reached for rsync to do the job, and it couldn't handle it (same issue with BackupPC pools, btw) - it ran out of memory. I ended up using cpio (I think it was something like "find . -depth -print | cpio -pdm /destination/path"). One can stick ssh in the pipe too, to get it to a remote location if needed. While this worked, it still took a LON

        • by fnj ( 64210 )

          Heh, I hear you. I have rsync'ed some pretty big rsnapshot repositories to backup repositories, but I was helped by having 16 GB of RAM :-) But still, it was appallingly slow reconciling all those links, even if the actual transfer for an incremental is not that much.

      • I should have clarified the remote sync bit. The idea is to only rsync the encrypted deltas of your primary mirror. Doing it this way with an added layer of tracking does incur a ton of additional overhead for your local storage to gain minimized network transfer, though. A better method, one I've actually used in the past, involves a script that scans your rdiff-backup mirror for changed files, encrypts them, and shuttles the encrypted files off to remote servers. The state of your mirror is saved in a sim

        • by fnj ( 64210 )

          Fair enough. My present requirements do not include encryption. If they did, maybe zfs send | encrypt | ssh target 'cat > snapshotxxx.crypt.img' might be worth consideration. Recovery from the remote would then be more complex, as it would involve reconstructing from a series of differentials.
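
          Fleshed out, that pipeline might look like this (snapshot names, key ID, and target are placeholders), with incrementals after the first full stream:

              # initial full stream, encrypted client-side
              zfs send tank/backup@2013-11-01 | gpg --batch -r me@example.org -e |
                  ssh target 'cat > /srv/offsite/backup-2013-11-01.zfs.gpg'

              # subsequent differentials against the previous snapshot
              zfs send -i tank/backup@2013-11-01 tank/backup@2013-11-02 |
                  gpg --batch -r me@example.org -e |
                  ssh target 'cat > /srv/offsite/backup-2013-11-02.incr.zfs.gpg'

              # restore: decrypt and replay into `zfs receive`, oldest first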

    • This is good advice, but note that it still leaves you vulnerable to traffic analysis; if this level of security matters to you, consider doing regular updates of fixed size to the cloud even if your local data hasn't changed. For example, put your data in a TrueCrypt volume, and run a script to do minor changes on a regular basis and upload the whole file to the cloud. This will cost more bandwidth (obviously) but the attacker will only see your regular daily/weekly/whatever upload of a fixed length binar

  • by Jane Q. Public ( 1010737 ) on Sunday November 03, 2013 @05:08AM (#45316427)
    For the money you're paying a service, why not just hoop up an inexpensive machine for a server, put a TB or two in it, and use BitTorrent Sync [wikipedia.org]?

    It's pretty secure, you can share files with others, it's available for all major OSes (including iOS and Android), you don't have to mess with any 3rd parties seeing your data... what more do you want?
    • s/hoop/hook
    • For the money you're paying a service, why not just hoop up an inexpensive machine for a server, put a TB or two in it?

      Fires, thefts, etc. can happen to pretty much anyone. There's something to be said for encrypted off-site storage. OTOH, there's no particular reason that can't be a USB flash drive in the glove compartment of a car. (I'd suggest the trunk, under the spare tire, instead. After all, the data is encrypted. What can possibly go wrong?)

      • "Fires, thefts, etc can happen to pretty much anyone. There's something to be said for encrypted off-site storage. "

        This is not a "backup", it is a SYNC application. If anything happens to one copy, everybody else has another copy.

        Why pay somebody else for what you already have?

  • by Anonymous Coward on Sunday November 03, 2013 @05:12AM (#45316431)

    I've not tried this, but have always meant to. SparkleShare is an attempt to make an open source Dropbox - and a couple of years after I first bookmarked it, it's still going strong.

    You can get a cheap dedicated server for under £10 a month and roll your own based on this?

    It also has client-side encryption:
    https://github.com/hbons/SparkleShare/wiki/Client-Side-Encryption

    • Depending on your needs, this might not be the right choice, as stated on their home page:

      Great:

      Frequently changing project files, like text, office documents, and images
      Tracking and syncing files edited by multiple people
      Reverting a file to any point in its history
      Preventing spying on your files on the server using encryption

      Not so great:

      Full computer backups
      Storing your photo or music collection
      Large binary files that change often, like video editing projects

      For general purpose Dropbox replacement I recommend o

    • by SpzToid ( 869795 )

      I've tried SparkleShare and it works really well, so long as you don't have many large binary files, like images or videos. It fails where traditional GIT fails.

      What I found that works better is git-annex assistant and either your own redundant and cheap hardware disks, or you can also ssh somewhere, OR you can also use Amazon Glacier for a very very low cost. Yes, you can also encrypt everything before it leaves your machine. Check out the nice video tutorials.

      http://git-annex.branchable.com/assistant [branchable.com]

  • None of them. (Score:5, Insightful)

    by MrL0G1C ( 867445 ) on Sunday November 03, 2013 @05:26AM (#45316451) Journal

    After all of this NSA business, why would you ask which storage provider keeps you safe when clearly none of them do?

    If you want your data encrypted, why not do it yourself? Then you don't need to pay for an encrypted storage provider, because you can upload your encrypted data to any storage provider. Paying extra for something you're not guaranteed to get is not very intelligent.

    This article brought to you by an anonymous reader / encrypted storage provider.

    • by MrL0G1C ( 867445 )

      Perhaps I should RTFS before posting. I still wouldn't trust these services anyway, how do you know the keys are made securely and stay secure?

      • by rvw ( 755107 )

        Perhaps I should RTFS before posting. I still wouldn't trust these services anyway, how do you know the keys are made securely and stay secure?

        Exactly! How will you ever know for sure that the program won't send your private key to the server - encrypted with another key, so you would never spot it even if you tried to monitor traffic? I think it's impossible to catch with hundreds of gigabytes of traffic.

        • by rmstar ( 114746 )

          Exactly! How will you ever know for sure that the program won't send your private key to the server - encrypted with another key, so you would never spot it even if you tried to monitor traffic? I think it's impossible to catch with hundreds of gigabytes of traffic.

          You need an open source client, and you have to build it yourself.

          That said - this slashdot news item looks like a psyop to me. Why would a halfway decent cloud storage provider botch up data integrity so badly? This isn't rocket science after all. I suspect

          • by rvw ( 755107 )

            EncFS might just do what you and I need!

          • Why would a halfway decent cloud storage provider botch up data integrity so badly? This isn't rocket science after all. I suspect this is FUD to keep people from using these encrypted storage solutions.

            While it may not be rocket science, a lot of people underestimate the amount of corruption that files incur from bits flipping at random in storage or during transfer. It's one of the reasons many of those cloud services have checksums at EVERY step in the process. And if you use the wrong type of checksum you're going to get collisions once the number of customers goes up. Apart from that, you get sync issues if the clocks of all devices don't match up exactly, and you need to make sure you have a globally

        • by mpe ( 36238 )
          How will you ever know for sure that the program won't send your private key to the server - encrypted with another key so you will never see it if you would try to monitor traffic?

          Unless you control the "client side" software, you can't know whether it is even using the key you think it is. Never mind it doing something as elaborate as steganography to send data you don't know it's sending.
  • by tiznom ( 1602661 ) on Sunday November 03, 2013 @05:39AM (#45316495)

    Your problem isn't the storage, it's whatever you are doing locally that is the issue. I've got tens of thousands of files backed up with no issues, across several devices.

    You didn't mention your OS. I'll assume you are running Linux, because if you are running Windows/MacOS you already have a fundamental weakness.

    On Linux, use EncFS which also has a nice GUI manager via GEncfsM [bitbucket.org] for those that prefer it.

    Using EncFS means you don't have to upload entire files when you edit them, only the changes are synced. This is efficient, open-source, and works perfectly.

    Once EncFS is working, pick any cloud storage you want and sync the encrypted folder(s). I do it with Dropbox + symlinks and it is flawless, no issues for years now.
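
    The setup is essentially one command per machine (paths are placeholders; encfs prompts for a passphrase and writes its config on first run):

        # ciphertext directory inside Dropbox, cleartext mount outside it
        encfs ~/Dropbox/.encfs ~/Private
        # files edited under ~/Private sync to Dropbox as encrypted blobs;
        # unmount with: fusermount -u ~/Private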

    • Parent needs voting up. With EncFS, you can even use the reverse function, in case you like your local files unencrypted for some reason, to get an encrypted "view" of the files and sync that. You can mount a remote Windows machine's drive, for instance, get an encrypted view of said drive and sync that to the cloud. Also, check out Jottacloud if you have a Windows machine available. I don't think their "unlimited storage" deal can be beaten.
  • TarSnap (Score:5, Interesting)

    by broknstrngz ( 1616893 ) on Sunday November 03, 2013 @05:48AM (#45316509)

    tarsnap.com. Not very user-friendly, but it does what it says on the tin.

  • 127.0.0.1 or 10.6.6.6 or 192.168.69.42. Those are my encrypted cloud service providers. My public address varies, so I ping my web server for a redirect; you could use dynamic DNS. Since we're using pre-shared-key encryption, no MITM can insert themselves -- data is encrypted before the session is even initiated -- so there's no need to worry about SSL PKI shenanigans.

  • Only solution. If you want a job done properly, you'd best carry it out yourself. Buy a relatively cheap server, equip it with whatever you need to get a backup to it working, and have that server hosted. You'll pay for the hosting and the bandwidth (in many cases, the latter is included in the former). All the cards are in your own hands. It takes some work, but except for man-in-the-middle attacks - which are always possible, BTW, in any scenario - you are safe.
  • If you don't control the software itself, you can't be sure that there aren't backdoors. Even if there aren't backdoors when you start, they can always get introduced later.

    If you're really concerned about this, put a server somewhere and use encrypted rsync or something similar. Even then, be aware that backdoors can still be pushed onto your machine with a software update.

  • Try filecloud.io; they are an Irish company with servers in Amsterdam.

    Free accounts come with up to 1000GB of free storage (reduced redundancy, sort of like Amazon's), and you get more if you pay $6.99/month ($29.99/6 months), which comes with RAIDed storage.

    You can encrypt files yourself before uploading, using whichever method you want.

    They have previously gone to court and gotten advice from the Irish Data Protection Commissioner that if any third party wants your data, they have to get an Irish court order.

  • Truecrypt + Dropbox (Score:5, Informative)

    by joelleo ( 900926 ) on Sunday November 03, 2013 @07:12AM (#45316673)

    I use Truecrypt's encrypted drive containers in my local Dropbox folder. The file synced to Dropbox is already encrypted when the sync occurs, so that is all they ever see. Dropbox does a binary diff of the file and only uploads the differences, which makes syncing large encrypted containers feasible.

    I've seen some chatter that Truecrypt may have been compromised - but Bruce Schneier and Snowden use it, so I'll trust their judgement.

    • That's a pretty good idea, actually. I think I'll go this route myself as well.

    • by Pav ( 4298 )
      Is it recommended for this use case? If not, I'd be leery of using it without expert advice... it's very easy to break a secure system by applying it to problem domains for which it wasn't designed. Could an attacker infer things by watching changes over time?
  • Seafile (Score:4, Informative)

    by Juba ( 790756 ) on Sunday November 03, 2013 @08:00AM (#45316791)
    I've found Seafile [seafile.com] to be quite good and reliable. It's a multiplatform, free software, self-hosted Dropbox alternative that provides file syncing, sharing, a web interface, and tools for team work. Libraries can be encrypted server-side.
    I've been using it for several months now and it is both fast and reliable (much more so than the ownCloud versions I tested previously). It handles my whole picture collection (about 90GB) very easily. You can install your own Seafile server (there's even a Raspberry Pi version), or buy storage space from them. Clients are multiplatform (Windows, Mac, Linux, Android, iPhone/iPad).
    • by Wonko ( 15033 )

      I've been using Seafile since April [patshead.com] and I am very, very pleased with it. It is one of the few self hosted options that supports client side encryption and manages to scale up to a reasonably large number of files. It fell down on me in the 100k file range, but I was able to get around that issue by breaking my data up into smaller libraries.

      The client side encryption was slightly problematic in the 1.x releases. Back then, you had to type your encryption passphrase into the server to create a new library

  • ...to think about this HARD and give us some solutions. An insecure solution seems to work just as well as a secure one, and I'm a geek generalist... and I know what I don't know. Hopefully the big guns have already been thinking about exactly this problem for a while. We know there's no such thing as perfect security... but it would be nice to have something good, and some best-practice guides so we know how to avoid compromising ourselves too obviously.
  • Have you tried this?

    http://blog.genie9.com/index.php/tag/amazon-s3/ [genie9.com]

    Cheap to check in, expensive to check out, not super fast, but you get what you pay for.

  • For all values of ___, never pay for an encrypted ___ service. Whether it's mass storage, email, or whatever, all service providers who offer this kind of stuff are snake oil sellers. What happened to Lavabit this year wasn't news; we already knew about CALEA and have known for twenty years.

    Twenty years in the tech world is a long time and ought to have conditioned your thinking by now. Even well-meaning, loyal professional allies can be subverted. The popular example case is government pointing guns

  • It is not a good idea to store on any cloud company's server. The NSA can compel any company in the US to give up the decryption key for pretty much anything. If you want to store your stuff on the internet, either build your own server and store on it, or find a company based in a country that does not listen to the NSA or any other spying conglomerate.
  • Another idea would be to encrypt it yourself BEFORE putting it on a cloud storage server. This way, even if the NSA compels them to give up data, they can't get access to it without YOUR key.
  • https://www.jungledisk.com/ [jungledisk.com]

    I suppose it all depends on one's level of paranoia and which risks you fear most. Having all the data securely encrypted but in private homes means a couple of natural disasters and the data is gone.

    One can layer encryption on top of theirs (as folks propose above with Dropbox) for an extra level of complexity.

  • ‘Encrypted Storage Provider’. What fantasy land are you from? The NSA has forced every single online provider to hand over encryption keys. A few have refused (look up Lavabit) and have shut down or walked away from their businesses. Meaning there is no more encryption. Everything is copied, tracked, evaluated, and reviewed. Everything. The US government has effectively killed the notion of corporate AND personal ‘cloud’ storage for anyone concerned about security.
    • by Skapare ( 16644 )

      Encrypt your own data yourself before handing it over to some online/cloud storage provider.

  • The first requirement would be that the company is located outside of the US and has no US presence that the US can target. Currently I know only of Mega fulfilling those requirements. An added bonus is that Kim Dotcom, the owner, is now seriously pissed at the US government, so he won't cooperate with them.

  • Get BitTorrent Sync from http://labs.bittorrent.com/experiments/sync.html [bittorrent.com] and set up your own server, either locally or "in the cloud" (which you control). There are clients for all major platforms, including Android, and it works well. Traffic is encrypted and storage is only on computers you control yourself.
    There is one drawback, though: It's not open source so you have to trust BitTorrent Inc.

  • No matter what you think of the Cloud, you can have resilient cloud like Amazon that still goes away sometimes, or you can have cloud like Everpix, which refused to give me my pix after they moved to a paid model, told me “screw you”, and is about to go away forever.

    Nothing is permanent. Eventually some natural disaster is going to wipe away a huge chunk of data for services that are not geographically redundant.
