Rsync (Score:5, Insightful)
1. rsync to an HDD stored in the desk drawer.
2. rsync to a remote HDD (remember that offsite backups are crucial).
Re:Rsync (Score:3)
It would be nice to have a slightly more flexible and integrated system, though, with support for multiple versions. I have found that anything which isn't entirely automatic tends to get neglected.
By the way, there is a tool called rclone that does something similar to rsync but for cloud services. Handy for syncing off-site backup sets, although you should of course encrypt first. That brings up another issue with many solutions - the encryption prevents effective differential backups.
Re: (Score:3)
By the way, there is a tool called rclone that does something similar to rsync but for cloud services. Handy for syncing off-site backup sets, although you should of course encrypt first.
I believe rclone offers encryption so you can make your unencrypted local backups encrypted on the cloud. Also various cloud services have their own encryption systems.
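For reference, a sketch of the crypt-remote setup being described; the remote names ("gdrive", "secret") and paths are invented, and "gdrive:" would have to exist in your rclone config already:

```shell
# Wrap an existing remote in rclone's crypt backend, so uploads are
# encrypted client-side before they ever leave the machine.
rclone config create secret crypt \
    remote=gdrive:backups \
    password=$(rclone obscure 'correct horse battery staple')

# Plaintext stays local; only ciphertext reaches the cloud.
rclone sync /home/me/backups secret:
```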
That brings up another issue with many solutions - the encryption prevents effective differential backups.
Borg can encrypt its database. You provide a key (or passphrase) when you create the repository.
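A sketch of that, with an invented repo path; repokey mode keeps the (passphrase-protected) key inside the repository itself:

```shell
# Create an encrypted repository; borg prompts for a passphrase,
# which it then requires for every backup and restore.
borg init --encryption=repokey /backups/borg-repo
```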
Re: (Score:2)
Thanks, I hadn't looked at encryption in rclone. I only used it to periodically back up my cloud services. I also use Thunderbird for that, having it automatically start in the middle of the night, sync the IMAP mailbox with Gmail and then close.
Borg seems quite good, similar to Duplicati but with no Windows support. How robust is it? As I say, occasionally Duplicati uploads fail (due to internet connection problems) and you have to run a manual command to recover, but it's never lost any data or completely trashed a backup set.
Re: (Score:2)
[borg] How robust is it? As I say occasionally Duplicati uploads fail (due to internet connection problems) and you have to run a manual command to recover, but it's never lost any data or completely trashed a backup set.
Seems OK. The only non-local mode it has is SSH, and I use rclone for uploading. I've had many backups abort half way through, and that's something it explicitly supports. Essentially it always creates BACKUPNAME.[0,1,2,etc], and on completion renames it to BACKUPNAME. If it aborts, the partial snapshot just keeps its numbered name, so you can tell it never completed.
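Roughly what that workflow looks like in practice (repo path and snapshot names invented); `{now}` is borg's built-in timestamp placeholder:

```shell
# Take a snapshot of ~/documents into the repository.
borg create /backups/borg-repo::docs-{now} ~/documents

# List snapshots; an aborted run shows up under its temporary name
# until a later run completes.
borg list /backups/borg-repo

# Same thing over SSH, the one remote transport borg has built in:
borg create user@host:/backups/borg-repo::docs-{now} ~/documents
```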
Re: (Score:2)
Duplicati just creates more encrypted zip files every time, and occasionally removes old ones that no longer contain any required data. Once a file has been made it never gets renamed or added to, it's always a new file.
It seems to handle large amounts of data and large numbers of files without any problem, which is the downfall of many apps.
Re: (Score:2)
Duplicati just creates more encrypted zip files every time, and occasionally removes old ones that no longer contain any required data. Once a file has been made it never gets renamed or added to, it's always a new file.
Borg uses some sort of transactional database (made out of files), so the backup name is just a symbolic label given to a snapshot. Quantity-wise I've got one repository doing all my machines and it's fine, though it turns out that's not necessarily the best way of doing things. I'm probably happily into the terabytes.
Re: (Score:2)
So is rclone smart enough to handle renames on the server and not re-upload? Sounds like a decent solution if you don't need the web UI.
Re: (Score:2)
Not sure what you mean by renames on the server?
Borg produces an opaque* directory structure representing some sort of database and rclone just syncs that to a remote directory, so you can handle moving the database around, but renaming individual files in the database would break the structure. Actual file names are stored inside the database itself.
* opaque for normal use. It's FOSS and documented, but the internal structure is opaque to anyone not digging around in the guts of borg.
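i.e. something like this, with invented paths; rclone only ever sees an opaque pile of files:

```shell
# Mirror the whole repository directory to the remote. Don't run this
# while borg is writing to the repo, or the remote copy can end up
# inconsistent.
rclone sync /backups/borg-repo remote:borg-repo
```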
Re: (Score:2)
Maybe I misunderstood about the renames. How does it handle versioning, or doesn't it?
What I mean is you say it creates a file BACKUPNAME, so is there an older one like BACKUPNAME.20210307 or something? Like the current one renamed or some sort of shuffle?
Re: (Score:2)
Oh right, the BACKUPNAME is for the snapshot, so each backup is a complete (or incomplete) snapshot of your entire tree. You get file versions by going to older snapshots: you can list all the snapshots you have, then dive into one to get the file. The files within a snapshot keep their names.
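In concrete terms (snapshot and file names invented), picking a version means picking a snapshot:

```shell
# Every snapshot is a full logical tree of your files.
borg list /backups/borg-repo                      # all snapshots
borg list /backups/borg-repo::docs-2021-03-07     # files inside one

# Pull one old version of one file out of that snapshot.
borg extract /backups/borg-repo::docs-2021-03-07 home/me/report.txt
```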
Re: (Score:2)
One other thing about Duplicati: it only keeps one file locally while the upload is in progress, so you don't need much disk space if you are only using remote targets. Each file has a max size; I think the default is 50MB.
Re: (Score:2)
That's the downside of my method: you need a disk either on the local machine or something you can SSH to, and I don't know how well it works over high-latency connections. I use my desktop PC to hold that disk.