
While it works well, Duplicity is rather slow and only supports expensive full-incremental-full backup cycles.

Modern incremental-forever backup tools like borg[1] (a fork of Attic[2]) are much faster since they are based on block hashing, which also gets you deduplication for free.

Basically, a backup is a set of hashes - this means that you can selectively delete or retain old backups without having to merge them.
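The idea above can be sketched in a few lines. This is a toy illustration of hash-addressed chunk storage, not borg's actual repository format (borg uses content-defined chunking and encrypts its chunks); names like `backup` and `store` are made up for the example:

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # fixed-size chunks for simplicity

store = {}  # hash -> chunk bytes; the chunk store shared by all backups

def backup(data: bytes) -> list[str]:
    """Split data into chunks, store each under its hash, return the manifest."""
    manifest = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # dedup: an identical chunk is stored once
        manifest.append(h)
    return manifest

def restore(manifest: list[str]) -> bytes:
    return b"".join(store[h] for h in manifest)

def delete_backup(manifest: list[str], all_manifests: list[list[str]]) -> None:
    """Deleting a backup = dropping only chunks no other manifest references."""
    live = {h for m in all_manifests if m is not manifest for h in m}
    for h in set(manifest) - live:
        del store[h]
```

Because a backup is just a manifest of hashes, deleting one never requires merging: you garbage-collect the chunks that no remaining manifest references.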

I built a >50 TB enterprise backup cluster with borg and it works extremely well.

[1]: https://borgbackup.readthedocs.io/en/stable/

[2]: https://www.stavros.io/posts/holy-grail-backups/



For the "cheap" part, you'll probably need a system that supports erasure coding and can survive partial outages.

E.g. if you back up to hubiC, OneDrive and Google Drive with RAID5-like redundancy, you get away with just 33% overhead and can still be sure that if any one of those vendors discontinues the service or suffers a failure, your data is safe. Some call that RAIC - a Redundant Array of Inexpensive Clouds.

git-annex and Tahoe-LAFS do this, but they're not actual backup solutions.
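The RAID5-like scheme for three providers can be illustrated with XOR parity: two data shards plus one parity shard, one per provider, so a third of the stored bytes are redundancy. A toy sketch (function names are made up; real systems use Reed-Solomon codes over arbitrary shard counts):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def shard(data: bytes):
    """Split into two data shards plus one XOR parity shard (one per cloud)."""
    assert len(data) % 2 == 0  # keep the toy simple: even-length input only
    half = len(data) // 2
    d1, d2 = data[:half], data[half:]
    return d1, d2, xor(d1, d2)

def recover(d1, d2, parity):
    """Rebuild the original even if any single shard is lost (passed as None)."""
    if d1 is None:
        d1 = xor(d2, parity)      # d1 = d2 ^ (d1 ^ d2)
    elif d2 is None:
        d2 = xor(d1, parity)      # d2 = d1 ^ (d1 ^ d2)
    return d1 + d2
```

Losing any one of the three providers leaves enough information to reconstruct everything; losing two does not, which is the usual RAID5 trade-off.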


Borg is amazing, except for the lack of public key encryption, which makes it unusable: https://github.com/borgbackup/borg/issues/672


Been keeping my eye on restic for just this reason.

https://github.com/restic/restic


Oops, missed the 'public key' part. Restic, last I checked, does not have that; the project author is thinking through the design of it:

https://github.com/restic/restic/issues/187


Agreed. The lack of public key support is the only reason I'm not using it.



