While it works well, Duplicity is rather slow, and its full-plus-incremental chains are expensive: every so often you have to pay for a fresh full backup.
Modern incremental-forever backup tools like borg[1] (fork of Attic[2]) are much faster since they are based on block hashing, which also gets you deduplication for free.
Basically, a backup is a set of hashes - this means that you can selectively delete or retain old backups without having to merge them.
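For illustration, here's a minimal Python sketch of that idea. It's a toy (fixed-size chunks, in-memory dicts); real borg uses content-defined chunking, encryption and an on-disk index, but the "archive = manifest of chunk hashes" structure is the same:

    import hashlib

    CHUNK_SIZE = 4 * 1024 * 1024  # fixed-size chunks for simplicity

    chunk_store = {}   # hash -> chunk bytes, stored once, shared by all archives
    archives = {}      # archive name -> ordered list of chunk hashes

    def backup(name, data):
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(h, chunk)  # dedup: skip chunks we already have
            hashes.append(h)
        archives[name] = hashes

    def delete(name):
        # Deleting an archive just drops its manifest; chunks still
        # referenced by other archives survive, the rest get collected.
        del archives[name]
        live = {h for hs in archives.values() for h in hs}
        for h in list(chunk_store):
            if h not in live:
                del chunk_store[h]

This is why retention is so cheap: dropping any archive never requires rewriting or merging the others.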
I built a >50 TB enterprise backup cluster with borg and it works extremely well.
For the "cheap" part one'll probably need a system that has erasure coding and can survive partial outages.
E.g. if you back up to hubiC, OneDrive and Google Drive with RAID5-like redundancy, only a third of what you store is parity, yet your data stays safe should any one of those vendors discontinue the service or suffer a failure. Some call that a RAIC - Redundant Array of Inexpensive Clouds.
git-annex and Tahoe-LAFS can do this, but neither is an actual backup solution.
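A toy Python sketch of that RAID5-like layout, just to show the mechanics (real systems would use a proper erasure code like Reed-Solomon and handle uploads; here each shard would go to a different hypothetical provider):

    def split_with_parity(data: bytes):
        half = (len(data) + 1) // 2
        a = data[:half]
        b = data[half:].ljust(half, b"\0")  # pad to equal length
        parity = bytes(x ^ y for x, y in zip(a, b))
        return a, b, parity  # three shards, one per cloud: 1.5x storage

    def recover(a, b, parity, orig_len):
        # Any two of the three shards reconstruct the data.
        if a is None:
            a = bytes(x ^ y for x, y in zip(b, parity))
        elif b is None:
            b = bytes(x ^ y for x, y in zip(a, parity))
        return (a + b)[:orig_len]

Losing any single provider leaves you with two shards, which is enough to rebuild everything, at only a third of the stored bytes being overhead.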
[1]: https://borgbackup.readthedocs.io/en/stable/
[2]: https://www.stavros.io/posts/holy-grail-backups/