Assuming you have family living elsewhere but reachable through a fast internet connection, you can do what I do and make a deal with them: they hang your backup box off their network and you do the same for them. The backup box is some piece of computing equipment with storage media attached, e.g. a single-board computer hooked up to a JBOD tower. Depending on the level of trust between you and your family you can use the thing as an rsnapshot target - giving you fine-grained direct access to time-based snapshots (I use 4-hour intervals for my rsnapshot targets which are located on-premises in different buildings spread over the farm) or as a repository of encrypted tarballs, or something in between.

Allow the drives to spin down to save power; they'll be active for only a fraction of the day. The average power consumption of the whole contraption does not need to exceed 10-15 W, making electricity costs negligible. You can have as much storage capacity as you want or can afford at the moment, and keep it for as long as you want or until it breaks, without having to pay any fees (other than hosting their contraption on your network - possibly including building it for them if they're not that computer-savvy).
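As a rough sketch of what the rsnapshot side can look like - paths, hostnames and retention counts here are made up, and fields in rsnapshot.conf must be separated by tabs, not spaces:

    # /etc/rsnapshot.conf on the backup box, pulling from your machines over ssh
    snapshot_root   /mnt/backup/snapshots/
    retain  fourhourly  6
    retain  daily       7
    retain  weekly      4
    backup  user@farmhouse:/home/   farmhouse/

    # crontab entries driving the intervals:
    0 */4 * * *   /usr/bin/rsnapshot fourhourly
    30 3 * * *    /usr/bin/rsnapshot daily
    0 5 * * 1     /usr/bin/rsnapshot weekly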
> Depending on the level of trust between you and your family you can use the thing as an rsnapshot target - giving you fine-grained direct access to time-based snapshots (I use 4-hour intervals for my rsnapshot targets which are located on-premises in different buildings spread over the farm) or as a repository of encrypted tarballs, or something in between.
These days, OpenZFS native encryption is the best of all worlds, I think.
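A minimal sketch, assuming a pool named "tank" (dataset and host names are made up):

    # create an encrypted dataset; you'll be prompted for the passphrase
    zfs create -o encryption=on -o keyformat=passphrase tank/backups
    # after a reboot the key has to be loaded before mounting:
    zfs load-key tank/backups && zfs mount tank/backups
    # snapshots can be replicated in raw (still-encrypted) form,
    # so the remote box never needs the key:
    zfs snapshot tank/backups@2024-06-01
    zfs send --raw tank/backups@2024-06-01 | ssh backupbox zfs receive pool/backups

The raw-send part is what makes it attractive for the low-trust case discussed above: the remote end stores and scrubs your data without ever being able to read it.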
In that case, use that. ZFS seems to be a somewhat touchy subject, with some people using it as widely as possible while others - myself included - prefer more modular storage systems where the tasks of volume manager, striping/slicing/raid management, encryption layers and file systems are performed by discrete software layers. Both approaches work, both have their pros and cons; in the end it comes down to what you value most.
I actually agree with you that ZFS is a massive layering violation (from the user's perspective, at least; apparently under the hood it is a bunch of separate components layered together), but the features are good enough that I personally think it's worth it.
ZFS straddles layers - from block device to file system - that are normally handled by discrete components. It does this in its own way, which does not always fit my way. It is resource hungry, which does not play well with resource-starved hardware. Growing ZFS vdevs is possible, but it is not nearly as flexible as an mdraid/lvm/filesystem_of_choice solution.
In short, ZFS works fine on capable hardware with plenty of memory. It offers more amenities than a system based on mdraid/lvm/filesystem_of_choice. The latter combination can work on lower-spec hardware with less memory where ZFS would bog down or not work at all.
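To make the flexibility point concrete, growing such a modular stack in place looks something like this (device names and sizes are made up):

    # hypothetical starting point: a 3-disk RAID5 array with LVM and ext4 on top
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    pvcreate /dev/md0
    vgcreate backupvg /dev/md0
    lvcreate -L 4T -n snapshots backupvg
    mkfs.ext4 /dev/backupvg/snapshots

    # later: add a fourth disk and grow each layer independently
    mdadm --add /dev/md0 /dev/sdd1
    mdadm --grow /dev/md0 --raid-devices=4
    pvresize /dev/md0
    lvextend -L +2T /dev/backupvg/snapshots
    resize2fs /dev/backupvg/snapshots   # ext4 grows online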
Unless you're using deduplication with ZFS (which you shouldn't), you can usually limit the ARC to a fixed amount of RAM, and then the resource hunger isn't an issue. You lose the benefit of more dynamic caching, and that sucks, but for lots of workloads this is fine.
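On Linux that's a module parameter (the 1 GB cap here is just an example value):

    # runtime, takes effect immediately:
    echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_max

    # persistent across reboots, in /etc/modprobe.d/zfs.conf:
    options zfs zfs_arc_max=1073741824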
Not on hardware with 2 GB or less of RAM, unless you reserve most of it for ZFS, and even then the feasibility depends on the combined size of the attached storage. Remember that we're talking backup systems, not full servers. These tend to be storage-heavy, focused on sequential rather than random-access throughput, and preferably low power - i.e. mostly single-board computers like the Raspberry Pi, or older re-purposed hardware like that laptop without a screen, hooked up to a bundle of drives in some enclosure.
As a tangentially related aside, I wonder why bringing up (potential) downsides of ZFS tends to lead to heated discussions, while nothing of the sort happens when the same is said about e.g. hardware raid/$file_system or a modular stack like mdraid/lvm/$file_system.
Yeah, 2GB is pretty low... my own experience with backup systems is having full servers to power them, and usually having to over-spec them so they can live out 5-year lifespans without ever becoming the bottleneck in the backup chain.
re: tangent; I wouldn't really call my response heated, I've just run into workloads with ZFS where limiting the ARC solved problems. I've also run into frustrations with ZFS that there really aren't easy solutions to (the slab allocator on FreeBSD not playing well with ZFS under one particular server's workload, forcing a switch to a memory allocator that doubles CPU usage; ZFS's extended ACLs not being exposed on Linux, meaning migrating one of our FreeBSD systems to Linux will require serious effort).
With 2G of RAM you're talking about a $15 part. Bump it up to 4G and you've got plenty for 36T of zpool - I know from my own experience. Consider the price of the hard drives, and it's a single-digit percentage of the overall cost.
With 2GB of RAM I'm talking about the maximum amount possible in a number of machines in use here. Sure, it is possible to buy new machines, but why would I when there is no need? They work just fine as-is with those 2GB; the only restriction is that they cannot be used for ZFS, since that needs more RAM. Since ZFS is a means to an end and there are different means to achieve the same end, I just chose one of those - problem solved.
Anyone who needs storage space for pictures cares about backup; they just may not be aware that they care about it, or they may conflate the two needs.
House fires, electrical mayhem (lightning strikes have released more magic smoke than I care to mention here), burglary, flooding, law enforcement coming by to take your things because of $reason, earthquake damage, or any other localised threat which cannot touch remote backups.