ZFS straddles layers - from block device to file system - that are normally handled by discrete components. It does this in its own way, which does not always fit my way. It is resource-hungry, which does not play well with resource-starved hardware. Growing ZFS vdevs is possible, but it is not nearly as flexible as an mdraid/lvm/filesystem_of_choice solution.
In short, ZFS works fine on capable hardware with plenty of memory. It offers more amenities than a system based on mdraid/lvm/filesystem_of_choice. The latter combination can work on lower-spec hardware with less memory where ZFS would bog down or not work at all.
Unless you're using deduplication with ZFS (which you shouldn't), you can usually limit the ARC to a fixed amount of RAM, and then the resource hunger isn't an issue. You lose the benefit of more dynamic caching, and that sucks, but for lots of workloads this is fine.
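For the record, the usual way to do that on Linux with OpenZFS is the zfs_arc_max module parameter (value in bytes); the 1 GiB figure below is only an example, tune it to your box:

  # /etc/modprobe.d/zfs.conf - cap the ZFS ARC at 1 GiB (1073741824 bytes)
  options zfs zfs_arc_max=1073741824

It can also be changed at runtime by writing to /sys/module/zfs/parameters/zfs_arc_max, and on FreeBSD the equivalent tunable is vfs.zfs.arc_max in /boot/loader.conf.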
Not on hardware with 2 GB or less of RAM, unless you reserve most of it for ZFS, and even then the feasibility depends on the combined size of the attached storage. Remember that we're talking backup systems, not full servers. These tend to be storage-heavy, focused on sequential rather than random-access throughput, and preferably low-power - i.e. mostly single-board computers like a Raspberry Pi, or older repurposed hardware like that laptop without a screen, hooked up to a bundle of drives in some enclosure.
As a tangentially related aside, I wonder why bringing up (potential) downsides of ZFS tends to lead to heated discussions, while nothing of the sort happens when the same is said about e.g. hardware RAID/$file_system or a modular stack like mdraid/lvm/$file_system.
Yeah, 2 GB is pretty low... my own experience with backup systems is having full servers to power them, and usually having to over-spec them so they can live out five-year lifespans without ever becoming the bottleneck in the backup chain.
re: tangent; I wouldn't really call my response heated, I've just run into workloads with ZFS where limiting the ARC solved problems. I've also run into frustrations with ZFS that there really aren't easy solutions to (the slab allocator on FreeBSD not playing well with ZFS on this particular server's workload, so having to change the memory allocation system in use to one that doubles CPU usage; not having ZFS's extended ACLs exposed on Linux, meaning migrating one of our FreeBSD systems to Linux will require serious effort).
With 2 GB of RAM you're talking about a $15 part. Bump it up to 4 GB and you've got plenty for 36 TB of zpool - I know from my own experience. Consider the price of the hard drives and it's a single-digit percentage of the overall cost.
With 2 GB of RAM I'm talking about the maximum amount possible in a number of machines in use here. Sure, it is possible to buy new machines, but why would I when there is no need? They work just fine as-is with those 2 GB; the only restriction is that they cannot be used for ZFS, since that needs more RAM. ZFS is a means to an end, and there are different means to achieve the same end, so I just chose one of those - problem solved.