Certainly it makes sense not to have deep copies of VM base images, but deduplication is not the right way to achieve that in ZFS. Instead, you can clone the base image, and until you make changes the clone takes almost no space at all. This is thanks to the copy-on-write nature of ZFS.
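As a rough sketch (the dataset names here are made up), a clone of a snapshot starts out referencing the template's existing blocks and only stores new data as the VM diverges from it:

    # snapshot the template, then clone it for a new VM
    zfs snapshot tank/templates/debian12@base
    zfs clone tank/templates/debian12@base tank/vms/vm01

    # the clone references the snapshot's blocks, so its USED is near zero at first
    zfs list -o name,used,refer tank/templates/debian12 tank/vms/vm01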
ZFS deduplication instead tries to find existing copies of the data being written to the volume. For some use cases that could make a lot of sense (container image storage, maybe?), but it's very inefficient if you already know that some datasets are, at least initially, clones of others.
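If you did want that, it's roughly this (pool and dataset names are made up); dedup is a per-dataset property, and the pool reports the resulting ratio:

    # enable dedup on the dataset holding the images
    zfs set dedup=on tank/containers

    # later, see how much actually got deduplicated
    zpool get dedupratio tank
    zpool status -D tank    # prints dedup table (DDT) statistics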
When a new VM is created from a template on a ZFS file system with dedup enabled, what actually happens? Isn't the ref count of every block of the template simply incremented by one? The only time new data will actually be stored is when a block has a hash that doesn't already exist.
That's right, though the deduplication feature is not the way to do it. The VM template would be a zvol, which is a block device backed by the lower levels of ZFS, and it would be cloned to a new zvol for each VM. Alternatively, if image files were used, the image file could be a reflinked copy. In both cases, new data would be stored only as changes accumulate.
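Sketched out (again with made-up names), the zvol case and the image-file case would look something like this; the reflink variant assumes the file system and cp support it (on OpenZFS that means block cloning):

    # the template is a zvol, i.e. a block device backed by ZFS
    zfs create -V 20G tank/templates/base-vm
    # ... install the OS onto /dev/zvol/tank/templates/base-vm ...

    # each VM gets its own clone of a snapshot of the template
    zfs snapshot tank/templates/base-vm@golden
    zfs clone tank/templates/base-vm@golden tank/vms/vm02

    # image-file variant: a reflinked copy shares blocks until they are modified
    cp --reflink=always base.qcow2 vm02.qcow2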
Compare this to the deduplication approach: the filesystem would need to keep tabs on the data that's already on disk, identify the case where the same data is being written again, and then turn that write into a reference to the existing data instead. Very inefficient if at the application level you already know that it's just a copy being made.
In both of these cases, you could say that the data ends up being deduplicated. But the second approach is what the deduplication feature does. The first one is "just" copy-on-write.