For people doing enterprise work and backups it's been a nightmare - here's one backup vendor that's been tracking issues with high RAM and CPU usage for almost two years now [1]. Early on, if data grew past 2.0 TB it would silently corrupt on certain cluster sizes and when deduplication was enabled [2]. Per the Veeam thread, the "fix" for [1] is only preventative, meaning currently affected volumes will need to be reformatted entirely.
This doesn't excuse the APFS goofs, but silent data corruption and grinding servers to a halt just by writing data to the volume are pretty major showstoppers, never mind that ReFS can't be used for a host of everyday operations (i.e., it's a storage-level solution, not really a daily-driver file system).
[1] - https://forums.veeam.com/veeam-backup-replication-f2/refs-4k...
[2] - https://blogs.technet.microsoft.com/filecab/2017/01/30/windo...