My (limited) understanding of APFS is that it forgoes some integrity checks on the assumption that they have already been done by lower-level hardware. This is of course a debatable design decision, but it may indeed be unwise to use APFS on non-Apple hardware.
All modern storage hardware has this kind of ECC, and I am unaware of Apple hardware being significantly safer in this respect.
This design decision is notable because other modern filesystems (ReFS, Btrfs, ZFS) do feature additional integrity checks.
I guess the question is whether you believe that Apple, which is famous for marketing and UI, is just smarter than the man-centuries Microsoft, Oracle, and Sun have poured into filesystem research, or whether this is just a bad design decision.
"Explicitly not checksumming user data is a little more interesting. The APFS engineers I talked to cited strong ECC protection within Apple storage devices. Both flash SSDs and magnetic media HDDs use redundant data to detect and correct errors. The engineers contend that Apple devices basically don’t return bogus data. NAND uses extra data, e.g. 128 bytes per 4KB page, so that errors can be corrected and detected. (For reference, ZFS uses a fixed size 32 byte checksum for blocks ranging from 512 bytes to megabytes. That’s small by comparison, but bear in mind that the SSD’s ECC is required for the expected analog variances within the media.) The devices have a bit error rate that’s tiny enough to expect no errors over the device’s lifetime. In addition, there are other sources of device errors where a file system’s redundant check could be invaluable. SSDs have a multitude of components, and in volume consumer products they rarely contain end-to-end ECC protection leaving the possibility of data being corrupted in transit. Further, their complex firmware can (does) contain bugs that can result in data loss."
(sorry for the edits, I finally found the paragraph my memory was referring to)
But if they're so confident in the disk, then why do they checksum the metadata? They should either trust the disk and have no checksums, or not trust the disk and checksum everything.
There are plenty of other reasons not to checksum user data, as it's a choice many filesystems have made, but "we trust the disk" is not a consistent argument.
ZFS is the only widely deployed file system to do data checksumming by default though. You can’t blame APFS for not doing it when no other file system does it either.
I'm ignorant as to how that could matter. It's managed writes to a hard disk. How could the brand of hard disk, or the mobo, or whatever, matter in this situation?
For the most part, the hardware in a hackintosh isn't going to be worse than what Apple is selling. I might even say it's likely better, given that the hackintosh may have ECC memory or SAS/FC-attached disks (although, as others have said, disk sector ECC and ECC on the transport layers (SAS, SATA, etc.) are all much better today than they were 30 years ago). So while the rates of silent hardware-based corruption may be the same or lower, the real reason for using CRC/hashing at the filesystem/application level is to detect software bugs.
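The point about filesystem-level checksums catching software bugs can be sketched with a toy block store: the checksum is recorded as early as possible, so a mismatch on read catches corruption introduced anywhere in the software path, even when the disk's own ECC is perfectly happy. All names here are illustrative, not any real filesystem's API:

```python
import zlib

class ChecksummedStore:
    """Toy block store with a per-block CRC, purely for illustration."""

    def __init__(self):
        self._blocks = {}   # block number -> bytes
        self._sums = {}     # block number -> CRC32 of the data

    def write(self, blkno: int, data: bytes) -> None:
        # Checksum is computed at the top of the stack, before the data
        # passes through caches, drivers, and firmware.
        self._sums[blkno] = zlib.crc32(data)
        self._blocks[blkno] = data

    def read(self, blkno: int) -> bytes:
        data = self._blocks[blkno]
        # Verify on read: a mismatch means *something* between checksum
        # computation and now corrupted the data, regardless of whether
        # the disk's ECC accepted it.
        if zlib.crc32(data) != self._sums[blkno]:
            raise IOError(f"checksum mismatch on block {blkno}")
        return data

store = ChecksummedStore()
store.write(7, b"hello")
assert store.read(7) == b"hello"

# Simulate a software bug corrupting the data after the checksum was
# recorded; the disk-level ECC would never notice this.
store._blocks[7] = b"hellp"
try:
    store.read(7)
    caught = False
except IOError:
    caught = True
assert caught
```

The key design point is end-to-end coverage: hardware ECC only protects the media, while a checksum computed and verified at the filesystem layer brackets the entire I/O path.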
The latter may be more prevalent on the hackintosh simply because it is a different hardware environment. A disk controller driver variation, or even having twice as many cores as any Apple product, might be enough to trigger a latent bug.
So basically, I would be willing to bet that the vast majority of data corruption happens due to OS bugs (not just in the filesystem, but in page management, etc.), with firmware bugs on SSDs a distant second. The kinds of failures that get all the press (media failures, link corruption, etc.) rarely corrupt data, because as a device fails the first indication is simply a failure to read the data back: the ECCs cannot reconstruct the data and just return failure codes. It's only once some enormous number of hard failures has been detected that a few of them leak through as false positives (the ECC/data protection thinks the data is correct and returns an incorrect block).
The one thing that is more likely is getting the wrong sector back, but the disk vendors have overwhelmingly gotten smart about encoding the sector number alongside the data (and DIF for enterprise products), so that one of the last steps before returning a sector is verifying that its recorded sector number actually matches the requested one. That helps to avoid the RAID and SSD firmware bugs that were more common a decade ago.
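The misdirected-I/O protection described above, storing the target sector number with the data and checking it on read, can be sketched as follows. This is a hypothetical model loosely analogous to T10 DIF's guard and reference tags, not any real implementation:

```python
import struct
import zlib

SECTOR_SIZE = 512

def encode_sector(lba: int, payload: bytes) -> bytes:
    """Append an 8-byte LBA tag plus a CRC over (tag + payload)."""
    assert len(payload) == SECTOR_SIZE
    tag = struct.pack("<Q", lba)
    crc = struct.pack("<I", zlib.crc32(tag + payload))
    return payload + tag + crc

def decode_sector(expected_lba: int, raw: bytes) -> bytes:
    payload = raw[:SECTOR_SIZE]
    tag = raw[SECTOR_SIZE:SECTOR_SIZE + 8]
    crc = raw[SECTOR_SIZE + 8:]
    if struct.unpack("<I", crc)[0] != zlib.crc32(tag + payload):
        raise IOError("guard CRC mismatch")
    (stored_lba,) = struct.unpack("<Q", tag)
    if stored_lba != expected_lba:
        # The sector is internally consistent but is not the sector we
        # asked for: a misdirected I/O, not a media error.
        raise IOError(f"wanted LBA {expected_lba}, got LBA {stored_lba}")
    return payload

disk = {}
disk[42] = encode_sector(42, b"\xab" * SECTOR_SIZE)
assert decode_sector(42, disk[42]) == b"\xab" * SECTOR_SIZE

# Firmware/RAID bug: a request for LBA 42 is served LBA 43's contents.
disk[43] = encode_sector(43, b"\xcd" * SECTOR_SIZE)
try:
    decode_sector(42, disk[43])
    misdirect_caught = False
except IOError:
    misdirect_caught = True
assert misdirect_caught
```

Note that a plain data checksum alone would not catch this case, since LBA 43's contents pass their own checksum; it's the embedded sector number that flags the mismatch.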