For the most part, the hardware on a hackintosh isn't going to be worse than what Apple is selling. I might even say it's likely better, given that the hackintosh may have ECC memory or SAS/FC-attached disks (although, as others have said, on-disk sector ECC and the ECC on the transport layers (SAS, SATA, etc.) are all much better today than they were 30 years ago). So while the rate of silent hardware-based corruption may be the same or lower, the real reason for using CRC/hashing at the filesystem/application level is to detect software bugs.
The latter may be more prevalent on the hackintosh simply because it is a different hardware environment. A disk controller driver variation, or even having twice as many cores as any Apple product, might be enough to trigger a latent bug.
So basically, I would be willing to bet that the vast majority of data corruption happens due to OS bugs (not just in the filesystem, but in page management, etc.), with firmware bugs on SSDs a distant second. The kinds of failures that get all the press (media failures, link corruption, etc.) rarely corrupt data, because as they occur the first indication is simply a failure to read the data back: the ECCs cannot reconstruct the data and return failure codes. It's only once some enormous number of hard failures has accumulated that a few of them leak through as false positives (the ECC/data protection thinks the data is correct and returns an incorrect block).
The one thing that is more likely is getting the wrong sector back, but overwhelmingly the disk/etc. vendors have gotten smart about ensuring that they encode the sector number alongside the data (and DIF for enterprise products), so that one of the last steps before returning it is verifying that the sector number actually matches the requested sector. That helps to avoid the RAID or SSD firmware bugs that were more common a decade ago.