
> It seems strange that the filesystem driver would cache 500 GB of sequentially written data in RAM.

That was the most interesting/worrying part of TFA, and I would love to see the article clarify how the checksum tests were conducted.

Presumably, the "md5" command-line tool has no special fallback to the filesystem's checksum cache (if it does, rather a lot of my life has been a lie, I'm afraid). Assuming that's the case, could we conclude that, if the "lost" writes totalled $X GB of data, any evil memory-caching of the file would only work in the presence of at least $X GB of free system memory (RAM plus swap)?

I'd also be interested in learning what happens if there's less than that amount of memory available. Will the checksum fail? Will an error occur elsewhere? Will the system have some sort of memory (and swap) exhaustion failure/panic?
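One way to test that assumption is to force the second read to come from the medium rather than the page cache. A minimal sketch, assuming a Linux host; the file path is illustrative, not from TFA (on macOS, `purge` plays a roughly similar role to dropping caches):

    # Path is illustrative, not from TFA.
    FILE=/mnt/image/test.bin

    # First checksum: freshly written data may still be served
    # entirely from the page cache.
    md5sum "$FILE"

    # Flush dirty pages, then ask the kernel to drop clean
    # page-cache entries so the next read must hit the disk.
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches

    # Second checksum: a mismatch here would mean the first read
    # never touched the medium.
    md5sum "$FILE"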



The video embedded in TFA shows md5 reporting identical checksums before unmounting the disk image, so it must be reading the data from a cache.
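That matches how the page cache behaves: reads of recently written data are satisfied from memory until the pages are evicted or the volume is unmounted. A sketch of how the same test could be repeated against the medium itself, assuming the image in TFA is a macOS disk image (the image name and mount point are illustrative):

    # Image name and mount point are illustrative, not from TFA.
    md5 /Volumes/Test/data.bin    # likely served from the page cache

    # Detaching the image discards its cached pages...
    hdiutil detach /Volumes/Test

    # ...so after re-attaching, the checksum reflects what
    # actually landed on the medium.
    hdiutil attach test.dmg
    md5 /Volumes/Test/data.bin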



