
Was thinking the same thing when I saw those zeros in the checksum field. Perhaps the consequences are significant.

Here's a benchmarking exercise I found: https://www-staging.commandprompt.com/uploads/images/Command...

With a tidy summary:

> Any application with a high shared buffers hit ratio: little difference.

> Any application with a high ratio of reads/writes: little difference.

> Data logging application with a low ratio of reads/inserts, and few updates and deletes: little difference.

> Application with an equal ratio of reads/inserts, or many updates or deletes, and a low shared buffers hit ratio (for example, an ETL workload), especially where the rows are scattered among disk pages: expect double or greater CPU and disk I/O use.

> Run pg_dump on a database where all rows have already been previously selected by applications: little difference.

> Run pg_dump on a database with large quantities of rows inserted to insert-only tables: expect roughly double CPU and disk I/O use.



On my M1 Mac, "dd ... | cksum" takes 3 seconds while "dd | shasum" (SHA-1) takes 2 seconds. So cksum might not be the best tool for performance checking.

There is CPU-specific code in the PG source in src/include/storage/checksum_impl.h.

It is written as a plain nested loop in C, so performance depends entirely on the compiler being able to parallelize or vectorize the code.

I would not be surprised if manually written SIMD code would be faster.


The bottleneck isn't the checksum computation itself at all. It's that to keep checksums valid, we need to protect against the possibility of torn pages even in cases where it wouldn't matter without checksums (i.e. where just individual bits are flipped). That in turn means we need to WAL-log changes we otherwise wouldn't have to - which can be painful.


Interesting. I guess M1 doesn't have the 'crc32' "acceleration" that is included in SSE4.2.



So when using these intrinsics an Intel Core i7 can do 30 GB/s, but the performance check linked above (by isosphere) says only 300 MB/s, i.e. about 1% of that.

Something is amiss here.

If a CPU can do 30 GB/s then a CRC check should not have any real performance impact.


I don't know where you're getting 300 MB/s from.


Page 5 of https://www-staging.commandprompt.com/uploads/images/Command... says "This system can checksum data at about 300 MB/s per core."

The document lacks page numbers; page 5 is the first page with a gray box at the top.


That's measuring 'cksum', which must have an awfully slow implementation. The document notes that this is distinct from measuring PG's checksum performance. (I think it's a pretty useless measurement.)

Earlier (page 4):

> How much CPU time does it take to checksum...

> ...a specific amount of data? This is easy to estimate because PostgreSQL uses the crc32 algorithm which is very simple, and (GNU) Linux has a command line program that does the same thing: cksum.

Yeah, using cksum as an estimate here appears to be very flawed.


That is weird. Seems like crc optimization is quite a rabbit hole.

https://github.com/komrad36/CRC has a massive section about it in the README. Really interesting.


Yeah. crc32 may be simple in theory, but doing it as fast as possible while utilizing the various execution units of modern hardware is challenging.



