Or from the other perspective of the trade-off: one caveat with MSSQL is that ALL concurrent transactions must pay the overhead if only _some_ transactions need serializable guarantees?
Nice job, eugene-khyst. Looks very comprehensive from an initial skim.
I've worked on something in the same space, with a focus on reliable but flexible synchronization to many consumers, where logical replication gets impractical.
> A long-running transaction in the same database will effectively "pause" all event handlers.
… as the approach is based on the xmin-horizon.
My linked code also takes the MVCC snapshot's xip_list into account, to avoid this gotcha.
Also note that when doing a logical restore of a database, you end up with different physical txids, which complicates recovery. (So my approach relies on offsetting the txid and making sure the offset is properly maintained.)
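To make the xip_list idea concrete, here's a minimal sketch (not the linked project's actual code) assuming node-postgres and a hypothetical `outbox` table whose rows record `txid_current()` in a `txid` column. Instead of stalling on the xmin horizon, it skips only the transactions that are still in progress:

```typescript
import { Client } from "pg";

// Sketch only: "outbox" and its "txid" column (populated with
// txid_current() at insert time) are hypothetical names.
async function fetchSafeEvents(client: Client, afterTxid: number) {
  const { rows } = await client.query(
    `WITH snap AS (SELECT txid_current_snapshot() AS s)
     SELECT e.*
       FROM outbox e, snap
      WHERE e.txid > $1
        -- finished for sure: below the snapshot's xmax ...
        AND e.txid < txid_snapshot_xmax(snap.s)
        -- ... and not in the in-progress list (xip_list)
        AND e.txid NOT IN (SELECT txid_snapshot_xip(s) FROM snap)
      ORDER BY e.txid`,
    [afterTxid]
  );
  return rows;
}
```

The consumer still has to come back for the skipped in-progress txids once they commit; the snapshot only tells it which ones to defer, rather than blocking on all of them.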
Reading that thread, it doesn't seem like the official image shipped with a cryptominer at any point; it's more likely that the container got compromised in other ways. (A compromised [superuser connection to Postgres can execute shell commands](https://medium.com/r3d-buck3t/command-execution-with-postgre...), so that seems more likely than the image shipping with a miner.)
Advisory locks are purely in-memory locks, while row locks might ultimately hit disk.
The memory space reserved for locks is finite, so if you were to have workers claim too many queue items simultaneously, you might get "out of memory for locks" errors all over the place.
From the Postgres docs:

> Both advisory locks and regular locks are stored in a shared memory pool whose size is defined by the configuration variables max_locks_per_transaction and max_connections. Care must be taken not to exhaust this memory or the server will be unable to grant any locks at all. This imposes an upper limit on the number of advisory locks grantable by the server, typically in the tens to hundreds of thousands depending on how the server is configured.
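To illustrate why transaction-scoped advisory locks still fit queue claiming despite that limit, a minimal sketch assuming node-postgres and a hypothetical `queue` table: each worker holds at most one slot in the shared lock table, and the lock vanishes at COMMIT/ROLLBACK even if the worker dies mid-job:

```typescript
import { Client } from "pg";

// Sketch: pg_try_advisory_xact_lock() claims the lock without blocking
// and releases it automatically at transaction end, so slots in the
// finite shared lock table can't be leaked by a dying worker.
// Call this inside an open transaction: outside one, the implicit
// per-statement transaction would release the lock immediately.
async function tryClaim(client: Client, itemId: number): Promise<boolean> {
  const { rows } = await client.query(
    "SELECT pg_try_advisory_xact_lock($1) AS claimed",
    [itemId]
  );
  return rows[0].claimed; // false => another worker holds this item
}
```

The shared pool sizes out to roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) entries, so claiming one item per transaction keeps you far from the "out of memory for locks" failure mode.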
It’s the wat I’ve seen have the most security impact.
Deep merging two JSON-parsed objects is innocuous enough everywhere else that most people don't think twice about doing it. Lots of widely used libraries that provide deep-merge utilities have had security vulnerabilities because of this.
I guess you could argue that the wat is that objects coming out of JSON.parse don't have null as their prototype.
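For anyone who hasn't run into it, a minimal self-contained demo; `deepMerge` here is a stand-in for the kind of helper the vulnerable libraries shipped, not any specific library's code:

```typescript
// Naive recursive merge, typical of pre-patch utility libraries.
function deepMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      // Reading target["__proto__"] on a plain object returns
      // Object.prototype, so the recursion merges straight into it.
      target[key] = deepMerge(target[key] ?? {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own key, so it survives
// parsing and Object.keys() happily enumerates it.
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
deepMerge({}, payload);

console.log(({} as any).isAdmin); // true: every object is now "admin"
```

Rejecting `__proto__`/`constructor`/`prototype` keys in the merge, or merging into null-prototype objects, is the usual fix.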