Hacker News | brasetvik's comments


pganalyze's blog is a goldmine of Postgres tidbits, a lot of which is recapped in their YouTube channel: https://www.youtube.com/@pganalyze6516

I'm not at all affiliated, just a happy consumer of their material.


I didn't realize they had a YT channel, thanks for the share!


100%!


Or, from the other perspective on the trade-off: one caveat with MSSQL is that ALL concurrent transactions must pay the overhead if only _some_ transactions need serializable guarantees?


Only if they touch the same data. If they are touching disjoint sets of data then there is no overhead to be paid by non-SERIALIZABLE transactions.


There has been some recent improvement to locking behavior:

https://learn.microsoft.com/en-us/sql/relational-databases/p...


A bit of birthday paradox too? :)


Nice job, eugene-khyst. Looks very comprehensive from an initial skim.

I've worked on something in the same space, with a focus on reliable but flexible synchronization to many consumers, where logical replication gets impractical.

I have a mind to do a proper writeup, but at least there is code at https://github.com/cognitedata/txid-syncing (MIT-licensed) and a presentation at https://vimeo.com/747697698

The README mentions …

> A long-running transaction in the same database will effectively "pause" all event handlers.

… as the approach is based on the xmin-horizon.

My linked code also involves the MVCC snapshot's xip_list, to avoid this gotcha.

Also, note that when doing a logical restore of a database, you're working with different physical txids, which complicates recovery. (So my approach relies on offsetting the txid and making sure the offset is properly maintained)
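The snapshot-based check this hints at can be sketched as a pure function. This is a hypothetical helper, not the linked code itself, and it ignores 64-bit epoch/wraparound handling; the snapshot fields are as reported by Postgres' pg_current_snapshot():

```javascript
// Hypothetical sketch of a snapshot-based completeness check. Given a
// Postgres snapshot (xmin, xmax, xip_list):
//   - txids below xmin had all completed when the snapshot was taken,
//   - txids at or above xmax had not been assigned yet,
//   - in between, a txid is still in progress iff it appears in xip_list.
// A change stamped with `txid` is safe to hand to consumers only once its
// transaction has completed:
function txidCompleted(txid, { xmin, xmax, xipList }) {
  if (txid < xmin) return true;     // older than every in-progress txid
  if (txid >= xmax) return false;   // not yet started as of this snapshot
  return !xipList.includes(txid);   // in the window: consult xip_list
}

// Example: the snapshot "10:20:12,15" in Postgres' text format.
const snapshot = { xmin: 10, xmax: 20, xipList: [12, 15] };
console.log(txidCompleted(5, snapshot));  // true
console.log(txidCompleted(12, snapshot)); // false (still in progress)
console.log(txidCompleted(13, snapshot)); // true
console.log(txidCompleted(25, snapshot)); // false (not yet assigned)
```

Checking xip_list rather than only the xmin horizon is what lets consumers advance past a long-running transaction instead of stalling behind it.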


Thanks for sharing.

> My linked code works with involving the MVCC snapshot's xip_list as well, to avoid this gotcha.

I will definitely take a look. It would be great to fix this; the problem really concerns me, although in most cases it is not critical.


This is also a pretty cool presentation on the topic: https://www.blackhat.com/docs/us-17/thursday/us-17-Tsai-A-Ne...


Some overlap, but my similar post mentions a few other things too, with a lot of links to sources to learn more :)

https://medium.com/cognite/postgres-can-do-that-f221a8046e


Reading that thread, it doesn't seem like the official image shipped with any cryptominer at any point; it's more likely that the container got compromised in other ways. A compromised superuser connection to Postgres can execute shell code (https://medium.com/r3d-buck3t/command-execution-with-postgre...), so that seems more likely than the image shipping with a miner.


Several other issues also mention this problem across versions, with one issue mentioning that changing the default password fixed the issue.

It looks like the miner was installed because the Postgres port got exposed with a weak password.


Advisory locks are purely in-memory locks, while row locks might ultimately hit disk.

The memory space reserved for locks is finite, so if you were to have workers claim too many queue items simultaneously, you might get "out of memory for locks" errors all over the place.

> Both advisory locks and regular locks are stored in a shared memory pool whose size is defined by the configuration variables max_locks_per_transaction and max_connections. Care must be taken not to exhaust this memory or the server will be unable to grant any locks at all. This imposes an upper limit on the number of advisory locks grantable by the server, typically in the tens to hundreds of thousands depending on how the server is configured.

https://www.postgresql.org/docs/current/explicit-locking.htm...
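To put a rough number on that limit, the docs give the shared lock table's capacity as max_locks_per_transaction * (max_connections + max_prepared_transactions). A small sketch of the arithmetic, assuming default settings:

```javascript
// The shared lock table holds roughly
//   max_locks_per_transaction * (max_connections + max_prepared_transactions)
// lock slots (per the Postgres docs), shared by ALL lock types. A queue
// design that takes one advisory lock per claimed item eats into this
// same budget.
const lockTableSlots = (maxLocksPerTx, maxConnections, maxPreparedTx = 0) =>
  maxLocksPerTx * (maxConnections + maxPreparedTx);

// With default settings (64, 100, 0):
console.log(lockTableSlots(64, 100)); // 6400
```

So with defaults, a few workers each claiming thousands of items at once can plausibly run the whole server out of lock slots.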


  >> JSON.parse("{}")["__proto__"]["A"] = "T"
  "T"
  >> W = {}
  Object {  }
  >> W.A
  "T"


And this is not deserving of WAT. This is actually a result of how awesome JavaScript is.

But if you ever actually do this, then... WAT.


That’s literally just prototype inheritance, plus what I assume is a display nicety in Node.

What alternative behaviour would you expect?


It’s the wat I’ve seen have the most security impact.

Deep merging two JSON parsed objects is innocuous enough everywhere else that most don’t think twice about doing it. Lots of widely used libraries that provide deep merging utilities have had security vulnerabilities because of this.

I guess you could argue that the wat is that objects coming out of JSON.parse don’t have null as their prototype.
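A minimal sketch of the hazard: `naiveMerge` is a made-up stand-in for the vulnerable helpers those libraries shipped, and the key blocklist below is one common mitigation (another is building objects with null prototypes):

```javascript
// JSON.parse creates "__proto__" as an OWN property of the result, so a
// naive recursive merge walks into it and lands on Object.prototype.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === "object" && typeof target[key] === "object") {
      naiveMerge(target[key], value); // for "__proto__", target[key] IS Object.prototype
    } else {
      target[key] = value;
    }
  }
  return target;
}

const evil = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, evil);
console.log({}.polluted); // true: every plain object is now "polluted"
delete Object.prototype.polluted; // undo the damage for the demo

// One common mitigation: refuse to merge the dangerous keys.
function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (key === "__proto__" || key === "constructor" || key === "prototype") continue;
    const value = source[key];
    if (value && typeof value === "object" && typeof target[key] === "object") {
      safeMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

safeMerge({}, evil);
console.log({}.polluted); // undefined
```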

