
That's true, but who are the executives that buy this and then force developers to create a monstrous architecture that has all sorts of race conditions outside of the ledger?



To be clear, TB moves the code to the data, rather than the data to the code, and precisely so that you don't have "race conditions outside the ledger".

Instead, all kinds of complicated debit/credit contracts (up to 8k financial transactions at a time, linked together atomically) can be expressed in a single request to the database, composed from a rich set of debit/credit primitives (e.g. two-phase debit/credit with rollback after a timeout), to enforce financial consistency directly in the database.
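For example, a linked two-phase debit/credit pair might look something like this with the tigerbeetle-node client (a rough sketch: the cluster address, account IDs, ledger, and code values are placeholders, and the field names follow my reading of the client's Transfer type):

    import { createClient, TransferFlags } from 'tigerbeetle-node'

    const client = createClient({
      cluster_id: 0n,
      replica_addresses: ['127.0.0.1:3000'], // placeholder address
    })

    // Two pending (two-phase) transfers linked into one atomic chain:
    // either both reserve funds, or neither does. Each reservation
    // expires (rolls back) if not posted within `timeout` seconds.
    const errors = await client.createTransfers([
      {
        id: 1n,
        debit_account_id: 100n,
        credit_account_id: 200n,
        amount: 500n,
        pending_id: 0n,
        user_data_128: 0n, user_data_64: 0n, user_data_32: 0,
        timeout: 60, // roll back automatically after 60s if never posted
        ledger: 1,
        code: 1,
        flags: TransferFlags.linked | TransferFlags.pending,
        timestamp: 0n,
      },
      {
        id: 2n,
        debit_account_id: 200n,
        credit_account_id: 300n,
        amount: 500n,
        pending_id: 0n,
        user_data_128: 0n, user_data_64: 0n, user_data_32: 0,
        timeout: 60,
        ledger: 1,
        code: 1,
        flags: TransferFlags.pending, // last transfer closes the linked chain
        timestamp: 0n,
      },
    ])
    // `errors` lists the index and reason for any rejected transfer;
    // a linked chain succeeds or fails as a unit.

The whole batch is a single request, so the consistency decision happens inside the database rather than across client round-trips.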

On the other hand, moving the data to the code, to make decisions outside the OLTP database, was exactly the anti-pattern we wanted to fix in the central bank switch, which tried to implement debit/credit primitives over a general-purpose DBMS. It's really hard to get these things right on top of Postgres.

And even if you get the primitives right, performance is fundamentally limited by row locks interacting with RTTs and contention. These row locks are not only external but also internal (i.e. how I/O interacts with the CPU inside the DBMS), which is why stored procedures or extensions aren't enough to fix the performance.
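To make that concrete, here's the interactive pattern being criticized, sketched with node-postgres against an assumed accounts(id, balance) table. The row lock on a hot account is held across every client/server round-trip, so a single contended account caps throughput at roughly one transfer per RTT:

    import { Pool } from 'pg'

    const pool = new Pool() // connection settings from the environment

    async function transfer(from: number, to: number, amount: number) {
      const client = await pool.connect()
      try {
        await client.query('BEGIN')                                   // RTT 1
        const { rows } = await client.query(
          'SELECT balance FROM accounts WHERE id = $1 FOR UPDATE',    // RTT 2: row lock taken
          [from],
        )
        if (rows[0].balance < amount) throw new Error('insufficient funds')
        await client.query(
          'UPDATE accounts SET balance = balance - $1 WHERE id = $2', // RTT 3
          [amount, from],
        )
        await client.query(
          'UPDATE accounts SET balance = balance + $1 WHERE id = $2', // RTT 4
          [amount, to],
        )
        await client.query('COMMIT')                                  // RTT 5: lock finally released
      } catch (e) {
        await client.query('ROLLBACK')
        throw e
      } finally {
        client.release()
      }
    }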


Can you expand on why a sproc isn't a good solution (e.g. send a set of requests, process those that are still in a valid state, error those that aren't, and return the responses)?

Knowing the volumes you are dealing with would also help.
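For reference, the shape proposed above is roughly one round-trip carrying a whole batch, with per-item validation inside the procedure. A sketch in TypeScript, where apply_transfers is a hypothetical stored procedure (its name, parameter, and result columns are invented for illustration):

    import { Pool } from 'pg'

    const pool = new Pool()

    interface TransferRequest {
      id: string
      from: number
      to: number
      amount: number
    }

    // One round-trip: the (hypothetical) sproc applies the requests that
    // are still in a valid state and returns a status for each one.
    async function submitBatch(batch: TransferRequest[]) {
      const { rows } = await pool.query(
        'SELECT * FROM apply_transfers($1::jsonb)',
        [JSON.stringify(batch)],
      )
      return rows // e.g. one { id, status } row per request
    }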




