We prioritize both!

We actually measured latency and throughput to find the efficient frontier: the number of logical transactions per physical DBMS query that optimizes both.

What you find is that the relationship between latency and throughput looks more like a U-shaped curve.

If you process only 1 debit/credit at a time, you get worse throughput but also worse latency, because things like networking or fsync have a fixed cost component: your system can't process incoming work fast enough, so queues build up, and queueing time dominates latency.

Whereas as you process more debits/credits per batch, you get better throughput but also better latency, because for the same fixed costs your system is able to do more work, and so keeps queueing times short.
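To make the fixed-cost amortization concrete, here's a minimal sketch. The cost figures are assumptions for illustration (a 1 ms fixed cost per physical query for networking/fsync, 1 µs of marginal work per debit/credit), not TigerBeetle's measured numbers:

```python
# Assumed costs, purely illustrative:
FIXED_COST_MS = 1.0   # fixed cost per physical query (network round trip, fsync)
PER_ITEM_MS = 0.001   # marginal cost per debit/credit in the batch

def per_item_service_time_ms(batch_size: int) -> float:
    """Average service time per transfer: the fixed cost is amortized
    across every item in the batch."""
    return (FIXED_COST_MS + PER_ITEM_MS * batch_size) / batch_size

def throughput_per_second(batch_size: int) -> float:
    """Transfers completed per second at a given batch size."""
    return 1000.0 / per_item_service_time_ms(batch_size)

for b in (1, 100, 8000):
    print(b, per_item_service_time_ms(b), throughput_per_second(b))
```

At batch size 1 each transfer pays the full 1 ms fixed cost; at 8,000 per batch that same cost is split 8,000 ways, so both per-item service time and queueing pressure drop. (The model only captures the amortization side; in a real system, very large batches eventually push latency back up, which is where the U-shape comes from.)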

At some point, which for TigerBeetle tends to be around 8k debits/credits per batch, you get the best of both, and beyond that latency starts increasing again.

You can think of this like the Eiffel Tower. If you only let 1 person into the elevator at a time, you're not prioritizing latency, because queues are going to build up. What you want instead is to find the sweet spot of the lift's capacity and let that many people in at a time (or, if there's no queue, let 1 person in and send them up immediately; then let the queue build while the lift is away and batch when it comes back!).
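The lift policy above can be sketched as a small batcher. This is a hypothetical illustration of the dispatch rule, not TigerBeetle's implementation; all names here are made up:

```python
from collections import deque

class Batcher:
    """Sketch of the 'lift' policy: if nothing is in flight, dispatch a
    request immediately; while one is in flight, queue new arrivals and
    flush them as a single batch (up to capacity) when it completes."""

    def __init__(self, capacity: int, send):
        self.capacity = capacity  # max items per physical query (lift capacity)
        self.send = send          # callback that ships a list of items
        self.queue = deque()
        self.in_flight = False

    def submit(self, item) -> None:
        if not self.in_flight:
            # Empty lift, no queue: send the single rider up right away.
            self.in_flight = True
            self.send([item])
        else:
            # Lift is busy: wait, letting the batch build naturally.
            self.queue.append(item)

    def on_complete(self) -> None:
        """Called when the in-flight batch finishes (the lift returns)."""
        if self.queue:
            n = min(self.capacity, len(self.queue))
            self.send([self.queue.popleft() for _ in range(n)])
        else:
            self.in_flight = False
```

The nice property of this rule is that batch size adapts to load: at low load every request goes out alone with minimal latency, and under load batches grow on their own up to capacity, recovering the throughput side of the curve.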
