I'm not "we" but I have some experience in this area.
Computers are fast, basically. ACID transactions can be slow (if they write to "the" disk before returning success), but just processing data is alarmingly speedy.
If you break things down into small operations and aggregate over a whole day, the numbers always look big. The monitoring system that I wrote for Google Fiber ran on one machine and processed 40 billion log lines per day, with only a few seconds of latency from upload start -> dashboard/alert status updated. (We even wrote to Spanner once per upload to store state between uploads, and this didn't even register as an increase in load on their side. Several hundred thousand globally-consistent transactional writes per minute without breaking a sweat. Good database!)
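For perspective, 40 billion lines/day works out to roughly 463,000 lines/second sustained (40e9 / 86,400 ≈ 462,963). A toy single-threaded aggregation loop in Java (nothing to do with the actual Fiber code, purely an illustration) handles far more than that on one core:

    import java.util.HashMap;
    import java.util.Map;

    public class ThroughputDemo {
        public static void main(String[] args) {
            // Count "log lines" per key: the simplest possible aggregation.
            Map<String, Long> counts = new HashMap<>();
            String[] keys = {"node-a", "node-b", "node-c"};
            long n = 50_000_000L;  // ~108 seconds' worth at 463k lines/s
            long start = System.nanoTime();
            for (long i = 0; i < n; i++) {
                counts.merge(keys[(int) (i % keys.length)], 1L, Long::sum);
            }
            double secs = (System.nanoTime() - start) / 1e9;
            System.out.printf("%,.0f ops/sec%n", n / secs);
        }
    }

On any modern machine this reports a rate well above the sustained target, which is the point: plain in-memory processing is rarely the bottleneck; durable writes are.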
apenwarr wrote a pretty detailed look into the system here: https://apenwarr.ca/log/20190216 And like him, I miss having it every day.
I'm planning to write a book on how to write reactive applications like that: mostly a collection of observations, tips, tricks, and patterns for reactive composition, plus some very MongoDB-specific solutions.
Not sure how many people would be interested. Reactor has quite a steep learning curve and very little literature on how to use it for anything non-trivial.
The aim is not just to enable good throughput, but to achieve it without compromising the clarity of the implementation, which is where I think reactive programming, and specifically ReactiveX/Reactor, shines.
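To give a flavor of what I mean by reactive composition (a minimal sketch of my own, not an excerpt from the book; it assumes reactor-core on the classpath): aggregate a stream of log lines into per-second counts, declaratively and without blocking the producer.

    import java.time.Duration;
    import reactor.core.publisher.Flux;

    public class WindowedCounts {
        public static void main(String[] args) throws InterruptedException {
            // Simulate log lines arriving continuously.
            Flux<String> lines = Flux.interval(Duration.ofMillis(1))
                    .map(i -> "host" + (i % 3) + " request served")
                    .take(3000);

            lines
                // Slice the stream into one-second windows.
                .window(Duration.ofSeconds(1))
                // Aggregate each window independently; every stage stays
                // a small, testable, declarative operator.
                .flatMap(Flux::count)
                .subscribe(count -> System.out.println("lines/sec: " + count));

            // Keep the JVM alive for the demo; a real service would hold
            // the subscription open instead of sleeping.
            Thread.sleep(4000);
        }
    }

The same pipeline shape scales from a toy like this to real ingestion: swap the source for a network stream and the aggregation for whatever your dashboard needs, and the structure of the code stays readable.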
I'm interested in getting your book published. I've had a career in publishing and specialist media, a lot of it spent on problems related to your subject. I'm semi-retired and have risk capital to get you the right distribution while maintaining well above industry-standard terms. Email in profile.
Thanks. I will try to self-publish. I want to keep freedom over the content and the target audience, and I'm not looking for the acclaim of having my name on a book from a well-known publisher. I'm just hoping to help people solve their problems.