> I just keep everything in memory in native data structures and flush to disk periodically;
Can you expand on what you mean by this? I am interpreting that to mean that you are actually writing to a file and reading from a file rather than using SQL?
They're storing their data in native Rust objects and reading/writing them directly in the server application's RAM. In case the data grows larger than available RAM, they've set up a large swap, which is a file or disk partition the OS uses when no more real RAM is available while acting like nothing is happening (and everything gets 1000x slower). They periodically serialize these in-memory objects to a file on disk, and presumably (they didn't specify, but I feel it's implied) they read that file back and deserialize it during the initialization/startup phase.
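Something along these lines, I imagine (just a minimal sketch; serde_json, the flush interval, and the struct/field names are my own assumptions, they didn't say what format or schedule they actually use):

```rust
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::fs;
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

// Hypothetical shape of the server state; purely illustrative.
#[derive(Serialize, Deserialize, Default)]
struct AppState {
    users: HashMap<u64, String>,
}

fn main() {
    // On startup, load the previous snapshot if there is one, else start empty.
    let state: AppState = fs::read_to_string("state.json")
        .ok()
        .and_then(|s| serde_json::from_str(&s).ok())
        .unwrap_or_default();
    let state = Arc::new(RwLock::new(state));

    // Background thread that periodically flushes the in-memory state to disk.
    let flusher = Arc::clone(&state);
    thread::spawn(move || loop {
        thread::sleep(Duration::from_secs(60));
        let snapshot = serde_json::to_string(&*flusher.read().unwrap()).unwrap();
        // Write to a temp file and rename so a crash mid-write can't
        // corrupt the last good snapshot.
        fs::write("state.json.tmp", &snapshot).unwrap();
        fs::rename("state.json.tmp", "state.json").unwrap();
    });

    // The rest of the server just uses `state` like any other in-memory structure.
    state.write().unwrap().users.insert(1, "alice".into());
}
```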
Correct, although I don't read everything on startup. Some of the data (e.g. per-user data) is only loaded once it's needed.
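For the lazy-loading part, conceptually something like this (very simplified sketch; the fields and file layout are made up for illustration):

```rust
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::fs;

// Hypothetical per-user record; the real fields don't matter here.
#[derive(Serialize, Deserialize)]
struct UserData {
    name: String,
    settings: Vec<String>,
}

#[derive(Default)]
struct UserCache {
    loaded: HashMap<u64, UserData>,
}

impl UserCache {
    /// Hit disk only the first time a user is seen; after that it stays in RAM.
    fn get(&mut self, user_id: u64) -> Option<&UserData> {
        if !self.loaded.contains_key(&user_id) {
            let path = format!("users/{user_id}.json");
            let data: UserData = serde_json::from_str(&fs::read_to_string(path).ok()?).ok()?;
            self.loaded.insert(user_id, data);
        }
        self.loaded.get(&user_id)
    }
}
```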
Also, the swap doesn't make everything slower as long as my active working set is smaller than the available RAM. That is, I have more data loaded into "memory" than I have RAM, but I'm not using all of it all the time, so things are still fast. It's basically an OS-managed database. (If I were using a normal SQL database I'd get similar behavior, where actively used data would sit in RAM and less-used data would be read from disk.)
The obvious difference is that swap files are not as optimized for disk I/O. There is a serious performance issue if you ever have to hit disk often; it all depends on how you structure your in-memory/on-disk data. But it might never be an issue: I've scaled to ten-ish million users in memory, and there is so much else that can go wrong first.