RocksDB has two big limitations that preclude its use for many types of high-performance data infrastructure (which it sounds like the OP's use case was). First, its throughput is worse by an integer factor than what a purpose-built design can achieve for some applications. Second, it isn't designed to work well at very large storage volumes. Both are straightforward to remedy if you design your own storage engine or use an alternative one. There are storage engines that will happily drive a petabyte of storage across a large array of NVMe devices at the theoretical limits of the hardware, though not so much in open source.
Another thing to consider is that you lose significant performance along several dimensions if your storage I/O scheduler design is not tightly coupled to your execution scheduler design. While coupling them requires writing more code, it also eliminates a bunch of rough edges. This alone is why many database-y applications write their own storage engines. For people who do it for a living, writing an excellent custom storage engine isn't that onerous.
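To make the coupling argument concrete, here's a toy sketch (hand-rolled, not Seastar's or any real engine's API; all names are illustrative): when one scheduler owns both the CPU run queue and the I/O submission queue, a single priority policy orders both, so a latency-critical read never queues behind a background compaction's CPU work, and vice versa.

```cpp
// Toy unified scheduler: one dispatch loop, one priority policy for CPU + I/O.
#include <cstdint>
#include <functional>
#include <queue>

enum class Priority : uint8_t { Background = 0, Normal = 1, LatencyCritical = 2 };

struct Task {
    Priority prio;
    std::function<void()> fn;
    // Max-heap orders by priority, so top() is the most urgent task.
    bool operator<(const Task& other) const { return prio < other.prio; }
};

class UnifiedScheduler {
    std::priority_queue<Task> cpu_queue_;
    std::priority_queue<Task> io_queue_;  // stands in for an io_uring/AIO submission ring
public:
    void submit_cpu(Priority p, std::function<void()> fn) { cpu_queue_.push({p, std::move(fn)}); }
    void submit_io(Priority p, std::function<void()> fn)  { io_queue_.push({p, std::move(fn)}); }

    // Because one loop sees both queues, priorities are comparable across
    // I/O and compute; decoupled schedulers can't make this trade-off.
    void run_once() {
        auto top_prio = [](auto& q) { return q.empty() ? -1 : static_cast<int>(q.top().prio); };
        while (!cpu_queue_.empty() || !io_queue_.empty()) {
            auto& q = top_prio(io_queue_) >= top_prio(cpu_queue_) ? io_queue_ : cpu_queue_;
            Task t = q.top();
            q.pop();
            t.fn();
        }
    }
};

int main() {
    UnifiedScheduler s;
    s.submit_cpu(Priority::Background, [] { /* compaction step */ });
    s.submit_io(Priority::LatencyCritical, [] { /* produce-path fsync */ });
    s.run_once();  // the fsync dispatches before the compaction step
}
```

Real engines replace the toy queues with actual I/O submission and deadline accounting, but the design point is the same: the scheduler that knows a task's priority is the same one deciding when its I/O is issued.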
RocksDB is a fine choice for applications where performance and scale are not paramount or your hardware is limited. On large servers with hefty workloads, you'll probably want to use something else.
agreed w/ andrew. rocksdb is pretty heavy. for streaming logs, something much, much simpler yields significant performance improvements, especially when tied to the I/O+CPU priority scheduling. a sketch of what "simpler" means is below.
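For illustration, here's a hypothetical minimal append-only log segment of the kind a streaming-log workload wants instead of a full LSM tree; the class name, framing format, and index layout are my own simplifications, not any particular system's:

```cpp
// Minimal append-only segment: length-prefixed sequential writes, an
// in-memory offset index, and no compaction or merge machinery at all.
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

class LogSegment {
    std::FILE* f_;
    uint64_t base_offset_;     // logical offset of the first record in this segment
    uint64_t next_offset_;
    std::vector<long> index_;  // logical offset -> byte position in the file
public:
    LogSegment(const std::string& path, uint64_t base)
        : f_(std::fopen(path.c_str(), "ab+")), base_offset_(base), next_offset_(base) {}
    ~LogSegment() { if (f_) std::fclose(f_); }

    // Append is one sequential write: no write amplification beyond the
    // record and its 4-byte length prefix.
    uint64_t append(const std::string& payload) {
        std::fseek(f_, 0, SEEK_END);       // append mode writes at EOF; make ftell reflect that
        index_.push_back(std::ftell(f_));
        uint32_t len = static_cast<uint32_t>(payload.size());
        std::fwrite(&len, sizeof(len), 1, f_);
        std::fwrite(payload.data(), 1, payload.size(), f_);
        return next_offset_++;
    }

    void flush() { std::fflush(f_); /* a real impl would batch fdatasync calls */ }
};
```

A write path like this is trivially sequential, which is exactly what lets an I/O scheduler prioritize it cheaply; an LSM's background compactions are what make that coupling hard with rocksdb.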
This is a hard question because everything really depends on your threading model; one has to start w/ the threading model. At the time I wrote the first line of code in Jan 2019, there wasn't anything amenable to the seastar::future<> / task-based scheduler with truly async IO (enforced by flagging a reactor stall if a task runs longer than 500 micros), so we wrote our own from scratch. In fact we wrote it many times over. The first version attempted to use flatbuffers atop my old project - https://github.com/smfrpc/smf - but the linearization of buffers proved too costly for long-running processes, which led to the fragmented buffer approach in the blog post mentioned.
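To show what the fragmented-buffer approach buys you, here is a rough sketch of the idea (my own simplification, not the actual iobuf code from the post): appends fill independently allocated fragments instead of relocating one contiguous region, which is exactly the copy that flatbuffer linearization forces on long-running processes.

```cpp
// Fragmented buffer sketch: growth appends a new fragment; existing bytes
// are never moved, so there is no O(n) linearization step on append.
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <list>
#include <memory>

class fragmented_buffer {
    struct fragment {
        std::unique_ptr<char[]> data;
        size_t size;
        size_t used = 0;
        explicit fragment(size_t n) : data(new char[n]), size(n) {}
    };
    std::list<fragment> frags_;
    size_t total_ = 0;
public:
    // Appending only ever touches the last fragment, unlike a flat buffer
    // that must be reallocated and copied into one contiguous region.
    void append(const char* src, size_t n) {
        if (frags_.empty() || frags_.back().used == frags_.back().size)
            frags_.emplace_back(next_fragment_size());
        while (n > 0) {
            fragment& f = frags_.back();
            size_t take = std::min(n, f.size - f.used);
            std::memcpy(f.data.get() + f.used, src, take);
            f.used += take;
            src += take;
            n -= take;
            total_ += take;
            if (n > 0) frags_.emplace_back(next_fragment_size());
        }
    }
    size_t size() const { return total_; }
private:
    // Grow fragment sizes geometrically, capped, to amortize allocator
    // pressure over a long-running process; the exact policy is a tuning knob.
    size_t next_fragment_size() const {
        return frags_.empty() ? 512 : std::min<size_t>(frags_.back().size * 2, 128 * 1024);
    }
};
```

The flip side is that consumers must iterate fragments (or scatter-gather them into the I/O layer) rather than assume one contiguous pointer, which is why this pairs naturally with an async, iovec-friendly reactor.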