minikv actually supports a fully S3-compatible API (PUT/GET/BATCH, including TTL extensions and real-time notifications).
By default, the storage engine is segmented/append-only with object records in blob files, not “one file per object”.
However, you can configure a backend (like the in-memory mode for dev/test, or Sled/RocksDB) and get predictable, transparent storage behavior for objects.
Storing each object as an individual file isn’t the default: for durability and atomicity, objects are grouped inside segment files to enable fast compaction, consistent snapshots, and better I/O performance.
If you need “one file per object” for a specific workflow, it’s possible to add a custom backend or tweak volume logic — but as you noted, most production systems move away from that model for robustness.
That said, minikv’s flexible storage API makes experimentation possible if that’s what your use-case demands and you’re fine with the trade-offs.
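To make that concrete, here’s a rough sketch of what a pluggable backend boils down to (the trait and type names here are made up for this comment, not minikv’s actual API):

```rust
use std::collections::HashMap;
use std::io;

/// Illustrative backend abstraction: objects can live in RAM, in
/// append-only segment files, or (as an experiment) one file per
/// object, all behind the same small interface.
pub trait ObjectBackend {
    fn put(&mut self, key: &str, value: Vec<u8>) -> io::Result<()>;
    fn get(&self, key: &str) -> io::Result<Option<Vec<u8>>>;
    fn delete(&mut self, key: &str) -> io::Result<()>;
}

/// Simplest possible backend: everything in RAM (dev/test style).
#[derive(Default)]
pub struct InMemoryBackend {
    objects: HashMap<String, Vec<u8>>,
}

impl ObjectBackend for InMemoryBackend {
    fn put(&mut self, key: &str, value: Vec<u8>) -> io::Result<()> {
        self.objects.insert(key.to_string(), value);
        Ok(())
    }
    fn get(&self, key: &str) -> io::Result<Option<Vec<u8>>> {
        Ok(self.objects.get(key).cloned())
    }
    fn delete(&mut self, key: &str) -> io::Result<()> {
        self.objects.remove(key);
        Ok(())
    }
}
```

A “one file per object” experiment would just be another implementation of the same kind of trait, writing each value to its own path instead of appending to a shared segment.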
Let me know what your usage scenario is, and I can advise on config or feature options!
Yes, I do split my working tree into separate commits whenever possible!
I use interactive staging (git add -p) to split logical chunks: features, fixes, cleanups, and documentation are committed separately for clarity.
Early in the project (lots of exploratory commits), some changes were more monolithic, but as minikv matured, I've prioritized clean commit history to make code review and future changes easier.
Always happy to get workflow tips — I want the repo to be easy to follow for contributors!
Absolutely: for all meaningful work I prefer small, logical commits using git add -p or similar, both for history clarity and for reviewer sanity.
In initial “spike” or hack sessions (see the early commits :)), changes were sometimes more monolithic, but as the codebase stabilized I moved to tidy, atomic commit granularity.
I welcome suggestions on workflow or PR polish!
There’s not an “official” image on Docker Hub yet, but the repo ships with a ready-to-use Dockerfile and a Compose cluster example.
You can build with docker build . and spin up multi-node clusters trivially.
Static Rust binaries make the image compact (typically ≤30MB zipped; nothing compared to MinIO :)), with no heavy runtimes.
Requirements are dead simple: a recent Docker engine, any x86_64 (or ARM) host, and a few tens of MB RAM per instance at low load, scaling with data size/traffic.
I plan to push an official image (and perhaps an OCI image with scratch base) as the project matures — open to suggestions on ideal platforms/formats.
Very relevant question!
The memory profile in minikv depends on usage scenario and storage backend.
- With the in-memory backend: Every value lives in RAM (with HashMap index, WAL ring buffer, TTL map, and Bloom filters). For a cluster with a few million objects, you’ll typically see a node use as little as 50–200 MB, scaling up with active dataset size and in-flight batch writes (see the rough estimator after this list);
- With RocksDB or Sled: Persistent storage keeps RAM use lower for huge sets but still caches hot keys/metadata and maintains Bloom + index snapshots (both configurable). The minimum stays light, but DB block cache, WAL write buffering, and active transaction state all add some baseline RAM (tens to a few hundred MB per node in practice);
- Heavy load (many concurrent clients, transactions, or CDC enabled): Buffers, Raft logs, and transaction queues scale up, but you can cap these in config (batch size, CDC buffer, WAL fsync policy, etc.);
- The Prometheus /metrics endpoint and the admin API expose live stats, so you can observe resource use per node in production.
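To give those in-memory figures a concrete shape, here’s a back-of-envelope estimator (the 64-byte per-entry overhead is an assumed ballpark for illustration, not a measured minikv number):

```rust
/// Back-of-envelope RAM estimate for a purely in-memory backend.
/// `per_entry_overhead` stands in for hash-map slot, TTL entry, and
/// index bookkeeping; 64 bytes is an assumption, not a measured value.
fn estimate_ram_bytes(
    num_objects: u64,
    avg_key_bytes: u64,
    avg_value_bytes: u64,
    per_entry_overhead: u64,
) -> u64 {
    num_objects * (avg_key_bytes + avg_value_bytes + per_entry_overhead)
}

fn main() {
    // Example: 1 million objects, 32-byte keys, 40-byte values.
    let bytes = estimate_ram_bytes(1_000_000, 32, 40, 64);
    println!("~{} MB", bytes / (1024 * 1024));
    // Prints ~129 MB, before Bloom filters, the WAL ring buffer, and
    // in-flight batches are added on top.
}
```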
If you have a specific workload or dataset in mind, feel free to share it and I can benchmark or provide more precise figures!
Thanks for the feedback and for the question!
A number of choices in minikv are explicitly made to explain distributed system ideas clearly, even if not always optimal for hyperscale prod environments:
- Raft + 2PC together, as above, so people can see how distributed consensus and cross-shard atomicity actually operate and interact (with their trade-offs);
- Several subsystems are written for readability and transparency (clean error propagation, explicit structures) even if that means a few more allocations or some lost microseconds (a small illustration follows at the end of this reply);
- The storage layer offers different backends (RocksDB, Sled, in-memory) to let users experiment and understand their behavior, not because it’s always ideal to support so many;
- Features such as CDC (Change Data Capture), admin metrics, WAL status, and even deliberately verbose logs are exposed for teaching/tracing/debugging, though those might be reduced or hardened in production;
- Much of the CLI/admin API exposes “how the sausage is made,” which is gold for learning but might be hidden in a SaaS-like setting;
So yes, if I targeted only hyperscale production, some internals would be simplified or streamlined, but the educational and transparency value is central to this project’s DNA.
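As a tiny illustration of the “readability over microseconds” point (the types below are invented for this comment, not minikv’s real ones): an explicit error that owns its context costs a String allocation per failure, but logs and traces read clearly.

```rust
use std::fmt;

/// Explicit, self-describing errors: a terser design could return bare
/// integer codes and skip the allocation, but this reads far better.
#[derive(Debug)]
enum StoreError {
    KeyNotFound { key: String },
    SegmentCorrupted { segment_id: u64 },
}

impl fmt::Display for StoreError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            StoreError::KeyNotFound { key } => write!(f, "key not found: {key}"),
            StoreError::SegmentCorrupted { segment_id } => {
                write!(f, "segment {segment_id} failed checksum validation")
            }
        }
    }
}

fn lookup(key: &str) -> Result<Vec<u8>, StoreError> {
    // Owning the key costs an allocation on every failure; the payoff
    // is an error message that explains itself in logs and traces.
    Err(StoreError::KeyNotFound { key: key.to_string() })
}

fn main() {
    if let Err(e) = lookup("user:42") {
        println!("lookup failed: {e}");
    }
    println!("{}", StoreError::SegmentCorrupted { segment_id: 7 });
}
```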
Thank you for this sharp and detailed question!
In minikv, both Raft and 2PC are purposefully implemented, which may seem “overkill” in some contexts, but it serves both education and production-grade guarantees:
- Raft is used for intra-shard strong consistency: within each "virtual shard" (256 in total), data and metadata are replicated via Raft (with leader election and log replication), not just for cluster membership;
- 2PC (Two-Phase Commit) is only used when a transaction spans multiple shards: this allows atomic, distributed writes across multiple partitions. Raft alone is not enough for atomicity here, hence the 2PC overlay;
- The design aims to illustrate real-world distributed transaction tradeoffs, not just basic data replication. It helps understand what you gain and lose with a layered model versus simpler replication like chain replication (which, as you noted, is more common for the data path in some object stores).
So yes, in a pure object store, consensus for data replication is often skipped in favor of lighter-weight methods. Here, the explicit Raft+2PC combo is an architectural choice for anyone learning, experimenting, or wanting strong, multi-shard atomicity.
In a production system focused only on throughput or simple durability, some of this could absolutely be streamlined.
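To make the layering concrete, here’s a deliberately simplified sketch of how a write gets routed in this kind of design. The 256-shard count comes from the description above; the names and structure are invented for this comment, not minikv’s real code:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

const NUM_SHARDS: u64 = 256; // virtual shards, as described above

/// Map a key to one of the virtual shards (toy hash routing).
fn shard_of(key: &str) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish() % NUM_SHARDS
}

/// Which commit path a transaction takes in this simplified model.
#[derive(Debug)]
enum CommitPath {
    /// All keys land on one shard: that shard's Raft log alone gives
    /// ordering and replication, no cross-shard coordination needed.
    SingleShardRaft { shard: u64 },
    /// Keys span shards: a 2PC coordinator runs prepare/commit across
    /// the involved shards, each shard still replicating via Raft.
    TwoPhaseCommit { shards: Vec<u64> },
}

fn plan_commit(keys: &[&str]) -> CommitPath {
    let unique: HashSet<u64> = keys.iter().copied().map(shard_of).collect();
    let mut shards: Vec<u64> = unique.into_iter().collect();
    shards.sort_unstable();
    if shards.len() == 1 {
        CommitPath::SingleShardRaft { shard: shards[0] }
    } else {
        CommitPath::TwoPhaseCommit { shards }
    }
}

fn main() {
    println!("{:?}", plan_commit(&["user:42"]));
    println!("{:?}", plan_commit(&["user:42", "order:9000", "invoice:7"]));
}
```

The real machinery is obviously more involved, but the decision point is the same idea: one shard means the shard’s Raft group alone, several shards mean 2PC layered on top.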
Thanks a lot!
I make distinct commits “every 30s” because I stay focused and test the project constantly.
If the CI is green, I don’t touch anything.
If not, I work on the project until the CI is fully green.
Yes, in minikv, I set up GitHub Actions for automated CI.
Every push or PR triggers tests, lint, and various integration checks — with a typical runtime of 20–60 seconds for the core suite (thanks to Rust’s speed and caching).
This means that after a commit, I get feedback almost instantly: if a job fails, I see the logs and errors within half a minute, and if there’s a fix needed, I can push a change right away.
Rapid CI is essential for catching bugs early, allowing fast iteration and a healthy contribution workflow.
I sometimes use small, continuous commits (“commit, push, fix, repeat”) during intense development or when onboarding new features, and the fast CI loop helps maintain momentum and confidence in code quality.
If you’re curious about the setup, it’s all described in LEARNING.md and visible in the repo’s .github/workflows/ scripts!
I had the chance to request a review of my first post (which was flagged) after emailing the HN moderators.
I didn’t use AI for the codebase, only for the .md files, and there’s no problem with that.
My project was reviewed by moderators, don't worry.
If the codebase or architecture had been AI-generated, this post would not have been authorized, and therefore it would not have been published.
No, AI only helped me rewrite the .md files; the majority of the docs were written by myself, and I mostly asked it for help with formatting, for example.