I was hoping for a response, but no one bothered. I had noted the following when I made that comment, and will just wrap up from my end so it can be used by others for reference later.
I'm skeptical that the re-execution approach can scale for complex queries: the latency and throughput improvements would be offset by the computational cost and bottlenecks introduced by its reactivity mechanism (query subscriptions). This might not work at scale and may end up serving niche use cases.
There are various ways to improve throughput and latency for KV stores, so the bar is really high here.
The messaging around DiceDB seems unclear and confusing about its purpose and use cases relative to alternatives, and about how it achieves them, which could just be how it's marketed. But it seems to be a collection of ideas and a work-in-progress project.
I think reducing data-fetching complexity and complex key dependencies for end clients could be appealing, and it would be great to have it at the KV store level, but there is no reason this type of reactivity can't be implemented on top of clients for existing KV stores (like Redis). Basic WATCH with transactions is even offered out of the box in them.
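To illustrate what I mean by building reactivity on top of an existing store: the sketch below is a toy in-memory stand-in (the `ReactiveKV` class and its API are entirely made up for this example, not Redis's actual API). In practice you'd layer the same subscription pattern over a real client, e.g. via Redis pub/sub or keyspace notifications.

```python
from collections import defaultdict
from typing import Any, Callable

class ReactiveKV:
    """Toy in-memory KV store with per-key subscriptions.

    Illustrative only: a real deployment would layer this on a client
    for an existing store (Redis pub/sub, keyspace notifications, etc.)
    rather than reimplement storage.
    """

    def __init__(self) -> None:
        self._data: dict = {}
        self._subs: dict = defaultdict(list)  # key -> list of callbacks

    def get(self, key: str) -> Any:
        return self._data.get(key)

    def set(self, key: str, value: Any) -> None:
        self._data[key] = value
        # Notify subscribers synchronously; a real system would queue this
        # and deliver asynchronously to avoid blocking writers.
        for cb in self._subs[key]:
            cb(key, value)

    def subscribe(self, key: str, cb: Callable[[str, Any], None]) -> None:
        self._subs[key].append(cb)

# Usage: a client keeps a local view coherent via subscriptions.
kv = ReactiveKV()
seen = []
kv.subscribe("user:1", lambda k, v: seen.append((k, v)))
kv.set("user:1", {"name": "Ada"})
```

The point is that the subscription bookkeeping lives entirely client-side; the store itself only needs a change-notification channel, which Redis and friends already provide.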
Deno KV seems nice, but it's vendor-locked. There are also many others like Dragonfly, Valkey, etc.; Redis could still work, and even something built over SQLite can work. Deno has a self-hosted KV on top of SQLite: https://github.com/denoland/denokv
Also, DiceDB's creator gave this talk:
https://hasgeek.com/rootconf/2024/sub/how-we-made-dicedb-a-t...
From that talk and the thread so far, it seems they want to make a kind of super-cache by building a realtime, multi-threaded KV store, improving latency and reducing read load via its reactivity mechanism, thereby solving the problem of cache invalidation.
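To make the scaling concern from above concrete, here is a minimal sketch of the query-subscription idea as I understand it (all names here are my own invention, not DiceDB's design): each subscribed query declares the keys it reads, and any write to one of those keys re-executes the query and pushes the fresh result. The per-write re-execution cost is exactly what I'd worry about for complex queries with many subscribers.

```python
from collections import defaultdict

class QueryCache:
    """Toy sketch of query subscription with re-execution.

    Each query registers the keys it depends on; a write to any
    dependent key re-runs the query and pushes the new result.
    Hypothetical design for illustration, not any real store's API.
    """

    def __init__(self):
        self._data = {}
        self._queries = {}              # query_id -> (query_fn, push_cb)
        self._deps = defaultdict(set)   # key -> set of query_ids

    def subscribe(self, qid, keys, query_fn, push_cb):
        self._queries[qid] = (query_fn, push_cb)
        for k in keys:
            self._deps[k].add(qid)
        push_cb(query_fn(self._data))   # push the initial result

    def set(self, key, value):
        self._data[key] = value
        # Invalidation step: every query depending on this key re-runs.
        # This is the cost center: O(subscribers x query cost) per write.
        for qid in self._deps[key]:
            query_fn, push_cb = self._queries[qid]
            push_cb(query_fn(self._data))

# Usage: subscribe to a sum over two keys, then write to each.
results = []
qc = QueryCache()
qc.subscribe("total", {"a", "b"},
             lambda d: d.get("a", 0) + d.get("b", 0),
             results.append)
qc.set("a", 2)   # re-executes the query
qc.set("b", 3)   # re-executes again
# results now holds the initial result plus one per write: [0, 2, 5]
```

Even in this toy, every write fans out into full query re-executions; with expensive queries or high write rates, that fan-out is where I'd expect the bottleneck.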
I'm not sure how this will be achieved, but there is no harm in trying. From what has been said and shared, the rationale behind this design and its tradeoffs is not clear. The code could be fixed and improved, but providing clarity on this is essential for adoption.