We got hung up on the migration tooling for popular frameworks. If we can get those migrations working with minimal drama, we basically want to show people a “global full stack”: app + cache + database.
Hi, product manager from CRL here. You're right: we hope to demonstrate that CockroachDB is built to scale horizontally. A CockroachDB deployment can grow simply by adding more nodes to the cluster, which in turn scales throughput linearly. We posted the most recent published Aurora numbers as a comparison to demonstrate how architecture influences scale.
We also hear your point about efficiency: tpmC (throughput) alone isn't sufficient to compare systems without taking hardware into account, which is why TPC-C asks users to report a price/tpmC (price-per-performance) figure. We ran this price comparison previously in this blog post: https://www.cockroachlabs.com/blog/cockroachdb-2dot1-perform.... The price/tpmC is even lower in 19.2 because we can achieve greater tpmC with fewer nodes.
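To make the price-per-performance idea concrete, here's a minimal sketch of how a price/tpmC figure is derived. All numbers below are hypothetical placeholders for illustration, not CockroachDB's published results:

```python
# Hypothetical illustration of TPC-C price/performance (price per tpmC).
# The costs and throughput below are made up; see the linked blog post
# for the actual published figures.

def price_per_tpmc(total_system_cost_usd: float, tpmc: float) -> float:
    """TPC-C price/performance: total system cost divided by throughput."""
    return total_system_cost_usd / tpmc

# Fewer nodes at the same throughput means a lower (better) price/tpmC.
cost_135_nodes = 135 * 1_000.0  # hypothetical cost per node over the pricing period
cost_90_nodes = 90 * 1_000.0

tpmc = 100_000.0  # hypothetical sustained throughput

print(price_per_tpmc(cost_135_nodes, tpmc))  # 1.35
print(price_per_tpmc(cost_90_nodes, tpmc))   # 0.9
```

The point of the metric is exactly what's described above: hitting the same tpmC with fewer nodes drives the cost side down, so the price/tpmC improves.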
This page wasn't meant to be competitive; we simply showed Aurora as a reference point. Since we want to focus on CockroachDB, we will remove the Aurora comparison.
1. Between 90 and 135 16-vCPU nodes, depending on cloud hardware.
2. The cluster replicates this data three ways across its nodes (so the cluster actually contains 12+ TB of data), ensuring high availability. We intentionally reported the unreplicated number for clarity and for comparison to the TPC-C spec.
3. Our graph is mislabeled: it should read transactions per second (`tps`). Nice catch!
4. We can't comment on other databases' performance, as they haven't released any TPC-C numbers.
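The replication arithmetic in point 2 can be spelled out: with three-way replication, the physical footprint is three times the logical (unreplicated) dataset size, which is how roughly 4 TB of TPC-C data occupies 12+ TB on the cluster. A quick sketch (the function name is mine, for illustration):

```python
# Three-way replication: every piece of data is stored on three nodes,
# so the physical footprint is 3x the logical dataset size that TPC-C reports.

REPLICATION_FACTOR = 3

def physical_footprint_tb(logical_tb: float, rf: int = REPLICATION_FACTOR) -> float:
    """Total bytes actually stored across the cluster for a given logical size."""
    return logical_tb * rf

# ~4 TB of unreplicated TPC-C data -> 12 TB stored across the cluster.
print(physical_footprint_tb(4.0))  # 12.0
```

Reporting the unreplicated number keeps the comparison apples-to-apples with the TPC-C spec, which defines dataset size before replication.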
“Between 90 and 135 16-vCPU nodes depending on cloud hardware”
How many nodes did you use in the CRDB 2.0 TPC-C 10k benchmark? Can I assume the "5x improvement" was measured under the same hardware conditions? Thanks!