awoods187's comments | Hacker News

PM at CRL here--we love Fly too! Definitely can see our two products working together!


PM @ Cockroach Labs here. Which tools did you experiment with? We've been working to expand our tooling capabilities!


We got hung up on the migration tooling for popular frameworks. If we can get those migrations to work with minimal drama, we want to basically show people “global full stack” with app + cache + database.
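
For concreteness, here is a minimal sketch of what a framework migration pointed at CockroachDB might look like. Django is only an assumed example, and the django-cockroachdb backend plus every settings value below are placeholders, not a confirmed or tested setup:

    # Hypothetical Django settings.py fragment: route an existing app's
    # migrations to CockroachDB via the django-cockroachdb backend.
    # Every value here is a placeholder, not a tested configuration.
    DATABASES = {
        "default": {
            "ENGINE": "django_cockroachdb",
            "NAME": "myapp",
            "USER": "app_user",
            "PASSWORD": "app_password",
            "HOST": "db.internal",  # e.g. a regional CockroachDB endpoint
            "PORT": "26257",        # CockroachDB's default SQL port
        }
    }
    # The usual framework workflow would then apply:
    #   python manage.py makemigrations
    #   python manage.py migrate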


PM @CockroachDB here. Have you seen our performance page? https://www.cockroachlabs.com/docs/dev/performance.html CRDB can easily scale throughput.


At Cockroach Labs we switched to CalVer, as covered in this thread: https://news.ycombinator.com/item?id=19658969


This is super cool


Hi -- product manager from CRL here. You are right--we hope to demonstrate that CockroachDB is built to scale horizontally. Deployments of CockroachDB can grow simply by adding more nodes to the cluster, which in turn scales throughput linearly. We have posted the most recently published Aurora numbers as a comparison to demonstrate how architecture can influence scale.

We also hear your point about efficiency, as tpmC (throughput) alone isn't sufficient to compare systems without taking hardware into account. TPC-C asks implementers to report a price per tpmC. We conducted this price comparison previously in this blog post https://www.cockroachlabs.com/blog/cockroachdb-2dot1-perform.... The price per tpmC is even lower in 19.2 because we can achieve greater tpmC with fewer nodes.
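
As a rough illustration of that price/performance arithmetic (every number below is a placeholder, and the 3-year window is an assumption about the TPC-C pricing model, not our published methodology):

    # Back-of-the-envelope price per tpmC, with purely hypothetical inputs.
    nodes = 81                   # assumed node count
    cost_per_node_hour = 1.00    # assumed $/hour per VM
    hours = 3 * 365 * 24         # assumed 3-year amortization window
    tpmc = 100_000               # assumed measured throughput

    total_cost = nodes * cost_per_node_hour * hours
    print(f"price per tpmC: ${total_cost / tpmc:.2f}")

Swapping in real node counts, instance pricing, and measured tpmC would give a figure comparable to the ones in the blog post.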

This page wasn't meant to be competitive--we simply showed Aurora as a reference point. Since we want to focus on CockroachDB we will remove the Aurora comparison.


The labels are incorrect (we combined the charts in the final draft)--fix incoming. GCP was much worse than AWS on this test.


Looks fixed now. Thanks.


We'll consider expanding this in our testing next year! Glad you found it helpful.



Looking for the experimental settings, as it would be difficult to reproduce the results without that detail. Could you post the scripts to GitHub?


I'm the author of the post.

1. Between 90 and 135 16 vCPU nodes, depending on cloud hardware.

2. The cluster replicates this data three ways across its nodes (so the cluster actually contains 12+ TB of data), ensuring high availability. We intentionally reported the unreplicated number for clarity and for comparison to the TPC-C spec (see the sketch below).

3. Our graph is mislabeled. It should read transactions per second (tps). Nice catch!

4. We can't comment on other databases' performance as they haven't released any TPC-C numbers.
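
To put point 2 in plain arithmetic (a sketch using only the numbers already stated above):

    # 3-way replication means the cluster stores roughly 3x the logical data.
    replication_factor = 3
    replicated_tb = 12                      # the "12+ TB" reported above
    unreplicated_tb = replicated_tb / replication_factor
    print(f"unreplicated (reported) data set: ~{unreplicated_tb:.0f} TB")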


“Between 90 and 135 16 vCPU nodes depending on cloud hardware” -- how many nodes did you use in the CRDB 2.0 TPC-C 10K benchmark? Can I assume the "5x" improvement was measured under the same hardware conditions? Thanks!

