
No, I just explained how the world does strongly consistent distributed databases for transactional data, which is the exact question here.

DuckDB does not yet handle strong consistency. Blockchains and SQL databases do.



Blockchains are a fantastic way to run things slowly ;-) More seriously: making crypto fast does sound like a fun technical challenge, but well beyond what our finance/gov/cyber/AI etc. customers want us to do.

For reference, our goal here is to run around 1 TB/s per server, and many times more on a beefier server. The same tech just landed at spot #3 on the Graph 500 on its first try.

To go even bigger & faster, we are looking for PhD-level intern fellows to scale this to more than one server, if that's your thing: OSS GPU AI fellowship @ https://www.graphistry.com/careers

The Flight perspective aligns with what we're doing. We skip the DuckDB CPU indirections (why drink through a long twirly straw?) and go straight to Arrow in GPU RAM. For our other work, if DuckDB does give reasonable transactional guarantees here, that's interesting... hence my (in earnest) original question. AFAICT, the answers so far rest on operational details & docs that don't connect to how we normally talk about databases giving you consistent vs. inconsistent views of data.


Do you think that blockchain engineers are incapable of building high-throughput distributed systems due to engineering incapacity, or due to real limits on how fast a strongly consistent, sufficiently secured cryptographic distributed system can be? Are blockchain devs all just idiots, or have they dumbly prioritized data integrity because that doesn't matter anymore; it's all about big data these days, and nobody needs CAP?

From "Rediscovering Transaction Processing from History and First Principles" https://news.ycombinator.com/item?id=41064634 :

> metrics: Real-Time TPS (tx/s), Max Recorded TPS (tx/s), Max Theoretical TPS (tx/s), Block Time (s), Finality (s)

> Other metrics: FLOPS, FLOPS/WHr, TOPS, TOPS/WHr, $/OPS/WHr

TB/s in query processing of data already in RAM?

/? TB/s "hnlog"

- https://news.ycombinator.com/item?id=40423020 , [...] :

> The HBM3E Wikipedia article says 1.2TB/s.

> Latest PCIe 7 x16 says 512 GB/s:

fiber optics: 301 TB/s (2024-05)

Cerebras: https://en.wikipedia.org/wiki/Cerebras :

WSE-2 on-chip SRAM memory bandwidth: 20 PB/s (fabric bandwidth: 220 Pb/s)

WSE-3: 21 PB/s

HBM > Technology: https://en.wikipedia.org/wiki/High_Bandwidth_Memory#Technolo... :

HBM3E: 9.8 Gbit/s , 1229 Gbyte/s (2023)

HBM4: 6.4 Gbit/s , 1638 Gbyte/s (2026)
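Those per-stack figures fall out of per-pin rate times interface width. A quick sketch, assuming the standard 1024-bit HBM3E and 2048-bit HBM4 stack interfaces (note the listed 1229 GB/s corresponds to an effective ~9.6 Gb/s per pin, slightly under the 9.8 Gb/s peak):

```python
def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Per-stack bandwidth in GB/s: per-pin rate (Gbit/s) * bus width (bits) / 8."""
    return pin_rate_gbps * bus_width_bits / 8

# HBM3E: 1024-bit interface at ~9.6 Gb/s/pin reproduces the ~1229 GB/s figure
print(stack_bandwidth_gbs(9.6, 1024))   # 1228.8
# HBM4 doubles the interface to 2048 bits, so a slower 6.4 Gb/s/pin still yields more
print(stack_bandwidth_gbs(6.4, 2048))   # 1638.4
```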

LPDDR SDRAM > Generations: https://en.wikipedia.org/wiki/LPDDR#Generations :

LPDDR5X: 1,066.63 MB/s (2021)

GDDR7: https://en.m.wikipedia.org/wiki/GDDR7_SDRAM

GDDR7: 32 Gbps/pin - 48 Gbps/pin, chip capacities up to 64 Gbit, 192 GB/s per device

List of interface bit rates: https://en.wikipedia.org/wiki/List_of_interface_bit_rates :

PCIe 7.0 x16: 1.936 Tbit/s (242 GB/s) (2025)

800GBASE-X: 800 Gbps (2024)

DDR5-8800: 70.4 GB/s

Bit rate > In data communications: https://en.wikipedia.org/wiki/Bit_rate#In_data_communications ; gross and net bit rate, information rate, network throughput, goodput

Re: TPUs, NPUs, TOPS: https://news.ycombinator.com/item?id=42318274 :

> How many TOPS/W and TFLOPS/W? (T [Float] Operations Per Second per Watt(-hour?))

Top 500 > Green 500: https://www.top500.org/lists/green500/2024/11/ :

Columns: PFlop/s (Rmax), Power (kW), GFlops/watts (Energy Efficiency)
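The efficiency column is just the first two combined, converted to common units; a minimal sketch (the machine numbers below are hypothetical, not from the actual list):

```python
def gflops_per_watt(rmax_pflops: float, power_kw: float) -> float:
    """Green500-style energy efficiency: convert PFlop/s to GFlop/s and kW to W."""
    return (rmax_pflops * 1e6) / (power_kw * 1e3)

# hypothetical machine: 100 PFlop/s Rmax drawing 2000 kW
print(gflops_per_watt(100, 2000))  # 50.0 GFlops/W
```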

Performance per watt > FLOPS/watts: https://en.wikipedia.org/wiki/Performance_per_watt#FLOPS_per...

Electrical signals in conductors: 50%–99% of c, the speed of light ( Speed of electricity: https://en.wikipedia.org/wiki/Speed_of_electricity ; velocity factor of a CAT-7 cable: https://en.wikipedia.org/wiki/Velocity_factor#Typical_veloci... )

Photons: c (*)

Gravitational Waves: Even though both light and gravitational waves were generated by this event, and they both travel at the same speed, the gravitational waves stopped arriving 1.7 seconds before the first light was seen ( https://bigthink.com/starts-with-a-bang/light-gravitational-... )

But people don't do computation with gravitational waves.
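As a back-of-the-envelope illustration of why velocity factor matters for wire latency, a sketch (the 100 m distance and 0.70/0.67 factors below are illustrative assumptions, not figures from the linked articles):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def propagation_delay_ns(distance_m: float, velocity_factor: float) -> float:
    """One-way signal propagation time in nanoseconds."""
    return distance_m / (velocity_factor * C) * 1e9

# 100 m of copper at a ~0.70 velocity factor vs. fiber at ~0.67
print(round(propagation_delay_ns(100, 0.70), 1))  # ~476.5 ns
print(round(propagation_delay_ns(100, 0.67), 1))  # ~497.9 ns
```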


To a reasonable rounding error... yes


How would you recommend that appends to Parquet files be distributedly synchronized with zero trust?

Raft, Paxos, BFT, ... (/? hnlog paxos) ... also relevant, "50 years later, is two-phase locking the best we can do?": https://news.ycombinator.com/item?id=37712506

To have consensus about protocol revisions; to have data integrity and consensus about the merged sequence of data in database {rows, documents, named graphs, records}.



