
In distributed banking back-ends, if one of your datacenters goes down, you don't take down the other one for safety. You don't force global consistency/linearizability of all transactions before allowing UI updates. There are delays in financial reconciliation all the time; what matters is that transactions are eventually consistent with a ledger, not that the whole train stops whenever one thing fails. And the reality of distributed systems is that things fail constantly: hard drives, networks, bugs, clock drift...
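As a minimal sketch of that shape (hypothetical names, not any real banking stack): each datacenter appends to its own local log and keeps serving even when its peer is unreachable, and a later reconciliation pass merges the logs into one ledger, deduplicating by transaction id.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Txn:
        txn_id: str      # globally unique, so replayed entries deduplicate
        account: str
        amount: int      # cents, signed

    @dataclass
    class Datacenter:
        name: str
        log: list[Txn] = field(default_factory=list)

        def accept(self, txn: Txn) -> None:
            # No cross-datacenter coordination here: the UI can confirm
            # immediately even while the other datacenter is down.
            self.log.append(txn)

    def reconcile(*datacenters: Datacenter) -> dict[str, int]:
        """Merge all local logs into one ledger, deduplicating by txn_id."""
        seen: set[str] = set()
        ledger: dict[str, int] = {}
        for dc in datacenters:
            for txn in dc.log:
                if txn.txn_id in seen:
                    continue            # already applied via another replica
                seen.add(txn.txn_id)
                ledger[txn.account] = ledger.get(txn.account, 0) + txn.amount
        return ledger

    east, west = Datacenter("east"), Datacenter("west")
    east.accept(Txn("t1", "alice", -500))
    west.accept(Txn("t2", "alice", +200))   # west keeps accepting while east is down
    print(reconcile(east, west))            # {'alice': -300}

The key design choice is that accept() never blocks on the other datacenter; correctness lives in the reconciliation step, which can run arbitrarily late.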

This is in contrast to something like a supercomputer or a distributed map-reduce job, where a single node failing mid-computation can corrupt your data, and you have the luxury of stopping the whole thing, fixing the issue, and restarting the process.
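For contrast, a fail-stop sketch (again hypothetical): any worker failure abandons the whole run rather than emitting a partially-computed, and therefore corrupt, result.

    def run_batch(chunks, worker):
        results = []
        for i, chunk in enumerate(chunks):
            try:
                results.append(worker(chunk))
            except Exception as exc:
                # Stop the train: discard partial results, surface the error,
                # and let the operator fix the issue and rerun from scratch.
                raise RuntimeError(f"chunk {i} failed; aborting whole job") from exc
        return results

    def flaky(chunk):
        if chunk == 3:
            raise IOError("node lost")
        return chunk * chunk

    run_batch([1, 2, 3, 4], flaky)   # raises: chunk 2 failed; aborting whole job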


