
Theoretically that should just be a data sync while maintaining double-writes (with reads still served from the primary), and then deleting the data from the nodes you no longer need once the sync completes? Of course, with non-hash indexes the deletions start to slow down with size... A rough sketch of what I mean is below.
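
To make the shape of that concrete, here is a minimal sketch of the double-write / read-primary migration I'm describing. All class and method names (ReshardingStore, put, get, scan, accepts, etc.) are hypothetical placeholders, not the API of any particular database client.

    # Hypothetical sketch of resharding via double-write + read-primary.
    class ReshardingStore:
        def __init__(self, old_shard, new_shard):
            self.old = old_shard   # current primary; reads stay here
            self.new = new_shard   # target shard being populated

        def write(self, key, value):
            # Double-write: every new write lands on both shards, so the
            # new shard never falls behind for keys touched during the sync.
            self.old.put(key, value)
            self.new.put(key, value)

        def read(self, key):
            # Read-primary: serve reads from the old shard until the
            # backfill has been verified, so readers never see partial data.
            return self.old.get(key)

        def backfill(self):
            # One-off sync of pre-existing rows that belong on the new shard.
            for key, value in self.old.scan():
                if self.new.accepts(key):
                    self.new.put(key, value)

        def cleanup(self):
            # After the sync is verified and reads are cut over, delete the
            # migrated rows from the old shard. This is the step where
            # non-hash indexes make deletions slow as the table grows.
            for key, _ in self.old.scan():
                if self.new.accepts(key):
                    self.old.delete(key)

The point is that cleanup() only runs after backfill() has completed and been checked, so the old node can keep serving reads the whole time.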

I'm assuming joins, indexes, etc. are all isolated to each shard's own data?



