
RDS is performant well above 100 GB. Not to say Citus isn't a good product, but to imply sharding is required anywhere near the 100 GB threshold is a bit disingenuous.

Note that if you're using Aurora, you also get more memory and cores when running multiple nodes...



I believe he was suggesting a lower bound: if you're over 100 GB, you should probably have a discussion about future plans and what you'll need to do to scale to 10x that while keeping response times acceptable.


As other commenters have mentioned, indeed my goal wasn't to imply that you need to shard at 100 GB.

Yes, RDS still works great at 100 GB of data for 99% of applications. I used to advise never sharding until 1 TB, and not really thinking about it until about 500 GB. But from dealing with customers that have sharded as early as 50 GB of data and as late as 3 TB, those that shard earlier always have a smoother time.

The hard question is whether you actually need to. If your data and indexes stay in cache, then of course there's never a reason; it's a matter of whether you predictably know you'll grow and need to. For various B2B products that are about to sign a large customer, that growth is guaranteed ahead of time, so you can plan for it, which makes life a bit easier. As a rule of thumb I wouldn't think about it before 100 GB these days, and once you hit that point I'd start to look at what your growth pattern is and at least have a plan if you expect to outgrow a single node.
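
If you want a rough gut check on whether your data and indexes still fit in cache (a sketch, assuming Postgres since that's what RDS/Citus run), the built-in statistics views give you the database size and a buffer cache hit ratio:

  -- Total on-disk size of the current database
  SELECT pg_size_pretty(pg_database_size(current_database()));

  -- Rough buffer cache hit ratio across user tables; values
  -- consistently below ~0.99 suggest the working set no longer
  -- fits in shared_buffers
  SELECT sum(heap_blks_hit)::float
           / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0)
         AS cache_hit_ratio
  FROM pg_statio_user_tables;

Caveat: pg_statio_user_tables only tracks hits in Postgres's own shared_buffers, not the OS page cache, so "reads" may still be served from memory; treat the ~0.99 threshold as a common rule of thumb rather than a hard line.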


The only implication here is that above 100 GB sharding becomes something that's reasonable to consider.



