
All that infra doesn't integrate itself. Everywhere I've worked that had this kind of stack employed at least one DevOps person, if not a whole team, to maintain it all, full time, year-round. Automating a database backup and testing it works takes half a day unless you're doing something weird.
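
For scale, a hedged sketch of what that half-day job can look like, assuming Postgres, cron, and a reachable backup host (the database name, paths, and hostnames here are illustrative):

    # /etc/cron.d/db-backup -- run nightly at 03:00 as the postgres user
    0 3 * * * postgres /usr/local/bin/db-backup.sh >> /var/log/db-backup.log 2>&1

    #!/bin/sh -e
    # db-backup.sh: dump the database, then ship the dump off-box.
    STAMP=$(date +%F)
    pg_dump -Fc mydb > /var/backups/mydb-$STAMP.dump    # custom format, compressed
    scp /var/backups/mydb-$STAMP.dump backup-host:/srv/backups/

Testing that the restore actually works is the other half; see further down the thread.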



Setting up a multi-AZ DB with automatic failover, incremental backups and PITR, plus automated runbooks and monitoring for all of it, doesn't take half a day, not even with RDS.
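
To be fair, provisioning the managed version is one long command; the runbooks and monitoring around it are where the time goes. A sketch with placeholder identifiers and credentials:

    aws rds create-db-instance \
        --db-instance-identifier mydb \
        --engine postgres \
        --db-instance-class db.m6g.large \
        --allocated-storage 100 \
        --master-username dbadmin \
        --master-user-password 'change-me-before-running' \
        --multi-az \
        --backup-retention-period 7    # nonzero retention enables automated backups and PITR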


No, but again, that sounds like a lot of complexity your average startup does not need. Multi-AZ? Why?


Because their Enterprise client requires it on their due diligence paperwork.


Which makes little sense anyway, as in practice the real problems come from region or connectivity issues, not AZ failures.


A startup sized company using this many tools? They're for sure doing something weird (and that's not a compliment :) )

Totally on your side with this one - but alas, people associate value with complexity.


> Automating a database backup and testing it works takes half a day unless you’re doing something weird

True story bro

I'm sure that's possible if you're storing the backup on the same server you're restoring on and everything is on top-of-the-line NVMe storage. Otherwise your backup has only just started and will need another few days to finish. And that's only if you're running a single master.

You're massively underestimating the challenge of getting that kind of automation done in a stable manner - and the maintenance required to keep it working over the years.
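
For what it's worth, one way to take slow local disk out of the path is to stream the dump, compressed, straight to object storage; a sketch assuming Postgres, zstd, and the AWS CLI, with a made-up bucket name. It doesn't make a multi-TB dump fast, but it removes a copy step:

    # Stream a plain-format dump through zstd into S3 without touching local disk.
    pg_dump mydb | zstd | aws s3 cp - s3://my-backups/mydb-$(date +%F).sql.zst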


I’ve implemented such a process for companies multiple times, bro. I know what I’m talking about.


And that's the problem. "It's easy for me because I've done it a dozen times so it's easy for everyone" is a very common fallacy.


This is an oversimplification, but! Dumping postgres to a file is one command. scp'ing the file to a different server makes it two commands. (Granted, you need to set up SSH keys there too.) I have implemented backups this way.

With sqlite you only need the scp part.

You can even push your backup file to an S3 bucket... with one command!
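
Roughly, and with made-up names, the commands in question:

    pg_dump mydb > mydb.sql                        # one command
    scp mydb.sql backup-host:/srv/backups/         # makes it two
    aws s3 cp mydb.sql s3://my-backups/mydb.sql    # or one command to S3
    # For a live SQLite database, snapshot it first so the copy is consistent:
    sqlite3 app.db ".backup app-backup.db" && scp app-backup.db backup-host:/srv/backups/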

Honestly, this argument mystifies me.

Of course you can make it as complicated as you want to, too. I've also worked on replicating anonymized data from a production OLTP database to a data warehouse. That's a lot more work.


And that works right until you get to publish an incident report like this:

https://about.gitlab.com/blog/2017/02/01/gitlab-dot-com-data...


> Our backups to S3 apparently don’t work either: the bucket is empty

It took them a data-loss incident to find this out? This is just one of the many red flags mentioned in the article; IMO this incident isn't about relying on cloud backups vs. self-managing them.


Yeah. Testing that your backup works can be almost as much work as setting the thing up in the first place, but you do need to do it.
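
A hedged sketch of such a test, assuming the custom-format dumps from upthread land on the backup host and there's a scratch Postgres instance to restore into (the "users" table is a stand-in for whatever sanity check fits your schema):

    # Restore the newest dump into a throwaway DB and check a table isn't empty.
    LATEST=$(ls -t /srv/backups/mydb-*.dump | head -1)
    createdb scratch
    pg_restore -d scratch "$LATEST"
    psql -d scratch -tc 'SELECT count(*) FROM users;'
    dropdb scratch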


What happened to having people trained by external trainers for what you need? That’s much cheaper than having everything externally “managed” and still having to integrate all of it. The number of services listed in TFA is just ridiculous.


I've done it before, too. For a toy project, it's as easy as you said. It's not once you're at scale. It's hilarious that people are downvoting my comment. I guess there are a lot of juniors suffering from Dunning-Kruger syndrome around right now.


I worked at a place with its own colo where they ran several multi-TB MySQL database servers. We did weekly backups and they could take days. Our backups were stored on external USB disks, and the I/O performance was abysmal: taking a filesystem snapshot and copying it to USB could take days. The disks would occasionally lock up and someone would have to power-cycle them. Total clown show.

I would rather pay for RDS. Databases are the one thing you don't want to screw up.



