*scripts will run on things other than the cloud. The thing that made that difference? It may have been you, not the cloud.*

Can you run a script that creates a build environment that autoscales from running 1 build to running 50 builds simultaneously? That’s the equivalent of CodeBuild.

* On HA: You're listing scaling, not HA. You don't need to do anything special to make interchangeable, largely stateless machines "HA" no matter your infrastructure. HA is relevant if you have any single points of failure, and how you avoid the consequences of having those; e.g. having a live database replica or whatever. But sure: this'll be easier on some clouds!*

HA and scaling are conceptually different, but in the cloud they are practically the same thing.

An autoscaling group, for instance, set with a min/max of two across availability zones will ensure that you have two instances running whether a single instance goes down or the entire zone goes down. What it scales on is just a matter of the rules you define.

The only practical difference between autoscaling and HA is scaling across AZs (overly simplified).
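
A minimal CloudFormation fragment for that kind of group might look like this (resource names and subnet IDs are made up):

    WebServerGroup:
      Type: AWS::AutoScaling::AutoScalingGroup
      Properties:
        MinSize: "2"                        # always keep two instances...
        MaxSize: "2"
        VPCZoneIdentifier:                  # ...spread across two AZs
          - subnet-aaaa1111
          - subnet-bbbb2222
        LaunchConfigurationName: !Ref WebLaunchConfig
        HealthCheckType: EC2                # unhealthy instances get replaced

If an instance or a whole AZ dies, the group launches a replacement wherever capacity is still available. Scaling on load is the same machinery with different rules.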

*Then again, most people probably don't need to. If your workload is merely a little spiky (like day vs. night) then the extra costs due to the cloud will be greater even if you spin down instances sometimes; and if you do that, you're spending time to do so, which undercuts the cloud's other selling point: "I don't want to manage all that junk".*

A “little” spiky going from 1 VM to 20? And that’s just one workload with Windows servers. We have other workloads where we need to reindex our entire database from MySQL to ElasticSearch, and it automatically autoscales MySQL read replicas. How much would it cost to keep multiple read replicas up just to have throughput you only need once a month?
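
For the curious: with Aurora MySQL the replica count is just another scalable target. A rough, hypothetical sketch (made-up cluster id, role, and limits - not our actual template):

    ReplicaTarget:
      Type: AWS::ApplicationAutoScaling::ScalableTarget
      Properties:
        ServiceNamespace: rds
        ScalableDimension: rds:cluster:ReadReplicaCount
        ResourceId: cluster:my-aurora-cluster    # made-up cluster id
        MinCapacity: 1                           # one replica when idle
        MaxCapacity: 4                           # burst for the monthly reindex
        RoleARN: !GetAtt ReplicaScalingRole.Arn  # hypothetical IAM role
    ReplicaScalingPolicy:
      Type: AWS::ApplicationAutoScaling::ScalingPolicy
      Properties:
        PolicyName: reindex-replica-scaling
        PolicyType: TargetTrackingScaling
        ScalingTargetId: !Ref ReplicaTarget
        TargetTrackingScalingPolicyConfiguration:
          PredefinedMetricSpecification:
            PredefinedMetricType: RDSReaderAverageCPUUtilization
          TargetValue: 60.0                      # add replicas when readers get busy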

How much “management” do you think autoscaling based on an SQS queue is? You set up a rule that says when x number of messages are in the queue, scale up to the maximum; when there are fewer than y messages, scale down. This happens while everyone is asleep. Of course we do this with CloudFormation, but even clicking in the console it literally takes minutes. It took me about 2 hours to create the CF template to do it, and I was new at the time.
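
Concretely, the rule is little more than a CloudWatch alarm pointed at a scaling policy. Something like this (queue name and thresholds made up; the scale-down policy is just the mirror image):

    WorkerScaleUpPolicy:
      Type: AWS::AutoScaling::ScalingPolicy
      Properties:
        AutoScalingGroupName: !Ref WorkerGroup
        AdjustmentType: ExactCapacity
        ScalingAdjustment: 20                    # jump straight to the max
    QueueDepthAlarm:
      Type: AWS::CloudWatch::Alarm
      Properties:
        Namespace: AWS/SQS
        MetricName: ApproximateNumberOfMessagesVisible
        Dimensions:
          - Name: QueueName
            Value: my-work-queue                 # made-up queue name
        Statistic: Sum
        Period: 300
        EvaluationPeriods: 1
        Threshold: 1000                          # the "x messages in the queue"
        ComparisonOperator: GreaterThanOrEqualToThreshold
        AlarmActions:
          - !Ref WorkerScaleUpPolicy             # breach the threshold, scale up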

There are all sorts of spiky workloads. Colleges are well known for having spiky workloads during registration for instance.

*but if you're saving 170k on network engineering personnel costs; well, you didn't need the cloud to do that unless you're really, really huge, to the point that other costs are dominating anyhow.*

It doesn’t take being huge to get complicated:

- networking infrastructure

- permissions

- load balancers

- VMs (of course we’d need 20x the capacity just to handle peak)

- MySQL server + 1 read replica. Again, we would need 3 or 4 more just to sit idle most of the time.

- enough servers to run our “serverless” workloads at peak.

- a server for our messaging system (instead of SNS/SQS)

- an SFTP server (instead of a managed solution)

- a file server with backups (instead of just using S3)

- whatever the open-source equivalent is of being able to query and analyze data on S3 using Athena.

- a monitoring and alerting system (instead of CloudWatch)

- an ElasticSearch cluster

- Some type of OLAP database to take the place of Redshift.

- a build server instead of just using CodeBuild

- of course we can’t host our own CDN, or just host a bunch of files in S3 and serve them up as a website with all of the server APIs hosted in Lambda.

- We would have to host our own domain server.

- we would still need load balancers.

And... did I mention that most of this infrastructure would need to be duplicated for four different environments - DEV, QA, STG, and Prod?

And none of this is overly complicated with AWS; on-prem we would have to have someone to manage it. Imagine managing all of that at a colo? We would definitely need someone on call. The only things that would go down and that we would have to do anything about are our web and API servers.

While there might be some thrashing, with them being killed and brought back up until we figured out what was wrong, autoscaling would at least keep them up.


