Hacker News

To some extent, all of these things look even more expensive when you realize that you're buying burstable CPU that you aren't allowed to fully utilize. At least with Amazon's t3 instance class, you are assigned "burst credits," which you accumulate by not using the CPU and spend by using it. So you are paying $70 per month for "2 CPUs," but you do not get 5,184,000 CPU-seconds per month; you get significantly less. Digital Ocean is the same for its cheap instances, though I'm not sure how it doles out CPU credits (I think they yolo it and you cross your fingers that you're not sharing a physical machine with the guy doing video transcoding and CI builds).
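To put numbers on that, here's a quick sketch. The 20% per-vCPU baseline is illustrative; real t3 baselines vary by instance size, so check AWS's docs for the one you're buying:

```python
# Back-of-envelope: usable CPU time on a burstable instance.
# Assumptions (illustrative, not exact AWS numbers): 2 vCPUs,
# a 20% per-vCPU baseline, 30-day month.
VCPUS = 2
BASELINE = 0.20                       # fraction of each vCPU you can sustain
SECONDS_PER_MONTH = 30 * 24 * 3600

nominal = VCPUS * SECONDS_PER_MONTH   # "2 CPUs" on paper
sustained = nominal * BASELINE        # what the credit balance actually allows

print(f"nominal cpu-seconds/month: {nominal:,}")          # 5,184,000
print(f"sustainable cpu-seconds:   {int(sustained):,}")   # 1,036,800
```

Anything above the baseline draws down the credit balance, so you can burst past 20%, but not indefinitely.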

Both Amazon and Digital Ocean do have dedicated CPU instance types. The prices will make your eyes water. (I have not used the other cloud providers, so I don't know what they're up to.)

Obviously selling CPU on a time-sharing basis is a good idea; most customers aren't maxing out their CPU 24/7, and this lets them pack more customers into fewer machines. (RAM is the killer, though, and you'll see that at every cloud provider, RAM is really what you pay for.) But when you compare these prices to what a CPU costs... it starts to make you think.

My last job really warmed me up to the idea of just running my own servers. All our workloads were containers running in Kubernetes. With Kubernetes, I don't care about individual computers anymore. If one malfunctions, workloads can easily be moved to another. All the machine setup work is made machine-readable by building containers and authoring Deployment objects, so there is no mental investment in a particular computer or disk image of the kind you get from logging in, installing Debian, and screwing around with files in /etc for an hour. Basically, any sort of maintenance involved with physical computers no longer concerned me; a hardware failure just meant repairing that hardware at my leisure and moving on.
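For anyone who hasn't authored one, a Deployment is a short declarative manifest; Kubernetes keeps the stated number of replicas running on whatever nodes are healthy. Everything below (names, image, sizes) is illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # hypothetical app name
spec:
  replicas: 2                # Kubernetes reschedules these if a node dies
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0   # placeholder image
          resources:
            requests:
              cpu: 500m
              memory: 256Mi
```

Because the whole machine-setup story lives in the container image plus this manifest, no individual box is special.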

Combined with this was the fact that we were an ISP, so we had a datacenter, power, networking, and all of that on the floor below our office. I eventually convinced myself that for a month's worth of AWS fees, we could have 10x the computing resources and 10x the bandwidth for a one-time cost. Nobody was motivated to hand over the credit card and let me build the cluster... but for me, running dedicated servers went from "thing that only Google does" to "why isn't everyone doing this!?" It's just not that expensive. And consumer-grade hardware is really good these days. I drooled over the fast builds we got from a c5.4xlarge AWS instance; a consumer Threadripper build would blow that out of the water for about the cost of 6 months of AWS. (Amazon pays the Intel tax. You don't have to, though.)
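The break-even math is easy to sketch. Both prices below are assumptions for illustration (the c5.4xlarge rate is roughly the us-east-1 on-demand price at the time, and the workstation cost is a guess at a Threadripper build), so plug in current numbers:

```python
# Rough cost comparison: on-demand cloud instance vs. a one-time
# consumer workstation build. All prices are illustrative assumptions.
HOURLY = 0.68                 # assumed c5.4xlarge on-demand $/hr
HOURS_PER_MONTH = 730

monthly = HOURLY * HOURS_PER_MONTH    # recurring cloud cost
workstation = 3000                    # assumed one-time Threadripper build

months_to_break_even = workstation / monthly
print(f"cloud: ${monthly:.0f}/month; break-even after "
      f"{months_to_break_even:.1f} months")
```

And that's before counting that the workstation likely builds faster, so the comparison only gets worse for the cloud as the hardware ages.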




Hey mate, article author here. At work right now so don't have time to read and respond to your post properly but wanted to say thanks for writing the emacs<->quartz bridge at BAML. I found it and brought it back to life and it kept me sane for the 18 months I worked there by allowing me to use emacs instead of QzDev. If you are ever in London I will buy you a pint.


Hey, nice to hear!


It seems like you have a good business idea there. At some point, people will get off the cloud treadmill and build in-house setups. If you create a company that sets up and maintains these at a lower cost than the cloud providers, it would get you some business.


Ironically AWS also provides that: https://aws.amazon.com/outposts/


Can you imagine how low that cost would have to be? I'm not sure that's a worthwhile business. AWS has the economy of scale.


I was talking about in-house/on-premise setups. Does AWS do that?


Aren't you just describing a lower-priced cloud provider?


You had/have a k8s cluster running on top of bare-metal servers?


I do, it works. I wouldn’t recommend it unless you’re confident that you can actually manage it.

It'd probably be much easier to run the Kubernetes cluster within some virtualization/container setup.


Interesting. How much time do you spend operating that thing?


I spent a week making sure I could reliably tear it down and rebuild it within minutes; this was by far the hardest part. I ended up hacking around kubespray, but in the future I'll move to an IPMI-based MAAS setup.

After the initial work to get everything set up it’s been working absolutely trouble free.



