Hacker News

Phone posters, Eternal September, tech industry interlopers.



Unrelated to your comment here, but I figured you'd see this reply sooner than one on your week-old comment.

I replied to your serverless comment here: https://news.ycombinator.com/item?id=40054222 Very interested in what more you have to say, if you're interested in discussing it further.


Apologies, I'm on a phone -- and quite tired -- but I will try.

The general ethos of my gripes with serverless is here (not my post): https://www.reddit.com/r/aws/comments/14noh28/comment/jq95xz...

It excels in only one area: scaling. You can "scale down to zero" (to the point where if there is no usage, you theoretically are not paying for stand-by hardware). You also do not have to -- theoretically -- worry about scaling rules and about provisioning more hardware when your service needs more workers to handle demand.

However, everywhere else it is a shit-show. Not being able to do anything resembling local testing and debugging forces you to either: keep the architecture very simple (so why use serverless at all?), try to spin up one of the numerous half-baked virtualization solutions, or spend ever more time writing, testing, and maintaining scaffolding/mocking/whatever to interface with all your "add-ons" (e.g. SQS, S3, etc.). Your only way to track message flow is extensive and expressive logging.
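For what it's worth, the "keep it testable" route usually boils down to hiding each add-on behind a small interface so local tests never touch AWS. A minimal sketch of that scaffolding (the `Queue` port, `InMemoryQueue` fake, and `handler` are all hypothetical names, standing in for an SQS-backed flow):

```python
from typing import Protocol


class Queue(Protocol):
    """Port for a message queue; in production this would wrap SQS."""
    def send(self, body: str) -> None: ...
    def receive(self) -> list[str]: ...


class InMemoryQueue:
    """Fake queue for local tests -- no AWS credentials, no network."""
    def __init__(self) -> None:
        self._messages: list[str] = []

    def send(self, body: str) -> None:
        self._messages.append(body)

    def receive(self) -> list[str]:
        msgs, self._messages = self._messages, []
        return msgs


def handler(queue: Queue, event: dict) -> None:
    # Business logic depends only on the Queue interface,
    # so it can be exercised locally against the fake.
    queue.send(event["order_id"])
```

The cost the parent describes is real: this adapter layer is extra code you write and maintain purely to get back the local test loop a plain server gives you for free.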

Deployment is a pain. What should be a sub-30-second operation routinely takes 5 minutes even for the smallest packages (AWS deployment tooling is stupidly inefficient) -- annoyingly breaking any sort of flow every time you have to deploy to test the things you cannot test locally. Now imagine that being your working loop every day: utter madness if you want to keep any sort of velocity or morale.

The ecosystem is not mature. I still have packaging utilities (built and maintained by a mag7 company) that fail silently, causing production outages. The rest of the tooling is also half-baked and a pain in the ass to use (much less to learn the edge cases of). Serverless (the framework) is a shitty replacement for Terraform. And the lack of a language server means you cannot tell whether your YAML IaC will actually do what you want without a dry run -- tedious.

Containers are wasteful, half-baked, and underperformant.

My workloads have always been either CPU- or I/O-heavy. No, I do not need 2GB of RAM and a single (unknown-spec) vCPU. I need at most 100MB of RAM for my JVM/CLR and a fast-enough CPU. But the only way to provision a faster/less-gimped CPU is to "bump the tier" of the Lambda by provisioning more memory. Ergo you pay for memory you neither use nor need, simply so your Lambda doesn't time out at its maximum 15-minute container lifespan on heavy workloads.
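Concretely, this coupling shows up as a single memory knob in your IaC: AWS documents that Lambda CPU scales with memory, with roughly one full vCPU around the 1,769MB tier. A hedged Serverless Framework sketch (the function name is a placeholder):

```yaml
functions:
  worker:
    handler: handler.main
    memorySize: 1769   # MB -- buys ~1 full vCPU, even if the code needs ~100MB of RAM
    timeout: 900       # seconds; 15 minutes is the hard ceiling
```

There is no separate "give me a faster CPU" setting; the memory line is the only lever, which is exactly the overpayment the parent describes.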

The file handle limits are also something asinine, like 128 open handles per Lambda, with no way to raise them. So I cannot open more than ~128 network sockets when I need to fan out compute to get past the kneecapped container resources.
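Since the limit can't be raised, the usual workaround is to cap in-flight connections below the descriptor budget rather than open them all at once. A minimal sketch with a semaphore (the `FD_BUDGET` value and the simulated work are assumptions, not AWS specifics):

```python
import asyncio

FD_BUDGET = 100  # stay safely under the per-container descriptor cap


async def fetch(i: int, sem: asyncio.Semaphore) -> int:
    # At most FD_BUDGET of these run concurrently, so at most
    # FD_BUDGET sockets would be open at any instant.
    async with sem:
        await asyncio.sleep(0)  # a real worker would open a socket here
        return i * 2


async def fan_out(n: int) -> list[int]:
    sem = asyncio.Semaphore(FD_BUDGET)
    return await asyncio.gather(*(fetch(i, sem) for i in range(n)))


results = asyncio.run(fan_out(500))
```

The throttle keeps you inside the limit, but it also serializes exactly the fan-out you reached for Lambda to parallelize, which is the parent's point.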

Cold starts: it's been beaten to death. But if you're running a language with a bytecode interpreter, your options are to either buy provisioned concurrency (i.e. force a container to always be warm/spun-up, incurring all those costs -- which would have been unarguably cheaper on a server, even an EC2 instance) or modify your source and ahead-of-time compile everything you can. Otherwise you will not get sub-300ms cold invocations (a server in an optimal location would get you sub-10ms latencies).
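For completeness, provisioned concurrency is a one-line setting in the Serverless Framework -- which is precisely why it mostly re-introduces the always-on cost a plain server would have had (function name is a placeholder):

```yaml
functions:
  api:
    handler: handler.main
    provisionedConcurrency: 2  # keep 2 execution environments warm; billed continuously
```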

If you have long-running workloads or are trying to squeeze the most performance out of your backend, serverless is not going to cut it. And this is ignoring all the inter-infrastructure communication that adds even more latency.

N.B. this is all for moderately complex web apps built on AWS; it may not be wholly representative of the landscape.

And apologies for the harsh language, but the entire "serverless" hype has given me plenty of scar tissue -- especially in a "move fast, now!" startup landscape.


Thanks for the reply. It was a good read. And I can agree with the points from experience.

One suggestion/correction on "Your only way to track message flow is with extensive and expressive logging": you can do distributed tracing. It's not a silver bullet, nor does it replace a proper debugger, but it's better than chasing logs. There are a number of distributed-tracing SaaS offerings, though you still have to do at least some manual instrumentation in your code to attach additional info.
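Even without a tracing SaaS, the core idea -- stamp every log line with a per-request id so message flow can be followed across services -- can be sketched with the standard library alone (`handle_request` and the setup are hypothetical; real systems would use OpenTelemetry or X-Ray and propagate the id in request headers):

```python
import contextvars
import logging
import uuid

# Context-local trace id: safe across interleaved async requests.
trace_id = contextvars.ContextVar("trace_id", default="-")


class TraceFilter(logging.Filter):
    """Attach the current trace id to every log record."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.trace_id = trace_id.get()
        return True


logger = logging.getLogger("app")
_handler = logging.StreamHandler()
_handler.setFormatter(logging.Formatter("%(trace_id)s %(message)s"))
_handler.addFilter(TraceFilter())
logger.addHandler(_handler)
logger.setLevel(logging.INFO)
logger.propagate = False


def handle_request(payload: str) -> str:
    # New id per invocation; downstream calls would forward it
    # so their logs correlate with this one.
    trace_id.set(uuid.uuid4().hex)
    logger.info("processing %s", payload)
    return trace_id.get()
```

This is the "manual instrumentation" part the comment mentions: the plumbing is cheap, but every service in the chain has to cooperate for the trace to stay connected.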



