I've also worked at some quite large organizations, with quite large services that would easily take 10x to 50x longer to ship than they would at a smaller org.
Most of the time, people were mistaking complexity caused by bad decisions (technical or otherwise) for "domain complexity" and "edge cases", and refusing to acknowledge that things are now harder because of those decisions. Just changing the point of view makes it simple again, but then you run into internal politics.
With microservices especially, the irony was that it was mostly the decisions justified as "saving time in the future" that ended up generating the most future work, and in a few cases even problems around compliance and data sovereignty.
Problem is that you can't create a system in a vacuum.
It's rarely like a movie where you hand-pick the team for the job.
Usually you have to play the cards you're dealt, so you take whatever your team is comfortable building.
Which in the end means dealing with emotions, people's ambitions, and wishes.
I have seen stuff gold-plated just because one vocal person was making a fuss. I have seen good ideas blocked just because someone wanted to feel important. I have seen teams who wanted to "do proper engineering" but thought over-engineering was the proper way, and that anything less than gold plating would make them look like amateurs.
For me it's useful in those areas I don't venture into very often. For example, I recently needed a PowerShell script that would create a little report of some registry settings. Claude banged out something that worked perfectly for me and saved me an hour of messing around.
It's useful for almost any one-off script I write. It can do the work much faster than I can and produce nicer-looking output than I'd ever bother to spend time writing myself. It can also generate CLI args and docs I'd never "waste time" on myself, which I'd later waste even more time fumbling around without.
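That CLI boilerplate is exactly the kind of thing I'd skip by hand. A minimal sketch of what I mean, in Python with argparse (the script, flags, and file names here are all hypothetical):

```python
import argparse

def build_parser():
    # Hypothetical one-off report script: the flags, defaults, and help
    # strings below are the boilerplate I'd normally never bother to write.
    parser = argparse.ArgumentParser(
        description="Generate a small report from input files."
    )
    parser.add_argument("paths", nargs="+", help="input files to include")
    parser.add_argument("-o", "--output", default="report.txt",
                        help="where to write the report (default: report.txt)")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="print progress while running")
    return parser

# Parse a fixed argv here just to show the result.
args = build_parser().parse_args(["a.csv", "b.csv", "-o", "out.txt"])
print(args.paths, args.output, args.verbose)  # → ['a.csv', 'b.csv'] out.txt False
```

Trivial to write, tedious to bother with; and it's also what gives you `--help` for free when you come back to the script six months later.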
They're insanely useful. I don't get why people pretend otherwise just because they aren't fulfilling the prophecies of blowhards and salesmen.
Yeah, they work pretty well for scripts. I use Claude to create scripts I need for .csv transforms and things. Like: read all these files in a directory, merge them together on some key, convert this to that, and output a csv with the following header row. For things like that they work pretty well.
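The kind of script that comes back from a prompt like that looks roughly like this. A stdlib-only sketch (the glob pattern, key column, and header are made-up placeholders, not anything from the thread):

```python
import csv
import glob

def merge_rows(row_groups, key):
    # Merge dict-rows from several sources on a shared key column;
    # later sources add or overwrite fields for the same key.
    merged = {}
    for rows in row_groups:
        for row in rows:
            merged.setdefault(row[key], {}).update(row)
    return list(merged.values())

def merge_csv_files(pattern, key, out_path, header):
    # Read every CSV matching `pattern`, merge on `key`, and write a
    # single CSV with exactly the columns listed in `header`.
    groups = []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            groups.append(list(csv.DictReader(f)))
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=header, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(merge_rows(groups, key))
```

Nothing clever, but exactly the sort of glue code that's faster to review than to type.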
Yes, totally agree.
The second thing I found it great for was explaining errors: it either finds the exact solution, or sparks a thought that leads to the answer.
Tools that can build you a quick clickable prototype are everywhere. Replit, claude code, cursor, ChatGPT Pro, v0.app, they're all totally capable.
From there it's the important part: discussing, documenting, and making sure you're on the same page about what to actually build. Ideally, get input from your actual customers on the mockup (or multiple mockups) so you know what resonates and what doesn't.
This idea that performance is irrelevant gets under my skin. It's how we ended up with Docker and Kubernetes and the absolute slop stack that is destroying everything it touches.
Performance matters.
We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.
Performance does in fact matter, and all other things being equal, a fast product is more pleasurable than a slow one.
Thankfully some people like the folks at Figma took the risk and proved the point.
Even if we're innovating on hard technical problems (which most of us are not), performance still matters.
You mean creating a different container that is exactly equal to the previous one?
It's absolutely possible, but I'm not sure there's any tool out there with that command... because why would you? You'll get about the same result as forking a process inside the container.
It's useful if you want to bring up a containerized service, optionally update OS, run tests, and if everything is good, copy that instance a bunch of times rather than starting fresh.
It lets you scale out a batch of VMs remarkably quickly, while leaving the original available for OS/patch updates.
If I'm willing to pay the cost of keeping an idle VM around, subsequent launches are probably an order of magnitude faster than docker hello-world.
Your cloud provider may be doing it for you. Ops informed me one day that AWS was pushing out a critical security update to their host OS. So of course I asked if that meant I needed to redeploy our cluster, and they responded no, and in fact they had already pushed it.
Our cluster keeps stats on when processes start, so we can alert on crashes, and because new processes (cold JIT) can skew the response numbers and mark inflection points for analyzing performance improvements or regressions. There were no restarts that morning. So they pulled the tablecloth out from under us. TIL.
None of this makes live-forking a container desirable to me; I'm not a cloud hosting company (and if I were, I'd be happy to provide a VPS as a VM rather than a container).
For the VM case, I might well have benefited from it, if Digital Ocean was able to patch something live without restarting my VPS. Great. But that's nothing I need to care about, so I have never cared about live-forking a VM. It hasn't come up in my use of VMs.
It's not a feature I miss in containers, is what I'm saying.
That's another reason they're so infuriating. Containers are intended to make things faster and easier. But the allure of virtualization has made most work much, much slower and much, much worse.
If you're running infra at Google, of course containers and orchestration make sense.
If you're running apps/IT for an SMB or even small enterprise, they are 100% waste, churn and destruction. I've built for both btw.
The contexts in which they are appropriate and actually improve anything at all are vanishingly small.
I have wasted enough time caressing Linux servers to accommodate different PHP versions that I know what good containers can do. An application gets tested, built, and bundled with all its system dependencies in CI, then pushed to the registry and deployed to the server. All automatic. Zero downtime. No manual software installation on the server. No server update downtimes. No subtle environment mismatches. No forgotten dependencies.
I fail to see the churn and destruction. Done well, you decouple the node from the application, even, and end up with raw compute that you can run multiple apps on.
Part of why I adopted containers fairly early was inspired by the time we decided to make VMs for QA with our software on it. They kept fucking up installs and reporting ghost bugs that were caused by a bad install or running an older version and claiming the bugs we fixed weren’t fixed.
Building disk images was a giant pain in the ass but less disruptive to flow than having QA cry wolf a couple times a week.
Performance matters, but at least initially only insofar as it doesn't complicate your code significantly. That's why a simple static website often beats some hyper-modern framework's optimization journey of a website. You've got to maintain that shit. And you're making sacrifices elsewhere, in the areas of accessibility and possibly privacy and possibly ethics.
So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.
Notably, this is subjective. I’ve had devs tell me that joins (in SQL) are too complicated, so they’d prefer to just duplicate data everywhere. I get that skill is a spectrum, but it’s getting to the point where I feel like we’ve passed the floor, and need to firmly state that there are in fact some basic ideas that are required knowledge.
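A join really is one of those basic ideas. A toy sketch with Python's sqlite3 (the schema and data are invented for illustration) of what normalizing with a join buys you over duplicating the name into every order row:

```python
import sqlite3

# Toy schema (hypothetical): customer names live in one place;
# orders reference them by id instead of duplicating the name.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 25.0), (11, 2, 40.0), (12, 1, 5.0);
""")

# The supposedly "too complicated" part: one join, and a rename or
# correction to a customer touches exactly one row.
rows = db.execute("""
    SELECT c.name, SUM(o.total)
    FROM orders o JOIN customers c ON c.id = o.customer_id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # → [('Ada', 30.0), ('Grace', 40.0)]
```

With the name copied into every order instead, that same correction becomes an update across every duplicate, and any row you miss silently disagrees with the rest.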
Yes, at the most absurd limits, some people will occasionally obsess over performance and make things worse. We're so far from that problem today that it would be a good one to have.
IME, making things fast almost always also makes them simpler and easier to understand.
Building high-performance software often means building less of it, which translates into simpler concepts, fewer abstractions, and shorter times to execution.
It's not a trade-off, it's valuable all the way down.
Treating high performance as a feature and low performance as a bug affects everything we do; ignoring both for decades is how you get the rivers of garbage we're swimming in.
Agreed, though containers and K8s aren’t themselves to blame (though they make it easier to get worse results).
Debian Slim is < 30 MB. Alpine, if you can live with musl, is 5 MB. The problem comes from people not understanding what containers are, and how they’re built; they then unknowingly (or uncaringly) add in dozens of layers without any attempt at reducing or flattening.
Similarly, K8s is of course just a container orchestration platform, but since it’s so easy to add to, people do so without knowing what they’re doing, and you wind up with 20 network hops to get out of the cluster.
Docker is just making all the same promises we were made in 1991 that never came to fruition. Preemptive multitasking OSes with virtual memory were supposed to solve all of our noisy-neighbor problems.
There is definitely a lot of misunderstanding here.
This provision can and does lead companies to owe significantly more in taxes than they make.
The only reason it hasn't been bigger news is that most companies are pretending it doesn't exist and just sweeping it under the rug, hoping it will get fixed before enforcement gets serious.
I think the real reason it isn't bigger news is because the second you talk about tax code people start to tune out. It's easier to wind people up over AI taking jobs than it is to try and explain what amortization means.
> The only reason it hasn't been bigger news, is because most companies are pretending it doesn't exist and just sweeping it under the rug, hoping it will get fixed before enforcement gets serious.
Why pretend that it doesn’t exist? Why not vocally lobby for a change in the tax code?
There is bipartisan support to repeal the change. Meanwhile, further changes to the tax code are being prepared by the administration, very probably containing further such time-delayed footguns that will be the problem of the next administration to clean up, making them look like they raise taxes.
Nope, payroll is a significant part of the expenses even of FAANGs. Or at least of the entities that employ people in the US. And they very much benefit from the startup ecosystem as they can just cherry-pick among them, buy up prospective disruptors and new technologies, and disassemble them for spare parts.
Which is why with airlines it's established up front who is liable for your lost bag; let the carriers fight amongst themselves about who actually lost it. Unfortunately, compensation has not kept up with the value of the stuff we typically carry.
> Anyone proclaiming simplicity just hasnt [sic] worked at scale
I've worked in startups and large tech organizations over decades and indeed, there are definitely some problems in those places that are hard.
That said, in my opinion, the majority of technical solutions were over-engineered and mostly waste.
Much simpler, more reliable, more efficient solutions were available, but inappropriately dismissed.
My team was able to demonstrate this by producing a much simpler system, deploying it and delivering it to many millions of people, every day.
Chesterton's fence is great in some contexts, especially politics, but the vast majority of software is so poorly made, it rarely applies IMO.