Hacker News | anymouse123456's comments

Okay, I'll bite.

> Anyone proclaiming simplicity just hasnt [sic] worked at scale

I've worked in startups and large tech organizations over the decades, and indeed, some of the problems in those places are genuinely hard.

That said, in my opinion, the majority of technical solutions were over-engineered and mostly waste.

Much simpler, more reliable, more efficient solutions were available, but inappropriately dismissed.

My team was able to demonstrate this by producing a much simpler system, deploying it and delivering it to many millions of people, every day.

Chesterton's fence is great in some contexts, especially politics, but the vast majority of software is so poorly made, it rarely applies IMO.


Hard agree.

I've also worked in some quite large organizations, with quite large services that easily took 10x to 50x longer to ship than they would have at a smaller org.

Most of the time, people were mistaking complexity caused by bad decisions (tech or otherwise) for "domain complexity" and "edge cases", and refusing to acknowledge that things are now harder because of those decisions. Just changing the point of view makes it simple again, but then you run into internal politics.

With microservices especially, the irony was that the decisions justified as "saving time in the future" were mostly the ones that ended up generating the most future work, and in a few cases even problems around compliance and data sovereignty.


The problem is that you can't create a system in a vacuum.

Mostly it is not like a movie where you hand-pick the team for the job.

Usually you have to play the cards you're dealt, so you take whatever your team is comfortable building.

Which in the end means dealing with emotions, people's ambitions, and wishes.

I have seen stuff gold-plated just because one vocal person was making a fuss. I have seen good ideas blocked just because someone wanted to feel important. I have seen teams who wanted to "do proper engineering" but thought over-engineering was the proper way, and that anything less than gold plating would make them look like amateurs.


So, case by case then?


AI has been great for UX prototypes.

Get something stood up quickly to react to.

It's not complete, it's not correct, it's not maintainable. But it's literal minutes to go from a blank page to seeing something clickable-ish.

We do that for a few rounds, set a direction and then throw it in the trash and start building.

In that sense, AI can be incredibly powerful and useful, and it has saved us tons of time that would have gone into developing the wrong thing.

I can't see the future, but it's definitely not generating useful applications out of whole cloth at this point in time.


For me it's useful in those areas I don't venture into very often. For example I needed a powershell script recently that would create a little report of some registry settings. Claude banged out something that worked perfectly for me and saved me an hour of messing around.


It’s useful for almost any one-off script I write. It can do the work much faster than me and produce nicer-looking output than I’d ever bother to write myself. It can also generate CLI args and docs I’d never "waste time" on myself, which I’d later waste even more time fumbling around without.

They’re insanely useful. I don’t get why people pretend otherwise, just because they aren't fulfilling the prophecies of blowhards and salesmen.


Yeah, they work pretty well for scripts. I use Claude to create the scripts I need for .csv transforms and things: read all these files in a directory, merge them together on some key, convert this to that, and output a csv with the following header row. For things like that they work pretty well.
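To be fair to the humans, the kind of merge script described above isn't much code by hand either. A minimal sketch with Python's stdlib (column names, file layout, and the collision policy of "last file wins" are all made-up assumptions):

```python
import csv
import glob


def merge_csvs(pattern, key, out_path, header):
    """Merge every CSV matching `pattern` on column `key`, writing
    only the columns listed in `header` to `out_path`.

    Rows sharing a key are combined; when two files disagree on a
    column, the later file (in sorted order) wins.
    """
    merged = {}  # key value -> combined row dict
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                merged.setdefault(row[key], {}).update(row)

    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=header, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(merged.values())
```

A real version would want to decide deliberately what happens on missing keys and conflicting values, which is exactly the part worth reviewing in whatever the LLM emits.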


Yes, totally agree. The second thing I found it great for was explaining errors: it either finds the exact solution, or sparks a thought that leads to the answer.


It's the height of absurdity to me that this is possible and devs will still say outrageous shit like "These tools have no use"


> We do that for a few rounds, set a direction and then throw it in the trash and start building.

Unfortunately PMs tend to forget the throw-it-in-the-trash part, so the prototype still ends up in prod.

But good for you, if you found a way to make it work.


Can you elaborate on your process and tools here? This use case may actually be valuable for me and my team.


Tools that can build you a quick clickable prototype are everywhere: Replit, Claude Code, Cursor, ChatGPT Pro, v0.app. They're all totally capable.

From there it's the important part: discussing, documenting, and making sure you're on the same page about what to actually build. Ideally, get input from your actual customers on the mockup (or multiple mockups) so you know what resonates and what doesn't.


This idea that performance is irrelevant gets under my skin. It's how we ended up with Docker and Kubernetes and the absolute slop stack that is destroying everything it touches.

Performance matters.

We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.

Performance does in fact matter and all other things equal, a fast product is more pleasurable than a slow one.

Thankfully some people like the folks at Figma took the risk and proved the point.

Even if we're innovating on hard technical problems (which most of us are not), performance still matters.


Containers were invented because VMs were too slow to cold start and used too much memory. Their whole raison d'être is performance.


Yeah, I think Electron would be the poster child


Can you live fork containers like you can VMs?

VM clone time is surprisingly quick once you stop copying memory, after that it's mostly ejecting the NIC and bringing up the new one.


You mean creating a different container that is exactly equal to the previous one?

It's absolutely possible, but I'm not sure there's any tool out there with that command... because why would you? You'll get about the same result as forking a process inside the container.


It's useful if you want to bring up a containerized service, optionally update the OS, run tests, and, if everything is good, copy that instance a bunch of times rather than starting fresh.

It lets you scale out a batch of VMs remarkably quickly, while leaving the original available for OS/patch updates.

If I'm willing to pay the cost of keeping an idle VM around, subsequent launches are probably an order of magnitude faster than docker hello-world.


I can't say I've ever cared about live forking a container (or VM, for that matter)


Your cloud provider may be doing it for you. Ops informed me one day that AWS was pushing out a critical security update to their host OS. So of course I asked if that meant I needed to redeploy our cluster, and they responded no, and in fact they had already pushed it.

Our cluster keeps stats on when processes start, both so we can alert on crashes and because new processes (cold JIT) can skew the response numbers and serve as inflection points for analyzing performance improvements or regressions. There were no restarts that morning. So they pulled the tablecloth out from under us. TIL.


None of this is making live forking a container desirable to me, I'm not a cloud hosting company (and if I was, I'd be happy to provide a VPS as a VM rather than a container)


There’s using a feature, having a vendor use it for you, or denying its worth.

Anything else is dissonant.


For the VM case, I'm sure I might have benefited from it, if Digital Ocean had been able to patch something live without restarting my VPS. Great. Nothing I need to care about, so I have never cared about live forking a VM. It hasn't come up in my use of VMs.

It's not a feature I miss in containers, is what I'm saying.


Why would you, if you can simply start replacement containers in another location and reroute traffic there, then dispose of the old ones?


That's another reason they're so infuriating. Containers are intended to make things faster and easier. But the allure of virtualization has made most work much, much slower and much, much worse.

If you're running infra at Google, of course containers and orchestration make sense.

If you're running apps/IT for an SMB or even small enterprise, they are 100% waste, churn and destruction. I've built for both btw.

The contexts in which they are appropriate and actually improve anything at all are vanishingly small.


I have wasted enough time caressing Linux servers to accommodate for different PHP versions that I know what good containers can do. An application gets tested, built, and bundled with all its system dependencies, in the CI; then pushed to the registry, deployed to the server. All automatic. Zero downtime. No manual software installation on the server. No server update downtimes. No subtle environment mismatches. No forgotten dependencies.

I fail to see the churn and destruction. Done well, you decouple the node from the application, even, and end up with raw compute that you can run multiple apps on.
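The pipeline described above can be sketched as a short CI config. Everything here is a placeholder (job names, registry host, deploy script), in a GitHub-Actions-like syntax, just to make the shape concrete:

```yaml
# Hypothetical CI job: test, build, push, deploy. All names are made up.
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test                 # test before anything ships
      - run: docker build -t registry.example.com/app:${{ github.sha }} .
      - run: docker push registry.example.com/app:${{ github.sha }}
      # The deploy step belongs to whatever orchestrator you run;
      # a rolling swap here is what gives you the zero downtime.
      - run: ./deploy.sh ${{ github.sha }}
```

The point is that every system dependency is baked into the image in CI, so the server never sees a manual install.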


Part of why I adopted containers fairly early was the time we decided to make VMs for QA with our software on them. They kept fucking up installs and reporting ghost bugs that were caused by a bad install, or running an older version and claiming the bugs we'd fixed weren't fixed.

Building disk images was a giant pain in the ass but less disruptive to flow than having QA cry wolf a couple times a week.

I could do the same with containers, and easier.


Performance matters, but at least initially only insofar as it doesn't complicate your code significantly. That's why a simple static website often beats a site built on whatever hyper-modern framework is on its latest optimization journey. You've got to maintain that shit. And you are making sacrifices elsewhere: in accessibility, and possibly in privacy and ethics.

So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.


> way too complicated for what they do

Notably, this is subjective. I’ve had devs tell me that joins (in SQL) are too complicated, so they’d prefer to just duplicate data everywhere. I get that skill is a spectrum, but it’s getting to the point where I feel like we’ve passed the floor, and need to firmly state that there are in fact some basic ideas that are required knowledge.


This kind of thinking is exactly the problem.

Yes, at the most absurd limits, a few obsessives may occasionally over-optimize and make things worse. We're so far from that problem today that it would be a good one to have.

IME, making things fast almost always also makes them simpler and easier to understand.

Building high-performance software often means building less of it, which translates into simpler concepts, fewer abstractions, and shorter times to execution.

It's not a trade-off, it's valuable all the way down.

Treating high performance as a feature and low performance as a bug affects everything we do, and ignoring that for decades is how you get the rivers of garbage we're swimming in.


> It's not a trade-off, it's valuable all the way down.

This.


Agreed, though containers and K8s aren’t themselves to blame (though they make it easier to get worse results).

Debian Slim is < 30 MB. Alpine, if you can live with musl, is 5 MB. The problem comes from people not understanding what containers are, and how they’re built; they then unknowingly (or uncaringly) add in dozens of layers without any attempt at reducing or flattening.

Similarly, K8s is of course just a container orchestration platform, but since it’s so easy to add to, people do so without knowing what they’re doing, and you wind up with 20 network hops to get out of the cluster.
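A hypothetical Dockerfile illustrating both points: start from a slim base, and do cleanup inside the same RUN layer so the apt cache never lands in the image (base tag and package names are arbitrary):

```dockerfile
# Small base: debian slim is under ~30 MB; alpine is ~5 MB if musl is acceptable.
FROM debian:bookworm-slim

# One RUN = one layer. Cleaning the apt lists in the *same* RUN means the
# cache files are never committed; a separate `RUN rm -rf ...` would only
# hide them behind an extra layer without shrinking the image.
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates curl \
 && rm -rf /var/lib/apt/lists/*

COPY app /usr/local/bin/app
CMD ["/usr/local/bin/app"]
```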


Docker good actually


Nah, we'll look back on Docker the same way many of us are glaring at our own sins with OO these days.


Docker is just making all the same promises we were made in 1991 that never came to fruition. Preemptive multitasking OSes with virtual memory were supposed to solve all of our noisy-neighbor problems.


If you’re implying that Docker is the slop, instead of an answer to the slop, I haven’t seen it.


Better. Thanks!


uh oh...


I love the repeated phrase, '...and the world wouldn’t turn to ash.'


There is definitely a lot of misunderstanding here.

This provision can and does lead companies to owe significantly more in taxes than they make.

The only reason it hasn't been bigger news is that most companies are pretending it doesn't exist and sweeping it under the rug, hoping it will get fixed before enforcement gets serious.


I think the real reason it isn't bigger news is because the second you talk about tax code people start to tune out. It's easier to wind people up over AI taking jobs than it is to try and explain what amortization means.


> The only reason it hasn't been bigger news, is because most companies are pretending it doesn't exist and just sweeping it under the rug, hoping it will get fixed before enforcement gets serious.

Why pretend that it doesn’t exist? Why not vocally lobby for a change in the tax code?


There is bipartisan support to repeal the change. Meanwhile, further changes to the tax code are being prepared by the administration, very probably containing further such time-delayed footguns that will be the problem of the next administration to clean up, making them look like they raise taxes.


This change was added in 2017, triggered in 2021/2022. It's been the policy for years now.

There is very little pressure on elected officials because big cos can afford it and it bankrupts their tiny future disruptors.

Why would you let it be fixed?


Nope, payroll is a significant part of the expenses even of FAANGs. Or at least of the entities that employ people in the US. And they very much benefit from the startup ecosystem as they can just cherry-pick among them, buy up prospective disruptors and new technologies, and disassemble them for spare parts.

Anyway, here is more information about the bill. Let's see what happens to it: https://www.kbkg.com/feature/lawmakers-introduce-bill-to-ret...


Because then the people who are watching might notice you and dig in.


If you didn't know about Section 174 until 2025 you have no business being in a leadership position anywhere, period.

This has been a slow moving disaster for years now and people have repeatedly tried to raise the alarm.

Just crickets and layoffs.


Installed a pair of these during our 2018 renovation.

Would never go back.

Seat warmers, auto-open, night light, and auto-flush are features no one seems to talk about, but they're as incredible as the washlet itself.


For what it's worth, Home Depot Services operates exactly the same way as the sub-contracted moving companies.

It's the circle-of-pointing-spider-man-meme but with everyone reaching into your wallet while doing absolutely nothing but damage.


Which is why, with airlines, liability for your lost bag is established up front: let the carriers fight among themselves about who actually lost it. Unfortunately, compensation has not kept up with the value of the stuff we typically carry.

