That may be the future, but we're not there yet. If you're having the LLM write in a high-level language (e.g. Java, JavaScript, Python), at some point there will be a bug or other incident that requires a human to read the code to fix it or make a change. Sure, that human will probably use an LLM as part of that, but they'll still need to be able to tell what the code is doing, and LLMs simply aren't reliable enough yet for you to blindly have them read the code, change it, and trust that the result is correct, secure, and performant. Sure, you can focus on writing tests and specs to verify, but you're going to spend a lot more time going in agentic loops trying to figure out why things aren't quite right versus a human actually being able to understand the code and give the LLM clear direction.
So long as this is all true, then the code needs to be human readable, even if it's not human-written.
Maybe we'll get to the point that LLMs really are equivalent to compilers in terms of reliability -- but at that point, why would we have them write in Java or other human-readable languages? LLMs would _be_ the compiler at that point, with a natural-language UI, outputting some kind of machine code. Until then, we do need readable code.
Me: My code isn’t giving the expected result $y when I do $x.
Codex: runs the code, reproduces the incorrect behavior I described, finds the bug, fixes it, reruns the code, and gets the result I told it I expected. It iterates until it gets it right, then runs my other unit and integration tests.
Somewhat against the common sentiment, I find it's very helpful on a large legacy project. At work, our main product is a very old, very large code base. This means it's difficult to build up a good understanding of it -- documentation is often out of date, or makes assumptions about prior knowledge. Tracking down the team or teams that can help requires being very skilled at navigating a large corporate hierarchy. But at the end of the day, the answers for how the code works are mostly in the code itself, and this is where AI assistance has really been shining for me. It can explore the code base and find and explain patterns and available methods far faster than I can.
My prompts tend to follow the pattern of "I am looking to implement <X>. <Detailed description of what I expect X to do.> Review the code base to find similar examples of how this is currently done, and propose a plan for how to implement this."
These days I'm on Claude Code, and I do that first part in Plan mode, though even a few months ago, on earlier, not-as-performant models and tools, I was still finding value in this approach. It's only getting better as the company invests in shared skills/tools/plugins/whatever the current terminology is, specific to various use cases within the code base.
I haven't been writing as much code directly, but I do still very much feel that this is my code. My sessions are very interactive -- I ask the agent to explain decisions, question its plans, review the produced code, and often revise it. I find it frees me up to spend more time thinking through higher-level architecture and applying it, instead of spending frustrating hours hunting down more basic "how does this work" information.
I think it might have been an article by Simon Willison that made the case that there's a way to use AI tooling that makes you smarter, and a way that makes you dumber. Pointing, shooting, and blindly accepting the output makes you dumber -- it places more distance between you and your code base. Using AI tools to automate away a lot of the toil gives you energy and time to dive deeper into your code base and develop a stronger mental model of how it works -- it makes you smarter. I keep in mind that at the end of the day, it's my name on the PR, regardless of how much Claude directly created or edited the files.
If the goal is to reduce the need for SWE, you don’t need AI for that. I suspect I’m not alone in observing how companies are often very inefficient, so that devs end up spending a lot of time on projects of questionable value—something that seems to happen more often the larger the organization. I recall at one job my manager insisted I delegate building a react app for an internal tool to a team of contractors rather than letting me focus for two weeks and knock it out myself.
It’s always the people management stuff that’s the hard part, but AI isn’t going to solve that. I don’t know what my previous manager’s deal was, but AI wouldn’t fix it.
> There's this notion of software maintenance - that software which serves a purpose must be perennially updated and changed - which is a huge, rancid fallacy. If the software tool performs the task it's designed to perform, and the user gets utility out of it, it doesn't matter if the software is a decade old and hasn't been updated.
If what you are saying is that _maintenance_ is not the same as feature updates and changes, then I agree. If you are literally saying that you think software, once released, doesn't ever need any further changes for maintenance rather than feature reasons, I disagree.
For instance, you mention "security implications," but as a "might," not a "will." I think this vastly underestimates the security issues inherent in software. I'd go so far as to say that all software has two categories of security issues -- those that are known today, and those that will be uncovered in the future.
Then there's the issue of the runtime environment changing. If it's web-based, changing browser capabilities, for instance. Or APIs it calls changing or breaking. Etc.
Software may not be physical, but it's subject to entropy as much as roads, rails, and other goods and infrastructure out in the non-digital world.
Some software - what I take issue with is the notion that all software must be continuously updated, regardless. There are a whole lot of chunks of code that never get touched. There are apps and daemons and widgets that do simple things well, and going back to poke at them over and over for no better reason than "they need updates" is garbage.
There's the whole testing paradigm issue, driven by enshittification, incentivizing surveillance in the guise of telemetry and numbing people to the casual intrusion on their privacy. The midwit UX and UI "engineers" who constantly adjust and tweak and move shit around in pursuit of arbitrary metrics, inflicting A/B testing for no better reason than to make a number go up on a spreadsheet, be it engagement, or number of clicks, or time spent on page, or whatever. Or my absolute favorite: "but the users are too dumb to do things correctly, so we will infantilize them by default and assume they're far too incompetent and lack the agency to know what they want."
Continuous development isn't necessary for everything. I use an app daily that was written over 10 years ago - it does a mathematical calculation and displays the result. It doesn't have any networking, no fancy UI, everything is sleek and minimal and inline, and there aren't dependencies that open up a potential vulnerability. This app, by nearly every way in which modern software gets assessed, is built entirely the wrong way, with no automatic-updates mechanism, no links back to a website, no issue-reporting menu items, no feature changelog, and yet it's one of the absolute best applications I use, and to change it would be a travesty.
Maybe you could convince me that some software needs to be built in the way modern apps are foisted off on us, but when you dig down to the reasons justifying these things, there are far better, more responsible, user-respecting ways to do things. Artificial Incompetence is a walled-garden dark pattern.
It's shocking how much development happens simply so that developers and their management can justify continued employment, as opposed to anything any user has ever actually wanted. The wasteful, meaningless flood of CI slop, the updates for the sake of updates, the updates because they need control, or subscriptions, or some other way of squeezing every last possible drop of profit out of our pockets, regardless of any actual value for the user - that stuff bugs the crap out of me.
These posts are in a thread about someone pumping out a large amount of software in a short amount of time using AI. I'm guessing that you and I would agree that programs flung out of an AI shotgun are highly unlikely to be the kind of software that will work well and satisfy users with no changes over 10 years.
> In order to sell anything, people need to know about it. Google and Meta provide a way to make this possible. If they didn't exist, you wouldn't somehow have a more affordable way to get people to know about your product. However frustrating the current situation is, it is still more accessible than needing access to the airwaves or print media to try to sell anything new.
The places people can find out about your product are controlled by a very small number of companies. And those companies not only own those spaces, they also own the means of advertising on those spaces. So if you have a product you want to advertise, you're not paying to distribute your message broadly to consumers, you're paying a toll to a gatekeeper that stands between you and your potential customers.
But that's not really true. You're not paying, you're bidding. You are competing against thousands of other advertisers for eyeballs. If you are the only advertiser targeting a group of people, you will spend almost nothing to advertise. If you are targeting a group of people that everyone targets (e.g. rich people in their 30s), you will pay through the nose.
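To make that dynamic concrete, here's a toy sketch of a simplified second-price auction (my own illustration, not how Google or Meta actually price ads; the function and numbers are made up). It just shows how an unopposed bidder clears at the floor while a crowded audience clears near the top bids:

```python
# Toy sketch: simplified second-price auction for a single ad slot.
# The winner pays the second-highest bid, or a floor price if unopposed.

def clearing_price(bids, floor=0.01):
    """Return (winning_bid, price_paid) for a simplified second-price auction."""
    ranked = sorted(bids, reverse=True)
    if not ranked:
        return None
    price = ranked[1] if len(ranked) > 1 else floor
    return ranked[0], price

# Only advertiser targeting this audience: you pay roughly nothing.
print(clearing_price([4.00]))              # (4.0, 0.01)

# Everyone targets the same audience: you pay through the nose.
print(clearing_price([4.00, 3.95, 3.80]))  # (4.0, 3.95)
```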
Facebook, Google etc. are the most “fair” forms of advertising. We can dislike advertising, their influence, product etc. but when you compare them to almost every other type of advertising, they’re the best for advertisers.
The reason they generate so much revenue is because they are so accessible and because they are so easy to account for. The reason LTV (lifetime value) and CAC (customer acquisition cost) are so widely understood by businesses today is because of what Google, Facebook, etc. offer.
No financial market would be able to run the way Google and Facebook run their ad markets. They are the supplier, the exchange, and the broker all at the same time. This is not a competitive market. It's a captured one where the supplier effectively gets to set their price, and the exchange and the broker incentivize and advise you to trade at that price.
Google has famously and repeatedly rigged this bidding system in anti-competitive ways and has had to pay billions in fines because of it (which I am sure were less than the amount they profited from).
> I still remember in the early 2000s Barnes and Noble would still have massive shelf space devoted to every technical topic you could imagine.
B&N, and Borders, are how I learned to code. Directionless after college, I thought, hey, why not learn how to make websites? And I'd spend a lot of time after work reading books at these stores (and yes, buying too).
Over my career, I've been in a big company twice. This article definitely tracks with my experience. At one company, I think management actively didn't care, and in fact my direct manager was pretty hostile to any attempts at improving our code base as it meant disruption to what was, for him, a stable little niche he had set up.
At the second, it wasn't hostility but more indifference -- yes, in theory they'd like higher quality code, but none of the systems to make this possible were set up. My team was all brand new to the company, except for two folks who'd been at the company for several years but in a completely different domain, with a manager from yet another domain. The "relative beginner" aspect he calls out was in full effect.