I think the article is mostly wrong about why it is right.
> It's architecting systems. And that's the one thing AI can't do.
Why do people insist on this? AI absolutely will be able to do that, because it increasingly can do it already, and we are now moving the goalposts on what "architecting systems" means.
What it cannot do, even in theory, is decide for you to want to do something and decide for you what that should be. (It can certainly provide ideas, but the context space is so large that I don't see how it would realistically be better than you at spotting an issue that exists in your world, given what you can do, who you know, and what interests you.)
For the foreseeable future, we will need people who want to make something happen. Being a developer will come to mean something else, but that does not mean you are no longer the person best equipped to handle that task and deal with the complexities involved.
Not really. Programming means explaining to the machine what to do. How you do it has changed over the years, from writing machine language and punching cards to gluing frameworks together and drawing boxes. But the core is always the same: take approximate and ambiguous requirements from someone who doesn't really know what they want and turn them into something precise the machine can execute reliably, without supervision.
Over the years, programmers have figured out that the best way to do this is with code. GUIs are usually not expressive enough, and English is too ambiguous and/or too verbose; that's why we have programming languages. Some fields had specialized languages before electronic computers existed, like maths, and for the same reason.
LLMs are just the current step in the evolution of programming, but the role of the programmer is still the same: getting the machine to do what people want, be it by prompting, drawing, or writing code, and I suspect code will still prevail. LLMs are quite good at repeating what has been done before, but having them write something original from natural language descriptions is quite a frustrating experience, and if you are programming, there is a good chance there is at least something original to it; otherwise, why not use an off-the-shelf product?
We are at the peak of the hype cycle now, but things will settle down. Some things will change for sure, as always when some new technology emerges.
I like to joke with people that us programmers automated our jobs away decades ago, we just tell our fancy compilers what we want and they magically generate all the code for us!
I don't see LLMs as much different really, our jobs becoming easier just means there's more things we can do now and with more capabilities comes more demand. Not right away of course.
What's different is compilers do deterministic, repetitive work that's correct practically every time. AI takes the hard part, the ambiguity, and gets it sorta ok some of the time.
The hard part is not the ambiguous part, and it never was. You just need to talk with the stakeholders to sort it out. That's the requirements phase, and all it requires is good communication skills.
The hard part is to have a consistent system that can evolve without costing too much. And the bigger the system, the harder it is to get this right. We have principles like modularity, cohesion, and information hiding to help us on that front, but no clear guideline on how to achieve it. That's the design phase.
Once you have the two above done, coding is often quite easy. And if you have a good programming ecosystem and people who know it, it can be done quite fast.
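To make "information hiding" concrete with a trivial, hypothetical sketch (not from any real codebase): the rest of the system only sees the small public interface, so the storage detail can change later without rippling through everything that uses it.

    # Hypothetical sketch of information hiding: callers depend only on the
    # public interface, never on the private storage detail.
    class OrderRepository:
        def __init__(self):
            self._orders = {}  # private detail: could become SQL, Redis, ...

        def add(self, order_id: str, payload: dict) -> None:
            self._orders[order_id] = payload

        def find(self, order_id: str) -> dict | None:
            return self._orders.get(order_id)

    # Swapping the backing store later only touches this class,
    # not the rest of the system.
    repo = OrderRepository()
    repo.add("42", {"item": "widget"})
    print(repo.find("42"))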
No, he's right: compilers pretty much do the same thing every time. It's very rare that there are bugs in compilers, and even if the assembly is different, if it behaves the same it doesn't matter.
100% agree with this thread, because it's the same discussion as why no-code (and cloud/SaaS to a lesser degree) failed to deliver on its utopian promises.
Largely, because there were still upstream blockers that constrained throughput.
Typically imprecise business requirements (because someone hadn't thought sufficiently about the problem) or operating-at-scale issues (poorly generalizing architecture).
> our jobs becoming easier just means there's more things we can do now and with more capabilities comes more demand
This is the repeatedly forgotten lesson from the computing / digitization revolution!
The reason they changed the world wasn't because they were more capable (versus their manual precursors) but because they were economically cheaper.
Consequently, they enabled an entire class of problems to be worked on that were previously uneconomical.
E.g. there's no company on the planet that wouldn't be interested in more realtime detail of its financial operations... but that wasn't worth enough to pay bodies to continually tabulate it.
>> The NoCode movement didn't eliminate developers; it created NoCode specialists and backend integrators. The cloud didn't eliminate system administrators; it transformed them into DevOps engineers at double the salary.
Similarly, the article feels around the issue here but misses two important takeaways:
1) Technologies that revolutionize the world decrease total cost to deliver preexisting value.
2) Salary ~= value, for as many positions as demand supports.
Whether there are more or fewer backend integrators, devops engineers, etc. post-transformation isn't foretold.
In recent history, those who upskill their productivity reap larger salaries, while others' positions disappear. I.e. the one cloud engineer supporting millions of users, instead of the many bodies it used to take to deliver less efficiently.
It remains to be seen whether AI coding will stimulate more demand or simply increase the value of the same / fewer positions.
PS: If I were career plotting today, there's no way in hell I'd be aiming for anything that didn't have a customer-interactive component. Those business solution formulation skills are going to be a key differentiator any way it goes. The "locked in a closet" coder, no matter how good, is going to be a valuable addition for fewer and fewer positions.
I agree, I see AI as just a level of abstraction. Make a function to do X, Y, Z? Works great. Even architect a DAG, pretty good. Integrate everything smoothly? Call in the devs.
On the bright side, the element of development that is LEAST represented in teaching and interviewing (how to structure large codebases) will be the new frontier and differentiator. But much as scripting languages removed the focus on pointers and memory management, AI will abstract away discrete blocks of code.
It is kind of the dream of open source software, but taken further: don't rebuild standard functions. But also, don't bother searching for them or working out how to integrate them. Just request what you need and keep going.
> I agree, I see AI as just a level of abstraction. Make a function to do X, Y, Z? Works great. Even architect a DAG, pretty good. Integrate everything smoothly? Call in the devs.
"Your job is now to integrate all of this AI generated slop together smoothly" is a thought that is going to keep me up at night and probably remove years from my life from stress
I don't mean to sound flippant. What you are describing sounds like a nightmare. Plumbing libraries together is just such a boring, miserable chore. Have AI solve all the fun challenging parts and then personally do the gruntwork of wiring it all together?
The problem with this idea is that the current systems have gone from being completely incapable of taking the developer role in this equation to somewhat capable of taking the developer role (i.e. newer agents).
At this clip it isn't very hard to imagine the developer layer becoming obsolete or reduced down to one architect directing many agents.
In fact, this is probably already somewhat possible. I don't really write code anymore, I direct claude code to make the edits. This is a much faster workflow than the old one.
I am finding LLMs useful for coding in that they can do a lot of the heavy lifting for me, and then I jump in and do some finishing touches.
They are also sort of decent at reviewing my code and suggesting improvements, writing unit tests, etc.
Hidden in all that is that I have to describe all of those things, in detail, for the LLM to do a decent job. I can of course just say "Write unit tests for me", but I notice it does a much better job if I describe what the test cases are, and even how I want things tested.
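As a made-up illustration of what "describe the test cases" means in practice, handing the LLM something this explicit (here for a hypothetical slugify helper) gets far better results than a bare "write unit tests for me":

    # Hypothetical example: the cases I would spell out for the LLM,
    # written as a parametrized pytest so the expectations are unambiguous.
    import pytest
    from myproject.text import slugify  # hypothetical helper under test

    @pytest.mark.parametrize("raw, expected", [
        ("Hello World", "hello-world"),          # spaces become hyphens, lowercased
        ("  padded  ", "padded"),                # surrounding whitespace stripped
        ("", ""),                                # empty input stays empty
        ("Already-Slugged", "already-slugged"),  # existing hyphens preserved
    ])
    def test_slugify(raw, expected):
        assert slugify(raw) == expected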
My way of thinking about this has been: code is to a developer as bricks are to a builder. Writing a line of code is merely the final 10% of the work; there's a whole bunch of cognitive effort that precedes it. Just like a builder has already established a blueprint, set up straight lines, mixed cement, and what-have-you prior to laying a brick.
The article is mostly wrong; companies are already not recruiting as many junior/fresh college graduates as before. If AI is doing everything but architecting (which is a false premise, but let's roll with it), naturally companies will need fewer engineers to architect and supervise AI systems.
I suspect that any reduction in hiring is more a function of market sentiment than jobs being replaced by AI. Many companies are cutting costs rather than expanding as rapidly as possible during the capture the flag years.
People keep forgetting that the hiring cuts were happening before AI was hyped up. AI is merely the justification right now because it helps stock price.
I don’t think people imagined this would be lasting for over 3 years. People were ready for a bumpy 12-18 months and not for this trend to be the new normal.
You think executives are gonna be saying, “yeah we’re laying off people because our revenue stinks and we have too high of costs!” They’re gonna tell people, “yeah, we definitely got AI. It’s AI, that’s our competitive edge and why we had to do layoffs. We have AI and our competitors don’t. That’s why we’re better. (Oh my god, I hope this works. Please get the stock back up, daddy needs a new yacht.)”
His comments read a lot like typical Indian-superiority complex types believing that Indians are the superior race. It's tiring that shit like this even gets on HN. You're not going to convince him of anything.
Depends on where too. Was just talking to a friend yesterday who works for a military sub (so not just software) and they said their projects are basically bottlenecked by hiring engineers.
They're not hiring juniors and now my roles consist of 10x as much busywork as they used to. People are expanding to fill these gaps; I'm not seeing much evidence that AI is "replacing" these people so much as businesses think they no longer need to hire junior developers. The thing is though, in 5 years there are not going to be as many seniors, and if AI doesn't close that gap, businesses are going to feel it a lot more than whatever they think they're gaining by not hiring now.
>> The thing is though, in 5 years there are not going to be as many seniors.
This is already happening. Over the past 4-5 years I've known more than 30 senior devs either transition into areas other than development, or in many cases, completely leave development altogether. Most have left because they're getting stuck in situations like you describe: having to pick up more managerial stuff while AI isn't capable of even doing junior-level work, so many just gave up and left.
Yes, AI is helping in a lot of different ways to reduce development times, but the offloading of specific knowledge to these tools is hampering actual skill development.
We're in for a real bumpy ride over the next decade as the industry comes to grips with a lot of bad things all happening at the same time.
There's the software-factory hypothesis, though, which says that LLMs will lower the skill bar required to produce the same software (i.e. automation turns SWE into something like working on a factory line). In this scenario, unskilled cheap labor would be desired, making juniors preferable.
My guess though is that the lack of hiring is simply a result of the over saturation of the market. Just looking at the growth of CS degrees awarded you have to conclude that we'd be in such a situation eventually.
The equilibriums wouldn't quite work out that way. The companies would still hire the most capable software engineers (why not?), but the threat of being replaced by cheap juniors means that they don't have much leverage and their wages drop. It'll still be grizzled veterans and competitive hiring processes looking for people with lots of experience.
These things don't happen overnight though, it'll probably take a few years yet for the shock of whatever is going on right now to really play out.
The number and sophistication of sailing ships increased considerably as steam ships entered the market. Only once steam ships were considerably better in almost every regard that mattered to the market did sailing ships truly get phased out to become a mere curiosity.
I think the demand for developers will similarly fluctuate wildly while LLMs are still being improved towards the point of being better programmers than most programmers. Then programmers will go and do other stuff.
Being able to make important decisions about what to build should be one of those things that increase in demand as the price of building stuff goes down. Then again, making important technical decisions and understanding their consequences have always been part of what developers do. So we should be good at that.
The advantages of steam over sails were clear to everyone. The only issues left were engineering ones: solving each mini-problem as they went and making the engine more efficient. Since the advent of ChatGPT, hallucinations have been pointed out as a problem. Today we're nowhere close to even a hint of how to correct them.
> superior in some areas, inferior in others, with the balance changing with technological advancement.
So what are the areas in which AI is superior to traditional programming? If your answer is suggestion followed by refinement with traditional tooling, then it's just a workflow add-on like contextual help, Google search, and GitHub code search. And I prefer the others because they are more reliable.
We have six major phases in the software development lifecycle: 1) Planning, 2) Analysis, 3) Design, 4) Implementation, 5) Testing, 6) Maintenance. I fail to see how LLM assistance is objectively better, even in part, than not having it at all. Everything I've read is mostly anecdote where the root cause is inexperience and lack of knowledge.
There's definitely this broader argument and you can even find it in academic papers. Is AI best at complementing expertise or just replacing base-level skills? Probably a bit of both but an open question.
Please provide your evidence for the implied claim that CS programs outside of the US are producing better workers than CS programs inside the US minus the "top-7" you reference.
Maybe I misunderstood your phrasing, but I think with enough context an AI could determine what you want to do with reasonable accuracy.
In fact, I think this is the scary thing that people are ringing the alarm bells about. With enough surveillance, organizations will be able to identify you reliably out in the world and build up a significant amount of context, even if you aren't wearing a pair of AI glasses.
And with all that context, it will become a reasonable task for AI to guess what you want. Perhaps even guess a string of events or actions or activities that would lead you towards an end state that is desirable by that organization.
This is primarily responding to that one assertion though and is perhaps tangential to your actual overall point.
Take a look at your life and the signals you use to operate. If you are anything like me, summarizing them in a somewhat reasonable fashion feels basically impossible.
For example, my mother calls and asks if I want to come over.
How is an AI ever going to have the context to decide that for me? Given the right amount and quality of sensors starting from birth or soon after – sure, it's not theoretically impossible.
But as a grown-up person, with knowledge about the things we share and don't share, the conflicts in our present and past, the things I never talked about to anyone and that I would find hard to verbalize if I wanted to, or to admit to myself that I don't? That context just isn't there.
It can check my calendar. But it can't understand that I have been thinking about doing something for a while, and that I just heard someone randomly talking about something else that resurfaced that idea, and now I would really rather do that. How would the AI know? (Again, not theoretically impossible given the right sensors, but it seems fairly far away.)
I could try and explain, of course. But where to start? And how would I explain how to explain this to mum? It's really fucking complicated. I am not saying that LLMs would not be helpful here; they are generalization monsters, and actually it's both insane and sobering how helpful they can be given the amount of context that they do not have about us.
I think that's the key. The only ones who can provide enough accurate context are software developers. No POs or managers can handle such levels of detail (or abstraction) to hand them over via prompts to a chatbot; engineers are doing this on a daily basis.
I laugh at the image of a non-technical person like my PO or the manager of my manager giving "orders" to an LLM to design a highly scalable tiny component for handling payments. There are dozens of details that can go wrong if not enough detail is provided: from security, to versioning, to resilience, to deployment, to maintainability...
I believe this is utter fantasy. That kind of data is usually super messy. LLMs are terrible at disambiguating whether something is useful or harmful information.
It's also unlikely that context windows will become unbounded to the point where all that data can fit in context, and even if it can it's another question entirely whether the model can actually utilize all the information.
Many, many unknown unknowns would need to be overcome for this to even be in the realm of possibility. Right now it's difficult enough to get simple agents with relatively small context to be reliable and perform well, let alone something like what you're suggesting.
That's not the goal of LLMs. CEOs and high-level executives need people beneath them to handle ambiguous or non-explicit commands and take ownership of their actions from conception to release. Sure, LLMs can be configured to handle vague instructions and even say, "sure, boss, I take responsibility for my actions," but no real boss would be comfortable with that.
Think about it: if, in 10 years, I create a company and my only employee is a highly capable LLM that can execute any command I give, who's going to be liable if something goes wrong? The LLM or me? It's gonna be me, so I'd better give the damn LLM explicit and non-ambiguous commands... but hey, I'm only the CEO of my own company, I don't know how to do that (otherwise, I would be an engineer).
I'd definitely be interested in at least giving a shot at working for a company CEO'd by an LLM… maybe, 3 years from now.
I don’t know if I really believe that it would be better than a human in every domain. But it definitely won’t have a cousin on the board of a competitor, reveal our plans to golfing buddies, make promotions based on handshake strength, or get canceled for hitting on employees.
But it will change its business plan the first time someone says "No, that doesn't make sense", and then it'll forget what either plan was after a half hour.
To be CEO is to have opinions and convictions, even if they are incorrect. That's beyond LLMs.
Minor tangential quibble: I think it is more accurate to say that to be human is to have opinions and convictions. But, maybe being CEO is a job that really requires turning certain types of opinions and convictions into actions.
More to the point, I was under the impression that current super-subservient LLMs were just a result of the fine-tuning process. Of course, the LLM doesn’t have an internal mental state so we can’t say it has an opinion. But, it could be fine-tuned to act like it does, right?
That was my point - to be CEO is to have convictions that you're willing to bet a whole company upon.
Who is fine-tuning the LLM? If you're having someone turn the dials and set core concepts and policies so that they persist outside the context window, it seems to me that they're the actual leader.
Generally the companies that sell these LLMs as a service do things like fine-tuning and designing built-in parts of the prompt. If we want to say we consider the employees of those companies to be the ones actually doing <the thing>, I could be convinced, I think. But, I think it is an unusual interpretation, usually we consider the one doing <the thing> to be the person using the LLM.
I’m speculating about a company run by an LLM (which doesn’t exist yet), so it seems plausible enough that all of the employees of the company could use it together (why not?).
Yeah, or maybe even a structure that is like a collection of co-ops, guilds, and/or franchises somehow coordinated by an LLM. The mechanism for actually running the thing semi-democratically would definitely need to be worked out!
I think this is already what happens in social media advertising. It’s not hard to develop a pattern of behaviours for a subset of people that lead to conversion and then build a model that delivers information to people that leads them on those paths. And conversion doesn’t mean they need to buy a product it could also be accept an idea, vote for a candidate, etc. The scary thing, as you point out, is that this could happen in the real world given the massive amount of data that is passively collected about everything and everybody.
I want peace and thriving for all members of humanity¹ to the largest extent possible, starting where it creates reciprocal flourishing, and staying free of excluding anyone by favoring someone else.
See, "AI" doesn't even have to guess it: I make full public disclosure of it. If anything can help with such a goal, including automated inference (AI) devices, there is no major concern with such a tool per se.
The leviathan monopolizing the tool for its own benefit, in a way detrimental to human beings, is an orthogonal issue.
¹ this is a bit of an anthropocentric statement, but it's a good way to favor human agreement, and I believe it still implicitly requires living in harmony with the rest of our fellow earth inhabitants
No, AI is not yet able to architect. The confusion here is the inability to discern architecture from planning.
Planning is the ability to map concerns to solutions and project solution delivery according to resources available. I am not convinced AI is anywhere near getting that right. It’s not straightforward even when your human assets are commodities.
Acting on plans is called task execution.
Architecture is the design and art of interrelated systems. This involves layers of competing and/or cooperative plans. AI absolutely cannot do this. A gross hallucination at one layer potentially destroys or displaces other layers and that is catastrophically expensive. That is why real people do this work and why they are constantly audited.
None of my parent comment had anything to do with writing code. Architects in physical engineering don’t hammer nails or pour concrete. Architects in software, likewise, aren’t concerned with islands of code.
Since the advent of AI, with everyone firing their AI-generated scripts at random infrastructure, I have significantly more work, and it isn't the kind of work where I can take this stuff, throw it at an AI, and get a solution. It is the usual development work. Part of it is our own fault, since we supplied AI agents left and right to any user.
I tried to formalize and embed information about our infrastructure so AI-generated code can adapt to it, but there are serious challenges. I wish I could get an AI secretary, but in that case, please let it be a real human.
AI is much better at creating code line by line with small context windows. If the task gets too big and complicated, the code will almost certainly not run or work. That's a statistical certainty, following from the same mechanisms that make AI work in the first place.
If an AI can create a project, it has to be completely self-contained. Still useful, but also restricted.
And AI embedding is another pain in the arse, and even the large tech firms fail here. It is so much work to prepare data for embedding. Human work, since AI isn't fit to do that yet. Maybe it will be, but it would be naive to assume that you can finally rest as a developer. There will be other bullshit to deal with.
Because it isn't even close to entry-level architecture. And the structure of LLMs makes it hard to iterate the way architecture requires. It basically gets a little bit of context, only to essentially tear down and rebuild at blistering speed. You can't really architect that way.
>What it cannot do, even in theory, is decide for you to want to do something and decide for you what that should be.
Something sadly common among clients, even large ones.
Well put. It's routinely spoken about as if there's no timescale on which AI could ever advance to match elite human capabilities, which seems delusionally pessimistic.
Given that people love to talk as if they can replace developers right now, I will disagree with any such assertions. Speculating on the future is worthless in this context.
But if we want to do it anyway: the current stalls in approaches to iterating on LLMs do not show much short-term promise. This approach already seems to be plateauing.
If you read their use of "AI" as "LLM", then, yes, LLMs can't architect and I don't expect they ever will. You could power them up by a 10, past all the scaling limits we have now, and LLMs would still be fundamentally unsuited for architecture. It's a technology fundamentally unsuited for even medium-small scale code coherence, let alone architectural-level coherence, just by its nature. It is simply constitutionally too likely to say "I need to validate user names" and slam out a fresh copy of a "username validation routine" because that autocompletes nicely, but now you've got a seventh "username validation routine" because the LLM has previously already done this several times before, and none of the seven are the same, and that's just one particularly easy-to-grasp example of their current pathologies.
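A made-up illustration of that drift: two of the seven copies might quietly disagree on something as basic as length limits and allowed characters, and nothing forces them to converge.

    import re

    # Hypothetical illustration: two LLM-generated "username validation
    # routines" living in the same codebase and quietly disagreeing.
    def validate_username(name: str) -> bool:
        # copy #3: 3-20 chars, letters, digits and underscores only
        return re.fullmatch(r"[A-Za-z0-9_]{3,20}", name) is not None

    def is_valid_username(name: str) -> bool:
        # copy #6: 4-32 chars, also allows dots and hyphens
        return re.fullmatch(r"[A-Za-z0-9._-]{4,32}", name) is not None

    # Same input, different verdicts: False vs True
    print(validate_username("a.bcd"), is_valid_username("a.bcd"))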
If anyone's moving the "architecture" goal posts it would be anyone who thinks that "architecture" so much as fits into the context window of a modern LLM, let alone that they are successfully doing it. They're terrible architects right now, like, worse than useless, worse than I'd expect from an intern. An intern may cargo cult design methodologies they don't understand yet but even that is better than what LLMs are producing.
Whatever the next generation of AI is, though, who can tell. What an AI that could actually construct symbolic maps of a system, manipulate those maps directly, and then manifest them in code could accomplish is difficult to say. However, nobody knows how to do that right now. It's not for lack of trying, either.