> I imagine: something that could input a description + a few resources, and write all the code, docs, etc for me
It could be that you’re falling into a complete-solution fallacy. LLMs can already be great at working on each of these problems; it helps to tackle a small piece at a time. It does take practice, and any sufficiently complicated problem will require multiple attempts.
But the more you practice with them, the more you get a feel for it, and these things start to eat away at the 80% you’re describing.
It is not self-driving. If anything, software-engineering automation is only accessible to those who nerd out about it, the same way using a PC to send email or to program used to be.
A lot of the attention is on being able to run increasingly capable models on machines with fewer resources. But there’s not much point fussing over Gemini 2.5 Pro if you don’t already have a pretty good feel for deep interaction with Sonnet or GPT-4o.
It is already impressive and can seriously accelerate software engineering.
But the complete-solution fallacy is what the believers are claiming will occur, isn't it? I'm 100% with you that LLMs will make subsets of problems easier, similar to how great progress in image recognition has been made with other ML techniques. That seems like a very reasonable take. However, that wouldn't be "revolutionary", I don't think. That's not "fire all your developers because most jobs will be replaced by AI in a few years" (a legitimate sentiment shared with me by an AI-hyped colleague).
The thing is, you're doing what a lot of critics do - lumping together different people saying different things about LLMs into one bucket - "believers" - and attributing the biggest "hype" predictions to all of them.
Yes, some people are saying the "complete solution" will occur - they might be right or they might be wrong. But this whole thread started with someone saying LLMs today are useful, so it's not hype. That's a different claim, one that is almost objective, or at least hard for you to disprove. It's people literally saying "I'm using this tool today in a way that is useful to me".
Of course, you also said:
> Keeping in mind that most of our jobs are ultimately largely pointless anyway, so that implies a limit on the true usefulness of any tool.
Yeah, if you think most of the economy and most economic activity people do is pointless, that colors a lot about how you look at things. I don't think that's accurate and have no idea how you can even coherently hold that position.