But the complete solution fallacy is what the believers are claiming will occur, isn't it? I'm 100% with you that LLMs will make subsets of problems easier, similar to how great progress in image recognition has been made with other ML techniques. That seems like a very reasonable take. However, that wouldn't be "revolutionary", I don't think. That's not "fire all your developers because most jobs will be replaced by AI in a few years" (a legitimate sentiment shared with me by an AI-hyped colleague).
The thing is, you're doing what a lot of critics do - lumping different people saying different things about LLMs into one bucket - "believers" - and attributing the biggest "hype" predictions to all of them.
Yes, some people are saying the "complete solution" will occur - they might be right or they might be wrong. But this whole thread started with someone saying that LLMs today are useful, so it's not hype. That's a whole different claim, one that is almost objective, or at least hard for you to disprove. It's people literally saying "I'm using this tool today in a way that is useful to me".
Of course, you also said:
> Keeping in mind that most of our jobs are ultimately largely pointless anyway, so that implies a limit on the true usefulness of any tool.
Yeah, if you think most of the economy and most of the economic activity people do is pointless, that colors a lot of how you look at things. I don't think that's accurate, and I have no idea how you can even coherently hold that position.