Not when all of the marketing of LLMs touts their ability to do exactly that, and that is what is being presented to investors.
If it is as you say, then eventually the house of cards will crumble. Then we can finally go back to work and stop being inundated with demands to use AI for everything.
This study and the studies it relies on are based on O*NET descriptions of jobs in the Bureau of Labor Statistics Taxonomy. Here is the entry for Software Developers:
The "exposure" of a job is based on the degree to which its "tasks" can be substituted/augmented by LLM-based tools. Some example tasks are:
- Develop or direct software system testing or validation procedures, programming, or documentation.
- Determine system performance standards.
They are predicting roughly one quarter of U.S. software developers will lose their jobs over the next two to five years.
If you believe that one could quantify how much time current and near-future LLM-based technologies would save on each of those "tasks" across all software developers as a whole, you should take these results seriously. The authors do this with data about actual LLM requests, using LLMs to compare the content of those requests to the O*NET task content.
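To make the methodology concrete, here is a minimal sketch of what that matching step could look like. Everything here is illustrative, not the study's actual method: the Jaccard token-overlap similarity is a stand-in for whatever embedding- or LLM-based comparison they use, the threshold is arbitrary, and the third task statement is just an example.

```python
# Illustrative sketch: estimate a job's "exposure" by matching logged
# LLM requests against O*NET task statements by text similarity.
# Similarity measure, threshold, and task/request texts are stand-ins.

def tokens(text):
    return set(w.strip(".,").lower() for w in text.split())

def similarity(a, b):
    # Jaccard overlap as a crude proxy for the embedding/LLM-based
    # comparison a real study would presumably use.
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

ONET_TASKS = [
    "Develop or direct software system testing or validation "
    "procedures, programming, or documentation.",
    "Determine system performance standards.",
    "Supervise the work of programmers, technologists and technicians.",
]

llm_requests = [
    "write documentation and testing procedures for this software system",
    "what are reasonable latency standards for this system",
    "plan my vacation to portugal",
]

def exposure(tasks, requests, threshold=0.1):
    # A task counts as "exposed" if any logged request resembles it;
    # exposure is the share of the job's tasks that are exposed.
    hits = sum(
        1 for task in tasks
        if any(similarity(task, r) >= threshold for r in requests)
    )
    return hits / len(tasks)

print(f"{exposure(ONET_TASKS, llm_requests):.2f}")  # → 0.67
```

In this toy run, the first two tasks attract matching requests while the supervisory task does not, giving an exposure of 2/3. The methodological objection above is precisely about whether this kind of request-to-task matching, however sophisticated the similarity measure, can actually be aggregated into a credible labor-market forecast.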
If you don't find that methodologically plausible - as I don't - you shouldn't take the final outputs of their study seriously.
There could of course be other convincing arguments for pending degradation of the software developer labor market that don't rely on this type of task analysis.
I think these arguments tend to reach impasse because one gravitates to one of two views:
1) My experiences with LLMs are so impressive that I consider their output to generally be better than what the typical developer would produce. People who can't see this have not gotten enough experience with the models I find so impressive, or are in denial about the devaluation of their skills.
2) My experiences with LLMs have been mundane. People who see them as transformative lack the expertise required to distinguish between mediocre and excellent code, leading them to deny there is a difference.
I was at 2) until the end of last year; then LLM/agent harnesses had a capability jump that didn't quite move me to 1), but it was a big enough jump in that direction that I don't see why I shouldn't believe we get there soonish.
So now I tend to think a lot of people are in heavy denial in thinking that LLMs are going to stop getting better before they personally end up under the steamroller, but I'm not sure what this faith is based on.
I also think people tend to treat the "will LLMs replace <job>" question in too binary a manner. LLMs don't have to replace every last person who does a specific job to be wildly disruptive. If they replace 90% of the people who do a particular job by making the remaining 10% much more productive, that's still a cataclysmic amount of job displacement in economic terms.
Even if they replace just 10-30%, that's still a huge amount of displacement; for reference, the peak unemployment rate during the Great Depression was about 25%.
Not sure that's what I was getting at. People in camp 2 don't think an LLM can take over the job of a real software engineer.
It's people in camp 1 that I wonder about. They're convinced that LLMs can accomplish anything and understand a codebase better than anyone (and that may be the case!). However, they're simultaneously convinced that they'll still be needed to do the prompting because ???reasons???.
One explanation is that some people think we might be reaching the limits of what an LLM can reasonably do. There are a lot of functions in any job that don't translate easily to an LLM and are much more about interacting with people, or about critical thinking in a way LLMs can't do. I'm not sure that's everyone's rationale, but it's my personal view of the situation: the jobs will change, but we likely won't be losing them to AI outright.
To the original poster: I don't know you, but based on your posting history I am worried about your well being. Please talk to someone in person about how you're feeling, even if you feel okay (or better than okay).
I’ve read primary-text excerpts from Hegel and some secondary sources too, and already knew that he didn’t write in that style, but the general idea that many forces in life develop dialectically (the antithesis sometimes being expressed as alienation) is very similar in concept.
That a myth has developed around the terminology and methodology is persuasive, but there’s also nothing wrong with a programming library calling itself Hegel.
Have you seen the meme with three spidermen, labeled "Designer", "Product Manager", and "Engineer" wherein each is pointing to the other two and saying "I don't need you anymore!"?
Most of the time, the person saying that is wrong.