
What evidence do we have that AI is actually replacing programmers already? The article treats messaging on this as a foregone conclusion, but I strongly suspect it's all hype-cycle BS to cover layoffs, or a misreading of "Meta pivots to AI" headlines.


We'll probably never have evidence either way ... Did Google and Stack Overflow "replace" programmers?

Yes, in the sense that I suspect that with the strict counterfactual -- taking them AWAY -- you would have to hire 21 people instead of 20, or 25 instead of 20, to do the same job.

So strictly speaking, you could fire a bunch of people with the new tools.

---

But in the same period, the industry expanded rapidly, and programmer salaries INCREASED

So we didn't really notice or lament the change

I expect that pretty much the same thing will happen. (There will also be some thresholds crossed, producing qualitative changes. e.g. Programmer CEOs became much more common in the 2010's than in the 1990's.)

---

I think you can argue that some portion of the industry "got dumber" with Google/Stack Overflow too. Higher level languages and tech enabled that.

Sometimes we never learn the underlying concepts, and spin our wheels on the surface

Bad JavaScript ate our CPUs, and made the fans spin. Previous generations would never write code like that, because they didn't have the tools to, and the hardware wouldn't tolerate it. (They also wrote a lot of memory safety bugs we're still cleaning up, e.g. in the Expat XML parser)

If I reflect deeply, I don't know a bunch of things that earlier generations did, though hopefully I know some new things :-P


This is an insightful comment. It smells of Jevons' paradox, right? More productivity leads to increased demand.

I just don't remember anyone saying that SO would replace programmers, because you could just copy-paste code from a website and run it. Yet here we are: GPTs will replace programmers, because you can just copy-paste code from a website and run it.


I completely agree with Jevons' paradox being the right way to think about this. Much like ERP and HR software made it so you needed fewer back-office staff to accomplish the same tasks, but it allowed these huge, multi-national companies to exist. I don't think these tens-of-thousands or hundreds-of-thousands employee companies would be possible without ERP and HR software.

I think another way of thinking about this is with low-code/no-code tools. Another comment in this post said they never really took off, and they didn't in the way some people expected. But a lot of large companies use them quite a bit for automating internal processes such as document/data aggregation and manipulation. JP Morgan has multiple job listings right now for RPA developers. Before, this would have needed to be done by actual developers.

I suspect (and hope) AI will follow a similar trajectory. I hope the future is exciting and we build new, more complex systems that weren't possible before due to a lack of resources.


People definitely said this about SO!


Those people never tried googling anything past entry level. It's at best a way to get some example documentation for core languages.


Google Coding is definitely a real problem. And I can't believe how wrong some of the answers on Stack Overflow are.

But the real problems are managerial. Stonks must go up, and if that means chasing a ridiculous fantasy of replacing your workforce with LLMs then let's do that!!!!111!!

It's all fun and games until you realise you can't run a consumer economy without consumers.

Maybe the CEOs have decided they don't need workers or consumers any more. They're too busy marching into a bold future of AI and robot factories.

Good luck with that.

If there's anyone around a century from now trying to make sense of what's happening today, it's going to look like a collective psychotic episode to them.


> It's all fun and games until you realise you can't run a consumer economy without consumers.

If the issue is that the AI can't code, then yes you shouldn't replace the programmers: not because they're good consumers, just because you still need programmers.

But if the AI can replace programmers, then it's strange to argue that programmers should still get employed just so they can get money to consume, even though they're obsolete. You seem to be arguing that jobs should never be eliminated due to technical advances, because that's removing a consumer from the market?


The natural conclusion I see is dropping the delusion that every human must work to live. If automation progresses to a point that machines and AI can do 99% of useful work, there's an argument to be made for letting humanity finally stop toiling, and letting the perhaps 10% of people who really want to do the work do the work.

The idea that "everybody must work" keeps harmful industries alive in the name of jobs. It keeps bullshit jobs alive in the name of jobs. It is a drain on progress, efficiency, and the economy as a whole. There are a ton of jobs that we'd be better off just paying everybody in them the same amount of money to simply not do them.


The problem is that such a conclusion is not stable

We could decide this one minute, and the next minute it will be UN-decided

There is no "global world order", no global authority -- it is a shifting balance of power

---

A more likely situation is that the things AI can't do will increase in value.

Put another way, the COMPLEMENTS to AI will increase in value.

One big example is things that exist in the physical world -- construction, repair, in-person service like restaurants and hotels, live events like sports and music (see all the ticket prices going up), mining and drilling, electric power, building data centers, manufacturing, etc.

Take self-driving cars vs. LLMs.

The thing people were surprised by is that the self-driving hype came first, and died first -- likely because it requires near perfect reliability in the physical world. AI isn't good at that

LLMs came later, but had more commercial appeal, because they don't have to deal with the physical world, or be reliable

So there are still going to be many domains of WORK that AI can't touch. But it just may not be the things that you or I are good at :)

---

The world changes -- there is never going to be some final decision of "humans don't have to work"

Work will still need to be done -- just different kinds of work. I would say that a lot of knowledge work is in the form of "bullshit jobs" [1]

In fact a reliable test of a "bullshit job" might be how much of it can be done by an LLM

So it might be time for the money and reward to shift back to people who accomplish things in the physical world!

Or maybe even the social world. I imagine that in-person sales will become more valuable too. The more people converse with LLMs, I think the more they will cherish the experience of conversing with a real person! Even if it's a sales call lol

[1] https://en.wikipedia.org/wiki/Bullshit_Jobs


To say that self-driving cars (a decade later, with several real products rolling out) have the same, or lesser, commercial appeal as LLMs now (a year or two in, with mostly VC hype) is a bit incorrect.

Early on in AV cycles there was enormous hype for AVs, akin to LLMs. We thought truck drivers were done for. We thought accidents were a thing of the past. It kicked off a similar panic among tangential fields. Small AV startups were everywhere, and folks were selling their company to go start a new one then sell that company for enormous wealth gains. Yet 5 years later none of the "level 5" promises they made were coming true.

In hindsight, as you say, it was obvious. But it sure tarnished the CEO prediction record a bit, don't you think? It's just hard to believe that this time is different.


So how do you choose who has to work vs who gets to just hang out? Who's gonna fix the machines when they break?

It honestly doesn't matter, because we're hundreds of years from "a point that machines and AI can do 99% of useful work".


I would much rather work than not work. Many other people are the same. If I don't have a job, I will work on my free time. I enjoy it. I don't have to work for a living, but I have to work to be alive.

There are many people like me, and we will be the ones to work. It won't be choosing who has to work, it will be who chooses that they want to work.


It's our only conclusion unless/until countries start implementing UBI or similar forms of post scarcity services. And it's not you or me that's fighting against that future.


I don't think this is anyone's plan. That's the biggest argument for why it won't be the plan: who'll pay for all of it? Unless we can Factorio the world, it seems more likely we just won't do that.


It'll happen gradually over time, with more pressure on programmers to "get more done".

I think it's useful to look at what has already happened at another, much smaller profession -- translators -- as a precursor to what will happen with programmers.

1. translation software does a mediocre job, barely useful as a tool; all jobs are safe

2. translation software does a decent job, now expected to be used as time-saving aid, expectations for translators increase, fewer translators needed/employed

3. translation software does a good job, translators now hired to proofread/check the software output rather than translate themselves, allowing them to work 3x to 4x as fast as before, requiring proportionally fewer translators

4. translation software, now driven by LLMs, does an excellent job, only cursory checks required; very few translators required mostly in specialized cases


Yes, but in all 4 of these steps you are literally describing the job transformer LLMs were designed to do. We are at 1 (mediocre job) for LLMs in coding right now. Maybe 2 in a few limited cases (eg boilerplate). There's no reason to assume LLMs will ever perform at 3 for coding. For the same reason natural language programming languages like COBOL are no longer used -- natural language is not precise.


It seems the consensus is that we will reach level 3 pretty quickly given the pace of development in the past 2 years. Not sure about 4 but I’d say in 10 years we’ll be there.


There is definitely no consensus that we will reach level 3 in coding tasks "pretty quickly". (Assuming you are talking about your own definitions of "levels" wrt translation applied to coding -- only proofread/check required)


I actually know a professional translator and while a year ago he was full of worry, he now is much more relaxed about it.

It turns out that like art, many people just want a human doing the translation. There is a strong romantic element to it, and it seems humans just have a strong natural inclination to only want other humans facilitating communication.


I’ve done freelance translating (not my day job) for 20 years. What you describe is true for certain types of specialized translations, particularly anything that is literary in nature. But that is a very small segment. The vast majority of translation work is commercial in nature and for that companies don’t care whether a human or machine did it.


How do they know that a human is doing the translation? What's to stop someone from just c&ping the text into an LLM, giving it a quick proofread, then sending it back to the client and saying "I translated this"?

Sounds like easy money, maybe I should get into the translation business.


The fact that the client is actually going to use the text, and they will not find it funny when they're being laughed at. Or worse, when they're being sued because of some situation caused by a confusion. I read Asian novels and you can quickly (within a chapter) discern whether the translators have done a good job (and there are so many translation notes if the author relies on cultural elements).


1) almost all clients hire a translation agency who then farms the work out to freelance translators; payment is on a per-source-word basis.

2) the agency encourages translation tools, so long as the final content is okay (proofread by the translator), because they can then pay less (based on the assumption that it should take you less time). I've seen rates drop in half because of it.

3) the client doesn’t know who did the translation and doesn’t care - with the exception of literary pieces where the translator might be credited on the book. (Those cases typically won’t go through an agency)


I mean, they don't, but I can assure you there are far more profitable ways to be deceptive than being a faux translator haha


The hard part of development isn’t converting an idea in human speak to idea in machine speak. It’s formulating that idea in the first place. This spans all the way from high level “tinder for dogs” concepts to low level technical concepts.

Once AI is doing that, most jobs are at risk. It’ll create robots to do manual labor better than humans as well.


Right. But it only takes 1 person, or maybe a handful, to formulate an idea that might take 100 people to implement. You will still need that one person but not the 100.


> It'll happen gradually over time

How much time? I totally agree with you, but being early is the same as being wrong, as someone clever once said. There's a huge difference between it happening in less than 5 years, like Zuckerberg and Sam Altman are saying, and it taking 20 more years. If the second scenario is what happens, many people on this thread and I can probably retire rather comfortably, and humanity possibly has enough time to come up with a working system to handle this mass change. If the first scenario happens, it's gonna be very very painful for many people.


20 years as in real replacement, maybe. But the change will be more or less gradual if you look at the whole market, even if it's composed of many smaller jumps. Top management at companies are now itching for that promised paradise of minimal IT with just a few experts. Then comes the inevitable sobering up, but the direction is clear.

I wouldn't be considering programming if I were choosing university studies now. For someone that smart, many other fields look more stable, although the demand curve and how comfy the later years of a career look are very different (maybe lawyers or doctors; for blue-collar work, some trades, but look at the long-term health effects, e.g. back or knee issues).



In the ~8 years since I worked there, Zuckerberg announced that we'd all be spending our 8-hour workdays in the Metaverse, and when that didn't work out, he pivoted to cryptocurrency.

He's just trend-chasing, like all the other executives who are afraid of being left behind as their flagship product bleeds users...


We gotta put AI Crypto in the Blockchain Metaverse!


Have they bled users?


Apparently not, according to their quarterly earnings reports: https://www.statista.com/statistics/1092227/facebook-product...


The core Facebook product? Yeah.

Across all products, maybe not - Instagram appeals to a younger demographic, especially since they turned it into a TikTok clone. And WhatsApp is pretty ubiquitous outside of the US (even if it is more used as a free SMS replacement than an actual social network).


Growth in monthly actives across Facebook seems to have slowed but is still increasing--not bleeding users--from data I found through Q4 2023.

With 3 billion monthly actives and China being excluded, it's hard to expect a ton of growth, since that's already a major fraction of the remaining world population. There are bots etc., but they are one of the stricter networks, requiring photos of your ID and stuff a lot more often than others.


Keep in mind that Meta pulls something of a fast one here, because a lot of instagram accounts end up creating an attached Facebook account (so that they can share Reels across both platforms). I don't have current insider information, but as of 2019 they were heavily using instagram sign-ups to shore up the Facebook numbers.


Zuckerberg, as always, is well known for making excellent business decisions that lead to greater sector buy in. The Metaverse is going great.


On the other hand, Instagram has been called one of the greatest acquisitions of all time, second only to Apple's acquisition of NeXT.


That was 13 years ago. How are things going more recently?


$53 million in 2012 and $62.36 billion in profit last year…


Really, that's what you're going with, arguing against the business acumen of the world's second richest person, and the only one at that scale with individual majority control over their company?

As for the Metaverse, it was always intended as a very long-term play which is very early to be judged, but as an owner of a Quest headset, it's already going great for me.


Yes? I don't understand what is so outrageous about that. Most business decisions are not made by the CEO, and the ones we know are directly a result of him have been poor.


Howard Hughes was one of the biggest business successes of the 20th century, on par with, if not exceeding, the business acumen of the zucc. Fantastically rich, hugely successful, driven, talented, all that crap.

Anyway he also acquired RKO Pictures and led it to its demise 9 years later. In aviation he had many successes, he also had the spruce goose. He bought in to TWA then got forced out of its management.

He died as a recluse, suffering from OCD and drug abuse, immortalized in a Simpsons episode with Mr. Burns portraying him.

People can have business acumen, and sometimes it doesn't work out. Past successes don't guarantee future ones. Maybe the metaverse will eventually pay off and we'll all eat crow, or maybe (and this is the one I'm a believer of) it'll be a huge failure, an insane waste of money, and one of the spruce geese of his legacy.


Meta's success for the past 10 years has had more to do with Sheryl Sandberg and building a culture that chases revenue metrics than with whatever side project Zuckerberg is doing. He also misunderstands the product they do have. He said he didn't see TikTok as a competitor because they "aren't social," but Meta's products have been attention products, not social products, for a long time now.


Are you really claiming that it's inherently wrong to argue against somebody who is rich?


No, not at all, it's absolutely cool to argue against specific decisions he made, but I just wanted to reject this attempt at sarcasm about his overall decision-making:

>Zuckerberg, as always, is well known for making excellent business decisions that lead to greater sector buy in.


If we're being honest here: a lot of the current technocrats made one or two successful products or acquisitions, and more or less relied on those alone to power everything else. And they weren't necessarily the best; they were simply first. Everything else is incredibly hit or miss, so I wouldn't call them visionaries.

Apple was almost the one exception, but the post Jobs era definitely had that cultural branding stagnate at best.


Yes, this is exactly why I think my comment was fair. Meta itself is where it is because it was first and got initial traction. Now it has an effective level of vendor lock-in. It's the equivalent of saying someone who won at slots on their first round is an expert gambler because they got a multi-million dollar payout, even if they've then subsequently lost every other turn on the machine.


The Metaverse is actually still a thing? With, like, people in it and stuff? Who knew?


Well, we aren't yet "in it", but there's a lot of fun to be had with VR (and especially AR) activities. For example, I love how Eleven Table Tennis allows me to play ping pong with another person online, such that the table and their avatar appear to be in my living room. I don't know if this is "the future", but I'm pretty confident that these sorts of interactions will get more and more common, and I think that Meta is well positioned to take advantage of this.

My big vision for this space is the integration of GenAI for creating 3d objects and full spaces in realtime, allowing the equivalent of The Magic School Bus, where a teacher could guide students on a virtual experience that is fully responsive and adjustable on the fly based on student questions. Similarly, playing D&D in such a virtual space could be amazing.


Have you heard the term survivorship bias? Billionaires got so rich by being outliers, for better or worse. Even if they were guaranteed to be the best, going all in on one position in their portfolio isn't even their overall strategy. Zuckerberg can afford to blow a few billion on a flop because it is only about 2% of his net worth. Notably, even he, primed and groomed for overconfidence by past successes and yes-men, doesn't trust his own business acumen all that much!


I'm sorry for your sunk cost. I bought an early-ish Oculus Rift myself.


Obviously the people developing AI and spending all of their money on it (https://www.reuters.com/technology/meta-invest-up-65-bln-cap...) are going to say this. It's not a useful signal unless people with no direct stake in AI are making this change (and not just "planning" it). The only such person I've seen is the Gumroad CEO (https://news.ycombinator.com/item?id=42962345), and that was a pretty questionable claim from a tiny company with no full-time employees.


Planning to and succeeding at are very different things


I'd be willing to bet that "planning to" means the plan is being executed.

https://www.msn.com/en-us/money/other/meta-starts-eliminatin...


Part of my work is rapid prototyping of new products and technology to test out new ideas. I have a small team of really great generalists. 2 people have left over the last year and I didn't replace them because the existing team + ChatGPT can easily take up the slack. So that's 2 people who didn't get hired who would have been without ChatGPT.


There is little evidence that AI is replacing engineers, but there is a whole lot of evidence that shareholders and execs really love the idea and are trying every angle to achieve it.


The funny thing is that "replacing engineers" is framed as cutting costs

But that doesn't really lead to any market advantage, at least for tech companies.

AI will also enable your competitors to cut costs. Who thinks they are going to have a monopoly on AI, which would be required for a durable advantage?

---

What you want to do is get more of the rare, best programmers -- that's what shareholders and execs should be wondering about

Instead, those programmers will be starting their own companies and competing with you


> AI will also enable your competitors to cut costs.

which is why it puts pressure on your own company to cut costs

it's the same reason why nearly all US companies moved their manufacturing offshore; once some companies did it, everyone had to follow suit or be left behind due to higher costs than their competitors


If this works at all, they'll be telling AIs to start multiple companies and keeping the ones that work best.

But if that works, it won't take long for "starting companies" and "being a CEO" to look like comically dated anachronisms. Instead of visual and content slop we'll have a corporate stonk slop.

If ASI becomes a thing, it will be able to understand and manipulate the entirety of human culture - including economics and business - to create ends we can't imagine.


Fortunately we are nowhere near ASI.

I don't think we are even close to AGI.

That does bring up a fascinating "benchmark" potential -- start a company on AI advice, with sustained profit as the score. I would love to see a bunch of people trying to start AI generated company ideas. At this point, the resulting companies would be so sloppy they will all score negative. And it would still completely depend on the person interpreting the AI.


I would bet money this doesn't work

The future of programming will be increasingly small numbers of highly skilled humans, augmented by AI

(exactly how today we are literally augmented by Google and Stack Overflow -- who can claim they are not?)

The idea of autonomous AIs creating and executing a complete money-making business is a marketing idea for AI companies

---

Because if "you" can do it, why can't everyone else do it? I don't see a competitive advantage there

Humans and AI are good at different things. Human+AI is going to outcompete AI-only FOR A LONG time

I will bet that will be past our lifetimes, for sure


> Instead, those programmers will be starting their own companies and competing with you

If so, then why am I not seeing a lot of new companies starting while we're in this huge down-turn in the development world?

Or, is everyone like me and trying to start a business with only their savings, so not enough to hire people?


What's the far future end-state that these shareholders and execs envision? Companies with no staff? Just self-maintaining robots in the factory and AI doing the office jobs and paperwork? And a single CEO sitting in a chair prompting them all? Is that what shareholders see as the future of business? Who has money to buy the company's products? Other CEOs?


Just a paperclip maximizer, with all humans reduced to shareholders in the paperclip maximizer, and also possibly future paperclips.


> all humans reduced to shareholders

That seems pretty optimistic. The shareholder / capital ownership class isn't exactly known for their desire to spread that ownership across the public broadly. Quite the opposite: Fewer and fewer are owning more and more. The more likely case is we end up like Elysium, with a tiny <0.1% ownership class who own everything and participate in normal life/commerce, selling to each other, walled off from the remaining 99.9xxx% barely subsisting on nothing.


> The shareholder / capital ownership class isn't exactly known for their desire to spread that ownership across the public broadly.

This seems like a cynical take, given that there are two stock markets (just in the US), it's easy to set up a brokerage account, and you don't even need to pay trading fees any more. It's never been easier to become a shareholder. Not to mention that anyone with a 401(k) almost surely owns stocks.

In fact, this is a demonstrably false claim. Over half of Americans have owned stock in every year since 1998, frequently close to 60%. [1]

[1] https://news.gallup.com/poll/266807/percentage-americans-own...


> execs really love the idea and are trying every angle to achieve it.

reminds me of the offshoring hype in the early 2000's. Where it worked, it worked well but it wasn't the final solution for all of software development that many CEOs wanted it to be.


Yep. It has the same rhyme as the worst case of 'wishes made by fools', where they don't realize that they themselves don't truly know what to ask for, so getting exactly what they asked for ruins them.


If the latter is the case, then it's only a matter of time. Enshitification, etc.


I can tell you from personal experience that investors are feeling pressure to magically reduce head count with AI to keep up with the joneses. It's pretty horrifying how little understanding or information some of the folks making these decisions have. (I work in tech diligence on software M&A and talk to investment committees as part of the job)


For a lot of tasks like frontend development, I've found that a tool like Cursor can get you pretty far without much prior knowledge. IMO (and in my experience), many tasks that previously required hiring a programmer or designer with knowledge of the latest frameworks can now be done by one motivated "prompt engineer" and some patience.


The deeper it gets you into code without prior knowledge the deeper it gets you into debug hell.

I assume the "motivated prompt engineer" would have to already be an experienced programmer at this point. Do you think someone who has only had an intro to programming / MBA / etc could do this right now with tools like cursor?


I love cursor, but yeah no way in hell. This is where it chokes the most and I've been leaning on it for non trivial css for a year or more. If I didn't have experience with frontend it would be a shit show. If you replaced a fe/designer with a "prompt engineer" at this stage it would be incredibly irresponsible.

Responsiveness, cohesive design, browser security, accessibility and cross browser compatibility are not easy problems for LLMs right now.


Feels like C-suite thinks if they keep saying it, it will happen. Maybe! I think more likely programmers are experiencing a power spike.

I think it's a great time to be small, if you can reap the benefits of these tools to deliver EVEN FASTER relative to large enterprises than you already do. Aider and a couple of Mac minis and you can have a good time!


I can say my company stopped contracting for test system design, and we use a mix of models now to achieve the same results. Some of these have been running without issue for over a year now.


As in writing test cases? I've seen devs write (heavily mocked) unit tests using only AI, but these are worse than no tests for a variety of reasons. Our company also used to contract for these tests... but only because they wanted to make the test coverage metric go up. They didn't add any value, but the contractor was offshore and cheap.

If you’re able to have AI generate integration level tests (ie call an API then ensure database or external system is updated correctly - correctly is doing a lot of heavy lifting here) that would be amazing! You’re sitting on a goldmine, and I’d happily pay for these kind of tests.
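To make the shape of that concrete, here's a minimal pytest-style sketch (Python) of "call an API, then ensure the database is updated correctly." The endpoint, table, and database path are hypothetical placeholders, not anything from a real system.

    # Hypothetical integration test: exercise the real API, then check the
    # real backing store instead of a mock. All names here are made up.
    import sqlite3
    import requests

    BASE_URL = "http://localhost:8000"   # service under test (assumed)
    DB_PATH = "app.db"                   # database the service writes to (assumed)

    def test_create_order_persists_row():
        # Call the API the way a client would.
        resp = requests.post(f"{BASE_URL}/orders", json={"sku": "ABC-123", "qty": 2})
        assert resp.status_code == 201
        order_id = resp.json()["id"]

        # "Updated correctly" means the row exists with the values we sent.
        conn = sqlite3.connect(DB_PATH)
        try:
            row = conn.execute(
                "SELECT sku, qty FROM orders WHERE id = ?", (order_id,)
            ).fetchone()
        finally:
            conn.close()

        assert row == ("ABC-123", 2)

The heavy lifting is still in deciding what "correctly" means for your system; the AI only helps once that contract is pinned down.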


Amazingly, there is industry outside tech that uses software. We are an old school tangible goods manufacturing company. We use stacks of old grumbling equipment to do product verification tests, and LLMs to write the software that synchronizes them and interprets what they spit back out.
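
For what it's worth, a rough sketch of that kind of glue code (Python with pyserial); the ports, query commands, and pass/fail limits below are made-up placeholders for whatever the real instruments expect:

    # Hypothetical sketch: poll two bench instruments back to back, line up
    # the readings, and decide pass/fail. Ports and commands are assumptions.
    import time
    import serial  # pyserial

    def read_measurement(port, command):
        """Send one query to an instrument and parse the numeric reply."""
        with serial.Serial(port, baudrate=9600, timeout=2) as conn:
            conn.write(command + b"\n")
            return float(conn.readline().decode().strip())

    def run_verification():
        t = time.time()
        voltage = read_measurement("/dev/ttyUSB0", b"MEAS:VOLT?")  # hypothetical SCPI-style query
        current = read_measurement("/dev/ttyUSB1", b"MEAS:CURR?")
        print(f"{t:.1f}  V={voltage:.3f}  I={current:.3f}")
        # Interpret what the equipment spits back: compare against product limits.
        return 11.5 <= voltage <= 12.5 and current <= 0.8

    if __name__ == "__main__":
        print("PASS" if run_verification() else "FAIL")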


I'm feeling it at my non-tech company. They want more people to use Copilot and stuff and are giving out more bad ratings and PIPs to push devs out.


Even if it is a cover, many smaller companies follow the expressed reasoning of the larger ones.


Another issue is that the article assumes companies will let go of all programmers. They will make sure to keep some in case the fire spreads. Simple as that.


There were massive layoffs in 2024, and they're continuing this year. No one will scream that they are firing people because of LLMs.



