- A good lawyer + AI will likely win in court against a non-lawyer with AI, who would likely win in court against just an AI
- A good software engineer + AI will ship features faster/safer than a non-engineer with AI, who will beat just AI
- A good doctor + AI will save more lives than a non-doctor + AI, who will perform better than just AI
As long as a human provides a marginal boost over AI alone (whether because the AI needs supervision, because of regulation, or simply because AI works better with human agency and intuition), jobs won't be lost, but the paradox of "productivity increases, yet we end up working harder" will continue.
p.s. there is the classic example I'm sure we are all aware of: autopilot has been capable of taking off and landing since the '80s, but I personally prefer to keep the pilots there, just in case.
I’m actually not clear on what percentage of a doctor’s work-time is spent doing things other than talking to patients (like arguing with insurance or keeping records).
The solution to doctors arguing with insurance isn't "have an AI do it"; it's universal health care, so doctors don't need to worry about insurance in the slightest.
I worked on software for electronic medical record note taking and I'm not sure how an LLM can help a doctor speed that up tbh. All of the stats need to be typed into the computer regardless. The LLM can't really speed that up?
I’d also prefer single payer, but nothing except ourselves has been stopping us from doing that, and we haven’t changed much. Maybe it’ll happen. But I don’t see any recent tech changes making it so.
Unless somebody manages to make hyper-convincing LLMs and use them for good, I guess. (Note: I think this is a bad path).
I have no expertise and am prepared to be quite wrong, but I wonder if LLMs would be good at listening to a session and/or a doctor dictating, putting the right stats in the right place, and turning the dictated case history into a note.
I think LLMs are alright at speech recognition and that sort of unstructured-to-structured text manipulation. At least, in my corner of the customer success world I've seen some uses along those lines.
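To sketch what I mean, here's a minimal example, assuming the OpenAI Python client; the field names and JSON schema are entirely made up for illustration, not a real EMR format:

    # Minimal sketch: turn a dictated visit note into structured fields.
    # Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
    # the schema below is invented for illustration, not a real EMR format.
    import json
    from openai import OpenAI

    client = OpenAI()

    transcript = (
        "Patient is a 54-year-old male here to follow up on hypertension. "
        "Blood pressure 132 over 84, pulse 72. Continue lisinopril 10 mg daily."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},  # ask the model for JSON back
        messages=[
            {"role": "system", "content": (
                "Extract structured fields from this medical dictation. "
                "Reply as JSON with keys: age, sex, bp_systolic, bp_diastolic, "
                "pulse, medications, summary. Use null for anything not stated."
            )},
            {"role": "user", "content": transcript},
        ],
    )

    fields = json.loads(resp.choices[0].message.content)
    print(fields.get("bp_systolic"), fields.get("pulse"))  # e.g. 132 72

The stats still get typed into the computer either way; the part an LLM plausibly automates is the transcribe-and-file step, and a doctor would still need to review the result before it lands in the record.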
My doctor was in a pilot program for this exact thing. It recorded our conversation and created the after visit summary.
Its summary was that I wasn't taking my antibiotics (I was; neither I nor my doctor said anything to the contrary). Luckily my doctor was very skeptical of the whole thing and carefully reviewed the notes, but this could be an absolute disaster if it hallucinates something more nefarious and the doctor isn't diligent about reviewing.
From my conversations with doctors, it seems most of their job is handling records, contacting insurance, prior auth, talking to pharmacists, etc. This is despite having billing specialists and admins out the wazoo.
Also, their care is pretty much completely decided by insurance: what surgeries they can perform, what medicine they can give, how much, what materials they can use for surgery, and on and on. Your doctor is practicing shockingly little medicine; your real doctor is thousands of pages of guidelines created by insurers and peer-to-peer doctors whom you will never meet.
My experience with the current stuff on the market is that you get out what you put in.
If you put in a very detailed, high-quality, precisely defined question and also provide a framework for how you would like it to reason and execute the task, then you can get a pretty good response.
But the less effort you put in, the less accurate the outcome.
If a bad doctor is someone who puts in less effort and is less precise and less detail-oriented, it's difficult to see how AI improves the situation at all.
Especially with current iterations of AI, which don't really prompt users for more details or recognize when users need to be more precise.
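To make "you get out what you put in" concrete, here's the kind of contrast I mean; both prompts are invented for illustration and the clinical details are made up:

    # Illustration only: the same question at two levels of effort.
    vague = "Why does my patient have headaches?"

    detailed = """\
    55-year-old female, 3 weeks of morning headaches, worse when lying
    down, no fever, BP 150/95, currently on amlodipine.
    1. List the top five differentials, ranked by likelihood.
    2. For each, name the finding above that supports or argues against it.
    3. Name the single test that best discriminates between the top two.
    If any key information is missing, ask me for it before answering.
    """

The first prompt invites a generic answer; the second constrains the reasoning and, crucially, tells the model to ask for missing details, which current tools rarely do on their own.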
IMO the problem is that, at least right now, the AI can't examine the patient itself; it has to be fed information by the doctor. This step means bad doctors are likely to provide the AI with bad information and reduce its effectiveness (or cause the AI to reinforce the doctor's biases by only feeding it the information they see as relevant).
Not sure what will happen with software engineers, lawyers, or doctors, but I do know how computer assistance worked decades ago when it took over for retail clerks: the net effect was to de-skill the job and damage it as a career. By bringing everyone up to the same baseline, management lost interest in building skills above that baseline.
So until the 1970's, shopping clerk was a medium-skill, medium-prestige job. Each clerk had to know the prices of all the items in the store because of the danger of price-tag switching (1). Clerks who knew all the prices were faster at checking out than clerks who had to look up prices in their book, and reducing customer friction is hugely valuable for stores. So during this era store clerk was a reasonable career: you could have a middle-class lifestyle working retail, there were people who went from clerk to CEO, and even those who weren't ambitious could find a stable path to support their family.
Then the UPC code, laser scanner, and product/price database came along in the 1970's. The UPC code is printed in a more permanent way, so switching tags is not as big a threat (2). Changing prices is just a database update, rather than printing new tags for every item and having the clerks memorize the new price. And there is a natural-language description of every item that the register can display, so you don't have to keep the clerk around to tell the expensive dress from the cheap dress: it will show the brand and description. This vastly improved the performance of a new clerk, but also decreased the value of the experienced clerk. The result was a great hollowing-out of retail-sector employment, the so-called "McJob" of the 1990's.
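Written out, the mechanism that replaced the clerk's memorized price book is almost trivially small; a toy sketch (the schema is invented for illustration):

    # Toy sketch: the register resolves price and description by UPC,
    # so the clerk no longer needs to know either one.
    products = {
        "012345678905": {"desc": "Designer dress", "price_cents": 14999},
        "012345678912": {"desc": "Basic dress", "price_cents": 1999},
    }

    def scan(upc: str) -> str:
        item = products[upc]  # a price change is just an update to this record
        return f"{item['desc']}: ${item['price_cents'] / 100:.2f}"

    print(scan("012345678905"))  # -> Designer dress: $149.99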
But the result was things like Circuit City (in its death throes) firing all of their experienced retail employees (3) because management didn't think that experience was worth paying for. This is actually the same sort of process Marx noted about factory jobs in the 19th century (he called it the alienation of labor): capital investment replacing skilled labor, to the benefit of the owners of the investment. But since retail jobs largely code as female, no one really paid much attention to it; it never became a subject of national conversation.
1: This also created a limit on store size: you couldn't have something like a modern supercenter (e.g. Costco, Walmart, Target) because a single clerk couldn't know the prices for such a wide assortment of goods. In department stores of the pre-computer era, every section had its own checkout area; you would buy the pots in the housewares section and then go to the women's clothing area and buy that separately, and they would use store credit to make the transactions as frictionless as possible.
2: In the old days a person with a price-tag gun would come along and put the price directly onto each item whenever a price changed, so each orange would carry a "10p" sticker. Now a price change is only a database entry, and the UPC itself can be printed much more permanently.
3: https://abcnews.go.com/GMA/story?id=2994476 (all employees paid above a certain amount were laid off, which pretty much meant the ones who had stuck around for a while, actually knew the business well, and were good at their jobs).
Considering how little interest doctors have taken in some of my medical problems, I'll be happy to have AI help me investigate things myself. And for a lot of people in the US it may make the difference between not being able to afford a doctor and getting some advice.
You (and I) prefer to keep the pilots there, but still, there's a push to need only one pilot, not two, in that cockpit. I have little to no doubt we'll have to relearn some hard lessons after we've AI'd up pilots.
I wanted to say maybe the 2nd pilot could double as a flight attendant if they're not needed full-time in the cockpit. That still retains redundancy while saving the airline money.
The problem with that is that most skills need to be practiced. When you only need to use your skills unexpectedly, in an emergency, that may not end well. The same applies to other fields where AI can do something 95% of the time, with human intervention required in the 5% case. Is it realistic to expect humans to continue to fill that 5% gap if we let our skills wane by outsourcing the easiest 95% of a job and keeping only the hardest 5% for ourselves?
For those who can stomach it, read aviation accident reports or listen to the actual cockpit voice recordings: you very often encounter the cognitive load of a two-person team trying to get through a shitty moment.
Richard de Crespigny, who flew the Qantas A380 that blew an engine after departing Changi, explains very clearly, and in a gripping way, the amount of stuff happening while trying to save an aircraft.
Lots of accidents already happen at the seams of automation. I don't think we're collectively ready for a world with much more of it, especially in the name of more shareholder value or a 4-dollar discount.
Agree 100%. Watch a few videos on YouTube from Mentour Pilot. Cognitive load is a huge factor in so many accidents and close calls. There are also equally many accidents that could have been prevented with just a bit more automation and fault detection. Perhaps the most amazing thing is that after an accident, it can take years to get a real corrective action across the industry. It would be like level-10 CVEs taking 5 years to get patched!
With the level of regression I get from 'security patches' :-) I won't blame the conservative mindset there.
The Air France Rio-Paris crash is a good example of sudden, total mistrust of automation and sensors by the crew after a sensor failure appeared and then recovered. Very, very sad transcript and analysis... I'm arguing against myself here, since it was also a huge case of crew-management failure, and it might not have ended in a crash with only one person in the cockpit.
You kinda said it, but you didn't hit the nail on the head. Yes, we need the pilots. But (I will repeat my own example from my current mega-corp employer) I am about to develop a solution using an LLM (premium/enterprise) that will stop a category of employees from reaching 50, will keep it at 20, and, with organic wear & tear, will drop it to 10, which will be the 'forever number' (until the next 'jump' in tech).
So yes, we keep pilots, but we keep _fewer_ pilots.
It's unclear what your numbers refer to. If I had to guess, I'd say 50 means the number of employees in the category employed by your employer, but I'm not sure.
When the market pool of seniors runs dry, and as long as hiring a junior + AI beats hiring a random person + AI, it will balance itself out.
I do believe the “we have a tech talent shortage” line was and is a lie; the shortage is of tech talent willing to work for less. Everyone was told to just learn to code and make six figures right out of college. This drove over-supply.
There is still a shortage of very good software engineers, just not a shortage of people with a computer science degree.
In the US, “Junior” pilots typically work as flight instructors until they have built up enough time to no longer be junior. 1500 flight hours is the essential requirement to be an airline pilot, and every hour spent giving instruction counts as a flight hour. It’s not the only way, but it’s the most common way. Airlines don’t fund this; pilots have to work their way up to this level themselves.
The 1500-hour rule was instituted by Congress at the request of pilots' unions, not the FAA or any other regulator. Europe only requires 250 hours and has a similar aviation safety track record to the US in the 21st century.
Accepted that people need to be trained within a system. But as of now it's easy enough for software devs to get started without formal training, and I don't see that changing. Smart people will be able to jump directly to senior level with the help of AI.
My concern, though, is that over time a "good ANYTHING" + AI will converge to just AI: as you continue to outsource your thinking processes to AI, it will create dependence like any tool. This is a problem for any individual's long-term prospects as a source of expertise. How do you think one might combat this? The skills seem to be at odds: you are in the best position at the very START of using AI, and then your growth likely slows or stops completely as you migrate to thinking via AI API calls.
I am also concerned about a couple of important things: human skill erosion (a lot of new devs who use AI might not bother to learn the basics that can make a difference in production: performance, security, etc.) and human laziness (and thus gradually growing the habit of trusting/relying on AI's output entirely).
When it's been studied so far, AI alone does better than AI + a human doctor:
> Surprisingly, in many cases, A.I. systems working independently performed better than when combined with physician input. This pattern emerged consistently across different medical tasks, from chest X-ray and mammography interpretation to clinical decision-making.
The scenario you describe leads to a massive productivity boost for some engineers and no work left for the rest. Or in other words: the profit share of labour compared to capital becomes even smaller, meaning an even more skewed income distribution, where a few make millions and the rest of the currently employed software engineers, lawyers, etc. become bartenders or greeters at Walmart.
> A good software engineer + AI will ship features faster/safer than a non-engineer with AI, who will beat just AI
Safer is the crucial word here. If you remove it, I'd argue the ordering should be reversed.
I will also point out that you could replace AI with amphetamines and have close to the same meaning. (And like amphetamines, an AI can only act through humans, never solely on its own.)
I think you're missing the spectrum between no jobs being lost and all jobs being lost. Your first points are correct, but to me they point to some job losses as the good lawyers/doctors/SWEs get more efficient and better, and the lower tier aren't needed anymore and/or aren't worth the salary to employers.
Frankly "some jobs lost" is the worst possible outcome. This is the nightmare scenario for me
If all jobs are lost then our society becomes fundamentally broken and we need to figure out how to elevate the lives of everyone very quickly before it turns into riots and chaos. The thing is that it will be a very clear signal that something has to change, so change is more likely
If no jobs are lost we continue the status quo which is not perfect but is at least relatively sane and tolerable for now and hopefully we can keep working on fixing some of our underlying problems
If some jobs are lost but not all, then we see a further widening of the wealth gap but it is just another muddy signal of a problem that will not be dealt with. This is the "boiling the frog" outcome and I don't want to see what happens when we reach the end of that track.
Unfortunately that seems like the most likely outcome because boiling the frog is the path we've been on for a long time now.
A bad doctor with AI can commit malpractice longer by throwing the AI under the bus. It remains to be tested, but a plaintiff suing a professional who uses AI may have a harder time prevailing if the defendant mounts a Shaggy defense and points to the black box shielded behind layers of third parties and liability limitations.
However, one constant I've observed over my career: the quality and speed of the work I produce have not significantly contributed to career advancement. I know I'm appreciated for the fact that I don't cause more problems, and I usually make the total number of problems go down. I mention this because if quality/speed were truly valued, I believe I'd have seen more career-related growth (titles, etc.) from it at some point in the last 20 years of my career.
This isn't to say AI won't be helpful. It is, and I use it some. But the whole schtick around, "SWEs must adopt AI or they'll be left behind," reeks of thought-terminating influencer BS. If people had great ways of assessing programmer productivity, we wouldn't need the ceremony-ridden promo culture that we have in some places.
(Arguably most of my career advancement in the last 5 years or so has come mainly from therapy: emotional regulation, holding onto problems that cannot be fixed easily w/o being consumed with trying to fix them or disengaging completely, and applying all that and more to various types of leadership.)
I've grown to believe the following more extreme (or maybe reasonable) version of what you said:
- A good lawyer with or without AI will likely win in court against a mediocre lawyer with AI
- A good SWE with or without AI will likely ship features faster/safer than a mediocre engineer with AI
- A good doctor with or without AI will save more lives than a mediocre doctor with AI.
I've experimented with this personally, stopping all my usage of AI coding tools for a time, including the autocomplete stuff. I by no means found myself barely treading water, soon to be overtaken by my cybernetically enhanced colleagues. In fact, quite the opposite: nothing really changed.
> A good doctor + AI will save more lives than a non-doctor + AI, who will perform better than just AI
I find even entertaining the opposite conclusion comical. Think of, for example, a world acclaimed heart surgeon. Are people seriously entertaining the idea that a rando with some agentic AI setup could outperform such a surgeon in said field, saving more lives? Is this the level of delusion that some people are at now?
I figure by "doctor" they're thinking of a GP, who most people only ever see taking measurements and diagnosing things, not actually doing physical things like a surgeon.
As Doc Brown famously said, "I don't think you're thinking fourth-dimensionally."
Current gen AI taking all the medical jobs is indeed laughable, but the amount of R&D going into AI right now is staggering and the progress has been rapid, with no signs of slowing down. 5 years from now things will be very different IMHO.
Only if you assume the current amount of knowledge work being done, or the amount of output from knowledge work, is the maximum amount possible or desired. Which is incorrect.
Every software company has a backlog of 1000 features they want to add, everywhere has a shortage of healthcare workers. If AI makes developers on a successful product 20% more efficient, they won't fire 20% of developers, they'll build 20% more features.
The problem is the "successful product" part; for a decade or more unsuccessful products were artificially propped up by ZIRP. Now that money isn't free these products are being culled, and the associated jobs along with them. AI is just an excuse.
> Only if you assume the current amount of knowledge work being done, or the amount of output from knowledge work, is the maximum amount possible or desired. Which is incorrect.
My point is simple:
Why would I hire 100s of employees when I can cut the most junior and mid-level roles and make the seniors more productive with AI?
> Every software company has a backlog of 1000 features they want to add, everywhere has a shortage of healthcare workers. If AI makes developers on a successful product 20% more efficient, they won't fire 20% of developers, they'll build 20% more features.
Exactly. Keep the seniors with AI and no need for any more engineers, or even just get away with it by firing one of them if they don't want to use AI.
> Now that money isn't free these products are being culled, and the associated jobs along with them. AI is just an excuse.
The problem is "AI" is already good enough and even if their jobs somehow "come back", the salaries will be much lower (not higher) than before.
So knowledge workers have a lot more to lose than to gain if they don't use AI.
> Why would I hire 100s of employees when I can cut the most junior and mid-level roles and make the seniors more productive with AI?
Because at competent companies juniors and mid-level employees aren't just cranking out code, they're developing an understanding of the domain and system. If all you cared about was cranking out code and features, you'd have outsourced to Infosys etc long ago. (Admittedly, many companies aren't competent.)
> Exactly. Keep the seniors with AI and no need for any more engineers, or even just get away with it by firing one of them if they don't want to use AI.
This doesn't make any sense. I asked ChatGPT and it couldn't parse it either.
> The problem is "AI" is already good enough and even if their jobs somehow "come back", the salaries will be much lower (not higher) than before.
This much is true but tech salary inflation was, again, largely a ZIRP phenomenon and has nothing to do with AI. Junior developers were never really worth $150k/year right out of university.
> Because at competent companies juniors and mid-level employees aren't just cranking out code, they're developing an understanding of the domain and system.
So many companies like Microsoft, Meta, Salesforce, and Google (who are actively using AI and just did layoffs) are somehow not 'competent companies' because they believe that with AI they can do more with fewer engineers and employees?
> This doesn't make any sense. I asked ChatGPT and it couldn't parse it either.
Made total sense for the companies I mentioned above, who just did layoffs this year based on 'streamlining operations' and 'efficiency gains' with AI (and beat their earnings estimates).
> This much is true but tech salary inflation was, again, largely a ZIRP phenomenon and has nothing to do with AI. Junior developers were never really worth $150k/year right out of university.
It's more than just that; there's also an increasing over-supply of software engineers in general, lots of them with highly inflated salaries regardless of rank. The point is that it wasn't sustainable in the first place, and junior to mid-level roles will see a reduction in salaries and jobs.
Once again, knowledge workers still have a lot more to lose than to gain if they don't use AI.
> So many companies like Microsoft, Meta, Salesforce, and Google (who are actively using AI and just did layoffs) are somehow not 'competent companies' because they believe that with AI they can do more with fewer engineers and employees?
Is there any evidence the layoffs are actually due to AI, or due to a hiring correction using AI as an excuse?
We both already know the layoffs are due to them getting away with actually doing more with less using AI.
The evidence:
1. After the layoffs that happened at Meta, it is reported that they are building (and using) AI coding agents to become even more efficient; same with Google. [0]
2. Duolingo went all in and replaced their contract workers with AI. [1]
3. Microsoft's CEO said "up to 30% of the company’s code was written by AI" [0], then laid off 3% of its workers (6K employees), including engineers. [2]
4. Business Insider went "AI first," with 70% of employees using ChatGPT, and then laid off 21% of its workers. [4]
5. After Salesforce laid off 1,000 of their workers in February 2025, they now say that "the use of artificial intelligence tools internally has allowed it to hire fewer workers," adding:
"We view these as assistants, but they are going to allow us to have to hire less and hopefully make our existing folks more productive." [5]
The list goes on and on in 2025 alone, and this further strengthens my whole point: companies will be doing more with fewer knowledge workers, and those workers still have a lot more to lose than to gain if they don't use AI.
You're falling for marketing pieces; there have been multiple discussions on HN from people working at these companies calling all of this out as bullshit. Nobody's been replaced by AI.
I appreciate you citing sources, but all of those quotes are from CEOs pumping their stock prices with lies, which, at the moment, is the best-known use of AI.
Exactly. Fewer of them will be needed, given that a few of them will be more productive with AI than without it. That is the change happening right now.