
  Where's the business value? Right now it doesn't really exist; adoption is low to nonexistent outside of programming, and even in programming it's inconclusive how much better or worse it makes programmers.
I have a friend who works at PwC doing M&A. This friend told me she can't work without ChatGPT anymore. PwC has an internal AI chat implementation.

Where does this notion that LLMs have no value outside of programming come from? ChatGPT released data showing that programming is just a tiny fraction of queries people do.



> This friend told me she can't work without ChatGPT anymore.

Is she more productive though?

People who smoke cigarettes will be unable to work without their regular smoke breaks. Doesn’t mean smoking cigarettes is good for working.

Personally I am an AI booster and I think even LLMs can take us much farther. But people on both sides need to stop accepting claims uncritically.


> Doesn’t mean smoking cigarettes is good for working.

Fun fact: smoking likely is! There have been numerous studies into nicotine as a nootropic, e.g. https://pubmed.ncbi.nlm.nih.gov/1579636/#:~:text=Abstract,sh... which have found that nicotine improves attention and memory.

Shame about the lung cancer though.


Nicotine does not cause cancer. Smoke does.


Yes, but nicotine can speed up the growth of existing cancers.

Cigarettes were/are a pretty lucrative business. It doesn’t matter if it’s better or worse, if it’s as addictive as tobacco, the investors will make back their money.


Productive how and for who?

My own use case is financial analysis and data capture by the models. It takes away the grunt work, so I can focus on the more pleasant aspects of the job. It also means I can produce better-quality reports, as I have additional time to look more closely, and it points out things I could have potentially missed.

Free time and boredom spur creativity; some folks forget this.

I also have more free time for myself; you're not going to see that on a corporate productivity chart.

Not everything in life is about making more money for some already wealthy shareholders, a point I feel is sometimes lost in these discussions. I think some folks need some self-reflection on this point: their jobs don't actually change the world and thinking of the shareholders only gets you so far. (Not pointed at you, just speaking generally.)


>Productive how and for who?

For me, quality is the biggest metric, not money. But time does play into the metric of quality.

The sad reality is that many use it as a shortcut to output slop. Which may be "productive" in a job where that busywork isn't critical for anyone but your paycheck. But those kinds of corners being cut seem anathema to proper engineering or any other mission-critical duties.

>their jobs don't actually change the world and thinking of the shareholders only gets you so far.

I'm worried about seeing more incidents like a lawyer citing cases to a judge that never existed. There are ethical concerns about the casual chat apps, but I can leave that to others.


I think this is not really the case; people see through that type of LLM use (busywork) immediately. This is demonstrated by the fact that top-down implementations aren't working despite use amongst employees thriving.

People doing their jobs know how to use it effectively. Just because corporates aren't capturing that value for themselves doesn't mean it's low quality. It's being used in a way that is perhaps reflected as an improvement in the actual employee's standing, and could be bridging existing outdated work processes. Often an employee is powerless to change these processes, and KPIs are notoriously narrow in scope.

Hallucinations happen less frequently these days, and people are aware of the pitfalls, so they account for this. Literally in my own example above, it means I have more time to actually check my own work (and its work), and it also points out factors I might have missed as a human (this has absolutely happened multiple times already).


> Doesn’t mean smoking cigarettes is good for working.

Au contraire. Acute nicotine improves cognitive deficits in young adults with attention-deficit/hyperactivity disorder: https://www.sciencedirect.com/science/article/abs/pii/S00913...

> Non-smoking young adults with ADHD-C showed improvements in cognitive performance following nicotine administration in several domains that are central to ADHD. The results from this study support the hypothesis that cholinergic system activity may be important in the cognitive deficits of ADHD and may be a useful therapeutic target.


So the best interpretation is that it's like Adderall: something to be carefully prescribed at doctor-sanctioned doses, not something you buy over the counter and smoke a pack a day of.


No, she's less productive. She just uses it because she wants to do less work, be less likely to get promoted, and have to stay in the office longer to finish her work.

/s

What kind of question is that? Seriously. Are some people here so naive to think that tens of millions out there don’t know when something they choose to use repeatedly multiple times a day every day is making their life harder? Like ChatGPT is some kind of addiction similar to drugs? Is it so hard to believe that ChatGPT is actually productive?


It is the kind of question that takes into account that people thinking that they are more productive does not imply that they actually are. This happens in a wide range of contexts, from AI to drugs.


It isn’t a question asked by people generally suspicious of productivity claims. It’s only asked by LLM skeptics, about LLMs.


It absolutely is a question people ask when suspicious of productivity claims.

Lots of things claim to make people more productive. Lots of things make people believe they are more productive. Lots of things fail to provide evidence of increasing productivity.

This "just believe me" mentality normally comes from scams.


>It isn’t a question asked by people generally suspicious of productivity claims.

Why not? If you've ever gotten an AI-generated email or had to code-review anything vibecoded, you're going to be suspicious about who's "more productive". I've read reports and studies, and it feels like the "more productive" people tend to be pushing more work onto people below or beside them to fix the generated mess.

I do believe there are productive ways to use this tech, but it does not seem like many people these days have the discipline to establish a proper workflow.


That doesn’t seem to me like a good reason to dismiss the question, and especially not that strongly/aggressively. We’re supposed to assume good intentions on this site. I can think of any number of reasons one might feel more productive but in the end not be going much faster. It would be nice to know more about the subject of the question’s experience and what they’re going off of.


You’re right; I’m rereading and it’s rude. Thanks.


As a counterexample to your assertion, I've seen it a lot on both sides of the RTO discourse.


This is another example of the phenomenon they’re describing, not a counterexample.


...The post I replied to specifically said "It [questioning people's self-evaluation of productivity] is only asked by LLM skeptics, about LLMs".

Naming another example outside of LLM skeptics asking it, about LLMs, is inherently a counterexample.


Wow you're completely right and I just completely forgot who you were replying to. I thought you were replying to the person the person you were actually replying to was replying to. Sorry about both my mistake and my previous sentence's convolution!


Maybe you are not aware of such kinds of topics, but yes, it is asked often. It is asked for stimulants, for microdosing psychedelics, for behavioural interventions, and for workplace policies/processes. Whenever there are any kind of productivity claims, it is asked, and it should be asked.


It's not that hard to review how much you actually got done and check whether it matches how much it felt like you were getting done.


To do that properly, one needs some kind of control, which is hard to do with one person. It should be doable with proper effort, but it's far from trivial, because it is not enough to measure what you actually did in one condition; you have to compare it with something. And then there can be a lot of noise for n=1: when you use LLMs, maybe you happen to have to solve harder tasks. So you need at least to do it over quite a lot of time, or make sure the difficulty of tasks is similar. If you have a group of people, you can put them into groups instead and not care as much about these parameters, because you can assume that when you average, this "noise" will cancel out.
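To make the n=1 problem concrete, here's a minimal simulation sketch (Python, with purely hypothetical numbers): even a real 10% speedup is close to invisible in any single with/without comparison, because task difficulty varies more than the effect, but it becomes obvious once you average over a group.

    import random

    random.seed(0)

    TRUE_SPEEDUP = 1.10  # hypothetical: the LLM really makes people 10% faster
    TASK_NOISE = 0.50    # but task difficulty varies a lot from task to task

    def observed_hours(base_hours, with_llm):
        """Hours actually spent: the true effect is buried in difficulty noise."""
        difficulty = max(0.1, random.gauss(1.0, TASK_NOISE))
        effect = 1 / TRUE_SPEEDUP if with_llm else 1.0
        return base_hours * difficulty * effect

    # n=1: one task with the LLM vs one task without. Noise dominates,
    # so any single comparison is not far from a coin flip.
    wins = sum(observed_hours(8, True) < observed_hours(8, False)
               for _ in range(1000))
    print(f"single comparisons where the LLM 'won': {wins}/1000")

    # Group averages: the difficulty noise cancels out, and the ~10%
    # effect (8h -> ~7.3h) becomes visible.
    n = 500
    avg_llm = sum(observed_hours(8, True) for _ in range(n)) / n
    avg_raw = sum(observed_hours(8, False) for _ in range(n)) / n
    print(f"group averages: {avg_llm:.2f}h with LLM vs {avg_raw:.2f}h without")

Point being: without this kind of averaging (or a very long, careful personal log), "I feel faster" and "I am faster" are hard to tell apart.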


The problem isn't a delta between what got done and how much it felt like got done. The problem is that it's not known how long it would have taken you to do what got done unless you do it twice: once by hand and once with an LLM, then compare. Unfortunately, regardless of what you find, HN will be rushing to say N=1, so there's little incentive to report on any individual results.


In fact, when this was studied, it was found that using AI actually makes developers less productive:

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...


> This friend told me she can't work without ChatGPT anymore.

It doesn't say she chooses to use it; it says she can't work without using it. At my workplace, senior leadership has mandated that software engineers use our internal AI chat tooling daily; they monitor the usage statistics and are updating engineering leveling guides to make sufficient AI usage a requirement for promotions. So I can't work without AI anymore, but that doesn't mean I choose to.


Serious thought.

What if people are using LLMs to achieve the same productivity with more cost to the business and less time spent working?

This, to me, feels incredibly plausible.

Get an email? ChatGPT the response. Relax and browse socials for an hour. Repeat.

"My boss thinks I'm using AI to be more productive. In reality, I'm using our ChatGPT subscription to slack off."

That three day report still takes three days, wink wink.

AI can be a tool for 10xers to go 12x, but it's just as likely the best slack-off tool for slackers to go from 0.5x to 0.1x.

And the businesses with AI mandates for employees probably have no idea.

Anecdotally, I've seen it happen to good engineers. Good code turning into flocks of seagulls, stacks of scope 10-deep, variables that go nowhere. Tell me you've seen it too.


Yeah, I think this is why it's more important to shift the question to "is the team/business more productive". If a 0.5xer manager is pushing 0.1x work and a 1xer teammate needs to become a 1.5xer to fix the slop, then we have this phenomenon where the manager can feel way more productive, while the team under him is spending more time just to fix or throw out his slop.

Both their perspectives are technically right. But we'll either have burned-out workers or a lagging schedule in the long term. I miss when we thought more long-term about projects.


That's Jevons paradox for you.


There's literally a study out the shows when developers think LLMs are making them 20% faster, it turned out to be making them 20% less productive:

https://arxiv.org/abs/2507.09089


I mean... there are many situations in life where people are bad judges of the facts. Dating, finances, health, etc, etc, etc.

It's not that hard to imagine that your friend feels more productive than she actually is. I'm not saying it's true, but it's plausible. The anecdata coming out of programming is mostly that people are only more productive in certain narrow use cases and much less productive in everything else, relative to just doing the work themselves with their sleeves rolled up.

But man, seeing all that code get spit out on the screen FEELS amazing, even if I'm going to spend the next few hours editing it and the next few months managing the technical debt I didn't notice when I merged it.


> What kind of question is that? Seriously. Are some people here so naive to think that tens of millions out there don’t know when something they choose to use repeatedly multiple times a day every day is making their life harder?

That's just an appeal to masses / bandwagon fallacy.

> Is it so hard to believe that ChatGPT is actually productive?

We need data, not beliefs, and current data is conflicting. ffs.


You're working under the assumption that punching a prompt into ChatGPT and getting up to grab some coffee while it spits out thousands of tokens of meaningless slop to be used as a substitute for something that you previously would've written yourself is a net upgrade for everyone involved. It's not. I can use ChatGPT to write 20 paragraph email replies that would've previously been a single manually written paragraph, but that doesn't mean I'm 20x more productive.

And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.


That's a very broad assumption.

It's no different from a manager who delegates: are they less of a manager because they entrust the work to someone else? No. So long as they do quality checks and take responsibility for the results, where's the issue?

Work hard versus work smart. Busywork cuts both ways.


You're assuming that there is zero quality check and that managers and clients will accept anything ChatGPT generates.

Let's be serious here. These are still professionals, and they have a reputation. The few cases of AI slop in professional settings that you hear about online are the exception, not the norm.


> And yes, ChatGPT is kinda like an addictive drug here. If someone "can't work without ChatGPT anymore", they're addicted and have lost the ability to work on their own as a result.

Come on, you can’t mean this in any kind of robust way. I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?” Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.


> you can’t mean this in any kind of robust way.

Why not?

>I can’t get my job done without a computer; am I an “addict” who has “lost the ability to work on my own?”

It's very possible. I know people love besmirching the "you won't always have a calculator" mentality. But if you're using a calculator for 2nd-grade mental math, you may have degraded too far. It varies by task, of course.

>Every tool tends to engender dependence, roughly in proportion to how much easier it makes the life of the user. That’s not a bad thing.

Depends on how it's making it easier. Phones are an excellent example. They make communication much easier and long distance communication possible. But if it gets to the point where you're texting someone in the next room instead of opening your door, you might be losing a piece of you somewhere.


There's a big difference between needing a tool to do a job that only that tool can do, and needing a crutch to do something without using your own faculties.

LLMs are nothing like a computer for a programmer, or a saw for a carpenter. In the very best case, from what their biggest proponents have said, they can let you do more of what you already do with less effort.

If someone has used them enough that they can no longer work without them, it's not because they're just that indispensable: it's because that someone has let their natural faculties atrophy through disuse.


> I can’t get my job done without a computer

Are you really comparing an LLM to a computer? Really? There are many jobs today that quite literally would not exist at all without computers. It's in no way comparable.

You use ChatGPT to do the things you were already doing faster and with less effort, at the cost of quality. You don't use it to do things you couldn't do at all before.


I can’t maintain my company’s Go codebase without chatgpt.


>Is it so hard to believe that ChatGPT is actually productive?

Given what I've seen in the educational sector: yes. Very hard. We already had this massive split in extremes between the highly educated and the ones who struggle. The last thing we need is to outsource the aspect of thinking to a billionaire tech company.

The slop you see in the workplace isn't encouraging either.


The recent MIT report on the state of AI in business feels relevant here [0]:

> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return.

There's no doubt that you'll find anecdotal evidence both for and against in all variations, what's much more interesting than anecdotes is the aggregate.

[0] https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Bus...


I think it's true that AI does deliver real value. It's helped me understand domains quickly, served as a better Google search, given me code snippets, and found obscure bugs. In that regard, it's a positive for the world.

I also think it's true that AI is nowhere near AGI level. It's definitely not currently capable of doing my job, not by a long shot.

I also think that throwing trillions of dollars at AI for "a better Google search, code snippet generator, and obscure bug finder" is contentious, and a lot of people oppose it for that reason.

I personally still think it's kind of crazy that we have a technology that can do things we couldn't do just ~2 years ago, even if it just stagnates right here. I'm still going to be using it every day, even if I admittedly hate a lot of parts of it (for example, "thinking models" get stuck in local minima way too quickly).

At the same time, don't know if it's worth trillions of dollars, at least right now.

So all claims on this thread can be very much true at the same time, just depends on your perspective.


I have my criticisms of LLMs, but anyone in 2025 trying to sell you AGI is selling you a bridge made of snake oil. The job market won't even be the biggest question the day we truly achieve AGI.

>At the same time, don't know if it's worth trillions of dollars, at least right now.

The revenue numbers sure don't suggest so. And I don't think this economy can support "trillions" of spending even if it wanted to. That's why the bubble will pop, IMO.


That report also mentions individual employees using their own personal subscriptions for work, and points to it as a good model for organizations to use when rolling out the tech (i.e. just make the tools available and encourage/teach staff how they work). That sure doesn’t make it sound like “zero return” is a permanent state.


Ah yes, the study that everyone posts but nobody reads

>Behind the disappointing enterprise deployment numbers lies a surprising reality: AI is already transforming work, just not through official channels. Our research uncovered a thriving "shadow AI economy" where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.

>The scale is remarkable. While only 40% of companies say they purchased an official LLM subscription, workers from over 90% of the companies we surveyed reported regular use of personal AI tools for work tasks. In fact, almost every single person used an LLM in some form for their work.


IT nightmares aside, this only makes the issue worse: if it is so widespread that some are sneaking personal accounts to use it, and they still can't make a business more productive/profitable, well, that bubble is awfully wobbly.


No. The aggregate is useless. What matters is the 5% that have positive return.

In the first few years of any new technology, most people investing in it lose money because the transition and experimentation costs are higher than the initial returns.

But as time goes on, best practices emerge, investments get paid off, and steady profits emerge.


On the provider end, yes. Not on the consumer end.

These are business customers buying a consumer-facing product.


No, on the consumer end. The whole point is that the 5% profitable is going to turn to 10%, 25%, 50%, 75% as companies work out how to use AI profitably.

It always takes time to figure out how to profitably utilize any technological improvement and pay off the upfront costs. This is no exception.


Can we both at least agree that 95% of companies investing and failing in a technology with $400B+ of investment constitutes a bubble popping? I pretty much agree with you otherwise, and that is what the article comes down to as well:

>I believe both sides are right. Like the 19th century railroads and the 20th century broadband Internet build-out, AI will rise first, crash second, and eventually change the world.


When your work consists of writing stuff disconnected from reality it surely helps to have it written automatically.


On the other hand, it's a hundreds-of-billions of dollars market...


What is?


Writing stuff disconnected from reality, I assume.


Greater output doesn't always equal greater productivity. In my days in the investing business we would have junior investment professionals putting together elaborate and detailed investment committee memos. When it came time to review a deal in the investment committee meetings we spent all our time trying to sift through the content of the memos and diligence done to date to identify the key risks and opportunities, with what felt like a 1:100 signal to noise ratio being typical. The productive element of the investment process was identifying the signal, not producing the content that too often buries the signal deeper. Imo, AI tools to date make it so much easier to create content which makes it harder to be productive.


> This friend told me she can't work without ChatGPT anymore

I am curious: what kind of work is she using ChatGPT for such that she cannot do without it?

> ChatGPT released data showing that programming is just a tiny fraction of queries people do

People are using it as a search engine, getting dating advice, and everything under the sun. That doesn't mean there is business value, so to speak. If these people had to pay, say, $20 a month for this access, would they be willing to do so?

The poster's point was that coding is an area that consistently pays for LLM models, so much so that every model has a coding-specific version. But we don't see the same sort of specialized models for other areas, and there the adoption is low to nonexistent.


> what kind of work is she using ChatGPT for such that she cannot do without it?

Given they said this person worked at PwC, I’m assuming it’s pointless generic consultant-slop.

Concretely it’s probably godawful slide decks.


>Where does this notion that LLMs have no value outside of programming come from?

Well, this article cites $400B of spending for $12B of revenue. That's not zero value, but it definitely shows overvaluation. We're not paying that level of money back with consumer-level goods.

Now, is B2B valuable? Maybe. But it's really tough to value that given how businesses are operating circa 2025.

> ChatGPT released data showing that programming is just a tiny fraction of queries people do.

Yes, but it's not 2010 anymore. Companies are already on ChatGPT's neck trying to get ROI. They can't run insolvent for a decade at this level of spending like all the FAANGs did last decade.


> This friend told me she can't work without ChatGPT anymore.

This isn't a sign that ChatGPT has value as much as it is a sign that this person's work doesn't have value.


What kind of logic is this?

ChatGPT automates much of my friend's work at PwC making her more productive --> not a sign that ChatGPT has any value

Farming machines automated much of what a farmer used to have to do by himself making him more productive --> not a sign that farming machines have any value


The output of a farm is food or commodities to be turned into food.

The output of PwC -- whoops, here goes any chance of me working there -- is presentations and reports.

“We’re entering a bold new chapter driven by sharper thinking, deeper expertise and an unwavering focus on what’s next. We’re not here just to help clients keep pace, we’re here to bring them to the leading edge.”

That's on the front page of their website, describing what PwC does.

Now, what did PwC used to do? Accounting and auditing. Worthwhile things, but adjuncts to running a business properly, rather than producing goods and services.


The output of her work isn't presentations and reports. The actual output is raising money and making successful deals. This mostly requires convincing investors, which is very hard to do.

Look up what M&A is.


>Look up what M&A is.

Mergers and Acquisitions? If that's the right acronym, I hate it even more, thank you.

But yes, I can see how automating the BS of corporate culture, then using it to impress people (who also don't care anyway) by saying "I made this with AI", can be "productive". Not really a job I can do, though.


Classic software developer mindset. Thinks nothing is valuable except writing code.


If you saw my rant on monopolistic mergers and thought "he only cares about writing code", then it's clear who's really in the software mindset.


What?

If you think convincing investors to give you hundreds of millions is easier than writing code, you’re out of your mind.


I find it’s mostly a sign of how lazy people get once you introduce them to some new technology that requires less effort for them.


Most developers can't do much work without an IDE and Chrome + Google.

Would you say that their work has no value?


This is probably the only place I can properly say "Programmers should be brought up with vim and man pages", so I'll say it here.

Anyways, IDEs don't try to offload the thinking for you; they're more like an abacus. You still need to work in one a while and learn the workflow before it's more efficient than a text editor + docs.

Chrome is a trickier aspect, because the reality is that a lot of modern docs completely suck. So you rely less on official documentation and more on how others have navigated an IDE, and whether those options work for you. I'd rather we write proper documentation than offload it onto a black box that may or may not understand what it's spouting out at you, though.


This says more about PwC and what M&A people do all day than it does about ChatGPT.


I have a friend who can't work with an LLM.

1-1. Your move, dickhead.



