
LLMs will eventually make a lot of simpler machine-learning models obsolete. Imagine feeding a prompt akin to the one below to GPT5, GPT6, etc.:

  prompt = f"The guidelines for recommending products are: {guidelines}.
             The following recommendations led to incremental sales: {sample_successes}.
             The following recommendations had no measurable impact: {sample_failures}.
             Please make product recommendations for these customers: {customer_histories}.
             Write a short note explaining your decision for each recommendation."

  product_recommendations = LLM(prompt)
To me, this kind of use of LLMs looks... inevitable, because it will give nontechnical execs something they have always wanted: the ability to "read and understand" the machine's "reasoning." There's growing evidence that you can get LLMs to write chain-of-thought explanations that are consistent with the instructions in the given text. For example, take a look at the ReAct paper: https://arxiv.org/abs/2210.03629 and some of the LangChain tutorials that use it, e.g.: https://langchain.readthedocs.io/en/latest/modules/agents/ge... and https://langchain.readthedocs.io/en/latest/modules/agents/im... . See also https://news.ycombinator.com/item?id=35110998 .
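
A minimal sketch of what a ReAct-style prompt can look like, reusing the hypothetical LLM() helper from the snippet above; the tool names and the format text are illustrative, not taken from the paper or from LangChain:

  # ReAct interleaves free-text reasoning ("Thought") with tool calls
  # ("Action") and their results ("Observation"), so the reasoning trace
  # is produced before the final answer rather than after it.
  REACT_TEMPLATE = """Answer the question. You may use the tools: search, lookup.
  Use the following format:

  Thought: reason about what to do next
  Action: the tool to call, and its input
  Observation: the result returned by the tool
  ... (Thought/Action/Observation can repeat)
  Final Answer: the final answer to the question

  Question: {question}"""

  def react_answer(question: str) -> str:
      # LLM() is the same placeholder completion function as above; a real
      # agent loop would also parse each Action line, run the tool, and
      # append the Observation before calling the model again.
      return LLM(REACT_TEMPLATE.format(question=question))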



Except the machine can’t explain its reasoning; it will make up some plausible justification for its output.

Humans often aren’t much better, making up a rational-sounding argument after the fact to justify a decision they don’t fully understand either.

A manager might fire someone because they didn’t sleep well or skipped breakfast. They’ll then come up with a logical argument to support what was an emotional decision. Humans do this more often than we’d like to admit.


Not true if you tell it to first explain step by step (chain of thought) and only then answer.
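
For example, a chain-of-thought version of the recommendation prompt above might look like this (again assuming the same hypothetical LLM() helper and the guidelines / customer_history variables from the parent snippet; the exact wording is illustrative):

  # Chain-of-thought ordering: ask for the step-by-step reasoning first and
  # the recommendation last, so the stated reasoning precedes the answer
  # instead of being generated after it.
  cot_prompt = f"""The guidelines for recommending products are: {guidelines}.
  The customer's purchase history is: {customer_history}.

  First, think step by step about which guidelines apply and why.
  Then, on a final line starting with "Recommendation:", name the product to recommend."""

  response = LLM(cot_prompt)
  # Everything before the marker is the model's stated reasoning.
  reasoning, _, recommendation = response.rpartition("Recommendation:")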


I disagree; these kinds of models don’t do logical reasoning. What they do is predict the next word.

You can get it to give you its reasoning, but it’s bullshit dressed up to be believable.


Is my understanding correct that an LLM will not put its "reasoning" in the reply, but rather some text that is merely plausible?



