LLMs will eventually make a lot of simpler machine-learning models obsolete. Imagine feeding a prompt like the one below to GPT-5, GPT-6, and so on:
prompt = f"The guidelines for recommending products are: {guidelines}.
The following recommendations led to incremental sales: {sample_successes}.
The following recommendations had no measurable impact: {sample_failures}.
Please make product recommendations for these customers: {customer_histories}.
Write a short note explaining your decision for each recommendation."
product_recommendations = LLM(prompt)
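For concreteness, here is a minimal sketch of what wiring that prompt to a hosted model could look like today, assuming the OpenAI Python client; the model name, the example data, and the variable names are placeholders, not a recommendation of any particular provider or schema.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder inputs; in practice these would come from your own data.
guidelines = "Recommend at most three in-stock products per customer."
sample_successes = ["Customer 812: suggested hiking boots, bought within a week."]
sample_failures = ["Customer 333: suggested a blender, no purchase after 30 days."]
customer_histories = ["Customer 907: bought a tent and a sleeping bag last month."]

prompt = f"""The guidelines for recommending products are: {guidelines}.
The following recommendations led to incremental sales: {sample_successes}.
The following recommendations had no measurable impact: {sample_failures}.
Please make product recommendations for these customers: {customer_histories}.
Write a short note explaining your decision for each recommendation."""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
product_recommendations = response.choices[0].message.content
print(product_recommendations)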
Except the machine can’t actually explain its reasoning; it will make up some plausible-sounding justification for its output.
Humans often aren’t much better, making up a rational-sounding argument after the fact to justify a decision they don’t fully understand either.
A manager might fire someone because they didn’t sleep well or skipped breakfast. They’ll then come up with a logical argument to support what was an emotional decision. Humans do this more often than we’d like to admit.