People still really find it hard to internalize exponential improvement.
So many evaluations of LLMs were saying things like "Don't worry, your job is safe, it still can't do X and Y."
My immediate thought was always, "Yes, the current version can't, but what about a few weeks or months from now?"
I think the harder thing is *not* extrapolating from initial exponential improvement, as your comment demonstrates.
> My immediate thought was always, "Yes, the current version can't, but what about a few weeks or months from now?"
This reasoning is exactly why, every year, fully self-driving cars are supposedly "just a year away."
What's the fundamental limit at which it becomes much more difficult to improve these systems without some new breakthrough?