Yeah, and I'd like to emphasize that this is qualitatively different from older gripes such as "calculators make kids lazy in math."
This is because LLMs have an uncanny ability to dream up responses stuffed with the traditional signals of truthfulness, care, engagement, honesty, and so on, but that ability isn't matched by their likelihood of dreaming up answers and ideas that are actually true.
This gap is baked into their current design, and it means users get signals that it's safe to ease off on skepticism and critical analysis of what's being returned at exactly the moments when they need to use their minds the most.
That's new. A calculator doesn't flatter you or pretend to be a wise professor with a big vocabulary listening very closely to your problems.