
> ChatGPT would soon do that for you better than you do it yourself.

And this is based on..? LLMs suffer from all the logical fallacies humans do, afaik. They perform extremely poorly where there's no training data, since they're mostly pattern matching. Sure, some logical reasoning is encoded in the patterns, but clearly not at a sophisticated level. They're also easily tricked by reusing well-known patterns with variations - e.g. restating a riddle like the wolf-lamb-shepherd one with a different premise makes them fall into the training-data honey pot.

And as for the main argument: to replace critical thinking, of all things, with a pattern parrot is the most techno-naive take I've heard in a long time. Hot damn.




You're listing today's deficiencies. ChatGPT didn't exist several years ago, and will be history in several more.

>to replace literally critical thinking of all things

Nobody forces you to replace it. ChatGPT would just do it better and faster, as it's a pretty simple thing once the toolset gets built up, which is happening as we speak.

>it’s mostly pattern matching

It is one component of ChatGPT. The other is the emergent model (the result of what looks like simple brute-force training) over which that pattern matching is performed.



