Here's a bridging argument between the OP and the commenters.

Anglo-Saxon thought (utilitarianism, behaviorism, pragmatism) treats truth as probability. If an LLM outputs the right tokens in the right order, that's thinking. If it predicts true statements better than humans do, that's knowledge. The Turing Test? Behaviorist by design. Bayesian inference? A formalization of Anglo empiricism.

Continental philosophy rejects this. Heidegger: no Dasein, no being. Sartre: no self-awareness, no thought. Derrida: no interpretive play, no meaning. The German Idealists would laugh outright.

So in the Anglo tradition, LLMs are already "thinking." In the French/German view, they’re an epistemic trick — a probabilistic mirror, not a mind.

The dispute isn't about what LLMs are; it's about how your epistemic tradition defines "thinking." And that's probably part of why the EU looks like it's "lagging behind" in the AI race: to a Continental, no amount of quacking makes an LLM a duck. It's still a parrot.

Where you land in this debate is easy to test: are you comfortable with the statement, "Truth is just what's most probable given what we already know"?

The hilarious outcome? Americans eventually build something they consider "smarter" than themselves, and French philosophers agree — but only because it lets them place themselves one step higher still.
