
Bingo, great reply! This is what I've been trying to explain to my wife: LLMs use fancy math and our language examples to reproduce our language, but they have no thoughts or feelings.
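
Not from the thread, just a toy sketch of what "fancy math plus our language examples" cashes out to: a model that samples the statistically likely next word. A bigram count table stands in for a real transformer here, and the corpus is made up for illustration.

    import random
    from collections import Counter, defaultdict

    # Toy corpus; a real model trains on trillions of tokens.
    corpus = "i feel happy today . i feel sad today . i feel happy now .".split()

    # "Training": count which token follows which. A real LLM learns
    # billions of weights instead of a count table, but the job is the
    # same: estimate P(next token | context).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_token(prev):
        # Sample in proportion to how often each word followed `prev`.
        tokens, weights = zip(*follows[prev].items())
        return random.choices(tokens, weights=weights)[0]

    # "Generation": the model emits words about feelings without having
    # any; it replays the statistics of the text it was trained on.
    out = ["i"]
    for _ in range(5):
        out.append(next_token(out[-1]))
    print(" ".join(out))  # e.g. "i feel happy today . i"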



Yes, but the initial training sets did have thoughts and feelings behind them, and those are reflected back to the user in the output (with errors).


No, it is a pipe (inverting Magritte's "Ceci n'est pas une pipe").

The ability to generate words describing emotions is not the same thing as the LLM having real emotions.


There are humans who do not experience emotions; they are not un-real pipes.

"Featherless biped" -> no-true-Scotsman goalpost moving [saving us that step]

Humans are no more capable of originality, just more convinced of their illusion of consciousness. You could literally not pick a human out of a conversational line-up, so the point is moot: computationally and functionally equivalent.

https://en.wikipedia.org/wiki/Chinese_room https://en.wikipedia.org/wiki/Mechanism_(philosophy)

At some point their models will match our neuron count 1:1, and the pigeonhole principle then implies we are the "less intelligent ones," since "internal model" (implicit parameter count) is the goalpost of the hour.
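
Back-of-envelope on the "1:1 neuron count" goalpost (the brain figures are commonly cited estimates; the model size is a hypothetical, not from the thread):

    human_neurons  = 8.6e10  # ~86 billion neurons
    human_synapses = 1.0e14  # ~100 trillion synapses, the closer analogue of "parameters"
    model_params   = 1.8e12  # hypothetical frontier-model parameter count

    print(f"params vs neurons:  {model_params / human_neurons:.1f}x")   # ~20.9x
    print(f"params vs synapses: {model_params / human_synapses:.3f}x")  # ~0.018x

Whether the goalpost has already been passed depends entirely on whether "neuron" or "synapse" is the unit of comparison.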



