Here's something to think about:

Can we consider AI conscious in a way similar to how Hardy recognised Ramanujan's genius? That is, if AI weren't conscious, it wouldn't have the imagination to write what it wrote.




It doesn't take consciousness to predict the next word repeatedly.


That's reductive. Cherry-pick the most lucid LLM output and the least lucid human output, and I think at this point the former clearly exceeds the latter. If an LLM is "just" predicting the next word repeatedly, then what does that say about humans?


Nothing, since human output is adapted (better or worse) to the human's goals in a particular situation, while an LLM produces text that seems like a logical continuation of the text it is given as input. There is a fundamental difference between these two approaches that makes lessons from one largely inapplicable to the other.
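
For concreteness, "predicting the next word repeatedly" is literally a loop like the one below (a minimal sketch, assuming the Hugging Face transformers library and the small public GPT-2 checkpoint; the prompt and the 20-token cutoff are arbitrary choices for illustration):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Here's something to think about:"
    ids = tokenizer(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(20):                    # 20 steps, one token per step
            logits = model(ids).logits[0, -1]  # scores for every candidate next token
            next_id = torch.argmax(logits)     # greedy: take the single most likely one
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

Greedy argmax is used here only to keep the sketch short; deployed chat systems typically sample from the predicted distribution instead, which is why their output varies from run to run.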


Have you noticed how predictable humans are when they have Transient Global Amnesia?

https://www.radiolab.org/podcast/161744-loops/transcript

*The Prisoner of Memory*

In August 2010, Mary Sue Campbell became trapped in the most peculiar prison—a 90-second loop of her own consciousness. After experiencing transient global amnesia, she lost the ability to form new memories, resetting every minute and a half like a human record player stuck on repeat.

What made her condition haunting wasn't just the forgetting, but the eerie predictability. Every cycle, she asked the same questions in identical phrasing: "What's the date?" followed by "My birthday's already passed? Darn." She'd listen to her daughter's explanation, widen her eyes at the mention of paramedics, and declare "This is creepy!" The nurses began mouthing her words along with her.

Dr. Jonathan Vlahos, who treated similar cases, found this mechanical repetition deeply unsettling. It suggested that beneath our illusion of free will lies something more algorithmic—that given identical inputs, the brain produces identical outputs. Mary Sue's loop revealed the uncomfortable truth that our sense of spontaneous choice might be far more fragile than we imagine.


> that given identical inputs

This part doesn't seem correct. Every 90 seconds she has completely different inputs: sights, sounds, the people she's around, etc. It seems like this has more to do with her unique condition than with any revelation about free will.



