> but what happens when something like this influences your doctor in making a prognosis? Or when a self-driving car fails and kills someone?

What happens when a doctor's brain, which is also an unexplainable stochastic black box, influences your doctor to make a bad prognosis?

Or a human driver (presumably) with that same brain kills someone?

We go to court and let a judge/jury decide if the action taken was reasonable, and if not, the person is punished by being removed from society for a period of time.

We could do the same with the AI -- remove that model from society for a period of time, based on the heinousness of the crime.




> What happens when a doctor's brain, which is also an unexplainable stochastic black box, influences your doctor to make a bad prognosis?

The intent is known by the doctor, though, whereas ChatGPT does not know its own decision-making process.

And it’s possible to ask the doctor to explain their decisions and sometimes get an honest, detailed response.


You are more correct than not, although human self-reflection is probably guesswork more often than we admit.


I agree, though I would guess that most self-explanations are ChatGPT-like post-hoc reasoning without much insight into the actual cause of a particular decision. As someone below says, the split-brain experiments seem to suggest that our conscious mind is just reeling off bullshit on the fly. Like ChatGPT, it can approximate a correct-sounding answer.


You can't trust post-action reasoning in people. Check out the split-brain experiments. Your brain will happily make up reasons for performing tasks or actions.


There is also the problem of causality. Humans are amazing at understanding those types of relationships.

I used to work on a team doing NLP research related to causality. Machine learning (deep learning, LLMs, rule-based, and traditional approaches) is a long way from really solving that problem.


The main reason is the mechanics of how it works. Human thought and consciousness are emergent phenomena of electrical and chemical activity in the brain. By emergent, I mean that your consciousness cannot be explained only in terms of the electrical and chemical interactions of its substrate.

Humans don't make decisions by consulting their electrochemical states... they manipulate symbols with logic, draw on past experiences, and can understand causality.

ChatGPT, and in a broader sense any deep-learning-based approach, does not have any of that. It doesn't "know" anything. It doesn't understand causality. All it does is try to predict the most likely response to what you asked, one token at a time.
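
For anyone who wants to see what that prediction loop looks like, here is a minimal sketch using GPT-2 through the Hugging Face transformers library as a public stand-in (ChatGPT's own weights aren't available, and the prompt here is purely illustrative):

    # Minimal sketch of autoregressive next-token prediction.
    # GPT-2 stands in for ChatGPT; real chat models add sampling and
    # RLHF on top of this, and they predict tokens, not characters.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # Hypothetical prompt, just for illustration.
    input_ids = tokenizer.encode("The doctor explained the diagnosis because",
                                 return_tensors="pt")

    with torch.no_grad():
        for _ in range(20):                      # emit 20 tokens, one at a time
            logits = model(input_ids).logits     # score every token in the vocabulary
            next_id = logits[0, -1].argmax()     # greedily take the most likely next token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

There is no knowledge base or causal model anywhere in that loop, just repeated application of "which token most plausibly comes next."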


The similarity to humans is what makes it scarier.

History (and the present) is full of humans who have thought themselves superior and tried to take over the world. Eventually they fail, as they are not truly superior, and they die anyway.

Now, imagine something that is truly superior and immortal.



