
> If ChatGPT-4 hallucinates an answer, it won't kill me

Not yet it won't. It doesn't take much imagination to foresee this kind of AI being used to inform legal or medical decisions.



Real human doctors kill people by making mistakes. Medical error is a non-trivial cause of death. An AI doctor only needs to be better than the average human doctor; isn't that the same argument we always hear about self-driving cars?

And medicine is nothing but pattern matching. Symptoms -> diagnosis -> treatment.



