
The buzzkill when you fire up the latest, most powerful model only for it to tell you that peanut butter is not typically found in peanut butter and jelly sandwiches.



I don't think anyone is seriously working on making them provide accurate answers to context-free questions. Using them that way is just the wrong use case.


People are working -very- seriously on trying to kill hallucinations. I'm not sure how you surmised the use case here, since nothing was given other than an example of a hallucination.


There's a difference between trying to get it to answer accurately based on the input you provide (useful) and trying to get it to answer accurately based on whatever may have been in the training data (not so useful).
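The distinction above is roughly the difference between a context-free prompt and a context-grounded one. A minimal sketch of the two prompting styles (hypothetical helper names, no particular model API assumed):

```python
def context_free_prompt(question: str) -> str:
    # Relies entirely on whatever the model memorized during training.
    return question

def grounded_prompt(question: str, context: str) -> str:
    # Asks the model to answer only from the supplied text, which is the
    # "accurate based on the input you provide" case described above.
    return (
        "Answer using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = grounded_prompt(
    "What goes in a PB&J sandwich?",
    "A PB&J sandwich contains peanut butter and jelly.",
)
print(prompt)
```

The grounded version gives the model something to be accurate *against*; the context-free version can only be checked against the training data.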



