
> However I think what's missing here is our benchmarks (a la Turing test) are about negation as opposed to affirmation.

I would question the value of the Turing test, and I'd argue it's not a great example for AI.

There's always been this assumption that passing the Turing test would mean we had AI, but I think that was always predicated on the machine generating the outputs itself. With the GPT models, it's not clear this isn't just a form of compression over an immense data set, sending pre-existing _human_ responses back to the user. That implies to me that we could pass the Turing test with a large enough data set and no (or very little) intelligence.

All of this makes me question the claim that "These are all definitely steps in the right direction."



