> Most times when I spot an error and mention it, it seems that it already had some notion of this problem in the model.

By this do you mean that it correctly incorporated what you said and convincingly indicated that it understood the mistake? Because that's not the same thing as it having the truth latently encoded in the model—it just means that it knows how people respond when someone corrects them (which is usually to say "oh, yeah, that's what I meant").




If you ask it open-ended questions like:

"what's wrong with this code?"

or "list 5 ways this can be improved,"

it does often recognize errors and give reasonable improvement suggestions.
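
For what it's worth, this is roughly the shape of prompt I mean. Just a sketch using the OpenAI Python SDK; the model name and the buggy snippet are placeholders I made up, not anything from the thread:

    # Sketch: asking an LLM an open-ended "what's wrong with this code" question.
    # Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment;
    # the model name and the buggy snippet below are made-up placeholders.
    from openai import OpenAI

    client = OpenAI()

    buggy_code = '''
    def average(xs):
        return sum(xs) / len(xs)   # fails on an empty list
    '''

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "What's wrong with this code? List 5 ways it can be improved.\n"
                       + buggy_code,
        }],
    )

    print(response.choices[0].message.content)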


I talked to it about the Turing completeness of PowerPoint. Initially it thought it was impossible, then that it was possible with some scripting language, and then, with some prodding, I got it to believe you can do it with hyperlinks and animations. It then gave me an example that I was unable to verify, but it was definitely in the ballpark of the actual solution.
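
To make the hyperlinks part a bit more concrete: clicking a hyperlink jumps you to another slide, so slides can act as states and clicks as transitions. Below is only a toy Python simulation of that control-flow half of the idea (the slide names and transition table are invented for illustration); it is not the construction the model produced, and the animations would still be needed for the storage side.

    # Toy simulation of "slides as states, hyperlink clicks as transitions".
    # Slide names and the transition table are invented for illustration.
    transitions = {
        ("start", "link_a"): "state_a",
        ("start", "link_b"): "state_b",
        ("state_a", "link_b"): "done",
        ("state_b", "link_a"): "done",
    }

    def run(clicks, slide="start"):
        """Follow hyperlink clicks through the deck and return the final slide."""
        for click in clicks:
            slide = transitions.get((slide, click), slide)  # unknown link: stay put
        return slide

    print(run(["link_a", "link_b"]))  # -> "done"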



