Hacker News

Yes, to me LLMs and the transformer have stumbled on a key aspect of how we learn and “autocomplete.”

We found an architecture for learning that works really well in a very niche use case. The brain also has specialization, so I think we could argue that somewhere in our brain is something like a transformer.

However, ChatGPT is slightly cheating because it borrows logic and reasoning from us. We train the model on what we judge to be good responses, so our reasoning is necessary for the LLM to function properly.
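That "training the model on what we judge to be good responses" step is roughly what preference tuning (e.g. RLHF reward modeling) does. As a minimal sketch, not the actual OpenAI implementation: a reward model is trained with a pairwise Bradley-Terry-style loss so that the human-preferred response scores higher than the rejected one. Function names here are illustrative, not from any real library.

```python
import math

def preference_loss(score_chosen: float, score_rejected: float) -> float:
    """Pairwise loss used in reward modeling (illustrative sketch).

    loss = -log(sigmoid(score_chosen - score_rejected)):
    small when the model already ranks the human-preferred response
    higher, large when it ranks the pair the wrong way round.
    """
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Correct ranking gives a small loss; inverted ranking gives a large one.
assert preference_loss(2.0, -1.0) < preference_loss(-1.0, 2.0)
```

Training on this loss is exactly where human judgment enters the pipeline: the gradient pushes the model's scores toward our notion of a good response.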


