Lots of Barrys out there; one of them even got elected president and shared his knowledge with the world quite widely. Half the people took him credulously.
Point being, yes the LLM loves to make shit up. Lots of people dismiss it as a result. It's still bloody impressive, we just need to be aware of its limitations.
I get that the current US president is senile. But that sets a low bar. Why do we need to pretend something is good if it’s as shitty at facts as some people? People want something that’s better and more trustworthy.
I went to Twitter and read what ML scientists say. They don't think it's anything like a layperson imagines it to be (I'm also a layman when it comes to LLMs). But it's an impressive technology IMO. I just think we don't know all the limitations and strengths yet, because there's a vocal majority that suffers from survivorship bias.
They're just telling you how LLMs work. Anyone can understand the underlying algorithm with a bit of study. It's trivial.
Nobody understands the high-level emergent effects of LLMs plus training. On that question, what the ML scientists say has about as much credibility as what a layperson says.
I think that's the wrong question to ask. It's an impressive piece of technology for which people are trying to find use cases. The right questions are: 1. What are the tasks where LLMs outperform humans, or at least deliver similar performance? 2. Are LLMs more efficient at those tasks?