
I just tried asking it a question:

> User: What is the third planet from the sun?

> Llama: The third planet from the sun is called Mars.



> ...
> Llama: The third planet from the sun is called Mars.

Ask it if there is life on Mars in that parallel reality.


The model is trained on a large volume of data, correct? Why would it get such a simple fact wrong?


LLMs are known to be bad at counting. It would be interesting to see the answer to "List the planets in our solar system, starting with the closest to the sun, and proceeding to farther and farther ones."

Also, the knowledge can be kind of siloed; you often have to come at it in weird ways. And they are not fact bases, they are next-token predictors with extra stuff on top. So if people on the internet often get the answer wrong, so will the model.
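
If anyone wants to try that planet-listing prompt against a local model, here is a rough sketch using the Hugging Face transformers library. The model id is just an example (any Llama-style causal LM checkpoint works the same way), and chat templating is omitted for brevity:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Example model id; swap in whichever Llama checkpoint you actually have.
    model_id = "meta-llama/Llama-2-7b-chat-hf"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = ("List the planets in our solar system, starting with the closest "
              "to the sun, and proceeding to farther and farther ones.")

    # The model only ever predicts the next token; generate() loops that
    # prediction (greedy here, since do_sample=False) until EOS or the cap.
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=100, do_sample=False)
    print(tokenizer.decode(output[0], skip_special_tokens=True))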


I just tried the same "third planet from the sun" question and got the correct response. No other training or tweaks.

Can't wait to unleash Pluto questions.


Skynet is collaborating with the Martians already, I see.


Llama is just from the future. That is all…



