Hacker News

The model is trained on large volume data, correct? Why would it get such a simple fact incorrect?


LLMs are known to be bad at counting. It would be interesting to see the answer to "List the planets in our solar system, starting with the closest to the sun, and proceeding to farther and farther ones."

Also, the knowledge can be kind of siloed; you often have to come at it in weird ways. And they aren't fact bases: they're next-token predictors, with extra stuff on top. So if people on the internet often get the answer wrong, so will the model.
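To make "next-token predictor" concrete, here's a minimal sketch (nothing like a real LLM's architecture or scale, just the same underlying principle): a model that emits the statistically most frequent continuation from its training text. The tiny corpus is made up for illustration and deliberately contains the wrong answer more often than the right one:

```python
from collections import Counter, defaultdict

# Hypothetical training text: "mars" appears twice as the answer,
# "earth" only once, so frequency (not truth) wins.
corpus = (
    "the third planet is mars . "
    "the third planet is mars . "
    "the third planet is earth ."
).split()

# Count which token follows each pair of tokens (a trigram model).
follows = defaultdict(Counter)
for i in range(len(corpus) - 2):
    follows[(corpus[i], corpus[i + 1])][corpus[i + 2]] += 1

def generate(t1, t2, steps):
    """Greedily extend the prompt by the most common next token."""
    out = [t1, t2]
    for _ in range(steps):
        nxt = follows[(t1, t2)].most_common(1)[0][0]
        out.append(nxt)
        t1, t2 = t2, nxt
    return " ".join(out)

print(generate("the", "third", 3))  # -> "the third planet is mars"
```

The model "knows" earth is in its training data, but greedy prediction surfaces the more frequent wrong answer, which is the point being made about internet-scale training data.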


I just tried the same "third planet from the sun" question and got the correct response. No other training or tweaks.

Can't wait to unleash Pluto questions.



