Hacker News

There are so many papers now showing that LLM "reasoning" is fragile and based on pattern-matching heuristics that I think it's worth considering that, while this may not be a limitation in principle (in the sense that an autoregressive predictor given infinite data and compute would have to learn to simulate the universe to predict perfectly), in practice we're not going to build Laplace's LLM, and we might need a more direct architecture as a shortcut!