
If LLMs just regurgitated compressed text, they'd fail on any novel problem not in their training data. Yet they routinely solve such problems, which means whatever's happening between input and output is more than retrieval, and calling it "not understanding" requires you to define understanding in a way that conveniently excludes everything except biological brains.



Yes, there are some fascinating emergent properties at play, but when they fail it's blatantly obvious that there's no actual intelligence or understanding. They are very cool and very useful tools. I use them on a daily basis now, and the way I can just paste a screenshot with some vague text and they get it and give a useful response blows my mind every time. But it's very clear that it's all smoke and mirrors: they're not intelligent, and you can't trust them with anything.

When humans fail a task, it’s obvious there is no actual intelligence or understanding.

Intelligence is not as cool as you think it is.


It can still be cool, but maybe it's just not as rare.

I assure you, intelligence is very cool.

You'd think, with how often Opus builds two separate code paths without feature parity when you try to vibe-code something complex, people wouldn't regard this whole thing so highly.

> they'd fail on any novel problem not in their training data

Yes, and that's exactly what they do.

No, none of the problems you gave to the LLM while toying around with it are in any way novel.


None of my codebases are in their training data, yet they routinely contribute to them in meaningful ways. They write code I'm happy with, code that improves the codebases I work in.

Do you not consider that novel problem solving?


They don't solve novel problems. But if you believe so strongly that they do, please give us examples.

Depends how precisely you define novel. I don't think LLMs are yet capable of posing and solving interesting problems, but they have been used to address known problems, and in doing so have contributed novel work. Examples include Erdős Problem #728[0] (Terence Tao said it was solved "more or less autonomously" by an LLM), IMO problems (DeepMind, OpenAI, and Huang 2025), GPT-5.2 Pro contributing a conjecture in particle physics[1], and systems like AlphaEvolve leveraging LLMs + evolutionary algorithms to generate new, faster algorithms for certain problems[2] (a rough sketch of that last pattern follows the links).

[0] https://mathstodon.xyz/@tao/115855840223258103

[1] https://huggingface.co/blog/dlouapre/gpt-single-minus-gluons

[2] https://deepmind.google/blog/alphaevolve-a-gemini-powered-co...
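Since that last item gets asked about a lot: here is a minimal sketch of the general shape of an LLM-guided evolutionary loop, in the spirit of (but far simpler than) AlphaEvolve. It is not their implementation. The llm_propose_variant hook is a hypothetical stand-in for a real model call, stubbed with random mutation so the loop actually runs, and the fitness function is a toy objective of my own.

  # Sketch: score a population, keep the fittest, ask the "LLM" for variants.
  import random

  def fitness(candidate):
      # Toy objective: how close the candidate vector is to all-ones.
      return -sum((x - 1.0) ** 2 for x in candidate)

  def llm_propose_variant(parent):
      # Placeholder for "ask the model to rewrite/improve this candidate".
      # Stubbed here: perturb one element at random.
      child = list(parent)
      i = random.randrange(len(child))
      child[i] += random.uniform(-0.5, 0.5)
      return child

  def evolve(pop_size=20, generations=200, dim=8):
      population = [[random.uniform(-2, 2) for _ in range(dim)]
                    for _ in range(pop_size)]
      for _ in range(generations):
          # Keep the fittest half, refill with proposed variants of survivors.
          population.sort(key=fitness, reverse=True)
          survivors = population[: pop_size // 2]
          children = [llm_propose_variant(random.choice(survivors))
                      for _ in range(pop_size - len(survivors))]
          population = survivors + children
      return max(population, key=fitness)

  if __name__ == "__main__":
      best = evolve()
      print("best fitness:", fitness(best))

The point is only that the model sits inside an outer search loop that keeps whatever scores well, which is how purely "retrieval-ish" proposals can still accumulate into something measurably new.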




