
How do you falsify the claim that "LLMs will never reason"?

I asked GPT to compute some hard multiplications; the reasoning trace looks valid and it gets the answer right.

https://chatgpt.com/share/6999b72a-3a18-800b-856a-0d5da45b94...
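One way to make this falsifiable instead of anecdotal: generate fresh random multiplications (so they can't be memorized from training data), ask the model, and check against Python's exact big-integer arithmetic. A minimal sketch; ask_model is a hypothetical stub you'd wire to whatever LLM client you use:

  import random

  # Hypothetical stub -- replace with a real call to your LLM client of choice.
  # Takes a prompt string, returns the model's reply as a string.
  def ask_model(prompt: str) -> str:
      raise NotImplementedError("plug in your LLM client here")

  def falsification_trial(digits: int = 8, trials: int = 20) -> float:
      """Test the claim on fresh random multiplications.

      Python ints are arbitrary precision, so a*b is exact ground truth.
      Systematic failure on large operands is evidence against
      'the model can reason through multiplication'; consistent success
      on unseen operands is evidence against 'it will never reason'."""
      correct = 0
      for _ in range(trials):
          a = random.randrange(10 ** (digits - 1), 10 ** digits)
          b = random.randrange(10 ** (digits - 1), 10 ** digits)
          reply = ask_model(f"Compute {a} * {b}. Reply with only the number.")
          # Strip formatting the model might add (commas, spaces).
          cleaned = reply.strip().replace(",", "").replace(" ", "")
          if cleaned == str(a * b):
              correct += 1
      return correct / trials

  # Example: accuracy = falsification_trial(digits=10)

Scaling `digits` up is the interesting part: memorization gets you nowhere on 10-digit operands, so accuracy there says something about whatever procedure the model is actually running.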




I don't need to. LLMs are probabilistic systems; they are not designed to reason. It's actually the opposite: nobody can explain some of the emergent behaviour they exhibit. Would you let one of those control air traffic based on "black magic"? Sometimes I have the feeling we have forgotten what the scientific method is...

You trust humans, yet our brain is a black box.

I trust my kind, yes. I don't know how it works, but I have one.



