
No, we won't. Not in either of our lifetimes. There are problems with vastly smaller problem spaces that we cannot solve because of their sheer difficulty. LLMs are the equivalent of a brute-force attempt at cracking language. Language is an infinitesimal fraction of the whole body of work devoted to AI.


That's what they used to say about Go before DeepMind took Lee Se-dol for a ride.

Not bad for a parrot.

As for language, LLMs showed that we didn't really understand what language is. Don't sell language short as a concept. It does more than we think.


Ok. Check back on this thread in 3 years then.


You should really make a bet on longbets.org if you're serious.


Done, see you in three years.


The comment time limit is 14 days; I'm not sure you can keep this alive for 3 years by replying 160 levels deep.


They could create a new post, resurfacing this bet.


How will the other person ever find it?


They could … share email addresses.



