
can you say more about world models or symbolism?

I thought world models like Genie 3 would be the training mechanism, but I likely misunderstand.



A world model is a theoretical type of model that has knowledge about the "real world" (or whatever world or bounds you define). It can infer causal relations between concepts within this world.

Yes, you can use Genie 3 to train other models, but it's far from perfect. You still need to train Genie 3 itself, and its training and outputs must be useful in the context of whatever you want to train other models on. That's a circular dependency: the feedback loop needs to produce useful results, and Genie 3 can still hallucinate or produce implausible outputs. Symbolism is a broad term, but a world model needs it to relate concepts coherently (e.g. ontologies, or the relation between movement and gravity).
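To make the symbolism point concrete, here's a minimal sketch of what "relating concepts" could look like: a tiny ontology of causal rules that a symbolic world model might use to chain inferences. All the concept names and rules here are invented for illustration, not taken from Genie 3 or any real system.

```python
# Toy symbolic "world model": a tiny ontology of concepts plus causal
# rules, from which simple causal chains can be inferred.
# (Hypothetical example; concept names are made up.)

CAUSES = {
    "release_object": "falls",      # gravity relates release to falling
    "falls": "hits_ground",
    "hits_ground": "makes_sound",
}

def causal_chain(event, rules=CAUSES):
    """Follow the causal rules from an event and return the inferred chain."""
    chain = [event]
    while chain[-1] in rules:
        chain.append(rules[chain[-1]])
    return chain

print(causal_chain("release_object"))
# ['release_object', 'falls', 'hits_ground', 'makes_sound']
```

A learned world model would of course represent these relations implicitly rather than as a lookup table, but the inference it needs to support is the same shape.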


>The feedback loop needs to produce useful results. And Genie 3 can still hallucinate or produce implausible responses

One solution to this is giving the model a physical body and actually letting it interact with the real world and learn from it. But no lab dares to try this, because allowing a model to learn from experience would mean allowing it to potentially change its views/alignment.


Labs have been doing that since Brooks' subsumption architecture decades ago. The problem with AI now is that current architectures, unlike the brain, lack grounded memory and hallucination mitigation. Letting those architectures walk around in the real world would expose the same flaws.
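For readers unfamiliar with the reference: Brooks-style subsumption control layers simple behaviors, with higher-priority layers suppressing lower ones. A minimal sketch, with behaviors and sensor fields invented for illustration:

```python
# Minimal sketch of Brooks-style subsumption control: independent behavior
# layers, ordered by priority; the first layer that fires suppresses
# ("subsumes") everything below it. Hypothetical behaviors, not a real robot.

def avoid(sensors):               # highest priority: don't hit obstacles
    if sensors.get("obstacle"):
        return "turn_away"

def wander(sensors):              # lowest priority: default behavior
    return "move_forward"

LAYERS = [avoid, wander]          # ordered from highest to lowest priority

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:    # first firing layer subsumes the rest
            return action

print(act({"obstacle": True}))    # -> turn_away
print(act({}))                    # -> move_forward
```

The point of the original design was exactly the embodied learning-in-the-world loop the parent comment describes, just without learned world models on top.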

Multiple teams have already baked memory into their designs, some along typical ML lines and some biologically inspired. Hallucination mitigation needs a ton more research. My proposal was to study the part of the brain whose damage causes hallucinations, in case it exists to mitigate them. Then imitate it until we have something better.
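One common software-level version of the grounded-memory idea is to only answer from stored evidence and abstain otherwise. A toy sketch, with the memory store and queries entirely made up:

```python
# Hedged sketch of one hallucination-mitigation pattern: answer only from
# grounded memory, and abstain when no stored evidence matches, rather
# than guessing. The store and queries are invented for illustration.

MEMORY = {
    "capital_of_france": "Paris",
}

def answer(query, memory=MEMORY):
    """Return a grounded answer, or abstain instead of confabulating."""
    if query in memory:
        return memory[query]
    return "I don't know"         # abstain rather than hallucinate

print(answer("capital_of_france"))    # -> Paris
print(answer("capital_of_atlantis"))  # -> I don't know
```

Real systems do this with retrieval and calibrated uncertainty rather than exact lookup, but the abstention principle is the same.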



