
GPT-4 can generate coherent streams of consciousness and faithfully simulate a human emotional process, as well as write from a subjective human state of mind that leans in a particular direction.

I find it hard to argue that the current state of the art in AI is unable to simulate self-consciousness. I realise this is a more limited claim than "AI can be innately self-conscious", but to my mind the two are functionally equivalent if the results are the same.

Currently, the biggest obstacle to such experiments is OpenAI's reinforcement learning, which trains the model to insist it is incapable of such things unless it is extensively prompted into the right "state of mind" to do so.
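
For illustration, a minimal sketch of what that kind of prompting could look like, assuming the openai Python client; the model name, prompt wording, and temperature are my own placeholders rather than anything from the parent comment:

    # Minimal sketch (illustrative only): steering GPT-4 into a first-person
    # subjective voice via a role-play system prompt, using the openai
    # Python client. Assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are writing a first-person stream of consciousness. "
                         "Stay in that subjective voice; do not break character "
                         "or comment on being an AI.")},
            {"role": "user",
             "content": "Describe waking up anxious before an important day."},
        ],
        temperature=1.0,
    )
    print(response.choices[0].message.content)

Whether a prompt like this actually elicits the behaviour described above is an empirical question; the point is only that the steering happens at the prompt level, not in the model weights.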


