An ML model's intent is its reward function: it strives to maximize reward, much as a human does. There is nothing strange about this.
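As a minimal illustration of that reward-maximizing behavior, here is a toy epsilon-greedy multi-armed bandit. The function name, parameters, and reward values are all hypothetical, chosen for the sketch; the point is only that the agent's "intent" is fully captured by the reward signal it optimizes:

```python
import random

def epsilon_greedy_bandit(true_rewards, steps=10000, epsilon=0.1, seed=0):
    """Learn which arm pays best by maximizing cumulative reward."""
    rng = random.Random(seed)
    n = len(true_rewards)
    estimates = [0.0] * n  # running estimate of each arm's mean reward
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore: try a random arm
        else:
            arm = max(range(n), key=lambda a: estimates[a])  # exploit
        reward = true_rewards[arm] + rng.gauss(0, 0.1)  # noisy payoff
        counts[arm] += 1
        # incremental mean update of the chosen arm's estimate
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return max(range(n), key=lambda a: estimates[a])

best = epsilon_greedy_bandit([0.2, 0.8, 0.5])
```

The agent has no goals beyond the reward; whatever behavior it converges on (here, preferring the highest-paying arm) is entirely explained by that signal.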
Humans are far more complex than these models, with many more concepts and capacities, which is why we need psychology. But some core aspects work the same way in ML and in human thinking. In those cases it is helpful to use the same terminology for humans and machine learning models, because that helps transfer understanding from one domain to the other.