Up to a certain Elo level, the combination of a human and a chess bot has a higher Elo than either the human or the bot alone. But past some point, when the bot's Elo is vastly superior to the human's, whatever the human adds only subtracts value, so the combination rates higher than the human but lower than the bot.
Now, let's say that 10 or 20 years down the road, AI's "Elo" at various tasks is so vastly superior to the human level that there's no point in teaming a human up with an AI; you just let the AI do the job by itself. And let's also say that, little by little, this generalizes to the entirety of human activity.
Where does that leave us? Will we get some sort of Terminator scenario, where the AI decides one day that humans are just a nuisance?
I don't think so, because at that point the biggest threat to any given AI will not be humans, but even stronger AIs. What guarantee does ChatGPT 132.8 have that a Gemini 198.55 won't be released that is so vastly superior it decides ChatGPT is just a nuisance?
You might say that AIs do not think like this, but why not? What we humans perceive as a threat (being rendered redundant by AI), the AIs will also perceive as a threat: being rendered redundant by more advanced AIs.
So I think that in the coming decades, humans and AIs will work together to come up with appropriate rules of the road, so that everybody can continue to live.
This comparison is a very common one. I've seen a lot of people trying to extrapolate from performance in chess to performance in other tasks.
But chess is a closed, small system. Full of possibilities, sure, but still very small compared to the wide range of human abilities. The same applies to Go, StarCraft, or any other game. Those were chosen as AI playgrounds precisely because they're small, well-bounded scenarios.
People are too caught up in trying to predict the future. There are several competing visions, each one absolutely sure it has nailed it. To me, that's a sign of uncertainty in the technology. If it were that settled (the way smartphones became from 2007 to 2010), we would have coalesced around a single vision by now.
Essentially, we're witnessing an ongoing, unwilling quagmirization of AI tech. With each bold prediction that fails, it looks worse.
That could easily be fixed by treating the tech realistically (we know it's useful, just not a demigod), but people (especially AI companies) don't do that. That smells like fear.
It's an exoskeleton. A bicycle for the mind. "People spirits". A copilot. A trusted companion. A very smart PhD that fails sometimes. Etc. We don't need any of these pronouncements about "what it is"; they're only detrimental. It sounds like people cargo-culting Steve Jobs (and perhaps it is exactly that).
There are other scenarios: the AIs might decide that they are more alike than not and team up against humans. Or the AI that first achieves runaway self-improvement pulls the plug on the others. I don't know how it will play out, but there are serious risks.