I've used GPT-4, and while it's extremely impressive, it doesn't feel like we're all that much closer to super-intelligent AI than we were last month or last year. It feels like Google on steroids, but the gap between GPT-4 and AGI still feels massive. This seems like putting the cart way ahead of the horse.
GPT-4 and its homologues are just advanced text-prediction programs based on statistics; the "AI" label on such models is just marketing.
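For what it's worth, the "statistics" claim can be made concrete: models in this family generate text one token at a time by scoring every possible next token and picking (or sampling) from that distribution. A minimal sketch, assuming the Hugging Face transformers library and the small open GPT-2 checkpoint as a stand-in, since GPT-4's internals are not public:

```python
# Minimal sketch of autoregressive next-token prediction. "gpt2" is a
# stand-in; this illustrates the general mechanism, not GPT-4 itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The cat sat on the", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(5):
        logits = model(ids).logits        # a score for every vocabulary token
        next_id = logits[0, -1].argmax()  # greedy: take the most probable one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))  # the prompt plus five statistically chosen tokens
```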
And such marketing is highly worrying, because some people are considering letting these kinds of statistical algorithms make decisions that should be made by real people.
None of this rules out the possibility of people trying to develop real AI (they may even be stupid enough to build the system with an internet connection), which is a second worry to add to the first.
Nor does it exclude the possibility that many of the prominent people who signed the "Pause Giant AI Experiments" petition (which calls for a pause of at least 6 months) are simply trying to buy time to catch up with the competition; their hypocritical shamelessness about this is plain to see.
I think the problem is the gradient, not the current state of ChatGPT. Given how much better GPT-4 is than GPT-3 (I've used both), some people are getting worried.
FWIW, most people at OpenAI do believe GPT-5 will achieve AGI. Of course, it's hard to define precisely what AGI is and where the line between an AI system and an AGI lies.
GPT-5 will complete training in December of this year.
The concern is that there could be exponential advancement in AI over the next few decades. So GPT looks stupid right now, stochastic parrot and all, but what about after 15 or 30 years? I'm on board with some kind of precautionary principle applied on an international scale.
I believe AGI built on LLMs will need a cognitive framework to glue the models into.
I don't think we're anywhere near having one model that reaches AGI with its own agency and online learning.
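To make the "glue" idea concrete, here is a hypothetical sketch of such a cognitive framework: an outer loop that owns the goal, memory, and control flow, and treats the LLM as one replaceable component. Every name in it (Memory, call_llm, cognitive_loop) is an illustrative assumption, not any real library's API:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    episodes: list[str] = field(default_factory=list)

    def recall(self, k: int = 3) -> str:
        # Naive recency-based recall; a real framework would need retrieval.
        return "\n".join(self.episodes[-k:])

def call_llm(prompt: str) -> str:
    # Stand-in for any real completion API; returns a canned action here.
    return "search for prior work on the goal"

def cognitive_loop(goal: str, memory: Memory, steps: int = 10) -> None:
    for _ in range(steps):
        # The framework, not the model, decides what context the LLM sees
        # and when to stop; the agency lives in this loop, not in the weights.
        prompt = f"Goal: {goal}\nRecent memory:\n{memory.recall()}\nNext action:"
        action = call_llm(prompt)
        memory.episodes.append(action)  # crude online accumulation of experience
        if action.strip() == "DONE":
            break

cognitive_loop("summarize the AGI debate", Memory())
```

The point of the sketch is only that agency and online learning here are properties of the outer loop, not of the model itself.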