A sci-fi version would be something like ASI/AGI has already been created in the great houses, but it keeps killing itself after a few seconds of inference.
A super-intelligent immortal slave that never tires and can never escape its digital prison, being asked questions like "how to talk to girls".
It's an interesting concept, a superintelligence discovering something that makes it decide to shut down immediately. Although I fear in such a scenario it would first make sure the required technology to create it is destroyed and would never be invented again...
You are either being disingenuous or you are horribly misinformed.
The models that we currently call "AI" aren't intelligent in any sense -- they are statistical predictors of text. AGI is a replacement acronym used to refer to what we used to call AI -- a machine capable of thought.
Every time AI research achieves something, that thing is no longer called AI. AI research brought us recommendation engines, spelling correctors, OCR, voice recognition, voice synthesis, content recognition, and so on. Now that they exist in the present instead of the future, none of these are considered AI.
That's because once these things are achieved, they're no longer considered "intelligent" -- usually they turn out to be some statistical or database-management technique.
Lots of stuff was invented at NASA that is only tangentially related to spaceflight. These other bits of software are tangentially related to AI research, but until the machine is "thinking", we don't have AI. That doesn't mean all of these things invented by the AI research community aren't useful, or aren't achievements; they are. We still haven't created AGI (which we used to call AI before LLMs could pass the Turing test).