The proof is in the pudding, and we are making more and more pudding every day. Instead of caring about naysayers, we need to be working under the assumption that AI is here to stay and rapidly expanding in scope, and we need to build the social and political structures to be able to handle it.
The saying is actually "The proof of the pudding is in the eating". I don't quite understand what "the proof is in the pudding" is supposed to mean and how that relates to making a lot of pudding ^^.
>we need to build the social and political structures to be able to handle it.
This would be a massive waste of resources, depending on how far you misinterpret the nature of the “AI that’s here to stay”. There is little to suggest that we’re anywhere near a general-purpose, “true” AI: the kind of superintelligent, creative, potentially world-ending and self-improving thinking machine that brings us to the singularity. It’s much fairer to characterize current technologies as algorithms that excel in certain fields, with distinct limitations that we’re still exploring; we have some idea, though not a perfect one, of where those limits are.
Functionally, they just learn probability matrices over a certain sequence of actions, and can search the problem space much faster than we could before, by simulating the event indefinitely. And they come with the issue that anything that can’t be simulated well and quickly can’t be “AI’d”, as well as the common issue of failing catastrophically because they don’t actually understand the object as a whole (e.g. a picture of a cat gets reclassified as an ostrich after a few key pixels are edited). And afaik, they’ve shown no ability to “change the problem”, a key component of creativity (if you can’t find a good answer to something, consider changing the question; our “AI”s do not).
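The cat-to-ostrich failure mode is easy to sketch on a toy model. Everything below is made up for illustration (a fake 100-“pixel” image, a random linear classifier, invented labels); real attacks do the same thing to deep networks by following the gradient, but a linear model makes the mechanics visible:

```python
import random

random.seed(0)

# Toy 100-"pixel" image and a made-up linear classifier: score = sum(w_i * x_i)
x = [random.gauss(0, 1) for _ in range(100)]
w = [random.gauss(0, 1) for _ in range(100)]

def score(img):
    return sum(wi * xi for wi, xi in zip(w, img))

def predict(img):
    return "cat" if score(img) > 0 else "ostrich"

if predict(x) == "ostrich":   # orient the toy model so x starts out as "cat"
    w = [-wi for wi in w]

margin = score(x)             # how confidently the model says "cat"

# Per-pixel budget just big enough to flip the decision: for a linear model
# the gradient of the score w.r.t. the input is simply w, so stepping each
# pixel by eps against sign(w) lowers the score by eps * sum(|w|).
eps = 1.1 * margin / sum(abs(wi) for wi in w)

x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv), f"max pixel change = {eps:.3f}")
```

The perturbation per pixel (`eps`) comes out small relative to the pixel scale, yet the label flips, because the change is coordinated with the model’s weights rather than with anything a human would call “ostrich-ness”.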
And this is naturally why they excel at games (almost by definition a repeatable simulation) that typically pose a very well-defined question. But at the same time, we don’t expect current AI to be capable of carrying its “strategies” forward to the next update of starcraft (the problem changes) without re-searching much of the problem space (there exist algorithms for transferring training to future networks; I don’t know how much progress they’ve made), because they don’t really have strategies in the first place, or a real model of how things interact (they struggle to predict new interactions without simulating them, or rather, “experiencing” them).
Which is also why it’s difficult to imagine AIs will ever truly be driving cars around with the current tech; rather, they’ll likely succeed as an awkward combination of neural networks, expert systems, heuristics and safeguards. We’d naturally expect most sci-fi use cases of AI, e.g. political decision making, to be the same. And they’ll be limited to the extent that we can render simulations.
And if we pretend these distinctly limited algorithms are in fact the predecessors to our post-singularity successors, simply because they’re able to do a few things we weren’t really expecting (just as computers have proved to be a whole lot more capable than the general population of the ’60s thought, but far less than what ’60s sci-fi imagined), we’ll do a whole lot of work for quite a bit of nothing.
The fact that it’s called AI doesn’t mean we’re quickly approaching Star Trek’s Data. It didn’t mean that in the last few AI hype cycles either.