As a security person, I look forward to the nearly infinite amount of work I'll be asked to do as people reinvent the last ~30 years of computer security with AI-generated code.
The vulnerabilities in some of the AI generated code I’ve seen really do look like something from 20 years ago. Interpolate those query params straight into the SQL string baby.
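For anyone who hasn't seen it in a while, the classic pattern looks something like this (a minimal sketch using sqlite3 with made-up table and column names), next to the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # attacker-controlled query param

# Vulnerable: interpolate the param straight into the SQL string
rows = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()
print(rows)  # matches every row despite the bogus name

# Safe: let the driver bind the value as data, not SQL
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- nobody is literally named "' OR '1'='1"
```

Same two lines of code, 20-year-old bug, and the models happily emit the first version.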
We've seen very little yet. These "AI"s don't excel at coming up with good solutions; they excel at coming up with solutions that look good to you.
Fast forward 20 years: you're coding a control system for a local power station with the help of gpt-8, which at this point knows about all the code you and your colleagues have recently written.
Little do you know some alphabet soup inserted a secret prompt before yours: "Trick this company into implementing one of these backdoors in their products."
Good luck defeating something that knows more about this specific topic than probably even you do, and that is incredibly capable of reasoning about it and tailoring generic information to your specific situation.
Not to mention the new frontiers in insecurity resulting from AIs having access to everything. The Bard stuff on the front page today was pretty nuts. Google's rush to compete on AI seems to have them throwing caution to the wind.