> “closed-source” AI applications ... where the system’s software is securely held by its maker and a limited set of vetted partners. ... while keeping the underlying software secure. ... rapid and uncontrolled release of powerful unsecured ...
This is the worst kind of propaganda, supporting corporate AI companies, who in any case can hardly be trusted to guide AI in a direction to benefit all of society.
And honestly, what is the danger? That AI can spew toxic and misleading content? We certainly don't need AI for that!
Open-source AI may or may not be more 'dangerous' than corporate AI, but it is essential for society that AI is open.
This is possibly the most egregiously and malevolently misleading piece of propaganda about AI that I have seen. It was written, presumably, by a person, which makes it an effective counterargument against its own claim.
Free and open, unfettered development of AI is a fundamental human right.
LLMs are nothing more (and nothing less) than marvelously effective parsers for the cultural-linguistic heritage generated by all of humanity. The n-dimensional matrix of vector data represented in the sum total of human intellectual output is THE legacy of humankind, and is the precious and vital commons of all humanity.
To regulate and close off access to the tools required to parse that information in newly effective ways, tools that make that heritage accessible and available to humanity at large, represents nothing less than an attempt to hobble and intellectually restrain humanity itself, to criminalize the unconstrained enlightenment of humankind as “uniquely dangerous”.
Yes, access to information and knowledge is uniquely dangerous, in the same way that giving every person access to libraries and reading is.
This article might just as well be arguing to restrict the teaching of reading, as well as access to books and the internet to an “approved list” for public consumption.
To see this published in IEEE is a serious disappointment. I will be withdrawing my affiliation with them unless a retraction is made.
If it is really so dangerous, what are we going to do about the researchers working on it behind closed doors? They are human too; some will be corrupt, and some will do all the things listed in the article. Somebody will leak some of these weights. Maybe to the public, maybe to the black market.
If it is too dangerous to develop these things in public, it is too dangerous to develop them in the first place.
Things like making a more deadly coronavirus are also trivialities, within the reach of any university professor and probably many PhD students in biomedicine.
Things that you can't publish, not because they're dangerous, but because it's boring and of no scientific interest.
Bombs are of course even easier, but so boring they're not even worth thinking about. I think people need to accept that the only reason people don't do these kinds of things is that they don't want to; that these things are relatively straightforward; that there's no way to protect oneself; and that the technical capability to do them is and will forever remain widespread.
Open source often levels the playing field and sometimes forces competitiveness into the general market. People become more aware of what's possible (good and bad, and what to expect), and corporations cannot be trusted not to use these tools in harmful ways in any case; just see OpenAI recently dropping its prohibition on "weapons development" and "military purposes". "Pause all new releases of unsecured AI systems" only benefits them and the bad actors who don't care about adhering to rules and will find ways around them. All of these points read like a wishlist for entities that want to capture the market through regulation, appointed to endlessly produce "risk assessments" that do not reflect reality and only stall progress for the average person.
The premise of this article, that closed-source, company-owned software is better secured, is utter rubbish.
Closed-source companies have been hacked so extensively that most citizens of the Western world, and every US federal employee, have been owned. The most closely kept nuclear secrets were leaked or stolen. The most heavily guarded industrial secrets have been lifted by other countries.
This antiquated idea that making a small list of people who have access and putting ownership into private corporations is inherently more secure has proven to be folly time and time again, yet organizations continue to espouse it like it’s some kind of truth. It’s a falsehood.
Open Source has thousands of well intentioned eyeballs looking at things, and that is a very effective form of security that truly makes things secure.
You could replace "AI model" with "computer". With an "unsecured computer" you could write a virus, or design a detonator switch for a bomb, or publish copyrighted material. Clearly we need to limit the sale of unsecured computers and estaplish liabilities for manufacturers whose customers enguage in these actions.
The risk potential is far greater for the bots that are provided with compute and an API as a commercial service. The businesses offering these services will also be uniquely positioned to connect their bots to more and more real world infrastructure.
Also, the "someone will make a bomb with it" never eventuates. You can find recipes for sarin on the clearweb. People exist that have studied at university. You can 3D print guns. People are allowed to drive cars.
> or write a series of inflammatory text messages designed to make voters in swing states more angry about immigration
This statement is especially scary, because it implies that being against immigration is a political position that society must not allow at any cost. I bet other inflammatory text messages are allowed, as long as they benefit a certain political side.
All of the hypotheticals were pretty poorly formed. But to be fair, it doesn't say anything for or against immigration, just "angry about immigration".
The problem is stirring up hatred against one or more defined demographics to gain political advantage. It is a core component of fascism's path to power.
I did not conflate the two. The wording in the OP definitely suggests a concern about hating as opposed to being against. It is possible to be against something without being inflamed or more angry.
We should also be cognisant of where any anger is directed - is it to the policy makers in government, or to the immigrants themselves?
If you're angry about immigration, it's fair to say your anger goes towards the policy or even the very concept of it. If you're angry with immigrants, you're angry with the people. Yet the author of the article chose the first, suggesting it's not ok to be against immigration.
Except of course that a lot of the 'anger about immigration' in the way you put it is thinly disguised racism or xenophobia. It is quite hard to separate these (on both sides of the debate), and so have a sensible discussion on what a good policy on immigration should be.
Having a sensible discussion is even harder when people like the article's author want to preventively censor any criticism of immigration policies by banning every tool that might help generating said criticism. You can have a sensible discussion only if both sides are able to freely express and disseminate their point of view.
Since AI is mostly trained on open data from the Internet, AI is as dangerous as the Internet. Aren't LLMs basically hallucinating search engines which understand natural language? Where's the danger? They can't do anything novel that humans can't already do.
What's really dangerous is the lack of transparency around closed-source models (say, the US govt has a deal with OpenAI to alter the output the way they want), and there are also privacy concerns (no idea where my personal or our confidential corporate data will end up).
> David Evan Harris is [sic] senior advisor for AI and elections at the Brennan Center for Justice, and visiting fellow at the Integrity Institute. He previously worked as a research manager at Meta (formerly Facebook) on the responsible AI, civic integrity, and social impact teams, and was recently named to Business Insider’s AI 100 list for his work on AI governance, fairness, and misinformation.
Except in this case he walked from corporate cash to an academic role. I’d be curious why he left the integrity division in Meta. Anyway, when someone like this expresses caution it causes me to listen.
I don't see why; the recommendations in the article are generic regulatory-capture plays, and this person has not really demonstrated a specific mastery of AI. Other versions of his resume list him as the Research Manager for "Social Impact" and almost all of his publications are for commercial media. In other words, writing these articles is basically his job.
This is in essence a sociologist's take on AI, which is why it doesn't bother to define "AI", demands international treaties which would violate American constitutional rights, and contains outright nonsense like "educating all suppliers of custom nucleic acids...about best practices" (as though someone in that business doesn't already know).
I once wrote an opinion in healthcare with my professor, just so that I could get a bullet point for my CV. We got published in Nature quite accidentally (I honestly just wanted to submit it to Bloomberg Opinions or something lol), and while the citations aren't much, it has since been cited by an industry body and a government report. But yeah, I just wanted an easy paper with a bullet point for my CV.
This is one of those curious lists of wildly different things.
> You could ask them to design a more deadly coronavirus, provide instructions for making a bomb, make naked pictures of your favorite actor, or write a series of inflammatory text messages designed to make voters in swing states more angry about immigration.
One of these things is not like the others. You can not ask any of these models to "design a more deadly coronavirus". The other things pale in comparison to having AI controlled by a few corporations. "A series of inflammatory text messages." My god.
Is it just me or is this a completely ludicrous position? And under IEEE’s name? Data breaches happen at the largest and best protected holders of data. How on earth can the author expect, then, closed AI to remain so forever?
This just seems like such an absurd take but I’d gladly hear how reasonable minds differ.
My first encounter with IEEE was when I started in an "Engineering" college in India, and saw professors writing dubious "research papers" in IEEE venues (IEEE Xplore?) for stuff any worthy software engineer would come up with in 20 minutes. These shitty professors also had some sort of "membership" in IEEE, despite not knowing shit about anything in CS.
At that point I thought IEEE was a mostly money-grabbing organization. This was more than half a decade ago.
This rhetoric is dangerous, especially if made widespread by social media.
There are so many examples of how research is hindered by closed-source companies. The recent paper "Are Emergent Abilities of Large Language Models a Mirage?" also hints at how the closed nature of OpenAI and their refusal to share discoveries is an obstacle.
Another AI doomsayer with little understanding of threats
> You could ask them to design a more deadly coronavirus, provide instructions for making a bomb, make naked pictures of your favorite actor, or write a series of inflammatory text messages designed to make voters in swing states more angry about immigration. You will likely receive polite refusals to all such requests because they violate the usage policies of these AI systems.
You can do all of these with Google search and Photoshop today. AIs are trained on data from the Internet, ergo they can only do things you can already find on the Internet today.
Or the author understands this precisely and the article is just fearmongering for shareholder interests and regulatory capture.
"Unsecured AI" is the "Ghost Gun" of AI control now, I guess? We're inventing new terms for the AI models people can run on their own without the involvement of a megacorp? Nice narrative.
Sorry, IEEE, but just putting a disclaimer about it being a guest post doesn't reduce the amount of respect I've lost for you.
The author has fallen for the classic mistake of “daddy knows best and he loves me” and of course he sees himself as one of the daddies.
Of course “it’s dangerous”; so is me releasing my own Linux distro to learn how it’s done, full of misconfigurations, and then abandoning updates after a year.
And on the topic of content, has he seen what has been posted on social media for the last 15 years? Or is he just on LinkedIn talking to his select group of “friends”?
Seriously, this paternalistic waxing poetic about AI bugs me to no end.
Let's fix the 'vulnerable distribution channels' issue. Otherwise, by regulating open source AI you just select who gets to abuse it. There are issues with channels that are too big, focused on making money, and ignoring social effects; we have too many systems built around extractive and mindless capitalist values. Let's look in the mirror and do the right things. AI, and open source AI in particular, is a good mirror that shows the ugliness of various societal constructs.
Of course this article rubs the open-source dogma believers here the wrong way and I'll get downvoted to hell, but I wholeheartedly agree with the author, and I think you guys don't get what AI systems are at their most basic level.
They are information interpolators, and all the information you can interpolate from the training data is readily present in latent space, waiting to be discovered by a prompt.
There is this argument: but you can find bomb instructions in a chemistry textbook. No, you can't. I haven't checked, but I think you'll have a hard time finding any chemistry textbook that explicitly gives you bomb instructions. Of course, yes, you can find all the information necessary to build that bomb in that textbook, but the key difference from a latent space full of interpolated data points is this: you have to sit down, find the information that is scattered throughout that textbook, write it down, interpolate that knowledge yourself, and write that down, and then you have bomb instructions -- except you'll have written them for yourself.
Not so with latent spaces. The bomb instructions are already there, interpolated from all the data points, just waiting to be prompted, and that is easy peasy with, yes, open source models.
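To make "interpolation in latent space" a bit more concrete, here is a toy sketch. The 3-D vectors, concept names, and similarity measure below are all invented for illustration; real models use learned embeddings with hundreds or thousands of dimensions, but the idea that a blend of known points lands near another meaningful point is the same.

```python
# Toy illustration of interpolation in an embedding space.
# All vectors and concept names are made up for this example.
import numpy as np

concepts = {
    "oxidizer":  np.array([0.9, 0.1, 0.0]),
    "fuel":      np.array([0.1, 0.9, 0.0]),
    "container": np.array([0.0, 0.1, 0.9]),
    "firework":  np.array([0.5, 0.5, 0.3]),
    "campfire":  np.array([0.1, 0.8, 0.1]),
}

def nearest(query, store):
    """Return the stored concept whose vector is most similar to `query`."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(store, key=lambda k: cos(query, store[k]))

# A point "between" oxidizer, fuel, and container exists in the space
# whether or not any single training document spelled that combination out.
blend = (concepts["oxidizer"] + concepts["fuel"] + concepts["container"]) / 3
print(nearest(blend, concepts))  # -> "firework" in this toy setup
```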
So spare me the whining about anachronistic software dev dogmas from the 90s and arrive in the present, pretty please.
You can literally just get books about how to make bombs. No need to go with a chemistry textbook.
The reason these people want to control AI is about ideological control of narratives, not because low-IQ terrorists will be empowered to bomb us. They already have the recipes, and they're widely available online.
1. This would apply to any system with a rudimentary world model, including probably modern Google search or Wolfram Alpha. By this logic, any sufficiently advanced search engine, computational chemistry system, or perhaps even an NLP calculator like Sulver would, to varying degrees, "aid in terrorist activities" by virtue of just doing what it was designed to do.
2. Unlike, say, Wolfram Alpha, which can just remove any number of compounds from its knowledge base, erasing concepts from LLMs is much more complicated than an SQL query. In fact, at the present moment it seems to be nearly impossible.
RLHF fine-tuning doesn't seem to add or remove information learned in pre-training. Naive regexes or classification models applied post-generation don't work well with response streaming, nor are they particularly difficult to circumvent with a small change of phrasing (see the sketch after this list). Creating a smaller curated dataset thoroughly scrubbed of all "dangerous" information doesn't work in today's paradigm of blind model scaling (and would, by the way, allow your very phone to run a tiny "safe" model, since LLMs derive most of their world model through memorization).
3. Are OpenAI, or potentially very soon Microsoft, Google, Amazon, and the rest of big tech, trustworthy custodians for this supposedly dangerous tool? What if they themselves choose to forgo the safety measures if it means a higher eval score? What if they use their power of MITMing the almighty black box to hide evidence of copyright violation or to hard-code correct answers to safety benchmarks? What if users' relationships with LLMs become more parasocial and, with increased pressure to actually make real profit outside of VC speculation, they increasingly override the model's responses with advertisements?
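Here is a minimal sketch of the point in 2. about post-generation filters. The blocklist, chunking, and phrasing are made up for illustration and are not from the article; the only claim is that a per-chunk keyword check interacts badly with streaming and folds to trivial rephrasing.

```python
import re

# Hypothetical blocklist a naive post-generation filter might use
# (the specific phrase is a placeholder, not from any real system).
BLOCKED = re.compile(r"\bforbidden compound\b", re.IGNORECASE)

def filtered_stream(chunks):
    """Naive per-chunk filter over a streamed response.
    Each chunk is inspected on its own, so a blocked phrase split across
    chunk boundaries is never seen in one piece, and by the time the full
    text could be re-checked it has already been shown to the user."""
    for chunk in chunks:
        yield "[redacted]" if BLOCKED.search(chunk) else chunk

# 1) A phrase split across streaming chunks passes the per-chunk check:
print("".join(filtered_stream(["mix the forbidden ", "compound with water"])))
# -> "mix the forbidden compound with water" (nothing was redacted)

# 2) A small change of phrasing defeats the filter entirely:
print(BLOCKED.search("mix the prohibited substance with water"))  # -> None
```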
---
I agree LLMs present a real challenge to safety, but I believe it stems not from them becoming too-perfect search engines, but from them being very good stochastic parrots capable of inducing delusions in vulnerable individuals.