An international AI regulatory agency would threaten free speech, politicize AI


A highly invasive, active, and empowered government could monitor AI use and punish anyone not using it for approved purposes. 

A “call for the immediate development of a global, neutral, non-profit International Agency for Artificial Intelligence (IAAI)” by officially recognized experts is a modest proposal worthy of Jonathan Swift. Unfortunately, a recent modest proposal by Gary Marcus and Anka Reuel in the pages of The Economist is not satire. 

The Economist piece seems prompted by the recent emergence of powerful forms of generative AI, such as ChatGPT. Generative artificial intelligence is so called because it can generate text or images when prompted. You can ask a generative AI program to write a poem about Thanksgiving told from the turkey's point of view or to draw a picture of Elvis in the style of Leonardo da Vinci. Recently, an AI-generated image won a major photography competition.

Generative AI is a big deal—real science-fiction stuff. It is not surprising that some folks are freaking out about it. In The Economist, Marcus, an emeritus professor at New York University, and Reuel, a Ph.D. student at Stanford, are certainly in panic mode. To make their case for this International Agency for Artificial Intelligence, they trot out a tried-and-true formula to lobby for expert power, including the power to tell you what is true and false. 

As experts so often do, Marcus and Reuel start by trying to scare you. There’s no discussion of the amazing advances in medicine, research, and social services that AI is already enabling.

“Scientists have warned that these new tools could be used to design novel, deadly toxins. Others speculate that in the long term there could be a genuine risk to humanity itself,” they write.

The expert trope of forecasting doom for anyone foolish enough to flout expert advice was flagged long ago by sociologists Peter Berger and Thomas Luckmann in their 1966 book, The Social Construction of Reality. They wrote, “Thus the general population is intimidated by images of the physical doom that follows” from going against the advice of the expert. We saw the same trope in the recent COVID-19 pandemic, as I discussed at the time.   

Once Marcus and Reuel spin this frightening scenario, they offer their solution: the non-profit International Agency for AI. But don’t worry—it will all be built “with guidance and buy-in from governments, large technology companies, non-profits, academia and society at large, aimed at collaboratively finding governance and technical solutions to promote safe, secure and peaceful AI technologies.”  

Marcus and Reuel say they are just seeking solutions. What could possibly go wrong? According to the suggestions in the piece, we’re not supposed to worry that two significant stakeholders in this enterprise—governments and large technology companies—have a lot of power and influence. And invoking “society at large” sounds good until you ask two vital questions: Who will represent society at large? And who will decide who represents society at large?

Combating misinformation would be among the central missions of the IAAI, Marcus and Reuel say: "The IAAI could likewise convene experts and develop tools to tackle the spread of misinformation. On the policy side, it could ask, for instance, how wide-scale spreading of misinformation might be penalized."

Thus, powerful experts would decide for you what is and is not misinformation. And if you are “found to have spread misinformation,” you will be “penalized.” 

Sure, any penalties would be meted out by government actors, not the IAAI itself. But Marcus and Reuel assert that IAAI experts would serve as advisors in fighting misinformation. The IAAI would provide a valuable service to any political authority seeking to control speech. The local king, president, or prime minister could point to it as an external authority. It could then be argued that it is not the president, prime minister, or king who says you are spreading misinformation: It is the IAAI.

Thus, the IAAI would be something like an international version of the “Disinformation Governance Board” proposed and shortly thereafter disbanded by the Biden administration. The idea of the IAAI is the stuff of nightmares and dystopian novels. As I noted in The Wall Street Journal in a piece with Abigail Devereaux about the Disinformation Governance Board, “[T]here’s one small problem with empowering ‘truth experts’: Experts are people.”

If you give experts power, they will abuse that power sooner or later.

Marcus and Reuel claim successful precedents with the International Atomic Energy Agency (IAEA) and the International Civil Aviation Organization (ICAO). They admit, “The challenges and risks of AI are, of course, very different and, to a disconcerting degree, still unknown.” And yet they plow forward with their comparisons, both of which are utterly spurious.  

The IAEA makes for a poor comparison: it is an intergovernmental organization, not a collection of stakeholders including governments, "large technology companies, non-profits, academia and society at large." And, like the ICAO, its goals are more finite, familiar, and, vitally, measurable. Signatories agree to an inspection regime, which makes it hard to hide deviations from agreed principles and practices.

Unlike nuclear technology, generative AI changes rapidly and is used by many dispersed actors for an unspecifiable array of purposes, many of which will be formulated in the future and cannot be imagined now. Who, then, would the signatories be, and what would an inspection regime look like?

The answer, of course, is that the mass of users of generative AI will not and cannot be signatories to any supposed agreement created by the proposed IAAI. Through such an organization, a highly invasive, active, and empowered government could monitor AI use and punish anyone not using it for approved purposes. 

Marcus and Reuel have heavenly hopes for the IAAI, but it would bring us closer to a bureaucratic hell. Their concrete proposal would create expert power that is easily abused. And they want to use that power to decide for you what is true and what is false. There is a word for governments that try to control your thoughts, and that word is tyranny. Value expertise, but fear expert power.