Commentary

States should think twice before regulating AI

AI is already being used in important ways that would be harmed by an AI freeze or a rollback.

With immensely popular products like ChatGPT entering the consumer market, artificial intelligence (AI) has been at the forefront of media debate for several weeks. In one reaction, thousands signed on to an open letter that calls for a pause in all AI product development until the software can be trained to be more “loyal,” among other attributes. Their main concern is that AI will develop beyond human comprehension, enabling it to conduct potentially harmful activities like encouraging violence, disseminating false information, or impersonating other humans. 

While we should remain reasonably cautious about AI's potential harms, the technology is already being used in important ways that a freeze or rollback would disrupt. Yet some states have put forward policies that would hamper how AI is already deployed, damaging both the economy and consumers.

States like Texas have proposed making social media a "public square" where no content moderation exists. While such a bill might not target AI specifically, most content moderation is, in practice, carried out by AI systems, so a ban on content moderation would amount to a ban on using AI to moderate each user's feed. Similarly, states like Colorado have given consumers the "right to opt out" of targeted advertising, meaning that firms can no longer deploy AI to serve relevant ads to those users. States examining issues like content moderation and data privacy should be aware that an AI freeze and other heavy-handed regulations could harm consumers and the economy.

Online platforms use AI to moderate content at a scale that simply cannot be matched by human review alone. More than 1 billion stories are posted to Instagram daily. YouTube receives more than 500 hours of video per minute. Then there is the massive amount of content uploaded to Facebook, Twitter, Reddit, and other online platforms. If platforms were not allowed to use AI to moderate this pipeline of content, these communication tools would quickly be overwhelmed by the sheer volume, as the rough calculation below suggests. Users' feeds would be spammed with repeat posts, scams, and unwanted advertisements to the point of becoming unusable. Nor is the problem only functional; it has a human cost. Human content moderators would be exposed to objectionable and disturbing text and images that AI could easily detect and block.
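
The arithmetic behind that claim is straightforward. Here is a rough back-of-envelope sketch in Python using the YouTube figure cited above; the assumption that a moderator spends an entire eight-hour shift doing nothing but watching uploads is purely illustrative, not a real staffing model.

```python
# Rough back-of-envelope sketch of why human-only review cannot keep up with
# upload volume. The 500-hours-per-minute figure comes from the article; the
# eight-hour reviewing shift is an illustrative assumption.

HOURS_UPLOADED_PER_MINUTE = 500          # YouTube upload rate cited above
MINUTES_PER_DAY = 24 * 60

hours_uploaded_per_day = HOURS_UPLOADED_PER_MINUTE * MINUTES_PER_DAY
review_hours_per_moderator_per_day = 8   # assumed: a full shift of watching

moderators_needed = hours_uploaded_per_day / review_hours_per_moderator_per_day

print(f"{hours_uploaded_per_day:,} hours of video uploaded per day")
print(f"~{moderators_needed:,.0f} moderators needed just to watch it all once")
# Output: 720,000 hours per day; roughly 90,000 full-time reviewers for YouTube alone
```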

Some policymakers suggest that if platforms are going to use AI to moderate content, they should no longer have immunity from legal liability for that content under Section 230 of the Communications Decency Act of 1996. Section 230 provides internet platforms with immunity from legal actions that may result from content posted by third-party users. If a user threatens another person with violence, it is the user who posted the threat, not the platform, who can be sued or charged. Section 230 makes clear that liability rests with the creator and promoter of content, not with the platforms or individuals who merely share it.

The argument is that if AI cannot be trusted to moderate content without any mistakes, then firms should surrender their immunity for using it. This would lead to disaster. If companies lost their Section 230 protection for employing AI to make content moderation decisions, it would halt promising progress. While current AI is impressive, it is still learning the nuances of human language. Someone may post a statement using the word "fight" idiomatically, as in the "fight" for justice, or they may actually be issuing a threat of violence, and AI sometimes misinterprets that context, as the toy example below illustrates. Content platforms are at the cutting edge of teaching AI to understand these nuances in human communication and to improve the quality of moderation decisions. Policy should embrace this progress rather than block it with ill-considered AI restrictions.
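
To make the ambiguity concrete, consider a deliberately naive sketch in Python. It is not any platform's actual system; it simply shows how a context-blind keyword filter treats an idiomatic "fight" and a genuine threat identically, which is the gap more sophisticated AI moderation is being trained to close.

```python
# Toy illustration only (not a real moderation system): a naive keyword
# filter cannot distinguish the idiomatic "fight" from a genuine threat.

posts = [
    "Join the fight for justice this weekend!",  # idiomatic and benign
    "I'm going to fight you after school.",      # plausibly a threat
]

def naive_keyword_filter(text: str) -> bool:
    """Flag any post containing the word 'fight', ignoring context entirely."""
    return "fight" in text.lower()

for post in posts:
    print(naive_keyword_filter(post), "->", post)

# Both posts are flagged True. Without understanding context, the filter
# treats advocacy and a threat the same way.
```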

AI also helps social media users find content relevant to their needs and interests. Social media feeds were originally purely chronological timelines of posts. Advances in computing power, combined with AI, have allowed platforms to study how users interact with content and to rearrange posts into a stream of information more likely to interest each specific user.

When Twitter defaulted to a curated timeline in 2016, it reported that the number of people who opted to return to the chronological feed remained in the "low single digits." Users retain the option of a chronological timeline, but according to Twitter, it is used far less than the AI-curated feed. Some users complain about curation because it depends on platforms collecting data about them, yet users continue to access platforms and enjoy the benefits despite these privacy concerns.

If platforms forced curated feeds onto people who did not want them, it would be bad business. Yet social media continues to grow, suggesting that many customers enjoy this curation. Feeds curated with AI can better match users with relevant content and ads. The social media advertising industry is now estimated at $116 billion and growing. Withholding Section 230 protections simply because AI is involved in suggesting content would make social media less engaging and undermine one of the fastest-growing and most economically important industries in the United States.

Social media platforms profit by selling targeted advertising. AI systems are often deployed to build a profile from user data and to target ads based on that profile. As with content recommendation, AI is improving the efficacy of user-tailored advertising. This serves both consumers and providers: consumers see more relevant ads, and advertisers spend their dollars more efficiently.

A bill recently signed into law in Utah bans targeted advertising for users under the age of 17. The law continues to allow generalized advertising but prevents the use of AI-driven systems to target ads based on user data. While the law is intended to protect children's privacy, it will most likely also prevent them from seeing useful ads that could be important in their lives. Using AI, firms are able to match profiles to relevant opportunities, including ads for higher education, vocational programs, apartments, healthcare, financial advisors, entry-level jobs, and more. While fears about automated advertising for minors remain hypothetical, these benefits are real and are being realized today.

Rather than have the government determine when and where AI should be deployed, both consumers and providers should remain aware of the risks while reaping the benefits. AI has proven benefits for social media users, platforms, and advertisers, benefits that are already widely enjoyed. Hypothetical harms are a poor basis for regulating AI and an even poorer basis for denying consumers the clear benefits of AI as it is used today.