Early generative artificial intelligence applications like ChatGPT can create writing and art on command, a capability many believe marks a sea change in what computers are able to do. Calls to manage the technology’s further development through top-down “pauses” and heroic attempts at regulation now come from many of the people who seemingly know it best, including OpenAI CEO Sam Altman.
Headline-grabbing doomsday scenarios appear less a response to the technology itself than the product of cultural baggage from famous thought experiments and science fiction films far older than the applications now being called AI. Beneath this futuristic veneer, the AI debate covers highly familiar territory. Concerns over generative AI’s capacity to mislead the public, replace information economy workers, and facilitate student cheating are familiar boogeymen from the many ongoing debates over the technological boom of recent decades.
Critics of these calls for regulation, licensing, and other preemptive measures against bottom-up development of generative AI have rightly focused on several interrelated points. Innovation resembles an evolutionary process far more than the work of heroic inventors, meaning many future uses are impossible to predict but all too possible to foreclose with short-sighted rules based on today’s knowledge. Experts’ tendency to overestimate their own capabilities, the media’s tendency to fuel panics, and governments’ incentives to allay public fears reinforce each other. And overregulation early in a technology’s development tends to give today’s firms a path to lasting market dominance by suppressing new competition. No matter the intent, such “pause and regulate” approaches to AI would at least partially close the process of innovation to fresh ideas from new innovators.
But another danger lies in excluding the consumers and end users of these novel applications from their crucial role in further development. Consumers and end users are not merely the beneficiaries of innovation; they contribute indispensably to the process itself. Through market mechanisms, they provide feedback on new ideas, informing innovators in a continuous stream of many small interactions that generally steers technology toward beneficial uses a room full of the greatest minds could never anticipate. Such free exchange also plays a unique and important role in mitigating the impacts of technology that observers initially fear.
The reliably unpredictable development of internet applications offers numerous examples of consumers helping guide innovations to places few people foresaw. In the early days of mp3 downloads, whether purchased legally or obtained through file sharing, companies assumed consumers would place a high value on owning files on their hard drives. But as internet connections became more reliable and consumers grew comfortable with the model, streaming emerged as the truly disruptive force in the music industry. Social media platforms like Facebook were viewed as cultural curiosities before mass adoption began revealing a large and still-evolving list of popular uses.
The role of consumers and end users is especially important in the development of foundational technologies: advances that deliver benefits across a wide array of applications rather than a single use case. Like the internet technology noted above, generative AI will almost certainly derive its benefits from helping people perform tasks, access information, and communicate ideas. While the assumption that AI is a race toward computers that think like humans is proving hard for some to let go of, applications like ChatGPT work by being even better at what computers already did well: making calculations with extremely large sets of data extremely fast.
Those currently most panicked about AI tend to view the disruptions still rippling through society from social media as mistakes that could somehow have been avoided by a planned approach like the one they now propose. This idea is fatally flawed: the existence of applications like social media, let alone their usefulness and challenges, became clear only as participants in a market process gradually developed them.
None of this vindicates a utopian view of AI or the assumption that the march of progress is a clean upward ascent. But no less naïve is the view expressed by the Future of Life Institute’s open letter urging an AI “pause”:
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.
In “debugging” generative AI, the foresight of a few early inventors will fall far short of an open process that includes many end users. But intentionally foreclosing the possibility that end users will find mistakes unforeseen by designers seems the height of recklessness.
Take, for example, the phenomenon of so-called “hallucinations” in AI chatbots. The term, yet another misleading analogy to the human mind, refers to the propensity of the large language models around which chatbots are built to report demonstrably false information as true. GPT-4, recently rolled out by OpenAI, apparently improves considerably on this problem, though it is not yet close to eliminating it. User experiences and widespread reporting of such complex errors in the first three versions provided a wealth of examples, painting a far richer picture of the errors’ size and scope than programmers alone could have anticipated.
We will learn from these examples as generative AI develops, and we will face new challenges not yet foreseen. The messy cycle of disruption and adjustment from which policymakers and innovators now try to protect users will only become messier, and its corrections less effective, if they succeed.
Concerns about generative AI causing job losses mirror longstanding concerns about automation displacing manufacturing jobs in advanced economies. People have long used such fears as grounds either to arrest the process of innovation or to prepare for a future where the labor of a significant number of people no longer has value. Automation in manufacturing is indeed ongoing, and further automation is all but inevitable.
What these grave predictions always miss is the incomparable power of millions of minds, over time, to find new ways of working that make such automation a complement to, rather than a substitute for, what human beings can do. A similar process is likely to unfold with generative AI, where many applications now beginning to develop can be described as automation for the information economy. As the technology and its uses continue to evolve, so will the ways information workers find to use it to their benefit. The “pause and regulate” approaches currently advocated by some, in which a limited group of experts strives to protect workers by maintaining the status quo, would only lead to more painful adjustments in the end.
The capacity of generative AI to misinform and defraud people on scales large and small currently looks to many like a particularly daunting challenge because of its centrality to debates about technology we already use. The idea that people should be protected from nefarious uses of generative AI, much as from ill-intentioned speech online, merits serious consideration. But when such attempts at protection limit information and other self-expression according to inevitably subjective standards, they undermine the ability of consumers of information to learn from their own and others’ experiences.
The idea that the tension and uncertainty created by recent technological advances are inevitable and indispensable features of continued progress is understandably disturbing to many. But this ongoing evolution is both the source of technology’s ultimate benefits and a corrective to the problems that arise. Expecting smart people equipped with today’s knowledge to stand in for a widespread process of learning and exchange would be a truly grave error.