The European Union’s data privacy law, the General Data Protection Regulation (GDPR), stands as a landmark piece of legislation intended to safeguard consumer data. But its application has drawn mixed reactions. The regulation’s impact is particularly consequential for the artificial intelligence (AI) industry, whose products have become increasingly ubiquitous in daily life. Given data’s critical role in the success and advancement of AI technologies, assessing the influence of privacy protection regulation on this industry is not just timely but essential.
GDPR requires businesses to obtain explicit consumer consent before collecting, processing, or sharing personal data. While this appears to give consumers greater control over their data, the long-term effects on privacy are yet to be fully understood. Whether GDPR genuinely improves privacy is a subject of debate, but its impact on entrepreneurship has started to reveal itself as a series of constraints and operational challenges, particularly for AI startups. Complying with the regulation forces resource reallocation, hinders data access and use, and creates additional hurdles, thereby potentially stifling innovation, competition, and growth within the AI startup sector.
Data is essential to the success of AI companies: they need large volumes of training data to run cutting-edge algorithms such as neural networks and ensemble methods. GDPR limits how much of this data can be collected and can require some of it to be deleted. These restrictions change how companies train their algorithms, potentially limiting their ability to build economically impactful products, such as software that automates and optimizes business processes across industries, saving time and resources and potentially yielding industry-wide cost savings and productivity gains.
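To make the constraint concrete, here is a minimal, purely illustrative sketch (not from the article; all names are hypothetical) of the kind of gate consent rules can force onto a training pipeline: records whose subjects did not grant explicit consent must be filtered out before any model ever sees them, shrinking the usable training set.

```python
# Hypothetical sketch: restrict a training set to explicitly consented records,
# the sort of pre-training filter GDPR-style consent rules can require.
from dataclasses import dataclass

@dataclass
class Record:
    user_id: str
    features: list  # numeric inputs a model would train on
    consent: bool   # whether the user granted explicit consent

def consented_training_set(records):
    """Keep only records whose subjects gave explicit consent."""
    return [r for r in records if r.consent]

records = [
    Record("u1", [0.2, 0.9], consent=True),
    Record("u2", [0.5, 0.1], consent=False),  # excluded from training
    Record("u3", [0.7, 0.3], consent=True),
]

train = consented_training_set(records)
print(len(train))  # 2 of the 3 records remain usable
```

The point of the sketch is only that the filter is unconditional: however valuable the excluded records would be to the model, they are unavailable without consent.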
A group of researchers from Boston University and New York University’s Stern School of Business looked at how GDPR impacted AI startups in Europe. They documented several trends that occurred after GDPR was introduced.
First, they observed that AI startups are compelled to rethink their operational strategies, diverting existing resources and even creating new roles to deal with the regulation’s intricacies. The researchers note that “startups are reallocating their limited resources and creating new positions to deal with this regulation. Given that more than 65% of firms included in the survey have fewer than 50 employees, hiring and resource shuffling could be detrimental to longer-term success.” About 70% of companies confirmed creating a new role to manage the GDPR-related part of the business. A significant share of firms (63%) indicated that they diverted resources to accommodate GDPR’s requirements, and about three-quarters of responding firms confirmed having had to delete data to comply with the regulation.
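The data-deletion obligation the surveyed firms describe can be pictured with a small, hypothetical sketch (not drawn from the study): an erasure request removes every record held for a user, including data a startup might otherwise have kept for training.

```python
# Hypothetical sketch: honoring a GDPR-style erasure ("right to be forgotten")
# request by removing all personal data held for a given user.
user_data = {
    "u1": {"email": "a@example.com", "history": [0.2, 0.9]},
    "u2": {"email": "b@example.com", "history": [0.5, 0.1]},
}

def handle_erasure_request(store, user_id):
    """Delete all personal data held for user_id; return True if any was found."""
    return store.pop(user_id, None) is not None

handle_erasure_request(user_data, "u1")
print("u1" in user_data)  # False: the user's data is gone for good
```

Trivial as the mechanics are, the business consequence noted in the survey is not: once deleted, the data can no longer contribute to model training.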
Smaller firms are having an especially difficult time dealing with GDPR’s burdensome requirements, which makes competing in the industry even harder. The researchers note that “even though some smaller firms with less than $1M in revenue are exempt from GDPR, this regulation has become the de facto standard.” This happens because investors want to see fast revenue growth and anticipate that these small firms will have to comply very soon. It is, therefore, not surprising that smaller AI firms are more likely to report that GDPR has impacted them.
The two most notable attempts in the United States at regulating privacy are the American Data Privacy and Protection Act (ADPPA) and the California Consumer Privacy Act (CCPA). The ADPPA has stalled in Congress, and the full roll-out of the CCPA keeps being postponed. In total, 11 states have enacted some form of privacy legislation, but fewer than half of those laws have taken effect.
What has happened with the implementation of GDPR should serve as an important case study to inform the actions of U.S. legislators working on similar legislation. Policymakers need to weigh the potential benefits to consumers from enhanced privacy protections against the costs imposed on AI-driven technology.
GDPR imposes stringent requirements that significantly impact AI startups, particularly the smaller ones, by necessitating the reallocation of resources and even deleting valuable training data. U.S. policymakers should examine these trade-offs as they further refine data protection laws, striving to balance safeguarding consumer privacy and promoting a thriving technological landscape.