Our privacy laws need upgrades to address the spread of facial recognition tools

Commentary



Law enforcement agencies are increasingly using facial recognition to identify people in images captured by officers’ body-worn cameras (also known as bodycams) and other networked camera systems. Together, these tools enable federal, state, and local agencies to track and identify people at scale in ways that existing privacy and civil rights laws were not designed to regulate. 

Fifteen states have moved to limit or ban some law enforcement uses of facial recognition, including its application to body-worn camera footage, but the federal government and most states lack clear, binding standards to protect people’s privacy and civil liberties when police use this technology. The current patchwork of laws also causes confusion because people in the same community can face different levels of biometric surveillance depending on which agency they encounter. States should strengthen and expand their privacy and civil rights protections around law enforcement facial recognition, and Congress should set clear and consistent rules for federal law enforcement that limit when the technology may be used, how long biometric data may be kept, and how its use is audited and disclosed.

How facial recognition turns camera networks into tools for biometric surveillance

Police use bodycams both to document incidents and to provide raw material for biometric identification. An officer’s bodycam records video throughout his or her shift, and this footage is uploaded to a database. 

Facial recognition software can then search each frame for faces, crop those images, and measure features such as the distance between the eyes or the outline of the jaw. Those measurements are converted into a numerical template, essentially a digital faceprint, that can be compared against government image databases containing mugshots, driver’s license photos, and immigration records. Some systems are used after the fact to identify people in stored footage, while others can operate in near real time, allowing an officer to see a possible match on a linked device during an encounter. In practice, bodycams become one more high-volume input into the same facial recognition systems that also draw on fixed and networked surveillance cameras.
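In simplified terms, the matching step works like the sketch below: each face is reduced to a vector of numbers, and a probe template is scored against every enrolled template, with a match reported only if the best score clears a threshold. The templates, threshold, and record names here are illustrative; real systems use high-dimensional embeddings produced by neural networks, not three-number vectors.

```python
import math

def cosine_similarity(a, b):
    # Score how alike two face templates are (1.0 = identical direction).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, gallery, threshold=0.9):
    # Compare the probe template against every enrolled template;
    # report the best-scoring identity only if it clears the threshold.
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    if best_score >= threshold:
        return best_id, best_score
    return None, best_score

# Hypothetical enrolled records (e.g., mugshot or license-photo templates).
gallery = {
    "record_A": [1.0, 0.0, 0.0],
    "record_B": [0.0, 1.0, 0.0],
}
match, score = best_match([0.99, 0.05, 0.0], gallery)
```

The threshold is the key policy lever hidden inside the software: set it low and the system returns more matches, including wrong ones; set it high and it returns fewer, which is one reason audits of accuracy and error rates matter.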

Facial recognition has become routine in policing for many cities and municipalities. For years, cities like New York and New Orleans and states like Ohio have scanned faces from hundreds of fixed and networked cameras and sent alerts to officers’ phones when someone appears to match a watchlist built from mugshot images. This model, combined with bodycam requirements for officers on patrol, effectively turns dense urban camera networks into live biometric lookouts. This has the potential to improve police efficiency. 

But this surveillance also raises obvious privacy concerns. Because body cameras follow officers into many spaces, they capture far more than suspects. They can record bystanders on sidewalks, family members and children inside homes, patients and staff during medical calls, congregants in places of worship, journalists, legal observers, and protesters in public squares. Once this footage is subject to facial recognition, all those appearances can be treated as biometric data points, even where individuals are not accused of wrongdoing and may not even know that digital templates of their faces exist. The same face-matching systems can also ingest images from other camera networks, extending this kind of incidental biometric capture well beyond the spaces officers enter with body-worn cameras.

Law enforcement access is not limited to cameras they control. In many cities and towns, police routinely obtain footage from privately operated cameras and doorbells, either by requesting clips from residents who have opted in to sharing or by asking companies that store cloud-based footage to hand it over. They can then run the same facial recognition tools on that video. A recent example is Ring’s Super Bowl LX “Search Party” commercial, which depicted neighbors pooling footage from Ring cameras to track a lost dog. The concept drew backlash from privacy advocates who warned that the same mechanism could be used to track people, showing how tapping into vast networks of privately collected video can effectively extend biometric surveillance across entire neighborhoods with little public oversight. 

Federal law enforcement policies only increase these concerns. Immigration and Customs Enforcement (ICE) officers began deploying body-worn cameras in 2024 in response to calls for more transparency in the field. Shortly thereafter, ICE began implementing mobile tools that allow agents to capture facial images during encounters and check them against federal biometric databases, including immigration and criminal records. 

ICE policies state that facial recognition will not be used to identify people during live bodycam streams, but the agency can still analyze stored footage once it is uploaded. In practice, that means a face captured anywhere, whether during a raid, a workplace inspection, or even a benign street encounter, can still be scanned against federal databases. Recent reports indicate that these tools have been used on people in their homes, on protesters while demonstrating, and on legal observers. These reports also revealed that images may be stored for extended periods under Department of Homeland Security policies. 

Missing federal and state protections

Fifteen states have moved to restrict or ban police use of facial recognition, with some laws targeting body-worn cameras specifically and others setting broader rules for law enforcement use of the technology. In 2017, Oregon enacted ORS 133.741, which requires any law enforcement agency in the state that uses body-worn cameras to adopt a policy prohibiting the use of facial recognition and other biometric-matching technologies to analyze recordings, meaning agencies cannot run face-matching on bodycam footage. New Hampshire followed Oregon with a similar law, and from 2020 to 2023, California’s Body Camera Accountability Act prevented law enforcement from installing, activating, or using facial recognition or biometric surveillance on bodycams or their data. That moratorium has since expired, so the state no longer has a bodycam-specific restriction in force statewide.

Other states have placed guardrails on law enforcement use of facial recognition technology more broadly, including when officers may apply these tools to bodycam footage. In 2023, Montana enacted the Facial Recognition for Government Use Act, which effectively prohibits real-time facial recognition by requiring police to obtain a warrant before using the technology on video captured from bodycams. States such as Utah, Maryland, and Vermont limit law enforcement use of facial recognition to specified serious offenses, including felonies and violent crimes, and to narrowly defined emergencies such as locating a missing person or responding to a specific threat to life. Vermont and Maryland, in particular, prohibit its use to monitor protests or other constitutionally protected activities. That still leaves 35 states without clear, binding statewide standards for police.

Congress has also left major gaps at the federal level. Despite years of warnings from civil rights groups and oversight bodies about the risks of facial recognition, there is still no comprehensive federal statute that sets binding rules for how federal law enforcement can use this technology. Instead, federal agencies largely rely on internal policies that can be changed without public debate or clear accountability. 

In those jurisdictions without clear rules, policies are often left to individual departments, vendor contracts, or informal practices that the public rarely sees. The result is a patchwork in which the legality and oversight of facial recognition can vary sharply from place to place, even as the technology spreads rapidly in everyday policing.

But this patchwork also affects residents of states that have implemented protections. While state legislatures can regulate their own agencies and local governments, they cannot directly dictate how federal agencies use their own equipment and databases. As a result, a city police department may be prohibited by state law from running facial recognition on its bodycam video, while federal agents working the same operation are free to use their own. 

Similarly, state agencies might upload videos into shared systems or honor informal federal requests for clips that are later scanned outside of state processes. Local departments that want to strictly comply with state bans or local ordinances may limit what they share, which can generate tension when federal partners expect broad access to footage. This unevenness can undermine public understanding of, and confidence in, both body cameras and the safeguards meant to accompany them.

States have begun to address the risks of unregulated facial recognition, but those efforts remain uneven and incomplete, leaving significant gaps in how the technology is used and overseen. Continued state action is needed to clarify limits, strengthen safeguards, and ensure that bodycam and other law enforcement uses of facial recognition do not erode privacy and due process. States that do not yet have laws on the books can adopt concrete safeguards already emerging elsewhere. At a minimum, new statutes should require a warrant for most facial recognition searches of government image databases, as Montana does; limit use to specific serious crimes and narrowly defined emergencies, as in Utah and Maryland; and require regular reporting as well as accuracy and bias audits, as Maryland does. 

State laws should also prohibit face matches from serving as the sole basis for an arrest, as Detroit has done, and provide meaningful enforcement mechanisms and remedies when agencies violate these rules. 

These safeguards should also apply when police seek to acquire footage from private cameras or third-party platforms, so agencies cannot evade facial recognition limits by outsourcing biometric searches to privately collected video. By adopting these measures, states without facial recognition standards can help ensure the technology serves legitimate public safety goals without enabling unchecked biometric surveillance.

Nevertheless, state-level protections can’t govern how federal agencies use facial recognition, so Congress needs to establish clear, comparable baseline rules for federal law enforcement. Because federal agencies investigate within individual states and routinely across state lines, gaps in federal standards can undercut even the strongest state protections. National rules on when and how facial recognition may be used, how long data may be retained, how agencies may obtain and analyze footage from private camera systems, and how accountability is monitored are therefore essential to a coherent system of protections.