Let the good hackers help: Why states need a white‑hat safe harbor now

Commentary


We simply don’t have the cybersecurity professionals to staff every agency and utility at the level the threat demands.

Cyberattacks are hitting state and city systems hard enough to become civic emergencies. In late January, a ransomware attack disrupted New Britain, Connecticut’s city network for days, forcing departments to operate with pen and paper while the FBI investigated. Last summer, St. Paul, Minnesota, shut down key internal systems and online services after what the mayor called a “deliberate, coordinated, digital attack”—serious enough that the governor deployed the National Guard’s cyber protection support. And in October 2025, 60 Minutes highlighted how Chinese hackers exploited a firewall weakness to gain a foothold in Littleton, Massachusetts’s electric-and-water utility network—proof that even a town of 10,000 can end up on the target list.

How can states possibly keep up with more advanced and persistent cyberattacks? They can’t, unless they let more people help. In particular, if independent “white‑hat” hackers—good‑faith security researchers who look for flaws and report them—had clearer legal protection at the state level, some of these weaknesses could be found and fixed long before a foreign actor or ransomware gang stumbles across them. But right now, state laws too often treat those would‑be helpers as potential criminals.

The incidents described above aren’t one-off cases; they’re symptoms of a broader pattern. In 2023, the National Association of Counties’ CIO told Axios that nearly seven in ten local and state government leaders reported facing ransomware, and the Federal Bureau of Investigation (FBI) reported that government facilities were the third-most-targeted sector by ransomware that year. Without more cyber capacity, states remain dangerously exposed.

The exposure isn’t hypothetical—it’s measurable. U.S. Environmental Protection Agency (EPA) inspections since September 2023 found that over 70% of inspected drinking-water systems failed to meet basic risk-and-resilience requirements, alongside “alarming cybersecurity vulnerabilities” like default passwords and shared logins. EPA and the U.S. Cybersecurity and Infrastructure Security Agency (CISA) have also warned that some water and wastewater systems leave internet-exposed control interfaces online—panels that can reveal sensitive system details and, in real incidents, have been used to change settings, disable alarms, and lock operators out, forcing a shift to manual operations. 

The broader attack surface is significant: One analysis found nearly 200,000 industrial control systems (ICS) publicly reachable over the internet, a figure up 10% since 2024 and projected to keep climbing. This is the real backdrop for state cyber policy: Essential systems are visible, reachable, and often misconfigured, while attackers are persistent and getting faster.

The threat is scaling faster than our defenses. Criminal groups and hostile states can reuse tools and techniques across thousands of targets, but most state and local governments are trying to defend sprawling networks with thin, overstretched teams, which is why federal help alone will never be enough.

Federal support matters: The Department of Homeland Security (DHS) and CISA can share intelligence and convene “communities of interest” around specific sectors (including wastewater), but they cannot monitor every state network and local operator. As Adam Luke, principal deputy undersecretary for DHS’s Office of Intelligence and Analysis, put it at the National Conference of State Legislatures’ Capitol Forum on Nov. 19, “the responsibility falls to federal, state and local governments, as well as the private sector owner-operators of critical infrastructure systems.” He added, “I don’t think the federal government can do this alone,” and “It’s going to take every state contributing to those to make sure that they’re safe. Everyone has some responsibility here.”

The problem is capacity: We simply don’t have the cybersecurity professionals to staff every agency and utility at the level the threat demands. CyberSeek counted 514,359 cyber job listings over the 12 months ending at the start of last summer and estimates there are only enough workers to fill 74% of those jobs, leaving a persistent gap across both public and private sectors. That’s exactly why states can’t afford to sideline the independent, good‑faith researchers already looking for these flaws. If white‑hat hackers can probe and report vulnerabilities without worrying that a prosecutor will treat them like criminals, they become a force multiplier for understaffed state and local cyber teams.

Many state computer‑crime laws define what counts as criminal hacking using phrases like “without authorization” or “exceeding authorization.” Those words determine when accessing or testing a system becomes a crime, but they’re rarely defined in a way that matches how real security research works, which often lives in the gray zone between clearly invited testing and obviously malicious intrusion. When permission is judged after the fact—sometimes based only on a terms-of-service violation—good-faith researchers who verify and report vulnerabilities can end up treated like criminals. Here’s how broad state laws get:

California’s Penal Code § 502 criminalizes a sweeping range of conduct done “without permission,” including merely accessing a computer system or copying data. 

Texas Penal Code § 33.02 makes it an offense to “knowingly access” a computer or network “without the effective consent of the owner,” with escalated penalties when the system belongs to the government or a critical infrastructure facility.

Florida Statutes § 815.06 similarly criminalizes willful, knowing access “without authorization or exceeding authorization” and explicitly increases penalties when conduct disrupts public services, such as water or other utilities.

These statutes do not provide a clear statewide safe harbor for good-faith vulnerability research. That ambiguity deters disclosure, discourages testing, and pushes researchers to either stay silent or work only where they can get ironclad permission, which many small utilities and local governments don’t have the maturity to provide.

At the federal level, the Supreme Court narrowed the scope of the Computer Fraud and Abuse Act (CFAA) in Van Buren v. United States. The Court held that a person “exceeds authorized access” only when they use valid credentials to get into files, folders, or databases that are technically off‑limits to them—not when they look at information they’re allowed to access but use it for an improper purpose. In practice, the CFAA now follows a “gates‑up‑or‑down” rule: crossing a technical access boundary can be a crime; violating an internal policy or terms of use, by itself, cannot. In 2022, the Department of Justice reinforced this narrower view with a charging policy instructing federal prosecutors not to bring CFAA cases against “good‑faith security research” and, in most cases, not to treat mere violations of terms of service or workplace computer‑use policies as criminal hacking.

But those are federal constraints that don’t bind state prosecutors. And state courts can also move in the opposite direction. As it stands, white-hat hackers remain vulnerable to state prosecution. Here are some examples of the trouble they can face today:

Coalfire penetration testers (Iowa, 2019): Two professional testers were conducting a courthouse physical/network test that the state had authorized, yet local law enforcement arrested them, and prosecutors filed burglary-related charges. The charges were later dropped, but the case became a stark warning that even written authorization can fail when state and county officials aren’t aligned.

Florida elections websites (2016): A security professional accessed elections-related sites using SQL injection and credentials obtained through the vulnerability, then publicized the weaknesses. He was arrested and faced felony unauthorized-access charges—an illustration of how quickly “showing a flaw” can become “computer crime” when permission and scope are disputed.

Missouri reporter threatened for exposing state data vulnerabilities (2021): A journalist found teacher Social Security numbers exposed in a state website’s HTML source code, viewable without bypassing any authentication. State leadership threatened criminal investigation and prosecution—again, not a conviction, but a vivid example of how quickly officials can label exposure as “hacking.”

This creates a situation in which states’ own laws leave the very people trying to strengthen public systems looking over their shoulders.

The state of Washington provides a good example of how to establish laws that protect white-hat researchers. The Washington Cyber Crime Act explicitly defines “white hat security research” and clarifies that “without authorization” does not include either: (1) white-hat security research; or (2) a “breach of a contract,” including terms of service or acceptable use policies. Additionally, it explicitly says that it is “not intended to criminalize terms of service violations,” and is meant to provide “sufficient space for ‘white hat’ hacking to protect our state.” 

Other states can—and should—go further by pairing a safe harbor for white-hat researchers with a modern disclosure framework. 

States should start with a clear statutory definition of “good faith” (white‑hat) security research—testing undertaken to identify vulnerabilities, performed to avoid harm, and aimed at improving security rather than stealing, extorting, or disrupting. Once that boundary is in the statute, lawmakers can make one important legal clarification: Good‑faith research is not “without authorization” or “in excess of authorization” under state computer-crime laws. And importantly, states should say that a terms-of-service or acceptable-use-policy breach alone doesn’t convert otherwise good‑faith research into a crime. This change removes the most common flaw in “authorization” cases: letting private contract language function like a criminal statute, which discourages reporting and keeps vulnerabilities hidden until the actual threat actor exploits them.

Finally, this safe harbor needs responsible handling requirements so prosecutors can still go after real wrongdoing while good actors have a clear compliance path. The law can condition protection on certain guardrails: no intentional disruption, no data theft or extortion, minimal access necessary to verify the issue, and a good‑faith effort to report the vulnerability to the owner/operator within a reasonable time, followed by a sensible window before public disclosure unless public safety requires urgency. A combination that includes a clear definition, a clear authorization rule, and clear behavioral conditions turns a safe harbor from “permission to hack” into something states actually need: a way to turn outside expertise into safer systems, especially when most jurisdictions are operating with skeleton cyber staffing.

This framework keeps every tool prosecutors need to go after extortion, data theft, and destructive intrusions while finally drawing a line between that conduct and research that meets clear, statutory good‑faith criteria. Washington’s law shows that balance is possible: you can protect space for white‑hat testing and still respond aggressively to real attacks.

Right now, too many independent researchers operate under a cloud of legal uncertainty created by state statutes and by the way law enforcement has applied them. Legislators have an opportunity to send a different message: If you look for vulnerabilities in good faith, document them carefully, and report them responsibly, you’re part of the state’s cybersecurity strategy.