As artificial intelligence has become more sophisticated and accessible, governments have increasingly looked to the benefits this technology might provide. One hotly debated use involves monitoring phone calls between inmates in prisons and correctional facilities and outside callers, screening for specific words or phrases that can signal danger to inmates.
Reuters reported that a panel of congressional legislators asked the Department of Justice for a report on the feasibility of using artificial intelligence in federal prisons, signaling that legislators may be open to deploying this technology on a large scale. David Sherfinski and Avi Asher-Schapiro of Reuters wrote:
Prisons in the United States could get more high-tech help keeping tabs on what inmates are saying, after a key House of Representatives panel pressed for a report to study the use of artificial intelligence (AI) to analyze prisoners’ phone calls.
But prisoners’ advocates and inmates’ families say relying on AI to interpret communications opens up the system to mistakes, misunderstandings and racial bias.
The call for the Department of Justice (DOJ) to further explore the technology, to help prevent violent crime and suicide, accompanies an $81 billion-plus spending bill to fund the DOJ and other federal agencies in 2022 that the Appropriations Committee passed last month.
The technology can automatically transcribe inmates’ phone calls, analyzing their patterns of communication and flagging certain words or phrases, including slang, that officials pre-program into the system.
A House Democratic aide said in an emailed statement they were encouraging the DOJ “to engage with stakeholders in the course of examining the feasibility of utilizing such a system.”
Several state and local facilities across the country have already started using the tech, including in Alabama, Georgia and New York.
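The flagging workflow the excerpt describes (automatically transcribe a call, then match pre-programmed words or phrases against the transcript) can be sketched minimally. The watch-list terms and the sample transcript below are purely illustrative assumptions, not drawn from any actual system:

```python
import re

# Hypothetical pre-programmed watch list, including slang terms.
WATCH_LIST = {"shank", "package", "lockdown"}

def flag_transcript(transcript: str, watch_list: set[str]) -> list[str]:
    """Return the watch-list words found in an automatically
    transcribed call, lowercased and matched as whole words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return sorted(set(words) & watch_list)

# Illustrative call transcript; real systems work on ASR output.
hits = flag_transcript("They said the package arrives before lockdown.", WATCH_LIST)
print(hits)  # -> ['lockdown', 'package']
```

Even this toy version shows why slang matters: a term the officials never pre-programmed, or one rendered differently by the transcription step, simply never matches.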
Current issues with artificial intelligence (AI) in prisons suggest that prematurely embracing the software may pose more risks than benefits. One major difficulty with current AI call monitoring is the limited data the software has for comparing conversations. In the early days of using AI to analyze language, developers focused on popular languages and dialects. As a result, today's speech-analysis AI understands some forms of communication far better than others.
This facet of today’s AI becomes problematic when considering its use in the criminal justice system. Although most Americans speak English, American English has over 30 major dialects. Today, a sizeable number of inmates in American prisons do not speak the variety of English that many developers train artificial intelligence systems to recognize. For example, reports note that AI misinterprets African American English (AAE) at statistically significantly higher rates than other dialects. Stanford University researchers found, “The technology that powers the nation’s leading automated speech recognition systems makes twice as many errors when interpreting words spoken by African Americans as when interpreting the same words spoken by whites, according to a new study by researchers at Stanford Engineering.”
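The standard metric behind findings like the Stanford study's is word error rate (WER): the word-level edit distance between a reference transcript and the recognizer's output, divided by the number of words in the reference. A minimal sketch follows; the two example strings are hypothetical, not taken from the study:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by
    the number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Hypothetical dialect utterance and a hypothetical recognizer output:
# three of five words come back wrong, giving a WER of 0.6.
print(word_error_rate("we gon be there soon", "we going to bear soon"))  # -> 0.6
```

A recognizer that makes "twice as many errors" on one dialect means, in these terms, a WER roughly double that of the other dialect on the same words, and every extra error is another chance for a harmless phrase to be mis-flagged.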
As a result, using AI in its current form could inadvertently discriminate against individuals by flagging some conversations for human review at higher rates than others. Thus, employing AI in prisons before the technology can adequately handle the languages all inmates use would likely create problems for populations that already face discrimination.
A second difficulty with expanding the use of artificial intelligence in correctional facilities centers not on the practical limitations of the technology but on the degree to which managers should rely on AI for effective oversight. AI can undeniably help employees perform tasks more efficiently; however, correctional facility managers should avoid responding to current difficulties by over-relying on AI in prison administration. There must also be reasonable review and appeal when the AI flags a conversation; the system should not be presumed to always be right.
The labor-saving potential of AI has already grabbed the attention of prison directors nationwide. As with other industries, technology has brought major innovations to the world of corrections, but an overreliance on new modes of monitoring inmates can have negative consequences. One example: New Orleans invested $70 million in state-of-the-art camera updates instead of providing adequate housing to inmates with COVID-19.
Failing to physically oversee external calls could pose safety risks for inmates, even if callers know that artificial intelligence software is on the line. Even if AI could adequately understand all inmates’ calls, some individuals would likely seek to misdirect the software, just as some attempt to sneak contraband into facilities or continue outside criminal operations while incarcerated. If officials decide to rely exclusively on AI to monitor calls, inmates could simply use codewords or other techniques to evade the software, enabling easier planning of dangerous activities that could harm inmates and officers alike. On the flip side, an AI that mistakenly identifies harmless phrases as problematic can lead to unjust punishment of inmates.
However, the very real shortcomings of today’s artificial intelligence should not lead legislators to overreact in the opposite direction and conclude that the technology needs to be banned. Researchers are already working on ironing out some of the practical difficulties associated with using AI in overseeing prisoner conversations. If AI reaches the point of sophistication needed to be successful in monitoring prisoner conversations and corrections officers embrace it as a tool rather than a replacement, the technology could be groundbreaking.
Furthermore, fully outlawing the use of AI in prisons would preclude inmates from benefiting from this technology in the future. In a population in which at least half of individuals suffer from mental illness, and in which current prisons only increase the risk of developing mental illness and further behavioral issues, we should undoubtedly pursue technologies that allow us to improve the health of inmates across the country.
Ultimately, legislators should be wary of artificial intelligence’s current shortcomings before authorizing expanded use, but they must also be cautious about preemptively restricting a technology with life-saving potential.