Police use AI software to write police reports

Police departments are often among the first adopters of new products from the technology industry, such as drones, facial recognition, predictive software, and now artificial intelligence. After already embracing AI audio transcription programs, some departments are now testing a new, more comprehensive tool: software that uses technology similar to ChatGPT to automatically generate police reports. According to an August 26 report from the Associated Press, many officers are already “excited” about the generative AI tool, which claims to cut 30 to 45 minutes from their routine paperwork.

Initially announced in April, Draft One is heralded by Axon as the “latest big leap to [the] moonshot goal to reduce the number of gun-related deaths between police and the public.” The company, best known for Tasers and law enforcement’s most popular body cameras, claims initial testing has saved users an hour of paperwork per day.

“When officers can spend more time connecting with the community and taking care of themselves, both physically and mentally, they can make better decisions that lead to more successful de-escalated outcomes,” Axon said in its announcement.

The company stated at the time that Draft One was built on Microsoft’s Azure OpenAI platform, and automatically transcribes police body camera audio before “leveraging AI to quickly create a draft story.” Reports are “prepared strictly from the audio transcript” according to Draft One’s “underlying model… to avoid speculation or embellishments.” After additional key information is added, officers must sign off on a report’s accuracy before it passes through another round of human review. Each report is also flagged if AI was involved in writing it.
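Axon has not published Draft One’s internals, so the following is only a minimal sketch of the workflow the article describes, with entirely hypothetical names: an AI draft generated strictly from a body-camera transcript, a mandatory officer sign-off before human review, and a flag marking AI involvement.

```python
from dataclasses import dataclass

# Hypothetical structure -- Axon has not published Draft One's API.
# It mirrors only the steps described in the article.

@dataclass
class DraftReport:
    transcript: str            # body-camera audio transcript (the sole source)
    narrative: str             # AI-generated draft narrative
    ai_generated: bool = True  # AI-assisted reports are always flagged
    officer_signed_off: bool = False

def generate_draft(transcript: str) -> DraftReport:
    """Stand-in for the LLM call; drafts strictly from the transcript."""
    narrative = f"Per body-camera audio: {transcript}"
    return DraftReport(transcript=transcript, narrative=narrative)

def submit_for_review(report: DraftReport) -> DraftReport:
    """A report cannot advance to human review until the officer
    attests to its accuracy, as the article describes."""
    if not report.officer_signed_off:
        raise ValueError("Officer must sign off on accuracy before review.")
    return report
```

In this sketch the sign-off is a hard gate rather than a checkbox, which is one plausible way to enforce the accountability step the article attributes to the product.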

[Related: ChatGPT has been generating bizarre nonsense (more than usual).]

Speaking with the AP on Monday, Noah Spitzer-Williams, Axon’s AI product manager, claimed that Draft One uses “the same underlying technology as ChatGPT.” ChatGPT’s generative large language model, designed by OpenAI, has often been criticized for its tendency to provide misleading or false information in its responses. However, Spitzer-Williams likens Axon’s capabilities to having “access to more buttons” than are available to regular ChatGPT users. Turning down the “creativity knob” reportedly helps Draft One keep its police reports factual and avoid the ongoing hallucination problems of generative AI.
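Axon has not said exactly which dial the “creativity knob” maps to, but in large language models it most plausibly corresponds to the temperature parameter, which controls how sharply next-token sampling favors the most likely word. A minimal, self-contained sketch over a toy next-token distribution:

```python
import math
import random

def sample_with_temperature(token_probs, temperature, rng=None):
    """Re-weight a next-token distribution by temperature and sample.

    temperature -> 0 approaches greedy decoding (always the most likely
    token); higher temperatures flatten the distribution, making rarer
    -- "more creative" -- tokens more likely to appear.
    """
    rng = rng or random.Random(0)
    if temperature <= 1e-6:
        # Greedy: deterministically pick the single most likely token.
        return max(token_probs, key=token_probs.get)
    # Convert probabilities to logits, scale by 1/temperature, renormalize.
    scaled = {tok: math.log(p) / temperature for tok, p in token_probs.items()}
    top = max(scaled.values())
    weights = {tok: math.exp(v - top) for tok, v in scaled.items()}
    total = sum(weights.values())
    probs = {tok: w / total for tok, w in weights.items()}
    return rng.choices(list(probs), weights=list(probs.values()))[0]

# Toy distribution for completing "The suspect fled on ___"
dist = {"foot": 0.70, "a bicycle": 0.25, "a unicorn": 0.05}

print(sample_with_temperature(dist, 0.0))  # greedy: foot
```

At temperature 0 the model always emits the most probable continuation, which is the behavior a factual-report tool would want; raising the temperature lets lower-probability (and potentially hallucinated) continuations through.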

The scope of Draft One currently appears to vary by department. Oklahoma City Police Department Captain Jason Bussert claimed his department of 1,170 officers currently only uses Draft One for “minor incident reports” that do not involve arrests. But in Lafayette, Indiana, AP reports that police serving the city’s nearly 71,000 residents have free rein to use Draft One “in any case.” Faculty at neighboring Purdue University in Lafayette, meanwhile, argue that generative AI simply isn’t reliable enough to handle potentially life-changing situations like run-ins with police.

“The large language models that underlie tools like ChatGPT are not designed to generate truth. Instead, they string together plausible-sounding sentences based on prediction algorithms,” said Lindsay Weinberg, a clinical associate professor at Purdue who focuses on digital and technology ethics, in a statement to Popular Science.

[Related: ChatGPT’s accuracy has gotten worse, study shows.]

Weinberg, who is director of the Tech Justice Lab, also claims that “almost every algorithmic tool you can think of has been shown time and time again to reproduce and reinforce existing forms of racial injustice.” Experts have documented many instances of race- and gender-based bias in large language models over the years.

“The use of tools that make it ‘easier’ to generate police reports in the context of a justice system that currently supports and punishes the mass incarceration of [marginalized populations] should be deeply concerning to those who care about privacy, civil rights and justice,” Weinberg said.

In an email to Popular Science, an OpenAI representative suggested directing questions to Microsoft. Axon, Microsoft and the Lafayette Police Department did not respond to requests for comment as of press time.