
US police officers experiment with using AI chatbots to write crime reports

Some US police departments are using a technology that draws on audio recorded by officers' body cameras to write up incident reports in as little as eight seconds.

Some police departments in the US are experimenting with using artificial intelligence (AI) chatbots to produce the first drafts of their incident reports.
One such tool, built on the same generative AI model as ChatGPT, pulls the sound and radio chatter from the microphone of an officer's body camera and can churn out a report in eight seconds.
“It was a better report than I could have ever written and it was 100 per cent accurate. It flowed better,” said Matt Gilmore, a police sergeant with the Oklahoma City Police.
The new tool could be part of an expanding AI toolkit that US police are already using, such as algorithms that read license plates, recognise suspects' faces, or detect gunshots.
Rick Smith, CEO and founder of Axon, the company behind the AI product called Draft One, said the technology could eliminate much of the paperwork officers have to do, giving them more time for the work they want to do.
But, like other AI technologies being used by police, Smith acknowledged there are concerns.
He said they mainly come from district attorneys who want to make sure police officers know what’s in their report in case they have to testify in a criminal proceeding about what they’ve seen at the crime scene.
“They never want to get an officer on the stand who says, well, ‘The AI wrote that, I didn’t,'” Smith said.
The introduction of AI-generated police reports is so new that there are few, if any, guardrails guiding their use.
In Oklahoma City, police showed the tool to local prosecutors, who advised caution before using it on high-stakes criminal cases.
But in some other US cities, officers can use the technology on any case, at their own discretion.
Legal scholar Andrew Ferguson would like to see more public discussion of the benefits and potential harms of this technology before it is adopted more widely.
For one thing, the large language models behind AI chatbots are prone to making up false information, a problem known as hallucination that could add convincing and hard-to-notice falsehoods to a police report.
“I am concerned that automation and the ease of the technology would cause police officers to be sort of less careful with their writing,” said Ferguson, a law professor at American University working on what’s expected to be the first law review article on the emerging technology.
Ferguson said a police report is important in determining whether an officer’s suspicion “justifies someone’s loss of liberty”. It’s sometimes the only testimony a judge sees, especially for misdemeanour crimes.
Human-generated police reports also have flaws, Ferguson said, but it’s an open question as to which is more reliable.
Concerns about society’s racial biases and prejudices getting built into AI technology are just part of what Oklahoma City community activist aurelius francisco finds “deeply troubling” about the new tool, which he learned about from the AP.
He said automating those reports will “ease the police’s ability to harass, surveil and inflict violence on community members. While making the cop’s job easier, it makes Black and brown people’s lives harder”.
