New Crime-Predicting AI May Be Prone to Bias Just Like Humans

On Data, Crime, and AI

An artificial intelligence (AI) designed to help investigate gang crimes has experts in the field worried about its serious ethical implications.

The algorithm, which was presented in February at the Artificial Intelligence, Ethics, and Society (AIES) conference in New Orleans, Louisiana, is meant to help police fight crime, but it may not be immune to the kinds of mistakes traditionally filed under “human error.”

Predictive policing, the practice of using computer algorithms trained on data about past criminal activity to determine where and when the next crime is likely to happen, is not new. The new AI, however, is the first to focus on gang-related violence.

It’s essentially a machine learning algorithm built on a so-called “partially generative neural network.” This means the system can reach conclusions with less information than would normally be needed to achieve the same result. The AI was designed to look at only four details from a crime scene: the main weapon, the number of suspects, the neighborhood, and the exact location where the crime happened.
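To make the idea concrete, the sketch below shows one plausible way those four details could be turned into numbers and fed to a small neural classifier. It is written in Python with PyTorch, and the weapon categories, neighborhood list, and network layout are illustrative assumptions, not details of the published model.

```python
# Hypothetical sketch (not the researchers' code): encode the four crime-scene
# details and score them with a small binary classifier.
import torch
import torch.nn as nn

WEAPONS = ["handgun", "knife", "blunt_object", "other"]      # assumed categories
NEIGHBORHOODS = ["hollenbeck", "newton", "seventy_seventh"]  # assumed LAPD areas

def encode_incident(weapon, num_suspects, neighborhood, lat, lon):
    """Turn the four crime-scene details into a single numeric feature vector."""
    weapon_onehot = [1.0 if weapon == w else 0.0 for w in WEAPONS]
    hood_onehot = [1.0 if neighborhood == n else 0.0 for n in NEIGHBORHOODS]
    return torch.tensor(weapon_onehot + hood_onehot + [float(num_suspects), lat, lon])

# A small feed-forward classifier over the encoded features.
model = nn.Sequential(
    nn.Linear(len(WEAPONS) + len(NEIGHBORHOODS) + 3, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
    nn.Sigmoid(),  # probability the incident is gang-related
)

x = encode_incident("handgun", 2, "newton", 34.01, -118.26)
print(model(x))  # an untrained score; real use would require training on labeled data
```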

The researchers trained their algorithm using data from the Los Angeles Police Department (LAPD) on 50,000 gang-related and non-gang-related homicides, aggravated assaults, and robberies from 2014 to 2016. They then tested it using another set of LAPD data.

Because it is partially generative, which is the innovative side of this research, the AI can work even without an officer’s narrative summary of a crime, a gap it fills using the four factors above. Putting the pieces together, the AI then determines whether a crime was likely gang-related. Compared with a stripped-down version of the method, the partially generative AI reduced errors by up to 30 percent, according to the paper presented at the AIES conference.
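The article does not spell out how the gap-filling works, but the rough idea of imputing a missing narrative from the structured features can be sketched as below. The generator and classifier networks, their dimensions, and the predict helper are hypothetical stand-ins for illustration, not the researchers’ architecture.

```python
# Hypothetical sketch of a "partially generative" setup: when the officer's
# narrative summary is missing, a small generator imputes its embedding from
# the structured features, and the classifier consumes both branches.
import torch
import torch.nn as nn

STRUCT_DIM, TEXT_DIM = 10, 32  # assumed sizes of the two input branches

generator = nn.Sequential(     # fills in a narrative embedding when it is absent
    nn.Linear(STRUCT_DIM, 64), nn.ReLU(), nn.Linear(64, TEXT_DIM)
)
classifier = nn.Sequential(    # gang-related vs. not, from both branches together
    nn.Linear(STRUCT_DIM + TEXT_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

def predict(structured, text_embedding=None):
    """Classify an incident, generating the text branch if it is missing."""
    if text_embedding is None:
        text_embedding = generator(structured)
    return classifier(torch.cat([structured, text_embedding], dim=-1))

x = torch.randn(STRUCT_DIM)               # stand-in for an encoded incident
print(predict(x))                         # narrative summary unavailable
print(predict(x, torch.randn(TEXT_DIM)))  # narrative embedding available
```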

Questions of Objectivity

“This is almost certainly a well-intended piece of work,” Google software engineer Blake Lemoine, who was in the audience, told Science. “But have the researchers considered the possible unintended side effects?” Sure, the system could help identify gang-related crimes, but a flaw in it could also implicate an innocent person. Other experts voiced similar ethical concerns during the conference’s Q&A.

Observers also worry about the potential for bias, an important aspect of the ethical implications of this particular AI. Data used in predictive policing methods, as one commentator told The New York Times, can be skewed, especially when it comes to identifying particular communities as crime “hot spots.” Since machine learning makes decisions using the data it’s trained on, its conclusions could reflect this bias.

AI has been found to be prone to bias in the past. For example, one AI used in custodial decisions in the United Kingdom turned out to discriminate against the poor. While many hope that AI could be a more objective partner in police investigations, it is not immune to mistakes commonly believed to be inherently human.

While an AI able to predict gang crime may be an interesting idea, not even the system’s developers are completely confident about its potential applications. “It’s kind of hard to say at the moment,” University of California, Los Angeles anthropologist Jeffrey Brantingham, one of the researchers, acknowledged in an interview with Science. Given how similar AIs have ended up reinforcing racial or social biases and promoting discriminatory behavior, this work could benefit from more research before being deployed in the real world. At the moment, the ethical considerations seem to outweigh this AI’s practical applications.
