Infographic: The social impact of Crime-stopping AI

Artificial intelligence is being applied to nearly everything in business and in our day-to-day lives. The hope is that machines can streamline the processes of our lives and make them more efficient. But what happens when machine learning is taught something it shouldn't be, like racism, sexism, or classism? Unfortunately, programmers are discovering that this happens all too often: our subconscious biases make their way into our code and can cause some pretty serious unintended effects. One area where the impact is being felt is crime-stopping AI.

Currently, artificial intelligence is being used to predict and prevent crime. The thinking goes like this: crime breeds more crime and, like a virus, tends to happen in clusters. So information about known crimes is fed into an algorithm, and the output tells police where to concentrate their patrols. It sounds reasonable, but there are a few problems with this model.

First, not all crimes are reported. Artificial intelligence algorithms depend on historical data to make predictions, and if that data is incomplete or inaccurate, so too are the conclusions drawn from it. Worse, the model can create a feedback loop: neighborhoods that are already heavily patrolled generate more recorded crime, which in turn tells the algorithm to send even more patrols there.
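To see why that matters, here's a minimal sketch in Python of a naive hotspot model trained only on recorded crime. The neighborhood names, numbers, and logic are purely hypothetical, not any vendor's actual system:

```python
# Hypothetical true incidents per week -- identical everywhere on purpose.
true_incidents = {"Northside": 10, "Southside": 10, "Eastside": 10}

# Hypothetical fraction of incidents that get recorded. Southside is
# patrolled more heavily, so more of its crime ends up in the database.
recording_rate = {"Northside": 0.4, "Southside": 0.8, "Eastside": 0.4}

# The model only ever sees recorded crime, never the true rates.
recorded = {n: true_incidents[n] * recording_rate[n] for n in true_incidents}
total = sum(recorded.values())

# Naive hotspot logic: allocate patrol time in proportion to recorded crime.
for name, count in recorded.items():
    print(f"{name}: {count:.0f} recorded incidents -> {count / total:.0%} of patrol time")
```

Run it and Southside gets half the patrol time despite having the same true crime rate as its neighbors, and those extra patrols will record even more of its incidents, making it look like an even bigger hotspot the next time the model runs.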

Second, artificial intelligence has a demonstrated history of bias. African-Americans are more likely to have their images included in mugshot databases due to the over-policing of Black communities, yet at the same time, facial recognition algorithms have repeatedly shown that they can't accurately identify people of color. When the ACLU tested Amazon's Rekognition facial recognition software, it incorrectly matched 28 members of Congress to mugshots of people who had been arrested. What's more, 39% of those false matches were people of color, even though people of color make up only about 20% of Congress, meaning they were misidentified at nearly twice their actual share.

Unfortunately, this technology is already being deployed in crime-fighting efforts around the world. License plate readers, facial recognition, gunshot-detection systems like ShotSpotter, and similar tools are already being used in communities to find and identify suspects. Last year, Huntington Park, California deployed a 400-pound autonomous robot police officer that patrols the streets and notifies police when it spots "blacklisted" individuals in the community.

Learn more about crime-fighting AI and its unintended consequences below.

Crime-Stopping AI
Source: Best MSW Programs

What do you think of crime-stopping AI? Let us know in the comments below, or reach out to us on Twitter or Facebook. You can also comment on our MeWe page by joining the MeWe social network.
