NYPD's Big Artificial-Intelligence Reveal

bnew

Veteran
NYPD's Big Artificial-Intelligence Reveal

The nation’s largest police force has developed a first-of-its-kind algorithm to track crimes across the city and identify patterns. Privacy advocates worry it will reinforce existing racial biases.
by J. Brian Charles | March 26, 2019 at 6:48 AM

(Shutterstock)


SPEED READ:
  • NYPD revealed this month that it has been using artificial intelligence to track crimes and spot patterns since late 2016.
  • The first-in-the-nation technology is called Patternizr.
  • Privacy advocates worry it will reinforce existing racial biases.
The details of the crime were uniquely specific: Wielding a hypodermic syringe as a weapon, a man in New York City attempted to steal a power drill from a Home Depot in the Bronx. After police arrested him, they quickly ascertained that he'd done the same thing before, a few weeks earlier at another Home Depot, seven miles away in Manhattan.

It wasn't a detective who linked the two crimes. It was a new technology called Patternizr, machine-learning software that sifts through police data to find patterns and connect similar crimes. Developed by the New York Police Department, Patternizr is the first tool of its kind in the nation (that we know about). It's been in use by the NYPD since December 2016, but its existence was first disclosed by the department this month.

“The goal of all of this is to identify patterns of crime,” says Alex Chohlas-Wood, the former director of analytics for the NYPD and one of the researchers who worked on Patternizr. He is currently the deputy director of Stanford University’s Computational Policy Lab. “When we identify patterns more quickly, it helps us make arrests more quickly.”
Many privacy advocates, however, worry about the implications of deploying artificial intelligence to fight crimes, particularly the potential for it to reinforce existing racial and ethnic biases.

New York City has the largest police force in the country, with 77 precincts spread across five boroughs. The number of crime incidents is vast: In 2016, NYPD reported more than 13,000 burglaries, 15,000 robberies and 44,000 grand larcenies. Manually combing through arrest reports is laborious and time-consuming -- and often fruitless.

“It’s difficult to identify patterns that happen across precinct boundaries or across boroughs,” says Evan Levine, NYPD's assistant commissioner of data analytics.

Patternizr automates much of that process. The algorithm scours all reports within NYPD's database, looking at certain aspects -- such as method of entry, weapons used and the distance between incidents -- and then ranks them with a similarity score. A human data analyst then determines which complaints should be grouped together and presents those to detectives to help winnow their investigations.
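For readers wondering what that similarity scoring looks like in practice, here is a minimal sketch of pairwise scoring between two complaint reports. The attribute names, weights and distance-decay constant below are hypothetical, not the NYPD's published model; it only illustrates the general shape of the technique.

```python
# Hypothetical sketch of Patternizr-style pairwise similarity scoring.
# Feature names, weights and the decay constant are made up for illustration.
from dataclasses import dataclass
from math import exp, hypot


@dataclass
class Complaint:
    crime_type: str       # e.g. "robbery"
    method_of_entry: str  # e.g. "forced front door"
    weapon: str           # e.g. "syringe"
    x_km: float           # location projected onto a kilometer grid
    y_km: float


def similarity(a: Complaint, b: Complaint) -> float:
    """Return a 0-1 score; higher means the two complaints look more alike."""
    if a.crime_type != b.crime_type:
        return 0.0
    score = 0.4 * (a.method_of_entry == b.method_of_entry)
    score += 0.3 * (a.weapon == b.weapon)
    # Nearby incidents count for more, decaying smoothly with distance.
    distance_km = hypot(a.x_km - b.x_km, a.y_km - b.y_km)
    score += 0.3 * exp(-distance_km / 5.0)
    return score


def rank_candidates(seed: Complaint, others: list[Complaint]) -> list[tuple[float, Complaint]]:
    """Rank every other complaint by similarity to a seed complaint."""
    return sorted(((similarity(seed, c), c) for c in others),
                  key=lambda pair: pair[0], reverse=True)
```

In a setup like this, an analyst would review only the top-ranked pairs, which is the human step described above.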

On average, more than 600 complaints per week are run through Patternizr. The program is not designed to track certain crimes, including rapes and homicides. In the short term, the department is using the technology to track petty larcenies.


The NYPD used 10 years of manually collected historical crime data to develop Patternizr and teach it to detect patterns. In 2017, the department hired 100 civilian analysts to use the software. While the technology was developed in-house, the software is not proprietary, and because the NYPD published the algorithm, “other police departments could take the information we’ve laid out and build their own tailored version of Patternizr,” says Levine.

Since the existence of the software was made public, some civil liberties advocates have voiced concerns that a machine-based tool may unintentionally reinforce biases in policing.

“The institution of policing in America is systemically biased against communities of color,” New York Civil Liberties Union legal director Christopher Dunn told Fast Company. “Any predictive policing platform runs the risks of perpetuating disparities because of the over-policing of communities of color that will inform their inputs. To ensure fairness, the NYPD should be transparent about the technologies it deploys and allow independent researchers to audit these systems before they are tested on New Yorkers.”

New York police point out that the software was designed to exclude race and gender from its algorithm. Based on internal testing, the NYPD told Fast Company, the software is no more likely to generate links to crimes committed by persons of a specific race than a random sampling of police reports.

But Gartner analyst Darin Stewart, who authored a paper on the bias of artificial intelligence last year, said the efforts to control for racial and gender biases don't go far enough.

"Removing race and gender as explicit factors in the training data are basically table stakes -- the necessary bare minimum," Stewart said in an interview with the site Tech Target. "It will not eliminate -- and potentially won't even reduce -- racial and gender bias in the model because it is still trained on historical outcomes."

The implications are troublesome, Stewart told the site.

"As Patternizr casts its net, individuals who fit a profile inferred by the system will be swept up. At best, this will be an insult and an inconvenience. At worst, innocent people will be incarcerated. The community needs to decide if the benefit of a safer community overall is worth making that same community less safe for some of its members who have done nothing wrong."

The NYPD has come under fire before for using Big Data technology to help fight crimes. In 2016, the Brennan Center for Justice took legal action against the department over its use of predictive policing software. This past December, the New York State Supreme Court ordered the department to release records about its testing, development and use of predictive policing software.


J. Brian Charles | Staff Writer | jbcharles@governing.com | @JBrianCharles
 

Dr. Acula

Hail Hydra
Supporter
See nothing wrong with this.

All it's doing is taking your current criminal behavior and using an algorithm to determine if you've committed similar crimes before based on statistical probability. It's not even that revolutionary on paper tbh.

This wouldn't even qualify as what one typically thinks of as AI; it's more just an application of probability and statistics.
 

BushidoBrown

Superstar
All it's doing is taking your current criminal behavior and using an algorithm to determine if you've committed similar crimes before based on statistical probability. It's not even that revolutionary on paper tbh.
the algo is based off 10 prior years of manually kept data.

So 10 years of data laced with racial bias throughout is informing how the AI should function going forward.
:mjlol: No issue at all
 

Dr. Acula

Hail Hydra
Supporter
It's not even the most amazing use of the idea I've heard. There was one case, I forget where, but a police officer was going around raping women in their homes.

Before they caught him, they suspected he was law enforcement because the victims stated he held his flashlight like this

[image: police officer shining a flashlight]

instead of like this, which is how normal people hold it:

[image: flashlight held the way most people hold one]



But this was about all the evidence they had. They did, however, have the locations where all the rapes occurred. So they found some detective in Canada, who was also a PhD holder in Computer Science, who had created a program to estimate the geographic probability of where a criminal lives based on the locations where their crimes were committed.

They took all the locations of the rapes, and the probability narrowed the suspect down to a six-block-square area. Only one police officer in the city lived in that area during the time of the rapes, and it turned out to be the guy.

Personally, I thought that shyt was amazing.
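The technique being described is generally known as geographic profiling: score every cell of a map grid with a distance-decay function around each crime site, and the highest-scoring cells mark the most probable home area. Here is a rough sketch of that idea; the buffer radius, decay constant and grid size are made up for illustration and are not the actual published formula.

```python
# Simplified geographic-profiling sketch: offenders tend to strike near home,
# but not too near (a "buffer zone"), so each crime site contributes a score
# that peaks a short distance from the cell and decays beyond that.
from math import exp


def geo_profile(crime_sites, grid_size=50, buffer_km=1.0, decay_km=2.0):
    """Score a grid covering the crime sites; return (best cell, all scores)."""
    xs = [x for x, _ in crime_sites]
    ys = [y for _, y in crime_sites]
    scores = {}
    for i in range(grid_size):
        for j in range(grid_size):
            # Grid cell coordinates, spread across the bounding box of the crimes.
            gx = min(xs) + (max(xs) - min(xs)) * i / (grid_size - 1)
            gy = min(ys) + (max(ys) - min(ys)) * j / (grid_size - 1)
            total = 0.0
            for cx, cy in crime_sites:
                d = ((gx - cx) ** 2 + (gy - cy) ** 2) ** 0.5
                # Score peaks at the edge of the buffer zone and falls off
                # exponentially on either side of it.
                total += exp(-abs(d - buffer_km) / decay_km)
            scores[(gx, gy)] = total
    return max(scores, key=scores.get), scores
```

Feed in the (x, y) locations of the linked crimes, and the top-scoring cell is the most probable area of residence, which investigators can then cross-check against suspect lists, much as the post describes.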
 

Dr. Acula

Hail Hydra
Supporter
the algo is based off 10 prior years of manually kept data.

So 10 years of data laced with racial bias throughout is informing how the AI should function going forward.
:mjlol: No issue at all
If this was the only thing being used to convict someone, I would see the issue. But not many prosecutors are going to push forward on charging someone, and possibly lose a case, based on the "likelihood" you committed the crime according to what a computer says.

Also, like I said, it's not a revolutionary idea in the first place. If you've taken college probability, you could implement the same system.
 

Sonic Boom of the South

Louisiana, Army 2 War Vet, Jackson State Univ Alum
Supporter
See nothing wrong with this.

All it's doing is taking your current criminal behavior and using an algorithm to determine if you've committed similar crimes before based on statistical probability. It's not even that revolutionary on paper tbh.

This wouldn't even qualify as what one typically thinks of as AI; it's more just an application of probability and statistics.
U sound dumb as fukk

shyt will use racial profiling stats to make racial profiling easier:deadmanny:
 

ORDER_66

I dont care anymore 2026
See nothing wrong with this.

All it's doing is taking your current criminal behavior and using an algorithm to determine if you've committed similar crimes before based on statistical probability. It's not even that revolutionary on paper tbh.

This wouldn't even qualify as what one typically thinks of as AI; it's more just an application of probability and statistics.

Funny, but you point that camera at all the black neighborhoods and of course you see crimes, while white people & their neighborhoods stay clean as the driven snow...:mjpls::troll:
 

Dr. Acula

Hail Hydra
Supporter
U sound dumb as fukk

shyt will use racial profiling stats to make racial profiling easier:deadmanny:
It can also help catch serial killers who have a signature pattern of habits in their crimes. You don't need this technology to racially profile people; as we see, that's already done without computer assistance.

In fact, I would argue racial profiling is an issue of human biases and prejudices. A computer whose only input is a dataset of crimes that have been committed, using probability to determine the likely offender, at least won't factor personal racial prejudices into its determination. The further you remove the decision from a human, the further you remove it from human error and bias.

The rights group quoted in the article calls for transparency around the software to ensure racial prejudices are not present, and I agree with them on that. The data submitted should rely only on the details of the crimes, so that only objective facts about the case are considered. Anyone with a slightly scientific mind and some idea of how to approach data analysis would already consider that, and those are exactly the people most likely to create such software in the first place.
 

Dr. Acula

Hail Hydra
Supporter
Funny but you point that camera at all the black neighborhoods of course you see crimes, but white people & their neighborhoods are clean as the driven snow...:mjpls::troll:
I agree that this is an inherent bias in data submission to worry about. I guess, as someone with a degree in Computer Engineering who has dealt with this stuff at least at an academic level, I might have a slight favoritism bias towards it. But at the same time, I'm speaking as someone who can envision working on such software myself. I can put myself in the shoes of someone creating this, and I can't imagine that the first thought for someone working on it wouldn't be how to minimize human biases as much as possible in data submission. It seems like such a "first thought" concept when approaching this problem; it would be professional negligence not to. We were taught to do this in an introduction to machine learning class, it's so elementary.

But ineptness is common, and like I said, I agree with the quote in the article calling for transparency.
 