
Opinion: Houston Police Should Reconsider AI Partner

Last month, the Houston City Council approved a $178,000 police department contract with a company called Airship AI to expand the server space for 64 security cameras around the city.

(Photo: The Houston skyline with the sun just above the horizon reflected in the buildings.)
(TNS) — Two armed men stormed the Sunglass Hut on West Gray two years ago, threatened employees with a gun and stole thousands of dollars in cash and merchandise. The Houston Police Department got to work investigating — until they got a call from the store's parent company. Using facial recognition software that had quickly scoured driver's license and booking photos, the company's loss prevention department had found their guy: a 61-year-old Texan named Harvey Eugene Murphy Jr.

Never mind that Murphy was 2,000 miles away in Sacramento, Calif., at the time of the robbery. When Murphy returned to Texas and attempted to renew his license at a Harris County DMV, HPD arrested him and sent him to jail for 10 days. By the time Murphy's alibi cleared his name, he had been beaten and raped by three men in a jail bathroom, he alleges in a lawsuit filed earlier this year.

"Any one of us could be improperly charged with a crime and jailed based on error-prone facial recognition software," the lawsuit says. "The companies that use this kind of software know it has a high rate of false positives, but they still use it to positively identify alleged criminals."

Since 2019, at least six other people in the U.S. have been falsely accused of committing a crime after an inaccurate facial recognition scan — including a woman who was held for 11 hours while eight months pregnant. All six were Black, though Murphy Jr. is white. If we can't even get facial recognition right, how can we trust the high-tech tools more and more police departments are using to pinpoint criminals — and even to predict future crimes?

Last month, Houston City Council approved a $178,000 HPD contract with a company called Airship AI to expand the server space for 64 security cameras around the city. The Chronicle had previously reported the contract was for 64 new cameras, but Victor Senties, a spokesperson for HPD, told the editorial board that the department purchased only video storage space for the cameras HPD is already using. Senties insisted that despite the company name, the deal has nothing to do with AI or facial recognition. When we pressed him for more details on the cameras' capabilities, or on why the contract was with a company dedicated to "AI solutions," we got no answers.

The murkiness is worrisome. Especially when Airship AI's mission is clear: The company works to improve public safety and operational efficiency by "providing predictive analysis of events before they occur." That is in line with what then-Police Chief Art Acevedo saw as the future of the department. In a 2017 news conference, he said, "You can almost move into a predictive model where you might be able to determine, based on previous patterns of crime, what should we expect at any given month based on a deep dive, an analysis, of three to five years of crime data."

In the early 2010s, police departments around the country looking to do more with fewer resources gambled on a buzzy new idea: predictive policing. It resembled a less fanciful version of Steven Spielberg's "Minority Report." Machine learning algorithms would spot patterns in crime reports, arrest records, license plate images and many other types of data and spit out predictions of where and when a certain type of crime would occur, as well as who might be likely to commit it. The underlying theory was that, like earthquakes and their aftershocks, crime begets more crime nearby. People's past and present, then, could also shed light on whether crime was in their future.

On its surface, predictive policing isn't that different from cops huddling around a map, pinning crime spots and trying to figure out patterns that might help them deter crime before it happens. Or from a responsible citizen calling police about a suspicious person in their neighborhood. As one Chronicle letter writer recently pointed out in response to HPD's Airship AI contract, "Aren't we all urged to see something, say something?"

But just as the fear of terrorism after 9/11 primed Americans to unreasonably report their Muslim neighbors, the data fed into algorithms is susceptible to our own biases. Historically, Black people are more likely than white people to be reported for a crime — whether the person reporting is Black or white. Black neighborhoods, then, get disproportionately flagged as "high risk." This can create a dangerous feedback loop: Sending more cops to an area all but guarantees they'll find more crime, and more crime means more cops get sent there. Ultimately, predictive policing is better at predicting future policing than future crime.

Between 2018 and 2021, more than 1 in 33 U.S. residents may have been subject to police patrol decisions made using predictive policing software called PredPol, now rebranded as Geolitica. A 2021 investigation by The Markup and Gizmodo found that Geolitica's software tended to disproportionately target low-income, Black and Latino neighborhoods in 38 cities across the country.

Predictive policing was pitched as a solution to human biases — a way to remove human variability from the equation. But by basing the future on a past marred by discriminatory policing, the technology has served to legitimize and reinforce our worst human impulses.

Predictive policing "was supposed to be more objective than having a racist police officer standing on a street corner," Rodrigo Ferreira, a Rice University professor who specializes in technology and ethics, told us last week. "But it's actually doing something very similar — or perhaps even worse because it's hidden behind the veil of a computer."

The case of Murphy Jr. isn't just a story straight out of a George Orwell novel. It's a sober reminder that technology, just like the humans who built it, isn't infallible. It also answers to no one. As a 1979 IBM presentation famously warned, "A computer can never be held accountable, therefore a computer must never make a management decision."

Who is held responsible when an AI-powered tool leads to devastating consequences? The people employing the technology? The police officers who act on it? Or the software companies that keep their proprietary algorithms hidden inside a black box?

These are questions we should be asking HPD. And they should be prepared to answer.

© 2024 the Houston Chronicle. Distributed by Tribune Content Agency, LLC.