For the second time in two months, members of the Long Beach City Council questioned their police department about facial recognition software, and more questions are likely coming now that some members of a city commission have floated the idea of placing a moratorium on the technology.
In a meeting of the council’s public safety committee Friday, LBPD Assistant Chief Wally Hebeish defended his department’s limited use of facial recognition software, calling it a valuable tool that is only used to generate leads in criminal investigations. Even then, he said, it must be supported by other evidence for detectives to take action on it.
“We don’t use that for mass surveillance of the community,” Hebeish said.
The three City Council members at Friday’s meeting—Suely Saro, Roberto Uranga and Suzie Price—didn’t take a direct stance on whether they believed the technology was appropriate, but some city officials have taken a firmer position.
Since early last year, a two-member subcommittee of the city’s Technology and Innovation Committee has been researching facial recognition technology, and this week, they published a report recommending Long Beach ban its use for the time being and establish a way to vet new technologies before adopting them.
“Based on research conducted to date, the subcommittee finds that current facial recognition technologies are not only insufficiently accurate but pose substantive and unequal risk to Black residents and residents of color due to inherent algorithmic biases that have not been effectively addressed in software design,” the report says. “Further, the subcommittee is concerned by the absence of independent auditing entities to certify that facial recognition technology is free of racial and other bias.”
Whether those recommendations get any traction remains an open question. The full Technology and Innovation Committee is set to discuss the report next week.
How does the LBPD use facial recognition?
For now, the LBPD says it only uses facial recognition to generate leads in criminal investigations.
According to the department, the only method it currently uses to do this is comparing bystander or security camera video against a trove of about 9 million mugshots in the Los Angeles County Regional Identification System, or LACRIS.
LACRIS, which is maintained by the Los Angeles County Sheriff’s Department, is a massive regional database that stores the identifying information of anyone arrested in the county.
The database, which the LBPD says it has used for years, puts its own limits on what facial recognition can be used for:
“All searches start with a criminal probe image (suspect) that is searched against a database of previously arrested individuals,” the agency says in its system guide.
When the system provides a match, it’s not a firm identification. Rather, “it assists in the identification process by providing candidates as possible matches to the searched image,” according to LACRIS documents.
LACRIS searches were “instrumental” in identifying suspected looters after the racial justice protests on May 31 last year, LBPD’s Hebeish said at another public safety committee meeting last month.
But those leads from LACRIS, he said, were just a starting point.
“Our detectives have to corroborate that information, and it provides direction for them to look, and further investigative follow-up is necessary to develop reasonable suspicion, probable cause and put together a fileable case for the District Attorney’s office,” Hebeish said.
Dozens of other law enforcement agencies in LA County also use the system, including the LAPD.
Nevertheless, civil rights organizations, including the American Civil Liberties Union, say the overrepresentation of Black and Latino men in mugshot databases creates an inherently racist system.
In addition, they say, the algorithms used in commercially available facial recognition software more frequently misidentify people of color and women—raising the specter of misguided investigations more frequently targeting underrepresented communities.
The ACLU has joined with a coalition of civil rights and privacy organizations pushing for a ban on the governmental use of facial recognition technology.
In a letter this year to President Joe Biden, they say the technology “is dangerous because it exhibits clear racial, gender, and other biases and it’s also dangerous when it does work. Even if the technology worked perfectly, it would facilitate the mass tracking of each person’s movements in public space—something intolerable in a free and open society. We cannot allow its normalization.”
Locally, critics of facial recognition technology have raised the same issues.
“Before the City gives the green light to LBPD’s continued and unchecked use of facial recognition, the inherent equity and racial justice issues associated [with it] must be addressed,” wrote attorney Greg Buhl, who has used public records to campaign against the LBPD’s use of the technology.
Buhl alleges that the LBPD has at times used especially controversial tools like the ones marketed by Clearview AI—a company that scraped billions of publicly available photos on social media and other parts of the internet to build a massive database of identifiable information it now sells to subscribers.
Clearview AI’s product has sparked class-action lawsuits in the U.S., government investigations in Britain and Australia, and an outright ban in Canada, which deemed it illegal, according to the New York Times.
“It’s probably one of the most evil companies ever created,” Buhl told the council’s public safety committee last month.
At that meeting, Hebeish said his department previously used facial recognition software from vendors other than LACRIS only on a trial basis, “but now we prohibit any trials of this type of software without command level approval.”
At Friday’s public safety committee hearing, half a dozen members of the public decried the department’s use of facial recognition, pointing to the inherent biases highlighted by the ACLU, Buhl and others.
But, “research shows that generally, residents are supportive of improved technology within police departments,” according to the subcommittee report that will be up for debate next week in the Technology and Innovation Committee.
The report cites a 2019 study from the Pew Research Center that found 56% of the public trusted law enforcement to use facial recognition technology responsibly, and it lauds the technology’s ability to help identify suspects in cases like the Boston Marathon bombing and the riot at the U.S. Capitol earlier this year.
However, it emphasizes, facial recognition still has trouble accurately identifying people who aren’t White men or East Asian men.
“For any other group, including Native American, Black, and Women groups, (it) does not yet have the accuracy needed to be an asset to City efforts,” it says.
The authors point to major cities like Seattle, Portland and Oakland that have either banned facial recognition technology or passed strict ordinances limiting when it can be used.
Unless Long Beach, too, adequately protects against the potential biases and risks, facial recognition could lead to a “potential erosion of public trust in police,” the report argues.
The full Technology and Innovation Committee is scheduled to discuss the report on Wednesday at 3:30 p.m.