San Francisco becomes first US city to ban facial recognition technology

World Today

In this photo taken Tuesday, May 7, 2019, is a security camera in the Financial District of San Francisco. San Francisco is on track to become the first U.S. city to ban the use of facial recognition by police and other city agencies as the technology creeps increasingly into daily life. (AP Photo/Eric Risberg)

San Francisco is set to become the first major U.S. city to ban the use of facial recognition technology by police and other city departments.

The move has technologists and ethicists weighing an issue that is likely to surface in more and more cities around the world.

CGTN’s Mark Niu reports.

By a vote of 8-1, the San Francisco Board of Supervisors passed a regulation to ban the use of facial recognition technology by city police and other government departments.

The new ordinance heads to the mayor next month for final approval.

“We can have good policing without becoming a police state,” said Aaron Peskin, the San Francisco Board Supervisor who sponsored the ordinance.

The anti-crime group Stop Crime SF agrees that there should be controls on facial recognition technology, but says San Francisco is going too far.

“If someone has dementia and they wander off, then it would be a good way of identifying them. If there’s a kidnapping, that’s another way of identifying them and saving lives. In terms of crime, if you can find someone who is a repeat offender, this is a great way to stop them.”

Companies and consumers will still be able to use facial recognition technology in San Francisco.

The rules don’t stop private use. But the president of Stop Crime SF believes one flaw in the ordinance is that if your security cameras capture facial recognition data, you won’t be able to turn that information over to police.

“I think anytime you see a company or organization sort of trying to not move forward too quickly with a new technology and think about what all the ethical implications are, how it might hurt people, it’s always a good thing,” said Don Heider, executive director of the Markkula Center for Applied Ethics at Santa Clara University.

Heider, a professor of social ethics at the university, said a key problem is that the technology doesn’t give people control over how their information is used or shared.

He also cited a Massachusetts Institute of Technology study that found face analysis algorithms were less accurate when analyzing the faces of women with darker skin.

“There have been a number of studies that have showed definitively that facial recognition is not very good for people of color,” said Heider. “We could easily imagine a scenario where someone is misidentified and then there’s a no-knock raid on their home and somebody gets hurt or killed, and it may be the wrong person.”

But some analysts said, given time, the technology will only improve.

“Most of the facial recognition out there has not been tied to artificial intelligence databases as it will be two to three years from now,” said Tim Bajarin, president of Creative Strategies. “Taking into account race, environment and so on will eventually just make it better. You have to have more checks and balances, but I think it needs to be more narrowly focused as opposed to a broad brush.”

Police departments in cities such as New York, Boston and Las Vegas are already using the technology, while a number of other U.S. cities, including Oakland and Somerville, Massachusetts, are considering bans.

As biometric technology spreads, citizens around the world will increasingly debate the question: should someone else have access to the data that’s written all over my face?