Flagging “America’s most wanted” in public places like train stations or airports. Looking for missing children. Having home automation react differently to different family members.
Those are all usages I am personally totally ok with.
To illustrate, let's imagine a really, really good system that had a 0% false negative rate¹ and a 0.001%² false positive rate. If we were to scan the entire country looking for the FBI's most wanted, we would end up with ~3290 matches, 3280 of which are going to be other people³.
Considering the high value of the individuals, the chances of harassment or wrongful arrest (or worse) are pretty high.
1: In reality, the false negative and false positive rates are inversely related: the more you decrease the false positive rate, the higher your false negative rate gets.
2: That is 1 in 100,000
3: This ideal situation assumes an even distribution of errors. As things stand, computer vision has a significantly worse false positive rate for people of color.
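For concreteness, the arithmetic above can be reproduced in a few lines. The population figure (~328 million) and the target count (the FBI's ten most wanted) are assumptions filled in to make the stated totals work out:

```python
# Back-of-the-envelope base-rate calculation for the scenario above.
# Assumed: ~328 million people scanned, 10 genuine targets,
# 0% false negative rate, 1-in-100,000 false positive rate,
# and an even distribution of errors.

population = 328_000_000
targets = 10
false_positive_rate = 1 / 100_000  # i.e. 0.001%

false_positives = round((population - targets) * false_positive_rate)
total_matches = targets + false_positives

print(f"False positives: {false_positives}")  # 3280
print(f"Total matches:   {total_matches}")    # 3290
print(f"Innocent share:  {false_positives / total_matches:.1%}")  # 99.7%
```

Even with every genuine target caught, roughly 997 out of every 1000 matches are innocent bystanders, which is the base-rate problem the thread goes on to argue about.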
I am not advocating for large scale facial recognition, but your argument holds true for basically any measure law enforcement has to apply to locate missing people.
How big is the false positive rate for tips received by phone? Law enforcement probably has to deal with many more wrong clues on a daily basis than double-checking an image that was flagged by an automated system. And if even a human inspection can't tell the flagged person apart from an image of the missing person, then it's definitely worth checking out.
It's definitely a problem if you use face recognition as your only criterion - but what if it alerted security to pay more attention to someone or to double-check their documents?
Even with the false positive rate it's much more likely an individual identified by the system is a target than a random individual.
> > To illustrate, let's imagine a really, really good system that had a 0% false negative rate¹ and a 0.001%² false positive rate. If we were to scan the entire country looking for the FBI's most wanted, we would end up with ~3290 matches, 3280 of which are going to be other people³.
> It's definitely a problem if you use face recognition as your only criterion - but what if it alerted security to pay more attention to someone or to double-check their documents?
Where do you imagine this happening? If it's in an environment where papers are expected as standard (e.g. an airport), sure, this is relatively benign. If it's when you're walking through a mall and now you're being stopped and searched, how is that not bordering on harassment?
> Even with the false positive rate it's much more likely an individual identified by the system is a target than a random individual.
Much more likely than without the system, sure. Chances of it being a random individual and not the target? 99.7% (using the math above of 3280 innocents out of 3290 people identified).
If a system had a failure rate of 99.7% when harassing people, I don't think I'd describe that as 'much more likely' to be a target than an innocent random.
Wouldn't it be the same as a police officer having a board of wanted people on his desk and (mistakenly) thinking a person in a mall was one of those? He'd check their papers (sorry, I'm European) based on his hunch and decide?
I believe there is no better way to identify wanted criminals at scale; do we want to comb the streets with policemen and "harass" 10x more people in hopes of finding them?
It's not a death penalty to be checked, and bothering an additional 3280 people in the whole country is an OK tradeoff to find a wanted person, in my opinion.
It feels like you want to cripple the way police look for criminals because they don't act very kindly towards suspects. Maybe the police are the problem, and not surveillance?
The reality is that police mostly catch “wanted” criminals during traffic stops or at their homes, or at the homes of known associates. Or when they get arrested for other crimes. People have a way of turning up.
How would this face camera thing even work? Like, a camera identifies so-and-so and a police car rushes out immediately? That’s not really how police do stuff.
If a system-identified individual has a 0.3% chance of being on the FBI's most wanted list, then that is 'much more likely' than an individual picked at random, who has a likelihood of about 1 in 33 million.
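That comparison can be made concrete with the same assumed numbers used upthread (~328 million people, 10 targets, 1-in-100,000 false positive rate - all assumptions, not figures from the original comments):

```python
# Compare P(target | flagged) against P(target) for a random person,
# under the same assumed numbers as the earlier base-rate calculation.

population = 328_000_000
targets = 10
false_positive_rate = 1 / 100_000

flagged = targets + round((population - targets) * false_positive_rate)

p_target_given_flag = targets / flagged   # ~0.3%
p_target_random = targets / population    # ~1 in 33 million

print(f"P(target | flagged): {p_target_given_flag:.2%}")
print(f"P(target | random):  1 in {population // targets:,}")
print(f"Relative increase:   {p_target_given_flag / p_target_random:,.0f}x")
```

So both sides of the thread are right about something: being flagged raises the odds by a factor of roughly 100,000, yet the flagged person is still overwhelmingly likely to be innocent.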
That's a minority of police forces. There are a lot of police who truly care about doing the right thing. The media covers corrupt police commonly and caring police rarely.
This is still better than the current paradigm of LEOs stopping and frisking anyone who "matches the description". That phrase has basically turned into carte blanche to harass anyone they want with plausible deniability. With facial recognition, at least now there's one more constraint they'd have to operate under for probable cause, right?
While valid, your criticism is with the actual ability/execution of the technology, not the concept. Imagine we get it to a 100% match or, more realistically, use it in a way that makes more sense than just flagging someone and ruining their life forever (not to mention being biased). Would you still be opposed to the technology? If so, your opposition has nothing to do with the above argument.
>Flagging “America’s most wanted” in public places
And what happens if we descend into military rule or a dictatorship? With "America's most wanted" fliers, the general public can just refuse to participate. With facial recognition, anyone deemed an "enemy of the state" can easily be tracked and eliminated.
Always assume the worst and work your way back to the good when doing the cost-benefit analysis.
Laws have a tendency not to stop dictators. They are the law.
In reality laws are only as powerful as their enforcement, and if there is no will to enforce, like what happens during transitions to dictatorships, then they’ll just be ignored.
And dictators have a tendency to use expensive infrastructure that was built before they came into power. Setting everything up so that switching from a free society to a mass surveillance and state control hellhole is as easy as editing a config file is not a good idea.
I think what's more important about not building surveillance infrastructure is not that the infrastructure does not exist, but that building it does not become normalized. It should be morally and socially unacceptable, so those hired to build and operate one will either refuse or sabotage it.
Some people consider violating laws immoral per se. If you make actions leading to a dictatorship illegal, you reduce the potential dictator's supporter base.
Also, in a functioning state, the bureaucracy (mostly) follows the law and the judiciary's interpretation of the law. Then again, functioning states don't tend to turn into dictatorships.
(Fun fact: Article 20 (4) of the German constitution explicitly authorizes every German to resist when someone abolishes freedom and democracy. Considering the mentality of the average German, this is probably necessary.)
So in what way will facial recognition and "criminal spotted on 5th avenue" either dissuade or encourage totalitarianism/removing checks and balances? If a Gov't falls into totalitarianism, a facial recognition ban (both internally within AMZN as well as on a national/local level) won't stop them from requiring AMZN to provide facial recognition behind the scenes.
A tool that could track individuals and was under the purview of the Executive Branch (e.g. Homeland Security) sounds pretty dangerous. Any political rivals or family members who go for STI screening, abortion clinics, gay bars, an oncologist, or couples counseling could be either leveraged or leaked on for political effect. Or it could be used to track groups of people, activists, reporters. Or any number of ways I wouldn't be able to think of.
What I was trying to say is that if you are trying to establish totalitarianism, not having any checks and balances on something like that would be handy.
I agree with the general point that it isn't a good method of stopping a government falling into totalitarianism. However, it is helpful if there is a general consensus against facial recognition, because it makes it a bit easier to identify the authoritarian-leaning parts of government; they can't hide behind 'it is just usual practice'.
There are other, better, arguments in favor of personal liberty to use against facial recognition by law enforcement. Efficient, highly automated systems crush people who just happen to get caught up in them.
> So in what way will facial recognition and "criminal spotted on 5th avenue" either dissuade or encourage totalitarianism/removing checks and balances? If a Gov't falls into totalitarianism, a facial recognition ban (both internally within AMZN as well as on a national/local level) won't stop them from requiring AMZN to provide facial recognition behind the scenes.
Technologies take time to develop and deploy, and the fact of their development and deployment is a red flag that tells you something bad is coming and gives you time to react before it can actually be put into place.
Setting everything up for turnkey totalitarianism is not wise.
> the general public can just refuse to participate
The general public will be misled by the media. Already politicians are claiming that the media isn't truly independent or objective with claims of "fake news". If the media falsely claim that you are a dangerous criminal, the public will eat it up and ask for dessert.
While I agree with the aspiration, I don't see how it's possible to use facial recognition to flag "America's most wanted" or to look for missing children without mass surveillance, though.
It only works if it scans _everyone_ .
Additionally what happens when the technology isn’t perfect and innocent people get mistakenly flagged as persons of interest?
The other thing is once it’s installed and in operation, what’s to stop it being used for other purposes? - being used to target people peacefully protesting against the government or whatever.
And that’s the argument the grand-parent is making. It’s a tool. It can be used for good or bad. There are other tools like that.
Because of that, there’s unlikely to be universal acceptance or rejection. And without popular opinion, it will be hard to pass any laws that change the status quo.
CCTV cameras are already ubiquitous. That ship has already sailed. And honestly, there was never an expectation of privacy in public in the first place.
CCTV cameras monitored by humans are COMPLETELY different from a facial recognition system recording the identities and movements of all people. There is no comparison.
There absolutely is an expectation of privacy in public. Being seen in public by a series of uncoordinated people is massively different from a PI tailing you and recording your actions. This form of privacy is generally termed "obscurity".
> Being seen in public by a series of uncoordinated people is massively different from a PI tailing you and recording your actions.
That is actually completely legal to do in all circumstances, precisely because there is no expectation of privacy in public.
> CCTV cameras monitored by humans are COMPLETELY different from a facial recognition system recording the identities and movements of all people. There is no comparison.
Which is not necessarily how facial recognition would work. More likely it would scan for known suspects and fugitives. But then we're back to the "how is it used" question.
It is legal for a PI to tail you not because of the lack of expectation of privacy in public, but because it is impractical to have PIs tail everyone all the time in public. People are generally okay with targeted surveillance. Mass surveillance is the issue. Quantity has a quality all of its own, as they say. There has been a court case which addresses this issue[0].
Any technology which searches for fugitives will necessarily scan everyone. There is no such thing as targeted facial recognition, it can only work by mass surveillance.
You're assuming facial recognition would necessarily be the moral equivalent of "[having] PIs tail everyone all the time in public". But for that to be the case, facial recognition would have to scan every face it sees, store that face, and then cross-reference every other face it sees against every face it has ever stored.
I'm suggesting a far simpler use case: It scans your face and if you don't match any of the fugitives it's looking for, it forgets about you. I think this use case is far more likely because it's a lot simpler and cheaper to pull off. And that's a big difference.
What happens if the system flags the wrong person as "America's most wanted" and a trigger-happy cop is nearby? Not saying all applications are useless, but most of them definitely warrant additional thought.
Agree. And to take it even further, consider Bostrom's "Vulnerable World Hypothesis" where technology could arise that makes it trivial for individual actors to cause mass destruction. From the abstract:
"A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance."
See "Minority Report". Also, it is always a question of who will watch the watchers. If "recognition" is deployed, it will be taken as proof and now it is a weapon not a shield.
It is an often-misconstrued quote now, but Ben Franklin said "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."