TSA now wants to scan your face at security (washingtonpost.com)
38 points by mysterypie on Dec 5, 2022 | hide | past | favorite | 42 comments


I don't have evidence to support this, but I have a feeling the TSA and/or other government agencies like the CIA have been face scanning for years. Now they're just announcing the program to normalize it.


Airports already deploy facial recognition. The Global Entry program scans your face upon arrival, and some flights use a facial scan in place of a boarding pass at the gate.


Global Entry is reasonable:

1. Border crossing rules

2. A photo ID indexed to a background-checked record is one of those rules

3. YOU are the one crossing the border, already declared and known, so the scan gives authorities no new information

By contrast, when driving, taking a bus or train, or flying A to B domestically, an ID check feels like incremental "papers, please".


CBP already has facial recognition and gait recognition running in general public spaces at quite a few major US international air traffic hubs. My spouse (who works in the export compliance space) was given a capabilities demo one day while meeting with CBP officials at our local international airport, which included tracking individuals of interest from first appearance on airport cameras to their departure down the jetway.


A friend worked on this project at a major Midwest airport. These capabilities have been available since the mid-to-late 2000s.


Yes, but that's an opt-in program. I think the federal government has been face scanning and identifying people passively, without consent, at and before security for quite some time now.


It's not always opt-in or part of Global Entry. On a number of flights I've been on, there was a facial-scanner check in lieu of a boarding-pass check to board the plane, for all passengers, despite boarding passes having been issued.

I cannot recall if this was only for international flights. Also, I believe I've seen this in Mexico, again, outside of Global Entry.


Yeah I figured they were scanning faces already.


> But does it mean you’ll get moved to a slow line, get an extra pat down, or a mark on your record? “You should have no derogatory experience based on you exercising your right,” said Lim. If you suspect that has happened, the TSA says you should ask to speak to a manager.

I have to assume that this is a joke. If you speak to a manager, you'll be subjected to more bureaucratic abuse and risk missing your flight, just to have your concerns dismissed because there is no enforcement mechanism.


TSA already punishes travelers who try to opt out of body scanning. In addition to subjecting them to intrusive patdowns, often in public (for those who don't want to risk a special "private screening"), they will typically suspend scanning temporarily and make the opt-out people wait as they let others go ahead through a simple metal detector. This may be an easy way for bad actors to help their accomplices avoid body scans.


How is this any more of an invasion of privacy than what they did before? They already have your picture from when you got your ID in the first place, and they already have a record that you're flying that day from your boarding pass.


They now get a considerably more centralized program. Instead of working through 50+ DMVs and whatever equipment the person's branch happened to use to take the photo however long ago it was taken and whatever standard resolution they get the result in, TSA can now use homeland security kinds of money to buy whatever cameras it wants, and collect a stream of photos of people who they have positively identified. And since they also now have a known high-quality image and identification at a specific place and time, they can correlate that with lower-quality images taken elsewhere in the airport to create training datasets to make automated identification from surveillance cameras more robust.

Do they already do this elsewhere, e.g. with Global Entry? Sure.


When your ID is scanned by someone at the TSA checkpoint, it is already accessing the databases you mentioned.

https://www.tsa.gov/travel/security-screening/credential-aut...

https://www.tsa.gov/di-pilots


Right. What I’m saying is, right now, TSA gets a photo from the DMV. If they want higher resolution, or more photos, they can’t get it from that source because a) you don’t go to the DMV that often, and b) the influence they have over state DMVs works slowly. With facial recognition-based checkpoints at the airport, they can get an ongoing stream of photos any time they want, in whatever resolution they want, with whatever hyped-up millimeter wave bullshit they went and procured for themselves this time. They can get more data, more frequently, with more control.


These comments always shock me. The way people will just let their rights slip through their fingers with glee so they can get an intellectual "gotcha".

It's very hard for me to imagine not being outraged that the government is tracking you even more because you want to seem smarter than randos on the Internet.


What rights are 'slipping through' fingers here? While digitization of surveillance might make folks uncomfortable, the rights in question are mostly settled, and have been for a while. The US long operated on court precedent that one doesn't have a general expectation of privacy for one's image in public places, and next to zero expectation of privacy in airports.


The right you are losing is privacy, and the fact that there's been a trend of decisions towards further loss does not mean future losses are inevitable, or that prior losses must be taken as permanent. The existence of court precedent does not mean that policymaking over a right is settled, or anything close to settled. Roe v. Wade was a recent dramatic example of this.

Even within the framework of "reasonable expectation of privacy," the test outlined in Katz v. US leaves no room for the concept that an expectation will ever truly be "settled." To show a reasonable expectation of privacy, one must demonstrate an individual, subjective expectation of privacy, and also that this expectation is something society would find reasonable.

So gee, does society find it reasonable to expect that you can walk through an airport without having your photo taken and analyzed by the government, and possibly stored in a database where it can be abused by internal or external actors? Viewed in the context of past behavior, perhaps not. Viewed in the context of a government that is suspicious of big tech surveillance even as it collects ever-more data on its citizens, and jurisdictions like the EU passing ever-tighter regulations like GDPR to safeguard privacy, then maybe society's expectations of privacy have shifted compared to where they were in the 2000s/2010s.


While I agree that the expectation of privacy is somewhat flexible and that future legal precedent in favor of enhanced privacy is definitely achievable (though well-defined statutory protections would, in my opinion, be preferable), I don't buy that increased automation actually results in a net loss of privacy relative to the status quo.

The US government would be acting within well-established legal guardrails if it carried out the same tasks in a highly manual fashion. If it carries out the same tasks in an automated manner, the substantial legal issues remain exactly the same: there's a change in efficiency, an increase in data capture, and improved accessibility for authorized parties, but the fundamental acts of capturing images, cross-referencing them against a database of known persons, and logging movement activities remain basically unchanged. I don't see how you could argue that this harms the notion of privacy any worse than it was already being harmed, and I don't see how one could construct a sound legal argument that the introduction of automation itself somehow introduces additional damages.


Automation tremendously increases the danger involved. It is substantially more difficult to audit whether the government is operating within those legal guardrails when software is involved rather than humans. Also, the government's capabilities in abusing this data are greatly increased, which in turn increases their incentive to do so. For instance, we have no way of knowing whether or not that camera is being used to calibrate a system that tracks people's movements using lower-resolution sources like Ring and traffic cameras. This, in turn, represents an obvious and severe danger to other fundamental rights, like free association: a system designed to track terror cells can just as easily be used to identify protest organizers.

We also don't really know where the resulting data is stored, or who will have access to it, or which people will abuse this information in the future, whether they are corrupt officials or third-parties who steal the stockpiled data, or what creative applications they will find for that data, or what harm the general public will ultimately suffer. These were good what-if questions on Slashdot in the 90s, but after decades of tech growth and abuse in the public and private sectors, these are now important ethical concerns that must be addressed in order to safely allow government security services to automate more and more of their jobs.


>Automation tremendously increases the danger involved. It is substantially more difficult to audit whether the government is operating within those legal guardrails when software is involved rather than humans.

How is it more difficult to audit when software is involved? If anything, it makes immutable audit logging and rule-based alerts for potential misuse much easier to implement than with manual processes. There's a reason that high levels of automation (and the associated software-enabled audit capabilities) are strongly preferred in regulated business functions (e.g. financial reporting).
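To make the "immutable audit logging" point concrete, here is a minimal sketch of a hash-chained, tamper-evident log, assuming nothing about any agency's actual system; the event strings and field names are purely hypothetical:

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the previous entry's hash
    so later edits to any earlier entry break verification."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute every hash in order; False means the log was altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(body.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "officer=1234 queried record=abcd")  # hypothetical events
append_entry(log, "officer=1234 exported image=efgh")
assert verify(log)
log[0]["event"] = "redacted"   # tampering with a past entry...
assert not verify(log)         # ...is detectable on verification
```

This is the same basic idea behind write-once audit trails in regulated industries: whether it deters misuse still depends on who controls the log, which is the governance question, not the technical one.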

Further, the fact that the general public might not know how data is stored or how its access is controlled doesn't prove that these things aren't being addressed adequately by institutional controls. In any case, that's a matter of governance that has no link to the fundamental legal questions concerning privacy that you've brought up.


Because you now have the existence of this massive flow of information that didn’t exist before. The TSA guard is not taking photos of me every time I fly. I don’t have to worry about what happens with those photos if they don’t exist. But if they do exist, then I am now at risk of all sorts of bad things happening, like the aforementioned calibration of mass surveillance, or some other angle.

If the government says “we’re not using this checkpoint stuff to spy on you or anything nefarious like that, and we’re definitely not putting together a massive database of the IDs and photos of people going through our checkpoints” I have to take them at their word because national security gets broad carve outs from transparency tools like FOIA.

Their word is historically not worth much. Government leaks data all the time. It is actually still not that great at security controls in a practical sense, at least insofar as deterring major breaches of privacy is concerned. That’s why LOVEINT happens. That’s why TSA went and posted a picture of the master key that unlocks all those stupid approved locks on the Internet. That’s why things like the OPM breach happen. None of this is SUPPOSED to happen, but it does, regularly, because in reality government (like everyone else) has little ability to control what will happen to information once it ingests it.

And that’s just the dangerous stuff that happens without any intent on the government’s part. Whether they’re misleading Congress about the scope of warrantless wiretapping, wildly understating the implications of metadata analysis, or sabotaging cryptography by purposefully advancing backdoored algorithms, or just plain wiretapping political activists without cause to wage campaigns of legal harassment, the various agencies of the USG have a time-honored history of outright lies where privacy is concerned. So no, unfortunately, even if there exist “institutional controls” that could solve these issues in theory, we can safely say they definitely have not been implemented.

These issues are extremely relevant to your right to privacy, because rights are a balance of interests. If the government gets this big new way to abuse people, then they need big new protections against that, or they are going to get abused. Our concept of privacy has to be bigger now than it was when automated surveillance wasn’t a thing, because the stakes went up. By a lot.


Automation locks in the status quo essentially forever. Human agents can and are expected to use executive judgement when conducting themselves, and are held accountable for abuse. Automated systems just endlessly watch us, with no "person" ever at fault for any transgressions.

It's simply a bad faith argument to compare a detective watching a single individual with a system watching every single person in a large international airport, because the former must have a causal justification for his actions while the latter simply performs its programming, endlessly.

If we're going to become endlessly surveilled and tracked, we as a society should at least have the smallest bit of dignity and complain about it. If we all let our privacy be eroded so tacitly, we go down like dogs.


Scale matters for things like this. When it comes to things that potentially expose people to criminal liability or other government penalties, automating anything is a gateway to abuse, present or future.


... and now they have a picture of you every time you fly.

Everything you say (and present) can and will be used against you.

Your exact appearance will be scrutinized, possibly even years later, in a way that no TSA agent could possibly remember, if someone decides to accuse you of a crime that involves transport, like trafficking. "On such and such a date, our expert witness looked at the TSA photograph and nervousness detection program, and the AI detected they were anxious and possibly intoxicated when passing through TSA. Your honor, this nervous and intoxicated behavior was detected by TSA computers on X Y Z and A B C dates as well. Our analysis shows traffickers disproportionately exhibited this behavior, which was listed as supporting evidence on the search warrant affidavit 6 months ago, after which a roach was discovered in the defendant's ashtray..."


We give a lot to feel safe / for security theatre, not surprised at all.


Airport customs already scan your face at a few international airports. How is this different?


This isn't international travel.


I know but why is it ok for international travel but not ok for domestic?


I don't really understand the question: there are legal differences between (a) entry into the US and (b) travel within the US. The federal government has asserted that there are carve-outs to the Fourth Amendment for the former (the "100 mile" zone), but that's definitely not the case for the latter.


As a citizen, it makes little difference. If I want my rights respected I'd better not leave the country? That makes no sense. If CBP's immigration duties can be used to bypass the 4th Amendment, I see no reason TSA's duty to regulate interstate commerce, as outlined in the Constitution, can't use the same logic: regulation of commerce to achieve the safety of travellers and those on the ground supersedes the 4th Amendment. After all, unlike international travel, where you can only reach Canada and Mexico over land, interstate travel is possible to most states without flying.


so long as the data is captured by US companies and accessible only to the US government, we should be cool with this


Why?


They're being ironic.


Something similar is implemented in many German airports for EU citizens, and it greatly reduces the time to enter the country.

I know, this is some kind of privacy intrusion but if you are traveling by plane, you have already no privacy left anyway.


In Europe the automated face scan is optional. You can also choose to go to a real customs agent.

And the fact that privacy is currently a gone thing in air travel doesn't mean we should keep adding tech that cements the status quo, we could also move back to more privacy-conscious options of course. It's badly needed IMO.


It doesn’t reduce the time to enter the country anymore than blocking your front door unless you pay a toll reduces your time to exit the building.


Could you please clarify why an automatic face scan is considered a privacy intrusion compared to a face check by a border control officer?


Indefinite archival of your face every time you cross? It's ripe for abuse via data mining to claim (via AI, expert witness, whatever) that at the exact time of crossing your face exhibited behavior consistent with criminals, such as nervousness/anxiety. A border patrol officer can't provide an exact recollection of your face from 10 years ago, so (without a camera) if you look nervous at the border this one time, it's going to be difficult to look backwards and retroactively determine you looked "nervous" or intoxicated or whatever the other times too.

A photograph of your face at the pharmacy or post office taken for your passport is pretty much useless as that sort of evidence, as it doesn't show your demeanor at the border.


Isn’t the issue here the conclusions drawn from this supposed evidence? Seems like the kind of thing they wouldn’t bother taking to court if the best they can muster is AI or “expert witness” as they’ll be laughed out of it.


DHS, which houses TSA, got a federal search warrant in the middle of the night to forcibly have me "internally examined" at a hospital, during which I was confined/detained for 16 hours, cuffed and shackled, strip searched, made to perform bodily functions in front of officers, etc. This was after a dog supposedly "alerted" (but actually did not, and the agent even lamented to me that the dog did not).

They may not get a conviction, but a warrant or arrest will turn your life upside down for at least a few days. And they can get one based on basically nothing (I read the affidavit for my warrant, and the claim was third-degree anonymous hearsay that didn't list the officer's or the dog's name).

I am still in debt to this day as debt collectors are chasing me after DHS billed me for their search made on the kind of shoddy "evidence" they obtain from any electronic and other device they have. Every tool they obtain just makes it worse.


Are you saying the federal government billed you for the cost of their investigating you?


Yes.

My story is very similar to that of Ashley Cervantes, who was also taken to Holy Cross Hospital and billed for a search [0, 1], except in my case they got a warrant, which was signed by the judge AFTER (MOST OF) THE SEARCH.

Like Ashley, nothing was found and I was billed afterwards. Ashley's case was even more shocking as she was forcibly and intimately penetrated by the doctor "in search of contraband" at the direction of DHS.

In my case DHS promised me they would bill it in their name, but they either lied or used the billing as a retaliatory tactic after I convinced (some of) the doctors to discharge me, as the doctors had no medical basis on which to search me without consent. The bill itself was written by the private medical entities performing the search at the direction of DHS.

[0] https://tucson.com/news/local/border/woman-sues-customs-over...

[1] https://storage.courtlistener.com/recap/gov.uscourts.azd.985...

