How is this any more of an invasion of privacy than what they did before? They already have your picture from when you got your ID in the first place, and they already have a record that you're flying that day from your boarding pass.
They now get a considerably more centralized program. Instead of working through 50+ DMVs, with whatever equipment the person's branch happened to use to take the photo, however long ago it was taken, and whatever resolution the result came in, TSA can now use homeland-security kinds of money to buy whatever cameras it wants and collect a stream of photos of people it has positively identified. And since they now also have a known high-quality image and identification at a specific place and time, they can correlate that with lower-quality images taken elsewhere in the airport to create training datasets that make automated identification from surveillance cameras more robust.
Do they already do this elsewhere, e.g. with Global Entry? Sure.
Right. What I’m saying is, right now, TSA gets a photo from the DMV. If they want higher resolution, or more photos, they can’t get it from that channel because a) you don’t go to the DMV that often, and b) the influence they have over state DMVs works slowly. With facial-recognition-based checkpoints at the airport, they can get an ongoing stream of photos any time they want, in whatever resolution they want, with whatever hyped-up millimeter-wave bullshit they went and procured for themselves this time. They can get more data, more frequently, with more control.
These comments always shock me. The way people will just let their rights slip through their fingers with glee so they can get an intellectual "gotcha".
It's very hard for me to imagine not being outraged that the government is tracking you even more because you want to seem smarter than randos on the Internet.
What rights are 'slipping through' fingers here? While digitization of surveillance might make folks uncomfortable, the rights in question are mostly settled, and have been for a while. The US has long operated on court precedent that one doesn't have a general expectation of privacy for one's image in public places, and next to zero expectation of privacy in airports.
The right you are losing is privacy, and the fact that there's been a trend of decisions towards further loss does not mean future losses are inevitable, or that prior losses must be taken as permanent. The existence of court precedent does not mean that policymaking over a right is settled, or anything close to settled. Roe v. Wade was a recent dramatic example of this.
Even within the framework of "reasonable expectation of privacy," the test outlined in Katz v. US leaves no room for the concept that an expectation will ever truly be "settled." To show a reasonable expectation of privacy, one must demonstrate an individual, subjective expectation of privacy, and also that this expectation is something society would find reasonable.
So gee, does society find it reasonable to expect that you can walk through an airport without having your photo taken and analyzed by the government, and possibly stored in a database where it can be abused by internal or external actors? Viewed in the context of past behavior, perhaps not. Viewed in the context of a government that is suspicious of big tech surveillance even as it collects ever-more data on its citizens, and jurisdictions like the EU passing ever-tighter regulations like GDPR to safeguard privacy, then maybe society's expectations of privacy have shifted compared to where they were in the 2000s/2010s.
While I agree that the expectation of privacy is somewhat flexible and that future legal precedent in favor of enhanced privacy is definitely achievable (though well-defined statutory protections would, in my opinion, be preferable), I don't buy that increased automation actually results in a net loss of privacy relative to the status quo.
The US government would be acting within well-established legal guardrails if it carried out the same tasks in a highly manual fashion. If it carries them out in an automated manner, the substantive legal issues remain exactly the same: there's a change in efficiency, an increase in data capture, and improved accessibility for authorized parties, but the fundamental acts of capturing images, cross-referencing them against a database of known persons, and logging movement activities remain basically unchanged. I don't see how you could argue that this harms the notion of privacy any more than it was already being harmed, and I don't see how one could construct a sound legal argument that the introduction of automation itself somehow introduces additional damages.
Automation tremendously increases the danger involved. It is substantially more difficult to audit whether the government is operating within those legal guardrails when software is involved rather than humans. Also, the government's capabilities in abusing this data are greatly increased, which in turn increases their incentive to do so. For instance, we have no way of knowing whether or not that camera is being used to calibrate a system that tracks people's movements using lower-resolution sources like Ring and traffic cameras. This, in turn, represents an obvious and severe danger to other fundamental rights, like free association: a system designed to track terror cells can just as easily be used to identify protest organizers.
We also don't really know where the resulting data is stored, or who will have access to it, or which people will abuse this information in the future, whether they are corrupt officials or third-parties who steal the stockpiled data, or what creative applications they will find for that data, or what harm the general public will ultimately suffer. These were good what-if questions on Slashdot in the 90s, but after decades of tech growth and abuse in the public and private sectors, these are now important ethical concerns that must be addressed in order to safely allow government security services to automate more and more of their jobs.
>Automation tremendously increases the danger involved. It is substantially more difficult to audit whether the government is operating within those legal guardrails when software is involved rather than humans.
How is it more difficult to audit when software is involved? If anything, software makes immutable audit logging and rule-based alerts for potential misuse much easier to implement than manual processes do. There's a reason that high levels of automation (and the associated software-enabled audit capabilities) are strongly preferred in regulated business functions (e.g. financial reporting).
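To make the claim concrete, here's a minimal sketch of the kind of software-enabled audit capability I mean: an append-only log where each entry is hash-chained to the previous one (so tampering is detectable), plus a simple rule-based alert for bulk lookups. Everything here is illustrative, not a description of any real TSA or government system.

```python
import hashlib
import json
import time
from collections import defaultdict

class AuditLog:
    """Append-only log; each entry embeds the previous entry's SHA-256
    hash, so deleting or editing an entry breaks the chain verifiably."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, actor, action, subject):
        entry = {
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "subject": subject,
            "prev": self._prev_hash,
        }
        # Hash the entry body (sorted keys give a canonical encoding).
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

def flag_bulk_lookups(entries, threshold=3):
    """Rule-based alert: flag any actor with more than `threshold`
    lookups -- a crude proxy for bulk-access misuse (think LOVEINT)."""
    counts = defaultdict(int)
    for e in entries:
        if e["action"] == "lookup":
            counts[e["actor"]] += 1
    return sorted(actor for actor, n in counts.items() if n > threshold)
```

The point isn't that this toy design is adequate; it's that once actions flow through software, this sort of tamper-evident logging and automated misuse detection becomes cheap, whereas a human flipping through paper records leaves no comparable trail.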
Further, the fact that the general public might not know how data is stored or how its access is controlled doesn't prove that these things aren't being addressed adequately by institutional controls. In any case, that's a matter of governance that has no link to the fundamental legal questions concerning privacy that you've brought up.
Because you now have the existence of this massive flow of information that didn’t exist before. The TSA guard is not taking photos of me every time I fly. I don’t have to worry about what happens with those photos if they don’t exist. But if they do exist, then I am now at risk of all sorts of bad things happening, like the aforementioned calibration of mass surveillance, or some other angle.
If the government says “we’re not using this checkpoint stuff to spy on you or anything nefarious like that, and we’re definitely not putting together a massive database of the IDs and photos of people going through our checkpoints” I have to take them at their word because national security gets broad carve outs from transparency tools like FOIA.
Their word is historically not worth much. Government leaks data all the time. It is actually still not that great at security controls in a practical sense, at least insofar as deterring major breaches of privacy is concerned. That’s why LOVEINT happens. That’s why TSA went and posted a picture of the master key that unlocks all those stupid approved locks on the Internet. That’s why stuff like the OPM breach happens. All of this is stuff that isn’t SUPPOSED to happen, but does, regularly, because in reality government (like everyone else) has little ability to control what will happen to information once it ingests it.
And that’s just the dangerous stuff that happens without any intent on the government’s part. Whether they’re misleading Congress about the scope of warrantless wiretapping, wildly understating the implications of metadata analysis, sabotaging cryptography by purposefully advancing backdoored algorithms, or just plain wiretapping political activists without cause to wage campaigns of legal harassment, the various agencies of the USG have a time-honored history of outright lies where privacy is concerned. So no, unfortunately, even if there exist “institutional controls” that could solve these issues in theory, we can safely say they definitely have not been implemented.
These issues are extremely relevant to your right to privacy, because rights are a balance of interests. If the government gets this big new way to abuse people, then they need big new protections against that, or they are going to get abused. Our concept of privacy has to be bigger now than it was when automated surveillance wasn’t a thing, because the stakes went up. By a lot.
Automation locks in the status quo essentially forever. Human agents can exercise judgment when conducting themselves, are expected to, and are held accountable for abuse. Automated systems just endlessly watch us, with no "person" ever at fault for any transgressions.
It's simply a bad faith argument to compare a detective watching a single individual to a system watching every single person in a large international airport, because the first must have a causal justification for their actions while the second simply performs its programming, endlessly.
If we're going to become endlessly surveilled and tracked, we as a society should at least have the smallest bit of dignity and complain about it. If we all let our privacy be eroded so tacitly, we go down like dogs.
Scale matters for things like this. When it comes to things that potentially expose people to criminal liability or other government penalties, automating anything is a gateway to abuse, present or future.
... and now they have a picture of you every time you fly.
Everything you say (and present) can and will be used against you.
Your exact appearance will be scrutinized, possibly even years later, in a way that no TSA agent could possibly remember, if someone decides to accuse you of a crime that involves transport, like trafficking. "On such and such a date, our expert witness looked at the TSA photograph and nervousness detection program, and the AI detected they were anxious and possibly intoxicated when passing through TSA. Your honor, this nervous and intoxicated behavior was detected by TSA computers on X Y Z and A B C dates as well. Our analysis shows traffickers disproportionately exhibited this behavior, which was listed as supporting evidence on the search warrant affidavit 6 months ago, after which a roach was discovered in the defendant's ashtray..."