On arrest, you're required to provide your name and address, not proof. For the absolute majority of UK adults, it takes exactly 2 minutes to verify that data against public records - passport, driving licence, council tax, voter registration.
Lying in that situation is a separate criminal offence all of its own.
>satisfy some shit algorithm that misidentified you as some known threat
Matches with a confidence rating below 0.64 are automatically deleted; above 0.7 is considered reliable enough to present to a human operator. Before any action is taken, a serving police officer must verify the match, and on arrest the match is verified against the person in front of them.
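The triage rule described above can be sketched as a simple threshold check. This is only an illustrative sketch: the 0.64 and 0.7 figures come from the comment, but the function name and the handling of scores between the two thresholds are my assumptions.

```python
def triage_match(confidence: float) -> str:
    """Route a facial-recognition match by confidence score.

    Hypothetical sketch: scores below 0.64 are discarded outright,
    scores above 0.7 go to a human operator for verification, and
    the grey zone in between is assumed to be held with no action.
    """
    if confidence < 0.64:
        return "delete"              # automatically deleted
    if confidence > 0.7:
        return "refer_to_operator"   # a serving officer must verify
    return "hold"                    # assumed: no automatic action
```

Note the human stays in the loop: a "refer_to_operator" result only flags the match for a person to check; it never triggers an arrest by itself.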
>What if your child falls victim to a false identification
The age of criminal responsibility is 10, and absent any personal identification parental identification is the standard everywhere.
>15-year-old Child Q
The good old slippery slope fallacy. Both the officers who strip searched that child were fired for gross misconduct. North of 50,000 children are arrested each year and this happened once.
>Do you really want more unnecessary interactions with the police for yourself or those you care about when your "suspicious behaviour" was having an algorithm judge that your face looked like someone else's?
Thing is, 12 months on, 1,035 arrests and over 700 charges later, that hasn't happened, because the point of testing the scheme thoroughly was to stop it from happening.
Those 'niche' forums you mention are explicitly excluded from the Act.
Apple made the change to Advanced Data Protection in advance of the bill being finalised; now the government has gone in another direction.
All the online safety act does is implement online the law as it stands IRL. British folk have been using the same ID verification systems to validate identity for nightclub admission, passport applications, driving licence applications, benefits claims, state pension claims, disclosure and barring checks, tax filings, mortgage deeds, security clearances, job applications, and court filings since 2016.
All the reaction is just pearl clutching - 5 million checks a day are being performed, the law itself is wildly popular with 70% support amongst adults after implementation.
There are three levels of checks - IAL1 (self-asserted, low confidence), IAL2 (remote or physical proof of identity), and IAL3 (rigorous proof with biometric and physical presence requirements).
IPA 2016 affords police access to your domain history, not content history, provided police can obtain a warrant from a senior High Court Judge. The box which stores the data is at ISP level and is easily circumvented with a VPN, or simply not using your ISP's DNS servers.
IPA 2016 doesn't exempt politicians from surveillance. It includes specific provisions for heightened safeguards when intercepting their communications. The Act establishes a "triple-lock" system for warrants targeting members of a relevant legislature, requiring approval from the Secretary of State, a Judicial Commissioner, and the Prime Minister. This heightened scrutiny is in recognition of the sensitivity involved in surveilling politicians, particularly given the surveillance of Northern Irish politicians and others in the 1950s, 60s, 70s, and 80s.
Part III of the Regulation of Investigatory Powers Act 2000 (in force 1 October 2007) and Schedule 7 of the Terrorism Act 2000 provide powers over encryption keys, passwords, etc. Section 49 of RIPA can be used to force decryption, Section 51 to compel supply of keys or passwords. These are identical to the powers police have IRL over safes, deposit boxes, etcetera, and the penalty for non-compliance is identical.
You cannot use encryption or passwords to evade legal searches with a scope determined by a court on the basis of evidence of probable cause shown to the court by the entity requesting the search. A warrant from the High Court is required for each use.
Notable cases:
- Blue chip hacking scandal - corrupt private investigators were illegally obtaining private information on behalf of blue chip companies.
- Phone hacking scandal - corrupt private investigators were illegally hacking voice mail on behalf of newspapers.
- Founder of an ISP using his position to illegally intercept communications and use them for blackmail.
> Those 'niche' forums you mention are explicitly excluded from the Act.
No, they are not.
> Our research indicates that over 100,000 online services are likely to be in scope of the Online Safety Act – from the largest social media platforms to the smallest community forum. We know that new regulation can create uncertainty – particularly for small organisations that may be run on a part time or voluntary basis.
Yes, they are in scope, but a "small community forum" has little to do beyond completing and keeping a few self-assessments on file, just in case. There is no requirement to implement age verification across the board (hence why the current official guidance targets only porn sites in relation to age verification).
You are being facetious: "priority illegal content" is the sort that is obviously very unlikely to be encountered on a "normal" small community forum. So this is really no more than a box-ticking exercise.
Regarding age verification, the OSA explicitly states that if you ban all such content in your T&Cs you do NOT need to have age verification.
I take it you didn't read your own link, the language used is "services".
If you happen to be running the UK panty wetters forum from your own server, then you have a problem, but grandma Jessie's knitting circle is explicitly not in scope.
YOUR link goes on to say
>the more onerous requirements will fall upon the largest services with the highest reach and/or those services that are particularly high risk.
Even if your forum falls in scope, you're only required to do a risk assessment; if at that stage you are likely to have a lot of underage users, then there might be an issue.
However, if you're not an adult site, you only need to comply by providing the lowest level of self certified check. Handily, most of the big forum software providers have already implemented this and offer a free service integration.
> I take it you didn't read your own link, the language used is "services".
I do love it when people lie and then try to get sassy when called out.
> Even if your forum falls in scope, you're only required to do a risk assessment; if at that stage you are likely to have a lot of underage users, then there might be an issue.
I also like it when people who accuse others of not reading prove themselves incapable of reading - as pointed out below, what I linked is required regardless of the assumed age of your userbase.
Overdosing is not a crime; it's not even the job of the police to help, and possession of drugs is being ignored by most forces because an arrest takes two officers off frontline services for four hours, when it will most likely result in a caution.
>Here, if the cop sees someone overdosing, they immediately call the ambulance, not walk by and do nothing.
This unevidenced claim is probably nonsense in any case, no police officer would simply walk by. They may very well walk by and talk into their radio to summon the right kind of help, or they may be responding to a higher priority call.
Just because your mate Bob claims they saw something, doesn't mean Bob had any real idea what was going on.
It's like the old saw about a window blind for a hospital ward costing £200, when you can buy one for £20 elsewhere. Thing is the one for £20 doesn't come with a specialised coating that eliminates bacterial or viral spread, or with a bloke that installs it according to the relevant safety regulations, or the supervisor who certifies the installation. It certainly doesn't come with a number you can call to fix the blind if there's a problem with it that includes on site service.
You and I have very different experiences with police officers. That police officers may walk by someone overdosing is hardly a claim that needs evidence, in my experience, because it's so widely understood to be true.
I saw it on video (inb4 deepfake), I did not hear it from Bob. So yeah, the cop in London did just simply walk by and did nothing. I can give you the video if you so want.
Wait, where's "here"? You just said you'd seen the police do nothing about somebody overdosing - and now you're saying that's exactly what they never do, where you are. Wherever that is.
All positives are verified by humans first before action is taken, all the system does is flag positives to an operator. Once verified, then the action movie starts.
Match quality below 0.64 is automatically discarded; above 0.7 is considered reliable enough for an enquiry to be made.
So far ~1,035 arrests since last year resulting in 773 charges or cautions, which is pretty good when you consider that a 'trained' police officer's odds of correctly picking a stop and search candidate are 1 in 9.
In the UK you don't have to provide ID when asked, appropriate checks are made on arrest, and if you lied you get re-arrested for fraud.
The system has proved adept at monitoring sex offenders breaching their licence conditions - one man was caught with a 6-year-old when he was banned from being anywhere near children.
Before anyone waxes lyrical about the surveillance state and the number of CCTV cameras, me and the guy who stabbed me were caught on 40 cameras, and not a single one could ID either of us.
> "In the UK you don't have to provide ID when asked"
Well if you are suspected of a crime they can arrest you if you refuse to identify yourself. I 'suspect' that being flagged by this system counts as such if you match someone who is wanted or similar.
You can't make an arrest on the basis of refusal to verify identity, unless a specific law is in play, or the Police officer has proof you are lying.
If the police have probable cause to suspect you've committed an actual crime, then you have to ID yourself, and you are entitled to know what crime you are suspected of. Yes, facial recognition does count, but it has to be a high-confidence match (>0.7), verified by a police officer personally after the match is made, and verified again on arrest.
If you are suspected of Anti-Social Behaviour then you have to ID (Section 50 of the Police Reform Act)
If you are arrested, then you have to provide your name and address (Police and Criminal Evidence Act 1984).
If you are driving, you have to ID (Section 164 of the Road Traffic Act).
Providing false information or documents is a separate criminal offence.
Essentially, police can't just rock up, demand ID, and ask questions without a compelling reason.
> You can't make an arrest on the basis of refusal to verify identity, unless a specific law is in play, or the Police officer has proof you are lying
> If the police have probable cause to suspect you've committed an actual crime, then you have to ID yourself, you are entitled to know what crime you are suspected of
It's always been my impression that this kind of ambiguous phrasing combined with the power imbalance gives the public absolutely no protection whatsoever. Let's say you don't want to provide ID: the copper could come up with some vague excuse for why they stopped you / want your ID. Good luck arguing with that
>the copper could come up with some vague excuse for why they stopped you / want your ID.
In which case, their sergeant will tear them a new one, right after the custody sergeant has finished tearing their own, because the careers of both of those people rely on supervising their coppers and their arrests. If the custody sergeant has to release someone because the copper can't account for themselves, that is a very serious matter. Sergeants can smell a bad arrest a mile away.
The copper has to stand up in a court of law, having sworn an oath, and testify on the reasonable suspicion or probable cause they had. If they are even suspected of lying, that's a gross misconduct in a public office investigation.
Assuming they weren't fired over that, any promotion hopes are gone, any possibility of involvement in major cases or crime squads, hope of a firearms ticket, advanced driving, or even overtime are gone. Their fellow officers will never trust them to make an arrest again.
It's not consequence free, I'm not saying it doesn't happen, or that some officers rely on you not knowing your rights, but it is a serious matter.
In the UK Bolt is an alt-Uber app, which for some journeys is cheaper than Uber.
I had an interesting conversation with a Bolt driver recently, after he remarked surprise that I was a 'decent' passenger; it turns out Bolt is the app for everyone banned from Uber.
Many of the drivers work multiple apps and apparently quite a few are refusing to pickup Bolts or treating Bolts with suspicion because Bolt doesn't have the same driver protections as Uber.
When I ran a nightclub ~20 years ago we would share photos of people we had banned, so other venues would also ban those patrons. That left them to the 'trouble pubs' which were consequently heavily policed. What is happening now is no different.
Ultimately there are groups of people within society that can't behave themselves and ruin the experience of [whatever] for everyone else, and society acts accordingly.
Because it's a great deal easier than replacing the logic board after the user ripped the Ethernet port out of it or filled the USB ports up with crap.
I repair used macs. The number 1 problem on all the pre-2012 MacBooks is damaged ports.
Not for nothing, all of these arguments were made when Apple switched from ADB to USB, to Firewire, from 30pin to lightning, and they're being recycled again for the switch to USB-C.
The answer to your question is that each one does something better than the last in terms of portability or data transfer.
My working assumption has always been that Apple makes decisions like this just because they can, frankly.
I would do the same thing if I had the same power to move markets that they do - no need to put up with legacies of the past when I can just force everyone to follow me toward whatever I consider the future to be.
USB provided device portability over ADB and higher transfer speeds; FireWire provided faster transfer speeds and networking features; Lightning doubled the data transfer rate available to any comparable device in 2012 and stopped ports breaking due to 'fluff'.
USB-C provides zero insertion force sockets, a single replacement for any plug of any kind from monitors to networking, 24 pins, and 40Gbps speeds.
Reliability. Speed. Adaptability. User convenience.
Having spent this week doing compliance for my small-business customers, I can say the cost is not zero, but it's really not much at all - I've done full compliance for six companies and it cost less than £250 each (one of those clients is a large NGO).
This guy doesn't like regulation and is playing to the crowd for sympathy.
The secret is bald-faced lying. The quoted price might barely cover an updated privacy policy from a lawyer, and nothing else. Not one line of code changed.
Let alone full review of every system and legal review of the DPAs you have to sign and or create with every single co and processor.
Not the OP, but it's pretty straightforward for most people (including the author of TFA). You need to identify what private information you collect. You need to decide what lawful basis you are using to collect that data. If you have no lawful basis, you have to stop collecting that data. When you collect the data you need to notify the user under what lawful basis you are collecting it. If you are using the consent lawful basis, you need to get consent in an opt-in manner. You need to record what statement you have shown to the user and any consent that you receive.
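The record-keeping step above can be sketched as a minimal data structure. Everything here (the class name, the fields) is hypothetical; the point is simply that you store the exact statement shown alongside the opt-in decision.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """One record per consent event: the verbatim statement shown to
    the user, the lawful basis claimed, and the opt-in decision."""
    user_id: str
    statement_shown: str   # exact wording presented to the user
    lawful_basis: str      # e.g. "consent" or "contract"
    opted_in: bool         # must come from an explicit opt-in action
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Persist one of these per consent prompt, so you can later produce
# the exact wording a given user agreed to.
rec = ConsentRecord("user-42", "We use your email to send invoices.",
                    "consent", True)
```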
If you are using only contract basis for the data it's really easy. You tell them that you are using their data for purposes of fulfilling the contract. The great thing about contract basis is they can't object. The only thing you need to do is to inform the customer of any 3rd parties you send their information to in order to fulfil the contract.
It only gets complicated if you want to use the data for other things. For legitimate interest (which is essentially exactly the same as the laws that are currently on the books) you need to be able to exclude processing the data if someone objects. You also need to make sure that you don't delete their data if they exercise their right of removal (which is completely bass-ackwards, but whatever). Consent is similar actually, but you have to get the consent up front. The other lawful bases are very unlikely to show up in most organisations.
I think the main problem with most organisations (and it's the case with the company I work for at the moment) is that control of private information is very loose. For example, we use several SaaS systems for our marketing. Some of them are clearly unnecessary, so we either have to remove that functionality or get consent. So there are lots of discussions about whether it is worth a huge wad of text thrown at the user in order to have cat emojis or some stupid thing like that.
The other main problem is that if you want to use something other than contract basis, you need to build something that allows the user to exercise their rights. It can be a manual process, but if you have a lot of users it might threaten the margin.
Anyway, long story short: If you are only gathering the information that you need to do the work you are doing, there is likely very little (or in a lot of cases I bet nothing) to do. If you are gathering the information to use for your own purposes, then there may be a lot that you need to do.
Not to put too fine a point on that: personally I highly approve of this. I couldn't care less if somebody's business model is destroyed because it is now too expensive to collect information that you don't need to do the job. Even in the company I work for, where we don't actually use the data for nefarious purposes (AFAICT ;-) ), we're finally having some long overdue conversations about what stupid SaaS crap we're using under the hood. Not to be unkind, but I utterly fail to understand how marketing people fall for the same lies that they spew out themselves... "If only we send our customer's data to this service, they will find a way to drive more business our way! And we don't even have to pay them!" Yeah... right...
I don't do any real business in the EU, but I'm a fairly successful online marketer. Being able to flexibly use SaaS businesses is so, so valuable for testing and iterating on marketing plans. I would fight pretty hard against a company policy that limited it, since today's marketing test is tomorrow's major revenue driver.
I think you misunderstand what I was saying. We collect data in our system. We use that data for marketing under legitimate interest. Sometimes marketing would like more analysis done on the data than we have time to implement. They hear about some SaaS business that will take the data and give them a marketing plan (Yay! No work to do!). They ask us to ship over all the data to the SaaS business. Sometimes it's a good idea because the SaaS business is legitimately providing an analysis service. Almost all of the time the SaaS business is providing nothing beneficial and instead just scooping up personal data that they sell. It's difficult for us technical people to explain why we can't just arbitrarily ship data over to some random SaaS. With GDPR it will be much, much, much better. Essentially I think it will shut down the fly by night operations that are just sucking data and offering nothing in return. But on the flip side it will mean that these analysis operations will have to charge a reasonable fee for their services (instead of selling the data they collect). This, in turn, will prompt the marketing people to have to do due diligence because they actually have to spend money out of their budget. No more "It's free, so why not?"
Similarly, we sometimes get asked to incorporate silly things into our service because the marketing people think it will create engagement. Again, these are free SaaS businesses that are scooping up data and selling it. Although I made up the cat emoji thing, it's not that far off what we sometimes get asked to incorporate. With GDPR, those businesses are going to have to charge for their services, and that's going to have to come out of our budget. We don't have to argue "We're not shipping our whole customer database over to a SaaS just so we can have cat emojis on the system". Similarly, it makes our systems simpler, because if they really want cat emojis, we can implement them -- it's just not "free" (it never was, but it's hard to have that conversation sometimes).
I probably should have left the SaaS thing out of my explanation because it's confusing and only slightly related to what I was talking about :-). Like I said, we use some great services for marketing and will continue to do so under GDPR.
Do a lot of SaaS businesses sell personal data as their business model? When I think of SaaS I think of paid subscription access to a piece of hosted software.
For example, my business model might be to ask you for your login and password information for your bank so that I can help myself to the contents of your bank account. In return I'll send you a newsletter on how to get rich quick :-)
I doubt you are asking seriously, but in case you are, the distinction is: if I need the information to complete the contract, then it is under contract basis and I'm allowed to use it for that purpose. If it's not needed for completing the contract, but I have a legitimate reason for using the data anyway (kind of vague, but includes marketing -- basically all the stuff that was legal before GDPR) I can do so, but I need to tell you I'm doing it. You can object and then I have to stop. If I have no legitimate reason for using the data, but I want to anyway, I can still do it. I need to ask for your consent (which has to be opt in). My service can't depend on you opting in (because I have no legitimate reason for needing the data). I can't deny service just because you opt out. You can also withdraw your consent at any time.
So in my silly example at the top, I could literally ask for consent to use your login details for you bank. If you agreed, I could use them. However, since I have no legitimate interest in your bank login details (other than I wanna look at your bank balance), I can't make my service depend on that.
If your business model is based on making money from data that you have no legitimate interest in and you have no consent for... well, I really, truly have no sympathy at all. I understand that some people may have a different opinion, but I don't think mine is really that unreasonable.
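The decision flow described in the comments above can be sketched as a toy function. This is an illustration of the reasoning only, not legal advice; the function name and parameters are invented, and the real balancing tests are far more nuanced.

```python
def lawful_basis(needed_for_contract: bool,
                 legitimate_interest: bool,
                 has_opt_in_consent: bool,
                 user_objected: bool) -> str:
    """Toy sketch of the lawful-basis decision flow described above."""
    if needed_for_contract:
        # Needed to complete the contract: may process, no objection right.
        return "contract"
    if legitimate_interest and not user_objected:
        # Must inform the user; processing stops if they object.
        return "legitimate_interest"
    if has_opt_in_consent:
        # Opt-in, withdrawable, and cannot be a condition of service.
        return "consent"
    # No basis left: stop collecting/processing this data.
    return "no_basis"
```

The ordering matters: contract basis is checked first because the user cannot object to it, while consent is the fallback precisely because it can be withdrawn at any time.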
This seems as good a place as any to challenge some of the simplifications that are often given in defence of the GDPR.
Not the OP, but it's pretty straightforward for most people (including the author of TFA). You need to identify what private information you collect.
Fair enough.
You need to decide what lawful basis you are using to collect that data. If you have no lawful basis, you have to stop collecting that data.
Right, but probably the most practically relevant basis for anything non-trivial will be legitimate interests, which of course involves balancing tests. Even today, just a week before this all comes into effect, there is little guidance about where regulators will find that balance.
If you are using consent lawful basis, you need to get consent in an opt-in manner. You need to record what statement you have shown to the user and any consent that you receive.
But this is retrospective and stronger than the previous requirement. Even if you have always been transparent about your intentions and acquired genuine opt-in from willing users, you are now likely to be on the wrong side of the GDPR if you can't produce the exact wording that was on your web site or double opt-in email a decade ago. The most visible effect of the GDPR so far seems to be an endless stream of emails begging people to opt in to continue receiving things, even where people had almost certainly genuinely opted in already before.
For legitimate interest (which is essentially exactly the same as the laws that are currently on the books) you need to be able to exclude processing the data if someone objects.
Not quite. There also appears to be a balancing aspect here, though with some additional complications involving direct marketing, kids, and various other specific circumstances.
Take a common example of analytics for a web site. These may include personal data because of things like IP addresses or being tied to a specific account. Typically these have relatively low risk of harm for data subjects, but if for example a site deals with sensitive subject matter then that won't necessarily be the case either.
A business might have a demonstrable interest in retaining that data for a considerable period in order to protect itself against fraud, violation of its terms, or other obviously serious risks. Maybe the regulators will consider that those interests outweigh the risk to an individual's privacy if their IP address is retained for several years, at least in some cases. Maybe they will find differently if it's the web site for a drug treatment clinic than if it's an online gaming site.
Even if the subject matter isn't sensitive, where does the line get drawn? A business that offers a lot of free material on its site to attract interest from visitors might itself have a legitimate interest in seeing who is visiting the site and tracking conversion flows that could involve several channels over a period of months. This is arguably less important than protecting against something like fraud, but nevertheless the whole model that provides the free material may only be viable if the conversions are good enough. But equally, maybe it's not strictly necessary for the operation of the site and whatever services it offers for real money, so should the visitor's interest in not having their IP address floating around in someone's analytics database outweigh the site that is offering free content in exchange for little else in return?
That's just one simple, everyday example of the ambiguity involved here, and as far as I'm aware the regulator in my country has yet to offer any guidance in this area. Would any of the GDPR's defenders here like to give a black and white statement about this example and when the processing will or won't be legal under the new regulations?
The other lawful bases are very unlikely to show up in most organisations.
I would think the basis that you have to comply with some other law is also likely to be quite common. It will immediately cover various personal data about identifying customers and recording their transactions for accounting purposes, for example. But again, since that will include the proof of location requirements for VAT purposes in some cases, how much evidence is a merchant required to keep to cover themselves on that front, and when does it cross into keeping too much under GDPR?
The other main problem is that if you want to use something other than contract basis, you need to build something that allows the user to exercise their rights.
And once again, those rights are significantly stronger under the GDPR, particularly around erasure or objecting to processing. Setting up new systems that comply may not be too difficult, but what about legacy systems that were not unreasonable at the time but don't allow for isolated deletion of personal data? To my knowledge, there is still a lot of ambiguity around how far "erasure" actually goes, particularly regarding unstructured data such as emails or personal notes kept by staff while dealing with some issue, or potentially long-lived data in archives that are available but no longer in routine use. And then you get all the data that is built incrementally, from source control systems to blockchain, where by construction it may be difficult or impossible to selectively erase partial data in the middle.
Not to put too fine a point on that, personally I highly approve of this. I couldn't care less if somebody's business model is destroyed because it is now too expensive to collect information that you don't need to do the job.
But what if an online service's business model relies on processing profile data for purposes such as targeting ads to be viable, and regulators decide that a subject's right to object to that processing outweighs its necessity to the financial model?
It's easy to say a lot of people might not like being tracked, but on the other hand, if services like Google and Facebook all disappeared in the EU as a result of the GDPR, I'm not sure how popular it would be. There are two legitimate sides to this debate, and neither extreme is obviously correct.
Thank you! This post starts to show some of the huge complexities that GDPR has for business and their understanding of what the terms of the law mean.
A point worth making is that statements of a law are often defined not by the language but by the rulings in lawsuits that occur around those statements, and that is what most companies and lawyers are waiting for: what will courts rule when these lawsuits happen?
The biggest issue that I have heard of (I'm no expert) is: what does the right to be forgotten actually mean? Does it mean all your backups are now illegal, as you are retaining the customer's information after they asked you to remove their records?
I think some of the fear that smaller businesses have is that this will encourage lawsuits until people understand how the courts will rule on each item.
I think the parent's reply is a good one. We could probably debate some of the finer points, but I think when we get some time to see how it all shakes out in the end we'll have a better vantage point.
I can't find it right now (and I have to get back to work), but there is a reasonableness requirement for requests. So things like backups might be covered by that. I wish there was some direction on that because it's a problem for me at work as well.
My opinion is that the regulation's view is that all personal data retention should be temporary. There should be a defined point at which the personal data is deleted: either when it's no longer necessary for the contract, or when you no longer have a legitimate interest in it, or when the user asks for its removal.
Up to this point, most of us have been building databases with the intent of retaining the information indefinitely. So we never thought about this. Although I'm a fan of this law, I admit that it's going to be troublesome transitioning from where we were to where we need to go.
And as the parent briefly stated, immutable databases are going to be a serious problem.
I think the UK agency had some text on erasure and backups, and it basically boiled down to this:
If a data subject requests their data to be erased, you should remove their data from active systems so that it is no longer being processed, but you don't have to remove it from backups or other passive systems. You should however store some sort of marker so that if you need to restore data from backups, the data subject's data will be re-erased or otherwise stopped from entering active systems again.
And if a data subject asks, you have to tell them how long you store your backups of their personal data.
I think that's perfectly reasonable. And if your backup retention policy is "forever", now might be a good time to re-evaluate that policy.
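The restore-time re-erasure described above can be sketched roughly like this. The suppression list, the dict-as-database, and the function names are all assumptions for illustration, not any regulator's prescribed mechanism.

```python
# Hypothetical sketch: a persistent set of erased subject IDs that is
# consulted whenever data comes back out of a backup.
suppression_list: set[str] = set()

def erase_subject(subject_id: str, active_db: dict) -> None:
    """Erase from active systems and record a suppression marker."""
    active_db.pop(subject_id, None)
    suppression_list.add(subject_id)  # keep until the backups expire

def restore_from_backup(backup: dict, active_db: dict) -> None:
    """Restore a backup, re-applying erasures before data goes live."""
    for subject_id, record in backup.items():
        if subject_id not in suppression_list:
            active_db[subject_id] = record
```

One wrinkle: the suppression list itself holds identifiers, so it should store only the minimum needed to match records, and be cleared once the corresponding backups have expired.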
Neither the UK nor the EU previously had any general provision for a right to erasure. At EU level, considerable waves were made when the "right to be forgotten" ruling was issued, but that came from a court that was considering a specific case.
I think some of the fear that smaller businesses have is that this will encourage lawsuits until people understand how the courts will rule on each point.
That concern really is unfounded, though. The primary means of enforcement of the GDPR will be action by national data protection regulators. It isn't some carte blanche for trigger-happy lawyers to start suing every business that gets a little detail wrong or anything like that.
The general concern that the picture is unclear until something happens to clarify it is, unfortunately, much better founded.
> But what if an online service's business model relies on processing profile data for purposes such as targeting ads to be viable, and regulators decide that a subject's right to object to that processing outweighs its necessity to the financial model?
forgive my frank language, but too fucking bad.
edit: my rights always outweigh your profits. Sorry.
The only problem I see here is needing data based on contract obligations. I have seen lots of sites packing data collection into a privacy policy or some shady contract, thinking that this counts as legitimate interest. But legitimate interest is actually the hardest part of the GDPR, even if most people treat it as a workaround. If you can provide the service without some piece of personal data (financial claims aside), you can't file it under "better user experience" as a legitimate interest. I presume that after the 25th, Google will stop tracking searches for EU users, for example. Legitimate interest has a long recital behind it and is a real problem to get right unless legislation requires the data. I would stick to consent for everything else. Just mentioning.
There are only two legitimate-interest bases that I considered for my service: "security" and "better user experience".
The data collected under the former is simply the IP and a timestamp in webserver and app logs, usually purged within 7 days; any user data included in backups is purged after 3 months.
"Better user experience" is not really personal data but I included it anyway: browser type (Mozilla/Edge/etc.), viewport resolution, page-load time, OS. And it's not stored in a way that allows correlating them.
> processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.
Let me put it this way: if I found out this guy was using my IP address and machine config to do analytics and perform "security" checks, I'd report him to my regulator. Dead serious.
"Analytics" is not what his company is for, ergo, using my Personal Data to do analytics isn't okay. He sure as hell isn't doing it for my benefit. I'm also not hiring him for security, so the same reasoning applies: he doesn't get to store my IP address in his logs without asking.
And when I say "no" to his opt-in modal, he'll still have to provide me non-degraded service. The fact that he can do so is yet another indicator that the data collection is not a legitimate interest.
The security of their network is a legitimate interest. The regulator would see that alone as sufficient reason to gather data, especially if that data is mostly discarded 7 days later.
No. They could start looking at IPs once they actually had a security problem, but there's no way in hell they "need" to write my IP address hither and yon to protect their network.
Look, you can definitely discover and monitor for problems by simply hashing IPs and storing the hash instead. Once you've detected a potential problem (say, a lot of requests from the same hash), only then do you have a "legitimate business need" to record the actual IP addresses and do some short-term analysis of the situation.
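A minimal sketch of that hash-first approach. One assumption worth flagging: a plain hash of an IPv4 address is trivially brute-forced over the whole address space, so a keyed hash with a secret (ideally rotated periodically) is used here instead; the key name and threshold are illustrative.

```python
import hashlib
import hmac
from collections import Counter

# Hypothetical rotating secret; with daily rotation, tokens from different
# days can't even be correlated with each other.
SECRET_KEY = b"rotate-me-daily"

def ip_token(ip: str) -> str:
    """Keyed hash of an IP, so logs never contain the address itself."""
    return hmac.new(SECRET_KEY, ip.encode(), hashlib.sha256).hexdigest()

request_log = Counter()  # token -> request count in the current window

def record_request(ip: str, threshold: int = 100) -> bool:
    """Count requests per hashed IP; return True once a token crosses the
    threshold - only then would you start short-term logging of real IPs."""
    token = ip_token(ip)
    request_log[token] += 1
    return request_log[token] > threshold

# Simulate a burst from a single address:
flagged = any(record_request("203.0.113.7") for _ in range(150))
```

Until a token is flagged, nothing in the log identifies anyone; once it is, you have a concrete, documentable reason to capture the real address for a short investigation window.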
The spirit of the law is simple: if you don't absolutely need to store personal data, DON'T. Just don't. Store something else. Or just drop the data into /dev/null. Saying that you'll delete soon the personal-data-you-don't-need isn't sufficient.
And really, if this is the way GDPR compliance is going to go, "muh security" is quickly going to gain the reputation as the bullshit reason shady people trot out who want to disobey the law. People who actually care about security should push back on that strongly.
I don't think this relates to you, but maybe just for others: "better user experience" is not data your website needs in order to work. If providing it means handling PI (for analytics the GA way rather than locally, for instance), you can't just flush it down under legitimate interest.
As a rule of thumb: you can use it for things where you need PI for your service to work. It is normal to request an address if you operate an online shop, since you can't deliver the goods without it, while analytics is something users don't need and is not required for your service to operate.
I was just writing a complaint letter to my phone/ISP company, where they shovelled marketing, questionnaires, threat assessment (not IT security, customer assessment), analytics and a few other fishy things into legitimate interest, without even providing information about which data they use and why exactly. Legitimate interest is a really nasty thing and it is hard to get right; it is not a free "get out of jail" card.
They struggle with it for the same reason people on the political left always struggle to understand why some people oppose new regulations: the question of whether and how to regulate commercial activities is always a proxy for deeper underlying differences in how people view the world. GDPR is just a proxy fight between the left and the right and is showing all the same characteristics.
Consider adventured's sibling post - it quite astutely points out that GDPR discussions are much more vitriolic than you'd expect for discussions of the minutiae of data handling. People who say that GDPR compliance is hard are being attacked on a personal level. He explains it as 'emotional investment' in GDPR but I don't think that's a good explanation; the people arguing most strongly for it are also those saying it's not much work, so that seems backwards. You'd expect people who put in the most effort to be most emotionally invested in it.
There's a much better explanation available: your view on GDPR is a direct consequence of your assumptions about human nature. If you believe in the existence of benign and enlightened technocrats then GDPR seems like excellent progress towards building a better world - its extreme vagueness and severe penalties are exactly what's needed to foster obedience to technocratic elites. People who complain about this are just being unnecessarily awkward ... just be reasonable, after all, and you'll be fine! The EU are reasonable, so if you're reasonable too, you have nothing to fear! From this perspective, anyone who objects to GDPR or actually decides compliance is impossible must - almost by definition - be being unreasonable. What are they hiding? Why can't they just get on board? The only answer available is that they have flawed characters, and any points they make about grey-area debatable things like cost:benefit ratios must be some sort of obfuscation.
If on the other hand you believe the whole idea of wise and beneficent bureaucrats is naive, then GDPR looks a hell of a lot like a power grab by the very sort of people who shouldn't be able to grab power. Vagueness is of deep concern because it's in the shadows of vagueness that abuse can be found, and when a law is nothing but vagueness, it even makes sense to question the motives of those who created it - that's a problem because lots of self-styled Europeans have bought into the EU's utopian rhetoric and can't separate criticism of the EU from criticism of themselves and their desired future.
There's no real scientific way to prove whose assumptions about human nature are right. The USSR was a rare example of a real-life experiment on the question, and for a long time it suggested that the American-style, conservative, "small weak government is better" mentality had superior results. But that was decades ago and many have forgotten or weren't alive back then, so now rule by technocratic dictatorship seems attractive again.
As a consequence GDPR discussions will always have the same flavour as Clinton v Trump debates, or Brexit debates, or whether to restrict spending on political campaigning. They are ultimately about the same issues.
You're bringing your US-American assumptions about politics and left vs right into the context of European politics where they are a poor fit. The world is not Democrats vs Republicans.
There is no GDPR debate or fight: it's done, it was done six years ago, and the only people pushing back are American companies who are unhappy that Europeans don't want their data hoovered up by corporations they have no control or oversight of, for who-knows-what purpose.
Sorry, but do you want to say that the EU has no left and right in politics (the parent post did not mention Democrats or Republicans)? Or that everyone in the EU is unanimously happy with the GDPR? Seriously, if a law only starts being applied a long while after it's passed, it's not unheard of for a debate to begin once people actually start to care.
Maybe I'm wrong, but I think the parent's example is not US-specific at all and is applicable to just about any country where there are people who lean toward different beliefs (that "direct consequence of your assumptions about human nature" part of the post).
The assumptions may be wrong, but that was generally constructive.
First of all, come on, obviously I'm not saying there's no left and right in EU politics (and in the national politics of EU countries), but what those lefts and rights are concerned with doesn't match 1:1 with the issues under debate in American politics.
Partly because there is a much broader political spectrum -- Democrats in the US roughly line up with, for example, the Conservatives in the UK or the CDU in Germany -- but also because it's just a different set of issues and preoccupations.
I do think it's fair to say that within the EU there is a general consensus about the importance of data privacy, and I also don't detect any resistance to the GDPR in general, or any question that it should be repealed. (That was partly sealed by the revelation of US spying on Europeans a few years ago, which hasn't been forgotten.)
Second, if I'm honest, I find the whole "assumptions about human nature" framing a bunch of hokum and quite the opposite of constructive. Nothing about the GDPR has to do with "obedience to technocratic elites"; it is in fact about rejecting the ability of institutions which are not democratically accountable to gather personal data and monitor people, or make decisions that affect their daily lives, without their informed knowledge and consent.
GDPR is not a "power grab" (hah!), it's about distributing the power that comes from control of information more evenly. The EU has a lot of flaws, but this is one of the most democratic and equalizing bits of regulation that they've produced, and frankly the concessions it makes to large companies are huge.
I don't accept the argument that to be in favor of this I must be in favor of USSR-style totalitarianism. If anything, the inefficient planned economy of VC-funded startups, with their cults of personality around founders, that want to collect data and influence populations with impunity are the petty dictators of the 21st century. Personal rights should trump the rights of corporations, and I am deeply suspicious of people who would equate the two.
But that's all making a mountain out of a molehill: most of what GDPR does is harmonize existing regulation across the EU to make it easier for companies within and outside of Europe to do business here, adds enforcement teeth to the regulatory agencies and harmonizes the penalties, and sets out in actually rather specific detail what is required to be compliant, while giving everyone years to implement this regulation.
If people don't want to comply with GDPR and just block all EU users, then that will make the internet a nicer place for us, so by all means go ahead!
You probably haven't looked then. Despite assumptions elsewhere, I'm from Europe and still live there. The idea that everyone loves the GDPR is naive. Only today I was working next to someone who was trying to figure out how it applied to her (tiny) business, and getting annoyed by the process. She's just copy/pasting the contents of an email she received into her own mail copy to avoid having to do extra work.
> Nothing about GDPR has to do with "obedience to technocratic elites"
No? I think you missed my points then.
The GDPR was created, is enforced by and serves the interests of regulators. It specifies so little it is essentially a direct grant of power to those people - they can do whatever they want within its framework and that framework allows nearly anything.
As for 'technocratic elites', did you see political parties campaigning on this issue? I sure as heck did not. Right now the hot topics in European politics are immigration, terrorism and economic growth. Not data protection.
> is in fact about rejecting the ability of institutions which are not democratically accountable to gather personal data and monitor people
Of course companies are democratically accountable - outside of monopolies (rare), you can just not trade with them if you don't like their data handling practices.
Interestingly enough, GDPR gets the most support from the center-right parties within the EU, in my experience because it's a law that protects individual's rights to their data. You'll find that it's not that popular among the left-wing member parties.
It's a deflection tactic by people who are emotionally invested in GDPR. Note the extreme emotionalism that GDPR draws out of its supporters. That makes it difficult for some of the supporters to have a rational discussion when it comes to the flaws of GDPR. They don't have a legitimate response to the point in question, so the easy approach is to attack the credibility of the person stating that they've struggled with compliance, rather than engaging in substantive discussion about the problems that GDPR generates for small businesses. The fear for the supporters is that if they admit to there being any flaws in GDPR, that will then act as a threat to GDPR (which they view as a monumental victory for privacy). They don't want to give an inch of ground, no matter the issue, because they're afraid of having GDPR diluted, taken away, or not spread to the rest of the planet.
This is also why in all cases you'll see the GDPR supporters go after the character of the site/service owner (including always questioning their motives to muddy the waters). It's an attempt to short-circuit any reasoned debate, to destroy the credibility of the opponent. This has happened numerous times on HN in the last month or two.
You still have data hygiene policies to enact which are confusingly legislated. Also legitimate interest is a loophole created to appease some lobbyists but the legislators declined to make clear anywhere because they don’t give a shit about commercial needs.
What exactly did that less than £250 get your customers in return?
Even if you had a business that was whiter than white in terms of compliance with previous data protection laws and had perfect documentation of all its data collection and processing activities, it would surely cost far more than that just for the time to write some basic notes on the extra things you now have to tell data subjects and/or your regulator, get them reviewed by a lawyer, incorporate them into the relevant policies, and send notifications to anyone affected about your updated privacy policy.
Is that £250 each just for GDPR compliance? That's one law in one region. Now multiply it by the number of legislative bodies worldwide and the number of relevant laws passed by each - how much does that cost?
If a business doesn't want to be compliant with the laws of all ~200 sovereign states in the world, they're most welcome to just select a single one and do business there (not the US though, as laws often differ state by state, so that's right out). I'm sure the competitors in the space will love having one fewer competitor.
Complying with the laws of 200 countries is negligible for a big business like Google, a significant burden for a small startup, and a prohibitive expense for a new open source project.
Are you compliant with the copyright laws in your company? Are you sure you have licenses for all the software you use? Have you audited the software you write to ensure that none of the programmers have included code without an appropriate license? How about patents? Are you sure that the software you write does not infringe on patents somewhere? There are people who will happily audit your company in exchange for a truckload of money... For some reason, most people don't think this is necessary.
Your risk under GDPR is similar to your risk under IP law. If you don't comply with the law and someone calls you on it, you might have legal proceedings against you. In most cases it's pretty obvious whether you are compliant with the law (well, to be fair, it's completely unobvious whether you are going to get randomly sued for patent infringement, but I digress...). If you have a very complex situation, then maybe it is worth some legal advice, but it's pretty freaking obvious whether you need the data you have collected in order to fulfil the contract or not.
An audit without certification will never give you anything that you could not have come up with yourself. So feel free to buy a GDPR audit but realize that you are just buying an opinion.
In the USA, the word "audit" is used to describe any process by which a company tries to determine if it's in compliance with some set of rules. Sometimes that process has special legal consequences, but it usually doesn't. The final deliverable is often literally called an opinion.
No lawyer or accountant has ever given me anything that I couldn't have come up with myself, with sufficient study. I still paid them, because the law is very complex and I have other things to do with my time. That's how any country with a nontrivial legal system works.
You seem to have great confidence that you understand how the GDPR will be enforced. I'd suggest that:
1. Not everyone knows as much about EU law as you do. This is especially true for people who don't live in the EU.
2. You might be wrong. Maybe GDPR compliance really is dead simple, and the lawyers who keep answering "it depends" are just cheating their clients; but from my experience in complying with similarly complex regulations, I wouldn't bet 20M EUR that's the case.
On arrest, you're required to provide your name and address, not proof. For the vast majority of UK adults, it takes a couple of minutes to verify that data against public records - passport, driving licence, council tax, voter registration.
Lying in that situation is a separate criminal offence all of its own.
>satisfy some shit algorithm that misidentified you as some known threat
Matches with a confidence rating below 0.64 are automatically deleted; above 0.7 is considered reliable enough to present to a human operator. Before any action is taken, a serving police officer must verify the match, and upon arrest verify it against the person in front of them.
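As a sketch, that triage process amounts to routing on two thresholds. The 0.64 / 0.7 values come from the comment above; the function name is illustrative, and the comment doesn't say what happens in the band between the two thresholds, so "hold" is a placeholder.

```python
# Illustrative triage of a face-match confidence score, per the
# thresholds described above (0.64 / 0.7). Names are hypothetical.

def triage_match(confidence: float) -> str:
    """Route a match score: auto-delete, hold, or escalate to an operator."""
    if confidence < 0.64:
        return "deleted"   # discarded automatically, never seen by anyone
    if confidence > 0.7:
        return "review"    # presented to a human operator for verification
    return "hold"          # band between the thresholds; unspecified above
```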
>What if your child falls victim to a false identification
The age of criminal responsibility is 10, and absent any personal identification, parental identification is the standard everywhere.
>15-year-old Child Q
The good old slippery slope fallacy. Both the officers who strip-searched that child were fired for gross misconduct. North of 50,000 children are arrested each year, and this happened once.
>Do you really want more unnecessary interactions with the police for yourself or those you care about when your "suspicious behaviour" was having an algorithm judge that your face looked like someone else's?
Thing is, 12 months on: 1,035 arrests, over 700 charges, and that hasn't happened, because the point of testing the scheme thoroughly was to stop it from happening.
What proof do you have that it doesn't work?