So the upgrade took a few hours to complete and didn't happen instantly, in defiance of the laws of physics. Next time play it safe and upgrade over the weekend.
The post is dated May 27. Is Google planning to announce a new feature for Apps this week, and is this some sort of preemptive PR attack?
1. There is no indication in the upgrade process that the upgrade would "take a few hours to complete". Quite the opposite: it indicated there would be no downtime.
2. The customer service reps seemed to have little insight into what was happening with the account, which is a bit scary. I'm always nervous about black-hole "processing windows" where all you can do is wait and hope for the best. It's bad when it's customer-facing; it's worse when it's customer-service-facing.
3. Upgrading on the weekend could potentially be more troubling, given that customer service may not be staffed at the quantity/quality needed to fix issues if they occur. I'm not saying this is or isn't the case for Google, but running into problems on a Saturday morning and getting a "call us back during normal business hours" is just as troublesome.
What should have happened in my opinion?
1. Google should document that there can be a temporary delay of x - x hours while the upgrade happens.
2. Florian should have scheduled the upgrade ahead of time with the team, letting them know that, worst case, there may be an outage of 'x' minutes/hours/days.
3. Google customer service should have better visibility into the upgrade process, for both the admin (customer) and the customer service rep, so that it is not a black hole of 'wait and hope'. Even a "step 1 of x", a "you are number # in queue", an estimated time, etc.
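To illustrate point 3, here's a minimal sketch of the kind of progress payload an admin console or support tool could expose so neither side is stuck waiting and hoping. Every name here is hypothetical; nothing reflects Google's actual systems:

```python
# Hypothetical upgrade-status payload a console or support tool could poll.
UPGRADE_STEPS = ["validate account", "migrate mailboxes", "reindex", "activate"]

def upgrade_status(current_step: int, queue_position: int, eta_minutes: int) -> dict:
    """Summarize progress as 'step i of N' plus queue position and ETA."""
    return {
        "step": f"{current_step} of {len(UPGRADE_STEPS)}",
        "step_name": UPGRADE_STEPS[current_step - 1],
        "queue_position": queue_position,
        "eta_minutes": eta_minutes,
    }

status = upgrade_status(current_step=2, queue_position=17, eta_minutes=240)
```

Even this trivial amount of state ("step 2 of 4, 17th in queue, roughly 4 hours") would turn a black hole into something both the admin and the rep could reason about.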
It wasn't that it took a long time. It's that the interface showed a state as if all the mail was deleted. That could have been devastating.
We recently upgraded to paid Google Apps for Business and we didn't get any downtime. This is how it is supposed to work.
Unfortunately we also had irritating problems after the upgrade. We upgraded because our customer support email address had run out of mail quota. Paid Google Apps has a 10x higher quota, but upgrading to premium didn't increase it.
After ringing customer service, they refused to increase it immediately and said it would be increased "in a couple of months' time".
Meanwhile hundreds of our customers were going unanswered.
I suppose I can imagine a scenario in which they would want to wait until after the credit card chargeback window before lifting a quota limit, but their support department should understand the paralysis this causes a business and offer to solve the problem.
We had to create a temporary support2@mycompany.com email and manually port email across which wasn't fun and played hell with our ticketing system.
> What law of physics necessitates that such an upgrade takes 6 hours?
The one where the customer isn't paying enough money to have somebody working 24/7?
If your business depends on a particular service, you don't accept something vague; you get things signed in a contract with legally enforceable SLAs. If you can't afford that, then you just have to live with shitty service.
It's Google. You never see the Google front page go down. There's an implied availability that Google uses to their advantage. People are comfortable with upgrading _because_ it's Google. Google wouldn't fuck you.
How do you get 24x7x365 coverage with only 4 people? The rule of thumb is that it takes about 5 people to fill one 24x7x365 position over the long term with absolutely no unstaffed gaps tolerated. So 50 people can fill 10 positions absolutely, positively, all the time, but the ratio doesn't scale down well to just 5 people and one desk. You can "fill" ten positions with far fewer than 50, but even at scale there will be a lot of time when only 8 or 9 people are actually there, hence the quotes around "fill". For instance, you can "fill" 10 positions with 30 people, but much of the time those supposed 10 positions will have only 7 (or even fewer) bodies actually on deck.
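A back-of-envelope version of that rule of thumb. The 75% availability figure is my assumption (vacation, sick leave, training, turnover eat into a nominal 40-hour week):

```python
# Rough arithmetic behind the "about 5 people per 24x7 seat" rule of thumb.
HOURS_PER_WEEK = 24 * 7          # 168 hours each seat must be covered
SHIFT_HOURS = 40                 # nominal full-time week
AVAILABILITY = 0.75              # assumed fraction of shift hours actually worked

effective_hours = SHIFT_HOURS * AVAILABILITY        # ~30 usable hours per person
people_per_seat = HOURS_PER_WEEK / effective_hours  # ~5.6 people per seat
```

With 4 people you can cover at most 4 x 30 = 120 of the 168 hours, which is why a small team inevitably has unstaffed windows.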
Depending on the business sector of course. If you're a stereotypical weekday business then a 6 hour outage at 9am on a weekday would be a disaster, but a 6 hour outage from midnight to 6am on Sunday morning wouldn't even be blinked at because no one cares.
I agree with you that midnight to 6am on a Sunday would probably be okay. It would just be a really bad time to get an email about an issue with a payment gateway, marketplace listings, or the website.
Really? Do you run a highly reliable email service yourself, then? I certainly would not use Gmail if that were the case. You should seriously look at why you are so reliant on email, because it is not really a reliable service: the mail servers on the other side could easily delay mail for 6 hours. It is only a best-effort, eventually consistent protocol with very long delays allowed (you don't normally get bounces for days if the mail is queued for retry).
Well, the world of business tends to rely pretty heavily on mail; what other methods of reliable messaging are there?
Sure, there are phone calls etc., but email is used when you need an actual record of something. Also, relying on mail servers to not bounce the emails and to actually deliver in the end is laughable; when mail servers do go down, this hardly ever works properly.
Well, websites (or APIs) are generally much more reliable forms of messaging, because you can get immediate feedback on processing. The OP said it was business critical; most corporate email is not business critical, and missing business critical email in a channel with spam, high volume unimportant messages, and also relying on a human to answer stuff is all pretty unreliable. Sure you can feed email into a ticketing system, but at least there there is also a web interface (so you do not rely on email).
If you need a record of something, I do not think that unsigned emails are legally binding anyway, again you should probably submit signed contracts over the web not email.
If it is not time critical, email can work, but the OP said a six hour outage would be a problem, and I still think that is one they have created themselves and is a business risk.
It sounds like email isn't critical for the corner of the universe that you operate in -- good for you.
For lots of people in different roles, email is an essential tool for getting work done. Not everyone has a role that can be readily translated into an API. Business Dev/Sales for example, depends on email to communicate with the various folks that they need to engage in. Whatever those folks do, it ends with the company getting a check, so it's important.
Generally speaking, it is pretty inexpensive to deliver a 99.9%-available mailbox with a 100% guarantee of external mail delivery. The fact that Google bungles a conversion from free to paid service this poorly is a sad statement when they are supposed to be a shiny alternative to the traditional Microsoft messaging stack.
Bus dev/sales can survive a day without email every now and again (in my experience, far more easily than a day without a phone system). Everything will resume the next day; sure, email pays the bills, but a six-hour outage is not "mission critical" and won't stop the "check" from arriving (email is not a payment method, after all).
I have known large (email-dependent) businesses to have two-day outages on the traditional Microsoft mail stack too. There are coping strategies. It is annoying, not mission critical.
Are you serious? Running my small business, I used email to communicate with my coworkers, potential hires, customers, prospects, partners, potential partners, reporters, accountants, lawyers, banks, web hosts, PayPal, UPS, various government departments, etc. Being completely cut off from that for effectively an entire business day would destroy my ability to get anything beyond "solo hacker in basement" coding done, and there really isn't a sensible mitigation plan or alternative to email for this (unless you'd like to convince my bank to start posting, say, notifications of incoming wire transfers to a web form of my own design?).
Your bank provides a website you can poll to find out about that stuff. Not ideal; it would be nice to have a proper API, but so far we only seem to be getting those for credit card payments, though this is changing (e.g. see GoCardless). For most of the things you list, you could manage during an email outage, as you have phone numbers or other alternatives.
If it is that critical do you have 99.99%+ (52 minutes a year) uptime guarantees on your mail service? That is what you are asking for, and to actually deliver that (rather than an empty SLA promise) is something that very few businesses actually try to work to, especially small companies. Gmail certainly does not try to provide this.
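For concreteness, the downtime budgets those availability figures imply (straightforward arithmetic, nothing provider-specific):

```python
# Downtime allowances implied by common availability targets.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability)

three_nines = downtime_minutes(0.999)   # ~526 minutes (~8.8 hours) a year
four_nines = downtime_minutes(0.9999)   # ~52.6 minutes a year
```

A single six-hour outage blows the 99.99% budget for roughly seven years, which is why almost nobody actually contracts for it.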
Most people would assume that this is nothing more than an administrative change on the account, with no long drawn out six hour process. And if there is an upgrade time required, it absolutely should have in-your-face warnings given the importance of email 24/7 for many organizations.
Even if we buy that this is more than an administrative change and that it somehow moves accounts to better hardware, this is a problem I would have thought Google would have made a mostly transparent process: at most, long-term archives are unavailable after a very brief initial migration, etc.
That's exactly the sort of assumption you can't afford to make if you've honestly got company-destroying problems when your email goes down for 6 or even 24 hours.
It's easy in hindsight, but I've seen that happen before, and I have no doubt that if you'd considered the risk and (perhaps ironically) googled for information, you too would have known about this.
On the other side of the coin (and perhaps the reason Google haven't cared enough to fix the problem), SMTP is nicely designed so as to not result in this sort of thing losing any mail - "well behaved" email systems will just queue and retry mail for 5 days if needed.
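That queue-and-retry behaviour can be sketched roughly like this. The intervals are illustrative (real MTAs use their own schedules), but the multi-day give-up window is the common convention:

```python
# Sketch of the queue-and-retry behaviour of a "well behaved" sending MTA:
# retry with a growing, capped interval, giving up only after several days.
def retry_schedule(first_delay_min=15, factor=2, max_delay_min=240, give_up_days=5):
    """Yield minutes-since-submission at which delivery is retried."""
    elapsed, delay = 0, first_delay_min
    give_up = give_up_days * 24 * 60
    while elapsed + delay <= give_up:
        elapsed += delay
        yield elapsed
        delay = min(delay * factor, max_delay_min)

attempts = list(retry_schedule())
```

The upshot: a receiver that is down for 6 hours just sees delayed mail, not lost mail, because dozens of retries still fall inside the window.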
> That's exactly the sort of assumption you can't afford to make
Okay so what if the transition took five days? How about thirty days? In the absence of seemingly any warning information at all on this, how does one ever perform such a change?
Let's take it further -- what if adding a user took down your email for a month? That is just as rational as removing an artificial limit is ("Oh didn't you know? You pushed our global user database past the threshold so we had to migrate platforms"). How about if you send an email that you CC to ten people and that takes your email down for days?
If this wasn't expected behavior, and it is seemingly a mere administrative change, there is no universe where it can be pinned on the user. Doubly obvious given the confused responses of customer service.
Exactly my point. If you're making changes to mission-critical things, you need to have both reliable information on how long those changes will take, as well as rollback plans to cover the risk of things going wrong.
Randomly clicking things like "change my email system" buttons, then complaining afterwards that it didn't work how you expected is the sort of mistake most of us have made at least once.
Once you've suffered through those mistakes, you tend to view phrases like "expected behavior" and "seemingly a mere administrative change" a lot more suspiciously.
If it's mission critical, don't "assume", don't "expect" - things are often not as they "seem". As they say "Trust, but verify." Yeah, Google (or Rackspace, or MessageLabs) will _presumably_ "get it right", but when the consequences of "presuming" are business-destroyingly-high, verify the presumptions first.
Interesting. Some insight, some contradiction and confusion, especially when compared to earlier reporting on the first slides:
- The "direct access" claim is replaced with an "FBI interception unit", described as "government equipment on private company property to retrieve matching information from a participating company". That detail isn't mentioned in the slides but is provided in the annotations.
- The case format notation points to "real-time notification" when a target logs in or sends email/IM/VoIP etc.:
"Depending on the provider, the NSA may receive live notifications when a target logs on or sends an e-mail, or may monitor a voice, text or voice chat as it happens (noted on the first slide as "Surveillance").
The "Depending on the provider" bit is interesting as it suggests that there are potentially different levels of "participation".
- "On April 5, according to this slide, there were 117,675 active surveillance targets in PRISM's counterterrorism database." Can a FISA order cover a target across service providers, or does each provider require its own order? The number of targets could be dramatically revised downwards depending on that.
I would imagine that the "depending on the provider" bit has more to do with their existing infrastructure than participation per se. A live notification for when someone is on Facebook or even Google would probably be much easier to get (and more useful I suppose) than their iCloud sync.
Edit: Also note that Apple is a late addition on their graph and Microsoft is the first. Not that I think that says much about one versus the other, but if MS had been a provider since '07, they probably had much better access at the time this was presented, through either influence or better understanding, than they had at Apple.
Re: apple vs microsoft. Almost certainly Microsoft was added early because of MSN messenger and Hotmail. MSN Messenger was pretty big internationally, notably in China. It would be interesting to know if all Chinese messages were routed through US based servers. Apple wasn't as significant a player in the email and instant messaging space until more recently.
Obviously there is plenty of room for speculation but what seems to emerge, at least as I see it, is that even the worst case scenario doesn't entail actual "direct access".
In the case of activity timestamps (which I'm sure legally don't get the same protection as content) they would be sent by the companies to the FBI/NSA not have their actual servers monitored by them.
There's a line between the provider and the FBI. That line is explained as pull, rather than push. That nuance notwithstanding, how is this not direct access?
You want me to speculate about arrow direction?! Alright: generally speaking, the access is not "direct" because the "boxes" act as buffers. I can't say if they "pull" the boxes or just serve subpoenas to them and get the data pushed back.
Quite the opposite - no speculation is required. The NSA has direct access. Any discussion that focuses on how is a discussion of semantics, and as such is of anecdotal interest.
- Statistical analysis of Google closures shows that they deprecate products at below industry pace, so your impression about that is also wrong: http://www.gwern.net/Google%20shutdowns
According to Google "NSA powers" in their case are restricted to FISA orders, so I'm not sure how a random worker at a government contractor can produce these. Snowden was a sysadmin for a contractor and that is how he got his hands on their internal documents.
Is no one else paying attention to anything beyond the "slides" in this story?!
Aren't the NSA claiming they only need a FISA warrant if both ends of the correspondence are (reasonably believed to be) US citizens on US territory? For those of us in "the rest of the world" or any Americans corresponding with us I believe the restrictions on the NSA are "Yeah, do whatever the hell you want!"
FISA, the Foreign Intelligence Surveillance Act, only creates warrants for surveilling foreign persons. It also requires that the surveillance actively minimize data collected on US persons in the process.
The NSA is claiming that FISA warrants are required if either end is a foreigner, and that if the connection is US-US that they're not allowed to examine that conversation at all.
That's stupid. Particularly how you list 'plentiful storage' as a drawback; if that's the case, then it's plainly an issue of law, as it pits privacy against usability.
Maybe engineer a service that is harder to wiretap? It is not easy, but they have some of the best computer scientists on this planet working for them. If I were them, I would start somewhere around here:
It's not directly apropos this particular thread, but Google has engineered an email service that is particularly difficult to wiretap. To wit:
(a) They're the Internet's foremost adopter and proponent of DHE ciphersuites, which drastically reduce the impact of losing the RSA key that underpins most sites' TLS security and, just as importantly, force adversaries to actively MITM every connection in order to decrypt it.
(b) They're a pioneer in key pinning, which bakes the identity of their key into the Chrome browser binary, meaning that when your Chrome browser talks to Google's mail service, it's unlikely to trust any otherwise-valid-looking certificate presented by a MITM attacker.
Google's mail service is better encrypted than most banks.
Difficult to wiretap in the sense of intercepting communications to and from Google, yes.
But it's also engineered to give Google itself access to your data so they can improve their behavioral profile of you.
I think what people are suggesting is that if end user privacy was Google's priority rather than gaining access to user data for their own use, they could engineer a service that didn't place themselves as a man-in-the middle.
> They're a pioneer in key pinning, which bakes the identity of their key into the Chrome browser binary, meaning that when your Chrome browser talks to Google's mail service, it's unlikely to trust any otherwise-valid-looking certificate presented by a MITM attacker.
Chrome doesn't pin actual certificates, just the public keys of CAs. If some organization had access to VeriSign, Equifax, or GeoTrust keys, they could just create new certificates for *.google.com, which Chrome would accept.
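For what it's worth, pins of this style (as later standardized in HPKP) are computed over the DER-encoded SubjectPublicKeyInfo rather than over the whole certificate, which is why a re-issued certificate for the same key still matches the pin while a rogue-CA certificate for a different key does not. A toy sketch; the key byte strings below are dummies, not real SPKI data:

```python
import base64
import hashlib

# HPKP/Chrome-style pins hash the DER-encoded SubjectPublicKeyInfo.
def spki_pin(spki_der: bytes) -> str:
    """Return the base64-encoded SHA-256 pin for a DER-encoded SPKI blob."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

# Dummy stand-ins for real SPKI bytes (illustration only).
legit_key = b"legitimate-server-public-key-der"
mitm_key = b"rogue-ca-issued-different-key-der"

PINNED = {spki_pin(legit_key)}

# A cert carrying the pinned key matches; a cert for a different key does not,
# no matter how valid its CA chain looks.
assert spki_pin(legit_key) in PINNED
assert spki_pin(mitm_key) not in PINNED
```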
Am I right that a) applies only to direct communications between the end user and Google's servers?
Actually, if someone sends an email to my Gmail account using their ISP's SMTP server, is the connection between the two SMTP servers likely to be encrypted?
No, I don't believe that's true. The information in your mail is not more valuable to Google than the integrity of a bank account is to a bank.
Google has a financial interest in having access to the content of mail messages, but (a) it's an interest "in the large", not in any specific account, and (b) it's a nonrivalrous interest.
> Maybe engineer a service that is harder to wiretap?
You can't solve a legal problem with engineering. We're talking about the same agencies who had the ability to get all of the major phone carriers to install wiretapping services – there's no reason to believe they wouldn't do the same to anyone else of interest.
Yeah, but if you read the linked paper, there are things Google could do to protect its users' privacy. That paper is about privacy-preserving targeted advertising, which would not give Google anything for the government to subpoena or search while still allowing them to conduct their business. There is no reason Google has to make itself an easy target.
Which is why I said they should start there. I did not say it was a completed solution. Right now, Google is not even trying to protect their users by any technical means, relying instead on the courts.
I absolutely agree that they aren't even trying, but starting there would reduce the effectiveness of their ad business and hence reduce revenue, so I don't see that happening.
Wiretapping in this context refers to both getting information on the wire and getting information stored on Google's servers. At this point the distinction between the two is completely pedantic.
Die. Lose to a competitor that is based in a free country and/or offers a paid service and therefore does not need access to the entirety of people's data to serve ads.
Again with the 'carefully worded denials' - the denials were similar because they were accused of the same thing, which is allowing "direct access".
The most worrisome and misunderstood part of these reports is the "direct access" bit: can the government arbitrarily query company servers? Their denials address that; they clearly say that is not the case. Instead, they SFTP the data after being served with court orders or warrants, and yes, also the secretive FISA requests.
So by revealing the number of FISA requests they receive and their scope, they hope to clear up this "direct access" mess, as even FISA orders are much more acceptable than wholesale access.
As for the development being reported here: I think it has merit seeing how this clearly falls under the first amendment, but I'd like a lawyer to chip in.
From what Google's said, it appears the government can't arbitrarily query Google's servers. Google has stated pretty clearly that someone at Google has to check off before an account is pushed to a machine that the government can access and that the data cannot be accessed without this happening.
That's Google. We've yet to hear from many of the other companies in the program about whether this sort of access is technically impossible, or whether it's an honor system that the government is supposed to follow.[1] I haven't been closely following the Facebook, Microsoft, or Apple statements, so maybe they have also been explicit that it is a restriction that is implemented by technical means. Some of the companies haven't said anything yet.
How many of the companies really make sure there is legitimate documentation for each request? Do they really do this every time, or have they become resigned to the fact that there's nothing they can do, so they just rubber stamp each request coming through, even without the proper legal documentation?
[1] This seems to be a major issue--the President and NSA leaders have claimed that analysts "cannot" access your phone metadata and phone call content without the correct legal instruments. But by "cannot", they seem to mean "they are not allowed to" rather than "it is not possible for them to".
> Google has stated pretty clearly that someone at Google has to check off before an account is pushed to a machine that the government
My understanding of PRISM and all this is that the entire Internet is vacuumed up and everything is stored, just in case. I cannot imagine a guy "checking off" on every email or every mailbox for millions of Gmail users every day, or even once a month, manually. With 11K terabytes of digital data created per hour by the US, I cannot imagine any sort of manual system being implemented.
It has to be entirely automatic; otherwise it won't fly.
Any understanding of PRISM outside the classified world seems to be incomplete. Some people's version of PRISM seems to involve caching the whole Internet. That might sound implausible, yes. But we won't know until, or unless, the whole thing gets declassified.
Yes, but that vacuuming is apparently being done, just not under PRISM (for example, see https://en.wikipedia.org/wiki/Room_641A). PRISM is just one method of getting the data.
There was a fifth Powerpoint slide published by the Guardian[1] which clearly distinguished between PRISM and "Upstream" methods which collect "communications on fiber cables and infrastructure as data flows past."
The PRISM program mentioned in the Powerpoint slides is very likely the same program that is mentioned in unclassified documents such as Army Field Manual (FM) 3-55, Information Collection[2]:
> 6-12. Two joint ISR planning systems—the collection management mission application and the Planning Tool for Resource, Integration, Synchronization, and Management (PRISM)—help facilitate access to joint resources. PRISM, a subsystem of collection management mission application, is a Web-based management and synchronization tool used to maximize the efficiency and effectiveness of theater operations. PRISM creates a collaborative environment for resource managers, collection managers, exploitation managers, and customers. In joint collection management operations, the collection manager coordinates with the operations directorate to forward collection requirements to the component commander exercising tactical control over the theater reconnaissance and surveillance assets. A mission tasking order goes to the unit responsible for the collection operations. At the selected unit, the mission manager makes the final choice of platforms, equipment, and personnel required for the collection operations based on operational considerations such as maintenance, schedules, training, and experience. The Air Force uses the collection management mission application. This application is a Web-centric information systems architecture that incorporates existing programs sponsored by several commands, Services, and agencies. It also provides tools for recording, gathering, organizing, and tracking intelligence collection requirements for all disciplines
They don't need to store all the data if they can just compel whoever is storing it to give them access to said data. (Which seems to be what is alleged).
re: [1]... Right. In fact, this morning I think we heard this is definitely policy and not technology. We were told that for this to happen [paraphrasing from memory] "One person would have to break the law [analyst], his boss would have to break the law [because he's supposed to approve the access], and remember this entire process is 100% auditable, so we'd catch them for sure."
Of course, this isn't remotely reassuring, for a bunch of reasons. Most of all, though, I'd be curious to hear more about how the auditing process works. He kept saying "auditable", I noticed, not, you know... "actually audited".
Snowden mentioned in the Q&A that 5% of the GCHQ accesses are audited, as one example. He mentioned 5% as if it's a low value but that's actually fairly high, especially if randomly-picked.
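Whether 5% is high depends on repetition: if audits are uniformly random and independent, the chance that a pattern of repeated misuse escapes them entirely shrinks quickly. A quick sanity check:

```python
# Probability that at least one of n improper accesses lands in a random audit,
# assuming each access is independently audited with probability audit_rate.
def p_caught(audit_rate: float, n_accesses: int) -> float:
    return 1 - (1 - audit_rate) ** n_accesses

once = p_caught(0.05, 1)     # 5% for a single access
twenty = p_caught(0.05, 20)  # ~64% across twenty accesses
```

So a one-off abuse most likely slips through, but sustained abuse is more likely caught than not, provided the sampling really is random and the audits have teeth.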
Yeah, there are generally two things keeping society in order. Ethical beliefs about right and wrong and fear of punishment from the powers that be for breaking the law. My concern with the NSA is that there is a culture of "the current laws are unduly stifling on our jobs, so us ignoring them is 'required'", coupled with management's belief in same and thus non-interest in prosecuting people that cross the line. Not to mention such prosecution would inevitably be public and thus the program exposed and the public seeing it is being abused. Taken together you have a perfect recipe for safeguards that exist in theory and are utterly ignored in reality, "for the greater good".
Why do people assume that Google has the only copy of what is on Google's servers? It is not hard for the NSA, since they are already admittedly the "man in the middle", to have copies of all data going in and out of any server they target.
Google and Facebook are trying to clear their name here.
But what I'm afraid of is that this mess with deciding exactly how much access the government has to Google will turn into a distraction from the larger picture. Which is that, in all likelihood, the NSA does not have access to Google. What they do instead, and what the name PRISM implies, is that they connect to the backbone (Verizon/AT&T), scoop up ALL data, and store it in their freshly built data center in Utah.
The slides I saw seemed to indicate when certain applications or filters came online. Such as a filter for Facebook data, or a filter for Google search/map/GPS data, etc. That's how I interpreted the graph, at least. It would indicate the NSA is rolling out specialized applications to handle data coming to and going from specific sites. Which allows them to more intelligently decipher what is being said, in more or less shotgun fashion.
Hence, the name PRISM. It's a project to split the full Internet stream into a Facebook bucket and a Google bucket, etc.
The problem I have with the "duplicate the Internet" theory is that it favors the hard solution vs the easy solution.
The hard solution is to secretly duplicate traffic from every data center operated by each of these companies, reverse engineer every HTTP request that goes back and forth so that the data can be parsed, maintain it for every product change that happens at these companies, circumvent HTTPS by compromising the certificate authorities, store it all, and still maintain a massive analytics tool that can make sense of the astounding amount of data coming through.
The easy solution is to avoid all of the technical ugliness of acquiring the data, and just legally make the companies give you the relevant information, neatly structured and packaged. NSLs are the ultimate hack.
It honestly wouldn't surprise me if the gov't has issued a secret subpoena for every PRISM provider's SSL key (e.g. Google/Facebook/Yahoo/etc). That way they get to claim "hey, we're not giving them full access" and the government gets what they want anyway.
As I understand it, they don't have to focus on the data centers of those companies when doing the duplication.
For example, for email: emails travel unencrypted through the hops, and they would store them all and constantly analyze them. When something suspicious comes up, they would go to the email provider to ask for more data. So, for example, if a Gmail address is there, they would go to Google and use their PRISM interface to get more data associated with that Gmail address; if it's a Yahoo address, they would go to Yahoo for more data, etc.
Gmail users sending to each other will only relay inside Google's own private network. If all of my co-conspirators are using Gmail, there are no external relays to be tapped. Someone would have to read all of our SSL/TLS traffic to see what we're writing about.
This is even more complicated when the data centers are in other countries, and none of the data actually enters the US. So if two EU users were accessing Gmail from the EU, the data may never enter the US at all. This means any network tapping would have to be done in the EU as well, requiring cooperation from many international telecom companies.
It's still easiest to just force Google to hand it over via NSL. Google's still legally bound to deliver the data even if it isn't physically stored in the US.
I wouldn't be so sure that that's the easy solution, as it depends on the cooperation of those companies.
They at least have the choice to resist in some way or another.
They could also be using both solutions simultaneously.
From their perspective, why not?
A lot of the communication won't be encrypted anyways, and some of it will be, but they may be able to decrypt it at some point in the future.
The hard solution isn't just a little bit harder ... it's several orders of magnitude harder and more expensive. It's also highly vulnerable to simply using encryption. The easy solution works because the US companies are bound by law to cooperate. There's no reason to believe that legal pressure on these companies has failed to get the government what it wants.
It doesn't help just to have network transmission data if the data is encrypted. Google has increasingly been moving all of their services to HTTPS; I think Facebook might be as well.
If the government had a wiretapping program for fiber optics, they wouldn't call it PRISM. Why? Because you don't name your top-secret stuff with descriptive names that imply what it does.
PRISM is a web app. The slides make it pretty clear it's a web app. The Army field manual link helpfully posted earlier in this discussion outright says it's a web app.
> So by revealing the number of FISA requests they receive and what sort of data is being sought they hope to clear this "direct access" mess. As even FISA orders are much more acceptable than wholesale access.
Not necessarily. As another commenter pointed out [1], a single FISA order doesn't have to correspond to a single citizen. One order can encompass millions of accounts.
> their denials address that, they clearly say that is not the case, instead they sftp the data after being served with court orders or warrants and such, including the secretive FISA requests.
While I think it's reasonable to doubt the claim that the NSA has true direct access to servers, I haven't been given a reason to doubt that information can be requested without court orders, warrants or FISA requests.
> Not necessarily. As another commenter pointed out [1], a single FISA order doesn't have to correspond to a single citizen. One order can encompass millions of accounts.
They would want to publish the scope of the FISA requests.
The other companies aren't going this far, and I think they deserve credit for what they're doing.
And I disagree with the comment you link to: the solution isn't limiting data collection. Sure, more data makes you a target, but more data equals a better product. It's an issue of government overreach, not engineering decisions.
I agree with this entire comment. Even the part where you disagreed with part of the comment I linked to. I even responded to that poster before replying to you. :-)
(Sorry about that. I meant to link to the comment for the text of the FISA order, and not for the jab against Google.)
Steve Gibson presented a good case[1] that the companies are telling the truth, but that the NSA nevertheless has the equivalent of full access by tapping the tier-1 or tier-2 router nearest to each. A fiber-optic "splitter" makes the codename "PRISM" cogent.
You didn't add any valuable information to this discussion. Why doesn't Gibson know what he's talking about? What exactly is wrong with what he said about breaking SSL? What's the jump?
If you're going to talk about this, you should have a cursory understanding of what SSL is and why the MitM attack Gibson is describing is, at best, far-fetched.
His point wasn't "all fibre optic" but that by tapping specific routers, e.g. one close to Facebook where FB traffic is concentrated, the NSA can filter and store nearly all FB traffic while FB has full deniability. At the referenced link are links to court documents in which exactly this kind of tap was revealed to exist at AT&T.
As to SSL, is there a claim that NSA has broken it? I wasn't aware of that. Not relevant to Gibson's idea, anyway.
> At any rate, assuming all fibre optic is tapped, how does that explain breaking SSL?
Large governments don't need to break SSL. They have SSL root keys and can man-in-the-middle at will. Doing so across the board would likely be detected, but targeted usage likely wouldn't be.
If this were widespread, I'd expect someone to have found a Google cert signed by a different root. Then again, I suspect Google pins its certs in Chrome for a reason.
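Pinning catches exactly this case: the client checks the server's public key against a fingerprint it already knows, instead of trusting whatever root happened to sign the cert. A minimal sketch of the idea (the key bytes here are placeholders, and this is HPKP-style pinning in spirit, not Chrome's actual implementation):

```python
import hashlib

def spki_fingerprint(spki_der: bytes) -> str:
    # SHA-256 over the certificate's public key bytes, as in
    # public-key pinning schemes.
    return hashlib.sha256(spki_der).hexdigest()

def is_pinned(spki_der: bytes, pinned: set) -> bool:
    # Accept the connection only if the key matches a known pin,
    # regardless of which root CA signed the certificate.
    return spki_fingerprint(spki_der) in pinned

# Hypothetical example: the client ships with the genuine key's pin baked in.
real_key = b"-----genuine server public key bytes-----"
pins = {spki_fingerprint(real_key)}

mitm_key = b"-----attacker public key bytes-----"
print(is_pinned(real_key, pins))   # genuine cert: accepted
print(is_pinned(mitm_key, pins))   # MitM cert from another root: rejected
```

So even a "valid" cert minted by a coerced root CA fails the pin check, which is why a widespread MitM against a pinning browser would be noisy.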
> Doing so across the board would likely be detected but targeted usage likely wouldn't be
This whole conversation is about wholesale data access, so targeting is not relevant. Besides, even if you are talking about targeting, the claim is that they are storing data and then targeting "retrospectively". So without a time machine there's no way they can go back and MITM the targeted conversations they want to listen to after the fact. They would have to MITM everything, all the time.
> how does that explain breaking SSL? That's a really big jump.
How about this: the NSA has issued a secret subpoena for the private SSL key of every listed provider (Google/Facebook/Yahoo/etc). They are using those keys to transparently decrypt traffic and suck up what they want.
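Worth noting: a subpoenaed private key only enables passive decryption for RSA key-exchange cipher suites. With ephemeral Diffie-Hellman (forward secrecy), the session key is never transmitted, so holding the server's long-term key doesn't help a passive tap. A toy sketch of why (deliberately insecure, undersized parameters, not TLS's real groups):

```python
import secrets

# Toy ephemeral Diffie-Hellman exchange. The prime is far too small
# for real security; TLS uses standardized, much larger groups.
P = 2**127 - 1   # a Mersenne prime, fine for a demo
G = 3

def dh_keypair():
    # Each side draws a fresh (ephemeral) secret for every session.
    secret = secrets.randbelow(P - 3) + 2
    public = pow(G, secret, P)
    return secret, public

server_secret, server_public = dh_keypair()
client_secret, client_public = dh_keypair()

# Only the public values cross the wire. Both sides derive the same
# session key; a passive tap sees the publics but cannot feasibly
# recover the key, even with the server's long-term RSA key in hand.
server_key = pow(client_public, server_secret, P)
client_key = pow(server_public, client_secret, P)
assert server_key == client_key
```

That's why a key-handover scheme like the one described would still push agencies toward active MitM (or endpoint access) for forward-secret sessions.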
This is a distinction without a difference. If I (the NSA) can request yesterday's backups, isn't that close enough? I don't particularly care if they have direct access to Google's servers. Having access to the backups (through sftp or whatever other mechanism) is bad enough.
It's a checks-and-balances thing. If you're a large ISP and you retain physical control over your servers and network, then when you're asked to hand over too much information it's at least possible to delay and fight it in court. If they have root, you don't even know what they've done.
If they had to request anything from Google or FB, they wouldn't need such huge storage capacities. My guess is that these large companies have been forced to forward data (e-mail, chat lines, posts...) to the NSA as it arrives. It's not "direct access", it facilitates all the searches the NSA could wish for on NSA's own servers and does not contradict any of Google's, FB's or the NSA's claims so far from what I can tell (they store the data, then "collect" it as needed).
First of all, it doesn't really matter what Google says because they could be lying. Second of all, there are trivial ways around "direct access". Google will have world class mirroring capabilities, so they need only mirror to a government server. They could do this manually (per request) or automatically. This would fit within "no direct access".
When news of the PRISM program was first revealed two weeks ago, officials at Facebook, Google and other tech firms informally conferred on a public response...
The leaked slides clearly say the government can query the servers at will. They get real-time login data, logout data and payload data.
I don't know why people keep calling this into question and giving Google the benefit of the doubt when they were caught with their pants down, no questions asked.
If the companies mentioned in the slides just complied with the law, why would they be singled out in those slides? Honoring search warrants and FISA requests is an obligation, not an extra.
The reason Google was singled out as a partner since 2009 is because they gave the government full unrestricted access.
> The reason Google was singled out as a partner since 2009 is because they gave the government full unrestricted access.
Which part of the PRISM slides makes you think that? They certainly indicate pretty much full access to accounts that have been OK'd by Google (at least in archive form), but that is very different from "full unrestricted access" to servers. I'd say we don't really know the extent of it, and I welcome Google's decision to challenge the government in court to reveal more details.
As far as I can tell, the few PRISM slides we've seen don't really indicate:
1) The extent of access (how many accounts, how many accounts per order etc)
2) The mechanisms for FISA access
3) Any time delay in receiving documents/access
4) Whether data is realtime or not after access is granted
and the figures that FB, MS and Apple have announced hardly constitute full unrestricted access to all accounts, as you seem to be implying. It's still a serious invasion of privacy, there are serious doubts about the efficacy of the FISA court's supervision, and for foreigners I'm not even sure there are any protections at all (the NSA might not even feel obliged to get specific permission for non-US communications), so for everyone outside the US this is really invasive. But I'm not sure I can agree with your characterisation of these slides as showing full access (full access to what: all Google servers, seriously?).
Well, judging by your comment history (about 100% anti-Google), I won't be taking your word for it. I'm still waiting for the dust to settle to see where Google ends up.
Right now, it's just a bunch of people pointing fingers at each other. The truth will be found once everyone calms down.