AWS said it mitigated a 2.3 Tbps DDoS attack (zdnet.com)
164 points by furcyd on June 17, 2020 | 80 comments


2.3 Tbps is (peak) ingress, but AWS Shield egress of 2.3 Tbps costs around $53k per hour. I am curious what kind of bill you would get from AWS after this is over. Also interesting to know whether Cloudflare's $200/mo price tag will cover such an attack without null-routing or throttling so severe the site becomes unusable.
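Back-of-the-envelope, that ~$53k/hr figure roughly checks out if you assume a ~$0.05/GB data-transfer-out rate (the rate is an assumption here, not an official quote):

```python
# Hourly egress cost of 2.3 Tbps at an assumed ~$0.05/GB
# data-transfer-out rate.
tbps = 2.3
gb_per_second = tbps * 1e12 / 8 / 1e9   # terabits/s -> gigabytes/s
gb_per_hour = gb_per_second * 3600      # 1,035,000 GB per hour
cost_per_hour = gb_per_hour * 0.05
print(f"${cost_per_hour:,.0f}/hour")    # prints "$51,750/hour"
```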


It kind of feels like this is getting into insurance territory. Never thought I would ever consider the possibility of a "DDoS insurance" in all honesty.


“DDoS insurance” would almost certainly be covered under cyber security insurance, which any decently sized organization should have these days.

Even without that, I wouldn’t be surprised if it’s covered by other forms of insurance, such as policies that would cover an internet or power outage. It would probably be harder to get the insurance to pay out, but if your legal team is big enough, maybe...


Yup. I've been working in cyber security for going on 10 years now and even when I started at my first company, we already had for-pay DDoS mitigation services and cyber attack insurance. The insurance covered the cost of compensating customers for any outages that came from a successful attack (fictional example: site was down for an hour due to a DDoS so we had to reimburse millions of dollars to thousands of customers all at once).

These days incident response retainers are very common, where you pay a security firm like Mandiant or Verizon or IBM a small fee every month, and in exchange they guarantee that they will have an expert on site within X hours if you have a security breach. Almost every major company has a retainer like this.


Who gets the retainer fee?


The company you pay for the retainer gets the retainer. For example, a place I worked had a retainer with Verizon. Every month we pre-paid Verizon for X number of hours at ~$500/hr. If we didn't need those hours for incident response, we could use them for penetration testing or consulting to improve our detection tools, etc. But when something bad happened, we had already pre-paid for their consultant to come visit and help us.

The real benefit to this situation is you get to avoid haggling over price and waiting for a quote from the sales guy and running it past your legal department. All that enterprise sales nonsense is already done so when you pick up the phone it's all business with no negotiation.


Who at verizon gets the retainer fee if the consultant is never requested? How is it distributed?


That is insanely expensive. 2.3Tbps would cost you about $100k per month, so that's a massive economic advantage for the attacker.


They charge you for _egress_ not ingress (and only for AWS Shield Advanced).

Standard: free; only pay AWS egress charges. Advanced: $3K/mo (1-year commitment) + ~$0.05/GB data out

So blocking 3Tbps costs nothing additional to the user.


In the article it says that it only lasted 3 days.


Sure, but how ridiculous is it that two hours of AWS Shield costs as much as a month of bandwidth? That makes it more economical to buy 2.3 Tbps for an entire month than to defend against it for three hours.


Isn't it like that for all cloud offerings?


Cloudflare kicks you out in this case. You can already get kicked out with a couple of Gbps (so essentially the attackers win); with a few Tbps, don't even think about it, unless they can get really good PR out of the story.


What are you talking about? We don't kick people off for "a couple of Gbps". Did you see our latest report? https://blog.cloudflare.com/network-layer-ddos-attack-trends...


I'm referring to the cases where you get attacked and Cloudflare proxying was disabled on purpose by Cloudflare.

So your account is all active, but you don't have the proxy-through-CF part.

As a result all the traffic goes directly to the origin (as a side-effect, your origin IP address is revealed to the attacker).

You receive the email "Sorry, CF offers limited DDoS Protection, call our Enterprise team for help".

From your blog post I understand you don't do it anymore, which is good. Really, it was quite bad to see the DDoS protection failing you when you need it the most :|


See this blog post from 2017: https://blog.cloudflare.com/unmetered-mitigation/

TL;DR: This doesn't happen anymore.

(Disclosure: I work for Cloudflare but I don't know anything more about this than is written in the blog post.)


I had this experience too about 7 years ago. It was extremely frustrating because the origin IP being exposed to the DDOSers meant I had to move servers too instead of just upgrading my CF plan.

To be fair, at the time I believe "DDOS protection" was a feature only explicitly listed on their $200/month plan, and I was on the $20/month plan. I think it's part of all plans for the last few years now.

At my job now, we use Cloudflare Workers, but I'm almost hesitant to rely on them too much because I know they'd all break immediately if CF ever decided to surprise-switch our account off of the proxy-through-CF mode like happened to me before. I don't know if that move is still part of CF's playbook given the changes in how they handle DDOS attacks; maybe I'm being a bit irrational, since the DDOS attacks were a painful experience to deal with overall. I hope this time CF will just ask us for more money if there's a problem before pulling out the rug. Other than that bad experience around the DDOS attack, CF has been one of my favorite services.


You don't need to worry: https://blog.cloudflare.com/unmetered-mitigation/

Workers actually launched four days later. Funny, I didn't realize at the time that unmetered mitigation was really a prerequisite for Workers -- obviously you couldn't build very much in Workers if a random DDoS attack could cause it to just shut off. Huh...

(I'm the lead engineer on Workers.)


That's a much better response than jgrahamc's


Yeah, sorry. I was grumpy.


>Really, it was quite bad to see the DDoS protection failing you when you need it the most

Could be called "A business plan upgrade moment"



It looks like it went from limited to unmetered. Am I reading it wrong?


But who are the attackers, and why? What is the motivation / objective?

Is there an economically positive criminal activity that involves DDoSing an AWS-hosted UDP service (probably video calls... probably like Zoom)?


Sometimes it's accidental (but probably not in this case).

One time a customer came to us and asked us to PenTest their server, checking it stands up to a DDoS. They said they owned the server and it was their network, so we said "we can run a small one for you which should give us an idea of some pain points".

We run the "mini" DDoS against the server; it takes a little more to sink the server than expected, but we just ramp up a few more connections and it's fine. We call off the test attack, but the customer's site doesn't come back up. We contact them and they say they will contact the VPS host. *Heart-sinking moment*

We test other websites running on their cloud from a different connection - we had taken out their entire cloud infrastructure (this was a small provider). After a short while they were back up, but not before another few conversations with the customer. I really don't even want to know how badly positioned we were legally that day.

Lesson learned: Always double check.


I won't forget having a pentester nmap a local network: it hard-locked every single phone handset corporation-wide. People had to walk around pulling the power out and putting it back in at every single desk in multiple buildings.


At one place I worked, we had a printer that would die whenever the PCI-DSS auditors would run a network scan.

There was a Windows vulnerability that came out in 2010-2011ish, when I was working there, that I had to deal with. I ran an nmap scan of the entire network looking for the bug, and accidentally BSOD'd half the office...


Wow haha, I wonder what caused them to fail so badly?


I don't know but desktops used passthrough networking, so there wasn't a PC with network connectivity left.


Did you try to contact the provider to provide(heh) an explanation?


I believe they were contacted yeah, but at that point I washed my hands of the project.


Baddies will sometimes run big DDoSes of a public site to show off their botnet's capability. That clout is then used as proof to sell their service to customers. I could see this being an instance of such a thing.


I would suspect extortion. Pay us or we DDOS you offline.


I believe that sort of attack can't be repeated indefinitely. These attacks leave traces and clues behind them, and investigators are able to better pinpoint and protect against the next one. I heard they also can't be sustained for a long time.

I believe they are more like an attempt to discover limits in the network or some targeted systems. Some systems are also vulnerable while they reboot, so attackers only need a one-time reboot.


Not an authority or expert in any fields of discussion itt

From what I've been told, you're right. DDOS attacks can routinely expose information through failure modes the ops team never prepared for. What happens when your failsafes fail? If they didn't test for it and put mitigations in place, then it's rather likely that sensitive error messages, service details, or whatever else is being exposed over the wire. So AWS mitigated this attack. Does AWS know for certain that it revealed nothing sensitive in the process? Maybe, maybe not. If the attacker is good, and 2.3 Tbps is pretty fing good, then could the victim even be positioned to know what to look for? In uncharted territory, the defender is always a step behind the attacker.


> I heard they also can't be sustained for a long time.

That used to be the case, but with the popularity and widespread use of IoT devices it won't hold true for long. If you can hack home appliances you could hold an attack for hours, if not days.


Yep, seen that in person at a startup. The company was receiving regular threats to pay some bitcoins or be DDoS'ed.

As history has it, the company used to be DDoS'ed regularly, sometimes offline for days, before moving to Cloudflare.


Or shorting the stock of the target? The instant your DDOS topples the target that is. Or going long after the DDOS if you believe it'll recuperate.


I'd assume something sinister... There has to be a reason :)



Zoom is primarily on AWS and some Azure. That recent story re: Oracle is talking about an expansion into OCI, but AFAIK they aren’t there yet.

https://www.datacenterdynamics.com/en/news/most-zoom-runs-aw...


Little known fact: the load balancer takes care of many DDOS attacks, and this protection requires no additional config or cost.

For web applications, you can use ALB to route traffic based on its content and accept only well-formed web requests. This means that many common DDoS attacks, like SYN floods or UDP reflection attacks, will be blocked by ALB, protecting your application from the attack. When ALB detects these types of attacks, it automatically scales to absorb the additional traffic. These scaling activities are transparent to AWS customers and do not affect your bill.


They've been doing transparent DDOS mitigation for a long time. Almost since the beginning.

When I moved reddit from datacenter to AWS in 2009, I no longer had to deal with DDOS attacks. They just magically disappeared. I'm pretty sure reddit was still getting DDOS attacks after the move. :)


I can't share details beyond that I was working at Google at the time, but I saw a 1 Tbps DoS attack back in 2012-ish; so I doubt that this is "the largest ever", though it might be AWS Shield's largest ever. I don't think Google shares their numbers, though.


> The previous record for the largest DDoS attack ever recorded was of 1.7 Tbps, recorded in March 2018.

The article says this ^. 1 Tbps in 2012 might have been a record, but that was nearly a decade ago.


I think that was the point. It seems quite implausible that in 8 years the size of the maximum attack would only increase by a factor of 2.


1 Trillion = 1 million x 1 million. The scale of distribution must be astounding.

How much traffic is being generated by a single endpoint in one of these?


Many EU countries have gigabit-per-second connections, so that's at least ~2,300 infected hosts. Not many at internet scale.


Many providers in EU countries offer 1 Gbps fiber as an option, but the most-sold option is more like 200-250 Mbps because the price is lower. And many of those are cable-based, which means only 25 or 50 Mbps uplink, not symmetrical like fiber. So that would require ~100,000 endpoints participating.

But this was a reflection attack, so most of the bandwidth was coming from poorly secured servers. In datacenters those would most likely have 1 Gbps uplink speeds.
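Rough arithmetic behind the endpoint counts in this subthread (the per-endpoint uplink speeds are assumptions):

```python
# Approximate number of endpoints needed to sum to 2.3 Tbps,
# at a few assumed per-endpoint uplink speeds.
attack_bps = 2.3e12
uplinks = {
    "1 Gbps fiber / datacenter": 1e9,    # -> ~2,300 endpoints
    "250 Mbps consumer fiber":   250e6,  # -> ~9,200 endpoints
    "25 Mbps cable uplink":      25e6,   # -> ~92,000 endpoints
}
for label, bps in uplinks.items():
    print(f"{label}: ~{attack_bps / bps:,.0f} endpoints")
```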


... Or only 2300 high value endpoints. You can select your endpoints, you know.

Also, even cable is capable of Gigabit speeds. I briefly had 1.2Gbps via cable, downgraded to 400Mbps as it became cheaper.


I feel like you didn't read the comment you replied to carefully.

Did you have GB upload speeds with cable too?


Not gigabit upload, but I got close at 700Mbps, and some cable providers offer gigabit upload speeds in some regions, cogeco for example offers gigabit symmetrical cable in Trois-Rivières.


France widely offers 1 Gbps symmetric fiber in major cities. The 3 major ISPs offer the same fixed package for €35 a month with internet, phone, and TV, not negotiable. That's gigabit fiber if your home is in an area covered by fiber.


Do the ISPs have enough outgoing bandwidth to the internet for all their 1 Gbps symmetric fiber customers to maintain a 1 Gbps sustained upload simultaneously?


no


But any given provider won't have a 2 Tbps uplink out of their own network. It would likely have to be highly distributed across many providers. ISP's don't build their networks to support all customers running at max at the same time, as most consumers will only push that 1Gbps limit for short bursts.


I don't see why they would not have Tbps uplink.

ISP infrastructure is symmetric fiber. A single fiber can do 100 Gbps without trouble. And when you have to bulldoze the road to lay the fiber, you don't put down a single fiber but a bundle of a hundred. There is no crosstalk between fibers, unlike with copper, so there's every reason to lay as many as possible.


Sure. You can spread your attack over a few countries and as a result a good 20 ISPs. I assure you if your ISP offers cheap gigabit they have a 100Gbps uplink.


This attack specifically involved servers.


How is such a large attack even possible? Genuinely curious.


In short, in many cases an amplification attack is used, where you send a small quantity of traffic from a spoofed address (the victim's) to a server that will reply to the victim with up to several hundred times (even 50,000x) more data [0].

[0] https://en.wikipedia.org/wiki/Denial-of-service_attack#Ampli...

More detailed descriptions:

https://www.cloudflare.com/learning/ddos/memcached-ddos-atta...

https://www.cloudflare.com/learning/ddos/ntp-amplification-d...

https://www.cloudflare.com/learning/ddos/dns-amplification-d...
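As a rough sketch of why amplification matters: here is the attacker bandwidth needed to produce 2.3 Tbps at the victim, for a few commonly cited (approximate) amplification factors:

```python
# Attacker bandwidth needed to hit 2.3 Tbps at the victim, for
# approximate amplification factors (rough published figures).
target_bps = 2.3e12
factors = {
    "DNS (~50x)":          50,
    "NTP (~500x)":         500,
    "memcached (~50,000x)": 50_000,
}
for proto, factor in factors.items():
    gbps_needed = target_bps / factor / 1e9
    print(f"{proto}: ~{gbps_needed:.3g} Gbps of spoofed requests")
```

With memcached-scale amplification, well under 1 Gbps of spoofed requests is enough; that's why a handful of poorly secured servers can matter more than a huge botnet.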


Send out packets with a fake source IP; the fake source IP is your attack target. Servers/services respond with a larger packet back to the attack target. It's called DDoS amplification. You can prevent this on networks you own by implementing Unicast RPF: https://en.wikipedia.org/wiki/Reverse-path_forwarding#Unicas...


Is there any research into making attacks like these less feasible? I think it's rather disconcerting that given a large enough botnet, anyone could take anyone offline that doesn't have the money and resources to fight it.


>Is there any research into making attacks like these less feasible?

Yes. Such services are developed by most, maybe all, the major CDN and CDN-like providers because they can sell DDOS protection/prevention to their customers. It's an active area of cat/mouse between the people selling the services and the people selling the DDOS attacks.


Ingress filtering (source IP verification) would mitigate such amplification attacks by orders of magnitude, by not allowing spoofed packets to leave the network.

People doing a better job with securing their internet exposed servers would also help.


I think we need more liability here, both for the manufacturers of insecure devices as well as the equivalent of “gross negligence” for incompetent sysadmins.


If you don't have money or other valuable resources, you won't be a target of these large botnets.


That's not true. There are lots of enormous botnets that have been deployed against people like Krebs on security who are not particularly wealthy but have important information to share.


Well, imagine somebody trying to push back a competitor, which may be a young startup threatening to take part of their cake.


If the young startup is a threat, then it has real value no?


No? A big player might decide to preemptively squash ANY startup that could possibly become a threat while it's cheap and nobody else cares enough to notice.


Which might be cut by annoying all their customers. And there might not be value (now) but it might be cheaper to just DDOS any potential threat rather than risk competition.

Of course, that's under the belief one's above the law.


Potential can be worth more than assets


Activision should learn from AWS, their servers get DDoS'd constantly.


Can't deny service if the server isn't serving. :(


Who says it's the largest ever?

Plenty of folks keep this stuff secret.


I mean, until someone says otherwise, it can be the largest ever[1].

[1] on public record


Maybe it's more 'might as well say it since no one can prove otherwise'.

Ok, it's no longer the 'largest ever' in the title above.


I wonder if this had to do with the T-Mobile outage


This attack took place in February.



