Why don't providers just set up a system that creates a country-level null route for a given destination IP? And have a UI with a checkbox for the user to do it, for any selected country. It would mitigate the issue, and once the attack is over, the user can un-restrict the traffic, or just keep blocking it if it's a non-valuable source.
I know you can do this on the server, using many different techniques, but that doesn't help: the traffic still reaches you, and you still have to pay for it.
You can also do this with Geo DNS (and get a much smaller bill).
And ISPs, datacenters, and anyone with a router can block Asia- or China-allocated IP ranges, especially if it's not the type of flood that's designed to attack the routers rather than the web server.
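For the server-side version, here is a minimal sketch of what that blocking usually looks like in practice: load a list of CIDRs allocated to a country into an ipset and drop anything matching it with iptables. The file name "cn-cidrs.txt" and the set name are placeholders you'd supply yourself (e.g. from an RIR delegation file); this only shows the shape of the technique, and as noted above the packets still arrive at your edge and on your bill.

    #!/usr/bin/env python3
    # Sketch: build an ipset of country-allocated CIDRs and drop matching sources.
    # "cn-cidrs.txt" is a placeholder for an RIR-derived prefix list you supply yourself.
    import subprocess

    SET_NAME = "blocked_cn"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def build_blocklist(cidr_file="cn-cidrs.txt"):
        run(["ipset", "create", SET_NAME, "hash:net", "-exist"])
        with open(cidr_file) as f:
            for line in f:
                cidr = line.strip()
                if cidr and not cidr.startswith("#"):
                    run(["ipset", "add", SET_NAME, cidr, "-exist"])
        # Drop anything whose source address is in the set.
        run(["iptables", "-I", "INPUT", "-m", "set",
             "--match-set", SET_NAME, "src", "-j", "DROP"])

    if __name__ == "__main__":
        build_blocklist()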
The point of their website is to make censored content available to Chinese users.
China is attacking them to prevent Chinese people from reading the website.
Your suggestion is to make the site unavailable to China.
Do you see why it is not a solution? You are basically setting up a market for censorship: the attack never has to end, depending on how much China is willing to pay to keep the website offline.
The great firewall does already block this site, and I cannot view it now without turning on my VPN. Therefore this site is pretty useless to me currently, but someone has to fight the censorship. Maybe one day it will actually succeed?
If normal citizens had access to a VPN with which to access this site from another country, using this site would be quite redundant, wouldn't it? Maybe I'm not understanding.
Please, don't perceive this as being rude, it's not meant to be.
Having provided IP transit at a largish network provider in a previous life, I can tell you that you have no idea of the complexity involved in what you're asking for. It could be done, but the costs are non-trivial.
If you're honestly interested in the complexity involved, start reading about BGP, dynamic routing protocols, router/switch fabrics, control plane integration, autonomous systems, peering agreements, etc.
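For a flavor of what a provider-side "null route" looks like, here is a minimal sketch of remotely triggered blackholing (RTBH): the victim prefix is announced to upstreams tagged with the well-known BLACKHOLE community (65535:666, RFC 7999), so they discard traffic to it at their edge. It is written as an ExaBGP-style process hook that emits route commands on stdout; the exact command grammar is an assumption and varies by version, so treat this as illustrative rather than a working playbook.

    #!/usr/bin/env python3
    # Hypothetical ExaBGP "process" hook: print announce/withdraw commands on stdout,
    # which ExaBGP turns into BGP updates toward the upstream session.
    import sys
    import time

    BLACKHOLE_COMMUNITY = "65535:666"   # RFC 7999 well-known BLACKHOLE community
    VICTIM_PREFIX = "203.0.113.10/32"   # example victim address (documentation range)
    NEXT_HOP = "192.0.2.1"              # discard next-hop agreed with the upstream

    def announce(prefix):
        # Exact command grammar depends on the ExaBGP version in use.
        sys.stdout.write(
            f"announce route {prefix} next-hop {NEXT_HOP} community [{BLACKHOLE_COMMUNITY}]\n"
        )
        sys.stdout.flush()

    def withdraw(prefix):
        sys.stdout.write(f"withdraw route {prefix} next-hop {NEXT_HOP}\n")
        sys.stdout.flush()

    if __name__ == "__main__":
        announce(VICTIM_PREFIX)   # start dropping traffic to the victim upstream
        time.sleep(3600)          # ...for as long as the attack lasts
        withdraw(VICTIM_PREFIX)   # restore normal routing afterwards

Note that classic RTBH blackholes the destination, which finishes the attacker's job for them; dropping by source country (source-based RTBH or BGP flowspec) is where the control-plane complexity mentioned above really starts.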
I feel it shouldn't be unreasonable to expect AWS/Cloudflare/Akamai to have policy-based routing to blackhole a lot of these source subnets. Of course it's complex, but these are some of the largest hosting providers in the world.
I've found this is a common thing for AWS employees to say. One of them insisted that Amazon's ridiculous ephemeral storage policy (immediate, permanent, and irrevocable deletion on any halt or stop event, making accidental data loss a real possibility) had to be that way because it would just take too much hardware to allow a cooldown period before the drives were wiped. There's no way I believe that. I think Amazon is just used to intimidating customers with exactly that line of reasoning: "No offense, but you have no idea how hard the cloud is", and people buy it because "the cloud" is the new hotness.
I've had RAID 6 fail. It should be extremely rare, but isn't. And at AWS's scale, it's not hard to imagine servers going offline regularly. Ephemeral storage as a policy makes sense to me in the sense that you can separate out what's important from what's ephemeral, and provide cheaper storage than a more HA solution like Ganeti.
Why isn't the persistent data put onto an EBS volume?
If ephemeral storage is a drive local to the virtual machine's host (which I think is the case), then having a cooldown period would mean holding the hardware you had been using in reserve until the grace period expired.
It's possible but it's a lot of work to solve a problem that can be better solved by not relying on ephemeral storage persisting.
I mean, it's a nice idea in theory, but in practice stuff finds its way onto the ephemeral disk even if you have EBS volumes mounted, and "Sorry, we just deleted all your crap, I guess you should've had that on EBS" (which is an extra fee by the way) is not an acceptable solution to the problem.
Yes, it would mean holding the hardware in reserve for the cooldown period. I'm not talking months here, just enough time to recover from an accidental "sudo shutdown -h now" instead of "sudo shutdown -r now" (or similar). It'd be nice if Amazon sent an email warning about the condition and gave you an hour or so to go in and save your data/restart your instance. They could even make it a policy that you're charged for the time your instance is running plus one hour to facilitate the cooldown feature, if they're really that worried about it; it's better than wiping data as soon as someone stops (from the AWS console) or shuts down (from the real console) an instance and providing absolutely no avenue for recovery, no matter how quickly you notice the mistake.
I have never had stuff accidentally find its way onto ephemeral storage. The ephemeral storage is mounted at a specific location, /mnt. Everything else on the system (OS, binaries, application code and resources) is stored on an EBS volume.
You have to specifically put something into the /mnt folder if you want it to be stored on the ephemeral storage. Any other location is safe and will persist through halts and stops.
In practice the only thing you should ever use the /mnt folder for is maybe an Nginx disk cache, or as an alternative /tmp, or something like that. Basically, if stuff you don't want to lose is finding its way onto the ephemeral storage, then you are doing something wrong.
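If you want a cheap guard rail, here is a minimal sketch, assuming a standard Linux instance where the instance-store volume is mounted at /mnt: it checks which mount point a given path lives on and warns if that path is on the ephemeral disk. The mount point set is an assumption you'd adjust for your AMI.

    #!/usr/bin/env python3
    # Warn if a path you care about actually lives on the instance-store (ephemeral)
    # mount. Assumes a Linux instance where the ephemeral disk is mounted at /mnt.
    import os
    import sys

    EPHEMERAL_MOUNTPOINTS = {"/mnt"}  # adjust if your AMI mounts instance store elsewhere

    def mount_point_of(path):
        # Walk up from `path` until we hit a mount point.
        path = os.path.realpath(path)
        while not os.path.ismount(path):
            path = os.path.dirname(path)
        return path

    if __name__ == "__main__":
        target = sys.argv[1] if len(sys.argv) > 1 else os.getcwd()
        mp = mount_point_of(target)
        if mp in EPHEMERAL_MOUNTPOINTS:
            print(f"WARNING: {target} is on {mp} (instance store); it will NOT survive a stop.")
        else:
            print(f"{target} is on {mp}; looks like EBS/root and should persist through stop/start.")

Run it against a data directory before you rely on that data surviving a stop.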
It depends on your users. We have some users that are not super well-versed in AWS and they just see a big disk and put data there, and someone has to come back and move it to an EBS volume to make sure it's safe. /mnt is also used as a staging area for large files and the intention is always to move them to permanent storage when done, but that sometimes doesn't happen. /mnt is usually, in the non-AWS world, where the bigger, more authoritative disks, like an NFS mount to the NAS, would be mounted, so it's counter-intuitive to tell users to treat /mnt like /tmp. Even if someone is using /mnt as a temporary store because they understand EBS v. ephemeral, if they shut down from within the instance, they don't see any warning about the doom of the ephemeral data, and it may be unclear that a shutdown/system halt is the same as a "stop" in the AWS console, and they could lose the data that they had in the staging area unexpectedly.
There are plenty of plausible situations where an AWS user can find themselves with important, even just temporarily important, data on ephemeral. Whether those are the result of "correct" usage or not, it's beyond the pale to just zap that data away and tell the customer tough titties as soon as a shutdown command is issued.
I've been looking for work for a while, but I won't even respond to solicitations or job board posts that so much as mention the cloud, agile or scrum.
What you said adds pretty much no value to the conversation. I didn't mod you down, and I didn't look to see precisely what field you work in. Just the same, "cloud" is pretty much an obfuscation for on-demand co-location and shared-hosting services.
You may not like the trend, some providers have better options than others, and you may require some operations over others... but very few businesses can afford to manage multi-site infrastructure that can dynamically scale. Most are probably served just fine by rented servers or traditional colocation, or don't even need more than one VPS...
That doesn't make the technology bad, and only makes you seem ignorant in your prior statement.
I'll go into it later, but for the most part I consider the cloud a really bad idea. I also regard most cloud companies as "buzzword-enabled" so as to attract investors.
My gripe with scrum and agile is not so much with the methodologies, but with companies that think they have a methodology when in reality they have a bureaucracy.
Geo DNS (to be more precise, AS numbers are what is actually used) is sometimes used as a very last resort: customers in China are customers as well, and if you drop everything from China, that's effectively what the attacker wanted.
I recall only hearing about one time when packets from Chinese ISPs were completely dropped for some reason, and only for a short period.
I also have an anecdotal reference that one can persuade providers that actually deliver traffic from China to filter on ingress at the next-hop routers after China, but it has to be something very serious and prolonged, something that impacts their revenue as well. As another commenter noted, costs for providers are very non-trivial.
In my experience a DDoS is always a money competition: it costs money to mount one, and it costs money to defend against one. Unfortunately, when one of the sides is [allegedly] a country, it doesn't play out very well.
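To make the Geo DNS last resort concrete, here is a minimal sketch: answer DNS queries with a different (or no) address depending on where the querying resolver's IP is allocated. The prefix list and addresses below are made-up placeholders, not real allocation data, and the DNS server itself is omitted; this is just the decision logic.

    #!/usr/bin/env python3
    # Sketch of Geo DNS steering: pick an answer based on the resolver's source prefix.
    # CN_PREFIXES is a made-up placeholder, not real allocation data.
    import ipaddress

    CN_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]   # placeholder ranges
    NORMAL_ANSWER = "203.0.113.20"       # the real site
    SINKHOLE_ANSWER = None               # or a static "sorry" page, or simply no answer

    def answer_for(client_ip):
        ip = ipaddress.ip_address(client_ip)
        if any(ip in net for net in CN_PREFIXES):
            return SINKHOLE_ANSWER
        return NORMAL_ANSWER

    if __name__ == "__main__":
        print(answer_for("198.51.100.7"))   # -> None (steered away / dropped)
        print(answer_for("192.0.2.55"))     # -> 203.0.113.20 (served normally)

Of course this only shrinks the bill if the attackers honor DNS; a flood aimed directly at your IP address ignores it entirely.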
The real reason is that Amazon wants to do business in China, so they absolutely cannot do something like that on their end without getting blacklisted by China's government.
So now you DDoS the captcha system. For companies not operating with massive bandwidth and computing power, you can just overwhelm their defenses. Cloudflare can get away with it, because they explicitly set out to be able to "service" that super huge number of requests.
I was working on an anti-DDoS system for SIP, a UDP-based protocol. Basically the options were: 1. Lockdown: just whitelist known good customers, and break many scenarios. 2. Attempt some kind of analysis, like sending out probes to determine good/bad IPs. 3. Scale the hell up: write L7 stuff that can go at wire speed, and get lots of wires.
Needless to say, #1 is the easiest to implement, but allows you to get your pipe saturated. #2 requires compute + pipe, and #3 is the only thing that'll really work.
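To make option #1 concrete, here is a minimal sketch of the whitelist idea at the application layer: drop any UDP datagram whose source isn't a known customer. The addresses are placeholders, and as said above, doing this in userspace does nothing once your pipe is saturated; a real deployment pushes the same check down into the kernel or hardware.

    #!/usr/bin/env python3
    # Option 1 (lockdown/whitelist), sketched at the application layer.
    # Placeholder addresses; a real system would enforce this in the kernel/NIC.
    import socket

    WHITELIST = {"192.0.2.10", "192.0.2.11"}   # known good customer IPs (placeholders)
    LISTEN_ADDR = ("0.0.0.0", 5060)            # standard SIP UDP port

    def handle_sip(data, src_ip, src_port):
        # Stub: hand off to the real SIP stack here.
        print(f"accepted {len(data)} bytes from {src_ip}:{src_port}")

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(LISTEN_ADDR)

    while True:
        data, (src_ip, src_port) = sock.recvfrom(4096)
        if src_ip not in WHITELIST:
            continue                           # silently drop unknown sources
        handle_sip(data, src_ip, src_port)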
This matters because DDoS'ing a telecom can be very lucrative. I can say with good confidence that demonstrating DDoS capabilities is probably worth 5-6 digits in blackmail against many companies.
Greatfire is unique in that they want the site to remain accessible to ordinary Chinese users while withstanding the DDoS attack (so they can't blackhole all traffic from China, either).
If they put a reCAPTCHA wall in front, the GFW can simply block reCAPTCHA (easy: it is a Google property, and they block everything else from Google anyway) and no one from China can access Greatfire without a VPN. Mission accomplished.
If the goal was only to block Greatfire for non-VPN users, then they could just use the GFW for that from the start. The use of a DDoS can only imply that China wants the site offline for everyone, even VPN users.
I think Greatfire is evading the GFW by hiding their mirrored content behind innocent-looking websites such that the GFW does not block it. Once the censors discover a Greatfire node, they block it, but then Greatfire just moves on to another IP address or domain name.
With this DDoS, the attackers are taking a different route: attacking Greatfire's infrastructure so that it can't serve traffic from China at all. Causing massive bills and outages for Greatfire is probably a bonus, but I don't think that is their main intention.
So what's stopping Amazon?