IMO this is of little help. The root of the attack, i.e., how the attacker injected the code into the user's browser, is not shown. Traditionally, attackers can change domain names within hours of realizing that the destination was discovered.
IMO the correct approach when studying this type of attack is to understand the source of the attack, not the outcome.
TBH, this whole blog post just seemed like a poor ad for Akamai's Page Integrity product, and deliberately overcomplicated 2 problems that should be very simple to deal with:
1. How did the malicious script get injected in the first place? CSP should greatly reduce, if not eliminate, this attack surface.
2. The blog post states, "Also, a lot of CSP policies don't limit WebSockets usage." Umm, what? Why not? This seems incredibly stupid to me; limiting connection targets is the whole point of connect-src. What I did find was this issue report, https://github.com/w3c/webappsec-csp/issues/7 , stating that connect-src 'self' doesn't allow WebSockets in many browsers because a ws:// URL is technically not same-origin. So all I can imagine is that there is some (bad, lazy) practice where, if you wanted connect-src 'self' to allow WebSockets back to the same host, you just said "fuck it" and also put a ws: wildcard in your connect-src, which is just a bazooka aimed at your foot.
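For illustration, here's roughly what that footgun looks like versus the narrow alternative (example.com is a placeholder host, not from the post; this is a sketch, not a drop-in policy):

```
# overly broad: any injected script can open a WebSocket to anywhere
Content-Security-Policy: connect-src 'self' ws: wss:

# narrow: only the site's own wss endpoint is allowed
Content-Security-Policy: connect-src 'self' wss://example.com
```

With the second form, an injected skimmer that tries to exfiltrate card data over a WebSocket to an attacker-controlled host should be blocked by the browser.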
> CSP should greatly reduce, if not eliminate, this attack surface.
Sure. Or just don't load untrusted 3rd party javascript in your payment form. No banner ads. No dodgy trackers. The page where a user enters their card information is a high security context.
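To make that concrete, a card-entry page could ship something like the following locked-down policy (directive values are illustrative assumptions, not a recommendation from the post):

```
Content-Security-Policy: default-src 'none'; script-src 'self';
  connect-src 'self'; style-src 'self'; img-src 'self'; form-action 'self'
```

Starting from default-src 'none' and allowlisting only the page's own origin leaves no room for banner ads or trackers, and form-action 'self' keeps the card form from posting anywhere else.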
No, you're missing a very important vector described in the post. It's not just about making the payment page secure. Indeed, many (most?) e-commerce websites already link off to a secure 3rd party payments page. The problem is that if the originating page gets compromised, it can replace the link off to the secure payment page with a link to a malicious page.
This is not theoretical. Many websites include a primary "marketing" site built in something like WordPress that links off to a more hardened "app" site for authenticated users with stronger security policies. If your marketing site gets compromised, malicious scripts can usually pretty easily replace links to your app site to a spoofed page.
Before HSTS & key pinning were a thing, most people who visited Gmail still browsed to it via http, which redirected to https. Because the original request wasn't secure, someone at Defcon did a silent MITM of Gmail by catching the insecure http requests and not redirecting them to https. Then they caught people's credentials that way and proxied the real Gmail, with all the images flipped upside down so people knew something was up.
It's interesting to think about web redirects as a kind of chain of trust. The earlier insecure request destroyed the security of the secure context established later.
The correct approach is clearly not to ask how 3rd party scripts are being injected into your web page but instead to just purchase Akamai's Page Integrity Manager.
Injection and payload are orthogonal: somebody who can inject scripts just sells that access to somebody who has a payload.
At least my Instagram story feed is full of ads where these people find each other. (I mark them all as "scam or illegal", but Instagram removes such ads only after a few weeks' delay and never bans their accounts.)
Completely agree; I was searching the comments to make sure someone pointed this out. The vector here is how the injection occurred, not whatever the injected script is doing. Once code is injected, it's game over.