They do. They reward sites that load fast. They also offer AMP as another way to speed sites up, which in turn makes them rise in rankings. Beyond that, Google offers a large suite of tools to help web developers speed up their websites.
I hear this a lot, but whenever I search for something common like lyrics or weather, I get only large, bloated webpages, while the lightweight and simple ones are nowhere to be found. Unless Google opens up its reasoning for ranking, I genuinely don't believe they give enough weight to load times.
Huh, what? Which site do you get? I always get azlyrics, which is a fairly simple and clean site. Every other one I tried seems more bloated.
Obviously, there are many other factors at play. SEO is a complicated world, especially in a space as crowded as song lyrics. Though I noticed Google actually has an inline box for lyrics now, which avoids the issue entirely.
azlyrics is far from what I would consider simple or clean. It took over 30s to load for me and had way too much junk on the page.
Yes, there are many factors, and I clearly don't know all of them. I definitely see the problem that Google's inline box (or AMP) is trying to solve, but I think encouraging the right sites to rise to the top is a much better solution.
If only it were that simple. AMP websites get a place in Google's search carousel. So even if your website loads faster than an AMP page, it won't be placed in the carousel - and that's Google forcing companies to adopt AMP even if they already have a fast website.
The world isn't that simple. Search engines know how fast their own fetches are all of the time, but they only know how fast a javascript-heavy site is when they crawl with javascript enabled -- which is rare.
And by construction, AMP pages are easy to analyze without having to execute anything.
You perhaps underestimate the complexity and fidelity of the Google crawl, indexing, and ranking pipeline. Many intelligent people have spent many years trying to solve the sorts of problems you describe.
(disclaimer: I work at Google, and used to work on Search)
I totally get that Google has done a lot of work on these issues. It is still the case that no website that wants good rankings should depend on Google executing its javascript. Implementing AMP, on the other hand, means you know you're going to get a good mobile speed score.
In no way was I dissing Google's crawl or crawl team. I'm just handing out conservative advice for website owners.
(disclaimer: blekko Founder/CTO. Nice to meet you.)
AMP pages can be tested and proven to load fast. There's no way to guarantee that a non-AMP page will be fast for a user. The way AMP is designed allows Google to inspect the source and determine whether the page meets the requirements or not. Caching the page guarantees that the content Google tests is the content the user gets.
Imagine if Google rewarded plain old websites the same way they reward AMP sites. You'd have sites gaming the system by hiding ads and disabling web fonts just to get ranked higher, then taking a crap on "real" users visiting the site.
AMP is a standard for creating fast / jank-free web sites, which all search engines can implement freely. Google is spearheading the effort because nobody else did, but they're not monopolizing it. Everyone is free to contribute.
It is not true that AMP does not accept contributions. What is your data to back that up?
Here is actual data: https://github.com/ampproject/amphtml/graphs/contributors
AMP currently has 275 contributors (people whose code was merged) on its open-source project. Let's aim high and say 50 of them work for Google (probably a bit fewer); that leaves about 225 other contributors.
If all browsers sent the "Origin" HTTP header [1] with POST requests (such that web applications could rely on it), then the CSRF [2] tokens mentioned in the article would become obsolete. You'd just have to check whether the "Origin" header sent by the browser is identical to your scheme + domain name (e.g. "https://www.example.com") and be done. Chrome and Safari implemented the "Origin" header long ago, but unfortunately Firefox [3] and Edge [4] have not yet done so.
The "Origin" header is similar to the "Referer" header but never contains the path or query. Furthermore, CSRF protection requires it only for "POST" requests (i.e. "GET" requests are unaffected). So there is little incentive for an option disable it for privacy concerns.
"It's probably worth reading through https://github.com/w3c/resource-timing/issues/64 and the proposal I linked above. In short, it's not clear that implementing the Origin header the way Chrome supports it actually helps with CSRF and it makes it harder (impossible really) to distinguish CORS requests."
Seems Firefox will implement it anyway because it's still better than nothing.
The Origin header is not as good at preventing CSRF since its value is known in advance. A CSRF token is a one-time value generated on the server; it's impossible to guess or obtain a valid one from the outside.
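For comparison, the token approach boils down to something like this Node sketch, assuming the generated token is stored in the user's session and echoed back in a hidden form field (names are illustrative):

```typescript
import crypto from "crypto";

// Generate a fresh, unguessable token server-side and stash it in the session.
function generateCsrfToken(): string {
  return crypto.randomBytes(32).toString("hex");
}

// On POST, compare the submitted token against the stored one.
// timingSafeEqual avoids leaking information through comparison timing.
function isValidCsrfToken(submitted: string | undefined, stored: string): boolean {
  if (!submitted || submitted.length !== stored.length) return false;
  return crypto.timingSafeEqual(Buffer.from(submitted), Buffer.from(stored));
}
```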
It boils down to how much you trust browsers to implement this without fucking up. In the past trusting browsers to get it right was a questionable idea, with Flash being a particularly reliable weak point which caused Rails to change how they do CSRF protection. I'm not sure Adobe ever fully fixed the issue in all browsers.
Nonces have the benefit of only relying on browsers preventing cross-domain reads.
When Flash is deprecated, and if a site wants to use CSP, then this might start looking like a better trade off.
ATM though, nonces can be automatically added to all same-domain forms on your site with a bit of JavaScript (sketched below), and you can check them trivially on all POST requests, getting most of the non-CSP-related benefits without waiting on browsers.
And even if browsers were to implement it, there is still a long tail of browsers out there that will take forever to update.
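Something along these lines for the form-injection part mentioned above. A browser-side sketch; the meta tag name and hidden field name are assumptions and have to match whatever the server actually checks:

```typescript
// Read the token the server rendered into a <meta> tag, then append it
// as a hidden input to every form on the page.
const token = document
  .querySelector('meta[name="csrf-token"]')
  ?.getAttribute("content");

if (token) {
  document.querySelectorAll("form").forEach((form) => {
    const input = document.createElement("input");
    input.type = "hidden";
    input.name = "csrf_token"; // assumed field name; must match the server-side check
    input.value = token;
    form.appendChild(input);
  });
}
```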
CSRF protection is not about attacks from evil clients (you can easily spoof any header with the HTTP client library of your choice, of course). CSRF protection is about preventing innocent / well-behaving clients from being tricked into POSTing some data on behalf of their (logged-in) user.
Yes. Forwarding a unique CSRF token from the backend gives you some assurance that it's a legitimate request, initiated from a pageview within a timeframe. A header (origin) which always has the same value (the hostname) is inherently less secure, though I overstated how much in the previous comment.
You can use it to identify unsophisticated attacks, sure.
However, if someone has the ability to make malicious HTTP requests on my behalf using my browser, can you really be sure that they don't have the ability to make malicious HTTP requests with altered headers through a malicious extension, a browser-specific exploit, or some other vector?
You still have to do all the other attack mitigation strategies in addition to checking the Origin header, and I'm not sure the extra complexity buys you anything in the long-term.
Not sure I follow your reasoning: CSRF requires a browser, as that's where you'll find a logged-in user that you want to force into doing something without their knowledge. The network layer is already protected by HTTPS. Some browser plugin might modify the header, at which point the whole exercise is pointless anyway as you can't trust the client in that case. Happy to learn where I'm wrong.
Can you expand on the threat model here? After noodling a bit I can't think of an attack that a CSRF token prevents that an Origin header wouldn't, but obviously that doesn't mean there isn't one. Be real curious to hear of one!
On SERPs, both the lightning-bolt ("flash") icon and the AMP carousel are AMP-exclusive, which suggests that Google is abusing its market power.