The claim that process isolation is "antiquated" is absurd, since it's what everyone else is doing except for those, like Firefox, who have not yet started to do anything at all. I don't know any other projects that do what NaCl does, so regardless of whether Google should switch to pure NaCl for sandboxing, if they did, it would be innovation, not scrapping an antiquated technology.
As for whether they should - although Native Client's software fault isolation can provide better latency than hardware-based solutions because there's no need for a context switch, it takes a hit in throughput (7% overhead) because of the code contortions involved (though being able to trust the executable code might improve things). There would be significant issues with supporting JIT in such a model, because JITs typically like to have rwx memory. Multiple sandboxes in the same process wouldn't work with the current NaCl model on 32-bit platforms. And although the SFI has been well secured, putting user code in the same address space as the master process might make exploitation easier - this would be partially mitigated if the address of the user code could be hidden from it, but doing that would require additional overhead because addresses could no longer be stored directly on the stack.
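To make the JIT point concrete, here's a minimal sketch (not Chrome's or NaCl's actual code, just an illustration on POSIX/x86-64) of the pattern a JIT relies on: write machine code into a buffer, then jump into it. A single read/write/execute mapping is the simplest way to do that, and it's exactly what a W^X or SFI-style policy forbids or complicates.

    // Minimal illustration (POSIX, x86-64) of why a JIT wants RWX memory:
    // it writes machine code into a buffer and then jumps into it. Under a
    // W^X or SFI-style policy this single mapping would have to be split
    // into separate write and execute phases, or validated before execution.
    #include <sys/mman.h>
    #include <cstring>
    #include <cstdio>

    int main() {
        unsigned char code[] = {0xb8, 0x2a, 0x00, 0x00, 0x00,  // mov eax, 42
                                0xc3};                          // ret
        void* buf = mmap(nullptr, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) return 1;
        std::memcpy(buf, code, sizeof(code));             // emit code (write)
        int result = reinterpret_cast<int (*)()>(buf)();  // run it (execute)
        std::printf("%d\n", result);                      // prints 42
        munmap(buf, 4096);
        return 0;
    }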
The actual phrasing is "antiquated style of process isolation", which leads me to believe that he's not saying that process isolation is inherently antiquated, but that the way Chrome does it is.
In that case, it still needs some explanation. Chrome's process allocator is apparently pretty complicated, so assuming a reader knows enough to just take "antiquated" as read is a bit much.
> The actual phrasing is "antiquated style of process isolation", which leads me to believe that he's not saying that process isolation is inherently antiquated, but that the way Chrome does it is.
That was my assumption too. Chrome's IPC was developed in secret and never made it into WebKit; instead, Apple later developed WebKit2 - in part as a response to Chrome's IPC - which does IPC for WebKit in a different way. It sounded like the author was saying that Google's way, which is older than WebKit2, is inferior.
I don't see how the fact that Chrome was secret prior to its 9/2/08 launch matters much here. It's existed publicly for ~4.5 years at http://src.chromium.org/viewvc/chrome/trunk/src/ and all of the code is pretty actively developed [1] & [2]. Chromium has now been public much longer than it existed privately, and for any particular file in Chromium, the odds are good that it's substantially different from launch day.
Chromium WebKit does not directly provide a multiprocess framework, rather, it is optimized for use as a component of a multiprocess application, which does all the proxying and process management itself. The Chrome team at Google did a great job at trailblazing multiprocess browsing with Chrome. But it's difficult to reuse their work, because the critical logic for process management, proxying between processes and sandboxing is all part of the Chrome application, rather than part of the API layer. So if another WebKit-based application or another port wanted to do multiprocess based on Chromium WebKit, it would be necessary to reinvent or cut & paste a great deal of code.
> I don't see how the fact that Chrome was secret prior to its 9/2/08 launch matters much here.
It might not, yeah. But I've heard theories that part of the reason it never made it into WebKit was that it was developed in secret and that this annoyed Apple. The same goes for V8, another secretly developed project that also did not replace its counterpart in WebKit.
I won't speculate on why people make the technology decisions they do - and in this case I'm somewhat distant from the WebKit team.
I am pretty familiar with Chrome's multiprocess webview harness. This is one case where, approx. 2 years ago, it was incestuously tied to a lot of Chrome-the-desktop-browser internals - part of the technical debt accumulated during the sprint to ship something. So I can see why, back then, it wasn't appetizing as-is.
A heroic effort by a team of engineers finally managed to separate it in 2012 into "content" (not Chrome, get it?) and set up its own shell, suite of tests, and public API for use by embedders (one of which is src/chrome, but there are others now even within Chromium). It's still not as elegant as I think any of us would like, but it is now usable from a standalone app.
One of these days I need to write a series of posts about some of the "big C++ app design" lessons we've learned as a team over the past few years.
It would be pretty ironic if Apple were actually annoyed that Chrome was developed in secret considering they forked KHTML and developed it into Safari & WebKit in secret in essentially the same way.
Firefox uses a sandboxed process for plugins like Flash. Mozilla implemented per-tab processes in a project called Electrolysis, but it would break many popular add-ons. Firefox OS is able to use process isolation, though, because it doesn't need to support legacy add-ons.
Really? I haven't heard a single thing about Electrolysis in a long time, except I remember reading somewhere recently that it was abandoned. I'm happy to hear that Firefox OS is using it; still, keeping legacy add-ons working seems like a bad reason to stop pursuing such an important security feature on the desktop.
I was employed at Mozilla when the pronouncement to "suspend" work on Electrolysis came from on high.
I think addons are a small part of the picture.
The big deal is there are only so many Firefox developers, and Electrolysis was turning into a real sinkhole of time with no end in sight. There were a lot better ways to make Firefox a better browser in the meantime, and those things are being done.
Firefox on Android used to use Electrolysis until it switched to the current Java front-end. And FirefoxOS is using it for sure: process per app, so that apps can be easily killed in low-memory conditions.
As for "legacy add-ons", that would be "every single add-on". Not to mention that the desktop Firefox UI itself would have to be heavily rewritten as well. The judgement, as mcpherinm says, was that the cost was too high for the possible gains. I agree that process isolation is good for security, but there are various other security mitigation strategies that can be used that had a higher bang for the buck at the time.
Yup, I think some people don't realize why processes exist and what they're there for. It just looks like a concept that we've been using for a long time, so "it must be too old and should be replaced", without understanding _how_ it works and _why_ it's there.
There's no way that "resource-intensive and unpredictably timed context switches" account for 100-200ms on modern hardware. With modern multi-core/CPU boxes, a typical time for a context switch is on the order of 10 microseconds, and that doesn't take into account things like CPU affinity. (It's still going to depend a lot on OS and hardware - for example, a new i7 vs. the old Core architecture.)
If there are actually high costs, they shouldn't be due to an "antiquated" multi-process architecture, but to other things like marshalling data to and from the JS engine.
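For anyone who wants to sanity-check the order-of-magnitude claim above, a classic (rough, assumption-laden) way to estimate context-switch cost is a pipe ping-pong between two processes; each round trip forces at least two switches.

    // Rough sketch (POSIX): estimate context-switch cost with a pipe
    // ping-pong between a parent and child process. Each round trip forces
    // at least two switches; on current hardware the per-switch figure
    // typically lands in the single-digit-microsecond range.
    #include <unistd.h>
    #include <sys/wait.h>
    #include <chrono>
    #include <cstdio>

    int main() {
        int p2c[2], c2p[2];
        if (pipe(p2c) != 0 || pipe(c2p) != 0) return 1;
        const int rounds = 100000;
        char byte = 'x';

        if (fork() == 0) {                           // child: echo bytes back
            for (int i = 0; i < rounds; ++i) {
                (void)read(p2c[0], &byte, 1);
                (void)write(c2p[1], &byte, 1);
            }
            _exit(0);
        }

        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < rounds; ++i) {           // parent: ping, wait for pong
            (void)write(p2c[1], &byte, 1);
            (void)read(c2p[0], &byte, 1);
        }
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("~%.2f us per round trip (>= 2 switches)\n",
                    static_cast<double>(us) / rounds);
        wait(nullptr);
        return 0;
    }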
I don't know about these claims — just because most OS research is now 20+ years old doesn't mean that a result, process isolation, is antiquated. You likely use a very modern operating system with process time-sharing (1964) using virtual memory (1950s/1960s) and so many other results of OS research from 30+ years ago…
This is how science works: research suggests some result; until other research suggests a different result, the first result is our best working hypothesis. Process isolation is hardly antiquated.
https://plus.google.com/103382935642834907366/posts/XRekvZgd... is a Chrome developer talking about cache metrics within chrome. He claims the cache is capped at 320MB. Now the graphs in the article never show the cache size hitting 320M, so possibly they are both right. ;)
I would prefer it if the article had more information on how to reproduce their tests. For example, they claim a faulty hashmap implementation. This seems like it would be possible to benchmark. Instructions on reproducing the 100ms delay between button click and network traffic would be cool too - as would data on how much worse that makes facebook's latency. Also, is their cache backed by ssds or spinning disk?
I'm also impressed that the speed of context switches matters on websites.
The jump to claiming that chrome caching is tied to ads is interesting. Perhaps the author could fill in more details.
Let me respond to this comment from the article:
"""
This is not the case for Chrome: the browser keeps all the cached information indefinitely; perhaps this is driven by some hypothetical assumptions about browsing performance, and perhaps it simply is driven by the desire to collect more information to provide you with more relevant ads. Whatever the reason, the outcome is simple: over time, cache lookups get progressively more expensive; some of this is unavoidable, and some may be made worse by a faulty hash map implementation in infinite_cache.cc.
"""
Chromium (and thereby, Google Chrome) does not cache forever. The author is clearly misled by the infinite_cache.cc file he referenced. That is our experiment file, designed to examine a theoretical "infinite" cache's performance for data gathering purposes. It doesn't actually cache the resources, but just records the keys (basically, the URL). It only runs on a small set of user browser sessions (only for users who opt-in to helping make Google Chrome better and a subset of their browsing sessions).
As my previous Google+ post mentions (thanks for the parent for linking it), we cap the cache size at 320MB. The author is simply factually incorrect about the aforementioned claim.
As for cache performance as the cache gets larger, I fully believe that it gets slower. We have data that backs up this assertion. Of course, larger caches mean that more gets cached. And there are ways to restructure the cache implementation to avoid the painful latency on cache misses. While cache misses are indeed a large percentage of resource requests, it is misguided to analyze the cost of cache misses in isolation. For the opposite argument about how we should be increasing cache sizes, see Steve Souders' posts: http://www.stevesouders.com/blog/2012/10/11/cache-is-king/, http://www.stevesouders.com/blog/2012/03/22/cache-them-if-yo..., etc.
The caching issues are far more complicated than described in the original post. The data is much appreciated, and we have similar data that we're looking at as we're making our decisions about caching.
To set the cache to only 100MB, you can always use the "--disk-cache-size=104857600" flag.
My issue with Chrome is that it eats up a lot more RAM than Firefox. When doing research, I often have 30-50 tabs open. With Chrome my system runs out of physical RAM and starts thrashing. With Firefox, the UI becomes unresponsive due to its single-threaded design.
I wish Chrome would start a MemShrink project like Mozilla did, or that Mozilla would finish what they started with Electrolysis.
On my 2GB netbook, chrome has gone from my preferred browser to unusable due to the high memory footprint of recent builds. One killer is the GPU process often taking 200+MB.
Before giving up, and switching to FF, I tried the --disable-gpu --disable-software-rasterizer switches to disable the GPU process but that prevented videos from playing at full speed.
> My issue with Chrome is that it eats up a lot more RAM than Firefox. When doing research, I often have 30-50 tabs open. With Chrome my system runs out of physical RAM and starts thrashing. With Firefox, the UI becomes unresponsive due to its single-threaded design.
Alternatively, since you seem to be a power user, you could consider upgrading your hardware to 8 or 16 GB of memory; it's not that expensive nowadays, and given your power user status, more memory = faster computer experience = higher productivity. Or just more tabs.
[old man mode] Back in the day we upgraded our computers instead of blaming software.
There's no need to answer FUD with FUD: Firefox's UI does not become unresponsive with increasing number of tabs. Certainly not with just 50 anyhow; I do that all the time and never notice a slowdown. The tab-closing animation is less smooth than chrome's however. And while it's certainly true chrome uses more memory per tab, I can't imagine running into that problem very easily even on somewhat outdated hardware. A 4GB system should be able to do 50 tabs normally, and how much more do you need?
I have a quad-core, 8GB machine at work and a dual-core 4GB machine at home. Both are running Win7 64-bit. How much more do I need?
I have noticed slowdown in Firefox's UI on both machines. More important than the number of tabs is CPU usage. For example, my HTML5-heavy trading platform often causes the single-threaded Firefox UI to freeze and slow down on both machines, while I have never noticed Chrome freezing when this site is open.
On the other hand, Chrome's UI runs smooth as butter until either open tabs or other programs bring memory usage to over 90% of physical memory in the task manager. Recent builds hit that wall a lot quicker than they did a year or so ago.
I am a Firefox user, so this is not coming from a Firefox-bashing POV. I could open 100-200 tabs in Firefox without problems if I disable JavaScript and all add-ons.
As soon as you have some website with heavy JS usage, even 50 tabs can slow down the UI. This is on a quad-core Ivy Bridge with 8GB RAM and an SSD.
The amount of JS on a single website now is getting insane. Chrome has similar problems as well, as the OP said - just different ones.
The FUD is that the # of tabs has anything to do with it. Also, these problems have been getting a _lot_ better with recent releases, which are much better at avoiding blocking the UI thread. It's not perfect, but it's not something you see very often either. Let me put it this way: I can't even remember the last time I've had a UI slowdown, and I use FF on various machine with lots of tabs all the time. (Firebug's still really slow, though).
So when you say "some heavy JS usage" what exactly do you mean? Certainly not stuff like google docs/mail/calendar, and they're all heavy on the JS...
A large part of Chrome's memory usage is due to its process isolation. It's also what makes it more likely to be responsive, even with UI blocking bugs.
I don't know when you changed the behavior, but until a few months ago the cache was infinite. A close friend was unable to use Chrome because it took minutes - yes, minutes - to start up until we found it was a problem related to the huge cache it was maintaining.
No, we have never had an infinite cache. What is more likely is that your friend may have encountered a bug. If he has information on this issue, please file a bug at crbug.com/new and I will be happy to triage it.
I don't understand why the author comes across as having a serious axe to grind. What is aptiverse's horse in the race?
Also this: This is not the case for Chrome: the browser keeps all the cached information indefinitely; perhaps this is driven by some hypothetical assumptions about browsing performance, and perhaps it simply is driven by the desire to collect more information to provide you with more relevant ads.
I don't know about him, but I LOVE the fact that Chrome will show me a link in purple (or whatever) if I visited it 2 years ago. Completely, totally, absolutely love it. Other browsers can (last time I checked) be configured to behave similarly. When you browse hundreds of web pages a day, catalog only a few of them, and then research a topic you once looked up over a year ago, it helps to know which pages you've seen and which you haven't.
As the author notes, that's mainly due to the (according to him, but also the only explanation I can think of) lookup table implementation.
Something like this should be done with a hash table for constant-time lookup regardless of the number of entries (as entries are not being added "in real time" non-stop, the cost of bucket resizing shouldn't be too great), or with a trie for the best storage properties when many URLs share the same domain (the case the author notes). If done right (correct hashing algorithm, good implementation, decent collision handling for the hash table; or any decent space/time performance for the trie), this shouldn't be a problem.
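For illustration, a minimal sketch of the kind of in-memory index described above - a hash map from URL to the on-disk location of the cached entry. The names are invented for the example, not Chrome's actual cache structures.

    // Hypothetical sketch of an in-memory index for a disk cache: a hash map
    // from URL to where the cached body lives on disk. Lookups stay amortized
    // O(1) in the number of entries, assuming a decent hash and collision
    // handling (names are invented, not Chrome's actual structures).
    #include <cstdint>
    #include <optional>
    #include <string>
    #include <unordered_map>

    struct CacheEntryLocation {
        uint32_t file_id;   // which cache file holds the body
        uint64_t offset;    // byte offset within that file
        uint64_t length;    // entry size in bytes
    };

    class CacheIndex {
    public:
        void Insert(const std::string& url, CacheEntryLocation loc) {
            index_[url] = loc;
        }
        std::optional<CacheEntryLocation> Lookup(const std::string& url) const {
            auto it = index_.find(url);
            if (it == index_.end()) return std::nullopt;  // cache miss
            return it->second;
        }
    private:
        std::unordered_map<std::string, CacheEntryLocation> index_;
    };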
It's always going to be a problem; even the best hash tables of this size will suffer performance problems as they age. As the table increases in size, less of it will fit in memory/various caches, and more of it will involve (expensive, potentially numerous) disk seeks.
Once you take caches into account, hash tables are not amortized O(1). Having said that, those graphs show delays of well over 100 milliseconds, and that sounds excessive. It's possible the delay is primarily due to a poor implementation and not so much due to inherent limitations.
For something like link coloring over a long history, a Bloom filter would seem to be ideal for reducing the number of true hash table lookups you'd need per page.
Large bloom filters have exactly the same problem. And since they're fixed size, you'd need a potentially huge bloom filter to avoid huge numbers of false positives; more likely you'd need to periodically regenerate it based on the original data.
This is a really tricky optimization because on a positive hit you've introduced more random I/O! After all, you've got the bloom filter and then the hash table lookup. False positives are also bad - so you only save something on true negatives. Is it worth it? Only if you get the tuning just right.
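For what it's worth, here is a minimal sketch of the Bloom-filter idea being debated here (the size and hash mixing are purely illustrative): a "definitely not visited" answer skips the history lookup entirely, while a "possibly visited" answer still pays for the real lookup, which is why only true negatives are cheap.

    // Illustrative Bloom filter for the visited-link idea: "definitely not
    // visited" skips the history lookup entirely, "possibly visited" still
    // requires it. Fixed size, so it must be regenerated as history grows;
    // the size and hash mixing below are arbitrary.
    #include <bitset>
    #include <cstddef>
    #include <functional>
    #include <string>

    class VisitedLinkFilter {
    public:
        void Add(const std::string& url) {
            bits_.set(Slot(url, 0x9e3779b9));
            bits_.set(Slot(url, 0x85ebca6b));
        }
        // May return false positives, never false negatives.
        bool MaybeVisited(const std::string& url) const {
            return bits_.test(Slot(url, 0x9e3779b9)) &&
                   bits_.test(Slot(url, 0x85ebca6b));
        }
    private:
        static constexpr std::size_t kBits = 1 << 20;  // ~1M bits, fixed size
        static std::size_t Slot(const std::string& url, std::size_t seed) {
            return (std::hash<std::string>{}(url) ^
                    (seed * 0x9e3779b97f4a7c15ULL)) % kBits;
        }
        std::bitset<kBits> bits_;
    };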
Thank you so much for this comment. I'm currently teaching a lesson on hash tables, and this happens to be a great example of their real-world applications and implications.
This philosophy is embraced by the developers working on WebKit: in fact, the code responsible for rendering a typical web page averages just 2.1 effective C++ statements per function in WebKit, compared to 6.3 for Firefox - and an estimated count of 7.1 for Internet Explorer.
What is an "effective C++ statement"? That's a really odd measure and I can't get my head around it.
It's marketing bullshit. He probably wrote a statement in C++ and measured the time it takes to execute - I have no idea (it's also impossible to know what sort of statement). The numbers suggest around 2% accuracy (7.1), which is pretty impressive given that we're measuring "code responsible for rendering a typical web page" per "function" in "effective C++ statements". All of these are well defined in his textbook, I bet.
It's more likely he just took a number of results and rounded to the first decimal place to display it better, not to suggest some level of accuracy.
Last I heard, IE was closed source... I'd love to know how he got his hands on the IE code base to make that measurement. All his posturing about design decisions being a business need of Google makes me wonder whether this was an M$-sponsored article...
They say 'estimated'. A way to do that is to look at the disassembly of IE code, assuming an on average constant number of assembly instructions per C statement. I don't think that is a bad assumption, but it does ignore potential differences between compilers.
Also, IE source is likely available through Microsoft's Shared Source Initiative (http://www.microsoft.com/en-us/sharedsource/default.aspx). After all, Microsoft has vehemently argued that IE is inseparable from the OS. I think it would require some lenient interpretation of that license to use it for this purpose, though.
Well you could instrument the source code and measure the number of calls, and compare that to the rough number of non-call ops per function. Not a perfect metric but it's something. His claims do seem far fetched though.
They mean that each function does relatively little work, so to render a given web page, the number of function calls (and therefore vtable lookups) is higher than in other browsers. This translates to more overhead and slower execution.
MSVC with PGO will do it, by inserting a guard on the vtable pointer and inlining the most common implementation. But you still take the possible cache miss of reading the vtable pointer, of course. And the other commonly used compilers (clang and gcc) don't do anything like this, even with PGO.
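Written out by hand, the guard-and-inline transformation looks roughly like the sketch below (class names invented for illustration; a real PGO build compares the vtable pointer directly, whereas typeid is used here as a portable stand-in).

    // Hand-written version of the guard-and-inline idea (invented class
    // names). The common case avoids the indirect call, but the type check
    // itself still has to touch the object's type information.
    #include <typeinfo>

    struct RenderObject {
        virtual ~RenderObject() = default;
        virtual int layoutWidth() const = 0;
    };

    struct RenderBlock : RenderObject {
        int layoutWidth() const override { return 800; }
    };

    struct RenderInline : RenderObject {
        int layoutWidth() const override { return 120; }
    };

    int widthGuarded(const RenderObject& o) {
        // Guard: profiling said RenderBlock is the most common receiver.
        if (typeid(o) == typeid(RenderBlock)) {
            return 800;            // inlined body of RenderBlock::layoutWidth()
        }
        return o.layoutWidth();    // slow path: normal virtual dispatch
    }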
This is based on general programming knowledge, as I have never used C++, but I think it is referring to a highly modular programming style that uses many small, general functions.
I.e., instead of writing a few quick lines of code to perform bisection search in the middle of logic flow, you create a general bisection search function and implement that.
This leads to high productivity ("a statement per function"), and makes code cleaner and easier to update, but can substantially increase overhead costs.
No, C++ is particularly well suited to that since it's possible to implement those with zero (and in practice less than zero) overhead. The problem is overridable functions; i.e. not generic implementations that work on various structures by compiler specialization, but methods on classes whose implementation varies at runtime by dynamic dispatch. E.g. a getLayoutWidth method with a different implementation for blocks and inline runs, and where it's possible to call the implementation without knowing at compile time which it is.
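A small sketch of the distinction being drawn (invented names, not WebKit's): a generic implementation specialized by the compiler has no dispatch overhead, while the virtual-method version pays for an indirect call on every invocation.

    // Compile-time specialization versus runtime dynamic dispatch for the
    // same operation (illustrative names only).
    #include <vector>

    struct Block  { int width() const { return 800; } };
    struct Inline { int width() const { return 120; } };

    // Generic implementation: one copy per type is generated and width() can
    // be inlined completely - no vtable, no indirect call.
    template <typename T>
    int totalWidth(const std::vector<T>& items) {
        int sum = 0;
        for (const auto& item : items) sum += item.width();
        return sum;
    }

    // Dynamic dispatch: the implementation varies at runtime, so every call
    // goes through the vtable.
    struct LayoutNode {
        virtual ~LayoutNode() = default;
        virtual int width() const = 0;
    };
    struct BlockNode  : LayoutNode { int width() const override { return 800; } };
    struct InlineNode : LayoutNode { int width() const override { return 120; } };

    int totalWidth(const std::vector<const LayoutNode*>& items) {
        int sum = 0;
        for (const auto* item : items) sum += item->width();  // indirect call
        return sum;
    }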
This is probably off-topic, but I've started using Safari religiously now. When Chrome first came out, it was barebones, fast and did the job. This is what Safari currently feels like, so I'm using it. Fast, stable, and does the job.
Now Chrome feels like it's just another bloated browser. Which is slow, and hogs my computer. </opinion>
IMHO: cloud print, and the WebUI reimplementation of GUI system toolkits (like the print preview dialog). There are also some new features that make me a bit worried about the direction Chrome is taking: the new NTP with a search box duplicating functionality already in the omnibar, and the Chrome app launcher (I want a browser, not a whole OS).
They've gone overboard with their value-adds, and there's a non-trivial amount of features that are not required for simple web surfing: sync, apps (& background apps), cloud print, themes, phishing/malware detection, omnibox stuff, etc.
None of that stuff appears on the default screen layout though. I mean, I agree some of it is useless cruft (but seriously: you put malware detection on that list?!?!). And that's true for pretty much all mature software.
But as far as "just give me what I want" I continue to think Chrome does it better.
I think the only strong argument on that side is one you don't make: on first run, Chrome stops and asks you to sign in to your Google account. I do that willingly, because I really like bookmark synchronization. But it's definitely not a "minimal browser" thing and if you're not a Google user it's probably pretty annoying.
When you're browsing, you don't see most of the stuff you mentioned. Chrome may present different options to you, but these are opt-in and possible to hide permanently.
So you concede there's lots of stuff that many of us won't use? That's pretty much the definition of bloat. Hiding it doesn't make it go away.
I'm sort of just playing devil's advocate here though, since Chrome's "bloat" isn't an issue for me _yet_, though I am wary of it becoming the next Firefox (whose snowballing of features led to an amount of bloat which incidentally caused me to start using Chrome in the first place).
One, two, three... sure, you won't notice a new feature here and there. But hundreds of features later, the app takes just a little longer to start up, a little longer to update, is slightly less stable than it used to be, has a few more attack vectors for malware, etc. It's more or less the principle of the matter - using the right tool for the right job and whatnot; when a web browser starts resembling some mishmash conglomeration of functionality that just so happens to touch on web browsing, and all you really need and want is web browsing, then yeah, it's bloated.
> So you concede there's lots of stuff that many of us won't use? That's pretty much the definition of bloat. Hiding it doesn't make it go away.
I define bloat differently. It needs to a) unnecessarily add complexity to the core functions I personally use, and b) degrade performance. wget has tons of features that I've never used, but I don't consider it bloated. To me, hiding it well does in fact make for a bloat-free application.
I'd just like to emphasize: when you're browsing, you don't see that stuff.
I do not see any of the additional stuff that the above poster mentioned. I use exactly one feature in Chrome other than just its web browsing, and that is sync. Even then, I haven't interacted with sync since the first time I installed Chrome on my computer.
> I'd just like to emphasize: when you're browsing, you don't see that stuff.
This is strange, since many people say Firefox is bloated, but in my opinion it looks about identical to how Chrome looks, so they seem to define bloat in some other way.
What you would consider bloat, I'd call clutter. I de-cluttered my Firefox to the point that I only see the tab bar, a command bar, and the web page.
To me, it was the UI and the multitude of toolbars that would come with every add-on (even the useful ones) and then need manual disabling, plus the need to restart the browser over and over again (at times due to new add-ons being installed, disabled/enabled, or browser updates), that pushed me to Chrome.
This isn't a problem anymore, however, and hasn't been for a long time - at least, not unless you really try.
Both chrome and firefox have excellent session restore, so even if you need to update (which you usually don't), you won't really notice. Even session cookies are properly restored, as are partially filled forms, though some pages' scripts cause the form restoration to fail.
Toolbars have always been rare, and they certainly are now; it's probably possible to still install a toolbar extension, but I can't remember the last time I did. That may be more a change in typical extension style than in the actual browser, however.
Perhaps Chromium proper would be a good fit for you; it's got the polish of Chrome UI but lacks all of that crap that Google has started bundling with Chrome (sync, etc.).
The hardest part is finding the downloads, since they go to great lengths to prevent anyone from easily obtaining a compiled binary.
Chromium includes sync. Basically the biggest parts you'll be missing are a pdf reader and flash. pdf.js has a chromium extension to make it easy to use there (though depending on the document, it won't necessarily feel faster and more minimal), but if you need flash for something, Chrome is one of the better ways to keep it updated these days. You can always run Chromium and open up Chrome for Flash, though.
Oh wow, it has sync? I thought all of the proprietary Google features were kept in Chrome; I'm not sure how sync could've been implemented in an agnostic browser...
Thanks for the further info though, it's been awhile since I've updated my Chromium.
>all of that crap that Google has started bundling with Chrome (sync, etc.).
Why do people keep bagging on sync? Personally, I'm a fan of not having to reïmport my bookmarks and reënter all of my autosaved passwords every time I reformat or change computers.
It's kind of an extension of the annoyance from not using G+ and yet having it crammed down my throat; it's the same tactic just with a little less gusto.
I actually did try it out when they first introduced it, but when I installed Chrome on a different machine and tried to get my settings there, the sync server got totally confused and re-enabled all sorts of settings that I'd disabled (like syncing my entire history and removing my cookie-blocking exceptions.)
Agreed, this was a long time back, so I actually decided to give it another try today, but now it seems to have a problem with 2FA... I gave it both my master and app-specific password, but it consistently comes back saying the app-specific password is wrong. Has anyone else noticed this problem?
2fa with chrome sync is a PITA. I've found the best way to enable sync is to login to gmail which usually triggers a dialog that asks you whether you want to use those credentials for chrome sync.
Yet another example how horrible this "ecosystem" craze everyone seems to be enamored by these days quickly becomes. Why the fuck do I need to sign up with Gmail to use something in chrome? (How would it even work? I moved from google apps to fastmail, so my MX records are all different now.)
It's the same with most RSS readers nowadays... every single app I've tried assumes I have a Google Reader account, and flat out refuses to work without it. Same with Google Talk. All these apps implement rss/xmpp backends anyway, why do they hardcode google into it?
Sorry for the rant, it's just frustrating how (needlessly) difficult it is once you go outside the sanctioned path. Death by a thousand cuts.
The stuff that is included with Chrome shouldn't have any impact on performance, the webkit and v8 engines are the same as that of Chromium. For Google, they do all their development on Chromium (which is an open source project), then apply a few patches and a new logo on top to get Chrome. It isn't some great conspiracy that they don't distribute Chromium binaries, it simply isn't their end product.
EDIT: Apparently it is official / sanctioned, thanks for the correction.
PREVIOUS: Yeah I saw that, but that's not an official or sanctioned site or binary, and thus didn't feel comfortable openly suggesting it. There's no way to guarantee that the contents haven't been modified or that they're even kept updated.
Perhaps you were being tongue-in-cheek, but I do feel the need to point out that the grandparent said they were using Safari, implying that they're on OS X.
I admit to liking Safari, and enjoy the seamless experience (on a mac), but I am a bit hesitant to switch completely given some of the things Safari apparently lacks in the realm of security.
As an example, I don't believe Safari currently supports HSTS or cert pinning. Both Chrome and Firefox do.
The pinch and zoom feels much smoother on Safari; it's especially helpful on my laptop when viewing from a distance.
That being said, I'm really tired of being in the "walled garden". I have a Windows desktop and an Android tablet that I would love to share iCloud tabs and bookmarks with. Chrome can do all of that.
That's exactly where I am. Chrome got to the point where it was using 2GB+ of memory. Safari uses a lot too, but consistently less than 1/2 of what Chrome would use and seems not to leak as much over time. It also boots MUCH, MUCH faster.
Uninstall any extensions you have running in Chrome to get it feeling fresh again. In my personal experience (strictly subjective, not benchmarked), a clean chrome install runs faster than a clean Safari install.
"... and perhaps [Chrome's aggressive caching] simply is driven by the desire to collect more information to provide you with more relevant ads."
The whole article loses credibility with me because of that one statement. Surely this guy knows that a local DNS or document or image cache is not being used to provide the Google mothership with more data to improve Adwords performance, right?
Having tested the interactivity of various WebGL applications on low-to-medium-end graphics cards, I will say that Chrome does have significantly higher latency than Firefox. Try this demo:
Crank the number of instances up until the frame rate drops (on my Intel HD Graphics 4000 it's about 20,000) and then drag your mouse. Notice that the drag latency is significantly worse in Chrome than Firefox.
I can get up to 51k before I start to notice a drop in Chrome (M27) and in Firefox (R19) I can only get to about 27k. So it might just be that Chrome isn't optimized for your Intel graphics and is better with dedicated cards?
The opinion on process isolation is the most controversial statement in the article, but the other claims are more interesting and easier to verify. If the findings are confirmed by others, it will hopefully drive the WebKit/Chromium ecosystem towards improvement.
I am just wondering here: two blog posts from a startup in "stealth" mode - one about "advanced" CSS tricks, the other criticizing the Chrome browser. Are those guys about to release a competing web browser? :P Too few clues to know, but the question still popped into my mind.
It would explain the critical tone of the article. Don't make me say what I didn't say: they could be proven right or wrong; that is not my point, and I don't want to follow the debate about Chrome performance here. What I suggest is: not knowing the real goals of Aptiverse's business and their interests, I would back up a little bit and try to look at the big picture - and avoid a religious Emacs/vi war.
But then, I'm going to make a bold bet anyway. They could be about to release a new ground-breaking web browser in the near future, make everyone switch, and fix the issues they announced in their blog post. You never know what the future is made of! :-)
When it comes to browser benchmarks, there is a lot of emphasis on JS performance, page load time, and memory consumption. But one area where I've seen Chrome absolutely fall apart is scrolling performance. Safari regularly gives me 3-5x the frame rate when scrolling, to the point where some websites are nearly unusable on Chrome (say, Reddit with RES) but are butter-smooth on Safari.
There was an interesting "aha" moment in the development of the Xerox Star system when they figured out that all the layers of abstraction meant each character placed took a lot of subroutine calls. Flattening the architecture resulted in a 10x improvement in performance. It was an amazing result.
I have done my own tests a while ago (with Firefox and Chromium) and have concluded that for the sites that I frequent Firefox performs much better and uses less RAM.
Every now and then I repeat that experiment... So far always with the same outcome.
The notion that Chrome is faster is mostly based on benchmarks; for my browsing behavior that does not translate to real-life performance.
I had the same experience. For a while when I was using a slow internet connection, I noticed speed differences a lot. For a few sessions I would try loading each page on Firefox, Chrome, and Safari, and Firefox was clearly the fastest.
This is another reason why we should be wary of everything moving to WebKit. If one day it's just too slow, we're not left with easy options to move away from it.
I agree... You can only validate what is presented to you; a hacker won't kindly hand you a piece of code to run after exploiting a hole and ask you to verify it.
It's also rather circuitous - browsers already verify what they are executing.
"This is because the synchronization needs to occur over a low-throughput, queue-based IPC subsystem, accompanied by resource-intensive and unpredictably timed context switches mediated by the operating system. To understant [sic] the scale of this effect, we looked at the latency of synchronous, cross-document DOM writes and window.postMessage() roundtrips."
Web pages running in different Chrome renderer processes can only communicate using postMessage. WebKit's design makes it practically impossible to access DOM or JS objects across different processes or threads (the only browser that can do this is IE -- top-level browsing contexts have run in different threads in IE since the beginning).
You can test this hypothesis by creating two same-origin documents in different processes. In the first window, do window.name="foo"; then in the other, do window.open("javascript:;", "foo").document.documentElement.innerHTML="hello"; this will work in every browser except Chrome.
Chrome actually provides a way for web developers to explicitly allow a window.open invocation to create a new renderer process (see http://code.google.com/p/chromium/issues/detail?id=153363 ). This way, the author can allow Chrome to use a new process if they don't need access to the popup beyond postMessage.
So, I have no idea where that peak 800 millisecond DOM access latency came from, but it's not from IPC across renderer processes. I'd love to see the benchmark that was used to get that number.
The circumstance I described that Chrome doesn't support is very rare in real-world web pages. It's pretty unusual to use window.open to get a JS reference to an existing window, and not supporting that makes Chrome's renderer process model much much simpler. So, it's a reasonable trade-off.
TL;DR: Chrome does a good thing. This blog post is written by someone who is not well informed.
It makes it simpler, but does it also make it slower? It seemed that the blog's point was that communicating between windows is slower in Chrome than in other browsers because of this behavior. You simply explained further why, and confirmed that it is only in Chrome. Right?
Now, I do think there is room for argument that this is a better way. But you do not seem to be undermining any of the blog's points. Those being that chrome has a slower process to communicate between windows, and that it is the only browser that does this. The frequency with which this is needed was not a point of contention.
The author claims that windows in different processes communicate slowly in Chrome.
My claim is that windows in different processes cannot communicate at all in Chrome. Only same-process windows can communicate -- and that refutes the author's claim that IPC slows down cross-window/frame communication in Chrome.
Hmm... interesting. I was honestly under the impression that all tabs (or windows) were separate processes. Are you saying that if you call window.open with the same domain, a new process is not started? (Sadly, I don't have Chrome handy to test this right off.)
That's right -- Chrome uses the same process when there is a JS reference between the windows.
(In addition, Chrome will sometimes make windows/tabs share a process if there are a lot of tabs open, to save memory. There is a limit to the total number of render processes that Chrome will have.)
Cool, thanks. I am, not shockingly, curious how this works, now. Is it just a hinting mechanism, or can the rendering process of a tab/window change on the fly? What happens when you go to a new url in an opened tab? (I mean these more as things I'm now interested in. Maybe I'll get off my virtual butt and check the source. Granted, that source tree is less than casually approachable.)
The only real wtf here is the history issue. The rest of the stuff -- especially the "antiquated" process isolation -- seem like reasonable trade offs. Switching to a better hash implementation should solve 90% of the performance problems.
This is a fantastic blog post, and I am thankful Alex took the time to write about it. There is no question someone over at the Chrome team is starting a conversation about these findings.
The best thing about Chrome is that they move fast. So I suppose the first step is to get an official response by someone over there....
I started using Chrome in late 2010 and used it continuously until about 3 months ago. It started suffering from tabs just freezing every few minutes of browsing. This behaviour started happening around the same time on two different machines - one quad-core desktop, the other a Lenovo laptop. Since then I've gone back to Firefox, but this article is making me think I may just need to do a fresh install of Chrome.
While we are on the topic of Chrome performance problems, I encountered one recently when running JavaScript code in Chrome. I have job progress data shipped back from the server to the browser asynchronously, which drives the progress bar, and the data are concatenated together for display. Chrome simply can't handle a long string of concatenations - it just hangs. Other browsers have no problem.
Are you talking about performance issues as in visual lag, or performance issues as in "the list is empty for a while"? I experience neither, but a friend of mine had issues where no suggestions would show up for a while; he relied on them for some reason, and it annoyed him.
That's interesting that Chrome's caching is so broken; I wasn't aware it actually existed at all, because I use the real internet that is not next door to the Googleplex, and every time I hit the back button I enjoy a 1-5 second break waiting for the page to be completely reloaded.
>This is not the case for Chrome: the browser keeps all the cached information indefinitely; perhaps this is driven by some hypothetical assumptions about browsing performance, and perhaps it simply is driven by the desire to collect more information to provide you with more relevant ads
>Some of these issues - such as the "infinite history" or the antiquated style of process isolation - may be driven by Google's business needs, rather than the well-being of the Internet as a whole.
How does caching the files indefinitely lead to better ad targeting? Keeping the history, perhaps, but I don't believe Chrome's web history is used to target ads when they have a lot of other ways of doing it, like Google cookies from people logging into Gmail at home and work, third-party sites using AdWords or Google+, etc.
>> How does caching the files indefinitely lead to better ad targeting?
>It doesn't, of course. That's pure FUD; Chrome doesn't contribute to ad-serving in any form other browsers don't.
Actually, I'd argue that caching files (indefinitely or otherwise) speeds up the internet for the user; higher speed = more pageviews = more ad impressions and potential ad clicks.
Google's quest for internet speed is a win-win-win win for us since we get faster internet, win for them since they get more ad impressions / revenue, another win for them for gaining goodwill and a positive reputation.
I would go further with your argument (which is perfectly valid to me), for the sake of completeness: it helps them to have a foot in web standardization and, more importantly, to offer a viable alternative solution, fully integrated and "in control". Secured. I don't think it is only a matter of "speed". :-)
Before Chrome, they were at the "mercy" of the market leader: IE - with its OS companion MS Windows, the first "barrier" to the web, which is Google's main playground. IE was not really moving the web forward, and was known for a lot of issues. I remember people reluctant to use their credit card on the web because of their vague feelings about Windows/IE insecurity. Things have evolved A LOT since then. Microsoft IE is now much more respectful of the W3C standards AFAIK, more stable, etc. And as you can see, a lot of business emerged from that stability. I could not have envisaged so many possibilities if the status quo still held today as it did in 1998. I would make a bold statement: thanks to Firefox, Chrome, and hackers, we are now seeing all those startups...
It was a very important challenge for Google (and it is not finished), because they have an incentive in people using the "open" web more and more, as you said. The more users on the internet, and the more time they spend on it, the more they watch ads, spend money, and consume. And I still know people frightened by this "tool" that they don't understand. Viruses, credit card theft, etc. "Who are those guys, the Anonymous hackers?" I was asked not long ago. I was visiting friends who own a PS3 when the PlayStation Network was shut down because of an act of piracy, and they were totally shocked to find their video games useless. Etc., etc. - a long list of examples.
Google understood early that it was in their interest to work on that matter. Those topics will take up more and more space in the news in the near future, I guess. Google won't be able to sort everything out, of course. But they needed to take more control of their own fortune, and Chrome was a step forward into action.
They are still working on that full "vertical" offering. Chrome was just ONE part of the full scheme. They've released Android, now they are releasing the Google Pixel; tomorrow, Google Glass. That must be an exciting time at Google, because the work of so many years is taking form, and I guess it will translate into an even better future. At least they show me that they envisioned, a long time ago, what the threats and challenges for their business are, and how to tackle them. Facebook, native GUIs, and any other kind of "closed" web (as opposed to the open web) are another kind of threat, but that is another story, I guess... :-)
As soon as I read the part where the author hints that Chrome uses the user's browsing history to deliver more relevant ads, I thought the author either does not understand how the history feature works or is being disingenuous. I've been using Chrome for nearly 4 years without ever deleting history. The only slow pages that I've experienced are non-English pages (Arabic in particular), and those are slow especially after an update.
A quote worth noting from the article: "Some of these issues - such as the "infinite history" or the antiquated style of process isolation - may be driven by Google's business needs, rather than the well-being of the Internet as a whole. Until this situation changes, our only recourse is to test a lot - and always have a backup plan."
This glosses over how anti-user infinite history is.
The problem of the ever-expanding cache is annoying but easy to deal with - Ctrl+Shift+Del, select only "cache", then "obliterate since the beginning of time".
But if you follow this procedure with your browsing history, your (or at least, my) browsing experience is significantly degraded because all your URL autocompletes are gone, at least until you re-visit all your regular sites. You can tell chrome to delete, say, just your browsing history from the last week, but that doesn't help you when what you want to do is delete all browsing history except that from the last week (to preserve your autocompletes).
In what possible world could process isolation be driven by business needs? For that matter to describe it as being 'antiquated' is rather bizarre. Every modern browser is moving in that direction including IE10 for security and performance reasons.
For that matter, I have no idea what 'infinite history' means, every browser records history and I have no clue how that could possibly impact performance in the slightest.