
Taking advantage of multiple cores doesn’t require multiple (heavyweight) processes. A single multithreaded process would do just as well.

Not to say it’s a bad idea, just that we’ve got to remember that using multiple processes is nothing revolutionary. A single thread per tab would probably work just as well from a parallelization standpoint, but you’d lose the ability to kill one tab without bringing down the entire browser.




Helps in a few ways:

1. Fewer synchronization points. You now have a two-level hierarchy (threads and processes) instead of one level (just between threads). You can reduce the scope of the hardest-hitting synchronization (e.g. malloc) to contend within a smaller arena.

Anyone well-versed in Win32 know if there are any GDI-related advantages to different processes? Synchronizing graphics context accesses is another PITA, which may be avoidable here.

2. Resiliency: a single bad plugin or V8 bug need not take down the entire browser, just a self-closing tab (see the sketch after this list).

3. Security (not in this version of the browser). The supervisor process may be able to set per-process access controls (e.g. RBAC) to keep the tabs in check: e.g. reining in ActiveX while allowing the in-house controls to run with as much access as they need.

A lot of OS functionality is process-based, and building on processes opens up a lot of room for new possibilities.
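
To make the resiliency point concrete, here's a minimal sketch, assuming a hypothetical render_tab() as the per-tab work: the supervisor forks one process per "tab", and a crash in one child shows up as a wait() status instead of taking the whole browser down.

    // Minimal sketch: one process per "tab"; a crash in one child
    // is observed by the supervisor process, which keeps running.
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    static void render_tab(int id) {
        if (id == 1) {
            volatile int *p = nullptr;
            *p = 42;                 // simulate a plugin bug: segfault in tab 1
        }
        printf("tab %d rendered fine\n", id);
    }

    int main() {
        for (int id = 0; id < 3; ++id) {
            if (fork() == 0) {       // child: one "tab" per process
                render_tab(id);
                _exit(0);
            }
        }
        int status;
        pid_t pid;
        while ((pid = wait(&status)) > 0) {
            if (WIFSIGNALED(status))
                printf("tab process %d crashed (signal %d); close just that tab\n",
                       (int)pid, WTERMSIG(status));
        }
        return 0;
    }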


I agree. However, it seems fundamentally simpler to build a share-nothing system than a threaded one.


Yes, but "share-nothing" almost always has some sharing. (If it doesn't, it can be run somewhere else, by someone else, at some other time, including "never".)


Yeah, Chrome tabs are not "shared-nothing".

They have a common document cache, a common cookie store, a common history store, probably a common dns lookup result cache, and maybe some other things. These all need to be carefully synchronized between multiple tabs.

In fact, I would argue that multithreading would be better than multiple processes, because even more things could easily be shared. For example, I imagine that CSS stylesheets are parsed into some big fat data structure inside browsers. If I open two tabs from a website that share a stylesheet, it would be optimal if they shared this same internal representation (no locking would be required, since it's read-only). That has the obvious savings in memory, but it also increases speed, since the CSS file only has to be parsed and processed once instead of multiple times. And things like sharing keep-alive connections between tabs are virtually impossible across processes, while straightforward with threads.
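
A minimal sketch of that read-only sharing, assuming a hypothetical Stylesheet type and parse_css() helper: the stylesheet is parsed once, and each tab thread hangs on to the same immutable object with no locking.

    // Hypothetical parsed representation; real CSS parsing is elided.
    #include <cstdio>
    #include <memory>
    #include <string>
    #include <thread>
    #include <vector>

    struct Stylesheet {
        std::vector<std::string> rules;
    };

    static std::shared_ptr<const Stylesheet> parse_css(const std::string &src) {
        auto sheet = std::make_shared<Stylesheet>();
        sheet->rules.push_back(src);   // stand-in for actual parsing
        return sheet;                  // const from here on: safe to share
    }

    int main() {
        auto sheet = parse_css("body { margin: 0; }");  // parsed once
        auto tab = [sheet](int id) {   // each tab shares the same object
            printf("tab %d sees %zu rule(s)\n", id, sheet->rules.size());
        };
        std::thread t1(tab, 1), t2(tab, 2);
        t1.join();
        t2.join();
    }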


Except that with multithreading you give up the OS-provided, process-level memory management and are back to tracking down all those memory leaks that at least Mozilla never seems able to fix satisfactorily. Even though Chrome seems plenty fast to me, I'd gladly give up some speed if I never had to close the browser and restart it because it was taking up 3/4 of available memory.


>probably a common dns lookup result cache

Uhh, they get that from the OS for free.


Nevertheless, browsers still maintain their own internal DNS cache. Firefox: http://jelmer.jteam.nl/2006/11/04/disabling-the-firefox-dns-...


Agreed.

But don't undervalue the difference in mindset. At least for me, threading makes me think "what can I peel off of my main task to run in threads" versus share-nothing which makes me think "here are the 3 shared resources that I anticipate will be bottlenecks".


> At least for me, threading makes me think "what can I peel off of my main task to run in threads"

Don't do that.

> "here are the 3 shared resources that I anticipate will be bottlenecks".

Do something like that.

The difference between threads and processes is in the mechanisms for sharing. While those mechanisms affect how you organize computation (different things are cheap), they don't mandate an organization.


And I would argue that this is The Right Way to think about things. When you're working in a shared-nothing world, suddenly the lines you draw between components become much more important. Thinking about these lines and drawing them carefully almost always results in more modular and more flexible software.


"much more important" is actually "much more expensive to cross".

Expensive lines make systems less flexible, not more. They lead to copies for efficiency, aka denormalization, which is another word for "bug waiting to happen". While this can be dealt with, doing so involves lots of nasty tradeoffs, with bugs a common outcome.

More to the point "more modular and more flexible" is neither necessary nor sufficient for producing good software.


"more modular and more flexible" is neither necessary nor sufficient for producing good software

I'm glad I'm not the only one who thinks this way. I wish I could upmod by 1 million.


I would argue that it is necessary, though not sufficient. Writing black box tests for non-modular code is a nightmare.


Multiple threads don't buy you crash protection. If one of the threads crashes, the process and all threads in it are terminated.
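
A quick demonstration (the null-pointer write stands in for any crashing bug):

    #include <cstdio>
    #include <thread>

    int main() {
        std::thread t([] {
            volatile int *p = nullptr;
            *p = 1;                  // crash in one thread...
        });
        t.join();                    // ...kills the whole process:
        printf("never reached\n");
    }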


Your claim is true only if you assume that there will be no penalty for sharing memory across cores. This is sort of true now with 2-4 core machines, but will probably not be the case as the number of cores increases.


Can you explain? How is sharing memory any slower than each process having its own address space? I'll admit: I don't know a lot about the system bus and how a multi-processor machine accesses the memory.


I believe the GP was not talking about shared memory spaces but shared memory buses. It would not be trivial to have 50 processors sharing a single pool of physical memory even if the processes don't share any physical memory addresses. Making all of them agree on the contents of a shared memory space is quite a nightmare.


I'm not sure how they're doing it now, but AMD used to design their multi-core processors optimistically (essentially NUMA). The RAM would be partitioned between cores, and they would only communicate over a central bus if one needed memory that resided in a different core's partition.

This actually works quite well since the OS tends to schedule a process on the same core, so processes tend to always access the local memory partition.


If that's the way they're doing it, then it sounds bad.

Why? Because the "working quite well" is only really valid for single threads with no shared data structures. In other words, when you pretend a thread is a process.

As soon as you have more than one high-load thread, the OS will want to split them across multiple processors, which means that you're now trying to share the same chunk of memory between processors. If the OS tries to keep them on the same core, though, then you've got two threads competing for CPU time while another core sits idle.

Then again, even turning them into processes on a modern OS wouldn't distribute the memory contention all the time; memory is usually copy-on-write across forked processes, which means that unless you've written to the memory, reads are still contending for the same bus.
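
A small demo of those copy-on-write semantics: after fork(), parent and child logically share the buffer's pages, and only the child's write forces a private copy, which is why the parent never sees the change.

    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        static char buf[4096];
        strcpy(buf, "parent data");
        pid_t pid = fork();
        if (pid == 0) {                            // child
            printf("child reads: %s\n", buf);      // shared page, no copy yet
            strcpy(buf, "child data");             // write triggers the copy
            _exit(0);
        }
        waitpid(pid, nullptr, 0);
        printf("parent still sees: %s\n", buf);    // unchanged: "parent data"
    }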


Threads are the same as processes to the Linux scheduler.
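
One quick way to see this: on Linux every thread is a separate kernel task with its own TID, and those tasks are what the scheduler juggles.

    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstdio>
    #include <thread>

    static void show(const char *who) {
        // gettid() has no libc wrapper on older systems; use syscall()
        printf("%s: pid=%d tid=%ld\n", who,
               (int)getpid(), (long)syscall(SYS_gettid));
    }

    int main() {
        show("main");                          // tid == pid for the main thread
        std::thread t([] { show("worker"); }); // same pid, different tid
        t.join();
    }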

When the alternative is using a shared bus all the time, it works out nicely this way.

Most programs are single-threaded, and even most multithreaded programs don't share that much between threads. Of course the scheduler is going to schedule across CPUs in a reasonable way, but if it makes sense to keep a task on the same CPU, it does that. The point is to keep bus contention to a minimum, and this does that in the average case.


Yes, but the typical data access patterns differ between threads and processes.

Again, this architecture makes sense for the "lots of totally independent processes" case. The problem is that this case isn't as common as you'd expect. On Linux, if you fork a process, you're sharing memory between parent and child until one of them writes to it. With threads, you're sharing all read-only data unless you've explicitly duplicated it.



