On NaCl I partially agree and partially disagree. While it's true that the sandboxing is only necessary due to insufficiently flexible kernels, imo the most interesting part of NaCl and asm.js is their use of a portable bytecode that compiles into native code. Portability really is necessary - CPU architectures don't change that often, but if people had started distributing websites as native code a decade ago, none of them would have envisioned that a large portion of web browsing is now done on ARM-based devices. Yet you cannot make a fast portable JIT. You say that JavaScript doesn't need to be fast, but it's really nice to have a high-level, dynamic language that still runs fast - in fact, it was compelling enough to be one factor in the success of JavaScript on the server, despite the language's weaknesses.
However, there's no reason JavaScript (or some suitably compatible dynamic-language bytecode) JIT couldn't be provided as a fundamental API in addition to the static compiler. Yeah, it doesn't feel like a clean architecture when you want to use Python or Lisp and it almost translates neatly to that bytecode, except for small runtime differences that end up adding a lot of overhead... but it's better than nothing.
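To make the overhead concrete with a rough sketch (every name here is made up and the semantics are simplified): Python's + doesn't mean quite the same thing as JavaScript's +, so a compiler targeting a JS-like bytecode can't just emit a bare a + b - it has to route every operation through a runtime helper along these lines:

    // Hypothetical helper a Python-to-JS-bytecode compiler might emit for "a + b".
    // Python ints are arbitrary precision and objects can define __add__/__radd__,
    // so a plain JS "a + b" would be semantically wrong.
    function py_add(a, b) {
      if (typeof a === "number" && typeof b === "number") {
        if (Number.isInteger(a) && Number.isInteger(b)) {
          var r = a + b;
          // Emulate Python's overflow-free integers with BigInt when a double would lose precision.
          return Number.isSafeInteger(r) ? r : BigInt(a) + BigInt(b);
        }
        return a + b; // float arithmetic behaves the same in both languages
      }
      if (a && typeof a.__add__ === "function") return a.__add__(b);
      if (b && typeof b.__radd__ === "function") return b.__radd__(a);
      throw new TypeError("unsupported operand types for +");
    }

Multiply that by every operator, attribute lookup and function call, and the little differences add up fast.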
I think that your hypothetical arrangement would be very cool. I'm not sure that it would actually be better than what we have - for example, writing a screen reader would likely be a nightmare if some random webpage might be using a browser library that didn't support it; good luck implementing anything like user scripts/custom CSS, scrapers, Readability, magic text reflow for iPhones, smooth zooming, etc. Good luck doing something like the transition to hardware accelerated rendering browsers did a few years ago (sure, you could only support it for new sites, but as is I get smooth scrolling for all sites). And since different engines would now be fundamentally different from one another, rather than the relatively thin layers over HTML that are currently popular, developers would have to spend more time learning different APIs. If some engine stopped being maintained, then it would be very difficult to retrofit websites that use it to support the newest features. Et cetera. Meanwhile, these days browsers move pretty damn fast, lessening the advantage of non-standardized development - and many new APIs are hooks to the OS anyway, not things that UI layers could implement on their own.
But it would be cool. I don't mean to be too negative: there would be a lot of advantages, and it would be interesting to try out.
I suppose it might happen. PNaCl and asm.js are soon going to be supported in two of the most popular browsers; alternatively, if JS engines get good enough that specific support for asm.js isn't required to achieve performance for low-level code (https://bugzilla.mozilla.org/show_bug.cgi?id=860923), then given how competitive all the major browsers already are on JS speed, asm.js will effectively be "supported" everywhere on short notice. It might not be that long until the first serious attempt to make an alternative UI stack for browsers...
NaCl is not a portability layer. It is a security layer. PNaCl is a portability layer, built inside NaCl more-or-less in the manner I just described, but with all the overheads and limitations of NaCl (which are real). So NaCl is in total agreement with me. asm.js is basically a joke, rolling portability and security into one layer, but when you're dealing with the web you take what you can get sometimes.
>Yet you cannot make a fast portable JIT.
So? What difference does sticking this in the browser as a privileged component make? There's no reason google.com can't provide a DOZEN compilations of V8 in the setup I described. The difference is I can write my own portability layer. Maybe some authority can control which portability layers are valid to prevent too much native code. Mozilla is the perfect candidate with their police-the-web attitude.
>However, there's no reason JavaScript (or some suitably compatible dynamic-language bytecode) JIT couldn't be provided as a fundamental API in addition to the static compiler.
Did you even read my list of points? You don't need this! You just give the user access to properly sandboxed native code (not NaCl, which has limitations and overhead) and provide portability layers, plus the ability to add new portability layers. There is NO reason Javascript needs to be privileged in the manner you're suggesting.
>for example, writing a screen reader would likely be a nightmare if some random webpage might be using a browser library that didn't support it
How is it any different if people start building all their stuff with WebGL? What about when people use tonnes of images without alt tags? Accessibility never works automatically! And it can be provided properly as a browser service, which different renderers hook into. Hell, it could probably even be in userland. Mozilla could even provide disincentives to non-compliant renderers. They love playing the policeman, so why not do it properly instead of doing it by holding technology back as much as possible?
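To sketch what I mean by a browser service (all of these names are invented - nothing like this exists): the renderer, whatever it is, publishes a semantic tree to the browser, and the screen reader consumes that tree rather than scraping a privileged DOM.

    // Hypothetical accessibility service a custom renderer would hook into.
    // None of these APIs exist; this only illustrates the division of labour.
    var a11y = browser.accessibility.createTree(surfaceHandle);
    a11y.addNode({ id: 1, role: "heading", level: 1, text: "Checkout" });
    a11y.addNode({ id: 2, parent: 1, role: "button", text: "Pay now",
                   onActivate: function () { submitOrder(); } }); // submitOrder is the page's own code

A renderer that never bothers to publish that tree is exactly as inaccessible as a page full of alt-less images is today - no more, no less.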
>good luck implementing anything like user scripts/custom CSS, scrapers, Readability, magic text reflow for iPhones, smooth zooming, etc.
Firstly, HTML would still most likely be the standard for most web pages. So there's no need for "luck"; it would be done the same way it always has been. You're trying to set up an opposition between my ideas and HTML. My ideas are opposed to HTML, the DOM and Javascript as privileged entities. And they would have to compete with other markups, document models and languages. Just as C++ and C# dominate on the desktop but have to compete with more specialized languages - to everyone's benefit. And aside from the most basic, unstyled HTML, it has always taken some forethought on the part of the webpage author to get things like accessibility and compatibility with different window sizes to work. I can tell you this because I have terrible eyesight and view many pages zoomed a long way in.
>Good luck doing something like the transition to hardware accelerated rendering browsers did a few years ago (sure, you could only support it for new sites, but as is I get smooth scrolling for all sites).
Why on Earth would this be a problem? Even though the renderer is in user mode, it's not baked in statically, or even necessarily linked in at all. It could be spoken to via message passing. First ask the system to give you a shared rectangle inside your tab, then send the handle to a renderer from mozilla.org along with some web content, saying "please draw this". Similarly for input events etc. (I sketch this below.) And of course, you can have preloads that do all this for you, so on the server side you just send down the HTML in the usual way. These kinds of arguments are always such rubbish, just like when Mozilla says binary code can't evolve as easily as source code. What does that even mean? Source code IS binary code!
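Here's that sketch - every API name is invented, purely to show the shape of the protocol:

    // Hypothetical message-passing renderer; browser.createSurface, loadModule
    // and the message format are all made up for illustration.
    var surface = browser.createSurface({ parent: tabHandle, width: 800, height: 600 });
    var renderer = browser.loadModule("https://mozilla.org/renderer");
    // Hand the renderer a drawing target plus some content; it owns layout and painting.
    renderer.postMessage({ type: "render", target: surface.handle, content: htmlSource });
    // Input events get forwarded the same way instead of being handled by a privileged DOM.
    surface.onInput(function (ev) { renderer.postMessage({ type: "input", event: ev }); });

And a preload could wrap all of that, so a plain HTML page never has to know it's happening.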
>If some engine stopped being maintained, then it would be very difficult to retrofit websites that use it to support the newest features.
Which is why most people would use HTML, and people who are trying to do things that HTML is totally unsuitable for would not, paying the resulting costs.
>Meanwhile, these days browsers move pretty damn fast, lessening the advantage of non-standardized development - and many new APIs are hooks to the OS anyway, not things that UI layers could implement on their own.
The browser is a technological slug. V8, Flash, NaCl and Unity are the only reason we have had any real advancement, and it's an advancement back to where we were decades ago. Web developers just have extremely low expectations and are always trying to resist the approach of superior technologies. I can remember telling people years ago that sockets were needed (there's this wonderful thing called interrupt-driven programming, you see) and got much the same sort of criticism from all the "web experts" as you outlined above. Of course, it has since been implemented.
>many new APIs are hooks to the OS anyway, not things that UI layers could implement on their own.
I already said this! Perhaps you missed the point of the post, which is that the point of such primitives is to implement applications (e.g. UI layers). It is a post against the monoliths.
>JS speed
I'm sorry: "JS speed" doesn't exist on current hardware. The reason asm.js was so fast with minimal additions to the optimizer is that the JS optimizers all work best on statically typed code (in other words, not Javascript), which is of zero surprise to anyone who knows anything about compilers or optimization. Essentially, the people working on "Javascript" engines have really been writing optimizers for a small subset of the language that discards everything dynamic. Whether this was intentional or not is irrelevant; that is what they have done. That's how bad Javascript is for this task, and how GOOD the old, statically typed ideas are: so good they couldn't help but do it, even when they were trying to optimize their "dynamic" language.
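Just look at what asm.js code actually is: JavaScript in syntax only, with every value pinned to a static type through the |0 and + coercions, so the optimizer never has to handle anything dynamic. A standard-shaped asm.js module looks like this (the module and function names are mine; the coercion idioms are asm.js's own):

    function FastMath(stdlib) {
      "use asm";                        // opt in to the statically typed subset
      var sqrt = stdlib.Math.sqrt;
      function length(x, y) {
        x = +x;                         // parameter declared as double
        y = +y;
        return +sqrt(x * x + y * y);    // return value declared as double
      }
      function addInt(a, b) {
        a = a | 0;                      // parameter declared as 32-bit int
        b = b | 0;
        return (a + b) | 0;             // result coerced back to int
      }
      return { length: length, addInt: addInt };
    }

There isn't a single dynamic feature left in there; it's a statically typed language wearing JavaScript syntax.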