There's another reason why I want JavaScript in the browser to die:
We haven't had a new browser engine written from scratch since KHTML. Firefox is a descendant of Netscape; Chrome (and Safari) descends from WebKit, which is itself a descendant of KHTML; Edge is closed source, but I'm almost sure there's some old IE code in there.
Why?
It's simply too expensive to create a fast (and compatible) JS engine.
If WebAssembly takes off, I hope that one day we'll have more than three browser engines around.
The real problem is CSS. Implementing a compliant render engine is nearly impossible, as the spec keeps ballooning and the combinations of inconsistencies and incompatibilities between properties explode.
Check out the size of the latest edition of the book "CSS: The Definitive Guide".
I think Blink's LayoutNG project and Servo both show that you can rewrite your layout implementation (Servo also has a new style implementation, now in Firefox as Stylo). Both serve as an existence proof that it's doable.
It's doable if you already have a large team of experienced web-engine developers, a multi-million-dollar budget, and years to spend on just planning.
Implementing an open standard shouldn't be like this. Even proprietary formats like PDF are much simpler to implement than CSS.
'Open standard' says nothing about something being simple and straightforward. What CSS is trying to do is complicated for a whole bunch of pragmatic reasons.
Last time a browser tried to 'move things forward', ngate aptly summed it up as:
> Google breaks shit instead of doing anything right, as usual. Hackernews is instantly pissed off that it is even possible to question their heroes, and calls for the public execution of anyone who second-guesses the Chrome team.
I never claimed that the existing browser vendors can’t do it incrementally — they certainly can. What I wrote was: “... there's not going to be another web browser engine created by an independent party.”
Even if Grid and Flexbox are awesome and perfect and the solution to all our problems, they don't make everything else in CSS suddenly disappear; a new layout/render engine still has to implement every bit of it, quirks included.
React Native has what seems like a pretty sane CSS-like layout system. Maybe this could become the basis for a "CSS-light" standard that could gradually replace the existing CSS and offer much faster performance for website authors who opt in.
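For reference, React Native's layout engine (Yoga) exposes roughly a flexbox subset through plain per-node style objects, with no cascade or selectors. A minimal sketch of what such a "CSS-light" subset looks like (the object names here are just illustrative):

```js
// Illustrative style objects in the flexbox-style subset that React Native's
// Yoga engine supports: per-node layout properties only, no cascade,
// no selectors, no floats.
const styles = {
  container: {
    flex: 1,
    flexDirection: 'row',            // main axis
    justifyContent: 'space-between', // distribution along the main axis
    alignItems: 'center',            // alignment on the cross axis
    padding: 16,
  },
  sidebar: {
    width: 240,                      // plain numbers instead of a unit zoo
  },
  content: {
    flex: 1,                         // take the remaining space
  },
};
```

A layout engine that only has to resolve a subset like this is dramatically simpler than one that has to honor the full cascade and every legacy layout mode.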
I presume parent's point is about how they then interact with other layout modes (what if you absolutely position a flex item, for example), along with the complexity of things that rely on fragmentation (like multicol).
What do you base the assumption on that Javascript is the critical piece of complexity here? (it might very well be, but it's not obvious to me)
At least some of the JS engines are used in non-browser projects (V8 and Microsoft's), which at least superficially would suggest you could write a new browser engine and tie it to one of those existing JS interpreters. WebAssembly will gain interfaces to the DOM as well, so the complexity of that interaction will remain.
> Edge is closed source, but I'm almost sure there's some old IE code in there
EdgeHTML is a fork of Trident, so yes. That said, I'm led to believe there's about as much commonality there as there is between KHTML and Blink: they've moved quite a long way away from where they were.
> It's simply too expensive to create a fast (and compatible) JS engine.
I don't think that's so clear cut: Carakan, albeit now years out of date, was ultimately done by a relatively small team (~6 people) in 18 months. Writing a new JS VM from scratch is doable, and I don't think that the bar has gone up that significantly in the past seven years.
It's the rest of the browser that's the hard part. We can point at Servo and say it's possible for a comparatively small team (v. other browser projects) to write most of this stuff (and break a lot of new ground doing so), but they still aren't there with feature-parity to major browsers.
That said, browsers have rewritten major components multiple times: Netscape/Mozilla most obviously with NGLayout; Blink having their layout rewrite underway, (confusingly, accidentally) called LayoutNG; IE having had major layout rewrites in multiple releases (IE8, IE9, the original Edge release, IIRC).
Notably, though, nobody's tried to rewrite their DOM implementation wholesale, partly because the payoff is much smaller and partly because there's a lot of fairly boring, uninteresting code there.
Oh, yeah, that definitely counts. I was forgetting they'd done that. (But man, they had so much more technical debt around their DOM implementation than others!)
I completely disagree that the issue is JavaScript here.
In my opinion, the issue is the DOM. Its API is massive, there are decades of cruft and backwards compatibility to worry about, and its codebase is significantly larger than the JS engine's in all the major open-source browsers out there.
I'm not sure I agree that the DOM is that bad (more that people are using it improperly and for the wrong things), but yeah, modern JavaScript is hardly to blame for anything. The closest thing to an argument I've heard is "mah static typing".
Asking WebASM to be everything, including a rendering engine, is asking for problems at such an atrocious level.
I'm not implying that the DOM is bad (IMO it's one of the most powerful UI systems I've ever used, both in what it's capable of and in how quickly I can develop and iterate with it), just that it's BIG.
There's a lot of "legacy" there, a lot of stuff that probably should be removed, or at least relegated to a "deprecated" status.
If people are misusing the DOM API at a fundamental level (to do non-DOM-related things), that's not a fault of the API. It's as if everyone has forgotten that DOM means Document Object Model. The vast majority of websites and web apps are not very complicated on the client side, so I'd say that the DOM API generally does its job well. Trying to do anything that isn't about constructing a document, or that involves heavy amounts of animation or node replacement, with a document-building API is asking for a bad time. It's quite implicitly attempting to break fundamentals of computer science.
Making the API one uses to render pages in a browser a free-for-all doesn't solve the problem, and you end up losing many of the advantages of having actual standards. What would be better is for the browser to provide another set of APIs for things beyond the "expertise" of the DOM. This is kind of the case right now in some regards, but there's a reason why React and Glimmer use things like virtual DOMs and compiled VMs. I'd argue that a standardization of such approaches could be a different API that probably shouldn't even be referred to as a DOM, because it is meant to take a lot of shortcuts that aren't required when you are simply building a document. In a sense, WASM is intended to fulfill this purpose without replacing the DOM or JavaScript.
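To make the virtual-DOM point concrete, here's a heavily simplified sketch of the idea (not React's or Glimmer's actual implementation): render into a cheap tree of plain objects, diff it against the previous tree, and only touch the real DOM where something changed.

```js
// Minimal virtual-DOM sketch: plain objects stand in for DOM nodes,
// and only the differences get written back to the real document.
function h(tag, props, ...children) {
  return { tag, props: props || {}, children };
}

function create(vnode) {
  if (typeof vnode === 'string') return document.createTextNode(vnode);
  const el = document.createElement(vnode.tag);
  for (const [name, value] of Object.entries(vnode.props)) {
    el.setAttribute(name, value);
  }
  vnode.children.forEach(child => el.appendChild(create(child)));
  return el;
}

// Naive diff: replace a node only when its shape actually changed,
// otherwise recurse into the children. (No keyed reconciliation,
// no prop diffing, no removal of surplus old children.)
function patch(parent, el, oldVNode, newVNode) {
  if (oldVNode === undefined) {
    parent.appendChild(create(newVNode));
  } else if (typeof oldVNode === 'string' || typeof newVNode === 'string') {
    if (oldVNode !== newVNode) parent.replaceChild(create(newVNode), el);
  } else if (oldVNode.tag !== newVNode.tag) {
    parent.replaceChild(create(newVNode), el);
  } else {
    newVNode.children.forEach((child, i) =>
      patch(el, el.childNodes[i], oldVNode.children[i], child));
  }
}
```

Real libraries layer keyed reconciliation, prop diffing, and batching on top, but the core trick is the same: avoid walking and mutating the real DOM wholesale.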
Object-oriented programming is quite often misused/misunderstood. Does that mean it's a "bad" concept? I wouldn't say so. Education tends to suck, and people are generally lazy and thus demand too much from the tools they use.
I'm not copping out, because I'm not putting the DOM on a pedestal. Calling it a bad API because it doesn't work well for a small minority of cases is a total mischaracterization. If it were an objectively bad API, it wouldn't have seen the astounding success it has.
EDIT: I'm not saying that programmers are stupid... but that their expectations are sometimes not congruent with reality.
WebAssembly has nothing to do with JavaScript. When people make this association it is clear they are painfully unaware of what one (or both) of these technologies is.
WebAssembly is a replacement for Flash, Silverlight, and Java Applets.
Flash, Silverlight and Java Applets all provided APIs and functionality above and beyond what JavaScript or the browser DOM provides. WebAssembly is the opposite, as it is much more restricted than JavaScript. WASM is a sandbox inside a sandbox.
I am extremely unclear on whatever point you're trying to make, here, because it really does seem to come from a place of ignorance on WASM and JS. It makes no sense.
It seems like you're claiming, in a really roundabout way, that WASM will never have DOM access, even though it's planned[1]. There are even VDOMs[2] for WASM already. Future WASM implementations that include DOM access can absolutely, and for many folks will, replace Javascript.
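For context on how this works today: a WebAssembly module only gets the capabilities you hand it through its import object, so DOM access currently goes through JavaScript glue. A rough sketch of that pattern (the module name, its exports, and the `set_text` helper are made up for illustration):

```js
// The WASM module can only call what we explicitly pass in; here the only
// "DOM capability" it receives is this single glue function.
const imports = {
  env: {
    set_text(ptr, len) {
      // Hypothetical helper: decode a string from the module's linear
      // memory and write it into the page.
      const bytes = new Uint8Array(instance.exports.memory.buffer, ptr, len);
      document.querySelector('#output').textContent =
        new TextDecoder().decode(bytes);
    },
  },
};

// 'app.wasm' and its 'run' export are hypothetical.
const { instance } = await WebAssembly.instantiateStreaming(
  fetch('app.wasm'), imports);
instance.exports.run();
```

The DOM-access / host-bindings proposals are about letting modules call such Web APIs directly instead of routing every call through a JavaScript shim like this.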
> It seems like you're claiming, in a really roundabout way, that WASM will never have DOM access
I am not going to say never. It does not now and will not for the foreseeable future though. I know DOM interop is a popular request, but nobody has started working on it and it isn't a priority.
Part of the problem in implementing DOM access for an unrestricted bytecode format is security. Nobody wants to relax security so that people who are JavaScript-challenged can feel less insecure.
Which of the security concerns browser Javascript deals with do you think are intrinsic to the language, as opposed to the bindings the browser provides the language? If the security issues are in the bindings (ie: when and how I'll allow you to originate a request with credentials in it), those concerns are "portable" between languages.
Not sure if this is directly relevant, but there have been all sorts of type confusion bugs when resizing arrays, etc. Stuff in the base language. They exist independent of API, but merely because the language is exposed.
It isn't due to the language but to the context: the known APIs provided to the language, which can only be invoked in certain ways from the language.
How would a browser know to restrict a web server compiled into bytecode specifically to violate same origin? The browser only knows to restrict this from JavaScript because such capabilities are allowed only through APIs the browser provides to JavaScript.
I really don't understand your example. Are you proposing a web server running inside the browser as a WebAssembly program, and the browser attempting to enforce same origin policy against that server? That doesn't make much sense.
Yep, it doesn't make sense and that is the problem. There is no reason why you couldn't write a web server in WASM that runs in an island inside the browser to bypass the browser's security model.
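For what it's worth, this exchange is really about where the enforcement lives. In today's model the same-origin/CORS checks sit in the bindings the browser exposes, not in the language; a small sketch of that (the URL is just an example):

```js
// The cross-origin policy is enforced by the fetch binding itself:
// whatever language or toolchain produced this call, the browser decides
// whether the response is readable based on the target's CORS headers.
fetch('https://example.com/private-api')
  .then(res => res.json())
  .then(data => console.log(data))
  .catch(err => {
    // Without permissive CORS headers on the other origin, the browser
    // rejects the request here, regardless of what code initiated it.
    console.error('blocked by the browser, not by JavaScript:', err);
  });
```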
It will take more than just DOM access to replace JavaScript. Just off the top of my head, you'd also need access to the event loop, XHR, WebSockets, audio & video, WebRTC, Canvas, 3D, the filesystem, cookies & storage, encryption, Web Workers, and more.
That's an arbitrary distinction that's driven by developer group politics, not a meaningful technical distinction. (Much like the old Rubyist, "It's an interpreter, not a VM.")
Machine languages were originally intended to be used by human beings, as were punch cards and assembly language. There's no reason why a person couldn't program in bytecode. In fact, to implement certain kinds of language runtime, you basically have to do something close to this. Also, writing Forth is kinda close to directly writing in Smalltalk bytecode. History also shows us that what the language was intended for is also pretty meaningless. x86, like a lot of CISC ISAs, was originally designed to be used by human beings. SGML/XML was intended to be human readable, and many would debate that it succeeded.
> That's an arbitrary distinction that's driven by developer group politics
Not at all. JavaScript is a textual language defined by a specification. Modern JavaScript does have a bytecode, but it is completely proprietary to the respective JIT compiler interpreting the code and utterly unrelated to the language's specification.
> There's no reason why a person couldn't program in bytecode.
The point is that a "textual language defined by a specification" can serve the exact same purpose that a bytecode does. And JavaScript is very much on this path.
That is this discussion, because the fact that people program directly in JavaScript does not prevent it from being in the same class of things as a bytecode.
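As a concrete illustration of JavaScript already being used as a compile target, here is roughly what asm.js-style output looks like (hand-written for illustration, not actual compiler output):

```js
// asm.js-style module: the "| 0" coercions mark integer operations, letting
// an engine validate and compile this subset ahead of time, much as it would
// a bytecode, while it still runs as ordinary JavaScript everywhere else.
function AdderModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;           // parameter declared as int
    b = b | 0;
    return (a + b) | 0;  // result declared as int
  }
  return { add: add };
}

const adder = AdderModule(window, {}, new ArrayBuffer(0x10000));
console.log(adder.add(2, 3)); // 5
```

Nobody writes this by hand in practice; compilers like Emscripten emit it, which is exactly the "textual language serving as a bytecode" role being described.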
Servo doesn't render major websites properly (last I checked). Its UI is a placeholder. Its network/caching layer is a placeholder. There are no updates, configuration, add-ons, or internationalization.
Servo is not meant to be a real browser. That's not a bad thing, but I don't think you can use it as an example of a browser built quickly by a small team.
Chrome's V8 engine was actually written from scratch, unlike WebKit's JavaScriptCore (which descended from Konqueror/KJS, as you say). Google made a big deal about marketing this fact at the time. (1)
And while yes, Mozilla's SpiderMonkey comes from the Netscape days, and Chakra in Edge descends from JScript in IE, plus the aforementioned JavaScriptCore, each of those engines still evolved massively: most went from plain interpreters to multi-tier optimizing JITs over the years. I suspect that no more than the interface bindings remain unchanged from their origins, if even that. ;-)
If the issue is JavaScript, what explains the explosion of JavaScript engines? I agree that JavaScript is a cancer whose growth should be addressed, but implementation complexity isn't a reason.
If these proposed browsers don’t ship with a JS engine [1], do you also hope to have more than one internet around?
[1] Such as V8, Chakra, JavaScriptCore, SpiderMonkey, Rhino, or Nashorn; there is a variety to choose from, plus experimental ones such as Tamarin. They are almost certainly not the critical blocker for developing a browser.