It may not have syncing, but Stainless[1] has a very clever notion of bookmarks that are sessions. Makes it ridiculously easy to e.g. have a bookmark for each account you use on site X, and open them all simultaneously. I absolutely loved it. Unfortunately it's a mostly dead project, though it's open source now.
I was certain that I had seen it before - I think it was in Opera. The UX around it was horrible; it somehow got in my way (I can't remember why). I think the more important problem is how to solve sessions from a UX perspective.
Another problem is that when you are building up all of those tabs you are setting up a lot of context in your mind, returning to that context is going to be jarring or impossible. Maybe a replay feature (to show how you created that session, link-by-link) would be a nice idea: "remind me what I was doing."
That's a valuable and thought-provoking comment that deals with the meat of my comment.
To make it crystal clear:
Let's say you have 7 tabs open. Your browser history may include many more websites than were present in those 7 tabs: websites that you skimmed, rejected and immediately closed during your research, or websites you visited while slacking off. The history also doesn't describe how you reached those 7 tabs - it describes everything you have seen. 2 of them could be search results, 5 of them could be pages opened from the search results, and 2 of those 5 could be navigations done without opening new tabs. There may be 20 or so tabs that you opened that are now closed. It is not at all straightforward to manually derive that information from a simple list of sites that you saw. The replay would consist of watching yourself navigate to that current state (ignoring all the abandoned paths, a.k.a. closed tabs) - screenshots with the links you used highlighted, as an uninspired example.
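A rough sketch of what such a recorder might keep, with made-up names (this isn't any real browser API): each visit becomes a node in a tree rather than an entry in a flat history list, so the replay can walk only the branches that still end in open tabs and skip the abandoned ones.

    // Hypothetical navigation recorder: a tree of visits, not a flat list.
    class NavNode {
      constructor(url, parent = null) {
        this.url = url;
        this.parent = parent;   // the page this one was opened from
        this.children = [];     // pages opened from here
        this.tabOpen = true;    // flipped to false when the tab is closed
      }
    }

    class NavRecorder {
      constructor() { this.roots = []; }

      visit(url, parent = null) {
        const node = new NavNode(url, parent);
        if (parent) parent.children.push(node); else this.roots.push(node);
        return node;
      }

      closeTab(node) { node.tabOpen = false; }

      // The "replay": every path from a root to a still-open tab,
      // ignoring the branches that were abandoned along the way.
      replayPaths() {
        const paths = [];
        const walk = (node, trail) => {
          const here = trail.concat(node.url);
          if (node.tabOpen) paths.push(here);
          node.children.forEach((c) => walk(c, here));
        };
        this.roots.forEach((r) => walk(r, []));
        return paths;
      }
    }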
Live action replay might have been more appropriate.
It sounds like you mistook my comment for flippancy. I think it's a good idea, and it sounded like browser history was a good conceptual starting point.
Indeed I did. Thanks for the feedback nonetheless, I guess my comment did need clarification. This would be really interesting to try out with that ExoBrowser tech - if I ever had the time.
Basically yeah. I imagine there's more to it (localStorage, other forms of persistent data), but that's the idea.
Importantly (to me anyway), you can have multiple tabs to the same site open at once, each with a different session. Tabs opened from one of those tabs will retain that session, so you can e.g. fire up two Google accounts, open a few YouTube tabs from each, and keep your viewing history separate. It's like Chrome profiles on steroids, with way less memory use, and way easier to use.
I don't remember in great detail (the app still exists, probably works; I/we could poke at it if so), but I think it was always on. Making a bookmark included the session data, so by logging in and then making one, you ensured you were always logged in after clicking the bookmark.
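A guess at the shape of such a "session bookmark", with invented names (this isn't Stainless' actual format or API): the bookmark stores a snapshot of the tab's session alongside the URL, and opening it seeds a fresh single-session tab with that state.

    // Invented data shape, illustration only (not Stainless' real format).
    function makeSessionBookmark(tab) {
      return {
        url: tab.url,
        title: tab.title,
        session: {
          cookies: tab.session.cookies.slice(),          // this tab's cookie jar
          localStorage: Object.assign({}, tab.session.localStorage),
        },
      };
    }

    // Opening the bookmark starts a fresh single-session tab seeded with
    // that state, so you land on the page already logged in.
    function openSessionBookmark(browser, bookmark) {
      const tab = browser.newSingleSessionTab();         // hypothetical API
      tab.session.cookies = bookmark.session.cookies.slice();
      tab.session.localStorage = Object.assign({}, bookmark.session.localStorage);
      tab.navigate(bookmark.url);
      return tab;
    }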
Yep, still works. And reminds me why I liked it - small and fast :| Chrome has gotten so slow. Firefox is no better.
So, I'd probably recommend trying it yourself too (really, it's a nice alternative browser; many are horrible, but this one I used happily for quite a while), but you can only really trigger the behavior by starting a "single-session tab", which is cmd-shift-T instead of cmd-T. Single-session tabs are basically the same as a "private" window, but if you save a bookmark, you save the session.
This is all reinforced by having the bookmarks bar (on the left) primarily populated by dragging tab favicons. It feels like you're literally saving the tab for later, and it behaves exactly like that, unlike normal bookmarks which use whatever global session state you currently have. In Stainless, the bookmark will always bring you to the same page. Elsewhere, a bookmark will take you to the page only if your current user can see it, possibly showing an error, making you log out of your current account....
Honestly, this is one of the few genuinely new things I've seen in bookmarks/browsing, and I really grew to like it. It's a far nicer experience.
Isn't this similar to what Firefox does - where the browser UI is written using javascript and XML - which is why it's so easy to write add-ons for it?
There are lots of similarities indeed.
Though, the ExoBrowser API goes deeper than the UI and provides much more leeway to implement stuff directly in JS and outside of C++.
While the project objectives are very nice, I hope it solves one other problem: the "contentEditable" problem.
In the prehistoric days, web pages were static, served as file blobs, one way or both ways (FTP).
Then came server-side templates, and we got dynamic pages.
Then came CGI and Perl, and we got <form> so users could send their contributions to the server.
Then AJAX opened up the possibility of "web apps".
However, all of these efforts left behind a very basic aspect of HYPERTEXT: it is always read-only or partially read-only. The browser is good at displaying hypertext, or manipulating part of it, but not at creating it.
We have WYSIWYG editors, and the famous Markdown (and its alternatives); they lowered the barrier to entry for a "writable" web.
I hope that one day users can write hypertext freely, as easily as making an edit in contentEditable; that user input and interactions can become parsable data; and that a developer's layout/CSS tweaks and JS debugging can flow from the browser's Firebug/DevTools directly back into LESS/CoffeeScript.
Browsers, really, should consider shifting from a consumption tool to an authoring platform: branched, incremental and versioned.
> I hope that one day users can write hypertext freely, as easily as making an edit in contentEditable; that user input and interactions can become parsable data; and that a developer's layout/CSS tweaks and JS debugging can flow from the browser's Firebug/DevTools directly back into LESS/CoffeeScript.
If you only care about the content, then a wiki engine with support for flexible styles should be enough to fulfil your hope.
If instead you want to be able to completely rewrite a page, including its layout and behaviour, then you need to take into account that the page you see is not just "content" but "code + data".
I always wanted to have an "Edit source" button in applications, but then, what does it mean to "edit the source"? Decent web pages are produced by quite deep application stacks composed of megabytes of running code. To show a link to a certain resource you need, in current frameworks, something like a 20-level-deep stack of function calls. At which of these points would your editor work?
If applications were completely client-side you could think of editing most of them in the browser, but still, which client could handle the client-side creation of a Wikipedia page, for example?
Cool project, not what I expected from "Next Generation Web Browser".
I was hoping for a project to build an HTML/CSS/JS suite using OpenCL and OpenGL directly, in a functional style which can be parallelized and make use of immutable data structures in a sensible way. Backed by a built-in object store database.
I do a lot of OpenCL, so if anyone wants to write this browser, let me know. I will join your mailing list.
Sort of. My ideal is a suite of rendering primitives, html/css parsers, DOM constructors, JS virtual machines etc. that are:
* Functional (OpenCL is a good platform for this, you basically have little choice other than few-to-no side effects)
* Immutable (Everything the user does, sends or receives is tracked and stored in a distributed database)
* Detached from OS level concerns insofar as possible. It should be possible to run with a very minimal set of OS services, mostly the drivers and network stack.
Last but not least:
The HTML and CSS portions are rendered into a homoiconic data form within the language, which is all that programmers and users ever interact with. Any JS is wrapped as a library in that language.
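For instance (a hand-rolled encoding just to illustrate the idea, not tied to any particular language or library), the markup could live as ordinary nested data that both the renderer and the programmer operate on directly:

    // One possible homoiconic encoding: the page is plain nested data.
    const page =
      ['html', {},
        ['body', { style: { margin: '0' } },
          ['h1', {}, 'Edit me'],
          ['a', { href: 'https://example.com' }, 'a link']]];

    // Editing the page is just editing the data structure.
    page[2][3][1].href = 'https://example.org';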
Without wall-of-texting, it's hard to go into the advantages this set of tools would provide. The short story is, I'd like to be able to edit the Internet and have the edits stay up (from my perspective) for as long as possible. Also, expose those edits/annotation to people in my 'network'.
Actually a lot of their ideas are similar to emacs. Have a few core pieces written in C and then everything else written in elisp. Even the stackable navigation and WebViews are the equivalent of buffers.
Conkeror already has stackable navigation and can be extended using javascript. In many ways the ExoBrowser is simply taking Conkeror to its logical conclusion.
Very interesting; I'll be curious to see where this goes in the future.
With stacked navigation allowing lots of open tabs, I hope we can also have efficient maintenance of those tabs. I don't need them to keep running javascript in the background. My machine shouldn't sag under the weight of a few hundred open tabs. After they've been inactive for a while, archive their state to disk and replace the view with a screenshot. Pull everything back into ram when I go back to the tab. If I take a second or two to look at the page before I try to interact with it, I won't notice a difference.
Let me whitelist a few exceptions, like gmail.
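A sketch of that archive-and-thaw cycle under made-up assumptions (the `tab` object and its methods are invented, not Chrome's or ExoBrowser's actual API): once a tab has been idle long enough it is serialized to disk and its view is swapped for a screenshot, unless its host is whitelisted; going back to it pulls the state back into RAM.

    const fs = require('fs');
    const path = require('path');

    const ARCHIVE_DIR = '/tmp/tab-archive';      // illustrative location
    const WHITELIST = ['mail.google.com'];       // tabs that keep running
    const IDLE_MS = 30 * 60 * 1000;              // archive after 30 idle minutes

    fs.mkdirSync(ARCHIVE_DIR, { recursive: true });

    // `tab` is a hypothetical object exposing id, url, lastActive,
    // serializeState(), screenshot(), showPlaceholder(), restoreState().
    function maybeArchive(tab) {
      const host = new URL(tab.url).hostname;
      if (WHITELIST.includes(host)) return;
      if (Date.now() - tab.lastActive < IDLE_MS) return;

      const file = path.join(ARCHIVE_DIR, tab.id + '.json');
      fs.writeFileSync(file, JSON.stringify(tab.serializeState()));
      tab.showPlaceholder(tab.screenshot());     // swap the live view for an image
      tab.archivedAt = file;
    }

    function thaw(tab) {
      if (!tab.archivedAt) return;
      tab.restoreState(JSON.parse(fs.readFileSync(tab.archivedAt, 'utf8')));
      tab.archivedAt = null;
    }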
Built-in buttons to save/bookmark all open tabs would be nice too. And when saving a page, if there's a page title, use that as the filename.
Optional feature: insert a meta tag into the HTML that gives the url of the page, so when I mention it online I can dig up a link to it, without having to bookmark it in addition to saving.
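A tiny sketch of that option, assuming the saved page is plain HTML text (the meta name here is made up):

    // Stamp a saved page with the URL it came from, so the link can be
    // recovered later without a separate bookmark. A real version should
    // escape the URL before using it in an attribute.
    function stampSourceUrl(html, url) {
      const tag = '<meta name="saved-from" content="' + url + '">';
      return html.includes('<head>')
        ? html.replace('<head>', '<head>\n  ' + tag)
        : tag + '\n' + html;
    }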
Also I'd like a complete implementation of Vim and a pony.
Tabs that are not visible are generally "frozen" (timers limited) by Chrome. I wonder if it's a Chrome or Content API feature. In any case, archiving tabs to disk makes a lot of sense!
A web-based implementation of Vim is probably the hardest!
Very interesting, but I can't quite wrap my head around the idea. Aren't the protocols and the execution engine the platform on which a browser is built? I find it very interesting to think about what the minimal executable code is and how we ship it over the wire. Chrome was originally an OS as well as a browser (that's how I understand the history).
I'm not sure I understand what you mean by "Aren't the protocols and the execution engine the platform on which a browser is built?". But I'll try to elaborate.
Chrome is built on the Content API. So it's a whole lot of C++ and GUI code on top of an API that lets you create a view and display web content in it. The ExoBrowser wraps this Content API (part of it for now) into a JS API so that you can do all the GUI and specific browser code in Javascript instead of C++.
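Roughly, the split looks like this (the names below are invented for illustration; this is not the actual ExoBrowser API): the native side exposes a handful of primitives for creating and controlling web views, and everything that makes it feel like a browser (tabs, URL bar, bookmarks) is ordinary JavaScript built on top.

    // Invented bindings, illustration only (not the real ExoBrowser API).
    const native = require('exo-native-bindings');   // hypothetical C++ layer

    const tabs = [];

    function openTab(url) {
      const frame = native.createFrame();            // C++ creates the web view
      frame.on('did-navigate', (u) => updateUrlBar(u));
      frame.load(url);
      tabs.push(frame);
      renderTabStrip(tabs);                          // all GUI logic stays in JS
      return frame;
    }

    function updateUrlBar(url) { /* JS-side UI code */ }
    function renderTabStrip(list) { /* JS-side UI code */ }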
Additionally, Chrome is not really an OS; Chrome OS is an "OS" (more a window system on top of Linux) based on Aura (view management, kind of like GTK) and Chrome to run the apps.
Looks like a clone of a project I did in xul a while back. http://nochrome.tp23.org/
Got stacked tabs. And comes as expanded hackable js. Good to see movement in the browser space.
My only idea is a huge pinterest-style layout, with horizontal rules, the ability to give those horizontal breaks big section headings, and the ability to drag and drop bookmarks around to different sections.
Every now and then the bookmarks would connect and bring back a favicon and a screenshot thumbnail.
We are working on a solution for this. It is a horizontal-layout application where you can create boards (like sections) and drag the tabs to bookmark them directly from your synced browser. You can check it out here: http://listboard.it/
I think Exo is more focused on the browsing experience than the rendering technology. It's based on the Chrome Content API, which is based on WebKit/Blink... and we have no plan to go deeper than that for now.
Except the cookies, which seem interesting to sync. Additionally, we'd like to experiment with pushing your session to any instance of the browser (running on a machine you don't control).
Why not go all the way and move the rendering engine out of the privileged base of the system? Why is all this HTML and Javascript garbage in "kernel land" anyway? Why not make your browser kernel concerned solely with providing asm.js or PNaCL plus WebGL and some other primitive services? Then everything else can be preloaded libraries. And what on Earth does Node.js have to do with any of this? Why not just create some Javascript bindings to WebKit and V8, then.. write the browser in Javascript? The browser needs less of this kind of ridiculous bloat, not more.
> Why not go all the way and move the rendering engine out of the privileged base of the system? Why is all this HTML and Javascript garbage in "kernel land" anyway?
There's a little-known emotional factor in software development at work here, apparently not clearly understood, indeed it might as well be a secret. And that is that kernel development is high-status, and user-space development is low-status. Many software developers want desperately to add to their resumes the fact that they contributed code to the Linux kernel. This is why so much user-space code has made its way into the kernel.
Someone powerful should go through all the kernel-space code and order the deportation to user-space of all the stuff that doesn't belong, that has slipped across the border over the years like phony political refugees. This probably won't happen, but it's uncontroversial that much of the kernel code has no right to be there, and much of it is written so badly that it's dangerous for it to be in kernel space.
You can't make a JavaScript engine run anywhere near fast enough in PNaCl, so you need at least all of V8 in your "kernel". You could run WebKit on top of PNaCl with some magic V8 bindings, but I don't think this has been done before, and considering how big WebKit and NaCl are, it would probably be a lot of work. Useful work to do, but a lot of work...
(And of course, Chromium already supports a process sandbox, so while this could theoretically be a really cool win for architecture, having less platform specific code to maintain, it wouldn't necessarily be much of a security win - the analogy of the kernel only applies so far.)
I'm aware of all this. I'm not saying it would be easy - and the belief that high speed Javascript is necessary is one reason we've been held back for so long. Nobody should be writing a browser in Javascript (when I mentioned this possibility I was simply humoring the poster's ideas). An "Exobrowser" (or perhaps "Microbrowser") is what we should have by now, but the linked project is not it. NaCl and asm.js are only necessary because operating systems have failed to successfully implement process separation (more crap, monolithic design at work), so now it's being reinvented in user land with the resulting performance overheads. Javascript doesn't need to be fast! The whole idea of high performance programming done in Javascript is stupid, especially if it means saying "no" to a better architecture designed around security and speed for better systems languages. Even Mozilla's penny has dropped on low-level code execution - hence asm.js. If operating systems had been done right in the first place then you could have the following arrangement:
- Each browser tab is a separate OS process w/o any access to system calls except for calling browser services
- Processes from the same domain can talk to each other
- Browser comes preloaded with some preferred, but optional portability layers for the processes
- Everything else is libraries, with one domain being able to provide services to another. So mozilla.org could provide its rendering engine either as a library or as a background process (to reduce memory overheads).
And so on. This way there's no more waiting 10 years for Mozilla to implement whatever it thinks you need, with their "640K ought to be enough" attitude. Their rendering engine has to compete with others. The most popular engines are most likely to already be in cache when someone visits your site, so there is room for lots of vigorous competition. This is all so painfully BASIC, but it will likely be decades before people get it right, if they ever get it right.
Chrome is going in the right direction, the poster's linked project is not.
On NaCl I partially agree and partially disagree. While it's true that the sandboxing is only necessary due to insufficiently flexible kernels, imo the most interesting part of NaCl and asm.js is their use of a portable bytecode that compiles into native code. Portability really is necessary - CPU architectures don't change that often, but if people started distributing websites as native code a decade ago, none of them would have envisioned that a large portion of web browsing is now done on ARM based devices. Yet you cannot make a fast portable JIT. You say that JavaScript doesn't need to be fast, but it's really nice to have a high-level, dynamic language that still runs fast - in fact, it was compelling enough to be one factor in the success of JavaScript on the server, despite the language's weaknesses.
However, there's no reason JavaScript (or some suitably compatible dynamic-language bytecode) JIT couldn't be provided as a fundamental API in addition to the static compiler. Yeah, it doesn't feel like a clean architecture when you want to use Python or Lisp and it almost translates neatly to that bytecode but with little runtime differences that end up adding a lot of overhead... but it's better than nothing.
I think that your hypothetical arrangement would be very cool. I'm not sure that it would actually be better than what we have - for example, writing a screen reader would likely be a nightmare if some random webpage might be using a browser library that didn't support it; good luck implementing anything like user scripts/custom CSS, scrapers, Readability, magic text reflow for iPhones, smooth zooming, etc. Good luck doing something like the transition to hardware accelerated rendering browsers did a few years ago (sure, you could only support it for new sites, but as is I get smooth scrolling for all sites). And since different engines would now be very fundamentally different rather than the usually relatively thin layers over HTML that are currently popular, developers would have to spend more time learning different APIs. If some engine stopped being maintained, then it would be very difficult to retrofit websites that use it to support the newest features. Et cetera. Meanwhile, these days browsers move pretty damn fast, lessening the advantage of non-standardized development - and many new APIs are hooks to the OS anyway, not things that UI layers could implement on their own.
But it would be cool. I don't mean to be too negative: there would be a lot of advantages, and it would be interesting to try out.
I suppose it might happen. PNaCl and asm.js are soon going to be supported in two of the most popular browsers; alternatively, if JS engines get good enough that specific support for asm.js isn't required to achieve performance for low-level code (https://bugzilla.mozilla.org/show_bug.cgi?id=860923), with the competitiveness all major browsers already have on JS speed, the latter will be "supported" everywhere on short notice. It might not be that long until the first serious attempt to make an alternative UI stack for browsers...
NaCl is not a portability layer. It is a security layer. PNaCl is a portability layer, built inside NaCl more-or-less in the manner I just described, but with all the overheads and limitations of NaCl (which are real). So NaCl is in total agreement with me. asm.js is basically a joke, rolling portability and security into one layer, but when you're dealing with the web you take what you can get sometimes.
>Yet you cannot make a fast portable JIT.
So? What difference does sticking this in the browser as a privileged component make? There's no reason google.com can't provide a DOZEN compilations of V8 in the setup I described. The difference is I can write my own portability layer. Maybe some authority can control which portability layers are valid to prevent too much native code. Mozilla is the perfect candidate with their police-the-web attitude.
>However, there's no reason JavaScript (or some suitably compatible dynamic-language bytecode) JIT couldn't be provided as a fundamental API in addition to the static compiler.
Did you even read my list of points? You don't need this! You just give the user access to properly sandboxed native code (not NaCl, which has limitations and overhead) and provide portability layers, plus the ability to add new portability layers. There is NO reason Javascript needs to be privileged in the manner you're suggesting.
>for example, writing a screen reader would likely be a nightmare if some random webpage might be using a browser library that didn't support it
How is it any different if people start building all their stuff with WebGL? What about when people use tonnes of images without alt tags? Accessibility never works automatically! And it can be provided properly as a browser service, which different renderers hook into. Hell, it could probably even be in userland. Mozilla could even provide disincentives to non-compliant renderers. They love playing the policeman, so why not do it properly instead of doing it by holding technology back as much as possible?
>good luck implementing anything like user scripts/custom CSS, scrapers, Readability, magic text reflow for iPhones, smooth zooming, etc.
Firstly, HTML would still most likely be the standard for most web pages. So there's no need for "luck"; it would be done the same way it always has been. You're trying to set up an opposition between my ideas and HTML. My ideas are opposed to HTML, DOM and Javascript as privileged entities. They would have to compete with other markups, document models and languages, just as C++ and C# dominate on the desktop but have to compete with more specialized languages - to everyone's benefit. And aside from the most basic, unstyled HTML, it has always taken some forethought on the part of the webpage author to get things like accessibility and compatibility with different window sizes to work. I can tell you this because I have terrible eyesight and view many pages zoomed a long way in.
>Good luck doing something like the transition to hardware accelerated rendering browsers did a few years ago (sure, you could only support it for new sites, but as is I get smooth scrolling for all sites).
Why on Earth would this be a problem? Even though the renderer is in user mode it's not baked in statically, or even necessarily linked in at all. It could be spoken to via message passing. First ask the system to give you a shared rectangle inside your tab, then send the handle to mozilla.org along with some web content, saying "please draw this". Similarly for input events etc. And of course, you can have preloads that do all this for you so on the server-side you just send down the HTML in the usual way. These kind of arguments are always such rubbish, just like when Mozilla says binary codes can't evolve as easily as source code. What does that even mean? Source codes ARE binary codes!
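A sketch of that hand-off with invented primitives (nothing here is a real OS or browser API): the page process owns a shared surface inside its tab and asks a renderer, addressed by domain, to draw into it via message passing.

    // Invented primitives, illustration only: a tab delegating rendering to an
    // engine provided by another domain, via message passing.
    const sys = require('hypothetical-browser-kernel');

    async function showContent(tab, html) {
      // 1. Ask the system for a shared drawing surface inside this tab.
      const surface = await sys.createSharedSurface(tab, { width: 800, height: 600 });

      // 2. Connect to whichever rendering engine this page prefers.
      const renderer = await sys.connect('mozilla.org/gecko-renderer');

      // 3. Hand over the surface handle and the content: "please draw this".
      await renderer.send({ op: 'render', surface: surface.handle, content: html });

      // Input events are forwarded the same way.
      tab.on('input', (ev) => renderer.send({ op: 'input', event: ev }));
    }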
>If some engine stopped being maintained, then it would be very difficult to retrofit websites that use it to support the newest features.
Which is why most people would use HTML, and people who are trying to do things that HTML is totally unsuitable for would not, paying the resulting costs.
>Meanwhile, these days browsers move pretty damn fast, lessening the advantage of non-standardized development - and many new APIs are hooks to the OS anyway, not things that UI layers could implement on their own.
The browser is a technological slug. V8, Flash, NaCl and Unity are the only reason we have had any real advancement, and it's an advancement back to decades ago. Web developers just have extremely low expectations and are always trying to resist the approach of superior technologies. I can remember telling people years ago that sockets were needed (there's this wonderful thing called interrupt driven programming you see) and got much the same sort of criticisms you outlined above from all the "web experts". Of course it has since been implemented.
>many new APIs are hooks to the OS anyway, not things that UI layers could implement on their own.
I already said this! Perhaps you missed the point of the post, which is that the point of such primitives is to implement applications (e.g. UI layers). It is a post against the monoliths.
>JS speed
I'm sorry: "JS speed" doesn't exist on current hardware. The reason asm.js was so fast with minimal additions to the optimizer is that the JS optimizers all work best on statically typed code (in other words, not Javascript), which is of zero surprise to anyone who knows anything about compilers or optimization. Essentially, the people working on "Javascript" engines have really been writing optimizers for a small subset of the language that discards everything dynamic. Whether this was intentional or not is irrelevant; that is what they have done. That's how bad Javasscript is for this task, and how GOOD the old, statically typed ideas are: so good they couldn't help but do it, even when they were trying to optimize their "dynamic" language.
Agreed on direct WebKit bindings in JS... only, if you want to handle networking and file storage in Javascript, you'll end up rewriting node.js, which is exactly that: bindings for libuv... So node.js is not chosen because it's cool or anything; it's just exactly what's needed.
Not sure I see what you mean by HTML and Javascript garbage in "kernel land"?
HTML and Javascript are a horrible basis for a computing platform, which is why in the end they have resorted to providing a tiny, statically typed subset of Javascript plus some graphics primitives. And now the primitives exist, comically, side-by-side in the privileged base of the system with a bunch of application-level stuff (HTML etc) which could be built out of the primitives themselves. Your diagram doesn't indicate that you're just using libuv bindings for portability - it includes a local server.
[1]: http://www.stainlessapp.com/