The most useful pattern I know of for offline web apps is the command queue.
Basically, the rendered state of the client is the acknowledged server state plus the client-side command queue.
User actions don't make a server request and then update the UI. Instead they directly append to the local command queue, which updates the UI state, and right away the client begins communicating with the server to make the local change real.
While the client's command queue is nonempty, the UI shows a spinner or equivalent. If commands cannot be realized because of network failure, the UI remains functional but with a clear warning that changes are waiting to be synchronized.
(The connectivity-status APIs are useful for making sure that the command queue resumes syncing when the user's internet comes back.)
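To make that concrete, here's a minimal sketch of the idea in TypeScript (the names Command, AppState, dispatch are illustrative, not from any library):

```typescript
// Minimal sketch of the command queue idea; names are illustrative only.
interface AppState { items: Record<string, unknown>; }

interface Command {
  apply: (state: AppState) => AppState;  // the command's local effect on app state
  send: () => Promise<void>;             // how to make the change real on the server
}

let serverState: AppState = { items: {} };  // last acknowledged server state
const queue: Command[] = [];                // local commands not yet acknowledged

// The UI always renders: acknowledged server state + pending command queue.
function renderedState(): AppState {
  return queue.reduce((state, cmd) => cmd.apply(state), serverState);
}

// A user action appends to the queue and immediately kicks off syncing.
function dispatch(cmd: Command, startSync: () => void): void {
  queue.push(cmd);
  startSync();  // elsewhere: send queued commands, pop them as the server acknowledges
}
```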
You still have to handle cases where the server state has been updated (possibly via another medium, event, ...), so that when the user's internet comes back it's not just a matter of pushing the client's commands anymore. Instead you have to merge changes (either backend- or frontend-side) before fetching the new state. For example, I try to purchase an item on my desktop, but can't because I lost connectivity. So I proceed to buy it on my mobile. When the net comes back, did I mean to buy this item twice or just once? In this case it's easy to find a workable solution (cancel the second order command), but you get the idea: it's not trivial.
CQRS and ES (event sourcing) are wonderful tools in such situations. Past that point, your web app is far away from the little CRUD mashup it was at the beginning.
Conflicts are actually not all that difficult to solve. We had to solve this exact problem with a time entry solution that needed to be mostly available offline.
Taking most of our inspiration from Git, it was simple: define the atomic unit of conflict, define which conflicts cannot be automatically resolved, and simply help the end user understand and resolve those.
95% of cases were fairly easy to merge without interaction, once we had defined "conflict." And the remaining 5% just required a little extra user experience design to help surface the appropriate resolution.
The problem I have with this approach is that you don't encode the intention with each change. You just encode the data itself. This makes it a bit of a hack, unfortunately. You can end up in conflict-resolution cases which you cannot always predict beforehand.
Yes, from this viewpoint, Git (and every similar version control system) is a hack. But it works well in practice because humans are running it, not machines.
The intention was "I want to change this guy's hours from 5 to 8." I don't know what else you might mean by "encode the intention."
Unless you're referring to something I'm utterly overlooking, I don't see how you'd get around the invalidated assumptions that came from being offline and having outdated information. Intent doesn't matter if the intentions were invalidated.
When assumptions change, you really have a management problem on your hands. There's more information (sometimes very complicated information, like "the back office changed his hours to at least 8 because of union contracts, so now he has 11") that must be communicated and decided upon, sometimes by more than one person (e.g. foreman and superintendent) to figure out the appropriate resolution.
In version control systems, this is the same, and unavoidable for asynchronous workflows. It's basically optimistic locking, really. You make changes hoping that nobody else has altered the data, then check to see if any of your assumptions (i.e. nobody else is making changes) have held. If they haven't held, then you need to recheck your assumptions and resolve the conflict; there's no way around that.
> The intention was "I want to change this guy's hours from 5 to 8." I don't know what else you might mean by "encode the intention."
Suppose you change foo from 9 to 10. Was that because you now wanted foo to be 10 specifically, or because you wanted to increment foo and it happened to be 9 before so it becomes 10 now?
In isolation, these have the same effect. However, one is absolute and the other is relative, so if two of you happen to make that same change at the same time, your intentions matter very much to how your respective changes should be combined.
For example, if your code knows that both changes were intended to set new absolute values, it can automatically determine that the combined effect should also be 10. Similarly, if your code knows that both changes were intended to increment foo, it can automatically combine those effects to get 11. But if all it knows is that two people changed foo in concurrent updates that now need to be merged, that probably results in a conflict that requires manual resolution by a user.
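As a sketch of what "knowing the intent" buys you (hypothetical types, not from any particular library):

```typescript
// Two ways of encoding the intent behind "change foo from 9 to 10".
type Change =
  | { kind: 'set'; value: number }      // "I want foo to be 10"
  | { kind: 'increment'; by: number };  // "I want foo to be one more than it was"

// Merging two concurrent changes against the same base value:
function merge(base: number, a: Change, b: Change): number | 'conflict' {
  if (a.kind === 'set' && b.kind === 'set') {
    // Both wanted an absolute value: identical targets merge, different ones conflict.
    return a.value === b.value ? a.value : 'conflict';
  }
  if (a.kind === 'increment' && b.kind === 'increment') {
    // Both relative: the effects compose, so 9 becomes 11.
    return base + a.by + b.by;
  }
  // Mixed or unknown intent: surface the conflict to a human.
  return 'conflict';
}
```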
Right, but the assumption of original state is still invalid. An increment from 9 to 10 might be invalid if, say, the guy has already worked 40 hours and by union contract cannot work any more hours. If it had been corrected to 8 and my intent was either "set to 10" or "increment by one," the values of 9 or 10 are both invalid. So, unless we want to attempt to account for all possible rules (and vastly overspend on the project) the best choice when assumptions are invalidated, regardless of intent, is to surface the conflict.
This is very much just a distributed database where we've chosen availability in the presence of network partitions rather than consistency. The end result is that conflicts will inevitably happen, and aside from a hugely complex set of rules, the cheapest resolution is still human intervention.
Intention has no 'from' clause. You express the intent as "I want this guy's hours to be 8."
It doesn't matter if the UI shows 5, and another mechanism has changed the value to 9 in the background, your intent to set the value to 8 is unaffected.
If, however, you assume the "from" clause and do a +3 instead of =8, you get a new invalid state 12.
Encoding intent implies declarative statements, as opposed to imperative statements.
I've once built a system (an outliner app) that actually encodes intent, and uses a command queue ("transaction queue").
Intent could be for example to insert this new item as a first child of that other one, or to move these items to a position right before some other item, or to set font size on this item to 4.
The data structure was a tree of item objects linked via next/prev/children/parent. Automatic conflict resolution works wonderfully in this case.
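For a rough sense of what intent-carrying commands can look like (invented names, just for illustration):

```typescript
// Hypothetical intent-carrying commands for an outliner's item tree.
type ItemId = string;

type OutlinerCommand =
  | { type: 'insertFirstChild'; parent: ItemId; newItem: ItemId; text: string }
  | { type: 'moveBefore'; items: ItemId[]; before: ItemId }
  | { type: 'setFontSize'; item: ItemId; size: number };

// Each command names its intent relative to other items rather than as
// absolute positions, which is what lets concurrent edits to different
// parts of the tree merge automatically.
```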
JSON-Patch [1] can be helpful here. You can maintain a list of changes that need to be applied, such that they only touch the parts of the document that need to change (other changes to other parts can be interleaved). If necessary, you can include tests to assert that some value in the document is what you expect it should be, and the patch can be rejected if that test fails.
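For illustration, a patch guarding the "hours" example above might look like this (the path and values are made up):

```typescript
// RFC 6902 JSON-Patch: the "test" op guards the change, so the whole patch
// is rejected if the hours were modified by someone else in the meantime.
const patch = [
  { op: 'test', path: '/workers/42/hours', value: 5 },
  { op: 'replace', path: '/workers/42/hours', value: 8 },
];
```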
Having recently built an offline first app (mobile app I'll admit), we didn't use a command queue, and I regret it deeply now. I advise everyone who's reading this to go for a command queue. Saves a lot of time debugging data sync.
Our biggest challenge was to sync data with relationships, especially data with circular relationships. We couldn't come up with a generic way to sync circular relationships, so we ended up building a very special purpose buffer on both client and server side.
What was the challenge with circular relationships? I work on an offline-first system and all inter-object relationships are managed by GUIDs, so cyclical references don't present any problems - objects are sent and received as one big list keyed by guids, not as a tree.
"objects are sent and received as one big list keyed by guids, not as a tree"
That's an interesting pattern I've not heard of before - any links you could recommend for more details on it? It sounds really useful for moderate sized data sets.
I remember coming across it fairly heavily in react/redux stuff, mostly because it makes dealing with redux's stores a lot simpler if you avoid nested structures like that.
There's a library called normalizr[1], whose purpose is to take nested API responses and turn them into flat structures. Have an article[2] about it.
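Roughly, the difference looks like this (made-up data; not normalizr's exact output shape):

```typescript
// A nested API response (awkward once relationships become circular):
const nested = {
  id: 'post-1',
  author: { id: 'user-7', name: 'Ada' },
  comments: [{ id: 'comment-3', author: { id: 'user-7', name: 'Ada' } }],
};

// The same data normalized: one flat map per type, relationships stored as ids.
const normalized = {
  posts:    { 'post-1': { id: 'post-1', author: 'user-7', comments: ['comment-3'] } },
  users:    { 'user-7': { id: 'user-7', name: 'Ada' } },
  comments: { 'comment-3': { id: 'comment-3', author: 'user-7' } },
};
```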
I'm a big fan of command queues, but if there is a chance of a server-side failure that can't be realized immediately on the client (e.g. an edit conflict with a different user editing a same piece of information), then I find that surfacing those errors in a meaningful way can be difficult and frustrating for the user.
By the time the client has synced with the server, the client could be doing something completely unrelated in an entirely different part of the app. Explaining to them that "the thing you were editing two hours ago has an issue, go here to resolve it" can be tricky.
Treating changes more like emails/outgoing order forms might be useful here.
You could popup a /failure/ message and leave a little indicator icon that takes them to the queue and lets them see the items that failed. If retry is an option they can see it, if retry is not an option they can be told why.
Yeah, it can get tricky, but it's a tradeoff between working nicely in the majority of cases and being accurate in the corner cases.
If you make sure to show the user a clear warning that they're working offline, then they might be more understanding if their changes are rejected two hours later.
What about in cases where the commands may fail because of the actions of other users? For example, say you had a virtual market in a game and someone did a "buy from John" command while offline. They then proceed to use this purchase to beat future levels. However, once the user reconnects online, it turns out that John already sold to somebody else, and thus their purchase and everything after it is invalidated.
In your command queue pattern, what sort of ways do you handle this? Have certain "chokepoints" where the user must be online to proceed? What if you have an app where that sort of chokepoint seems to occur too frequently to make the queue useful?
Edit: I think this is similar to Fiahil's sibling comment, also relevant points made there. Thanks for the quality thoughts everyone!
The real issue is that it's important to synchronize with the server at that point; otherwise something could be invalid.
It may not be possible to make every app work while offline. You'll probably want to potentially disable certain features while offline, such as store purchases (or, at least, queue them up, but don't show them as having been successfully purchased)
That wasn't a big problem in our particular application, so in case of failure we would just report a potentially unsatisfactory error message along the lines of "Your changes could not be saved. Please try again." (with a list of the discarded commands' descriptions).
In a situation where this kind of thing was more important, I would think about how to let the user decide how to reconcile their changes. It could be that a choice of discarding or retrying would suffice, or something more complex.
The "chokepoint" notion is also useful, and in fact our app did have a distinction between potentially offline actions and necessarily synchronous actions, but our synchronous actions were mostly queries like searches.
Unfortunately, the "command queue" abstraction does not work very well with access control. Imagine that one day the requirements w.r.t. security change. What do you do when some parts of the state may not be viewed by all users? And what if that logic depends on the state itself?
I see management and security on different planes requiring parallel priority paths. Activity on these channels can be intrinsically disruptive and may require clearing queues elsewhere.
Unfortunately not, but if you think of it as using the command pattern in a queue-ish way to get a similar behavior as the command queue of "The Sims", you have the basic architecture, and from there it's just a matter of coding.
The thing about React-like frameworks that makes it very nice is that you can keep the command queue in a separate place and have a root rendering function like this:
1. Set the view state to a copy of the actual state
2. Update the view state according to each queued command in turn
3. Render the view state including a status bubble showing that some changes are not saved yet
And separately from the rendering, you have a worker that tries (and retries) to perform the queued commands. When a command is successfully performed, it's removed from the queue and its effect on the state is saved in the actual state.
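A rough sketch of that root render, assuming Command/AppState shapes along the lines discussed above and some framework render call:

```typescript
interface AppState { [key: string]: unknown; }
interface Command { apply: (state: AppState) => AppState; }

// Rough sketch of the root render described above.
function rootRender(actualState: AppState, queue: Command[]): void {
  // 1. The view state starts as the acknowledged ("actual") state,
  // 2. then each queued command's local effect is applied in turn.
  const viewState = queue.reduce((state, cmd) => cmd.apply(state), actualState);
  // 3. Render the view state, with a status bubble while changes are unsaved.
  render(viewState, { unsaved: queue.length > 0 });
}

// Stand-in for whatever your framework's render entry point is.
declare function render(state: AppState, status: { unsaved: boolean }): void;
```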
Since I can't find anything on Google when searching for "react command queue" it would be cool to write a blog post with a simple example, but I don't know when I'd have time, so I encourage anyone who's implemented a similar thing to go ahead.
That feedback that the commands are queued or can't be processed is key. I encountered an issue on an app I manage at work where the UI showed that a certain state-changing action had taken place before it was resolved on the server. Needless to say, it caused a variety of issues as users thought the system had processed a significant action (hiring a candidate) when it hadn't.
Yes, this. I struggled with understanding how I'd deal with a complex offline multi-user collaborative web app until I started looking at it in terms of a Kafka queue. Gives you everything, including undo/history.
I implemented such a solution in a React app and I really appreciated the ability to apply pending state changes without mutating the state itself.
I don't know of any open source libraries for the command queue itself. If your state changing commands go through some kind of layer that you control then this stuff is easier. When I implemented it, I first refactored all the commands to go through the same code path, which I could then modify to implement the queue.
The way I've done it is to just have an array of commands and a simple queue thing that keeps retrying its head element until success, triggered by connectivity change or manual user retry.
(This was also useful when our backend had random issues causing 500s sometimes.)
A command has both an AJAX request and a state updating function. It's really easy with a React-like framework because you can just apply the command queue's state changes as part of the main view render, without actually modifying the main state.
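Something like this, roughly (illustrative shapes, not a library API):

```typescript
interface AppState { [key: string]: unknown; }

interface QueuedCommand {
  request: () => Promise<void>;            // the AJAX call
  updateState: (s: AppState) => AppState;  // the local state change
}

const queue: QueuedCommand[] = [];
let actualState: AppState = {};

// Keep retrying the head element until it succeeds, then commit its state
// change to the actual state and move on. Triggered again on connectivity
// change or manual user retry.
async function drainQueue(): Promise<void> {
  while (queue.length > 0) {
    const head = queue[0];
    try {
      await head.request();
      actualState = head.updateState(actualState);
      queue.shift();
    } catch {
      break;  // leave the head in place for the next retry
    }
  }
}
```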
Yeah, look into using Promise.all. You're probably looking at a good few hundred lines of code reduction. What you're describing as a "command" (request / state update) is just a promise with a map function applied. Your "simple queue thing" can just be an array that you fire Promise.all at.
If a promise in the array is rejected, Promise.all (which returns a promise) rejects with (supposedly) the first rejected promise, but that's extremely hard to predict if you have two promises which could reject.
Serializing a promise to localStorage; I get what you're trying to do, you want to have a worker pick up exactly where it left off when you left the app. This is where a service worker would help you. I suppose you could write some kind of durable mailbox a la Akka, only running in the browser.
It makes a lot of sense in an event sourcing context because the code used for applying events can be the same in the optimistic case and in the real updates.
In our case, we weren't using an event sourcing architecture, but this client side pattern can be used anyway.
There are a few more patterns as well if you use ServiceWorkers. Jake Archibald has a great "offline cookbook" article that lays out other common use cases and strategies.
I'm not sure I get what you mean by "always connected". Because speaking about how stuff should be written, I would expect a developer never to go further in their assumptions than #3. That is, I would hope my trading terminal still works just fine after the whole office goes offline for 10 seconds (or more, doesn't really matter). Actually, I would care quite a bit more about my trading terminal handling such cases than, say, a messaging client.
There are different types of trading. The last few I did involved phoning the bloke at iDealing so he can call the market makers and get back to me a few hours later.
Thanks, that's a nice breakdown. I suppose some apps will be combinations, e.g. Wolfram Alpha can do simple sums entirely offline, while the more complex stuff is entirely online.
Yes. Yet some applications even have different connectivity models for different modules. Reality, as usual, is not that black and white. A truism, of course, but still.
Wholly agreed. If going offline first fits the use case of your app, then go for it. But there are so many "X-first" approaches (e.g. mobile first) that you can't (and shouldn't) over-engineer for them from the start.
Wow, cool to see this resurface again! I wrote this ~4 years ago and the web has come a long way in that time. New features such as service workers are definitely making offline a lot easier.
On the subject of progressive enhancement, I'm a huge advocate and believe it is in general the way to do content sites. As has been pointed out above though, some use cases do require a different approach. For context, this post was written after working on a number of HTML5 apps that were wrapped in Cordova/PhoneGap.
Offline is not a mere feature you can bolt on to existing architectures because you are really building a distributed system. Architectures that work well for distributed systems and especially p2p architectures thrive in this environment. Elsewhere in these comments people are discussing command queues, but this idea presumes that servers are somewhat reliable and that all operations need to go through a centralized point of control and failure. Instead, you can take this idea further and implement a kappa architecture (sometimes also called "event sourcing") where you maintain a log locally on every client which is the source of truth, not a server. When a network connection is available, you can replicate the log to a server or directly to other clients. You can build indexes (materialized views) that sit on top of this log to answer queries more quickly than reading the entire log out every time. You can also blow away these indexes and rebuild them from the log whenever your requirements change (migrations).
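As a sketch of that shape (not any particular library): the local append-only log is the source of truth, and indexes are derived, disposable views over it.

```typescript
interface LogEntry { seq: number; type: string; payload: unknown; }

const log: LogEntry[] = [];              // persisted locally, e.g. in IndexedDB
let index = new Map<string, unknown>();  // a materialized view: derived, disposable

// Appending is the only write; the view is updated incrementally.
function append(type: string, payload: unknown): void {
  const entry = { seq: log.length, type, payload };
  log.push(entry);
  applyToIndex(index, entry);
}

// When requirements change, blow the view away and rebuild it from the log.
function rebuildIndex(): void {
  index = new Map();
  for (const entry of log) applyToIndex(index, entry);
}

// Stand-in for whatever projection your queries need.
declare function applyToIndex(idx: Map<string, unknown>, entry: LogEntry): void;
```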
Unfortunately the web is missing a few pieces that would make it a very good platform for fully p2p, distributed apps. Service workers are a good start, but they have a 24-hour upper cap on max-age of the service worker itself, so users can't trust on first use (TOFU) and then be more secure against certain kinds of active targeting. The suborigin specification and iframe sandboxes for distributing apps offline and p2p would be much more useful for offline sandboxes if they didn't require that a server send an HTTP header. These will become much more important as the Web Bluetooth API matures, which can allow distributing user data and application updates in a totally offline environment.
Even without being fully offline, it's very odd that when an automatic update to android or windows comes down the pipe, people in remote areas download the exact same bytes from servers in America over and over again, all over a thin pipe. They could fetch that data from each other and save a lot of money on bandwidth and data caps.
Last, the Safari team needs to seriously get to work on Service Workers. We will see web apps grow by leaps and bounds once the service worker spec is opened up to iPhone users.
These are basically the same rules for any distributed application, like something using microservices. An app is just the edge node of a distributed system.
In any distributed system, the biggest cost is moving data between nodes, and therefore the biggest failure case is when data is moving slowly or not at all. It's a case you should always be prepared for.
If you write your app in a way that assumes the network is bad, which you should always do, whether it's an app or two microservices, then you'll have a more robust system.
> These are basically the same rules for any distributed application
And we've been doing those for like 50 years now. So why is this still so hard? Because new developers are, for all intents and purposes, born yesterday.
Isn't it simpler though to just use static HTML + optional Javascript (like we used to in the 2000's eg. progressive enhancement years)? I mean why use M-V-whatever for content-driven sites at all?
Maybe I'm misunderstanding, but it sounds like you're talking about something entirely different?
Some sites just wouldn't work well with HTML+optional JS. Google maps (IMHO) would not work well that way, so using an offline-oriented model would make better sense.
That being said, if you're a news site, or a blog, yeah, a simple static page is probably a better solution.
Of course true webapps can benefit from an MV* approach (like enterprise-type LOB apps). But gmaps IMHO isn't a good example, as it isn't MV*; rather, it fetches prerendered bitmap or vector graphics from the server.
The article is older but the advice is sound: only reach out to the server when you need to and ensure your client-side state doesn't break when you can't.
It's a little strange that they avoided naming any JS MV* frameworks even though some were out by then -- Backbone, Knockout, Ember, Angular, if I recall. But this article makes points that future JS MV* development went on to consider best practice in the years that followed.
Depends. If you are content-driven and can generate and cache static HTML, and that's all you need, I say hell yes: send that only, give it an ETag, call it a day. ezpz. I gave the author the benefit of the doubt though and imagined an application that utilizes a decent amount of data that changes fairly frequently. In this case I can see the want for reducing the actual data sent over the wire and moving to a microservice infrastructure for the data with minimal logic server-side.
I didn't mean to refetch HTML partials. You can still re-fetch JSON and render it on the browser eg. what jquery web apps did, and mostly do still (even though less prominently featured on HN).
I would throw in some guidance that says "don't trust anything the client side sends to you". Developers used to server-side MVC could expose themselves to client-side manipulation that opens up some security issues.
I remember some early ecom cart implementations where you could "name your own price" as an unintended feature.
I sense a trend of trying to cram everything good about native apps onto the web. Do people who do this stop to think whether the web is actually the correct platform for their app?
If you need to build cross-platform apps - and that is mostly the case nowadays - the web is actually not such a bad solution. I mean, what is the alternative?
For mobile, clean separations also helps. It is definitely possible - and less complex than you'd expect - to have core functionality in a shared library, wired up to platform-specific native GUI toolkits.
But your question illustrates the problem. The pervasive presence of toolkits that add layer upon layer to create "cross platform" is the new norm. People are literally losing awareness that other options exist - much to the detriment of end users.
We have built an entire industry around such tooling - and long ago stopped questioning what value it brings.
I'm mainly an app developer that transitioned (a long time ago) from native to web apps, tired of code duplication. I'm well aware of other options.
The web comes with its warts, but I've yet to see an app platform as ergonomic and comfortable for the developer as the web. For 99% of my use cases, anything else is overkill and too much of a hassle. It's not the web's fault that it's a better app platform than actual app platforms.
There is plenty of software that works across Windows/Mac/Linux/Unix-like platforms. They run faster, don't always need an internet connection, are easily portable, and are typically better designed and less bloated than any web-limited cross-platform application. They also don't rely on browser cache or localStorage for settings and I don't need to login daily to access my stuff.
Disable your web cache and try to use a web application daily. I wouldn't use software that constantly resets or removes my config files as a side effect of some other action.
I can think of valid reasons to disable browser cache/cookies/localStorage that are completely unrelated to the storage of web app data. It is a side effect of web apps primarily using those methods to store user data when they don't want to store anything server side. They are designed to use local storage! That's one of their "perks".
Cookies/cache/localStorage works for most users. I am not most users and I recognize that. My criticism is that the primary method of persistent storage is fundamentally flawed and makes most web apps completely unusable for me.
Edit:
I'm that person who carries a USB drive of portable software customized to my preferences primarily to be used on friends' machines or for setting up new machines. Setup once and use everywhere. Browser-based storage needs to be setup everywhere by design. I need to setup at Work and at Home because I refuse to tie my personal Home profile with my Work profile, so there is no "syncing" my profile across devices.
If you primarily use one device or can sync between devices and allow cookies/cache/localStorage to persist, then web apps won't be a problem for you at all. If any of the above doesn't apply - then web apps are a thorn in the side.
Yes. There can be a lot of boilerplate code, but in terms of speed, using Qt for instance can still be much faster than a web application that accomplishes the same task, even if you get a huge binary after compilation.
Oh and as Nadya said, sometimes this generalization causes issues. Engineering is a game of trade-offs I think :P.
Well, as to the settings stuff, yes - see something like "cookie clicker" for an (IMO bad UX) example. And you can more easily control how much data it stores over a certain point, as all major browsers make you confirm via a dialog that you are giving permission to use the requested amount.
But to keep this from turning ugly, my point was more that you need to take into consideration what you'll need for your app.
If it's an application that basically only exists as an interface for data stored in a backend server, then giving them the ability to exist after bankruptcy is pointless. However if it's something that needs root access and will frequently be used and installed on a system without internet access, native is better.
And saying things like a native app is "less bloated" when it takes literally multiple magnitudes more time to install and run with significantly more permissions to your whole system is silly.
One thing I have noticed in this thread in general is that the web devs are extremely defensive. No, a web app is not inherently bad. No one is saying this.
But cherry picking questions to prove a point is silly. However, to show it's not a gangup on web apps here we go:
Speed of the install process? Probably slower than loading a web page for serious applications
How does the update system work? Depends if you're releasing as a single statically compiled program or using shared libs that can be updated. Also if you have a db to sync this will affect things.
How quickly can you release them to all platforms? As long as it takes to compile to all compatible targets.
Can they be easily customized and modified by the user? in what regard? if you mean configuration then yes. If you mean being able to manually tweak the style of the application like when fiddling around in the element inspector, then no unless you are using a theme parser that lets them adjust the themes.
Can they be easily shared? yes
What's the permission model like? Depends what granularity you want to have. Permissions can be restricted to the action level, user level, group level, machine level, global level, etc. Whatever logic you want to implement really.
How much can that application access? access in terms of what?
And one thing I've noticed is that "native devs" are extremely condescending.
I hear a LOT about how writing an application for the web is wrong (especially on HN), but not much about why it's a good idea. I see comments about how native is faster, "less bloated", portable, "better designed", offline, and more secure. But never any comments on how long they take to install, how difficult it is to use them across multiple devices, how you need to either use an app-store, bundle your own updater (which follows all the best security practices), or rely on a distro to get around to including it for you. I never read discussions on how they tend to be larger, they have more access to the underlying system by default, how they are more difficult to secure, or how if you use the one application across multiple platforms you need to learn multiple UIs.
And while none of that is true across the board, it's stuff you need to spend more time on to get right, whereas you tend to get it "for free" when targeting the web. Obviously things go the other way for some features. Getting high performance out of a web app takes more work, getting "high security" to work in a browser is much more difficult, getting offline takes some consideration (IMO it's not that difficult today, but it does still take work).
It might come across as defensive, but I can't bring up anything web-related on this site without being asked why I didn't make it native, or why I'm using javascript at all, or why I decided to use the web when there are "perfectly good UI toolkits for native app development" while hand waving away all the benefits and reasoning behind my decisions as either pointless or just by saying "you can do that with native too" without going into the mountain of work necessary to get it right. And in that comment I indulged that anger which I don't normally do on this site.
I hear this excuse a lot, but there isn't a single benefit I've talked about which is for the developer only.
Install times are a big one. No user wants to install things and manage dependencies or manually install updates. The sandboxing is another very pro-user thing as it makes sure my fuckups or mistakes can't easily cause their whole PC to be compromised, and they don't need to spend time making sure they have permissions set up correctly for my application on every device.
And for me, as a user, I greatly prefer web apps because I and many other people live a multi device life. If I have an Android phone, a Windows work PC, and a personal MacBook, I need to learn 3 different UIs for a single application. I need to configure them 3 times, manage their settings in 3 places. With a web app I learn 1 UI, I configure it once, I can login on my main PC or my father's Linux laptop and get the same app I'm used to in seconds.
No worrying about making backups for it, no worrying about the permissions I'm giving it, no worrying about the updates each machine is on, or how much space it might be taking up, or if it's using HTTP connections for updates, or that support for my older OS might get dropped, or that it won't hit my new distro for 6 months, or that it's not available in my package manager, or that it will autostart at boot and be an annoyance, or that uninstalling it will leave a bunch of shit behind, or any other of the things that native applications do that annoy me.
I go to a URL, and I use an app in less than a second on any device I own. And if I want, I can quickly go into the browser settings and wipe that app and everything it's touched from the PC in seconds.
That might be optimising for my wanted experience as a user, but I can't please everyone and I see a lot more multi-device multi-OS users who don't want to manage all the details of a native app than I ever do of users that want the opposite.
But compared to the web, native applications are significantly worse in these areas.
Especially on that first point. I can go to the vast majority of web apps on just about anything with a browser and get it up and running in less than a second knowing nothing more than a domain name.
And there's "cross platform" then there's "cross platform". Something like QT is amazing, but you are still looking at the big-3 desktop OS's, and maybe the big mobile guys if you work for it. A web app includes all of that, plus my TV, my car headunit, and even my damn watch! (I often use a web home-automation app from a browser on my watch, the UI adapts pretty damn well for quick light-flips)
Nothing is perfect for everyone, but just because it's been done since the 80's doesn't mean it can't be improved on. And as always it depends on your actual needs. There aren't any "better" and "worse" architectures.
Yes - totally! You either limit yourself more or double/triple your work making different system API calls depending on what platform you're compiling for. Not too different from tripling your workload to support "offline" applications or dealing with IE/Safari/Chrome/Firefox differences.
Most web apps I've come across either don't run in IE or have various bugs/issues in Firefox as most of them are coded on and targeting Chrome due to Chrome's dominance of the web. Which reminds me of people building/testing only on Windows.
I will admit that the comparison I'm drawing are "same but different" problems. Browsers are a lot more standardized than operating systems and fixing a difference between Firefox/Chrome is usually a lot more trivial than fixing a difference between Windows/Mac.
Cannot agree more! However, I can see the appeal of webapps. Getting started in app development using GUI APIs (cross-platform or target-native) can be a bit intimidating, too. And now we have things like electron which... well to me it's slow, but being able to develop a desktop GUI using html5 and css and JS is appealing.
And to be fair, a web app does get the job done, though I have to admit the number of companies that opt for an internal webapp instead of a desktop application is interesting.
As long as we're trending towards web-apps having native apps' functionality then that distinction will soon be meaningless. Native apps are already sandboxed in various ways. Process integrity levels, VM protection, low privileged execution, call gating, ACLs, MAC, etc etc. All these technologies already exist and are already being used in various ways. Any systems level programmer should already be aware of those.
The browser is fast becoming the "OS", and most browsers are several orders of magnitude more bloated than mainstream kernels, not to mention horrendously insecure - if we're worried about security, then browser vendors are the last people I would trust for anything important.
Unless you're running a very unusual OS setup, any native app by default has read and write access to all of your files without asking.
I feel pretty comfortable assuming that www.randomwebapp.com isn't reading and uploading my ~/.ssh and ~/.gpg, otherwise I'd be terrified of using the web at all.
>Unless you're running a very unusual OS setup, any native app by default has read and write access to all of your files without asking.
That's partially true. By default, it cannot access any system files, or change any system settings without admin privileges. Admin access is also required to authorize a firewall exception if it wants to use the network. And you have the choice to arbitrarily restrict a software's read/write access to locations of your choosing. You might call that unusual, but such restrictions are common in managed environments.
>I feel pretty comfortable assuming that www.randomwebapp.com isn't reading and uploading my ~/.ssh and ~/.gpg, otherwise I'd be terrified of using the web at all.
Your comfort is misplaced. There are FAR more browser vulnerabilities (including chrome, firefox) allowing code execution than there are OS kernel and CPU vulnerabilities allowing you to break out of the native apps' sandbox.
There are tons of sandbox-type features in most modern OSs to prevent apps from interfering with each other. For example:
1) Virtual memory protection (can't access other app's memory)
2) Protection rings (safe transfer from UM to KM for system calls)
3) User interface isolation (one process can't interact with another's UI)
4) I/O privilege levels (prevents one rogue app from causing I/O starvation)
5) Process Integrity Levels. You can run apps under your own identity (be it super user or admin or regular user) but assign them reduced permissions as far as accessing data goes. You can run at-risk apps this way so that they can run without having access to any of your data.
6) You can restrict access to various other things in addition to the data using ACLs (network, device drivers, etc).
7) ABI level isolation using user mode kernels ("Library OSs").
Yes and these protections are only used to their full potential on something like iOS. On Windows, macOS, and Linux these are not used to defend your data or system out-of-the-box like they are in a browser.
A massive wall with an open gate isn't much of a wall.
There are more developers for the web than for C++ GUI applications. Building a cross-platform application using C++ is not a simple task in the slightest and it's even more difficult finding talented developers to accomplish the goal.
It seems like a no-brainer to me, building a cross-platform application use web technologies makes the most sense from a business pov.
There is a huge number of applications for which the web is pretty much the only platform that makes any sense at all.
Compared to making a cross platform native application that works on Linux, Mac, Windows, Android, and iOS, making a web app—even with offline support—is delightful and efficient.
And me too. Meteor became my favourite development framework in 2016. Offline ability and data sync are some kind of "by default" once I start a project, or even just a simple test of an idea.
Plus, packing it into a desktop app is lightning fast. It is a "wow" factor for potential clients.
Totally good point; in a quick test of the latest iOS beta, Safari seems to reload some pages and not others (window.focus?). So it seems to be that whatever JS magic the sites are using is handling the reload (latest ads! weeeeeeeee!!!)
Take the TripAdvisor app: when you go to the website you are nagged to go to the app; if you click, you end up in the Google Play store and it loses where you were.
And of course it only works online; utterly pointless.
Funnily, this site does not work well with my crappy mobile connection. It loads 3 paragraphs and, strangely, the 4th paragraph is cut in half. I waited a minute for it to load.
I feel like commenting on the callbacks and variable-bound-contexts, and saying something about the glorious wonder of modern JavaScripts... better not, tho :)
How about offering a HTML experience in the first place?
I'm really tired of "HTML" websites that are actually a heap of Javascript writing to a virtual DOM, with only a few references to scripts, most of it being for surveillance (e.g. ad trackers, surveillance for profit with the side effect of it being available to intelligence services via subpoenas, gag orders and secret courts).
I get where this movement is coming from but I just fundamentally disagree with it. Making offline first web apps may make sense for certain applications where you expect you users may need to use it offline, but it doesn't make sense for all apps and it can require a fundamentally different way of writing your application which is a waste of time and effort if it's not a likely use case for your users.
It does seem the business requirement for real-time is often ignored in "offline first" write-ups. If the current, up-to-the-second status of a server is being monitored, if vital signs for a patient are being monitored, etc., an app only delivers value if it provides the "now"/real-time data. Some form of websocket calls will be used, not Ajax. Showing something from the past because the data was stored locally is not helpful in assessing the current, real-time condition. There may be some things you can do to cater to the condition of being offline, but the point remains this app delivers almost zero business value while offline.
Of course, but if you're writing a web app whose only purpose, for example, is to talk to support staff for your company, then having an offline mode is not really useful. If you're offline you can't talk to the staff, so all the app needs to do is fail gracefully. Designing it "offline first" would be silly.
I actually believe that >50% of web apps probably fall into this category, where they really cannot function properly offline because the online-ness is core to their functionality. That's why they are web apps in the first place.
Now if you are designing a web app that is a web version of a more traditional native app like Google Docs or something, then sure offline first makes sense. But I don't think that's the majority of web apps.
Offline first basically means that if you send a message while your network is temporarily down, it's just queued and resent immediately once the link is up again, like your email outbox.
Since network links are always flaky, it just makes sense to do it this way. Since they are also always relatively slow, it makes sense to cache data locally in order to give a faster experience.
Not doing things offline-first in an app basically means that you are introducing synchronous requests everywhere: reading a support reply from yesterday is a synchronous request that fails "gracefully" if your 3G happens to be down, etc.
Telegram's web app is pretty nice. The app code is cached offline with a service worker and updated whenever possible, so it loads instantly. The most recent messages from your contacts are saved in the client as well. I appreciate all that stuff as a user, and the more stuff works offline the better, because it also means it's faster and more reliable.
Dude, I get it. Stop being so condescending. Maybe you're trying to be helpful by including those links but it comes off as incredibly insulting and pedantic.
I was talking about a real-time communication application.
You can make the argument that the application should try to cope with a sporadic connection, but in the real world this is inefficient for both the user and the staff. If the connection is going in and out while they are trying to hold a conversation, it might be better to show the user a message that says: hey, your connection is down, try again later when you have a better connection, or try an asynchronous support method like submitting a ticket.
Regardless that was just an example off the top of my head. My point still stands; web applications are web applications for a reason, and "offline first" doesn't make sense for most of them.
Please don't stoop to lashing out personally. Friction is inevitable in an internet forum like HN, so each of us will occasionally be rubbed the wrong way by a comment here. Most of the time this is just a glitch—a crossed signal about intention. But even when the other person really is being condescending and whatnot, it's important for the sake of the community not to make the thread still worse, as you did here.
Your example indicates you don't get it. You mention responses "from yesterday". That's not going to fly in an application that's real time that requires staff to keep the conversation open. You're describing a ticket system, I'm describing a live chat system.
I said "in the real world" as opposed to abstract talking about an application; in the real world a staff member would be stuck waiting for the response from this user where you're trying to keep the conversation going despite their sporadic connection. I guess I could have said "in meatspace" or something.
Links to explanations of the topic being discussed is of course incredibly insulting. Come on.
Anyway, whatever man. You are not pleasant to talk to so I'm done.
In the real world, network links go down all the time, even if just for three seconds.
As for etiquette... I politely explained how I see things differently from you and provided a couple of links for reference. You immediately called me "condescending", "incredibly insulting", and "pedantic".
If you want a pleasant conversation, that's not the way to do it...
I see the author's vision, but I have to say that I still favor progressive enhancement over this. Using javascript only assumes a lot about the user even in this modern age of web development.
Also as a nitpick, I would say don't make your data objects SINGLETONS, make them SINGLE INSTANCE.
EDIT: someone deleted their comment but brought up a good point that you can have progressive enhancement with this. Yes, however the author made it clear he favors the JS or nothing approach. And there's even this snippet:
>An offline first approach would be to move the entire MVC stack into client side code (aka our app) and to turn our server side component into a data only JSON API.
Here's your user assumption: 99.8% of browsers have Javascript turned on. Creating a fallback version of your app that works without Javascript is basically like developing for IE 5 -- you are completely wasting your time.
Here's another one: mobile browsers make up a majority of web traffic now, and Chrome and Safari are pretty well split in terms of share. Sending HTML tags over the wire is a waste of your users' time. The idea of "progressive enhancement" should be thought of more as, "what can we display quickly on our user's screen while we're downloading and parsing the Javascript to make the application work". Note that this doesn't mean loading screens, just an initial state which shows the user progress in building up the app from that initial http connection. Regardless of the negativity around AMP, Google has made us think about what exactly we need to do in order to provide a good experience quickly and a feature-rich experience within a couple of seconds.
Progressive Enhancement was actually laid out clearly: basic content should be accessible to all web browsers, and basic functionality should be accessible to all web browsers.
As for Google, they also recommend using progressive enhancement, and they even removed their AJAX SE scheme, posting about this and giving a recommended trick for compatibility testing.
I disagree rather strongly with the implicit advice here: that HTML5 "applications" ought to rely on client-side processing first and foremost. This usually translates to "shove more Javascript down the user's throat", which is the opposite of "a better HTML5 user experience" IMO. If I trust you enough to want to run arbitrary Turing-complete code of your making, I'd be more inclined to download a native implementation for my platform; otherwise, I generally have better things for my CPU to do than burn cycles on things that should be done on your own servers.
I do agree, however, that the HTML "app" ought to be treated equivalently to native implementations -- that is, it should communicate over the same API (or perhaps an internal equivalent) as native apps. I like to think of this as "middle-end" web development, with the HTML-generating server working to mediate the browser's expectations (HTML and CSS and maybe some Javascript if it's really needed) with the actual business logic provided by the API.