I sense a trend of trying to cram everything good about native apps onto the web. Do people who do this stop to think whether the web is actually the correct platform for their app?
If you need to build cross-platform apps - and that is mostly the case nowadays - the web is actually not such a bad solution. I mean, what is the alternative?
For mobile, a clean separation also helps. It is definitely possible - and less complex than you'd expect - to have core functionality in a shared library, wired up to platform-specific native GUI toolkits.
But your question illustrates the problem. Toolkits that add layer upon layer to create "cross platform" apps have become the new norm. People are losing awareness that other options exist - much to the detriment of end users.
We have built an entire industry around such tooling - and long ago stopped questioning what value it brings.
I'm mainly an app developer who transitioned (a long time ago) from native to web apps, tired of code duplication. I'm well aware of other options.
The web comes with its warts, but I've yet to see an app platform as ergonomic and comfortable for the developer as the web. For 99% of my use cases, anything else is overkill and too much of a hassle. It's not the web's fault that it's a better app platform than actual app platforms.
There is plenty of software that works across Windows/Mac/Linux/Unix-like platforms. It runs faster, doesn't always need an internet connection, is easily portable, and is typically better designed and less bloated than any web-limited cross-platform application. It also doesn't rely on browser cache or localStorage for settings, and I don't need to log in daily to access my stuff.
Disable your web cache and try to use a web application daily. I wouldn't use software that constantly resets or removes my config files as a side effect of some other action.
I can think of valid reasons to disable browser cache/cookies/localStorage that are completely unrelated to the storage of web app data. It is a side effect of web apps primarily using those methods to store user data when they don't want to store anything server side. They are designed to use local storage! That's one of their "perks".
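To make the trade-off concrete, here's a hedged sketch of the pattern being described (the function names and the 'app:settings' key are illustrative, not from any particular app): settings persisted entirely client-side in localStorage, with an in-memory fallback when storage is blocked - which is exactly why clearing or disabling it resets your config.

```javascript
// Illustrative sketch: a web app persisting user settings client-side.
// If localStorage is blocked or cleared, everything the user configured
// is gone - the behavior being criticized above.
const memoryFallback = new Map(); // survives only for the page's lifetime

function storageUsable() {
  try {
    if (typeof localStorage === 'undefined') return false;
    localStorage.setItem('__probe', '1'); // probe write, some contexts throw
    localStorage.removeItem('__probe');
    return true;
  } catch (_) {
    return false; // storage disabled, full, or in a restricted context
  }
}

function saveSettings(settings) {
  const json = JSON.stringify(settings);
  if (storageUsable()) localStorage.setItem('app:settings', json);
  else memoryFallback.set('app:settings', json);
}

function loadSettings(defaults) {
  const json = storageUsable()
    ? localStorage.getItem('app:settings')
    : memoryFallback.get('app:settings');
  return json ? { ...defaults, ...JSON.parse(json) } : { ...defaults };
}
```

With storage blocked, the fallback keeps settings only for the current session: set up once, lost on reload - the complaint in a nutshell.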
Cookies/cache/localStorage works for most users. I am not most users and I recognize that. My criticism is that the primary method of persistent storage is fundamentally flawed and makes most web apps completely unusable for me.
Edit:
I'm that person who carries a USB drive of portable software customized to my preferences, primarily to be used on friends' machines or for setting up new machines. Set up once, use everywhere. Browser-based storage needs to be set up everywhere by design. I need to set up at Work and at Home because I refuse to tie my personal Home profile to my Work profile, so there is no "syncing" my profile across devices.
If you primarily use one device or can sync between devices and allow cookies/cache/localStorage to persist, then web apps won't be a problem for you at all. If any of the above doesn't apply - then web apps are a thorn in the side.
Yes. There can be a lot of boilerplate code, but in terms of speed, using Qt for instance can still be much faster than a web application that accomplishes the same task, even if you get a huge binary after compilation.
Oh and as Nadya said, sometimes this generalization causes issues. Engineering is a game of trade-offs I think :P.
Well, on the settings point, yes - see something like "Cookie Clicker" for a (IMO bad UX) example. And you can more easily control how much data it stores beyond a certain point, as major browsers make you confirm via a dialog that you are giving permission to use the requested amount.
But to keep this from turning ugly, my point was more that you need to take into consideration what you'll need for your app.
If it's an application that basically only exists as an interface for data stored on a backend server, then giving it the ability to keep working after the company goes bankrupt is pointless. However, if it's something that needs root access and will frequently be used and installed on a system without internet access, native is better.
And saying things like a native app is "less bloated" when it takes literally multiple magnitudes more time to install and run with significantly more permissions to your whole system is silly.
One thing I have noticed in this thread in general is that the web devs are extremely defensive. No, a web app is not inherently bad. No one is saying this.
But cherry-picking questions to prove a point is silly. However, to show it's not a gang-up on web apps, here we go:
Speed of the install process? Probably slower than loading a web page for serious applications
How does the update system work? Depends if you're releasing as a single statically compiled program or using shared libs that can be updated. Also if you have a db to sync this will affect things.
How quickly can you release them to all platforms? As long as it takes to compile to all compatible targets.
Can they be easily customized and modified by the user? In what regard? If you mean configuration, then yes. If you mean being able to manually tweak the style of the application, like when fiddling around in the element inspector, then no - unless you are using a theme parser that lets them adjust the themes.
Can they be easily shared? Yes.
What's the permission model like? Depends what granularity you want to have. Permissions can be restricted to the action level, user level, group level, machine level, global level, etc. Whatever logic you want to implement really.
How much can that application access? Access in terms of what?
And one thing I've noticed is that "native devs" are extremely condescending.
I hear a LOT about how writing an application for the web is wrong (especially on HN), but not much about why it's a good idea. I see comments about how native is faster, "less bloated", portable, "better designed", offline, and more secure. But never any comments on how long they take to install, how difficult it is to use them across multiple devices, how you need to either use an app store, bundle your own updater (which follows all the best security practices), or rely on a distro to get around to including it for you. I never read discussions on how they tend to be larger, how they have more access to the underlying system by default, how they are more difficult to secure, or how, if you use the one application across multiple platforms, you need to learn multiple UIs.
And while none of that is true across the board, it's stuff you need to spend more time on to get right, whereas you tend to get it "for free" when targeting the web. Obviously things go the other way for some features. Getting high performance out of a web app takes more work, getting "high security" to work in a browser is much more difficult, getting offline takes some consideration (IMO it's not that difficult today, but it does still take work).
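For illustration, the offline "consideration" mentioned above usually boils down to a service worker applying a cache-first strategy. Here's a minimal sketch of just that decision logic, with the cache and network abstracted as plain async functions (in a real service worker they'd be `caches.match(request)`, `cache.put(...)`, and `fetch(request)`); the helper name is my own.

```javascript
// Cache-first strategy, the core of basic offline support:
// answer from the local cache when possible, hit the network otherwise,
// and stash network responses so they're available offline next time.
async function cacheFirst(request, cacheMatch, cachePut, networkFetch) {
  const cached = await cacheMatch(request);
  if (cached !== undefined) return cached; // served fully offline
  const response = await networkFetch(request);
  await cachePut(request, response); // cache for future offline use
  return response;
}
```

Inside an actual service worker this would be wired up via `self.addEventListener('fetch', e => e.respondWith(...))` against the Cache Storage API.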
It might come across as defensive, but I can't bring up anything web-related on this site without being asked why I didn't make it native, or why I'm using javascript at all, or why I decided to use the web when there are "perfectly good UI toolkits for native app development" while hand waving away all the benefits and reasoning behind my decisions as either pointless or just by saying "you can do that with native too" without going into the mountain of work necessary to get it right. And in that comment I indulged that anger which I don't normally do on this site.
I hear this excuse a lot, but there isn't a single benefit I've talked about which is for the developer only.
Install times are a big one. No user wants to install things, manage dependencies, or manually install updates. The sandboxing is another very pro-user thing, as it makes sure my fuckups or mistakes can't easily cause their whole PC to be compromised, and they don't need to spend time making sure they have permissions set up correctly for my application on every device.
And for me, as a user, I greatly prefer web apps because I and many other people live a multi device life. If I have an Android phone, a Windows work PC, and a personal MacBook, I need to learn 3 different UIs for a single application. I need to configure them 3 times, manage their settings in 3 places. With a web app I learn 1 UI, I configure it once, I can login on my main PC or my father's Linux laptop and get the same app I'm used to in seconds.
No worrying about making backups for it, no worrying about the permissions I'm giving it, no worrying about the updates each machine is on, or how much space it might be taking up, or if it's using HTTP connections for updates, or that support for my older OS might get dropped, or that it won't hit my new distro for 6 months, or that it's not available in my package manager, or that it will autostart at boot and be an annoyance, or that uninstalling it will leave a bunch of shit behind, or any other of the things that native applications do that annoy me.
I go to a URL, and I use an app in less than a second on any device I own. And if I want, I can quickly go into the browser settings and wipe that app and everything it's touched from the PC in seconds.
That might be optimising for my wanted experience as a user, but I can't please everyone and I see a lot more multi-device multi-OS users who don't want to manage all the details of a native app than I ever do of users that want the opposite.
But compared to the web, native applications are significantly worse in these areas.
Especially on that first point. I can go to the vast majority of web apps on just about anything with a browser and get it up and running in less than a second knowing nothing more than a domain name.
And there's "cross platform", and then there's "cross platform". Something like Qt is amazing, but you are still looking at the big-3 desktop OSes, and maybe the big mobile guys if you work for it. A web app includes all of that, plus my TV, my car headunit, and even my damn watch! (I often use a web home-automation app from a browser on my watch, and the UI adapts pretty damn well for quick light-flips.)
Nothing is perfect for everyone, but just because it's been done since the 80's doesn't mean it can't be improved on. And as always, it depends on your actual needs. There aren't any universally "better" or "worse" architectures.
Yes - totally! You either limit yourself more or double/triple your work making different system API calls depending on what platform you're compiling for. Not too different from tripling your workload to support "offline" applications or dealing with IE/Safari/Chrome/Firefox differences.
Most web apps I've come across either don't run in IE or have various bugs/issues in Firefox as most of them are coded on and targeting Chrome due to Chrome's dominance of the web. Which reminds me of people building/testing only on Windows.
I will admit that the comparisons I'm drawing are "same but different" problems. Browsers are a lot more standardized than operating systems, and fixing a difference between Firefox/Chrome is usually a lot more trivial than fixing a difference between Windows/Mac.
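Worth noting that the usual way to keep those Firefox/Chrome differences trivial is feature detection rather than browser sniffing. A hedged sketch (the function and its `env` parameter are illustrative; in a browser you'd pass `window`):

```javascript
// Pick a capability based on what the environment actually provides,
// instead of branching on the user-agent string. This is why a
// cross-browser difference is usually a small, local fix.
function pickStorageBackend(env) {
  if (env.indexedDB) return 'indexeddb';       // structured, async storage
  if (env.localStorage) return 'localstorage'; // small key/value fallback
  return 'memory';                             // nothing persistent available
}
```

The same code then runs unchanged on any browser that exposes the detected feature, current or future.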
Cannot agree more! However, I can see the appeal of web apps. Getting started in app development using GUI APIs (cross-platform or target-native) can be a bit intimidating, too. And now we have things like Electron, which... well, to me it's slow, but being able to develop a desktop GUI using HTML5, CSS, and JS is appealing.
And to be fair, a web app does get the job done, though I have to admit the number of companies that opt for an internal webapp instead of a desktop application is interesting.
As long as we keep trending towards web apps having native apps' functionality, that distinction will soon be meaningless. Native apps are already sandboxed in various ways: process integrity levels, VM protection, low-privileged execution, call gating, ACLs, MAC, etc. All these technologies already exist and are already being used in various ways. Any systems-level programmer should already be aware of them.
The browser is fast becoming the "OS", and most browsers are several orders of magnitude more bloated than mainstream kernels, not to mention horrendously insecure - if we're worried about security, then browser vendors are the last people I would trust for anything important.
Unless you're running a very unusual OS setup, any native app by default has read and write access to all of your files without asking.
I feel pretty comfortable assuming that www.randomwebapp.com isn't reading and uploading my ~/.ssh and ~/.gpg, otherwise I'd be terrified of using the web at all.
>Unless you're running a very unusual OS setup, any native app by default has read and write access to all of your files without asking.
That's partially true. By default, it cannot access any system files, or change any system settings without admin privileges. Admin access is also required to authorize a firewall exception if it wants to use the network. And you have the choice to arbitrarily restrict a software's read/write access to locations of your choosing. You might call that unusual, but such restrictions are common in managed environments.
>I feel pretty comfortable assuming that www.randomwebapp.com isn't reading and uploading my ~/.ssh and ~/.gpg, otherwise I'd be terrified of using the web at all.
Your comfort is misplaced. There are FAR more browser vulnerabilities (including Chrome and Firefox) allowing code execution than there are OS kernel and CPU vulnerabilities allowing you to break out of a native app's sandbox.
There are tons of sandbox-type features in most modern OSes to prevent apps from interfering with each other. For example:
1) Virtual memory protection (can't access other app's memory)
2) Protection rings (safe transfer from UM to KM for system calls)
3) User interface isolation (one process can't interact with another's UI)
4) I/O privilege levels (prevents one rogue app from causing I/O starvation)
5) Process Integrity Levels. You can run apps under your own identity (be it super user or admin or regular user) but assign them reduced permissions as far as accessing data goes. You can run at-risk apps this way so that they can run without having access to any of your data.
6) You can restrict access to various other things in addition to the data using ACLs (network, device drivers, etc).
7) ABI level isolation using user mode kernels ("Library OSs").
Yes, and these protections are only used to their full potential on something like iOS. On Windows, macOS, and Linux they are not used to defend your data or system out of the box like they are in a browser.
A massive wall with an open gate isn't much of a wall.
There are more developers for the web than for C++ GUI applications. Building a cross-platform application using C++ is not a simple task in the slightest and it's even more difficult finding talented developers to accomplish the goal.
It seems like a no-brainer to me: building a cross-platform application using web technologies makes the most sense from a business point of view.
There is a huge number of applications for which the web is pretty much the only platform that makes any sense at all.
Compared to making a cross platform native application that works on Linux, Mac, Windows, Android, and iOS, making a web app—even with offline support—is delightful and efficient.