The power of HTML5 and javascript for creating applications is vastly over-hyped. I get that people are excited by the possibilities, but it still sort of sucks in terms of allowing you to actually take advantage of even a fraction of the power of the machine. (But we can have gradients and bouncy animations now! Weeeeeeeeeeeeee!)
Here's what you can't do in a web app yet: anything that requires serious CPU horsepower or memory efficiency. That's a lot of things! And the people that think that clever interpreters for javascript are the solution to this are nuts -- everything comes at a cost. Even the most clever JIT is probably going to double your memory usage, which sucks for cache coherency, and, well, for memory usage in general.
I see all these news items here that can be paraphrased as "check out this thing we did that was cutting edge on desktops in 1996! Now it's on the web!"
And on one hand: yeah, cool hack; but I mean, you're running the equivalent of what people would have thought of as a supercomputer 15 years ago, and yet web standards are essentially confining you to making toy apps.
Performance is one thing, but to me it isn't the big one. I can even get over the crapfest that is HTML/CSS/Javascript if I really have to. The real problem with the web is that it doesn't work for anything serious. It's not really scriptable from the user-end and there are huge problems with data interchange.
On the web I can't grab all of my Facebook photos, zip them up and send them somewhere else unless I write a monstrous pile of hacky code and break Facebook's TOS, but on a real computer it's absolutely trivial.
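For contrast, the local version of that job really is trivial -- something like this little Python sketch, assuming the photos are already sitting in a folder on disk (the paths here are made up):

    import zipfile
    from pathlib import Path

    photos = Path.home() / "Pictures" / "facebook"   # hypothetical local folder
    archive = Path.home() / "facebook-photos.zip"

    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for photo in photos.rglob("*.jpg"):
            # store paths relative to the folder so the zip isn't full of /home/...
            zf.write(photo, photo.relative_to(photos))

Ten lines, no TOS to break, and the zip can go anywhere I want.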
If I want to slap together a disposable playlist of videos on Youtube I have no idea how. Maybe I have to log in? What if I want videos from a bunch of other websites?
And even watching individual videos is terrible on the web. I either get a crappy website-dependent flash player or a crappy browser player, seemingly having lost the freedom to embed the video-playing application I want. The best user experience for watching online videos seems to be
1. Start buffering the video.
2. Browse to /proc/`pgrep plugin-container`/fd on my hard-drive.
3. Open the video in the player of my choice.
That's insane, but I do it regularly.
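For the curious, here's roughly what that trick looks like as a Python sketch instead of shell. It assumes Linux, Firefox's plugin-container process, and that the buffered video shows up as a deleted Flash temp file the plugin still holds open -- details that vary by browser and plugin version, so treat it as an illustration rather than a recipe:

    import os
    import shutil
    import subprocess

    # Same idea as `pgrep plugin-container`: find the plugin process's PID.
    pid = subprocess.check_output(["pgrep", "plugin-container"]).split()[0].decode()

    fd_dir = f"/proc/{pid}/fd"
    for fd in os.listdir(fd_dir):
        link = os.path.join(fd_dir, fd)
        target = os.readlink(link)
        # The buffered video is typically a deleted /tmp/Flash* file the plugin
        # still has open; the /proc fd link lets us read it anyway.
        if "Flash" in target and "(deleted)" in target:
            shutil.copyfile(link, os.path.expanduser("~/buffered-video.flv"))
            print("saved", target)
            break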
When the web lets me grab a text stream or a video or some music and lets me do what I want with it then it'll be ready for serious use. Until then it's just the "iPad" of technologies, a playground instead of a set of LEGO.
But that's what this shift to cloud computing is about. Web applications simply shift all difficult computation to the server and deliver the results straight to the client. As for client-side JS code, your point about the increased resource usage is correct, but moot for the next 5-10 years IMO. Hardware is going to follow Moore's law for a few years at least, and as long as that happens, efficiency simply isn't an important factor for most applications. History has shown that (at least as long as Moore's law holds) the tradeoff between developer time and running time keeps shifting to favor developers.
Well, the only place where Moore's law is still working is in doubling the number of transistors, not doubling the speed. So you can get multiprocessing, but that's very hard to take advantage of, and as far as I know there's basically no concurrency in javascript regardless.
Anything that takes a lot more time to process than the round-trip time might be worth running on the cloud. CAD operations, for instance, are very optimisable this way, by having a local and a remote version of a function such as a 3DBoolean, where easy operations are run locally and more complex ones leverage the network and run quicker than they would natively on the host machine.
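A minimal sketch of that split, in Python for concreteness -- the cost heuristic, the endpoint URL, and the mesh format are all invented for illustration, not taken from any real CAD kernel:

    import json
    import urllib.request

    REMOTE_ENDPOINT = "https://cad.example.com/boolean3d"  # hypothetical service
    COST_THRESHOLD = 10_000  # triangles; above this the network round trip pays off

    def estimated_cost(mesh_a, mesh_b):
        # Stand-in heuristic: total triangle count of the two operands.
        return len(mesh_a["triangles"]) + len(mesh_b["triangles"])

    def boolean_3d_local(mesh_a, mesh_b):
        # Placeholder for the in-process version (a local geometry kernel).
        raise NotImplementedError

    def boolean_3d(mesh_a, mesh_b):
        """Run cheap booleans locally; ship expensive ones to the server."""
        if estimated_cost(mesh_a, mesh_b) < COST_THRESHOLD:
            return boolean_3d_local(mesh_a, mesh_b)
        payload = json.dumps({"a": mesh_a, "b": mesh_b}).encode()
        req = urllib.request.Request(REMOTE_ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode())

The caller gets the same answer either way; whether the work happened locally or on the server is just a performance detail.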
No, but it can remove a lot of load, which frees up more horsepower for the realtime stuff. And when you factor in WebGL for shifting load to the graphics hardware, there is a hell of a lot that can be done in the browser.
I'm not sure if this article has any merit without actual survey data. Most of us at HN live in this bubble of the web-apps, stay-connected, iPhone/iPad/Android world, while there are probably many (literal) Mom & Pop shops that still rely on old technology because "it works" (and switching won't increase their revenue; developer tools are the exception).
Very true. My contention, though, is that the bubble we live in is most likely indicative of the future of technology. Especially given that we've been seeing this trend of web-only apps EXPLODING and growing rapidly for a long time now, it looks like it's just a matter of time before the average consumer is forced into the same ecosystem.
For example, note that the iPhone and Android ecosystems are very much prevalent amongst average consumers, who have already accepted an ecosystem that forces them to pay for digital content they don't really own, or to use Google and Facebook even as the information those companies hold, coupled with their privacy policies, makes them increasingly scary.
It's strange, but I think the tech market is indifferent to violations of privacy or ownership rights. Success or failure is determined by functionality, UI, etc., but not by legal concerns. We have never before witnessed the creation of online ecosystems of this scope, so it's understandable, but ultimately a mistake IMO.
Completely agree with you that Mom & Pop shops still rely on old technology. They still aren't conversant with Dropbox or Google Docs as much as we think they are. In any case they won't be the early adopters of such technologies.
On the data:
1. On dev stats: about 52K independent devs out there hacking on iOS and Android. Not sure if there are organizations out there that have 52K dedicated devs building mobile apps.
Blah blah blah. Everything in this universe is cyclic. First we had huge servers and dumb terminals because CPU time was expensive. Then we had powerful desktops and lean servers because connectivity was expensive.
Now we have powerful desktops and extremely powerful servers because we want to be more connected to each other and don't have the resources to replicate the full set of data on every desktop (i.e. limitations of space and of the processing power needed to operate on that set of data). Give it some more time and the whole internet will become a peer-to-peer network (Diaspora, who the fuck knows what else). Give it even more time and we'll have huge servers again for unforeseen reasons (oh how I wish I was a prophet).
Or maybe we'll realise that we're repeating the same pattern and come up with a completely novel concept upon one of the iterations terminating. It's not really a cyclic universe, it's more like a spiral. I guess that this is where the singularity occurs.
Security. Portability. Privacy. Reliability. All of these are issues that present problems (non-insurmountable in the long run, but problematic in the short to mid-term).
Fundamentally, I'd argue that many users prefer to have their apps and data locally, at the moment.
In many cases, this is due to government or industry regulation – millions of HIPAA-covered users/researchers will likely not move their data/apps into the cloud.
With the growing trade in industrial espionage and intellectual property theft, I fully expect that many companies will continue to be reluctant to put their data anywhere that will make it less secure.
Similarly, there will continue to be users who want to work in places where they do not have continuous or fast network connectivity (planes, trains, automobiles, boats, etc.)
Unless browser-based apps can offer local data storage and offline functionality, I'd argue that we're at least a decade away from such a mass migration to online services.
I can see several obstacles that will hamper the development of a fully web-native environment. This is in no way an exhaustive list, nor is it necessarily complete or correct within each item, so take it with a grain of salt...
1) Security. By default, the web is untrustworthy. This means we have to treat any incoming web page or application as if it were filled with digital anthrax. The browser quarantines the javascript and HTML. Likewise, the server providing the service has to treat input from the client in the same way to protect against SQL injection and other attacks (a minimal sketch of that server-side handling follows this list). This limits the user to a whitelisted set of features that were deemed safe by the application's programmers. Apps have fewer restrictions, but the parent platform still imposes significant sandboxing to prevent malicious attacks.
2) Platform. While the ecosystem for Apps and web applications is indeed varied, they depend on the complex abstraction of the underlying platform. In the case of the web and app platforms, this is the web browser and tablet/phone OSs. In the case of the latter, that means the developer is beholden to "The Powers That Be" in order to get their app accepted. This works better on the web, but to get that freedom we sacrifice access to the underlying OS because of the security problems.
3) Persistent Connection. There is no such thing as an offline web app. While tablets/phones can enjoy a disconnected existence, most rely on the assumption of constant connectivity, which is in no way guaranteed. I ran across this issue driving across Texas, where there are HOURS of no signal on Interstate 10.
4) Data ownership, persistence, and portability. This is a nebulous area that is very important for anybody dealing with intellectual property. Most websites' TOS claim they own your pictures and content, and many apps either don't save your data or keep it in a proprietary format that can't be accessed outside the app. Portability and persistence are linked because we are dependent on these services being always up. What happens when something crashes or the company goes under?
5) Money. Ultimately, servers and developers cost money. A service is a recurring cost. The user pays for this, one way or another, through subscriptions or advertisements. Apps can be immune to this if they function entirely independently of the internet, but there would still be a cost, if only to support the developer.
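To make the server-side half of point (1) concrete: the standard defence is to bind client input as a parameter rather than splicing it into the query string. A minimal Python/sqlite3 sketch (the table and columns are made up):

    import sqlite3

    def find_user(conn: sqlite3.Connection, username: str):
        # The username is bound as a parameter, never concatenated into the SQL,
        # so input like "'; DROP TABLE users; --" stays inert data.
        cur = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
        return cur.fetchall()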
Games you have to install are better than browser games. As long as that's true, computer users will be comfortable with the idea of installing applications, because they did it all the time when they were kids.
Of course, the same thing is true of word processors, CAD applications, IDEs, photo editors, and ... basically everything but e-mail. Why do I want to use web apps again?
What is your reasoning? I personally think it's silly that we still carry around all our files on a disk drive in our hands. If we lose that, we lose everything. If I don't have my laptop with me, I can't do my work.
I do most of my work in the terminal and the browser. If I could somehow use a remote terminal with the convenience of being able to open files with image viewers/etc. on my local machine, I would have no need for a local disk.
I've been looking at Chrome OS for this very reason. Apparently it includes a terminal app. I almost never have a program other than a terminal and a browser running, so it would work out perfectly.
That being said, since the Chrome OS devices don't have a full suite of command line tools, I'd need Internet access to attach to a VPS. Current cellular data pricing makes this a sketchy requirement for me.
X forwarding is just too slow
I don't want to wait 300ms or more for my input to generate a response. HTML + JS serves the same purpose as X forwarding -- your application is hosted on a remote box and displayed in your local browser -- with the exception that javascript allows small amounts of local computation to be performed when it makes sense.
You know what I like about thick apps? They scale like a boss. Add a user, add (at least) one CPU and a whole lotta RAM to the cluster, for free. I spend a lot of time looking at New Relic graphs thinking "why didn't we build this as a thick app"?
Then I remember the recurring web revenue model.
(My bet is on a convergence of web apps and thick apps, but I think it will be thick apps adding cloud-like functionality rather than HTML5+javascript.)