One of the great things about browsers is the way you can trust them to be reasonably sandboxed from the rest of your machine. Features like this chip away at that trust model and open up new attack vectors against unsuspecting users. I get the usability benefits, but the security implications are scary. How easy would it be to trick my mother into uploading her profile picture and have the site change that file into a malicious executable without her realizing? At a minimum, I'd hope you would be restricted from changing file permissions or editing any executables.
As you surely noticed, for the last 10 years the browser has been actively trying to replace your desktop OS, so that the browser becomes the platform instead of Windows / macOS / Linux / anything else. Imagine the desktop becoming ChromeOS on any computer, not just Chromebooks.
For this, the browser must make all the traditional OS interfaces available. It has interfaces for accelerated 2D and 3D graphics, audio, USB, process management (workers), networking (including peer-to-peer), and a number of persistence mechanisms. But before it takes over completely, it has to interoperate with the host OS a bit, too, by exposing the filesystem.
(A decade more, and maybe the vision of Inferno [1] will sort of kind of be implemented.)
Am I the only crusty old fart who thinks this is a lousy trend?
I worry the next talented generation of developers is being alienated from the desktop, and as a result my computing experience will degrade as Chrome's piecemeal attempts to reinvent the OS leave me with an inferior selection of browser-based toys that are awkward and clunky compared to their native counterparts.
I grant the browser fixes a couple things my OS got wrong: installation/uninstallation and updates. But it feels like it does everything else worse.
You're not alone. In another 20 years, you'll get to laugh tragically as a new generation of naive young techs invents yet another layer of abstraction on top of the browsers and gushes about how this time it will finally be a universal platform. And your old laptop, with its 2 TB of RAM and 50-core CPU, might even still be able to run a text editor with only a few seconds of latency.
And it goes without saying that, for your own safety, none of these platforms will run code that hasn't been delivered by a trusted and approved corporate appstore.
No, I hate it too. As a consumer, I wish to do my computing on my machine and not on a SaaS provider's machines, having to share personal data with them when there's no technical reason to do so.
I think the main driving forces on this are:
1. It's easier for computer-illiterate people to use web services, since they don't have to go through an installation process.
2. It's easier for developers to support their users because they don't have to worry about their users' computer environments, which is largely out of their control.
3. It's also harder/impossible for their users to pirate their software, since it never runs on their machines.
I think the biggest reason (and the one that's hardest to fix) is the piracy one.
> 3. It's also harder/impossible for their users to pirate their software, since it never runs on their machines.
I think your second point is actually the dominant one, but you're not wrong about this either. In some sense, we get the computing landscape that we (collectively) asked for.
But why is it always necessarily "awkward and clunky"? As always-on network connections become more and more ubiquitous, internet-connected experiences get better and better. Just look at (well-designed) experiences today like streaming video.
I'm excited about things like WASM that will allow browser-based experiences to close the gap more and more with native apps. The web infrastructure is extremely powerful and has a fairly low barrier to entry so lots of people throw crappy experiences up, granted, but it's generally a good thing to have the resources so accessible.
The web is awkward and clunky because it's a document markup format with bad application APIs layered on top.
WASM solves certain problems like arithmetic being slow. This doesn't fix the web's problems; it merely enables a new class of bad apps to be written on the web.
The best streaming video services are not web based. Thank goodness Netflix and Hulu have native iPad apps to spare me their websites.
> As always-on network connections become more and more ubiquitous, internet-connected experiences get better and better. Just look at (well-designed) experiences today like streaming video.
As not everyone even has a network connection, I find this kind of viewpoint outright dangerous because it leaves out hundreds of millions of people. Streaming video, in particular, seems to choke if you don't have the absolute best connection.
Is it really surprising that, in the wake of multiple platforms that refuse to interoperate, an OS that runs on top of every other OS is massively popular?
It doesn't matter that it's clunky and bad (CS people love working with hard-to-use systems); it just matters that it works.
The websites google.com, facebook.com, and twitter.com (as distinct from their mobile apps) are very popular, but for the most part these sites just render text and images.
google.com doesn't try to search my local drive. facebook.com doesn't offer to share my local photos. Their usage of "browser as an OS" is extremely limited.
I worry about the other problem: the browser becomes so ubiquitous the OS disappears, along with any platform conventions and application interop.
I would argue that you could already use a combination of local storage and global state in these web apps to approximate a file system. Not to mention lazy loading and web workers to cache larger files needed to run a web app, as well as the parallelism and concurrency already offered by web workers.
Approximation is not what it's about. It's about integration. Write that Google doc and store it as a local .docx or .odt. Edit that 20 MB raw picture in a browser app and store it back to disk, instantly, then feed it to a traditional desktop editor. Write that Python script in a web IDE, store it locally, and upload it to an Arduino.
All those use cases and more can be accomplished if the Write API is limited to a sandboxed local storage with a configurable size limit per domain. I would hate to see all the gains made by sandboxing the browser fly away by allowing it to access any file on the user's system.
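To make the sandboxing idea concrete, here is a toy sketch of a per-origin store that refuses writes past a configurable quota. Every name here (`SandboxedStore`, the method names) is invented for illustration; real browsers implement quota enforcement natively in their storage layers, and a real quota would count bytes rather than string length.

```javascript
// Hypothetical per-origin sandboxed store with a configurable quota.
// Length-based for simplicity; a real implementation would count bytes.
class SandboxedStore {
  constructor(origin, quota) {
    this.origin = origin;
    this.quota = quota;
    this.files = new Map(); // filename -> contents
  }

  used() {
    let total = 0;
    for (const data of this.files.values()) total += data.length;
    return total;
  }

  write(name, data) {
    const existing = this.files.get(name) ?? "";
    // Overwrites release the old file's space before the check.
    const newTotal = this.used() - existing.length + data.length;
    if (newTotal > this.quota) {
      throw new Error(`quota exceeded for ${this.origin}`);
    }
    this.files.set(name, data);
  }

  read(name) {
    return this.files.get(name);
  }
}
```

The point of the sketch is that a site can fill its own bucket but can never touch anything outside it, which is exactly the property the parent comment wants preserved.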
I think the idea here is that you'd whitelist files/folders on your actual filesystem for use with the app. This sounds fantastic for me. There's so many useful web based tools that are a pain to work with because you have to manually sync files to and from them.
I'd agree with the 10 year timeframe. Before that browsers were basically aiming for HyperCard. It was only after the combination of the WhatWG and the emergence of the app store as an ecosystem threat that the browser vendors have pushed the browser towards OS replacement.
Sorry, the 90s were a real time, and the ambition to replace the desktop OS was already there, as per the ~1995 Bob Metcalfe quip (popularized by Marc Andreessen) that the Netscape browser would soon reduce Windows to "a poorly debugged set of device drivers".
There was a bit of a lull in progress in the mid-2000s, after the fall of Netscape Inc, until the rise of the mobile app threat you mention.
Bill Gates did see Netscape as a potential threat to Windows. Java with Sun's "the network is the computer" motto as well. So yeah, people in the 90s were dreaming big. And then in the 2000s people were complaining that Microsoft set computing back by a decade.
You get access to this if you have written a Chromebook app. If you get a user to install that, there is a clear set of permissions they are accepting. See the section on permissions, which is specified in the app manifest. It is not an open hole for anything running on a Chromebook.
> What if websites start requiring specific files to exist before allowing access?
This is certainly going to be immediately abused for encrypted storage of persistent cookies and tracking identifiers.
- If my site generates cat pics, I'll put identifiers in the metadata fields of the image format.
- If my site generates markdown, I'll put identifiers in an alternate data stream (Windows) or encoded in the whitespace.
Since the files exist outside of the sandbox, they'll be outside of the scope of privacy features like clearing the cache or cookies, and outside of the reach of adblocker extensions.
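The whitespace trick in particular is trivial to implement. Here is a toy sketch (my own illustration, not taken from any real tracking script) that hides a 16-bit identifier in the trailing whitespace of a generated document, a trailing space encoding a 0 bit and a trailing tab a 1 bit:

```javascript
// Toy whitespace steganography: hide a numeric ID in trailing whitespace.
// A trailing space encodes a 0 bit, a trailing tab encodes a 1 bit.
// Purely illustrative -- not any real tracker's code.

function embedId(text, id, bits = 16) {
  return text
    .split("\n")
    .map((line, i) => {
      if (i >= bits) return line;
      const bit = (id >> i) & 1;
      return line + (bit ? "\t" : " ");
    })
    .join("\n");
}

function extractId(text, bits = 16) {
  const lines = text.split("\n");
  let id = 0;
  for (let i = 0; i < bits && i < lines.length; i++) {
    // A trailing tab reads as a 1 bit; a trailing space (or nothing) as 0.
    if (lines[i].endsWith("\t")) id |= 1 << i;
  }
  return id;
}
```

Sixteen lines of "innocent" markdown are enough to carry a 16-bit ID, and it survives any editor or viewer that preserves trailing whitespace.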
Nobody said the sites would have unfettered access to these files like they do with their cookies. Putting an ID on a photo is useless if you then have to ask the user to read it again the next time.
Also, why would they be outside the reach of adblockers? WebExtensions can already intercept and manipulate the use of certain APIs by sites, the same can easily apply here.
> What if websites start requiring specific files to exist before allowing access?
This isn't just unrestricted filesystem access. The site would need to request file system permission first, _and_ get the user to select the file in a file picker.
You could set the whole directory tree to no-execute via ACLs.
Anyway, more interoperability with the native environment is a good thing. Right now browsers are an alien implant in every operating system that can't communicate with anything else.
> The primary entry point for this API is a file picker, which ensures that the user is always in full control over what files and directories a website has access to. Every access to a user selected file (either reading or writing) is done through an asynchronous API, allowing the browser to potentially include additional prompting and/or permission checks.
I'm glad they are taking a security-focused approach from the beginning of the design phase. I expect no less from Google. However, I worry that the approach of putting the end user in charge of approving/denying access to resources puts an unreasonably high burden on users. Many people simply click to dismiss dialogs without even reading them, let alone thinking carefully about what they are agreeing to. This is asking for trouble. I would rather see a sandbox approach that restricts the locations and types of files that can be read and written.
In the main case, where the user is picking a file, the API as proposed in the examples[1] doesn't seem to suggest any ability to pre-select a file in the displayed file picker. Thus, this is a "dialog" where you literally cannot just press "OK"—you have to either select something to open first, or hit "Cancel"!
On the other hand, the last example (FileSystemDirectoryHandle.getSystemDirectory) might devolve into the sort of "user presses yes before even reading" permissions system that is so troublesome today. But I feel like the kinds of directories exposed through such a mechanism would likely be fairly innocent (the proposed example being the user's set of installed fonts.)
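The flow the explainer describes looks roughly like the sketch below. `chooseFileSystemEntries` and the handle/writer shapes follow the proposal's examples as I read them, but the API is unimplemented and subject to change, so treat every name as tentative; a tiny in-file stub stands in for the browser's native picker so the sketch runs anywhere.

```javascript
// Sketch of the proposed picker-gated flow: the user must select a file
// before the site can read or write it. `browserStub` stands in for the
// real browser, which would show a native file picker; the user's choice
// is itself the permission grant.
const browserStub = {
  async chooseFileSystemEntries() {
    let contents = ""; // backing storage for this fake file
    return {
      name: "draft.txt",
      async file() {
        return { async text() { return contents; } };
      },
      async createWriter() {
        return {
          async write(position, data) { contents = data; },
          async close() {},
        };
      },
    };
  },
};

// A site can only save through a handle the user handed it via the picker.
async function saveDraft(picker, text) {
  const handle = await picker.chooseFileSystemEntries();
  const writer = await handle.createWriter();
  await writer.write(0, text);
  await writer.close();
  return handle;
}
```

Note that there is no way in this flow for the site to name a path itself, which is what makes the "dialog you cannot just OK through" property hold.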
> Not allowing websites to write to certain file types such as executables will limit the possible attack surface.
The problem with that is there is no hard distinction between a data file and an executable. Essentially every kind of file, when opened, causes code to be executed, usually significantly influenced by the content of the file.
This API is moving the security perimeter back, making it the responsibility of every local app that may open a file touched by this API — essentially every file it might open — to ensure unexpected/malicious content can’t do anything bad. That’s not going to happen. Instead, users are going to have to be “sure” not to open files via this API using sites they don’t trust. That’s pretty weak compared to what we’re generally used to today (which doesn’t seem strong enough already).
I realized PWA's potential was on desktop, not really on mobile.
I came to realize that PWAs are really missing two things to change the desktop for good:
- Sandboxed FileSystem API
- Standard Operating System Support
Most apps like Slack, Discord, or Twitch can barely justify their usage of Electron... besides the possibility of having those apps in a separate OS window, they don't make extensive use of Node.js the way VS Code does when it spawns child processes.
They just sit there in a separate OS window, and they can launch at OS startup too... but in order to have those two features you must completely give up on security and give access to almost everything on your computer...
PWAs really bring a new dimension to desktop apps. If Windows and macOS brought native support for PWAs (using Edge and Safari, without the need for Chrome), this would change the industry forever.
Problems with PWAs:
Native access, browser versioning, and separation of processes.
1. If, when you ran a .NET application, all .NET applications shared the same parent process, that'd be a significant problem.
2. Many Electron clients pin their Chromium versions because of bugs, quirks, removed features, or other changes between Chromium releases. Discord, last time I checked, was still on 56.
3. PWAs do not have native access. Discord requires that to spawn IPC communication with game overlays, which run as another native process.
PWAs are meant more for mobile, but even there you are, of course, limited.
Firefox and Safari would change the world if they actually cared about doing that. They're about market share, not about the technology anymore. Firefox was in a position to make it possible to simply ship the Firefox binary with your own JS and CSS changes, effectively giving you the full Firefox browser as your 'app'. That means it could be triggered to auto-update like normal Firefox but still have your UI and act as a browser.
Firefox abandoned that route and instead is focusing on whatever Chrome implements, like Custom Components. Safari also doesn't care about anything but Linux and Mac.
Agree that PWAs have a lot of potential on Desktop. But they're also really good on mobile. Both Facebook and Tinder have PWAs that I prefer to their native apps...
Is this API needed? I understand the benefits, but web apps or Chrome extensions that _really_ need file access can already use the Native Messaging API and ask the user to install a native app, like here:
The advantage of an explicit app installation is that users/companies that want to avoid such file access (and the many other things that native apps can do) can simply avoid/block app installations.
In the last two decades users have learned that "as long as it's inside the web browser, it's safe". If web apps themselves become too powerful this security approach breaks down.
I don't know that native apps are the best resolution, but I absolutely agree with the last paragraph. There needs to be really clear user messaging around this type of access. The current state is, IMO, insufficient.
Relying on browser features for native access will be trouble. Google could revoke the feature from Chromium at any time. And in any real application you will need to interact with plenty of folders users cannot be expected to know about, like AppData.
> And in any real application you will need to interact with plenty of folders users cannot be expected to know, like Appdata.
I think a lot of what you'd store in Appdata in a "real" application will remain in browser storage or pulled from the server. A single file or folder with app settings, drafts, etc. should suffice. And even settings are probably better stored server side for user convenience so they can easily switch devices.
3. Native will allow for features from the kernel that browsers do not implement.
4. You cannot plan for how people will use a feature as broad as the filesystem. Saying 'X and Y should fit' requires a significant amount of honest research and testing, with a very good reason as to why the change is necessary.
If they are shiny new features, absolutely. Browser features, especially from Google, are notorious for being experimental and for being removed whenever the vendor wants. Browsers in general will remove things unless their internal teams find use for them. Browsers do not want to support more features than they have to, as can be seen by subscribing and listening in on the Firefox mailing lists.
However, your equivalence in this case is a completely false one, because filesystem APIs are not going to just be 'removed', as that would break literally every application that uses standard fopen. Over 20 years of standardization is a pretty good boat to stand on unless there are signals that say otherwise.
Well sure you shouldn't rely on anything new for core aspects of your application without a plan for what happens if they go away. Especially when the API in question isn't even implemented yet...
But that's not how I understood your comment above, which was to not rely on any browser features for native access, which IMO isn't good advice.
Sure, you shouldn't go out and build a business around the "Web Writable Files API" tomorrow, but if it gets standardized, and if it gets implemented in a few major browser engines (which is a requirement for standardization in this context), then yeah go ahead and start to rely on it more.
But this has very little to do with these new proposed APIs, and more with general software development.
I agree. This is one of the biggest things some of my applications have needed Electron for. WinJS has a similar API to the proposal here (it's picker-oriented to keep it somewhat sandboxed), which opens up options for Windows-only support in a PWA, but it would be great to see a more web platform-wide supported approach.
This could be a great way to allow Web apps to talk to native apps. At Folding@home our client software uses a Web frontend. This frontend talks to the native Folding@home app at localhost over a socket. The problem is that browsers are starting to show security warnings for any non-https traffic and it's not possible to securely give the native app a valid SSL certificate. Reading and writing shared files would let the frontend talk to the back end more securely. Allowing the Web app access to a directory, with disk space restrictions, would be ideal.
You can use the Native Messaging API. Password Managers used to use a localhost listener, but it's less safe than Native Messaging.
But if you stick with the HTTP listener, as someone else mentioned, you can edit the user's hosts file while installing your app to add a domain name that resolves to the loopback adapter, and then you can also create a one-off SSL cert for that machine.
Hi there. Sorry I'm a bit late, but Firefox definitely supports native messaging with their new WebExtensions architecture; the extension I use with my Estonian ID card uses native message to talk to a native component, which in turn talks to my card.
Since when does a desktop application need to ask the user anything to install a cert programmatically? Enforced desktop sandboxing hasn't really taken off.
But then why do you need the DNS/hosts file hack? You can issue a self-signed cert to localhost/127.0.0.1, put it in the user’s trust store and call it a day.
That bug is stuck on not breaking exactly this kind of behavior. They cite Dropbox in that thread (though I haven't read the followups yet), who has www.dropboxlocalhost.com resolving to 127.0.0.1. It sounds like Spotify does something similar.
It's a special case, but HTTPS-specific features don't always work. In this case it's an HTTPS website talking to a local HTTP service, which causes the cross-site errors.
I hope this can be made to work securely. The main thing holding back web-based IDEs is file system access. It wouldn't be horribly difficult to port e.g. Emacs to Javascript (or more likely WebAssembly) if file access was available. I know this sounds ridiculous, but it would be doable.
In a lot of use cases, all you really want is a simple way to read and save a small blob of data.
But to achieve this today you would have to do one of:
- Save it in local storage, but lose their data if they clear it or use another device.
- Build out a REST API backend, but need to also build auth and account management.
- Use a backend as service such as firebase, but still costing you to store the data.
- Use a file-picker API to save to the user's cloud storage. For example, Google Drive has a file picker: https://developers.google.com/picker/. But not everyone uses the same cloud drive (iCloud, Dropbox, OneDrive, etc.).
With local file access, it's more or less the same as the file-picker solution, except users are also free to save to a non-syncing file location if they choose.
Also it's too opaque so the typical user won't understand that clearing their browser data means losing their work. They have already been trained by other sites to believe that their data should be persisted to the site's servers and that they should see their data even if they open the site on a different device.
Forcing them to choose where to save the data should hopefully clear up where their data lives. If they need multidevice they can sync their filesystem folders with one of many third party cloud storage of their choosing (such as dropbox).
I think it's a perfect fit for web apps like photopea.com (a Photoshop clone in the browser) that came up on HN the other day.
This is a great idea. One interesting consideration would be how it would handle symlinks. Would it follow them? Could that be used as an attack vector to exit the user-specified directory?
"The primary entry point for this API is a file picker (i.e. a chooser)."
...so what would be the difference from today, where a user can "upload" files to the browser's local storage, and then the web app can work with the file(s)?
I get that you avoid having to "re-download" the file, but that seems like a small benefit for the risks we take on.
While I'm greatly concerned about the security implications, one should not assume that network access always works or that it is effective at all times. In the United States, for example, bandwidth limitations and caps are rampant, far beyond what is needed, just to sate the greed of our corporations. Given the current political control of the judiciary and executive branches, I don't expect this to change any time soon. We'll likely see more and more limits on our ability to do "cloud" things unless we offer a greater profit than ever before to the incumbent ISPs.
Being able to do something offline is important. I can see how this fits Google's goal of a "Chrome is an OS" model.
It's an offline-first app for managing and annotating PDF and web documents.
The idea was that with Electron I can have direct access to the filesystem and be fully integrated into the browser as well.
For the most part it works. Users can easily browse the local filesystem and also access some operating-system-specific features.
But let me tell you - it's rather schizophrenic to use something like Electron. It just doesn't know whether it's a browser or a web server, and then there's the issue of dealing with process communication within Electron. Chromium runs as its own process with Node as the main process, and you have to communicate via message passing - not super fun.
Now we have PWAs, and they're far from ideal, but it might be that an API like this, along with PWAs and maybe a bit more functionality, could replace Electron for building these types of hybrid apps.
I like the idea of a File API, to make it easier to, for example, search files. But I'm afraid we'll end up with the same problem as with native apps, where apps ask for all permissions even if they don't really need them.
https://bellard.org/jslinux/index.html, besides the whole kernel, actually includes a functional port of gcc! You can compile C files right in the browser, and then use "export_file [file]" to get it to your machine.
Safari 11 or 12 introduced a new extension system where Safari extensions ship as a macOS app with a Safari extension component that can communicate with the native app. And the macOS app has standard Mac Store sandboxing options (by default, no FS access outside of its bundle).
I've written some extensions for personal use and so far it's been a great trade-off.
I’m wondering how useful it is to save files on the local filesystem in 2018.
I can understand syncing Google Drive and saving stuff there. Or Resilio Sync or any number of other solutions such as Dat.
You already have file pickers to open and read files. And you can save files as I indicated - via syncing to a service, where you control what goes where and it's still there if you lose the device. And it can be encrypted, with the local devices holding the keys for syncing.
The only new use case I can think of that this enables is overwriting existing files - which is dangerous!
Even in 2018, not everyone is online all the time, or has a connection capable of uploading large files (due to speed or caps). And frankly, it's just wasteful to upload everything.
Depends how heavily you rely on the cloud. I certainly don't have everything there (and don't want everything there!). Plus, going via Google Drive is slow compared to, say, saving the file directly from the browser and then opening it in a native app instantly.