The technology has existed since the 80s; it's just that the stuff people are building today ridiculously overcomplicates the solution. Classic Mac OS had simple single-file programs that you just dragged and dropped anywhere you wanted on a disk and ran. The same was true of DOS (it was a folder, but otherwise the same), RISC OS, NeXTSTEP, and modern macOS via its NeXTSTEP heritage.

Even Linux has had several: NeXTSTEP-style application bundles in GNUstep, AppDirs in ROX-Filer, and AppImage today. They're just not over-engineered enough for the Linux desktop community to embrace, or something.



What you call “over-engineering”, the developers of Snap/Flatpak call “security.”

Portable app bundles are not the goal here. The goal is portable app bundles that the user can install from arbitrary sources on a whim without putting their OS at risk of malware or their data at risk of exfiltration.

Y’know, like on phones.

But also: since these designs aren't for consumer-electronics appliances with a central vendor, but rather follow the FOSS philosophy, you can't just use a weak sandbox powered by nominal capability manifests signed by a central code-signer, where the app could do arbitrary things outside its declared capabilities and the sandboxing wouldn't catch it (i.e., the macOS approach). Instead, you need a strong sandbox that actually enforces the capability manifest to restrict what the app can do.
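
For a concrete sense of what enforcing a capability manifest looks like, here's a sketch of the permissions section of a Flatpak manifest (app ID and the specific permissions are invented for illustration). Anything not granted in finish-args is simply unavailable to the app at runtime:

    app-id: org.example.Viewer
    runtime: org.freedesktop.Platform
    runtime-version: '23.08'
    sdk: org.freedesktop.Sdk
    command: viewer
    finish-args:
      # Each flag is a capability the runtime sandbox actually enforces;
      # anything omitted (full $HOME, the network, ...) stays blocked.
      - --share=ipc
      - --socket=wayland
      - --filesystem=xdg-pictures:ro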

Oh, and the bundles can update by self-modifying their contents, creating new executable binaries in the process, so your sandbox can’t be based on signing the initial contents of the sandbox, but rather has to sandbox whatever is attempting to run in there right now.

If you can come up with a design that fits those criteria and is simpler than Snap or Flatpak, by all means, share.


" The goal is portable app bundles that the user can install from arbitrary sources on a whim without putting their OS at risk of malware or their data at risk of exfiltration."

This is not now, nor will it ever be, a thing. If you can run software on the host, there will always be a way to compromise the host. It is not even 100% safe to run software in a VM, because escape exploits have been found.

Maintaining security on your phone requires the app store owner to invest time in proactively screening for malware, manually or automatically, and in reactively removing it when found. It also requires users to avoid stuff that looks like scammy bullshit, and software from unofficial sources.

Both of these layers leak, and when they do, automated protections usually leak as well, because malware authors can easily test against existing protections, learn from one another, and distribute what works.


> If you can run software on the host there will always be a way to compromise the host. It is not even 100% safe to run software in a vm because exploits have been found to break out.

That's... inaccurate. More defensible statements would be:

- In a sufficiently complex OS, it is unlikely you will be perfectly safe running untrusted executables even with reduced permissions.

- Popular modern OS desktop distributions are very complex out of the box.

- Sandboxing prevents certain classes of attacks effectively, but should not be relied upon as a sole line of defense.

- Vulnerabilities have been found in VMs and containers, but they afford greater protection and isolation than running a process directly on a host system.

IOW, don't go No-True-Scotsman on security. A vulnerability does not invalidate all benefits of an architecture.


> If you can come up with a design that fits those criteria and is simpler than Snap or Flatpak, by all means, share.

So, a well-packaged app utilising SELinux/AppArmor profiles?
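
For what it's worth, a minimal AppArmor profile along those lines might look like this (binary path and directories are hypothetical; AppArmor denies by default anything the profile doesn't allow):

    # /etc/apparmor.d/usr.bin.exampleapp -- hypothetical example profile
    #include <tunables/global>

    /usr/bin/exampleapp {
      #include <abstractions/base>
      # Read-only access to the app's own data...
      /usr/share/exampleapp/** r,
      # ...read/write in one dedicated directory, nothing else in $HOME.
      owner @{HOME}/Documents/exampleapp/** rw,
      # No 'network' rule, so no network access.
    }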


What SELinux/AppArmor profile would allow an application to read exactly the one file in $HOME that the user selects in the file picker, but nothing else?
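
(For reference, Flatpak's answer is to take that decision out of the static profile entirely: the sandboxed app asks an out-of-process portal service to show the file picker, and only the file the user actually picks gets mapped into the sandbox. A rough sketch of that D-Bus round trip in Python, assuming dbus-python, PyGObject, and a running xdg-desktop-portal; real clients also pass a handle_token and subscribe before calling, to avoid a response race:)

    import dbus
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    desktop = bus.get_object('org.freedesktop.portal.Desktop',
                             '/org/freedesktop/portal/desktop')
    chooser = dbus.Interface(desktop, 'org.freedesktop.portal.FileChooser')
    loop = GLib.MainLoop()

    def on_response(code, results):
        # 'uris' holds only what the user picked in the trusted,
        # out-of-process picker; the sandbox is granted access to
        # that file and nothing else in $HOME.
        print(results.get('uris'))
        loop.quit()

    # OpenFile(parent_window, title, options) returns a Request
    # object path whose Response signal carries the result.
    handle = chooser.OpenFile('', 'Open a file',
                              dbus.Dictionary({}, signature='sv'))
    request = bus.get_object('org.freedesktop.portal.Desktop', handle)
    dbus.Interface(request, 'org.freedesktop.portal.Request') \
        .connect_to_signal('Response', on_response)
    loop.run()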


It isn't really that hard: just give the user the ability to determine if and how an application is sandboxed. Then it doesn't matter if the binary changes during an update; the user's level of trust in the vendor who provided that update has not changed (else they'd have disabled updates), so there's no reason to change the sandbox permissions. You only really need to sandbox stuff you're unsure of, or stuff that misbehaves but you need to use anyway.
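
That level of control already half-exists with today's tooling, for what it's worth. A sketch (app ID and paths hypothetical):

    # Tighten an already-sandboxed Flatpak app: drop its network and
    # home-directory access, per-user, without touching the app itself.
    flatpak override --user --unshare=network --nofilesystem=home org.example.App

    # Or wrap an arbitrary untrusted binary yourself with firejail:
    # no network, and a throwaway directory in place of $HOME.
    firejail --net=none --private="$HOME/sandbox" ./untrusted-app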

This condescending idea that users can't be trusted to determine this stuff for themselves, and that we therefore need a centralized signer and a pile of complicated management frameworks to deal with it, is part of the reason the FOSS philosophy has yet to produce a desktop anyone cares about.


I can determine whether an app is trustworthy, sure. But you know what? Sometimes I actually want to install an untrustworthy app. Sometimes an untrustworthy app is the only app that does what I need.

Your argument, by analogy, is “you should trust people to know not to sleep with people with STDs.” Well, you know what? Some people want to sleep with people with STDs. Sometimes those are their significant others. They still don’t want to catch something.

In both cases, the answer is the same: a condom.

A sandboxed App Store is, basically, a brothel where condom use is enforced. You can meet strange apps, play with them, and not worry about it. Because of the brothel’s policy, nobody the brothel hosts is risky. Your safety is enforced at the level of choosing the source.

Whereas something like Ubuntu's PPAs is more like a bar. Who knows what you'll catch? Any individual app might decide to "wrap it up" with SELinux/AppArmor, but you can't enforce it at the app-store level.

(Also, completely dropping the metaphor: the iOS App Store is frequently handed, at least for free purchases, to children or even infants. This is actually a capability people want. It is certainly not a case where the user can determine for themselves whether an app is trustworthy.)


>I can determine whether an app is trustworthy, sure. But you know what? Sometimes I actually want to install an untrustworthy app.

Yeah, that's why I said this:

> You only really need to sandbox stuff you're unsure of, or stuff that misbehaves but you need to use anyway.

I'm totally for sandboxing, at the discretion of the user. It doesn't require complicated infrastructure to do this.


Apple, which employs over 100k people, has employees create automated tests and manually review apps for inclusion in the store, which requires a $99 fee for a developer to access.

A potential malware author must pay $99 to submit apps and get them through certification. If an app is detected as malware in review, that $99 is burned and will have to be spent again, potentially repeatedly.

This is probably why there are millions of infected Androids and comparatively few infected iPhones.

Red Hat, with 10% of the staff and 1% of the annual revenue, doesn't require any fee to release applications. That probably doesn't scale to Android or iOS proportions unless your sandboxing is perfect.

Unfortunately, there is no 100% safe way to fuck disease-ridden whores, and no 100% safe way to run malware-ridden apps. This is a dangerous fiction and an unworthy goal.


To be clear: clients run “malware-ridden apps” safely every day. They’re web apps. Web browsers are actually competent sandboxes. (Even PNaCl worked fine, despite nobody wanting to use it.)

Likewise, servers run “malware-ridden apps” every day as well. Do you think AWS or GCP is getting its infrastructure infected when customers run their arbitrary code on it? No. Not even on the shared clusters like Lambda/Cloud Functions. These are competent sandboxes.

There are numerous other examples: running everything from user-supplied DFA regexps, to SQL queries on shared servers (complete with stored-procedure definitions), to arbitrary Lua code run server-side in an MMO.

We programmers know how to (automatically!) sandbox arbitrary untrusted code. We’ve done it successfully, over and over. We just haven’t done it for GUI desktop apps yet.

That has much more to do with the legacy architecture of these GUIs than with any inherent problem in sandboxing desktop GUI apps.
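
To make the principle concrete in miniature: the basic move is always "run the untrusted thing in a separate, constrained context." The toy Python sketch below applies resource limits only; it illustrates the shape, not a real sandbox, which would add namespace and syscall filtering on top (the untrusted.py path is a stand-in):

    import resource, subprocess, sys

    def constrain():
        # Applied in the child just before exec: cap CPU at 1 second
        # and the address space at 64 MiB. These are limits, not
        # isolation -- a real sandbox layers much more on top.
        resource.setrlimit(resource.RLIMIT_CPU, (1, 1))
        resource.setrlimit(resource.RLIMIT_AS, (64 << 20, 64 << 20))

    # '-I' runs CPython in isolated mode (ignores environment and
    # user site-packages); 'untrusted.py' is a hypothetical script.
    subprocess.run([sys.executable, '-I', 'untrusted.py'],
                   preexec_fn=constrain, timeout=5)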


Hackers bypass the $99 fee by releasing infected Xcode tools and having lots of individual developers unknowingly submit infected apps for approval. https://en.wikipedia.org/wiki/XcodeGhost


The FOSS of today is as much driven by buzzword bingo as the big corporations are (perhaps because so much of it is made by people working for big corporations nowadays). And the really big buzzword in FOSS these days is containers, in large part thanks to the massive presence of cloud-derived thinking (I have been told that basically all that mattered was the cloud, as that was where the usage was).



