Since several web features are disabled with Lockdown mode enabled, I wonder what measures Apple is planning to implement to defeat (at least to some extent) fingerprinting attempts to detect the people/devices using Lockdown mode while browsing.
> If you can’t stand the impact on performance or image rendering, well, maybe Lockdown isn’t for you. Apple claims only a tiny fraction of users will need it, though I’d argue an awful lot of users will want it.
Of course, I want it! (I already go through many other inconveniences for privacy and security).
> Should You Turn it On?
> Yes. Seriously. Turn it on when you have a supported OS and don’t look back.
Amen! I’ll be telling some laypeople to turn it on and try it out (along with instructions on how to turn it off selectively or completely).
From that comment: “If Apple is logging if this feature is on and sending it back to Apple, it will result in targeting from nation states even if this feature is “invincible” - which I have no reason it is; basically, nation states demand list of users subject to its jurisdiction.”
Obviously there are likely other ways to fingerprint Apple devices with Lockdown Mode on, but to me, at the point you need "lockdown mode," you should probably realize that enabling it will likely make you more of a target.
I think one reason to make this feature public is to get more people to use it, and therefore dilute Lockdown Mode as a signal. As you say, it’s pretty easy for an attacker to detect this mode: with a browser, just check that the Safari version is high enough but that certain features are not available. If even 1% of iPhone users are using Lockdown mode, it’ll far exceed the number of people who really need the feature to stay ahead of nation-state targeting.
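The detection described above can be sketched in a few lines of client-side logic. This is a hypothetical illustration, not a known detection script: the version threshold and feature names are assumptions, and in a browser the `env` object would be filled from `navigator`/`window` probes.

```javascript
// Hypothetical sketch of inferring Lockdown Mode from a web page.
// Inputs are passed in explicitly so the heuristic itself is plain logic.
function looksLikeLockdown(env) {
  // A current Safari version paired with several normally-universal
  // features missing is the suspicious combination.
  const modernSafari = env.safariMajorVersion >= 16;
  const missing = ['webgl', 'wasm', 'webrtc', 'jitFastJS']
    .filter((name) => !env.features[name]);
  return modernSafari && missing.length >= 2;
}
```

In a real page, `env.features.webgl` might come from checking whether `canvas.getContext('webgl')` returns null, and so on for each probed feature.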
If that were the case, Apple would have just offered end-to-end encryption for iCloud, which they did not; turning off iCloud is also not a default configuration of the new Lockdown Mode, which it should be.
The issue is that automatically targeting users would be easy.
If Apple tracks users that have both Lockdown Mode and iCloud on, all a nation state with jurisdiction has to do is request a list of users with both on; having Lockdown Mode on might even qualify as justification for a search warrant to legally hack anyone using it, which is already the case for Tor:
I find it horrifying that Apple has this feature but makes no effort to inform users about the risks of iCloud. In my opinion, if you have Lockdown Mode on, iCloud should not be an option; it should trigger an offboarding from iCloud and wiping of any data on iCloud. I also pointed this out in the comments here:
I think people here are misinterpreting the point of this feature. This isn't a feature for people to gain privacy against the nation-state they reside within the legal boundaries of.
The point of this feature, is to protect you, who live and are an upstanding citizen of [a country that is in the same vague "Western" political network as Apple itself] — but who have something that other nation-states want, like trade secrets — from APTs launched across the internet by cyber-privateers tacitly sponsored by those other nation-states.
Under such a threat model, who cares if Apple has your fingerprint, and if the US government can get said fingerprint? If you're a US citizen and in this situation, the US government very likely already has a close working relationship with you, having likely tasked the NSA to work closely with you to ensure that your "key industry" company doesn't suffer any GDP-damaging attacks.
Offer a spoof mode, make the Lockdown mode browser look to external websites like it isn't in Lockdown mode. Tricky but doable with some site breakage that can always be fixed by disabling Lockdown mode for sites a user trusts.
Convince as many people to use Lockdown mode as possible. I, for one, don't see any reason NOT to enable Lockdown mode on all my devices. Do you need iMessage URLs sent by randoms to load remote content without your consent?
Above all, let's begin to consider signed web content.
Have you ever studied fingerprinting, read the linked post that's the subject of this thread, or looked into how prior advanced targeted attacks using fingerprinting worked?
As is, without even researching it, it appears very likely that Lockdown Mode is easy to fingerprint via a browser, based on the information shared in the linked article. Spoofing functionality that is off is not a common thing, and it would be very hard if not impossible to pull off against a challenge-response-like counter-measure from the attacker to confirm the functionality is actually accessible to the end user.
How realistic is an "advanced fingerprinting attack", though?
I think the more realistic threat model here is presented by ad networks and major websites doing typical types of browser fingerprinting, like canvas, fonts, etc. as well as possibly some of the techniques mentioned in the article here, like webGL, JIT JS, etc.
In the case of a limited number of trusted sites that we focus on ensuring compatibility with, spoofing is easier, because we can pay a lot of attention to ensuring that our "middleman" fixes the errors introduced by spoofed client-to-server communications.
Some technologies like WebGL will simply never work on a spoofed site, of course. But for the very limited number of sites where users lose important functionality, they can just turn off Lockdown mode.
If a Lockdown'd phone habitually patronizes malicious websites, the protection will never be enough anyway. So we shouldn't worry about protecting against being fingerprinted by a very malicious website - Lockdown users must simply avoid these, with or without a fingerprinting vulnerability!
Sorry, but I don’t understand what, technically, you’re describing.
If you’re suggesting Apple should proxy all internet traffic to devices — that is a horrible idea, incredibly dangerous, and a huge step in the wrong direction. To counter the issues I pointed out, Apple would literally have to be able to decrypt all the traffic and act as if they were the user, which is obviously an insane security issue.
As for avoiding malicious websites, again, I don’t believe you understand what advanced attacks look like. Any site can be hacked, and if it is, fingerprinting can be used to attack only a very well-defined, known list of targets. For example, a very well known CEO of a security startup used a limousine service; once this was discovered, the service was hacked and used to launch an attack against them.
I understand you’re interested in the topic, that’s great, but try to balance your technical familiarity, familiarity with the topic, and the very real threat security breaches pose to a very small subset of the world. These features are not intended to counter ad companies, but attackers that in the worst-case situation will ultimately kill the target.
> If you’re suggesting Apple should proxy all internet traffic to devices — that is a horrible idea, incredibly dangerous, and a huge step in the wrong direction. To counter the issues I pointed out, Apple would literally have to be able to decrypt all the traffic and act as if they were the user, which is obviously an insane security issue.
I wasn't suggesting proxying anything, just that the browser should attempt to correct errors that it introduces into page rendering when it spoofs feedback to the server.
And again, is it a realistic threat model to imagine that a high volume website, trusted enough to be browsed regularly by Lockdown-paranoid users, will be hacked in such a way as to deliver a fingerprinting attack to browsers, and only that?
I appreciate the sense of superiority that you have, but try to follow along.
If I had a sense of superiority, why would I even be taking the time to attempt to understand what you’re saying? It makes no sense.
The device has the features turned off because they are known to be hard to harden against attacks or, worse, have known vulnerabilities. To spoof them being on, a proxy would have to isolate requests to the functionality that’s off, send them to another device, and accurately respond as if the functionality were on, including handling specifically designed counter-measures from an attacker to confirm the end user had real-time control over the proxied system. It just makes no sense to have such a complex system, and in the majority of situations it would require another device that would itself be vulnerable to attack and would always need to be near the target and the secured device.
>> And again, is it a realistic threat model to imagine that a high volume website, trusted enough to be browsed regularly by Lockdown-paranoid users, will be hacked in such a way as to deliver a fingerprinting attack to browsers, and only that?
Simple answer is yes. Also, it doesn’t have to be a high volume website, just one the target trusts enough to visit.
>Just makes no sense to have such a complex system
It's not that complex, it really can be reduced to what the browser already does: attempts to render web pages best for the display, without full hinting from the server-side.
In the end, what I'm getting at is that browsers should start viewing any page in an untrusted mode, and this mode should dramatically limit available fingerprint features to the most minimal subset that provides an acceptable user experience.
No. The whole point of disabling the long list of functionality mentioned in the article is so that no code is executed via that functionality on the device at all. You are suggesting something that goes against the whole point of turning it off. The browser already operates in an “untrusted” mode. Apple’s iPhone systems and hardware are not designed to be separated. Even if the hardware were duplicated and completely isolated, the secure hardware would be in close physical proximity to non-secure hardware and as a result would be vulnerable to side-channel leaks and/or attacks.
You are also ignoring that challenge-response counter-measures by the attacker would require direct and real-time action from the targeted user; a CAPTCHA is a type of real-time challenge-response that, combined with private information, would confirm that the target is actively using the device being targeted.
If you think you understand something I don’t, that’s fine, but I clearly neither understand what you’re trying to communicate nor agree with what little I believe I do. I have repeatedly attempted to explain why, and you have repeatedly ignored my points. If I have ignored a material point made by you, please explicitly point it out.
My guy, there's a difference between legitimate confusion and this sort of aggressively refusing to get the point.
Clearly I'm not referring to sandboxing or app privileges, I'm referring to how your browser assumes any site that's able to send it some Javascript should automatically expect that Javascript to run, or WebGL, or WebAssembly, or whatever monstrosity.
Fundamentally the web was built with the assumption that any resource loaded was an intentional act by a user, or by a process directly authorized by a user.
Over time, the internet has drifted to a one-protocol town, and that assumption by the designers of the protocol is breaking down as the protocol becomes everything to everyone. Trust boundaries and user controls are NOT evolving in time with the protocol capabilities, and worse, protocol development has largely become a fox guarding the henhouse as the main browser developers ultimately responsible for defining the de facto protocol suite, Google and Microsoft, each vie for advertiser dollars and market ubiquity by transforming web browsers into operating systems.
No longer replying, if you use Apple products, based on your needs, consider looking for other options.
And no, Lockdown Mode should not enable or task users with authorizing blacklisted technologies file-by-file, line-by-line, etc.; I believe some of the off-by-default functionality can be whitelisted per domain, but I might be wrong.
Also, you clearly and repeatedly stated you wanted to simulate running the code to “spoof” the device’s profile to evade fingerprinting — you, not I, are the one intentionally causing issues in this thread, by repeatedly changing your stated intent.
> make the Lockdown mode browser look to external websites like it isn't in Lockdown mode.
This will be instantly defeated by benchmarking JS performance. But disabling the JIT is a VERY important step to harden your browser. This is one of those things where you have to actually choose between privacy and security.
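The benchmarking attack is trivial to sketch. The loop below is illustrative; a real detector would calibrate against known JIT-on and JIT-off timings for the hardware the client claims to be.

```javascript
// Illustrative timing probe: with the JIT disabled (as in Lockdown Mode's
// Safari), a hot numeric loop like this runs many times slower, so comparing
// elapsed time against an expected range exposes a spoofed profile.
function timeHotLoop(iterations) {
  const start = Date.now();
  let acc = 0;
  for (let i = 0; i < iterations; i++) {
    acc = (acc + i * 31) % 1000003; // cheap integer work a JIT optimizes well
  }
  // Return acc too, so the loop can't be dead-code-eliminated.
  return { elapsedMs: Date.now() - start, acc };
}
```

A detector would run something like `timeHotLoop(5_000_000)` and flag a client whose user agent claims recent iPhone hardware but whose timing is an order of magnitude off.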
>This will be instantly defeated by benchmarking the js performance.
How common is this behavior for non-malicious websites that a Lockdown mode user is likely to use? It seems to me that if you're loading malicious content from a site controlled by foreign intelligence services, you're probably done whether Lockdown is enabled or not. Preventing more casual profiling from common logs likely to be strewn about in CDNs, etc. is still an important level of protection, I'd argue.
Normal web pages that load ads will attempt to detect "fraud" by connecting back over WebRTC, running benchmarks to see how "valuable" of a user you are (how shit or expensive your hardware is), and running benchmarks to see whether you might be a fake browser/"ad fraud" user running large amounts of sessions at the same time and therefore have slower performance. It's bullshit and should be illegal.
I already dislike webgl leaking the model of my gpu, concurrency leaking memory and cores available, and disk space.
Go visit Walmart or really any major site - more likely than not it will do this - and watch it attempt to enumerate all of your plugins, connect over WebRTC, enumerate performance.*, msPerformance, mozPerformance, create a WebGL canvas and ask for the unmasked renderer, enumerate thousands of fonts, attempt and fail to spawn piles of ActiveXObjects, use "window.msDoNotTrack" as a fingerprinting feature point, enumerate hundreds of browser functions and getters (maxTouchPoints, doNotTrack, hardwareConcurrency, ...), and call toString() on dozens of specific things like window.RTCDataChannel.toString() to see whether it fails in a try/catch, returns a function, or returns "function RTCDataChannel() { [native code] }" as a string, etc.
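A condensed sketch of that signal harvest. The property names are real browser APIs, but the packaging is illustrative, and the navigator object is passed in so the logic is visible outside a browser:

```javascript
// Gathers a handful of the fingerprint signals enumerated above.
// In a real page this would be called as collectSignals(navigator).
function collectSignals(nav) {
  const rtc = typeof RTCDataChannel === 'function' ? RTCDataChannel : null;
  return {
    cores: nav.hardwareConcurrency ?? null,
    memoryGB: nav.deviceMemory ?? null,      // coarse-grained by spec
    touchPoints: nav.maxTouchPoints ?? null,
    doNotTrack: nav.doNotTrack ?? null,
    plugins: nav.plugins ? nav.plugins.length : 0,
    // The toString() trick: a genuine native function stringifies to
    // "function RTCDataChannel() { [native code] }"; shims usually don't.
    rtcLooksNative: rtc !== null && String(rtc).includes('[native code]'),
  };
}
```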
Can't edit anymore, but I want to point out that one particularly gross thing I've seen is code that checks how well your device characteristics line up with expectations for CPU and RAM.
The numbers are intentionally imprecise for anti-fingerprinting, but I've seen JS code that treats users as suspicious or bad when your logical core count reports 1-2 but memory is 8+, or a lot of cores and very little memory, or if your device is non-mobile but reporting less than 4 or 8 GB of memory. The assumption is that you are a virtual machine if you're a "desktop or laptop" and have a single or dual core in 2022, for example.
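That plausibility check reduces to a few comparisons. The thresholds below are guesses at what such scripts use, not a known implementation:

```javascript
// Flags hardware combinations that "shouldn't exist" on real consumer
// devices. Thresholds are illustrative.
function deviceLooksSuspicious({ cores, memoryGB, isMobile }) {
  if (cores <= 2 && memoryGB >= 8) return true;  // 1-2 cores but lots of RAM: likely a VM
  if (cores >= 16 && memoryGB <= 2) return true; // many cores, almost no RAM
  if (!isMobile && memoryGB < 4) return true;    // "desktop" reporting tiny RAM
  return false;
}
```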
Wow. I had no idea. This bullshit is why I browse with javascript off, and enable it only on a per subdomain basis with uMatrix, and disable all the tracking technologies I can. I probably already stick out like a sore thumb to anyone doing browser fingerprinting.
Not only did the kids fail to get off our lawn, look at this giant hunk of poop they left all over it. Eternal September never ends.
Well, good thing they reverse-proxy the JavaScript code first-party directly on the domain (www.*) and attempt to load from multiple subdomains on the primary domain one after another (including randomised CDN paths).
"enable it only on a per subdomain basis" works when the tracking runs off a separate subdomain. Walmart, for example, intentionally proxies the files through their primary domain, the one that you are visiting, to try and bypass this.
--
Other sites and services will also use blocking them as a fingerprinting point. For example, a site loads native first-party JS to try and bootstrap the rest of it.
A really simplified example:
Stage 1: on-page script tag, not a separate file, sets up a variable - let's call it "counter"
Stage 2: Load cross-site-tracker.js from obvious-analytics.example.com.
If it fails:
Stage 3: Load QyojK8oIwLjske2JkW9mdJY0Np.js from hqMOBRLccCmEnG9.cloudfront.net; increment a "shady user is trying to hide from us" counter
If it fails:
Stage 4: Load RandomWordsRainbowButterfly.js from N4NqCUJAT9UUXFcwnn.cloudfront.net; increment a "shady user is trying to hide from us" counter
Keep trying this through 3-4 domains, use random s3 buckets, cloudfront hostnames, akamaized.net hostnames. Upload all tracking data as soon as one of them succeeds.
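The staged fallback above can be condensed into a loop. The hostnames are the made-up ones from the example, and `tryLoad` stands in for injecting a script tag (which in a real page would be asynchronous):

```javascript
// Walks the fallback chain, counting blocked stages as a "shadiness" signal.
function loadTrackerWithFallbacks(tryLoad) {
  const stages = [
    'https://obvious-analytics.example.com/cross-site-tracker.js',
    'https://hqMOBRLccCmEnG9.cloudfront.net/QyojK8oIwLjske2JkW9mdJY0Np.js',
    'https://N4NqCUJAT9UUXFcwnn.cloudfront.net/RandomWordsRainbowButterfly.js',
  ];
  let blockedCount = 0; // the "shady user is trying to hide from us" counter
  for (const url of stages) {
    if (tryLoad(url)) {
      // The first stage that loads uploads everything, blockedCount included.
      return { loadedFrom: url, blockedCount };
    }
    blockedCount++;
  }
  return { loadedFrom: null, blockedCount };
}
```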
TLS is transport encryption, not a content signature.
Ideally, I'd like to see every resource being served along with a signature verifying its authenticity, origin, and suitability for public consumption.
Users would then be empowered to make the decision whether we wanted to interact with a resource that does not offer these protections, and assume the risk, or simply refuse to load any resource that doesn't positively identify where it's coming from, who made it, and who certifies it as worthy of your consumption.
You have to explicitly opt into any logging in apple apps and the OS itself (iOS or macOS). Apple clearly goes to great lengths to ensure that they cannot access your information and data, and very clearly distinguishes stuff that is inaccessible to them from stuff that is encrypted but that they can technically access decryption keys.
A result of this is of course that we get people complaining about apple not restoring their data.
What you're doing is demonstrating how effective Google, Facebook, etc have been in convincing you that real privacy isn't actually possible, solely to protect it from legislative action, because their business models depend on violating it continuously
Recall that Google deciding to trawl through the content of your email (assuming gmail) is why emails from amazon no longer include any details about the order.
Or how "AI" required Google and Facebook, et al having access to everyone's pictures and information.
The fact that G and FB have taken a "fuck our users" approach doesn't mean that's how every company operates. The fact of the matter is that >75% of google's revenue comes from selling you out, and >90% of facebook's. >80% of apple's revenue comes from selling hardware, the remainder from selling services and I assume store royalties (I'd be interested in the breakdown). You don't have to invade everyone's privacy to make money, it's just that G and FB have chosen that approach every time the option is presented to them.
In fact, if a company can decrypt your data then it becomes possible for a hacker of said company to also decrypt that data - a fairly solid reason IMO for either not collecting, or ensuring only the user can access info, unless absolutely necessary for functional or legal reasons.
What are you trying to prove? that Apple is the exception, that Apple really cares about you?
Apple is not a person, it is a large corporation without any of its original founders, it has no principles, it's a machine that operates on one metric: its bottom line. All of its behaviour is merely a result of profit seeking, public perception and legal limitation. Apple will play the "privacy" marketing tool for as long as it helps their bottom line, but not when it doesn't. Which is demonstrably true by their behaviour in China - they do not care. They also take billions of dollars from Google each year due to their control over the iOS browser... so they are quite happy to support privacy invasion.
No, that apple doesn't shit on privacy - so saying that they all do is BS.
Implementing the features that apple does in a way that's private requires effort and money, it isn't marketing. Safari uses Google as the default search engine, but it also puts a lot of work into fighting Google's tracking, irrespective of what happens in the search field.
We can talk about how US businesses are generally shitty to the end of time, but we don't have to pretend that just because Google and FB shit on everyone's privacy that every corporation does.
iMessage is E2E even in China, apparently. The non-E2E services that apple still has are not-E2E in china or the US or the EU.
"But it’s an admission that the complexity of a modern phone operating system (or tablet, or desktop OS) have just gotten too much to handle, so the best path forward is to offer the option to not do those things."
Looking at non-consumer security mobile phones (like the one from Boeing) or those that are modified to be secure (like the Blackberry used by Obama) they all seem to employ this less-is-more approach to security.
In other words, what's the minimum tolerable feature set we can offer without further compromising security? It follows from the question 'why use a phone at all? If there is a functionality the client can't do without, then how do we provide just that without any security downside?'
It's a sensible approach which means Apple has just entered this market. Not in a big way yet - phones are made in China, modem chip firmware security has a long way to go. But lockdown is just beginning too and it shows Apple understands this is serious.
But all this is just defense. Next step is the entire industry. Finfisher is done - next up: NSO, Candiru and Darkmatter, their investors, suppliers and scumbag employees before they dissolve/rebrand and scurry back out of the light.
So lockdown mode disables any attachment except images in their messaging app, because parsing these has often introduced exploits.
The fascinating thing is that this parsing would happen in a process which even _has_ privileges to trigger any exploits. Parsing a message should be done far, far away from the core OS operations, high in userspace, by a sandboxed process that can't break anything.
Based on previously seen exploits, it seems messages are handled by rather privileged processes. I wonder if there's a reason for that (e.g.: special messages can trigger privileged operations?)
Privileged is the wrong word, but GP is not entirely wrong. What you linked to is only the first part of the exploit and analysis.
From the conclusion of the second post, which analyses the sandbox escape:
> Perhaps the most striking takeaway is the depth of the attack surface reachable from what would hopefully be a fairly constrained sandbox. [...] The expressive power of NSXPC just seems fundamentally ill-suited for use across sandbox boundaries, even though it was designed with exactly that in mind. [...]
(The above is severely cut down, reading at least the entire conclusion or even the whole post is worth it)
Getting into the process that does the message parsing is only the first step in a full exploit chain. Usually processes, even the unprivileged ones, have direct access to the kernel. So if there is a bug in there for example, you can exploit the kernel as a second step. Alternatively, you exploit a bug in the IPC interface with the messaging app. Etc.
This is a good writeup! A couple random thoughts that occurred to me while reading through it:
- It would be really nice to be able to disable Lockdown Mode for specific people in iMessage the way you can for specific websites in Safari. I'm guessing you can't because the sandboxing isn't implemented the same way it is in Safari...but maybe that should be fixed!
- Disabling WebRTC in Lockdown Mode is probably an overall win, but it may result in certain web-video-conferencing tools not working. In most cases, the correct answer will be "then install the app for that instead", but it may result in a few issues. On the other hand, users can also disable LM for those sites (and I like that you can do it easily, so I could do it temporarily and then flip it back off afterwards).
- It will be interesting to see if the ability to turn this on is a feature available in MDM. I can imagine companies mandating that users traveling to certain areas of the world must have LM MDM-force-enabled on their phones at all times instead of taking a burner phone.
- I wonder how the prohibition on wired accessories will work if the phone is unlocked when the accessory is plugged in. As an example, with LM enabled I could plug my phone into my car and use CarPlay, but does it then turn off when the phone locks? I'm assuming not, but if you're going full-bore-privacy-protections, there's an argument there that it should actually just disable the port fully when the phone locks (and that's certainly the easier option to code).
> I can imagine companies mandating that users traveling to certain areas of the world must have LM MDM-force-enabled on their phones at all times instead of taking a burner phone.
That only solves a few of the possible issues a content-free burner phone solves, though. I sure wouldn't travel to those bits of the world with a regular device with all my information on it. Rubber hose cryptography is a thing.
Very true, and important to note that ‘rubber hose cryptography’ doesn’t have to mean violence—it can take the form of ‘open your phone and let us dump your data or you don’t get to enter/leave the country’.
Ah, that explains a lot. I do heavy ad and tracker blocking, including blocking loading of all web fonts. I constantly find various arrows and other tiny images not rendering and didn't know why. You'd think for something like a left and right arrow, you could at least set the alt text to the unicode character for left or right arrow, or at least ASCII art (i.e. "->" and "<-"). It would also help people using screen readers.
if there's one thing I hate, it's websites "supporting" tor by redirecting from a specific article to the main page of their (in this case non-functional) onion URL.
twitter did this too a while back, they made a big show of how they're supporting tor now, and now whenever i click a link to a tweet via tor, it redirects me to their frontpage.
thanks, can you stop supporting tor now please, so I can use the site with tor again?
You know, I don't think I tested specific pages when I put the Tor meta support in. That's a fairly recent addition I was messing around with.
It's a '<meta http-equiv="onion-location"' tag, and it points to the base URL even on the blog pages. I'll get that fixed to point to the actual page of interest (should be easy enough in Jekyll to just re-render things). It's handled client-side in your browser, so you should be able to tell the browser to ignore it.
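For reference, the per-page fix in a Jekyll layout looks something like the snippet below. The .onion hostname here is a placeholder, not the site's real one:

```html
<!-- Hypothetical Jekyll layout snippet: {{ page.url }} makes the redirect
     target the specific page instead of the site root. -->
<meta http-equiv="onion-location"
      content="http://youronionaddress.onion{{ page.url }}">
```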
But as far as I can tell, the Onion address is up and operating.
Yes, unfortunately this is often the case. People who don't really use or test the site in Tor put in some half-baked support and it just ends up making things worse. But my grievances aside (and please don't take this personally, it's just an issue that I've encountered one too many times, so it gets on my nerves), thank you for fixing it, and indeed it looks like the onion URL is now online; it wasn't working for me earlier.
I very much appreciate it - as I said, it was something I'd missed in my dorking around with Tor. No idea why it was down earlier, unless it was just loaded - I haven't changed anything on the server related to Tor in a while.
It seems any time a post of mine makes the HN rounds, I get some other weird corner case of my site pointed out, and it does improve things over time! Jekyll makes it easy to just re-render the site with changes like this too.
I think it's fixed now. The meta onion-location line is now pointing to the specific page, not the base website, and Whonix does the redirect to the proper page now.
I'd missed that in testing - I went to the root domain, and it redirected properly and let me browse to pages, but I never went directly to a post, on a browser that wasn't already aware of the redirect. Thank you so much for pointing that out!
It's not clear to me if Lockdown Mode would have prevented Hermit, the latest mobile APT which targeted iOS via sideloading by enrolling in the Apple Developer Enterprise Program.
The list of lockdown features doesn't seem to explicitly state that in-house app sideloading is disabled - is it? If not, then this mode seems like security theater from Apple, in that it doesn't actually lock down the parts of the attack surface that are actively being leveraged. How about instead, or better yet alongside this, Apple explains how they granted entry in the Enterprise program to the spyware company, and what measures they're taking to prevent it from happening again.
> The list of lockdown features doesn't seem to explicitly state that in-house app sideloading is disabled - is it? If not, then this mode seems like security theater from Apple, in that it doesn't actually lock down the parts of the attack surface that are actively being leveraged. How about instead, or better yet alongside this, Apple explains how they granted entry in the Enterprise program to the spyware company, and what measures they're taking to prevent it from happening again.
I'm pretty sure that iMessage is one of, if not the most, targeted parts of the iOS ecosystem for practical exploitation. Disabling link previews and restricting the formats that are rendered likely makes this much more difficult.
The sideloaded app would likely have to target non-technical people, as I'm pretty sure sideloaded apps require lots of clicking through and trusting of certificates to get to run on a phone.
> So this would have prevented Hermit as you'd need to install a new configuration profile to allow sideloading of applications from that source.
Are you sure that's true? I haven't seen a Hermit sample firsthand, but from everything I've read about it targets did not need to install an MDM profile, they simply needed to click a link. Looking at Apple's distribution guidelines - https://support.apple.com/en-bw/guide/deployment/depce7cefc4... - MDM is listed as one option, and simply going to a link is listed as another:
> There are two ways you can distribute proprietary in-house apps:
>
> Using MDM
>
> Using a website
It seems like the latter was used, so I don't think installation of a custom profile was required, which brings me back to my original question of whether Lockdown would have prevented it.
And yet I wouldn't immediately jump to the conclusion that it's "security theater" just because it may still be vulnerable to some 0-days despite protecting you from the vast majority of attacks. By that definition everything we have is security theater. And as the saying goes, if everything is security theater, nothing is security theater.
Lockdown is literally presented by Apple as being for people targeted by APTs like those developed by NSO Group, therefore I expect it to prevent attack vectors used by these APTs, like exploitation of the Developer program to facilitate sideloading malicious apps. I don't feel like this is an unrealistic expectation, and not having the mode actually do that amounts to security theater, which is a far cry from decrying everything as such.
> I expect it to prevent attack vectors used by these APTs
It does, it just doesn't close all attack vectors used by APTs.
They say[0]:
> Turning on Lockdown Mode [...] further hardens device defenses and strictly limits certain functionalities, sharply reducing the attack surface that potentially could be exploited by highly targeted mercenary spyware.
They don't say "turn this on and you'll be unhackable". They go on to say:
> Apple will continue to strengthen Lockdown Mode and add new protections to it over time.
So what they released in the current beta is just the start.
They decided that releasing Lockdown mode with only some additional protections would be worthwhile to at-risk users and I personally agree.
It's both true that Lockdown likely helps at-risk users (see reply by _kbh_) and still has lots of room for improvement.
> It does, it just doesn't close all attack vectors used by APTs.
It's an ongoing problem with the pathological Apple-haters that they imagine that Apple says or promises something, and spread that falsehood all over the internet, when in reality Apple promised no such thing. They see what they want to see.
In addition to the thread above, another example is the dozens and dozens of times on HN where they claim that Apple promises that its app review process will keep 100% of malware out of the App Store. Apple doesn't make that claim. It says that app store reviews help prevent malware.
It's like discussing politics at the Thanksgiving table. People hear what they want to hear.
> Lockdown is literally presented by Apple as being for people targeted by APTs like those developed by NSO Group, therefore I expect it to prevent attack vectors used by these APTs, like exploitation of the Developer program to facilitate sideloading malicious apps. I don't feel like this is an unrealistic expectation, and not having the mode actually do that amounts to security theater, which is a far cry from decrying everything as such.
These APTs overwhelmingly use RCE vectors that are less obvious than sideloading apps. iMessage is probably the most popular, and I would hazard a guess that other popular messaging applications (WeChat, Signal, Telegram, etc.) and Safari would be next.
Running an enterprise app still is not a trivial single tap on iOS.
Obviously with the new EU legislation mandating support for unrestricted malware of this kind, that's kind of a moot factor in EU and EU-adjacent markets.
> Running an enterprise app still is not a trivial single tap on iOS.
Yes, but still successful, as Hermit demonstrated. So my question is whether Lockdown mode would have prevented APTs like Hermit, which it claims to protect against. If not, then the move is security theater that doesn't address the actual flaws (like poor vetting for the Enterprise Program) being successfully leveraged in the wild.
I had a more detailed reply to an earlier post you made - but the summary is "What constitutes an enterprise that should be allowed to have 'enterprise apps'"
> "What constitutes an enterprise that should be allowed to have 'enterprise apps'"
Apple has a list of requirements - https://developer.apple.com/programs/enterprise/ - for example, a company needs to have at least 100 employees. The issue, however, seems to be how stringently these requirements are enforced, or whether they are at all. In the case of Hermit, the Italian spyware company seems to have created a fake company and tricked Apple into granting it access to the developer program. Now, the interesting question for me is whether the fake company actually managed to pass all of the requirements, like giving Apple a list of 100 fake employees, and whether Apple actually performed their due diligence and checked whether the employee list was real, or whether they accepted it at face value, or didn't require it at all.
In other words, I think a key takeaway from the latest incident is Apple needs to take accountability and harden their Enterprise program entry requirements, and I haven't seen anything about that being the case.
"What is Apple doing to prevent any government contractor from being able to use enterprise apps?"
Which is what you're actually asking. "Spyware" sounds like you're conflating it with its traditional meaning of general consumer malware/virus plagues. This is software made by companies that provide services and support for [among others] intelligence agencies, for actual targeted spying.
If you disagree with that being the actual question, then you're saying that access to the enterprise program should depend on Apple auditing your entire company, its corporate hierarchy, its owners, and its executives - at least. That isn't going to be cheap, and it isn't going to be fast. I'm sure you'd not be happy, as a company, to find that distributing internal apps suddenly requires regular expensive audits, or, as an employee, to discover your employer now requires you to agree to background checks by Apple.
The whole, and it seems only, reason for the enterprise program was so companies ("enterprises" in marketing) could have internal apps that didn't have to pass the App Store review process.
It would have been vastly easier to convince a victim to install a piece of software from the App Store, but that would not have worked because, despite the naysayers, the App Store works as a first step in platform security. Otherwise there would be unending stories of malware on HN :D
> High-level targets (for whom this mode is specifically advertised) are likely aware of the dangers of installing apps.
I firstly don't believe this is true at all, plenty of high-level targets are not tech savvy; but more to the point of Lockdown mode, you could then say the same thing about most of its other features ("High-level targets are likely to already be aware of the dangers of doing $thing_Lockdown_prevents").
The whole benefit of the iOS App Store system is that those apps can't be malicious.
This requires an atypical install/launch process that would hopefully trigger some sense of "this isn't right" - similar to the macOS complaints when you choose to run an unsigned app.
The ‘high level target’ or person of interest thing is slightly absurd. Everyone is a person of interest, and security shouldn’t only be the domain of journalists, activists, dissidents, etc.
Fun fact, the browser limitations used for lockdown mode are very similar to the existing restrictions that Apple already had in place for rendering captive portal screens :)
If I wanted my computing device to be as secure as possible against state actors, I would compile all the software myself, and tweak a few compiler settings for my builds.
It's super hard to make an exploit work when you don't know what options your target was compiled with.
Also, simple things like swapping malloc implementations or changing some parameters of malloc will pretty much make your device immune to state sponsored attacks.
Also, anytime you see an application crash, record all crash dumps - since they may contain evidence of a failed exploitation attempt.
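As a rough sketch of the "tweak a few compiler settings" idea, here is one way a build script might assemble hardening flags for a from-source build. The specific flag set below is illustrative only, not a vetted recommendation, and `build_env` is a hypothetical helper:

```python
# Illustrative hardening flags for compiling software yourself.
# The exact selection is an assumption for this sketch.
HARDEN_CFLAGS = [
    "-O2",
    "-D_FORTIFY_SOURCE=2",       # extra runtime checks on common libc calls
    "-fstack-protector-strong",  # stack canaries on most functions
    "-fPIE",                     # position-independent code, enables ASLR
]
HARDEN_LDFLAGS = ["-pie", "-Wl,-z,relro,-z,now"]  # full RELRO at link time


def build_env(extra_cflags=()):
    """Compose CFLAGS/LDFLAGS, e.g. for a `./configure && make` run."""
    return {
        "CFLAGS": " ".join([*HARDEN_CFLAGS, *extra_cflags]),
        "LDFLAGS": " ".join(HARDEN_LDFLAGS),
    }
```

Varying flags (or the allocator) per build also shifts struct layouts and offsets between binaries, which is part of what makes off-the-shelf exploits brittle against self-compiled software.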
> Apple is previewing a groundbreaking security capability that offers specialized additional protection to users...
That's some amazing marketing spin. It's not an admission that their engineering failed to make these features secure; no, it's a groundbreaking security capability! To be fair, I do appreciate that they acknowledge the problem in the first place and are trying to do something about it.
A large tech company acknowledging that flashy convenience features can be a security risk is groundbreaking in itself. No need to be so cynical, this is a step in the right direction.
It will be interesting to see how this fits in with Supervised Mode.
For example, I'm assuming "configuration profiles cannot be installed" will only apply to unsupervised devices. Otherwise it could make Supervised Mode rather, erm, tricky!
Also, the "Allow access to USB accessories when device is locked" option has already been available in Supervised Mode for years.
So I wonder if Lockdown Mode is partly about removing the "supervised only" restriction from certain options (e.g. "USB when locked" is currently a supervised-only option, but it looks like Lockdown Mode will bring it to all users).
Overall, I think this is a good move by Apple though even if some of the details remain to be seen.
Disabling WebGL will block a lot of HTML5 games. I think there will be a lot of "WebGL not supported" or "browser out of date" messages that will need updating to include "please turn off lockdown mode"...
In practice I wouldn't expect many devices to have lockdown mode turned on, and the people who are turning it on probably aren't also using the same device to play Fruit Ninja in a browser. This is a feature explicitly designed for people who have reason to believe they're being personally targeted by national intelligence agencies, or other extremely well funded organisations.
<Insert rant about how I miss my Windows 8 phone because it had less crap on it here.>
The only thing I saw in the writeup that I can imagine normal people over 25 missing is web font icons, and maybe emailing PDFs around to sign with iMessage. (Though those come in as jpegs from cameras or PNG screenshots half the time anyway...)
The blog says "Should You Turn it On? Yes. Seriously. Turn it on when you have a supported OS and don’t look back." If that becomes the general advice, I imagine it will end up getting more broad use - even if most of the people who turn it on don't really need the extra security.
I am writing to a somewhat technical audience on my blog... but, yes, I don't care if my devices can't play some online WebGL game if the tradeoff is far better security in general.
Also, since you can turn it off for specific domains, it's easy enough to re-enable WebGL for some site while still having Lockdown mode apply to all the random ad-serving backends and such you come across. If you're not someone who might be specifically targeted, I think that's entirely reasonable: secure by default, with the security level lowered somewhat, through concrete actions I've taken, only for a specific site where I want to do more.
At some point, I'd assume attackers will try to get people to turn it off so they can attack, but you've made an awful lot more noise by that point.
I wonder how lockdown mode affects apps that use WKWebView? (Not SFSafariViewController, which afaik is supposed to be more like the Safari app, with things like password manager support.) E.g., would this break a WebRTC meeting in a native app?
I'd love to know if you can still use a third-party browser (e.g., Firefox) and if it would inherit lockdown settings per web page (given that all iOS browsers have to use the WebKit web view).
This post repeats the false claim that link previews in messages provide attacker controlled network loads.
They do not.
The page preview included in Messages is created on the sender side. On occasions when the sender can't create a preview, you get a "click to load preview" message with the url instead of a preview. In other words, nothing more than just sending the url in the first place. I'm curious what "disabling link previews" actually means in Lockdown.
When you receive a link that has a preview, at least in Messages, what you get is the true url and an image that was created on the sender side. There is no networking unless you tap the link. If you tap the link then you've tapped the link and of course tapping links loads them.
Hence I want clarification on what is involved here.
I am running Lockdown Mode on iOS and iPadOS right now. Generally I like it, but some web sites don't seem as responsive and the Mastodon web app uses a few web fonts that don't show up.
Here is some irony: the linked article caused Safari on my iPhone with beta iOS 16 and Lockdown Mode to immediately crash every time I visit the page (about 5 tests trying to load the page). I have not seen that problem in any other web site.
Would such a thing be possible in the Android world? I wonder, since there are so many phone manufacturer and ISP mods that might not be under Google's control.
GrapheneOS[1] includes similar 'defense in depth' mechanisms, some of which predate and some of which go beyond iOS's 'Lockdown Mode'. Unfortunately it's just for Pixel devices.
Android fully supports alternate browsers (you don't have the "skins" for the Apple engine that you get on iOS) so nothing is stopping e.g. Firefox from introducing such a mode.
> But with Lockdown enabled, the list grows. Now, the browser no longer will render TIFF, BMP (24-bit), JPEG 2000, or PDF images.
I am not sure why 24-bit BMP specifically is disabled in lockdown mode. Isn't a 24-bit BMP simply a big chunk of bytes filled with uncompressed RGB pixels? It doesn't even require any decoding logic to render; all you need to do is fill the render buffer with the pixels.
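To illustrate the point, a 24-bit uncompressed BMP really is little more than a fixed 54-byte header followed by raw pixel rows. This sketch builds one by hand (field values follow the classic BITMAPFILEHEADER/BITMAPINFOHEADER layout; `minimal_bmp` is a hypothetical helper for illustration):

```python
import struct

def minimal_bmp(width, height, pixels):
    """Build a minimal 24-bit BMP: 54-byte header plus raw BGR rows,
    stored bottom-up, each row padded to a multiple of 4 bytes."""
    row = width * 3
    pad = (4 - row % 4) % 4
    image_size = (row + pad) * height
    file_size = 54 + image_size
    header = struct.pack(
        "<2sIHHI"       # BITMAPFILEHEADER: magic, size, reserved, data offset
        "IiiHHIIiiII",  # BITMAPINFOHEADER: 40-byte info block
        b"BM", file_size, 0, 0, 54,
        40, width, height, 1, 24, 0, image_size, 2835, 2835, 0, 0,
    )
    body = b""
    for y in range(height):                 # rows, bottom-up
        for x in range(width):
            b_, g, r = pixels[y][x]         # BMP stores BGR, not RGB
            body += bytes((b_, g, r))
        body += b"\x00" * pad
    return header + body
```

Decoding is the same in reverse: skip the header, then copy bytes into the render buffer, which supports the commenter's intuition that there's little attack surface in the format itself.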
I wonder if turning off the JIT is worth it? A lot of bugs exist around JavaScript engines, sure, but they tend to be in the interfaces with the bindings for all the html5 features (and corresponding opportunities for memory corruption).
It's been a while since the last bug in the JIT itself - fuzzing tends to uncover those pretty quickly.
This can already be done; there are several apps that do more or less this. Now, a GUI to manually block or allow specific hosts without having to go through a pseudo-VPN would be cool.
I am late to this conversation, but I have a question: both my iPad Pro and my iPhone 11 Pro seem to get slightly shorter battery life between charges. Has anyone else noticed this? Perhaps it is because Javascript runs slower?
Have you enabled Lockdown mode on both devices? Then that's almost certainly the cause. Without the JIT you're going to be burning a lot more CPU cycles running JavaScript.
Aren't configuration profiles necessary for configuring VPN though? For the best security you'd want all your traffic to go through your own server for retrospective analysis.
Depends on the VPN and use case. I don't use a configuration profile for mine right now, but if I wanted to do anything more than manual activation I would need to use a profile to accomplish that.
If it is a format supported by macOS internally it's likely viewable in Safari - webkit basically passes image decoding to the system image decoders (hand wavey here)
Every time someone says the word Android in this discussion, the next reply is that Android allows any <insert software here> you want, therefore it's up to that software to implement such a lockdown feature. Ergo, "lockdown mode" isn't able to be a thing on Android. And following from that, if iOS is forced to have all the same openings, then Lockdown Mode will be just as meaningless.
You're not making any sense. Google could easily implement a lockdown mode on Android in exactly the same way. Sure, you could choose to use a browser that doesn't have a lockdown mode. You could also choose to turn off lockdown mode! It's pretty much the same choice. Having that choice to disable lockdown doesn't make lockdown meaningless. Lockdown is voluntary.
Turn off the phone for how long? And how would one even know if they’re being attacked? Turning off the phone is not an easy option for investigative journalists and activists, especially in today’s world where communicating with people in different geographical locations may be necessary.
Right out of the box, smartphones are more secure than mainstream personal computers (running Windows, macOS or Linux) that are connected to the Internet.