Apple could have offered APIs for managing 3rd party subscriptions from that screen, but it's more convenient for them to have a closed system, private APIs, and use their own non-extensibility as an excuse for their closed payment system.
It's also Apple's specialty to create false dichotomies and shit sandwich bundles: it's either the 30% cut or daylight robbery. No third option (in reality PayPal is more consumer-friendly and allows managing subscriptions in one place from more platforms than just iOS).
The whole App Store model is a false dichotomy between the 30% cut plus Disney-like moderation on one side, and raging malware that will take down the whole mobile network on the other. No third option (so you can't have Fortnite, or any app showing a nipple).
If Tim Cook is willing to lie and cheat for extra revenue, I can't trust that Apple is honest about their privacy commitments. The services revenue line must keep going up, and their ad business is a growth opportunity.
> I can't trust that Apple is honest about their privacy commitments
This is a funny comment for me to read. Did anyone honestly think that Apple was touting privacy as anything other than a competitive advantage for revenue maximization? They've had things like iAd, their services revenue has grown massively as hardware sales plateau, and they are nowhere near as "private" in certain countries either.
I agree, but I might phrase it a little bit differently. I recommend thinking about corporate stances as actions and interests, not moral intentions. Don’t expect a corporation to do things for moral reasons. Trust them only to the extent that their actions are in their self interest. To be fair, some organizations do have charters and interests that make them more palatable than others.
One takeaway for startups that hope to stand for something even after tremendous growth and leadership changes: you have to build governance and accountability structures into your organizational DNA if you truly want specific values to persist over the long run.
This is probably a good thing -- faith in such structures was never justified.
Any relationship with a corporate entity is transactional in nature. A great deal of effort is often expended to manipulate us into feeling otherwise, but that is all it is.
Companies don't have feelings. They aren't conscious entities with a capacity for guilt or morality. They are, in essence, software applications designed to execute on systems composed of human employees. In a sense they are the original AI agents.
Yes, OpenAI demonstrated one way not-for-profits can be commandeered. Altman appears to be quite astute at gaining power.
Every organizational design and structure has the potential to be subverted. Like cybersecurity, there are many tradeoffs to consider: continuity, adaptability, mission flexibility, and more. And they don’t exist in isolation. People are often going to seek influence and power one way or the other.
One more thing. Just because it is hard doesn’t mean we should work less hard on building organizations with durable values.
I don't think there are any companies that care one way or the other about taking away your freedom.
Companies are revenue maximizers, period. The ones that aren't quickly get displaced by ones that are.
The simpler test is to stay away from any company that has anything to gain by taking away your freedom. THAT unfortunately is most of them.
The depressing reality in consumer tech is that anything with a CPU doesn't belong to you, doesn't work for you, and will never do more than pretend to act in your best interest.
This explanatory model explains a lot of what companies do but not all. It is a useful first approximation for many firms.
Still, the conceit of modeling an organization as a rational individual only gets you so far. It works for certain levels of analysis, I will grant. But to build more detailed predictive models, more complexity is needed. For example, organizational inertia is a thing. One would be wise to factor in some mechanism for constrained rationality and/or “irrational” deviations. CEOs often move in herds, for example.
> The ones that aren't quickly get displaced by ones that are.
Theory, meet history. But more seriously, will you lay out what you mean by quickly? And what does market data show? Has this been studied empirically? (I’m aware that it is a theoretical consequence of some particular market theories — but I don’t think successful financial modelers would make that claim without getting much more specific.)
iAd is stated to be built differently from how other adtech networks work.
I personally believe that Apple is able to make different (better) choices in the name of consumer privacy than Google will.
Android is built from the ground up to provide surveillance data to Google-controlled adtech - that's their revenue model. I don't begrudge them that, people should have choice, etc. but the revenue model is adtech first and foremost.
Apple want services revenue, they like services revenue, but historically they're a vertically integrated tech platform manufacturer whose revenue model is building better platforms consumers want.
It's true that the services model may start to compromise that - and they've definitely started to make some poor choices they might need to pull back on to protect the core platform model - but I do think we're not comparing like with like when we say that Apple is no different to any other company in this space.
> Android is built from the ground up to provide surveillance data to Google-controlled adtech
I've always read this and it seems well accepted. But I'm curious what exactly does it mean? What's Android sending to Google? Surely it's not logging what I click on apps? It's not logging what I click on my browser since the websites themselves send this info for ad purposes. So what's Android doing that let's say my Linux laptop isn't?
Edit: Answering my own question. There is a cross-app unique identifier (ignoring any privacy sandbox stuff) so developers and ad networks can get a consistent id across apps.
I'm guessing the poster is referring to AOSP and custom ROMs. If so, yes, it is entirely possible, but not something I'd expect any normal human being to do.
Not all phones allow custom ROMs and most that do completely void your warranty. Doing it yourself is a complete non-starter for at least 95% of the population.
In practical terms, you can simply not log into a Google account on any Android device, including those made by Google, and Google will get less data about you than Apple does on iOS.
The key difference is user choice. An iOS user has no choice but to send their location data and app usage data to Apple. No such required privacy violations on Android.
Yup, exactly. Do people not remember that Apple never gave a damn about privacy for the longest time? Only when Google's, Facebook's, and others' ingestion of "metadata" became the public issue du jour did Apple start pushing the whole privacy thing. It's a selling point, nothing more.
>Did anyone honestly think that Apple was touting privacy as anything other than a competitive advantage for revenue maximization?
I think they're more willing to build out privacy-enhancing features than other companies, since they don't rely on surveillance capitalism to make their money. "Small" things like FileVault add up.
I have no trouble believing a gay boomer from the South instinctively cares about personal privacy; he will have spent much of his early life needing to be very protective of his.
I would agree that most people with that exact background would have learned the hard way to care about privacy.
The single example that ascended to be the CEO of Apple though? That selection process would seem more relevant than any personal background.
My base assumption is that any impressions we have about Tim Cook (or any other executive of a company that size) are a carefully crafted artifact of marketing and PR. He may be a real person behind the scenes, but his (and everyone's) media persona is a fictional character he is portraying.
It feels like if you'd expect someone to be something based on their background, _and_ they profess to be that thing, then the onus is on the person disputing it to come up with the evidence contra?
> any impressions we have about Tim Cook ... is a fictional character he is portraying
The relevant ones here are that he's gay, of a certain age, and from the South, and that he heads up a company who appear to invest heavily, over a long period of time, in privacy protections -- these all feel like they'd be easy to falsify if there existed evidence to the contrary.
Their privacy commitments align with their business, not their morals. They don't want an open internet primarily funded by advertising, so they make it harder for advertising companies to track their users. What they want is an internet siloed into apps you get from their app store, funded by subscriptions and IAPs that they get a 30% cut from.
We can have both, because they cannot kill the web. We can enjoy better privacy in the OS, the open Web, and better controls for the applications that should not be a website (which is still quite a lot of them).
Apple is a for-profit business, and like most such entities, its primary concern is its bottom line. If promoting privacy aligns with that objective, so be it. However, the company does not have an inherent inclination toward acting ethically beyond what serves its business interests.
> “When we work on making our devices accessible by the blind,” he said, “I don't consider the bloody ROI.” He said that the same thing about environmental issues, worker safety, and other areas where Apple is a leader.
> As evidenced by the use of “bloody” in his response—the closest thing to public profanity I've ever seen from Mr. Cook–it was clear that he was quite angry. His body language changed, his face contracted, and he spoke in rapid fire sentences compared to the usual metered and controlled way he speaks.
More broadly, I know that for-profit businesses are concerned with their bottom line, and I know businesses regularly throw other values under the bus in pursuit of profit. But I'm not sure it's possible to build a successful business (in terms of maintaining consumer trust, attracting and motivating decent employees, etc.) without some values beyond what's immediately quantifiable on the bottom line.
Belief within limits, yes. At least, I can only think of a couple of possible explanations for the event:
1. Cook only cares about pursuing profits, but at a shareholder meeting where shareholders were pressuring him to pursue profits, he lied to them (and had the presence of mind and acting chops to pretend to be uncharacteristically angry about it), because he believed that the story would get reported on and Apple fans would want to hear it, and he made the calculation that that would be more beneficial to his bottom line than being honest (or at least more politically neutral) with his shareholders.
2. Cook really does care about accessibility, environmental issues, and worker safety, and he tries (or at least likes to think that he tries) to take steps toward those causes at the expense of profits, but he's also a complex and flawed mixture of motivations and is capable of compromising his values (and/or of letting those under him compromise their values) to varying degrees in the face of financial rewards or the pressures of the capitalist system.
#2 seems more likely and is more consistent with my view of humanity in general.
It's also worth noting that the meeting in question was in 2014. That's over a decade ago now.
It's entirely possible that Cook was fully sincere then, but that over the subsequent 11 years, marinating in the toxic stew that is the upper echelons of American industry has eroded his principles and he is now more willing to listen to the voices pushing for money over all else (whether those voices are outside or inside his own head).
#2 seems more probable to me for any given human being selected at random.
#1 seems more probable given a human being that has been selected to head one of the most valuable companies on the planet. That's his entire job -- to play a carefully crafted role for the public, the shareholders and the media. He isn't paid to stand up at a shareholder meeting and let any sort of genuine feelings slip through, unless those feelings happen to be the right ones for that role at that moment.
Pretty sure this is not about revenue but profit margins, since the Services line was under heavy scrutiny from the markets back then.
Though that's the core issue, margins on services are just too addictive for big tech. Not sure Apple can keep its recipe for success with both services and hardware.
Most likely it does what their other apps do: opens URLs in an in-app "browser" WebView, which is then injected with a ton of trackers that have unlimited access to everything you browse in their app.
iOS apps are allowed to add arbitrary JavaScript to any page on any domain, even HTTPS, as long as it's a WebView and not the standalone Safari app.
This is generally worse UX vs. just opening Safari. There have been exactly zero times where I was happy that a link opened in an app's WebView, instead of in Safari or the appropriate external app.
Why does a seemingly privacy-focused Apple create the compromisable WebView system for apps? Is there some weird edge case for apps that they need this, for a non-evil reason?
They don't allow third party browser engines. If they didn't allow web views, they would effectively be banning third party browsers completely. I can't imagine that would make their antitrust problems any better.
Although, it does seem like they could get more granular in app approval, which I am sure iOS devs would not like, but users would. For example, "If your app's primary use case is navigation of the open web, you may use WebView to handle 3rd party links. However, if that is not the primary purpose of your app, web links must open in Safari."
Either that, or give me a setting for each app, which the dev can set the default on. "Open links in Safari."
There's a permission for Location at least; "In App Web Browsing" can have that permission disabled. Web Views don't seem to have similar treatment otherwise, afaict. I'd sandbox them aggressively if I could.
I use Adguard which has a Safari integration that appears to apply to Web Views (based on the absence of ads), though I don’t have proof of that.
Well, just off the top of my head, an epub is basically HTML and is simple to implement with a web view. Nice when the OS has a framework that provides one.
There's a harmless "vulnerability" that some automated scanners keep finding on my website. I've deliberately left it "unfixed", and block everyone who emails me about it.
You can still get delta updates with Sparkle in an Electron app. I am using it, and liking it a lot more than Electron Updater so far: https://www.hydraulic.dev
GC isn't something to be afraid of, it's a tool like any other tool. It can be used well or poorly. The defaults are just that - defaults. If I was going to write a rhythm game in Unity, I would use some of the options to control when GC happens [0], and play around with the idea of running a GC before and after a song but having it disabled during the actual interactive part (as an example).
In absolute terms yes, but relative to the CPU speed memory is ridiculously slow.
Quake struggled with the number of objects even in its day. What you've got in the game was already close to the maximum it could handle. Explosions spawning giblets could make it slow down to a crawl, and hit limits of the client<>server protocol.
The hardware got faster, but users' expectations have increased too. Quake 1 updated the world state at 10 ticks per second.
Indeed, there are people who want to make games, and there are people who think they want to make games but actually want to make game engines (I'm speaking from experience, having both shipped games and kept a junk drawer of unreleased game engines).
Shipping a playable game involves so so many things beyond enjoyable programming bits that it's an entirely different challenge.
I think it's telling that there are more Rust game engines than games written in Rust.
I'm in that camp. After shifting from commercial gamedev I've been itching to build something. I kept thinking "I wanna build a game" but couldn't really think what that game is. Then realised "Actually it's because I want to build an engine" haha
There are alternative universes where these wouldn't be a problem.
For example, if we didn't settle on executing compiled machine code exactly as-is, and had an instruction-updating pass (less involved than a full VM byte code compilation), then we could adjust SIMD width for existing binaries instead of waiting decades for a new baseline or multiversioning faff.
Another interesting alternative is SIMT. Instead of having a handful of special-case instructions combined with heavyweight software-switched threads, we could have had every instruction SIMDified. It requires structuring programs differently, but getting max performance out of current CPUs already requires SIMD + multicore + predictable branching, so we're doing it anyway, just in a roundabout way.
> if we didn't settle on executing compiled machine code exactly as-is, and had an instruction-updating pass (less involved than a full VM byte code compilation)
Apple tried something like this: they collected the LLVM bitcode of apps so that they could recompile and even port to a different architecture. To my knowledge, this was done exactly once (watchOS armv7->AArch64) and deprecated afterwards. Retargeting at this level is inherently difficult (different ABIs, target-specific instructions, intrinsics, etc.). For the same target with a larger feature set, the problems are smaller, but so are the gains -- better SIMD usage would only come from the auto-vectorizer and a better instruction selector that uses different instructions. The expectable gains, however, are low for typical applications and for math-heavy programs, using optimized libraries or simply recompiling is easier.
WebAssembly is a higher-level, more portable bytecode, but performance levels are quite a bit behind natively compiled code.
> Another interesting alternative is SIMT. Instead of having a handful of special-case instructions combined with heavyweight software-switched threads, we could have had every instruction SIMDified. It requires structuring programs differently, but getting max performance out of current CPUs already requires SIMD + multicore + predictable branching, so we're doing it anyway, just in a roundabout way.
Is that not where we're already going with the GPGPU trend? The big catch with GPU programming is that many useful routines are irreducibly very branchy (or at least, to an extent that removing branches slows them down unacceptably), and every divergent branch throws out a huge chunk of the GPU's performance. So you retain a traditional CPU to run all your branchy code, but you run into memory-bandwidth woes between the CPU and GPU.
It's generally the exception instead of the rule when you have a big block of data elements upfront that can all be handled uniformly with no branching. These usually have to do with graphics, physical simulation, etc., which is why the SIMT model was popularized by GPUs.
Fun fact which I'm 50%(?) sure of: a single branch divergence for integer instructions on current nvidia GPUs won't hurt perf, because there are only 16 int32 lanes anyway.
CPUs are not good at branchy code either. Branch mispredictions cause costly pipeline stalls, so you have to make branches either predictable or use conditional moves. Trivially predictable branches are fast — but so are non-diverging warps on GPUs. Conditional moves and masked SIMD work pretty much exactly like on a GPU.
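To make that concrete, here's a rough sketch (my own toy example, function names made up) of what masked SIMD looks like with AVX2 intrinsics: both sides of the "branch" are computed for every lane and a mask picks the result per lane, which is the same execute-under-a-mask model a non-diverging GPU warp uses.

    #include <immintrin.h>

    /* y[i] = x[i] > 0 ? x[i] * 2 : x[i] * 3, with no branches in the hot loop.
       Compile with -mavx2. */
    void scale_signed(const float *x, float *y, int n) {
        int i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 v    = _mm256_loadu_ps(x + i);
            __m256 pos  = _mm256_mul_ps(v, _mm256_set1_ps(2.0f)); /* "taken" side */
            __m256 neg  = _mm256_mul_ps(v, _mm256_set1_ps(3.0f)); /* "other" side */
            __m256 mask = _mm256_cmp_ps(v, _mm256_setzero_ps(), _CMP_GT_OQ);
            _mm256_storeu_ps(y + i, _mm256_blendv_ps(neg, pos, mask)); /* per-lane select */
        }
        for (; i < n; i++) /* scalar tail */
            y[i] = x[i] > 0.0f ? x[i] * 2.0f : x[i] * 3.0f;
    }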
Even if you have a branchy divide-and-conquer problem ideal for diverging threads, you'll get hit by a relatively high overhead of distributing work across threads, false sharing, and stalls from cache misses.
My hot take is that GPUs will get more features to work better on traditionally-CPU-problems (e.g. AMD Shader Call proposal that helps processing unbalanced tree-structured data), and CPUs will be downgraded to being just a coprocessor for bootstrapping the GPU drivers.
> There are alternative universes where these wouldn't be a problem
Do people that say these things have literally any experience of merit?
> For example, if we didn't settle on executing compiled machine code exactly as-is, and had an instruction-updating pass
You do understand that at the end of the day, hardware is hard (fixed) and software is soft (malleable), right? There will always be friction at some boundary - it doesn't matter where you hide the rigidity of a literal rock, you eventually reach a point where you cannot reconfigure something that you would like to. And also the parts of that rock that are useful are extremely expensive (so no one is adding instruction-updating pass silicon just because it would be convenient). That's just physics - the rock is very small but fully baked.
> we could have had every instruction SIMDified
Tell me you don't program GPUs without telling me. Not only is SIMT a literal lie today (cf warp level primitives), there is absolutely no reason to SIMDify all instructions (and you better be a wise user of your scalar registers and scalar instructions if you want fast GPU code).
I wish people would just realize there's no grand paradigm shift that's coming that will save them from the difficult work of actually learning how the device works in order to be able to use it efficiently.
The point of updating the instructions isn't to have optimal behavior in all cases, or to reconfigure programs for wildly different hardware, but to be able to easily target contemporary hardware, without having to wait for the oldest hardware to die out first to be able to target a less outdated baseline without conditional dispatch.
Users are much more forgiving about software that runs a bit slower than software that doesn't run at all. ~95% of x86_64 CPUs have AVX2 support, but compiling binaries to unconditionally rely on it makes the remaining users complain. If it was merely slower on potato hardware, it'd be an easier tradeoff to make.
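For reference, the conditional-dispatch workaround we have today looks roughly like this (a sketch using GCC/Clang builtins, function names made up): build the hot path twice and pick at startup, so older CPUs run slower instead of dying on an illegal instruction.

    #include <stddef.h>

    /* Hot loop built twice: the compiler may auto-vectorize the first version
       with AVX2, while the second stays at the plain x86-64 baseline. */
    __attribute__((target("avx2")))
    static float sum_avx2(const float *x, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++) s += x[i];
        return s;
    }

    static float sum_baseline(const float *x, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++) s += x[i];
        return s;
    }

    float sum(const float *x, size_t n) {
        __builtin_cpu_init();                      /* GCC/Clang, x86 only */
        return __builtin_cpu_supports("avx2") ? sum_avx2(x, n)
                                              : sum_baseline(x, n);
    }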
This is the norm on GPUs thanks to shader recompilation (they're far from optimal for all hardware, but at least get to use the instruction set of the HW they're running on, instead of being limited to the lowest common denominator). On CPUs it's happening in limited cases: Zen 4 added AVX-512 by executing two 256-bit operations serially, and plenty of less critical instructions are emulated in microcode, but it's done by the hardware, because our software isn't set up for that.
Compilers already need to make assumptions about pipeline widths and instruction latencies, so the code is tuned for specific CPU vendors/generations anyway, and that doesn't get updated. Less explicitly, optimized code also makes assumptions about cache sizes and compute vs memory trade-offs. Code may need L1 cache of certain size to work best, but it still runs on CPUs with a too-small L1 cache, just slower. Imagine how annoying it would be if your code couldn't take advantage of a larger L1 cache without crashing on older CPUs. That's where CPUs are with SIMD.
i have no idea what you're saying - i'm well aware that compilers do lots of things but this sentence in your original comment
> compiled machine code exactly as-is, and had an instruction-updating pass
implies there should be silicon that implements the instruction-updating - what else would be "executing" compiled machine code other than the machine itself...........
I was talking about a software pass. Currently, the machine code stored in executables (such as ELF or PE) is only slightly patched by the dynamic linker, and then expected to be directly executable by the CPU. The code in the file has to be already compatible with the target CPU, otherwise you hit illegal instructions. This is a simplistic approach, dating back to when running executables was just a matter of loading them into RAM and jumping to their start (old a.out or DOS COM).
What I'm suggesting is adding a translation/fixup step after loading a binary, before the code is executed, to make it more tolerant to hardware changes. It doesn’t have to be full abstract portable bytecode compilation, and not even as involved as PTX to SASS, but more like a peephole optimizer for the same OS on the same general CPU architecture. For example, on a pre-AVX2 x86_64 CPU, the OS could scan for AVX2 instructions and patch them to do equivalent work using SSE or scalar instructions. There are implementation and compatibility issues that make it tricky, but fundamentally it should be possible. Wilder things like x86_64 to aarch64 translation have been done, so let's do it for x86_64-v4 to x86_64-v1 too.
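The closest thing that exists today, as far as I know, is compiler function multiversioning: the dynamic loader picks the best variant of each function at load time through an IFUNC resolver. It's per-function rather than a per-instruction fixup, but it's the same spirit. A minimal sketch (GCC/Clang on x86-64 Linux):

    #include <stddef.h>

    /* The compiler emits an AVX2 clone, an SSE4.2 clone, and a baseline clone,
       plus a resolver; the dynamic loader binds the best one when the binary
       is loaded on the target machine. */
    __attribute__((target_clones("avx2", "sse4.2", "default")))
    float dot(const float *a, const float *b, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++) s += a[i] * b[i];
        return s;
    }

The obvious limitation is that only code compiled with the attribute benefits, which is exactly the baseline problem again.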
that's certainly more reasonable so i'm sorry for being so flippant. but even this idea i wager the juice is not worth the squeeze outside of stuff like Rosetta as you alluded, where the value was extremely high (retaining x86 customers).
hm. Doesn't the existence of Vulkan subgroups and CUDA shuffle/ballot poke huge holes in their 'SIMT' model? From where I sit, that looks a lot like SIMD. The only difference seems to be that SIMT professes to hide (or use HW support for) divergence. Apart from that, reductions and shuffles are basically SIMD.