> "Apple's ultimate plan is to introduce the technologies on its iPhone, which is its key revenue source and has much bigger volume, to justify the investments over the years," said one of the sources who has seen samples of the company's microLED screen.
One thing I really appreciate when it comes to Apple is that despite all their shortcomings they don't "sleep on the job". Over the years they never hesitated to cannibalize their own product lines (iPod), or go through several tough technological transitions (68K/PPC/x86/ARM) if it meant building a better product. And bigger profits of course.
For example I'm sure Google would just rather stuff your phone to the gills with ads in order to drive revenue than actually bother with a coherent investment in a long term strategy to make a better product. And bigger profits of course.
Hmm. Over in the browser space, I'd say Google's shown significant, long-term investment in Chrome/Blink, especially compared to Apple's relative disinterest in putting significant engineering resources on WebKit.
Perhaps instead of this being about Apple vs. Google (or others), it's about companies investing in their core competencies and areas where they have a business interest.
I think people underestimate Safari. IIRC "back in the day" they were the first to get 100% in supporting all of ES6. Conveniently, they were first in supporting the things I care about - like backdrop-filter and CSS scroll snap. Safari/WebKit/JSC is faster at executing JavaScript as well.
ARM Safari is an absurdly energy efficient browser as well.
They’ve definitely made some significant missteps over the last few years, mostly around breaking storage APIs and being slow to fix it due to being part of the macOS release cadence.
Battery life is the key in why Apple doesn't want third-party browser engines. On Mac, they have to babysit the user by saying "your battery is dying because <x> app is using too much energy" because otherwise the user will bring it into the apple store complaining about their battery life. Forcing all apps to use WebKit has likely saved many MWh across the iPhone's lifetime. The only downside for Apple when they'll be forced to allow other browser engines is that they might need to make a more active alert for "using the most energy: Chrome".
There are plenty of apps on iPhone that suck battery. Those that make use of the camera and depth sensors, for instance. Or games.
Apple doesn't want other browsers because they don't want runtimes. They don't want a way for someone else to build a marketplace on top of iPhone or to escape Apple's stringent review process and deploy on their own timelines.
If there are 1 billion iPhones running Safari for 2h/day, and the power efficiency saves 200 mW per device while browsing (just a random number), then Safari saves 400 MWh/day, or ~150 GWh/yr. That's a tiny fraction of the world's electricity use (well under 0.001%), but it's still a lot of energy in absolute terms.
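Rough math, in case anyone wants to poke at it - every input below is just the made-up figure above, not a measurement, and the world-electricity figure is only a rough one:

    // Back-of-the-envelope check of the numbers above.
    const phones = 1e9;          // iPhones running Safari (assumed)
    const hoursPerDay = 2;       // browsing time per phone per day (assumed)
    const savingsW = 0.2;        // assumed 200 mW saved while browsing

    const dailyWh = phones * hoursPerDay * savingsW;   // 4e8 Wh = 400 MWh/day
    const yearlyGWh = (dailyWh * 365) / 1e9;           // ~146 GWh/yr

    // For scale: global electricity use is roughly 25,000 TWh/yr.
    const shareOfWorld = yearlyGWh / 25_000_000;       // ~6e-6, i.e. ~0.0006%
    console.log({ dailyMWh: dailyWh / 1e6, yearlyGWh, shareOfWorld });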
The issue with Safari is that too many features just don’t work 100% correctly. Like WebRTC mic support has a number of bugs. Calling captureStream on a canvas produces a media stream that cannot be played back in Safari. Playing a lot of audio tags produces an audible popping sound on play. And there are many other issues I can’t think of off the top of my head.
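The canvas case is trivial to hit, for reference - a minimal sketch of the repro described above (browser-only code; I can't vouch for exactly which Safari versions are affected):

    // Minimal sketch of the canvas captureStream playback issue described above.
    const canvas = document.createElement("canvas");
    canvas.width = 320;
    canvas.height = 240;
    const ctx = canvas.getContext("2d")!;

    // Keep drawing so the captured stream actually has frames.
    function draw(t: number) {
      ctx.fillStyle = `hsl(${Math.floor(t / 10) % 360}, 80%, 50%)`;
      ctx.fillRect(0, 0, canvas.width, canvas.height);
      requestAnimationFrame(draw);
    }
    requestAnimationFrame(draw);

    // Capture the canvas as a MediaStream and try to play it back in a <video>.
    const stream = canvas.captureStream(30); // 30 fps
    const video = document.createElement("video");
    video.muted = true;
    video.srcObject = stream; // per the comment above, playback fails here in Safari
    video.play().catch(err => console.error("playback failed:", err));
    document.body.append(canvas, video);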
You could swap “Safari” for IE6 and tell the same story. I don’t know what it is about OS dominance that drives such disinterest in upgrading browsers outside of wanting to push native apps for that App Store revenue.
You could also say that Chrome had a huge _interest_ in pushing for browser-alternatives to native apps because of Chromebooks. Some of that died down once Android support was added. The position of other browser vendors on many of these was that they were harmful for the web. You will likely never see implementations of these non-standard APIs outside chromium-based browsers.
The other thing which they are very slow to support is new codecs on the video tag. Presumably this is because they want to keep their "video playback" power metrics. The WebKit team is incredibly metrics-driven, which is one reason for their performance and power efficiency.
They did indeed fork KHTML because they needed a web browser for OS X, and renamed it WebKit. I’m not sure I would call that initiating the investments, but I guess if you squint hard enough you could call it that.
It's not that Apple is disinterested in WebKit now or just doesn't want to spend money on it. They intentionally want to keep it bad, so that Web apps won't be a viable alternative to native apps on iOS.
Chrome fits the "long term" and "better" criteria, but apart from Chrome, what else?
Not Android, because it has direct competition that is arguably doing much better (more users switching to Apple than to Android) and is forced to improve in order to compete.
The other side of that coin is that apple navel gazes, shunning the rest of the world.
I think the most recent Apple golden age was around 2006, when they switched to Intel. This allowed folks to use Apple hardware and macOS, but also run Windows on their systems. The OS was more open. The Mac Pro embraced the PC ecosystem: hard drives, PCIe cards and more.
Nowadays the pendulum is swinging the other way and though it might be good for apple's bottom line, I don't think it's better for customers (or apple long term)
I think the question of whether it is a great laptop has already been answered; it is a great laptop, given all the rave reviews. Speaking for myself, the issue is integration and the walled garden. Because beyond the performance, a big point of getting into the Apple ecosystem is seamless integration, which means opting into multiple Apple devices (Linux/Android are sadly not there yet when it comes to integration; it's OK but not amazing). Which brings me to the main concern - what if I get buyer's remorse? Now all my data is stuck inside, and based on my experience with an iPhone 5S many years ago, getting data out of Apple devices is harder than pulling teeth. Linux/Android are miles ahead there, so if I don't like a particular device, I can easily switch to another. For me, that is the biggest thing - I don't enjoy being a hostage (even though most people using Apple devices are happy, willing hostages, and I get why that is).
Another way to look at this is - if Apple is so confident that their setup is absolutely incredibly good - why not make exiting with the data super easy? After all, if it is so good, people just won't leave even with those options available, right?
I daily drive a 16" M1 MBP but I'm very much outside the Apple walled garden - I've never used an iPhone, no airpods or Siri or even the apps like Contacts or iMessage or iCloud.
I can still say, the new Apple Silicon Macs are one of the best computing devices I've ever used.
Even though I think your tone was unfriendly, which is rare and frowned upon here, you still had interesting points and got my upvote. Someone else might have the guidelines handy.
"Assume positive intent" was a valuable lesson for me.
I am not confused at all. I can certainly just run MacOS and run everything else Android. BUT that takes away from one of the main value propositions of Apple ecosystem - integration. So to get the full benefit, I migrate my phone to iPhone and that is where it begins.
Your argument is basically "macOS doesn't lock up your data, but it integrates so well with iOS that I can't help buying an iPhone. And now I have buyer's remorse because iOS DOES lock up my data". You don't need to use these integrations, and you don't need to buy an iPhone if you don't want your data taken hostage.
In the long run, though, it will be hard to repair and the parts generally aren't swappable or upgradeable. When Apple was a participant in the PC ecosystem you had more options for repairs and upgrades. So 5 years down the road people's feelings on the M1 may change, even if they're amazing machines now.
> 5 years down the road people's feelings on the M1 may change, even if they're amazing machines now
I'm 3 years in and have never needed servicing. If I do, I'm fine having to go back to Apple. It's not a good fit for everyone. But it is for my niche. (Also, Apple stand out in having long first-party support runways for their products.)
Right, but if you want to keep it running, you probably need to replace the battery and may need to replace failing storage. I have an 8 year old laptop that still works but the battery is in terrible shape, and it's not an easy one to repair so it's not realistic to fix it. The machine otherwise is fit for purpose so it's a real waste.
Few people were repairing their Intel MacBooks, even if they were more serviceable. It's not really typical of Apple users to expect to have to open their devices (me included, and I've been using Mac laptops since the PPC days).
It's not bad to want to pull apart and service your own laptop, but for me it's a tool to get a job done and if it breaks I'll send it to Apple to fix
My previous Intel Retina MBP lasted from 2012 to 2020, and now my dad uses it because it still works fine. My younger brother uses my ~2008 17" MacBook Pro (it only runs on wall power, though). Repairability has just never been much of a concern in my use of Apple stuff
The 2013 13" MacBook Pro has been my longest-running laptop for work, and I swapped it for the M1 two years ago because of the reviews.
One of my kids is still using it for school.
I never had a laptop for that long before. And I only had to replace the battery once, with an aftermarket one.
The lettering wearing off the keys is the only notable aging symptom.
Maybe an outlier, but based on that, I'm confident the M1 may last long too.
Speaking as someone in tech, honestly I don’t care what happens to it in 5 years. It will be outdone by then by much better models and I’ll be upgrading, because $3k every three years for a device that I use every day is a pittance.
OTOH I hope to be buried with my M1 Air (for home use) -- I hope never to buy another notebook-class device, and to just keep replacing the battery every 5 yrs.
It's an amazing machine, to be sure (I'm typing to you from one), but it's going to be so far behind in another 5-10 years. Its 8-16 GiB of RAM is already dwarfed by the 64 GiB options that Apple is selling today and the M1's speed just isn't going to be competitive to keep up with apps' demands in 2033. I do really like mine though. There's an X61 Thinkpad I felt the same way about that I still miss. Hm maybe I should dig that out of storage.
Yep, my M1 MacBook Pro is the best laptop I've ever owned too. And if Asahi Linux is anything to go by, it's a surprisingly open system from a software perspective.
But I also think the abusive, controlling behaviour they exhibit when it comes to their hardware is shockingly bad. (See: any of Louis Rossmann's videos).
I also think it's awful how controlling Apple is over the software that runs on iPhones. For example, apparently the city of SF needs to pay Apple millions of dollars a year to allow iPhones to be used as Clipper cards. Why is Apple paid money for this? Users have already bought the NFC chips in their phones from Apple. Installing themselves as an unaccountable, monopolistic middleman is disgusting behaviour.
I genuinely don't understand how you can describe a company as abusive, controlling and disgusting and yet give them $1000+ of your money for a computer. Why would you support a disgusting abuser?
How can I buy their products? Unfortunately its the easiest thing in the world to drop a bunch of money on a beautiful product that I love to use. Of course I feel gross for doing so. If someone else made a product which was as nice to use but without compromising my ethics, I'd buy it in a heartbeat. But that product doesn't exist.
Apple is the only big tech company which at least nominally cares about user privacy. Google, culturally, doesn't really make any products which don't depend on their cloud services. Last time I owned an Android phone, I tried saying no to most of the modals Google throws up asking permission to upload my personal data. Some features broke in horrible ways - presumably the devs didn't actually test what happens when people refuse consent dialogs. And lots of features become flat out unavailable, for example their AI assistant. Despite all that, they still ended up storing more data on me than I'm comfortable with. I think Android is only a viable OS if you let Google track everything you do. I don't want that, either.
On the computer side, the framework laptops look lovely. I've moved to linux for my desktop workstation and my next laptop might be a framework laptop. But there are absolutely compromises. Battery life isn't as good. Linux isn't as cohesive as macos - keyboard shortcuts are all over the place. I tried to configure linux to feel like macos but some apps won't let you use the meta key as a modifier key. Linux doesn't integrate with an iphone at all. Being able to copy+paste from laptop to phone is an absolutely killer feature. Or take a photo and airdrop it across. Or stop my phone alarm by talking to a siri smart speaker elsewhere in my house.
There are no options for everyday computing that I'm entirely happy with. For now, like it or not, Apple has my business.
Because the abuse is obviously not the whole story: the product is still the best laptop you can buy, in spite of the abuse, control and disgusting behavior of Apple.
Apple doesn’t charge you transaction fees when you use Apple Pay. They absolutely charge your bank.
Systems using your NFC chip, like Clipper, don’t work through Apple Pay. My understanding is that organisations building apps that use the iPhone NFC chip like that have to pay Apple millions for the privilege. On Android the equivalent API is free and open.
They do use Apple Pay and say so on their own web site. Apple Pay is a payment network, just like the credit and debit card companies. They all charge fees, typically 0.2% for cards, and 0.15% for Apple Pay. But somehow the card systems are fine, but Apple Pay is extortionate? OK.
Apple users pay for the integration in the Apple ecosystem, lack of spyware, and quality of the physical device. They aren’t looking to run windows or Linux or anything like that.
I believe apple phones home quite a bit. The good thing is that on macos, you can launch a terminal and run "ps" to see what is going on, or use "ls" to see what apps and files exist (unlike ios).
Note that back when macOS first launched, if a program crashed and wanted to send information to Apple, you would get a popup, see what would be sent, and decide whether to send it or not. Nowadays all kinds of things are just sent; for example, every time you create an account, even a local one or one for work, it phones home.
The crash popup still asks if you would like to send a report, along with an optional comment. I have used it a number of times to describe the app state prior to the crash
> Apple users pay for the integration in the Apple ecosystem
> quality of the physical device
Agreed. I really like their hardware but I still want to be in control and running my own software on it.
> lack of spyware
Doubt.
> They aren’t looking to run windows or Linux or anything like that.
Asahi Linux compatibility will absolutely be a major factor in my willingness to purchase Apple hardware. I just purchased a phone because GrapheneOS recommended it. This stuff matters.
Unlike Microsoft, it’s not been reported anywhere that Apple has spyware built in. Props to Apple’s PR team if they’re so competent they can pull the wool over everyone’s eyes for years.
I don't see myself running it for years, until it becomes the only way to get security updates to continue to use the hardware.
Apple has virtualization and paravirtualization built into the OS to run Linux and Windows as ARM VMs. They even have support to let ARM Linux use Rosetta 2 to run x86_64 ELF binaries.
I can already run Linux in a supported manner. I don't see a reason to boot Linux natively until my current hypervisor is no longer getting support.
The only thing Bard shows is that Google doesn't want OpenAI to eat into their Search. There's no indication it was ever planned to compete with Search or that they had a clear plan for it before OpenAI walked that path.
When you have all you need theoretically in house but your moves are rushed and come only as a reaction to the competition, you are the very example I was trying to make, of milking your cash cow and waiting too long. Google had to change their strategy [0], reorganize [1], and struggle to launch their product in an emergency as a response to competition. This is the very definition of not having a well-thought-out long term strategy, isn't it?
Google owned Brain and DeepMind as rival divisions since before OpenAI existed. They didn't cooperate, didn't launch any revolutionary product, and didn't compete in any way with Google's main offering. They were left to their own devices like side projects until the competition dashed in front and hit Google's most sensitive area and dictated their strategy.
I don't know if Bard is a good or bad product. But we may have different definitions of a coherent investment in a long term strategy to make a better product. None of the above shouts "coherent", "long term strategy", or even necessarily "better product".
> There's no indication it was ever planned to compete with Search or that they had a clear plan for it before OpenAI walked that path.
Google Voice Search cannibalized search since forever, without being monetizable. Google realized the importance of ML/AI early, and has had a coherent AI strategy that predates most AI companies - I would argue Google's strategy precipitated the existence of those companies by performing and publishing foundational research in the field.
Pixel phones are a masterclass in applied ML, done in the very humane way one might expect from Apple - e.g. call screening with realtime transcription.
I disagree on this. The iPhone was a potential iPod replacement when it launched, but the choice was selling $200 iPods or $500 iPhones. The iPhone felt like a leap of several generations, but the hardware and business model were still similar.
Where Apple actually cannibalized something was Apple Music and the iTunes Store, but they dragged their feet on it.
> but the choice was selling $200 iPods or $500 iPhones
I don't think we are in disagreement. But the fact is very few companies do this regardless. When it comes to the product that's their bread and butter they tend to put it on a pedestal never to be touched until it falls into irrelevance.
Apple didn't launch an improved iPod and retire the old. They didn't try to sell a phone and a music player by artificially differentiating them, or by sabotaging one of them. They didn't try to make $250 off the iPod and another $250 from a phone that "shouldn't compete too well". They launched 2 product lines made by different parts of the company, knowing that the iPhone may undermine the iPod userbase and then still fail, possibly souring those customers.
They wouldn't necessarily do this today. iDevices now are clearly (and artificially) differentiated from the Macs: different OS, no options to use them in a way where they truly compete, despite having theoretically similar hardware capabilities.
Again to give the Google example, when they tried to have competing lines (Google/Nest) they managed to fail at both and lose the customers completely.
I disagree somewhat with the characterization of the differences as artificial. I think Apple genuinely believes that, despite their excellent touchpads, a cursor-driven interface is not the same as a touch interface.
They’re gradually bridging them (Catalyst, and adding cursor support to iPadOS), and speaking as a fan of both product lines I’m glad they’re trying to make them each more powerful without just slapping them into a single UI.
Could they do a better job of supporting external devices on iPad? I imagine so, but there’s such a long road from the extremely-protective/simplistic/sandboxed iOS 1.0 to a full-fledged workstation operating system I have to think it’s not all that easy.
One of the classic examples I've heard was how Apple cannibalized the iPod (classic) with the iPod mini and then the flash-based iPod nano. These were smaller and cheaper, but did not replace their more expensive cousin.
The iPod Classic got relatively few updates toward the end of the iPod line because, while it had a dedicated fan base due to its ability to store truly massive music libraries, it just wasn't selling much.
This was also the first time we really saw the scale Tim Cook could push for - competitors had difficulty going up against Apple because they hadn't just bought out all the available flash in the channel - they had funded building new factories for building denser/cheaper flash, in return for their entire initial production.
I think this had more to do with negotiations with the record labels[0]. All the major record labels have investments in Spotify[1], which I imagine lends more credence to Spotify being able to negotiate good deals with the labels, since they can act more like partners.
I think their point was, would Google move forward if they weren’t forced by Apple?
Google News took over and they gave up. Gmail got huge and they stopped doing much other than adding ads. Maps slowed down, other than adding ads. Google search keeps getting worse, they can’t seem to fix the SEO/spam problem. But they can add more ads.
Left to their own devices, Google seems happy to collect rent.
Apple, historically, makes their money by selling new hardware, and that requires improving things enough people want to buy it.
You compare services to products here.
The market doesn't appreciate disruptive changes to services, hence all improvements done in services of both Apple and Google are iterative and subtle, additive not disruptive.
Such changes are usually also not newsworthy, unless one has to do leaps to catch up with a competitor (looking at you, Apple Maps)...
Google only has a few consumer products, but they made quite a few leaps in the past years for similar reasons (Pixel Phone has to catch up with Samsung, Pixel Watch has to catch up with Apple Watch), while Google/Nest Home is competing mainly with Amazon in an ecosystem war I'd say.
One could argue that the iPhone hasn't moved forward much in user experience for a few years now; it's mainly iterating to achieve further cost reduction/profit and enhance additional revenue streams for Apple.
- Apple Watch is mainly getting enhanced to create additional (healthcare/insurance/services) business for Apple.
- The iPad is under no stress to evolve other than cost-reduction in HW/SW development by harmonization with the iPhone, main strategy seems to be to push customers towards more expensive variants.
- The Macbook was indeed revolutionized, by applying the same supply-chain practice of the iPhone. Lots of leaps forward here in terms of tech, something only Apple could do. But Apple was also not leading this segment and is under strong competition, so they were forced to do something or dissolve into oblivion.
- The Desktop-Mac trajectory seems to be to become a MacBook without battery.
There's an easy explanation for that UX stagnation in iOS: iOS products are actually not classical "products" anymore, but equipment for consuming services. The devices come bundled with a few free services from Apple and encourage you to consume other services from affiliates.
Apple's main incentive is then not to sell new hardware, but to rollout new enabler-HW for selling additional services.
And they are in an increasing struggle to keep consumers "entertained" to pay for new enabler-HW, so apart from keeping control over their supply-chain they now need to become more exclusive in HW-components to lock-out competitors. Of course that can all be beneficial to the consumer, but would Apple move forward if they weren't forced by their business model?
> Over the years they never hesitated to cannibalize their own product lines (iPod)
The iPod was subsumed by the iPhone, not cannibalised. Parents were obligated to buy their kids a device with tracking/communication capabilities, so iPods became irrelevant. Now everyone buys the 2x more expensive device.
Now, if Apple were to merge the iPad and Macbook, I would agree with you readily.
For the ones that used later gen Nanos (same form factor as early Shuffles) as music devices during running/jogging, the Watch supplanted that role with the added benefit of combining the Nano and fitness bands (GPS, heart rate).
I agree, a full phone was annoyingly large and hard to carry (I tried forearm straps at one point) and I often ran without one. Though I know people who still ran with a phone anyway.
I bought specific athletic shorts or shirts (cycling jerseys with back pockets did great) so that I could run with my phone. Stopped working so well when the iPhone 6 Plus came out.
>actually bother with a coherent investment in a long term strategy to make a better product.
Pretty much every company on planet Earth other than Apple. There are actually three key points here: long term, correct strategy, and a better product. Most companies would do pretty well if they had even just one of the three.
One could argue Intel didn't have any of the three for about 5 - 8 years.
> Over the years they never hesitated to cannibalize their own product lines (iPod), or go through several tough technological transitions (68K/PPC/x86/ARM) if it meant building a better product. And bigger profits of course.
Oh, the mighty powerful iPad with its M2 chip and 5G modem but a crippling iPadOS that prevents one from making good use of the CPU power would like a word with you :-)
I cannot think of any reason for Apple not to allow iPadOS to get proper multi-tasking other than "oh, this will start eating into our profits from the MacBook Air/MacBook Pro lines". I'm not advocating for macOS in a touchscreen mode. I would have chosen the iPad Pro in a heartbeat if it could have real multi-tasking with a keyboard and a touchpad plugged in. I mean, the chip is already there, twiddling its thumbs...
I think they know this and may already be working on it, but if not, it’s likely they’ve done the math that the engineering investment to flesh out the iPad experience wouldn’t be worth the potential payout - people can always get a MacBook, as you rightly point out.
I'm not sure that's true overall. Consider their strange design decisions with iPhone screens:
- They waited years longer than the competition to switch to larger screens. They only really got to it with the iPhone 6.
- They completely slept on OLED and stuck to LCD until the iPhone X, when all mid to high range Android phones had already used OLED for years. Their iPads still use LCDs for apparently no good reason (lower price presumably), around ten years after Samsung sold their first OLED tablets.
- They slept on the screen-to-body ratio. They were years behind their Android competition when they finally changed their course with the iPhone X.
It’s just a different set of values. If you want the latest tech specs, like OLED or whatever else, then you’ll find that in other brands.
But if you prefer a system that just gets out of your way so you don’t need to think about it, and just use it as a tool to do something else then Apple is nearly unbeatable.
And I say this as a recent Apple convert. I reluctantly started using their tech a few years ago when a Macbook was the laptop given to me at a new job. But since embracing the idea of “meh I’ll just use the defaults and let the system get out of my way”, I feel like I get to spend more time on things where I add value.
I switched to an iPhone for the first time in 2020 with the SE, and it is still running fine today with no problems, on the latest software update. I don’t care about the screen-to-body ratio, I just want it to work well as a tool
I'm not denying they are great at other aspects, but that's a point which is independent of the points I made. Them using outdated design approaches in the past didn't make their products more reliable.
Fair, I guess I forgot to mention my point, so sorry about that haha
The point that I was trying to make is that a general Apple consumer doesn’t really care about those things, but does like consistency. So Apple waits until they are pretty sure that something is actually valuable to customers, before releasing it.
I can’t speak to screen sizes. I’m still not sure what the deal was there.
There was a reason for OLED. The first Android phones with OLED looked hideous. The saturation was all wrong and they used PenTile subpixel arrangements. Furthermore, I remember seeing pictures of burn-in on some (cheaper/off-brand?) OLED phones.
Apple has their priorities. And display quality is one of them. Look at professional reviews of their monitors and other displays like the iPhones and iPads. Whatever people think of the cost/features, they tend to be impressed with the display accuracy out of the box.
They waited until they could get enough OLED screens to handle their demand. But they also wanted certain levels of color accuracy, I assume non-pentile, and something that could handle avoiding burn in for the expected lifetime of the phone.
That must have taken time.
Screen to body: I suspect they knew they were getting close on the iPhone X design and wanted it to really stand out, so they stuck with the forehead/chin longer than others. Total guess though.
> They waited years longer than the competition to switch to larger screens. They only really got to it with the iPhone 6.
There's a lot of people upset they switched to larger screens! I suspect that Steve Jobs really wanted the phone to continue to fit in his jeans watch pocket.
Android phones generally had to be larger for engineering reasons. But larger phones sell well.
> They completely slept on OLED and stuck to LCD until the iPhone X, when all mid to high range Android phones had already used OLED for years.
The iPhone has always had great screens. A lot of the early OLED screens were simply not good enough, cheap enough, and/or available at high enough production.
The iPhone X was more expensive and meant to be a smaller run compared to the main consumer line. It needed a screen with a concave cutout. It makes a lot of sense that this was their first OLED iPhone screen.
IIRC, the iPhone X had much higher demand than Apple anticipated, and there were shortages due to the OLED screen.
Generally, android phones compare favorably because people will pick and choose the winning premium features and the lowest price from a set of Android phones, to compare against the single flagship iPhone. It's rigged.
Eventually people will be talking about how late Apple was to get into folding phones, when nearly all Android phones sold are still non-folding phones today, years after the first foldable launched - and foldables typically sell at over a $1k premium.
> They slept on the screen-to-body ratio. They were years behind their Android competition when they finally changed their course with the iPhone X.
Is replacing hardware buttons with an always-displayed button bar really changing the screen-to-body ratio?
Isn’t it the scale they need to supply it at? It’s next level. I mean, they don’t need to just implement a new tech but implement it in 200 million devices every year.
LCD is still superior for durability and max brightness. I'd still prefer LCD for expensive tablets, because a tablet will be used for over 6 years, unlike a phone. Early OLED phones were bad for burn-in. I've heard that the iPhone X's OLED is more prone to burn-in than later models.
Burn in for smartphone OLEDs has long ceased to be a problem. It certainly wasn't a problem anymore for the iPhone X, even if later models are even more durable.
Regarding tablets: I have a 2017 Galaxy Tab S3, very heavily used. The screen is still basically flawless. Physical wear is only visible when the whole screen shows a very dark grey uniform image, which happens approximately never.
The brightness isn't an issue either, in my experience. LCD does use less power on average, particularly on websites (which are typically bright) or PDFs, but smartphones have rather small screens anyway and tablets have a quite large battery.
It helps when you have so much power over suppliers they can't complain or cut you out when you overtly threaten to replace them with vertical integration.
I keep wondering what would happen if Apple were to license iOS. I know that's a sore spot (Jobs is dead though), but could they actually fuck over Android if they did?
A whitebox iOS could be downgraded enough that they could still service the snob market, but maybe undercut Google completely. I mean, how much does it even have to cost per unit when each sold drives traffic to their own app store?
Now I kinda want to see how badly Apple would reinvent androidx libraries. NSPermissionCompat would be a beauty to behold in 2045 when they finally decide to do it. Unless they don't and leave it up to the iOS community to (badly) do it.
And that’s a huge part of the problem. Any move like that would tarnish iOS in everyone’s eyes. Apple wants you to buy a real iPhone, even if it’s the model from 2 years ago, instead.
If BMW started selling a ton of $18,000 cars do you think that would help or hurt sales of the M3 and the 5 Series?
That bit them before, with the Mac clones. The competition undercut them. You know what that high Apple price does? It helps pay to develop iOS.
They would have to charge enough to make up for the phone sales they would lose. And seriously risk their reputation in the process.
Apple has been through a lot in their 40 years. They have an atavistic reaction at this point against anything that prevents them from controlling their own destiny. It’s hurt them too many times.
I suspect that’s why they’re doing this. They don’t want to be beholden to Samsung, LG, or anyone else for displays.
Despite what you're sure of, it's Apple that's stuffing your phone with more ads, and it won't let you use alternative apps without those ads on "your" phone.
I'm sure the CTA to pay for upgraded storage included in that warning is just a convenient coincidence, eh. I don't like the dark-patterness of it - it feels like blackmail.
Is MicroLED the be all end all for display technologies? Perfect blacks, high brightness, no burn in, no bezels, thin. It's hard to imagine another display tech displacing MicroLED once it takes hold. Does it have any theoretical downsides other than cost?
In theory, yes it's about as perfect as a screen technology can get. I'm sure earlier implementations will have quirks that make it a dealbreaker for particular applications/users, but that'll work itself out in time.
I'm looking forward to it. I'm happy with the OLED panels in my phones but with how long static elements are visible on my computer monitors and TVs I haven't been able to justify buying OLED for those with how even the best can still exhibit burn-in. Once there's microLED monitors and TVs with performance equivalent or better than current QD-OLED panels I'll be buying them immediately.
> I haven't been able to justify buying OLED for those with how even the best can still exhibit burn-in.
My oldest OLED TV will turn 6 in August and has exhibited no burn-in.
I think it ultimately comes down to what you're watching. You're likely to experience burn-in if you watch something with a persistent banner like 24 hour news.
For example, my 2 year old LED VA Panel monitor already has burn-in where the task bar is displayed.
Yup. When the kids were younger, we always had Disney Junior on. Mostly because I always like to have a bit of background noise (the kids didn't actually watch it that much) and it was kid-friendly. The Disney Junior logo burned into the screen.
The risk is lower but still present on my TV, which spends a large percentage, maybe even a majority of its powered-on time playing games with static HUDs. What I'm really worried about is computer usage… the Windows taskbar and macOS menubar specifically. Both can be set to auto-hide but I'd really rather not have to do that, particularly for the Mac global menubar.
And yeah, I've experienced image retention on non-OLED screens. The LG-made 2560x1440 IPS panels that used to get used in 27" iMacs would, after a few years, start exhibiting image retention, though it'd at least fade if the static elements that sat long enough to cause retention were hidden. I think this was caused by the heat generated by the computer part of the iMac though, because the Apple Thunderbolt Display that used the same panels never developed the issue even after a decade+ of usage. I also haven't seen it happen on any other IPS panels I've owned.
I haven’t experienced any burn-in issues with my OLED, or even the image retention which I sometimes see on my plasma.
I think for normal TV usage it is a non issue. The only times I have heard it being a problem is when someone leaves their TV on for a week for their cat or something.
It's a whole area of the screen. Shifting by a few pixels doesn't do anything for a whole area. It'll help only with thin text, but that's the element that tends to mostly change on its own anyways.
If you want to rotate the task bar or dock or menu bar around all four edges of the display every day then maybe, but that'll be hell on your habits and muscle memory.
I actually think there are no major issues apart from the enormous costs. But costs don't always get outweighed by higher performance. The reason why OLED-Displays were (and are) so successful wasn't just that they had better contrast than LCDs, it was also that they were not massively more expensive.
Compare that to Intel's Optane, the elusive PCM memory technology which finally arrived to take out NAND Flash. Sure, it was faster than Flash, but it was also massively more expensive. It wasn't worth it for most people. So Intel recently discontinued it.
The same could happen for Micro LEDs. It isn't clear whether their quality advantage over OLED is worth a much higher price. Their main advantage over OLEDs (higher max brightness) doesn't seem too relevant anyway.
> Compare that to Intel's Optane, the elusive PCM memory technology which finally arrived to take out NAND Flash. Sure, it was faster than Flash, but it was also massively more expensive. It wasn't worth it for most people. So Intel recently discontinued it.
Intel Optane is barely faster than flash — roughly 3x (for 4k random read) compared to the fastest Samsung SSD [1].
Optane was a failure because it failed to deliver the promised result. Intel would have never poured money into a new technology only three times faster than an existing technology. I recall initial promises were RAM-like speeds.
If Apple is able to spit out 200 million of these screens in a year, I have a very hard time imagining what ingredient could go into the production causing a greater per unit price than a Samsung display.
> Intel Optane is barely faster than flash — roughly 3x (for 4k random read) compared to the fastest Samsung SSD [1].
... and the maximum brightness advantage of micro LEDs may be similarly (un)impressive.
> Optane was a failure because it failed to deliver the promised result.
No, it only failed because the performance/price ratio wasn't good enough compared to NAND flash. That's exactly what I was saying: Both performance and price have to be considered.
> If Apple is able to spit out 200 million of these screens in a year, I have a very hard time imagining what ingredient could go into the production causing a greater per unit price than a Samsung display.
Replace "Apple" with "Intel", "screens" with "Optane disks" and "Samsung Display" with "Samsung Semiconductor", and you see that this argument doesn't work.
uLED will have one major victory over OLED - longevity of the display.
Source: Where I work is tooling up to start producing uLED products for SLA 3D printing. UV uLED + LCD filter all in one. I'm having to teach them how to utilize LIFT.
Higher max brightness is very relevant. Most OLED TVs in recent years max out at something like 800-1000 nits. Dolby Vision content can be mastered at up to 10,000 nits I believe, and microLED TVs could provide that brightness.
There's something incredibly realistic about highlights being super bright, I think it's going to look more like a window than a TV.
HDR LCD TVs also had significantly higher maximum brightness than OLED TVs, but that advantage apparently didn't matter much compared to the finer contrast and lower black levels of OLED. I think smartphone LCDs were also brighter than OLED displays. It's then questionable whether the better maximum brightness of micro LEDs will outweigh a much higher price.
More "if" rather than "when". It's far from obvious whether they will be successful. They just have a higher maximum brightness, and a large cost cut is not guaranteed.
The only technology I've heard of that might outperform it is directly-driven nanorods, something Samsung has been experimenting with. These are basically the same thing as "quantum dots", except that instead of small round crystals they are rod shaped. Apparently if they're aligned and excited by an electric field, they glow with a pure colour. This makes them very similar to microLEDs, but potentially much cheaper to manufacture.
However, other than some breathless press-releases a few years ago I haven't heard anything happening recently, so it's possible the technology didn't pan out.
Super cool, I asked Master GPT-4 to teach me about these nanorods and they sound super interesting. Exciting that they could be used for flexible/transparent screens and are lighter on battery, that would make AR devices much more seamless.
Why can’t they reduce voltage to each pixel to reduce brightness? I don’t see this as a fundamental limitation of the display technology, PWM is just easy and cheap to implement.
LEDs tend to change emission wavelength when changing the current. This is quite an issue if you want to combine them in an RGB display, because the human eye is extremely sensitive to relative color variation.
I'd love a source for this claim (not sarcastic, I really would). I've done a lot of testing of LEDs for scientific uses, and my experience and what I've read show that temperature is what affects the center wavelength of LEDs, not current/voltage. The reason for this is that in monochromatic LEDs (so not white LEDs, which have a phosphor coating) the emission wavelength is defined primarily by the bandgap in the semiconductor material. This bandgap is the difference in energy between the valence electron band and the conduction electron band (and this band "gap" is the reason for the "semi" in semiconductors).
This bandgap corresponds to the photon energy of the emitted light as electrons get excited due to the applied voltage as electrons are excited to the conduction band and then relax back to the ground state giving off light.
The bandgap energy changes as a function of temperature. The primary reason for this is that the lattice constant increases as temperature increases. This causes the bandgap to decrease, meaning the energy of the photons is lower, giving a longer wavelength.
The opposite effect is also true, cooling a LED will lead to a shorter wavelength. Here is a cool video showing the effect![1]
Increasing the current through the LED may change the temperature by a little bit but you need large temperature changes to have any effect.
The temperature has a much greater impact on the intensity of light emitted by the LED. I have seen typically a 1% decrease in intensity per degree C for the LEDs I have tested. This is the effect that matters most when using RGB LEDs: if the red LED gets dimmer (because it is hot) than the green or blue, it will be seen as a color change, even though the center wavelength of its emission is unchanged.
I mostly just wanted to share things I have learned about LEDs over the past year or two and your comment gave me a good opportunity!
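If it helps to see the numbers, here's a rough sketch of that temperature dependence using the Varshni relation - the constants below are illustrative, roughly InGaN-like values, not measured data:

    // Temperature -> bandgap -> emission wavelength, per the explanation above.
    // Varshni relation: Eg(T) = Eg(0) - a*T^2 / (T + b)
    // Constants are illustrative (roughly InGaN-like), not authoritative.
    const Eg0 = 2.75;    // bandgap at 0 K, in eV
    const a = 9.1e-4;    // eV/K
    const b = 830;       // K

    const bandgapEv = (tempK: number) => Eg0 - (a * tempK * tempK) / (tempK + b);
    const wavelengthNm = (egEv: number) => 1239.84 / egEv; // E(eV) = hc/lambda, hc ≈ 1239.84 eV·nm

    for (const T of [300, 340, 380]) {   // ~25 C, ~65 C, ~105 C junction temps
      const Eg = bandgapEv(T);
      console.log(`${T} K -> ${Eg.toFixed(3)} eV -> ${wavelengthNm(Eg).toFixed(1)} nm`);
    }
    // The emission creeps a few nm longer as the junction heats up - the
    // temperature effect described above. Per that explanation, drive current
    // matters mainly through the junction heating it causes.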
While not a source I would wager that the LEDs are already driven close to the bandgap (for efficiency's sake) and meaningfully lower voltages would cease to produce any light output - necessitating PWM to control brightness.
I could plausibly see some color shift at close to bandgap voltage if there isn't a perfect uniformity in bandgap across a diode, inconsistent or even just gaussian distributed doping would result in some holes being preferentially excited if there isn't a sufficient surplus V?
> Digital pulse-width modulation is well-suited to driving microLED displays. MicroLEDs experience a color shift as the current magnitude changes. Analog schemes change current to change brightness. With a digital pulse, only one current value is used for the on state. Thus, there is no color shift that occurs as brightness changes.
I was commenting based on my recollection of this article. Granted it's for AMOLED, so it may not apply to microLED.
In this regard, AMOLED displays have a strong disadvantage. If you feed less voltage to the organic diodes, not only do they limit their brightness, but their color also changes, so that there might suddenly be visible differences in the color reproduction.
Is there a reason they couldn't just jack up the PWM frequency? Most OLEDs seem to be under 500 Hz[0], which leads me to believe that there's something limiting them from performing any faster. Basic LEDs on the other hand can easily operate at frequencies in the 1-5+ kHz range and can be pushed very far if the entire system is designed well. A display running at 10 kHz might not be noticeable to even the most sensitive people.
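One scaling consideration - just a sketch of a plain counter-based PWM driver, not a claim about how any particular display driver IC works: the faster the PWM and the finer the dimming steps, the faster the timing logic has to run, and that cost is multiplied across the whole panel.

    // Clock needed by a simple counter-based PWM driver:
    // 2^bits distinct duty-cycle steps per period requires 2^bits counter
    // ticks per period. Real display drivers differ; this only shows scaling.
    const requiredClockHz = (pwmHz: number, bits: number) => pwmHz * 2 ** bits;

    for (const pwmHz of [240, 500, 2000, 10000]) {
      const mhz = requiredClockHz(pwmHz, 10) / 1e6; // 10-bit dimming
      console.log(`${pwmHz} Hz PWM @ 10-bit -> ${mhz.toFixed(2)} MHz counter clock`);
    }
    // 500 Hz -> ~0.5 MHz, 10 kHz -> ~10 MHz: trivial for a single LED, but a
    // panel has to do the equivalent for millions of pixels within each frame.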
An LED's central wavelength changes with current. Some flashlight enthusiasts will not touch current regulation and prefer to use PWM, because that means the color output does not change. From what I understand you can even use this effect to calibrate a diode laser to a specific wavelength (within reason).
This is not really a problem; you can calibrate away the differences on a pixel-by-pixel basis on each frame with modern mobile GPUs.
The bigger challenge here is pixel architecture, but if apple is actually slicing up wafers into a couple million pieces to build these displays, they are already sort of moving away from the typical TFT architecture and may be able to integrate more complex pixel drivers, potentially including things like touch sensors directly onto the pixels.
It is not possible to calibrate it away without knowing what the central wavelength of the LED is. That would require a spectrometer, and you would have to build one on-chip per pixel, which is currently not possible/practical.
I don’t understand what you mean with the GPU. It has no information about the exact color of the LED.
This is equivalent to saying that you can't build a color-accurate display at all because you don't know the central wavelength. Not only is this inaccurate (LEDs are binned for exactly this purpose), but brightness variations are by far the greater contributor to display inaccuracy.
The shift in wavelength is primarily determined by temperature and current, and they work in opposite directions so sort of cancel each other out. And in any case, we're talking about well-characterized shifts on the order of a few nm over the operating range. The eye's cones are broadband, so you're not going to notice wavelength shifts, especially compared to the brightness variations over the same range.
This is a big deal for white LEDs because you have no control of the resulting color temperature (the phosphor emission and blue component wholly determine the output), but for an RGB structure, you have pixel-level control over each component.
I am sure you're right about the eyes. But I would posit that most people wouldn't know uncalibrated from calibrated anyhow. So it seems like a moot point.
As to binning LEDs, that works because it is constant. You can calibrate it once and be done. But if you change the brightness by changing the current, your calibration is out of whack. Perhaps you could make a calibration at multiple current settings, but that seems inconvenient when using PWM will achieve the same thing.
I'm sure you could calibrate a compensation in theory (don't know about in practice), but that would also necessarily decrease the display color gamut, no? It's not like you can produce "all" the colors from any three primaries -- they have to be very specific.
So if the color shift is noticeable enough to require correction, then it's definitely enough to substantially decrease the color gamut as well. And so a range of wider-gamut colors simply can't be compensated for at all.
> manufacturing limitations mean you can't easily have those per-pixel while still keeping the whole thing cheap
“can’t easily” seems to imply it is possible. If that’s true, Apple, with its deep pockets, should be able to do it.
Also, I don’t think Apple will be bothered much with “keeping the whole thing cheap”. They will want to prevent it from getting expensive, but likely will accept intermediate costs if the result is much better.
Seems to be a limitation of our modern times rather than the display technology. I'd hope to see PWM free displays in the future as our micro LEDs improve.
For VR headsets the real buzz is around MicroOLED not MicroLED. Samsung just bought a small MicroOLED manufacturer with a technology that offers much higher brightness than other methods, which is important for virtual reality headsets.
I’ve read somewhere that one potential issue is that due to microLED color primaries having more narrow spectral bands, individual variations in color perception become more pronounced, leading to differences in how people perceive the colors of microLED displays.
In addition to flicker (discussed in sibling comments), I imagine they have problems with power + glare just like any other non-reflective display technology.
OLEDs have almost perfect Pixel Refresh Latency (< 1.5ms). MicroLEDs are still just LCDs with a better backlight but LCD Tech has evolved quite a lot and pixel refresh time (< 3-4ms) is not much of a problem anymore.
You may be confusing MiniLED and MicroLED. MiniLED still uses an LCD with multiple backlights and local dimming algorithms to improve contrast. MiniLED is pretty impressive and the quality of it is generally 1-to-1 with the number of backlights/dimming-zones.
MicroLED is more akin to OLED where each pixel is self emitting. MicroLED is almost the holy grail of displays and will more than likely obsolete MiniLED.
> MicroLEDs are still just LCDs with a better backlight but LCD Tech has evolved quite a lot and pixel refresh time (< 3-4ms) is not much of a problem anymore.
You're thinking of MiniLED, which is just better LEDs behind an LCD.
MicroLED is a whole different ball game, with no LCD at all. It's just red, green, blue LEDs, one for each pixel.
One exciting thing about microLEDs is they can achieve much higher pixels per inch. Before Apple’s big Retina push, PPI was around 80-120. It went to around 200-300 with Retina. Researchers making microLEDs in the lab have gone as high as 6000 PPI, and something in the low thousands seems quite plausible for commercial technology. Apple could conceivably bring in “Double Retina” at 800 PPI, or they could go straight to “Double Double Retina” at 1600 PPI.
It’s often claimed the human eye can’t see above 300 PPI; this is even more misleading than the claim that the human eye can’t see above 30fps (we all know there’s a distinct visual difference between 30fps and 60fps, and most people are able to detect the difference between 60fps and 144fps as well). The PPI limit of the eye depends on how brightly lit the subject is and how far away one is viewing it; for a real upper limit you might use something like the maximums of https://lobsangstudio.com/ls_pixel.cfm which says at 6 inches viewing distance under 10,000 lux (bright sunny day), the eye could go as high as 1200PPI. The fact that microLEDs offer the potential to go beyond even the maximum of generous estimates in best conditions is very exciting, it’s like Formal Proof version of Retina rather than the current Good Enough.
The point of Retina is that the pixels aren't visible at normal viewing distances. You really shouldn't be using any (non-VR) screen 6 inches away. At 18 inches the effective PPI would be 400, so not much more than retina even at insane brightness.
The Retina value is “for a specific visual acuity (usually 20/20), at a specific viewing distance (usually 18 inches), at a specific brightness (usually indoor office lighting), individual pixels cannot be distinguished”. If you have better than 20/20 vision, you need a higher PPI to achieve “Retina” (individual pixels not distinguishable). Chuck Yeager, the fighter pilot guy, famously had 20/10 vision, and it’s estimated up to 1% of the population might have 20/10 vision as well. It is apparently quite common among elite athletes to be 20/15 or better.
Viewing distance also varies a lot - I recall reading somewhere the empirical observed viewing distance for smartphones is between 6 and 15 inches sitting and between 4 and 9 inches lying down. Environmental brightness also varies, obviously.
Just to give a personal example, my near vision was measured at 20/16 with correction, I am currently holding my phone 7 inches from my eyes, and I have my room lit with four 1500-lumen bulbs (to simulate daylight). I don’t know what the correct PPI would be there, but it’s obviously going to be higher than the usual amount.
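For what it's worth, here's a rough way to put a number on it, using the usual convention that 20/20 vision resolves about one arcminute (the linear scaling for other acuities is a simplification):

    // PPI at which individual pixels stop being resolvable, assuming 20/20
    // vision resolves ~1 arcminute and acuity scales linearly (a simplification).
    const ARCMIN_RAD = Math.PI / (180 * 60);

    function retinaPpi(viewingInches: number, resolvableArcmin = 1.0): number {
      const pixelInches = viewingInches * Math.tan(resolvableArcmin * ARCMIN_RAD);
      return 1 / pixelInches;
    }

    console.log(retinaPpi(12));      // ~286 PPI: the classic "~300 PPI at 12 inches"
    console.log(retinaPpi(7, 0.8));  // ~614 PPI: 20/16 vision at 7 inches
    console.log(retinaPpi(6, 0.5));  // ~1146 PPI: 20/10 vision at 6 inches

So even by this crude estimate, my own setup would already benefit from something north of 600 PPI.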
In addition, it’s not clear that “cannot distinguish individual pixels” is even the same thing as “no more visual quality can be gained with higher pixel density”. At 30fps the human eye can’t distinguish individual frames, but 60fps still has visual benefits over 30fps, and many people are able to consistently detect the difference between 60fps and 144fps. Past 144fps is when you get to “no more visual quality can be obtained by higher frame rates” (as opposed to proxies like “individual frames are not distinguishable”). I suspect a similar thing is true for pixel density.
But as I said originally, I’m excited that microLEDs might make it viable to have screens with a pixel density not just “beyond what’s reasonably good enough in most circumstances”, not even “beyond what’s detectable by most people in all but the best circumstances”, but in fact well into “provably beyond the limits of what the best human eyes can detect in the best possible conditions at impractically close viewing distances”.
> It’s estimated up to 1% of the population might have 20/10 vision as well. It is apparently quite common among elite athletes to be 20/15 or better.
On the standard eye test chart where 20/20 vision is the ability to read the bottom row, I could usually read the copyright fine print below the 1-inch high block letters!
To me it always felt like that is normal vision, and 20/20 is taking an average that includes a bunch of people with poor eyesight. Like... the average person has less than 2 eyes, less than two legs, etc...
The history of 20/20 being considered normal is quite interesting, essentially it comes down to being simply what Herman Snellen defined as normal (on the plus side, he had a few years practice as an ophthalmologist before he invented his chart and it was based on standardizing several other charts in use, so it was based on a lot of experience; on the minus side, it was invented in the 1860s).
It’s been a while since I read up on it but my recollection is that the healthy eye in a young patient (“normal” in the sense of “lacking any deficiencies whatsoever”, not in the sense of “average”) should be around 20/12, with natural variation between 20/10 and 20/16, and 20/20 indicating (extremely minor) deficiencies. The theoretical perfect eye from an optical standpoint, something like if you were to grow an eye from the same materials as a real human eye but you could specify exact dimensions etc., is something like 20/7 (as in, you can’t get better optical resolution than that given the optical properties of our lenses and the distribution and size of our light receptor cells).
Eyes are extremely complicated structures that are either very sensitive to environmental conditions, or that the body is very willing to cut corners on given the slightest provocation, so few people get that “perfect” vision.
There is one very good reason for considering 20/20 to be “normal”, though - since it has been accepted as the standard for over 150 years, most of the human-designed environment has used it as a yardstick for designing visual elements such as roadsigns. The size and shape of roadsign lettering and their placement relative to the road feature they’re describing is all according to a standard that was worked out on the assumptions of (some multiple of) “normal” 20/20 vision, for instance. Likewise for just about every other thing you have to read. So it makes a lot of sense to use that as the threshold. In this sense perhaps it helps to read “normal vision” not as “a normal eye” or “the normal acuity among average healthy humans” but rather “the vision threshold needed to interact normally with the human world”.
Assuming that 20/16 is your distance, low-light correction, you may be overcorrecting. Your eye needs to accommodate away that correction during close vision, which can cause eye strain and (EndMyopia conspiracy theorists claim) axial elongation and worsening vision.
Full disclosure: I’m an EndMyopia conspiracy theorist.
I hadn’t heard of EndMyopia but it sounds plausible to me. I have been myopic around -6.5 diopters left eye and -5.5 diopters right eye (with astigmatism) for almost 2 decades now. I discussed eye health and preventing worsening / potential training to improve vision naturally with the optometrist (“consulting” optometrist i.e. not part of a glasses/spectacles store, their end product is the prescription values for your eyes). We settled on 20/16 near vision and “20/30ish” far vision (20/30 line easy, 20/20 all correct but with slight effort / hesitation) as a good compromise for my lifestyle - essentially, something that would give me adequate vision for driving and optimal vision for working at a computer. I use soft contact lenses because I was told these use more of the eye muscles for natural accommodation than hard lenses or glasses, thus keeping the eye healthier. I was told to keep a close eye (heh) out for any fatigue or eye strain and I haven’t experienced any of that (although I also didn’t experience it with glasses, so it might not mean much). I had four visits in total over about five years and we saw no change so the optometrist said she was comfortable that my eyes weren’t deteriorating. All of this is over a decade ago and I’ve just been using the same prescription since with no issues, so my memory might be incorrect or some of the science of what she told me might now be out of date, but I haven’t experienced any issues so it seems to be working for me. Not medical advice etc.
How did I just randomly find this? For the best conspiracy theories EM is based on, search Google Scholar for the clinical research: scholar.google.com, type in 'pseudomyopia', also try 'NITM', and for the most fun, 'lens induced myopia'. That doesn't make EM right, but the optometrists don't tell you what actually causes myopia.
> The 10600 pixels over 20 inches corresponds to 530 pixels per inch, which would indeed appear very sharp. Note in a recent printer test I showed a 600 ppi print had more detail than a 300 ppi print on an HP1220C printer (1200x2400 print dots). I've conducted some blind tests where a viewer had to sort 4 photos (150, 300, 600 and 600 ppi prints). The two 600 ppi were printed at 1200x1200 and 1200x2400 dpi. So far all have gotten the correct order of highest to lowest ppi (includes people up to age 50). See: http://www.clarkvision.com/articles/printer-ppi
I have no idea why everyone forgot how shit 300dpi printers were and how much crisper 1200dpi looked.
> the last two generations of Mac Pro systems have been built in their own plants
As far as we know, Apple doesn’t own any of its manufacturing directly. The Mac Pro plant in Texas is operated by either Quanta or Flex[1], depending on who you ask.
You could argue that it’s largely a semantic difference, though, because those facilities are dedicated entirely to Apple and they’re usually the ones funding their construction.
Sorry for my naivety, but how does a company keep memory of the processes and oral knowledge if they don’t own the factory? Is everything specified enough in requirement documents and process sheets that “factory providers” have a clear guide to implement their factory?
And does that really save on CapEx given that Apple funds the construction anyway and secrecy is absolute as well?
This situation lets the contract between Apple and the fabricators be results-based: Apple can focus on Apple things, and the manufacturer has the burden of spending the factory budget and getting results.
This helps keep crisp lines of delineation between design and production.
That is my understanding.
The processes that Apple is outsourcing are considered disposable as the technology changes. The oral traditions and cultural knowledge are in the software/hardware/design ideals.
Which step are you counting as "manufacture"? They will have a lot of partners here in the supply chain of this component; ams-Osram, LG, and TSMC according to the article.
In terms of Apple being "hands on" in manufacturing I don't see this as too different from the Apple Silicon case, where they design but work with partners in Taiwan+China to manufacture.
I see the big news here as "moving away from Samsung", where they would likely have to continue to invest in a competitor (money and IP) in order to get microLED technology through them.
Continuing to work with Samsung (or LG, etc) means as soon as Apple gets a cool new display tech, so does everyone else. Because they don’t control it.
If Apple controls it all and they get it working, they can have an advantage until everyone else figures it out on their own without infringement. Even a 2-3 year advantage (like the A series gave them) would be huge.
Good time and place to start too- the smallest screens of any product they produce and only the high end (and margin) models. Always better to start small. Can work out the kinks before going wide.
Apple probably has several engineers embedded in TSMC and Samsung's teams. You may not have a fab, but I don't think that it is likely that you can build an advanced CPU in the newest process without having a few teams directly involved in the manufacture.
They have designed their own SoCs and their own BLE radio, and are now working on their own cellular and Wi-Fi radios.
I understand displays are another beast, but Apple is doing complete vertical integration like no one else. Anytime they notice a dependency on a specific vendor, they try to move away from it.
The sapphire glass used on some devices was reportedly made in an Apple-owned factory, on Apple's terms, using equipment Apple paid for. The actual production was handled by a third party using that space, which later went bankrupt, but I'd say it counts.
This would be the most significant component they've manufactured in-house though.
That doesn't count, because the other company had designed the process but didn't have the capacity to produce at Apple-level volumes. Apple paid for the expansion, and then ditched the company's product (which led to the bankruptcy).
They definitely have assembled them but I don’t think they ever felt the need to manufacture because the infrastructure for doing that for various types of parts is so strong. This is probably just another step in vertical integration and hedging against their supply chain being vulnerable.
I believe the supplier for iOS displays in general has been Samsung for a while (majority) with a few other suppliers mixed in. Maybe they want to keep going with this technology and Samsung is moving away from it.
They’re not going to be fabbing their own chips so maybe it’s an attempt to shore up other areas that are less moated.
At this point there isn't any significant infrastructure for building microled displays anywhere. Why invest in a competitor when they would do the same exact thing you would, but they would own it?
"The company was founded in November 1990 as Advanced RISC Machines Ltd and structured as a joint venture between Acorn Computers, Apple, and VLSI Technology."
All the latest node tech from TSMC was co-financed by Apple, which in turn got first-use rights.
But the ARM architecture was originally an Acorn internal project (it was originally the Acorn RISC Machine), with the first Acorn Archimedes systems released in mid-1987. As you noted Apple did invest in the project later, when ARM the company was founded, but they also bailed out in the late '90s when they had their own financial issues.
> The company was founded in November 1990 as Advanced RISC Machines Ltd and structured as a joint venture between Acorn Computers, Apple, and VLSI Technology.
The company was renamed to Advanced RISC Machines to placate Apple - the first ARM processor was the Acorn RISC Machine, which tells you its true heritage.
Sure, if we just ignore all the billions that the Taiwanese government poured into TSMC, and all the stock investors' money, and ignore the other clients TSMC had, and the fact that they got to 20 nanometers without any funding from Apple as far as I know, then I guess you are right.
They don’t like being beholden to anyone. It’s why they make their own chips. Their own chipsets. Their own storage controllers. They’re working on their own radios. Their own fingerprint sensors. Their own facial recognition. Their own LiDAR. Their own image processing chips attached to the camera.
They often contract out actual manufacturing. But they design and control everything they can, being very hands on with manufacturing.
On the iPhone the screen and the camera sensors are the two big things they don’t control.
The Samsung angle absolutely doesn’t help. But if they were using LG I think they would still do this. It’s the direction they’ve been heading in for years.
This is history repeating itself, like the "frowned-upon", complex conglomerate structures in Japan and South Korea. Samsung is an example most are familiar with. At least Gen Z and millennials get to see how one is born.
Putting this facility in Taiwan seems risky, considering the political situation. I heard Warren Buffett just sold his TSMC stock out of caution over it.
I still don't understand why China would possibly want to try to invade Taiwan. The chip foundries would immediately be scuttled, and whatever was left would be a nice, but simple, island.
In exchange for taking that land mass, their entire international trade business, which mainland China is extremely reliant on, would be in jeopardy.
It would almost be political suicide to invade as far as I see it. But I'm no expert.
There's a lot of people who ask questions about history or geopolitics that amount to, "why would <country> pursue <goal>?" when <goal> is a central ideologically motivating factor for <faction-that-controls-country>. The classic example being Germany's 1941 invasion of Russia--often derided as a strategic blunder in hindsight, except the rulers of Germany in 1941 had the conquest of Russia as their primary geopolitical goal in the first place. Likewise, the existence of Taiwan outside of CCP control is contrary to the goal of unifying all of China under CCP control.
> In exchange for taking that land mass, their entire international trade business, which mainland China is extremely reliant on, would be in jeopardy.
The thing is most of the world is also pretty reliant on international trade business with mainland China.
The question is really about how willing the world is to disrupt their own economies in order to disrupt China's.
Trade barriers against North Korea and Cuba are basically free for the country imposing them; restricting access to energy imports from Iran or Russia imposes a real cost on the country raising the barriers; but even after decades of grumbling it's hard to find a supply chain for manufactured products that doesn't have a significant dependency on China. That makes it hard to fight a war with China, trade or otherwise.
> hard to find a supply chain for manufactured products that doesn't have a significant dependency on China. That makes it hard to fight a war with China, trade or otherwise.
You don't ban those products, you put big tariffs on them. That way, anyone who has the option to source stuff elsewhere does so.
And you deliberately act erratically with tariffs - adding and removing them at short notice - so that lots of companies think it is just too risky to trade with China, and move manufacturing elsewhere as insurance.
How do you deduce that the foundries will be scuttled? I'm not saying they won't be. But I can foresee a world where too many people care about these foundries, regardless of which political party is in charge, and would prevent them from being scuttled. And also, China might have insiders in Taiwan who could prevent the scuttling and make sure they are preserved.
It doesn't matter how many people don't want the foundries to be "scuttled"; if enough people do, and they have the means to do so, it will happen. It might be the Taiwanese military, or an allied military, that actually does it.
It wouldn't take many disgruntled employees to scuttle it themselves. It's much easier to ruin delicate expensive equipment than to preserve it in working order.
These workers have opinions, and their opinions might not be the same as yours. They might not be as pro-Western as you may believe. They also have imagination and can imagine a world where they are better off under China with this equipment working than under China without this equipment working. Either way, the point is that it's not a sure thing that this equipment would be scuttled.
> better off under China with this equipment working than under China without this equipment working.
This. If an enemy is invading your country and looks likely to succeed, then the important thing for you as a citizen isn't defending your homeland - it is making sure you have a future in the new country under new governance. That might well involve being the guy with all the knowledge to get the chip fab going again.
The US hasn't lost many wars in their homeland, so most citizens haven't really thought of their options...
There aren't any such guys. Running TSMC is a stack of many different levels of sole suppliers from different companies (US, Netherlands, Japan, even sort of Ukraine for neon.)
It wouldn't take any. Military action against Taiwan would lead to sanctions at a level never seen before. Without Western support in the form of raw materials, spare parts, and intellectual property, the hardware at TSMC is fit for nothing but whatever the modern equivalent of Weird Stuff Warehouse is.
I guess they could raise a few yuan by selling the DUV and EUV machines on eBay. "Removed from a working environment." That'd be about all they could do with it.
It would be hard for China to maintain, and the whole time they were trying to source parts and find people able to maintain it, they wouldn't be able to get the newest equipment, so they'd start falling behind again.
The US could even bomb the factories to deny the resource.
Taiwan relies on external materials as much as or more than Manhattan does. They have at best under a month's worth of food on the island, and other raw-materials supply is measured in days.
One thing people miss with the china/taiwan discussion is that China doesn't need to make a single violent move towards Taiwan to subdue it. A naval blockade would force Taiwan's capitulation in almost no time at all.
Even a forceful blockade wouldn't be totally necessary, if China just passed any sort of sanction stopping the export of materials or food to Taiwan, prices would immediately become unsustainable on the island and it would also have to capitulate.
Assuming the Chinese manage to occupy Taiwan, they would no doubt want to take over TSMC’s facilities and hold them as a bargaining chip to normalize relations with the West. They’d swiftly run out of essential components such as silicon wafers (from Japan), but the CCP might just try their luck.
There's the small matter of the gigantic trove of treasure.
The Kuomintang took huge amounts of priceless Chinese artifacts when they fled to Taiwan, which are now in the National Palace Museum[1]. There's a case to be made that the CCP was on an iconoclastic spree and there was good reason to take the artifacts to Taiwan for safekeeping, but the fact remains that vast amounts of priceless Chinese relics are now in Taiwan. China would like them back and is willing to do more than ask politely. Taiwan can blow up the fabs, sure, but they're probably not going to blow up the relics. Chinese leadership would gain a historic amount of face by bringing them home.
I'd say a reason, not necessarily the reason. Taiwan rubbed China's face in it for decades, it seems like a long-held grudge:
> During the 1960s and 1970s, the National Palace Museum was used by the Kuomintang to support its claim that the Republic of China was the sole legitimate government of all China, in that it was the sole preserver of traditional Chinese culture amid social change and the Cultural Revolution in mainland China, and tended to emphasize Chinese nationalism.
Even though, if you read further down in the article, there's not as much acrimony about it in recent years. Still, China has had dreams to invade Taiwan for decades, mostly for nationalist reasons, and this is likely one of the reasons that's higher in the list.
If China managed to have their oceangoing trade sanctioned by the US and their allies, who are they trading with? Russia? The Middle East? Both of which the country shares enormous land borders with. Not to mention that this would involve the US crushing its own economy in order to create the sanctions. About the only thing that could trigger a response like that would be an invasion of an allied country...
As far as a casus belli it is a stretch at best. The kind of thing you have to put forward because the population at large wouldn't accept your real reason.
The fact that Taiwan exists disrupts the official narrative that there is only one China and its ideology is Maoist communism. Reunifying China is thus kind of a priority.
No idea why you're being downvoted. As the owner of a Russian passport, I was even in Kiev in December 2021 and talked with a lot of friends and ordinary people there at a gamedev conference. There had been a lot of tension for the 8 years since 2014, and even more during 2021. Yet regardless of political views, no one in either Russia or Ukraine seriously considered a full-scale war possible.
Conclusion: there's no way to know what is inside the head of an aging autocrat.
> regardless of political views, no one in either Russia or Ukraine seriously considered a full-scale war possible.
Plenty of experts have been calling it for a decade now... EU/US influence has been slowly creeping east for 30 years. Russia had no other options really - as soon as a pro-West leader was elected in Ukraine, war was inevitable.
And the same will happen in Belarus when the leadership changes there too.
Have you visited any countries bordering Russia - Ukraine, Georgia, Azerbaijan? If you do, you'll see quite odd things...
You see children's playgrounds that have a big sign saying 'Proudly sponsored by the USA development fund", or "This bus route is made possible by the EU funding.".
Why would the USA be using US taxpayer money to build children's playgrounds in Azerbaijan?
It's all about a long term battle for hearts and minds... Be the 'good guys' to a 5 year old child now, and in 30 years that child is a voter and you can have influence over the whole country.
Pretty smart moves... But I can see how Russia isn't happy about it.
Now do Yemen, Iraq (first and second time), Pakistan, Libya and Syria. Then do Vietnam, then consider the support and armament of Israel and finish up with decades of CIA meddling and installation of brutal dictatorships all around the globe.
We have politicians here that voted in favor of those war crimes, demanding that Russians be punished for theirs. Let's start at home if war crimes are something we care about; if not, we can shut our hypocritical mouths.
The end of the Nazis and the Holocaust doesn't become a bad thing just because the Soviets were involved. Helping Ukraine resist a clear war of aggression doesn't become a bad thing just because the US is part of that helping. Hell, it's nice to see most of the world chip in on a more morally black-and-white scenario than is typical over the last 70 years.
If you think Russia and the US are on equal moral footing with regards to the Ukraine question, you're the @dril tweet.
It can be accurate, but she is also totally within her rights to not listen to him. "Do as I say, not as I do" has never been effective. If we want less war, we need to set an example, not raise the share price of our private defence contractors at the expense of civilians while wagging our finger.
Also the powers that be in the US want a war in Ukraine as much as Putin does. Probably moreso. They are selling a lot of killing machines.
> It can be accurate but she is also totally within her rights to not listen to him.
She might not be wise to. An obese doctor advising an obese patient to lose weight because of health problems still has a potentially valid point. The obese doctor might even have more useful perspective than another doctor on the issue.
> If we want less war we need to set an example...
I'm pretty sure that's what we're doing in Ukraine. China certainly would be wise to consider what a Taiwan invasion would look like if the US and allies decided to assist in a similar fashion.
> Also the powers that be in the US want a war in Ukraine as much as Putin does. Probably moreso. They are selling a lot of killing machines.
Sure. Ted Bundy and I both like to eat; that doesn't automatically make eating bad. This is precisely why we have to evaluate the actual specific scenario in question on its merits.
> I'm pretty sure that's what we're doing in Ukraine.
If we wanted to set an example we would declare war on Russia. Instead we send billions of dollars into the unaccountable void that is the most corrupt country in Europe.
Look man, if you really believe in this stuff, the military is really struggling to find recruits right now. A lot of people are disillusioned with the forever wars as their nation's infrastructure crumbles around them, costs rise to unsustainable levels, and their streets are full of homeless people. They watch the VA fail to serve veterans and remember the lies they were told by the elderly politicians who still haven't been replaced. You, on the other hand, appear to have strong convictions about this conflict; rather than costing my daughter her future via debt spending, are you willing to be the boots on the ground to make sure justice is served?
I agree. Once Putin had Trump there in power, he sensed that the country was weak and decided to take action, thinking that the Americans were too preoccupied with trying to bring some semblance of government and international relations back to order once Biden won the election.
Problem was that the Biden admin actually has experience and capabilities, and the Ukrainians decided to fight for their country and now Russia is finding out that they made a pretty poor move based on faulty assumptions.
> We have no leg to stand on
1) At some point we get our legs to stand on back - supporting Ukraine actually has gone a long way in the eyes of Europe and allies in the Pacific.
2) Not having a leg to stand on doesn't matter because we can do wrong things and then also criticize others when they do wrong things.
3) EU member states and other nations have legs to stand on and they also support the American position on Ukraine.
I live in Ukraine and never have seen a US-funded playground. As a father of two, I’m a frequent visitor of playgrounds. Also have never seen a “funded by EU” sign. I’m sure there are some cases with EU-funding, as I’ve read about a couple of cases in the news, but it’s not widespread at all. I’ve only seen such signs in EU proper (e.g. Lithuania).
The whole premise “someone funded a playground in my neighbor’s backyard, I have no other options. Bomb them NOW!” is beyond ridiculous.
They could also fund our playgrounds, if it’s as effective, as you claim. In any case that would be muuuuuch cheaper than starting a war with hundreds of thousands dying, sanctions and all that.
That’s the thing: taking Ukraine gives Russia several advantages. Ukraine produces a lot of the agriculture in that region, so taking that gives them another large bargaining chip in their sphere.
Not to mention, it’s a large landmass that offers a huge buffer between the NATO countries and Russia.
And it’s got another 40 or 50 million people for Russia, which has a population problem.
And it’s got iron and steel exports…
Taiwan has TSMC. And I suspect the US has a bunch of fueled up C-17’s ready to take all the important infrastructure to the US and plans in place to effectively destroy the rest at a moments notice.
I agree, I don't think it'd help Putin or anything; I'm just saying that if they took Ukraine, he'd add another 42 million people to the 133 million he already rules over. That's about a third more people to rule over, and he's not concerned about the later years of a peasant Russian. I assume he's concerned with more able-bodied young men, which, ironically, is a cohort he lacks while he tries to capture the country he's trying to obtain to help with that problem.
Okay, I see that the majority of comments here on HN, including yours, have one big flaw. So I'll just try to share a perspective as a person who was actually born in Russia and lived there for a good chunk of the year until February 2022.
For some reason, people from outside the ex-USSR usually suppose there is at least something rational about this war. But the USSR hasn't existed for 30 years and there is no massive political machine. It's just an autocracy/kleptocracy where all decisions are taken by a very small group of incompetent people whose only goal is to preserve their power.
Nor is there any real ideology in Russia. Putin is much closer to the movie The Dictator than to Stalin or Hitler. He cares about his giant palaces and yachts far more than about gaining 42M extra subjects to rule. This whole war is just a presidential PR campaign for 2024 gone wrong.
PS: Yeah, that's it. It's not some great power play to control the world. Not a war with NATO or the US. Not even a war over resources or territory. It's just an aging kleptocracy that stumbled and is now killing hundreds of thousands in a pointless war.
I do agree that part of this is a presidential campaign gone wrong. I don't know if I'd put that at 100%. But Putin does like looking like a strongman. And if he had taken Ukraine in a week, as his original plan seems to have assumed(?), he would definitely have been able to look like the strongman. I do think there are some aspects of resources and territory though. More access to warm-water ports and controlling essentially the breadbasket of Europe. Let's not act like Ukraine isn't strategically important for Europe as a whole. Taking Ukraine, along with having Belarus in his pocket, also significantly shrinks the amount of border they actually need to defend, because of the mountains.
I'm not arguing there is no strategic importance. If he'd managed to successfully assassinate or overthrow the government in 3 days, then that's how the Kremlin would talk about their great victory.
The deal is: neither Putin nor the population of Russia gives a damn about controlling this territory. If Ukraine frees Donetsk and Luhansk tomorrow, and Crimea in a year, no one in Russia will care. Except, of course, the people living in the occupied territories, but the FSB will shut them up.
On 30 September 2022 they announced the annexation of Kherson and claimed it would be part of Russia forever. Just a bit more than a month later, on 9 November 2022, it stopped being part of Russia, and no one has cared about it since. And the too-talkative members of the occupation government just died under strange circumstances.
It's really odd to fathom how someone would not understand why Xi wants to take Taiwan, and 'chips' has nothing to do with it.
People need to take a much, much broader mindset. China views Taiwan as part of China, and that's that.
I personally don't, and the Taiwanese population ought to choose what they want without material interference, and 'we' should respect that by recognizing their sovereignty and statehood if that's what they want, much like at this point we ought to be recognizing Palestine as a state, if that's what they want.
Russia's invasion of Ukraine is pure classical Imperialism, Ukraine is a 'sub Russian vassal' and Russian Imperialism is part of the Russian identity - it was 'secularized' during the time of USSR, now it's just old school.
If you look at geopolitics from the 19th century it's all very clear.
That said, it is odd Apple would do this in Taiwan, and, it's a much bigger jump than people think.
This is a bold move and will reshape what Apple is, and maybe what a true international megacorp of the future looks like.
It could also make them more fragile and if they start to falter, it could really fall apart. You need a lot to work well for this to be pulled off.
I don't think the political situation has changed that much since 1949, except for minor updates in 1971 and 1992. Besides, as the sibling commenter mentions, Apple (and every other consumer electronics manufacturer in the world) is in deep trouble if war breaks out.
If they de-risk by doing this in the US or India, what are they going to use the screens for?
Sure, I was being a bit flippant. The world is different in many ways, including more interdependence between Chinese and Taiwanese economies. Without getting too deep into it, the issue mostly comes down to whether the US decides to intervene.
Everybody suffers massively if there's hot war in the strait. The main point is that unless supply chains are unwound from both countries, it doesn't matter that much where you put a new screen factory.
Of course it matters… It’s completely absurd to think it doesn’t matter …or are you being flippant again. I don’t have time to burn so say what you mean. If you really don’t understand, I can explain the obvious advantages to not placing a factory in a potential war zone.
I'm not being flippant, and I think my meaning is clear. You're avoiding the original point, which is that if the majority of your factories are already in a potential war zone, the risk calculation doesn't change by adding another. What do you think happens to Hon Hai's Chinese facilities during this hypothetical invasion?
Arguing over US strategic ambiguity is probably too off topic for this thread. I live in Taiwan, so I guess I've put my money where my mouth is.
“Since the beginning of the 2020s, TSMC has expanded its operations outside of the island of Taiwan, opening new fabs in Japan and the United States, with further plans for expansion into Germany”
Now, would building the only plant that makes your displays in a potential war zone increase your risk? Common sense is all you need to answer the question.
majority != all, and my argument is based on the idea that:
(1) Invasion is very unlikely
(2) If there is an invasion, it's very likely that the US and Japan have something to say about it.
Anyway, I wonder if there's other reasons for it to be in Taiwan. The article isn't very substantive, but it links to another one that says:
"European company ams OSRAM will be the primary supplier of microLED chips for Apple, but there is potential for Taiwanese suppliers like Epistar to become additional suppliers of microLED chips by 2026-2027..."
Maybe there are some partnerships aside from TSMC that are driving the decision. The article says the process will be developed in Longtan (Taoyuan), which happens to be where at least one other company is setting up microLED production:
Ukraine had the third largest nuclear weapons arsenal. Now they are fighting for survival.
What the US, and how much the US, will do, with respect to Taiwan depends on politicians in Washington D.C., public sentiment, etc. and that changes over time.
Naval landings are never easy. Suicidal if you don't have complete air dominance. In World War 2 the Nazis were able to conquer all of western Europe in weeks, but were completely stopped by the mere 50 miles of English Channel. And this was in the days before Radar was well understood and deployed and before guided cruise missiles were a thing.
There is only one scenario where the invasion of Taiwan isn't a long and drawn out affair that costs the Chinese hundreds of thousands to millions of troops, and that's the one where they start off by nuking it flat in a massive surprise attack and end up winning only a radioactive wasteland for their troubles.
For a long time, Apple tried to buy JBD Display; they sent their due-diligence people, who inspected everything millimetre by millimetre, and then poached JBD's process head.
JBD then armed itself with lawyers and for a long time tried to keep Apple from reproducing the technology.
The headline that would bring tears to my eyes is "Users dive into Linux to cut reliance on big tech"
The endless cola-versus-Pepsi discussion is not really going to move the needle, and the users pay with an inferior experience.
Apple has been playing its cards well, and in a sense it's good that it exists. It shows that different business models are possible, but that's a far cry from it being a functioning market that delivers for users.
Ultimately you want multiple manufacturers, low switching costs between platforms that won't lock in users, a 100% stop to the abominable surveillance adtech, and extensive recyclability and repairability.
These are not absurd demands that can't be delivered; these are very basic aspects of a sane personal computing landscape. But we will not get them if all the ink is spent on which of the two options is the least bad one.
> low switching costs between platforms that won't lock in users, a 100% stop to the abominable surveillance adtech, and extensive recyclability and repairability.
Who pays for that?
What you want is less "pollution". A polluted market where you can't switch from Apple to Samsung. A polluted privacy environment online. A polluted world with a phone that's just full of glue.
Companies, and humanity in general, just pollute as much as their local environment allows.
What you're saying is possible to do, but doing those things means less money for the owners of the companies and fewer features going out the door (as money and time are spent on non-profit expenses).
No one will pay $$ to switch between Apple and Google. Why would either company make it easy to lose monthly app subscriptions? They'd be paying money to shoot themselves in the foot.
What needs to happen is more regulation (just like with physical-world pollution), but US culture is very anti-government, so nothing will happen.
"Reduced competition is the best way to ensure real choice doesnt exist."
You're asking phone and app manufacturers to increase competition - which goes against their own reduced-competition interests. Each individual company has no incentive to increase competition; locking customers in even more is the way to profit.
No, I am not that naive. Of course it has to happen through competing interests (and regulation).
It's true what you say, that in the US they don't like regulation, but on the other hand they like consumption. If you want to have a diverse menu of products to consume, you need to ensure it's a competitive market. You can't have it both ways.