Commenters here seem dubious. I’ll take the contra-position. This feels to me like it’s going to be great; a big win for consumers and developers.
Current A12Z chips are highly performant; Apple is roughly one chip cycle ahead of any other manufacturer on performance/watt. I presume their consumer hardware will launch with an A13Z, or maybe an A14-type chip.
Apple has consistently shipped new chip designs on time; Intel's thrashing has cost them at least two significant update cycles on the MacBook line in the last six years. Search this fine site for complaints about how new Mac laptops don't have real performance benefits over old ones; those complaints are 100% down to being saddled with Intel.
Apple has a functional corporate culture that ships; adding in complete control of the hardware stack is going to make for better products, full stop.
Apple has to pay Intel and AMD profit margins for their mac systems. They are going to be able to put this margin back into a combination of profit and tech budget as they choose. Early days they are likely to plow all this back into performance, a win for consumers.
So, I’m predicting an MBP 13 - 16 range with an extra three hours of battery life+, and 20-30% faster. Alternately, a MacBook Air type with 16 hours plus strong 4K performance. You're not going to want an Intel Mac even as of January 2021, unless you have a very unusual set of requirements.
I think they may also start making a real push on the ML side in the next year, which will be very interesting; it's exciting to imagine what a fully vertically integrated Apple could do controlling the hardware, OS, and ML stack.
One interesting question I think is still outstanding: from parsing the video carefully, it seems to me that devs are going to want ARM Linux virtualized, vs AMD64. I'm not highly conversant with ARM Linux, but I imagine it's still largely a second-class citizen; I wonder if systems developers will get on board, deal with slower / higher battery draw Intel virtualization, or move on from Apple.
Languages like Go with supremely simple cross-architecture support might get a boost here. Rust seems behind on ARM, for instance; I bet that will change in the next year or two. I don't imagine that developing Intel server binaries on an ARM laptop with Rust will be pleasant.
> So, I’m predicting an MBP 13 - 16 range with an extra three hours of battery life+, and 20-30% faster.
I'm predicting the opposite: you won't actually see any difference.
Once you look closely at power profiles on modern machines you'll see that most energy is going into display and GPU. CPUs mostly run idle. Even if you had a theoretical CPU using zero energy, most people are not going to get 30% battery life gains [1]. Not one thing that they demoed requires any meaningful CPU power.
Similarly, while ARM parts are more efficient than x86 per compute cycle, it's not a dramatic change.
The big changes, I think, are more mundane:
- Apple is going to save $200-$800 cost per Mac shipped
- Apple can start leaning on their specialized ML cores and accelerators. They will probably put that stuff in T2 for Intel Macs. If they're already shipping T2 on every machine, with a bunch of CPU cores, why not just make those CPU cores big enough for the main workload?
Doubling CPU perf is meaningless if you can ship the right accelerators that'll do 100x energy/perf for video compression, crypto and graphics.
[1] for a regular web browsing type user; obviously if you're compiling stuff this may not apply; if that is true you're almost certainly better off just getting a Linux desktop for the heavy lifting
> Apple can start leaning on their specialized ML cores and accelerators
Thank you for mentioning this. I feel like many have missed it.
I think Apple sees this sort of thing as the future, and their true competitive advantage.
Most are focusing on Apple's potential edge over Intel when it comes to general compute performance/watt. Eventually Apple's likely to hit a wall there too though, like Intel.
Where Apple can really pull away is by leaning into custom compute units for specialized tasks. Apple, with their full vertical integration, will stand alone in the world here. Rather than hoping Intel's chips are good at the things it wants to do, it can heavily specialize the silicon for the tasks it wants macOS to do in the future. It will potentially be a throwback to the Amiga days: a system with performance years ahead of competitors because of tight integration with custom hardware.
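To make that concrete, here's a minimal Swift sketch of what "leaning on the custom silicon" already looks like from an app's point of view: Core ML lets you ask for whatever accelerator is present (CPU, GPU, or Neural Engine) and the OS routes the work. The model name and input provider below are placeholders I made up, not anything Apple has announced for the Mac specifically.

    import CoreML

    // Sketch: let Core ML dispatch inference to the best available silicon.
    // "Classifier.mlmodelc" and `inputFeatures` are hypothetical placeholders.
    func classify(inputFeatures: MLFeatureProvider) throws -> MLFeatureProvider {
        let config = MLModelConfiguration()
        config.computeUnits = .all  // CPU, GPU, or Neural Engine -- whatever the hardware offers

        let modelURL = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc")!
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        return try model.prediction(from: inputFeatures)
    }

The interesting bit is that the same call gets faster and cheaper every time Apple adds a new compute unit, without the app changing at all.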
The questions are:
1. Will anybody notice? The initial ARM Macs may be underwhelming. I'm not sure the initial Mac ARM silicon will necessarily have a lot of special custom Mac-oriented compute goodies. And even if it does, I don't know that Mac software will be taking full advantage of it from Day 1. It will take a few product cycles (i.e., years) for this to really bear fruit.
2. Will developers bother to exploit these capabilities as Apple surfaces them? Aside from some flagship content-creation apps, native Mac apps are not exactly flourishing.
1. If done correctly, non-Apple laptops may become significantly less attractive. Just like Android phones.
2. Intel may be in for a tough time, especially with AMD winning big on the console and laptop fronts recently.
3. AMD and Intel may have to compete for survival and to save the non-Apple ecosystem in general. If AMD/Intel can consistently and significantly beat Apple here, it may mean that the non-Apple ecosystem survives and even thrives. It may even mean that Apple looks at Intel/AMD as an option for Pro MacBooks in the future. However, this does seem a little less likely.
4. This could also herald the entry of Qualcomm and the likes into laptop territory.
Looks like a very interesting and possibly industry changing move. This could potentially severely affect Intel/AMD and Microsoft. And all these players will have to play this new scenario very carefully.
But isn't it just a matter of time till the novelty of smartphones wears off, they stop being très chic and the cheap ones become 'good enough'? It might have taken decades, but eventually GM bought Cadillac, Fiat bought Ferrari, VW bought Porsche (and Bugatti and a few more).
Big difference is Ford, VW, et al had local dealer networks that not only fixed the cars, but turned the lessons and data learned in the fixing back into engineering improvements upstream. The net result of this is over a span of years Ford and VW buyers would see the product get better each time they bought a new one.
Android will always be a low budget product as a market, because it's run by Google. Google doesn't care about its customers at all, but for the data they generate and its impact on ad sales.
Every time a user opens the Google app store, they can expect it to be worse than the time they opened it previously. Every time an Android user buys a new device, it's a crap shoot what sort of hardware issues it will have, even if it's Google or Samsung branded.
Market share and attractiveness aren’t necessarily related. A Kia isn’t as attractive to its target customer as a Mercedes but outsells it because of price.
Much more interesting would be the CVS gearbox which is THE Mercedes advantage, the TCU, the shifter or the ECU.
100x better but also 100x more expensive. Will not happen. Worked in F1.
I do hope AMD does well here, as Apple's chips with all their custom silicon, T3 etc., mean the death of booting anything but Apple-signed OS images on that hardware; forget Linux.
And that's not the future I am willing to buy into.
Thank you for expressing this. As much as I like Apple and the wonderful approach they have to design, something felt amiss. This is what I wanted to express.
I'm somewhat confused by this rhetorical question, since the microcode of the processor is vastly different from the userspace & kernel of Mac OS. Running an OS bare metal versus in a VM on top of Mac OS is different across a wide array of things. At a minimum, performance is lower and less predictable in the VM; you now have two different OSes' updates to worry about breaking things, on top of their mutual interface (ask anyone who's done serious Linux dev work on a Mac); you have two different sets of security policies to worry about; the low-level tools to debug performance in a VM don't have the level of access they do on bare metal; and if you're working with hardware for Linux servers & devices in a VM, you are going to have to go bare metal sooner or later.
The abstractions are leaky; the VM is not a pristine environment floating on top of some vaguely well-defined architecture. The software in one has two extra layers (VM software & OS) between it and the actual platform, and all this is before you start hitting weird corner cases with CPU architecture differences in the layers.
Hmm, really? Since Windows 10 your desktop runs in a guest domain, while the kernel running the drivers is isolated.
For about 5 years Apple has provided this kit: https://developer.apple.com/documentation/hypervisor Yeah, you've got to use the hardware drivers from Apple unless it also supports PCI passthrough; not sure, but with the current user base I guess nobody would do that anyway.
I expect Apple to eventually run their ring -1 off the T chip, with everything else from a VM abstraction. It's just the natural evolution of the UEFI approach, and Apple being themselves, they're doing it “their way” without waiting for the crossfire-infested industry committees to play along.
Nope there was no mention of booting another OS. Craig talked about native virtualization that can be used to run Docker containers and other OS runtimes.
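For the curious, here's a rough Swift sketch of what driving that looks like, assuming the demo sits on the new Virtualization framework Apple introduced alongside Big Sur (the kernel/initrd paths and sizes are placeholders, and a real app also needs the virtualization entitlement):

    import Virtualization

    // Sketch only: boot a Linux guest via Apple's Virtualization framework.
    // Paths are placeholders; error handling and devices (disk, network,
    // console) are omitted.
    let bootLoader = VZLinuxBootLoader(kernelURL: URL(fileURLWithPath: "/path/to/vmlinuz"))
    bootLoader.initialRamdiskURL = URL(fileURLWithPath: "/path/to/initrd")

    let config = VZVirtualMachineConfiguration()
    config.bootLoader = bootLoader
    config.cpuCount = 2
    config.memorySize = 2 * 1024 * 1024 * 1024  // 2 GiB

    try config.validate()
    let vm = VZVirtualMachine(configuration: config)
    vm.start { result in
        print("VM start:", result)
    }

Note that this boots a guest of the host's own architecture natively; running AMD64 guests on the ARM machines is a separate emulation problem.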
Do many people care about phone CPU performance? Sure, it needs to be good enough, but after that it's really far down on the list of things that matter.
What matters to everyone I know is screen size, camera quality and that a really small selection of apps (messaging, maps, email, browser, bank app) work well. Raw CPU performance is only a very abstract concept.
Raw CPU performance, perhaps not. But people definitely do care about a specific set of user-facing, ML-driven functionality - think speech recognition, speech synthesis, realtime video filtering, and so on.
Many of these are only barely possible on "pre-neural" mobile ARM CPUs, and at a significant cost to power consumption. Developing for newer devices is like night and day.
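As a concrete (and hedged) example of what "newer devices" buys you: Apple's Speech framework lets an app pin recognition to the device itself instead of a server, which only flies when the local silicon can handle the model. The file URL below is a placeholder, and availability depends on OS version and locale.

    import Speech

    // Sketch: on-device speech recognition via the Speech framework.
    // `audioFileURL` is a placeholder; a real app must also call
    // SFSpeechRecognizer.requestAuthorization first.
    let audioFileURL = URL(fileURLWithPath: "/path/to/recording.m4a")
    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    let request = SFSpeechURLRecognitionRequest(url: audioFileURL)

    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true  // keep inference on local neural hardware
    }

    _ = recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }

On hardware with on-device support this stays local and fast; on older devices the guard fails and recognition falls back to the server, which is exactly the night-and-day difference described above.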
Google's speech recognition is damn impressive, but I'm talking performance/power consumption, not "quality". Sticking a 2080 into an iPhone won't give you better speech recognition results, but it will give you bad results faster.
> > Many of these are only barely possible on "pre-neural" mobile ARM CPUs
> Speech recognition on my old Pixel 2
I don't think the Pixel 2 can be called "pre-neural". "[...] The PVC is a fully programmable image, vision and AI multi-core domain-specific architecture (DSA) for mobile devices and in future for IoT.[2] It first appeared in the Google Pixel 2 and 2 XL [...]" https://en.wikipedia.org/wiki/Pixel_Visual_Core
When speech recognition starts understanding European Portuguese without me doing stupid accent exercises, and mixed-language sentences as well, then I will care about it.
> only one camera, just a 4.7-inch display, and less than Full HD screen resolution
CPU selection is likely coming from industrialization concerns (fewer production lines to maintain, lower price per unit at volume, etc.), but they're going to beat that drum loud and proud for all it's worth; meanwhile the phone is cheap in areas that in 2020 _do_ matter.
I know a couple of people trying to port an ML app to iOS. It sounds like the interfaces are a bit of a nightmare, and the support for really basic stuff in the audio domain is lacking.
I don't know the dev ecosystem for Apple broadly, but this doesn't bode well for people "bothering to exploit" the hardware.
#1, who can say. #2 might be sidestepped by the compatibility with iOS apps they will gain? (making it so all those iPhone/iPad developers can ship their apps to Macs, too.)
> I fully expect any reduction in costs for Apple will get sent to their shareholders, not the consumers.
Apple's margins are consistent, if their costs go down significantly, pricing comes down or features increase. The iPad is a perfect example, for years it was $500 and they just kept increasing the feature-set until eventually they could deliver the base product for significantly less.
Shareholders benefit from increased market share just as much as they do from increasing margins, arguably more. The base iPad and the iPhone SE both "cannibalize" their higher end products, but significantly expand their base. I wouldn't be surprised at all to see a $800 MacBook enter their lineup shipping with the same CPU as the iPad.
Considering they're selling a device with a 10.5" touchscreen and an A12 SoC for $500 today, I think they can go even lower than $800 for a device with only a slightly larger LCD and no digitizer.
While they won't be competing with Chromebooks for general education use cases, I could very well see Apple trying to upsell schools on a $599 alternative that happens to run GarageBand, iMovie, and even Xcode.
Eh I don't see Apple selling their cheapest education MacBook for $600 instead of $900 simply because one of many components suddenly got significantly cheaper.
I can see them doing that for big volume buys for education. I don't see why they wouldn't just pass on the entire Intel margin to them, getting students using Apple products young has value.
Chromebooks are doing well in education at the moment. If Apple launched a product in that space, they could easily claw half of that back overnight. The ability to run real software is huge, especially for subjects like graphic arts and engineering.
> Considering they're selling a device with a 10.5" touchscreen and an A12 SoC for $500 today, I think they can go even lower than $800 for a device with only a slightly larger LCD and no digitizer.
While there is no digitizer, there is a keyboard and a touchpad. Also, I expect Apple is going to try to keep a gap between the base Mac and the iPad price-wise so they would add to the base storage and maybe RAM.
Then again, considering the pricing on the base iPad, maybe they will bring it down to $600.
Maybe if they take a bet on (or force) the App Store to be the primary method of obtaining software. I’d expect Apple forecasts some amount of yearly revenue per iPhone/iPad and a lower amount per MacBook.
Why do we need to buy so many devices anyway? Why can't I just plug my iPad or iPhone into a dumb dock with a laptop screen and its own storage and battery, and use the CPU and GPU from the phone/ipad?
I don't need VSCode, Docker, or node.js on my phone. I don't want all the clones of the various repositories I'm working on on my phone. Even the best phones lack the RAM, high capacity drives, and video card my computer has. Nor does it have a keyboard or trackpad.
If your phone is good enough to take care of your day to day computing, you can probably get by with an inexpensive all-in-one computer and save the headache of docking.
You'd be surprised how many people would like exactly this, interestingly. There are certainly enough to quite literally pay real money for a somewhat lousy facsimile of the real thing; I know from experience.
Then what is the point in docking at all? Now you have to keep track of what's on the dock and what's on the phone. Plus, by the time you integrate all this into a dock, you basically have something that costs as much as an inexpensive PC, so why bother?
You'll need something to connect all those dock components together so you don't have to run several cables to the phone. Something like a motherboard. So you'll have a full computer sans a cpu.
The Surface Book is exactly this: A (x64 Windows) tablet with a laptop dock that contains a stronger GPU and battery.
One problem is that people expect the CPU power of a laptop, which requires much more power and cooling than the typical tablet. As a consequence in tablet mode a Surface Book has about two hours of battery life.
So far: different architectures. But with this announcement it would make running macOS on a future (or even current) iPad quite feasible, so your kind of dock might become true soon. Apple's new magic iPad keyboards use a lot of weight to balance the heavy screen - might as well make that a battery.
When looking for IDEs or tooling on iOS I still have not found anything remotely professionally usable... (I mean Visual Studio + ReSharper like, not VS Code...) but perhaps somebody could enlighten me...
Because a general purpose device is not good business sense for a company that sells devices. The more they can artificially specialize each thing, the more things you need to buy, and the more money they make. This is a much larger phenomenon than just Apple, or even computers.
An iPhone is a general purpose device compared to an iPod. But maybe Apple has lost the willingness to cannibalise its own sales for the sake of creating stunning new product categories.
You can plug in a USB dock into a lot of Android phones, and if you get a DisplayLink dock, you can add 2-3 monitors. Keyboard, mouse, sound, Ethernet all work with it too.
Unfortunately for high priced premium products, the increase in quality of basic products forces premium products to be better or fail.
Related to your example: $1 burgers are increasingly better than you would expect. The difference between McDonald's midrange line and, say, a burger at a restaurant for $18 is negligible in flavor. I can no longer justify going to a restaurant and paying $18+tip for a burger.
Sure, to you. There's a whole lot of not you out there for whom the distinction is worth the price differential. That's true in both hamburgers and hardware. Needs, goals, and use cases differ significantly among people.
I'd argue that the functional difference between a Honda Fit and a Tesla is less than the difference between the best McDonald's hamburger and an $18 hamburger. That's why I drive a Honda Fit. In the face of Tesla's increasing sales it would be pretty strange to assert that my taste was somehow universal.
I would argue that many, perhaps most, people who won't eat a McDonald's hamburger because of some perceived lack of quality probably haven't had one in many years and are instead working off public perceptions and status indicators about what they think it represents and must be like.
And then we've come full circle to Apple products.
I'm a classically trained chef who tends to specialize in bar food. I know more about the marketing, creation, and perception of food than you do— you're wrong.
McDonald's has very high quality preparation standards. Their ingredients and techniques were constructed to facilitate their high-speed, high-consistency process, but prevent them from incorporating things that the overwhelming majority of burger consumers prefer.
For example, the extremely fine grind on the meat, the thin patty, the sweet bread, the singular cheese selection, the inability to get the patty cooked to specification, the lack of hard sear or crust and the maillardization that accompanies it, etc. etc. etc. At a minimum, people prefer juicier burgers with coarser, more loosely-packed texture, usually cooked to lower temperatures (though this depends on what part of the country you're in,) and the flavor and texture differential from a hard sear, be it on a flat top or grill, and toasted bread.
For consumers who, at least at that moment, have a use case that requires their food be cheap, fast, and available, well we know who the clear winner is.
In my new career as a software developer and designer, I use apple products. I am willing to pay for the reliable UNIXy system that can also natively run industry-standards graphics tools without futzing around with VMs and things, and do all that on great hardware. There will always be people who aren't going to compare bits pushed to dollars spent and are going to be willing to spend the extra few hundred bucks on a device they spend many hours a day interacting with.
This isn't about perception at all— Apple products meet my goals in a way that other products don't. If your goals involve saving a few hundred bucks on a laptop, then don't buy one. I really don't understand why people get so mad at Apple for selling the products that they sell.
> I know more about the marketing, creation, and perception of food than you do— you're wrong.
I don't doubt you know more about food. If you applied that knowledge to my actual point instead of what it appears you assumed my point was, this assertion might have been correct.
That's not entirely your fault; I was making a slightly different point than the existing conversation was arguing, so it's easy to bring the context of that into what I was trying to say and assume they were more related than they were.
The belittling way in which you responded though, that's all on you.
> This isn't about perception at all— Apple products meet my goals in a way that other products don't. If your goals involve saving a few hundred bucks on a laptop, then don't buy one. I really don't understand why people get so mad at Apple for selling the products that they sell.
My point, applied to this, would be to question what other products you've tried? My assertion is that people perceive other products to be maybe 50%-70% as good, when in reality they are probably closer to 85%-95% as good (if not better, in rare instances). That is a gap between perception and reality.
As applied to burgers, I was saying that people that refuse to eat at McDonald's because of quality probably have a very skewed perception of the actual differences in quality in a restaurant burger compared to a McDonald's burger.
I'm fully prepared to be wrong. I'm wrong all the time. I also don't see how anything you said really applies to my point, so I don't think you've really proven I'm wrong yet.
So you're creating metaphors that don't make sense using things that you have a limited understanding of to describe something you think you might be wrong about and getting annoyed that everybody else isn't following along with your deep conversational chess. Right then. I'm going to go ahead and opt out of this conversation.
Feel free. I simply made an observation that was loosely connected to the existing conversation and noted how it seemed to parallel something else.
I wasn't annoyed by you misunderstanding, I was annoyed by you misunderstanding, assuming you understood my position completely because it would more conveniently fit with your existing knowledge, and then using that assumed position to proclaim your superiority and my foolishness.
It's not about deep conversational chess on my part, it's about common decency and not assuming uncharitable positions of others by default on your part. A problem, I'll note, that you repeated in the last comment.
Just the mere perception of quality will increase your satisfaction levels. The perception of a lack of quality will reduce your satisfaction levels.
Thus I still maintain that your "perfect" $18 burger is only marginally better than McDonald's midrange burger. The fact that you actually spend time on making that burger more appetising - is proof that the low cost foods are getting better and better.
While focusing on my analogy, you literally prove my overall point.
30 years ago you weren't necessary, as low cost food wasn't nearly as good as today. Now - you have to exist to justify that premium.
I think you're reading more into my comment than what I actually said, possibly because of someone else's prior comment in this thread.
I was making a point less about McDonald's being equivalent to a restaurant burger and more about people's perceptions of McDonald's and how bad it is. That is, there's probably a lot less difference in the taste of those burgers than a lot of people want to admit.
The other aspect to consider is consistency. I had a $14 burger at a restaurant on Saturday that I would have happily swapped for any single burger I've ordered from McDonald's in the last 12 months. You may not consider it high quality at McDonald's, but you have a pretty good idea what you're going to get.
All I'm really doing is making a point that there's a bit of fetishism about luxury items going on these days. Are Apple devices generally higher quality than many competitors? Yes. Is the difference in quality in line with most people's perception of the difference in quality? I don't think so.
I haven't had a McDonald's hamburger for many years. You are partly correct that it is because of my perception that it is trash. But when I walk by a McDonald's it doesn't smell like food to me anymore and smells more akin to garbage on a warm day.
> The difference between McDonald's midrange line and, say, a burger at a restaurant for $18 is negligible in flavor.
This may be the single worst analogy I've ever seen.
There is no amount of money you can pay at McDonalds to get a good quality burger.
I don't spend $18 for burgers, since there are a million places where you can pay $5-8 dollars and get a damned good piece of beef. But not at McDonalds.
If the employees are doing it right, it’s not “that bad” of a burger. So, just pay the employees enough to actually care about the burger and it comes out decent.
I’ve eaten at McDonald’s around the world, it really depends but they do have good burgers when they’re cooked right.
It's not the employees. In different countries the entire recipe and production system is different. In many non-US countries, McDonald's is a more upscale "foreign" restaurant and far more expensive than in the US.
90% of these Mac silicon investments would directly benefit their iPhone cash cow—perhaps not this cycle, but certainly in the chips they'll put in future iPhones.
And the remaining 10% would indirectly benefit their iPhone cash cow in the form of keeping people inside the ecosystem.
The Mac silicon is inheriting the investments Apple made in the iPhone CPUs. This will continue. The bits which Apple invests to make their existing hardware scale to desktops and high-end laptops won't benefit the iPhone much at all. On future generation chips, Apple will spread the development costs over a few more units, but since iPhone + iPad ship several times more units than the Mac, the bulk of the costs will be borne by them.
Indeed, Apple's G4 Cube debuted at $1,800 base in 2000. That's the same ballpark as their iMac now, and their Mac mini starts at about half that. Meanwhile, inflation would have made that G4 ~$2,700 today.
Silver's gone up. Gold's gone up. Probably various fixed costs had to be further invested in the form of contracts with fabs or new fabs built, etc. But really the Mac mini is more of a modern likeness of the G4 Cube, and it now retails starting at $800, less than half the G4 Cube's starting price.
Edit: they also went from being a company with around 8,500 employees in 2000 to 137,000 today. Surely every part of their organization chart has contributed pressure to otherwise push their prices up to maintain revenue.
Since silver and gold are priced in USD, their price is influenced by the actions of the US government. One of those driving forces is current US monetary policy (i.e., a growing budget deficit).
Another factor is perceived risk. Since the markets are always worrying about the current US China trade talks, that uncertainty helps gold and silver as they are seen as safe havens.
This is exactly what Apple has done with all of their products over the past 30+ years. The iPad is a perfect example of Apple doing both over the past 10 years. Likewise the iPhone SE & the Apple Watch. It's done it with every product in their portfolio.
I'm actually not so sure about this. Apple's gross margin target is in the ballpark of 38-40% and a savings of $200-800 per MBP would have a substantial upwards impact on that gross margin number. Apple carefully sets pricing to achieve a target gross margin without impacting sales too much (higher price = higher gross margin but likely lower net revenue because they're priced out of the market).
One of two scenarios (or perhaps a mixture of both) is more likely, and I lean towards #1:
1. Apple decreases the price of the Mac to stay aligned with gross margin targets. This likely has a significant upwards impact on revenue, because a drop in price like this opens new markets who can now afford a Mac, increasing market share, driving new customers to their services, and adding a layer of lock-in for iPhone-but-not-Mac customers.
2. Apple uses the additional budget per device for more expensive parts/technology. They are struggling to incorporate technologies like OLED/mini-LED because of the significant cost of these displays and this would help open up those opportunities.
The high price of MacBooks is treated as a status symbol and the marketing department clearly knows as much, so I don't think they will be willing to give that up, so I lean towards your second option.
Why not go the same route with a MacBook SE? I even think this will be the first product out of the pipeline.
Mac Pro buyers usually don't want to be beta testers and will probably be the last to transition, once the horsepower is clearly there with measurable gains.
The iPhone SE is less a "cheap iPhone" and more "an expensive basic smartphone". Far more people upgrade to it from a low-cost Android device than downgrade from a different iPhone.
Bingo. Estimates vary wildly, but I've seen figures saying that Axx CPUs cost Apple about $50 each. Even if it's more like $100, that's still an insane amount of additional profit per unit to be extracted. They don't need to deal with single-supplier hassles and they get much more control over what cores go into their SoC.
This is sort-of-OK for consumers but amazing for Apple and its shareholders.
I suspect the big motivation for Apple is less about squeezing a few dollars more profit per system and more about shipping systems which just aren't possible on Intel's roadmap. Just putting the A12Z into the previous 12" MacBook would be a massively better computer with better battery life, better performance, and significantly less expensive. All while Apple maintains their margins.
This isn't a zero-sum game. Being able to ship less expensive computers which perform better is a win for consumers and Apple shareholders at the same time.
> Just putting the A12Z into the previous 12" MacBook would be a massively better computer with better battery life, better performance, and significantly less expensive. All while Apple maintains their margins.
Microsoft doesn't control the hardware very much and definitely doesn't control the software developers whereas Apple completely controls the former and has a lot of leverage with the latter.
You can see that with the latest MacBook Pro 13", if you buy one of the cheaper devices it comes with last years processors. Intel are clearly having problems meeting customer demand.
But customers are going towards an entirely closed everything. iOS is Apple languages, Apple signatures required to run code, Apple processors. Desktop machines are the last bit of freedom in the Apple ecosystem.
This isn't "sort-of-ok", it's "bad-for-customers" and "bad-for-developers".
Why are you implying that they're going to lock down the Mac and make it some kind of iPad Pro? You'll still have complete control to run anything you want on the system. Running unsigned binaries is as simple as a right click on the app to open it on Mac. Or launch it from the command line with no prompt at all.
It looks like, from the freedom end of things, the only things that change with ARM Macs are that they're requiring notarization for kexts, and the fact that other OSes won't boot on the hardware since they don't have support for it. Unless anything changed, the T2 chip already killed Linux support before?
This is just my opinion but I think it's great for consumers and a good restriction for developers.
As a consumer you shouldn't be running unsigned software because you're putting not only your data at risk but any data you have access to.
And as a developer on mac you can still run anything reasonably well in a VM.
If you're using node, you should be running that in a virtualized environment in the first place, albeit I'm too lazy myself to always set that all up.
Actually it's pretty amazing that now we'll be able to run an entire x86 OS environment on an ARM chip and get very usable performance too.
> If you're using node, you should be running that in a virtualized environment in the first place
Just curious: why should node be ran in a virtualised environment for development? Is it a security concern? Does that apply to languages like python too? Would you be happy running it in a Docker container from macOS?
How do we know it's "very usable" performance-wise?
I'd say that we've moved away from virtualisation completely, we now use containers, so developers will expect native performance, as we get on other platforms.
You could also argue that significant cuts to the costs of already-profitable Mac computers could lead to significantly higher sales volumes.
Greater marketshare also provides more value to shareholders meaning that shareholders still win, as do consumers.
More people with macs (and probably iPads/iPhones) would also increase other profit centers for Apple such as services (their highest profit center), warranties, and accessories. The profits and loyalty from these could easily far outweigh the $100-$300 of extra margin they might gain from keeping Mac prices the same.
Meaning that price cuts to macs might actually be more strategically beneficial (to EVERYONE) than hoarding higher margins.
The cost of a CPU is unit cost plus all other costs, including R&D, divided by units. We also don't arrive at a reasonable estimate of unit costs by taking a range of estimates and picking the estimate most favorable to our position.
I also don't believe it's reasonable to assume that switching to ARM is as simple as putting an iPad CPU into a laptop shell.
Here is an estimate that their 2018 model costs $72 just to make, not to design and make.
The A14 that will power a MacBook is likely going to be more expensive, not less, especially with 15B transistors on the A14 vs. less than 7B on the A12.
The average selling price of an Intel CPU looks to be around $126. This includes a lot of low-end CPUs, which is exactly the kind of CPU Apple fans like to compare against.
Apple may realize greater control and better battery life with the switch, but they won't save a pile of money, and thoughts about increasing performance are fanciful speculation that Apple, the people with the expertise, are too smart to engage in.
Indeed. Apple is going to have to eat R&D costs that were previously bundled in Intel's pricing. And Mac sales are relatively small compared to the Windows market, so economies of scale are going to be less significant.
Which means the actual per-CPU fab cost is going to become a smaller part of the complete development and production cost of a run. And that total cost is the only one that matters.
I expect savings can still be made, because Apple will stop contributing to Intel's profits. On the other hand I'm sure Apple was already buying CPUs at a sizeable discount.
Either way it's an open question if Apple's margins are going to look much healthier.
IMO an important motivation is low power/TDP for AR/VR.
Ax will also eventually give Apple the option of a single unified development model, which will allow OS-specific optimisations and improvements inside the CPU/GPU.
Ax has the potential to become MacOS/iOS/A(R)OS on a chip in a way that Intel CPUs never could.
This only makes sense if you know nothing about Apple's business.
You really think they're doing this to save $50 from ~5m Macs? You really think all this upheaval is for a mere $250m a year in savings? It'll cost them 10x that in pain alone to migrate to a new platform.
Come on now....$250m is nothing at Apple scale. Think bigger. Even if you hate Apple, think bigger about their nefariousness (if your view is that they have bad intentions - one I don't agree with).
I'm not sure how you calculated that, but they sell about 20m Macs per year, not 5m. I also doubt the chips cost them $50 per unit. The savings may be worth a few billion, so it's not really nothing. And they would save this every year. Will this change cost them 10x that in pain alone? I doubt it. They already make the chips.
> I'm not sure how you calculated that, but they sell about 20m Macs per year, not 5m
Quarterly numbers come in between 4.5-5m units these days but point taken - I recalled numbers for the wrong timeframe.
> I also doubt the chips cost them $50 per unit. The savings may be worth a few billion, so it's not really nothing.
The true cost of this move is reflected in more than the R&D. This is a long multi-year effort involving several parties with competing interests. People are talking here as if they just flipped a switch to save costs.
Let me make this clear. In my view, this is an offensive/strategic move to drive differentiation, not a defensive move to save costs (though if this works, that could be a big benefit down the road). Apple has a long history of these kinds of moves (that don't just involve chips). This is the same response I have to people peddling conspiracy theories that Apple loves making money off of selling dongles as a core strategy (dongles aren't the point, wireless is; focusing on dongles is missing the forest for the trees).
You aren't making anything clear, just straw-man arguments. Apple switches architectures when it suits them; you think the switch from PowerPC to Intel was for differentiation? Nope, it was cost and performance, aka value.
The question isn't whether it suits them. The question is: "Why did they choose to take on the level of risk in this portion of their business and what is the core benefit they expect?"
If the main reason was cost savings, this would be a horrible way to go about it.
There's a better answer: Intel can't deliver the parts they need at the performance and efficiency levels Apple needs to build the products the way they want to build them. This is not a secret. There is a ton of reporting and discussion around this spanning a decade about Intel's pitfalls, disappointments, and delays. Apple might also want much closer alignment between iOS and MacOS. Their chip team has demonstrated an ability to bring chipsets in-house, delivering performance orders of magnitude better than smartphone competition on almost every metric, and doing it consistently on Apple's timelines. It only seems natural to drive a similar advantage on the Mac side while having even tighter integration with their overall Apple ecosystem.
I think you are spot on. Any kind of cost savings here is going to be gravy and won’t come for a long time. This is going to let Apple reuse so much from phones in the future Mac line - all their R&D on hardware, the developer community, etc. It will be very interesting to see what the actual products are like, and whether the x86 emulation is any good.
Oh, so we are talking about value now? Please stick to an argument after you fail to defend it. You already used your dongle argument no one asked for.
Then don't go on a tangent when the point the parent was making was about potential savings, and big oof when you get your numbers wrong and then try to straw-man points no one is arguing against. No one was arguing about the vertical integration bonuses Apple gets from their own SoC. You wanted to boil it down to one dimension by dismissing the value Apple can provide with their own chip.
1. I stated quarterly numbers off the top of my head instead of yearly numbers. This mistake doesn't change my point at all at Apple scale - it's a negligible amount of savings relative to the risk. Companies of this scale don't make ecosystem-level shifts without a reason far, far better than "we can _maybe_ increase yearly profits by 1% sometime in the future". It's just not relevant to bring that up as a primary motivation given what we're talking about.
2. I think you actually missed the point of the conversation. OP said "that's still an insane amount of additional profit per unit to be extracted" and followed that up with "amazing for Apple and its shareholders."
It is not insane at all. And not amazing. It just comes off as naive to anyone who's worked in these kinds of organizations and been involved in similar decisions.
I think it's hard for some people to comprehend that trying to save $1b a year for its own sake at the scale of an org like Apple can in many cases be a terrible decision.
You came with your strawman that it was for its own sake; they just stated it was a profitable move and "amazing for Apple and its shareholders", which is hard to refute. OP even said "They don't need to deal with single-supplier hassles and they get much more control over what cores go into their SoC." It seems you are now arguing with your own points.
> It seems you are now arguing with your own points.
Half the fun is writing down your own thoughts!
> You came with your strawman that it was for its own sake
That's possible. I saw the emphasis placed differently than you did even though we read the same words. Probably describes the nature of many internet arguments. Happy Monday - I appreciate you pushing me to explain myself. Seems like others were able to get value out of our back and forth.
The fact that they are saving $1 billion per year is what makes the transition possible, it's not actually the cause of the transition. They could have done the transition a long time ago if it was just about the money.
It saves them much more over the long term if it lets them get away from having two different processor architectures. It paves the way for more convergence between their OSes. Eventually a MacBook will be just an iPad with a keyboard attached, and vice versa.
Yes, they're a big company. But they're also a mature company. A lot of their efforts are going to be boring cost-cutting measures, because that's how mature companies stay profitable.
It's more than just a CPU though - this will make the components of a Mac much more similar to an iPad's, and probably save money on many other components.
It also removes any need for a dedicated GPU in their high-end laptops, which is probably $200 alone.
I have no idea how they justify the prices for their lower-end laptops as-is, as they have worse screens and performance than recent iPads in pretty much all cases.
1. This is risky for consumers. Whereas the PPC->x86 move was clearly a benefit to consumers given how PPC was lagging Intel at the time, x86 had proven performance and a massive install base. It was low risk to consumers. This? Less so. Sure iOS devices run on ARM but now you lose x86 compatibility. Consumers need to be "compensated" for this risk. This means lower prices and/or better performance, at least in the short-to-medium term; and
2. This move is a risk for Apple. They could lose market share doing this if consumers reject the transition. They wouldn't undertake it if the rewards didn't justify the risk. They will ultimately capture more profit from this I'm sure, but because of (1) I think they may well subsidize this move in the short term with more performance per $.
But I fully agree with an earlier comment here: Apple has a proven track record with chip shipments and schedules here so more vertically integrated laptop hardware is going to be a win, ultimately.
If you are a photographer, a developer, a graphics designer, a musician, a teacher, or whatever, and you are looking at buying a new Mac, what is going to get you to buy the new Apple Silicon powered Mac which is almost certain to impact your workflow in some way? If you are making purchase decisions for classrooms, what makes you buy 200 Macs with a new, unknown architecture?
The first generation of Macs on Apple silicon absolutely needs to have a significantly better price/performance point versus the current generation or they won't sell to anything more than the most loyal fans. If the new Macs come out and pricing is not good, I could seriously see a sort-of anti-Osborne effect where people gravitate towards Intel-based Macs (or away from Macs entirely) to avoid the risk of moving to a new architecture.
If anything, I expect margins on the first couple of generations of these Macs to go DOWN, as margins on the first couple of generations of all Apple products are lower (also public record).
> If you are making purchase decisions for classrooms, what makes you buy 200 Macs with a new, unknown architecture?
Yes, the "unknown" architecture powering the highest performing phones and tablets.
Apple has plenty of problems selling to schools for classroom use because other platforms have invested more in that use case. But ISA being the reason? No. Simply no.
Have you ever been behind the purchase choice for dozens of computers? Hundreds?
IT managers are conservative, if they make a bad call, they have to support crap equipment for the next 5+ years or so. Yes, I'm aware Apple's CPUs are in the iPhone and iPad, but it's a huge change for the Mac and it's a big risk for people making those purchase decisions.
As for this, I have, and I certainly would not buy for the first two-three (if not more) hardware revisions after such a major architecture change until I could evaluate how that hardware has been working out for the early adopter guinea pigs. I'd also need to see where everything stood concerning software, especially the educational software that has been getting written almost entirely for x86 systems or specifically targeting Chromebooks for the last 5+ years. Even then I am not sure the Technology Director is going to be anything but skeptical about running everything in VMs or Docker containers. Chromebooks are cheap, reasonably functional, easy to replace, and already run all district educational software.
Undoubtedly, that's capitalism! But they may also introduce some price cuts. These would probably increase units sold, so they could better take advantage of their increased margin.
I'm also predicting there will be no difference in battery life.
If you check the technical specifications of past MBPs for battery capacity and battery life, you notice one thing: the watt-hour capacity keeps decreasing while the rated battery life stays constant (e.g., 10 hours of web browsing).
Gains in power efficiency let them reduce battery and component space, which allows for further slimmer designs.
Linus Tech Tips recently published a video where they did all kinds of cooling hacks to a Macbook Air, including milling out the CPU heat sink, adding thermal pads to dissipate heat into the chassis (instead of insulating the chassis from the heat), and using a water block to cool the chassis with ice water.
They got pretty dramatic results from the first few options, but it topped out at the thermal pads and nothing else made any difference at all. Their conclusion was that the way the system was built, there was an upper limit on the power the system could consistently provide to the CPU, and no amount of cooling would make any difference after that point.
The obvious conclusion for me was that Apple made decisions based on battery life and worked backwards from there, choosing a chip that fell within the desired range, designing a cooling system that was good enough for that ballpark, and providing just enough power to the CPU/GPU package to hit the upper end of the range.
It could just as well have been: choose a perf level and ensure it will run for 10 hours...
It's actually good engineering to have all the components balanced. If you overbuild the VRMs for a CPU that would never utilize the current, it's just wasted cost.
OTOH, maybe they were downsizing the batteries to keep it at 10H so they could be like "look we extended the battery to 16 hours with our new chips" while also bumping the battery capacity.
> The 16" MacBook Pro, for example, has a 100 Wh battery, which is the largest that Apple has ever shipped in a laptop. This is the largest battery size permitted in cabin baggage on flights.
I agree battery life for casual workloads will probably stay the same. However, if CPU power consumption decreases relative to other components, battery life on heavy workloads should go up.
My new 16" MBP is good for 2-2.5h max when used for working on big projects in Xcode. I expect to almost double that with the new CPUs. The people who have exactly this problem are also those who buy the most expensive hardware from Apple.
This isn't always true. The 16" MacBook Pro, for example, has a 100 Wh battery, which is the largest that Apple has ever shipped in a laptop. This is the largest battery size permitted in cabin baggage on flights.
Great, they can make the laptops even slimmer. They're going to make them so thin they won't be able to fit a USB-C port and will use wireless charging. You'll soon learn that you don't actually need to plug anything into your device. Apple knows best.
> Once you look closely at power profiles on modern machines you'll see that most energy is going into display and GPU. CPUs mostly run idle. Even if you had a theoretical CPU using zero energy, most people are not going to get 30% battery life gains
This doesn't really seem to match my experience; at least on a 2015 MBP, the CPU is always consuming at least 0.5-1W, even with nothing running. If I open a webpage (or leave a site with a bunch of ads open), the CPU alone can easily start consuming 6-7 watts for a single core.
Apple claims 10 hours of battery life with a 70-something Wh battery, which would indicate they expect total average power consumption to be around 7W; even the idle number is a decent percentage of that.
(Also, has anyone been able to measure the actual power consumption of the A-series CPUs?)
A typical laptop display can consume around 10W all the time so the 1W from the idle CPU is negligible in comparison.
If anything, you should install an adblocker. A single website filled with ads (and they're all filled with tons of ads) can spin the CPU to tens of watts forever, significantly draining the battery.
10W is on the high end of this; the 1080p screen on my Precision 5520 sucks down a paltry 1.5W at mid brightness. The big killer is the WiFi chip, which takes between 1.5-5W.
The CPU tends to be quite lean until something needs to be done, then it steps up very quickly to consuming 45W.
I usually consider 5 to 15W for laptop display consumption. Depends on the display, size and brightness.
It's quite variable; the highest brightness can consume double the lowest brightness, for example. One interesting test, if one has a battery app showing instantaneous consumption (I know Lenovo laptops used to), is to adjust brightness and see the impact.
Yeah, this is probably harder to do on a MacBook, but Intel's 'powertop' program on Linux has quite high fidelity; it matches the system discharge rate reported by the kernel's battery monitor too.
Anecdotal evidence: On my work notebook (Lenovo X1 Carbon, Windows 10), the fan starts spinning when Slack is on a channel with animated emoji reactions.
I looked up the numbers out of curiosity. The X1 Carbon has a i7-8650U processor which does about 26 GFlops. The Cray-1, the classic 1976 supercomputer did 130 MFlops. The Cray-1 weighed 5.5 tons, used 115 kW of power, and cost $8 million. The Cray-1 was used for nuclear weapon design, seismic analysis, high-energy physics, weather analysis and so forth. The X1 Carbon is roughly equivalent to 200 Crays and (according to the previous comment) displays animated emojis with some effort. I think there's something wrong with software.
Well yes, it's quite noticeably sluggish and bloated on the whole, with even UIs seemingly getting worse over time. Probably doesn't help that everything these days wants to push and pull from multiple networked sources instead of being more self-contained.
That’s because Slack runs on top of basically Chrome, which is a horrible battery hog.
If you run the web versions of Electron “apps” in Safari you’ll get substantially better battery life. (Of course, still not perfect; irrespective of browser, all of these types of apps are incredibly poorly optimized from a client-side performance perspective.)
If large companies making tools like slack had any respect for their users they would ship a dedicated desktop app, and it would support more OS features while using a small fraction of the computing resources.
(Large-company-sponsored web apps seem to be generally getting worse over time. Gmail for example uses several times more CPU/memory/bandwidth than it used to a few years ago, while simultaneously being much glitchier and laggier.)
Yes, Electron is a bit of a battery hog. But the Slack app itself is horrendous. If you read through their API docs and then try to figure out how to recreate the app, you'll see why. The architecture of the API simply does not match the functionality of the app, so there is constant network communication, constant work being done in the background, etc.
I'll turn your anecdote into an anecdatum and say the same; for all devices I've owned. (Linux on a Precision 5520 w/ Xeon CPU, Macbook pro 15" 2019 model, Mac Pro 2013)
On my laptop, scrolling through Discord's GIF list can cause Chrome and Discord to hard-lock until I kill the GPU process. Possibly because of a bug in AMD's GPU drivers on Windows.
Seems to me very likely that Apple's graphics silicon is much more performant and power-efficient than Intel's integrated GPUs. CPUs idling most of the time seems to point to the advantage of a big.LITTLE-style design, which Apple have been using for iPads etc. for a while. So maybe not 30%, but not negligible either.
They demoed Lightroom and Photoshop, which are surely using meaningful CPU resources?
Agreed on the accelerators and the cost savings. All together probably a compelling case for switching.
Try browsing the web on a semi-decent laptop from, say, 2008.
It's a frustrating experience. It is obnoxious how much CPU power modern websites require.
Honestly, back when my PSU died I just did that. Beyond the lack of video decoding support for modern codecs it was perfectly acceptable as a backup machine.
It's worse than that. At least someone would profit off of those bitcoins being mined. Instead we use all of that power to make the dozens of dependencies play nice with one another.
You know that Apple is going to be making the GPU with the same technology as the CPU right?
And those accelerators don't need to be discrete, Apple can add them to their CPUs.
So, it looks like your point is: Sure, Apple is going to jump a couple process nodes from where Intel is, but everything is somehow going to remain the same?
> Once you look closely at power profiles on modern machines you'll see that most energy is going into display and GPU.
Hard to square this with the simple fact that my 2018 MacBook Pro 13" battery lifespan goes from 8 hours while internet surfing to 1.5 hours for iOS development with frequent recompilations.
I'm predicting a future where the os is completely locked down and all software for macs be purchased from the app store. Great revenue model for Apple.
And it didn’t help that the Windows Store back then was a store for UWP/Metro apps.
It also took a long time for Microsoft to actually tackle the issues that UWP/Metro and WinUI/XAML faced. It took so long, it doesn’t even matter anymore and even Microsoft has moved on. But there’s quite a bit of hypocrisy, with Microsoft telling others to use WinUI while not using it everywhere themselves while refusing to update the styles of other design frameworks.
Apple will simply use different bins in different products. The A12X is arguably a "binned" A12Z, after all. Higher bins for pro lines, lower bins for consumer lines.
Apple doesn't have the lineup for that. The CPU in the Mac Pro isn't the same silicon as the CPU in the Mini. It has more cores, bigger caches, more memory channels. It's not just the same chip binned differently.
In theory they could offer the Mini with eighteen different CPU options, but that's not really their style.
One question is whether they'll go down the chiplet route for higher end CPUs, then they can share a single die, binned differently, across more of their range, and just bundle them into different MCMs.
The 3990X costs more than ten times as much as the 3700X. It has eight times more cores. On anything threaded it smashes the 3700X. On anything not threaded it... doesn't. In many cases it loses slightly because the turbo clock is lower.
It basically means that the processor with the best single thread performance is somewhere in the lower half of your lineup and everything above it is just more chiplets with more cores. That's perfectly reasonable for servers and high end workstations that scale with threads. I'm not sure how interesting it is for laptops. Notice that AMD's laptop processors don't use chiplets.
Even the highest core count Threadrippers have decent single thread performance. The Epyc lineup has much lower single core performance and that may make it less useful for desktop workloads.
AFAIK the AMD distinction is currently that APUs (mobile or desktop) don't use chiplets.
On the whole my guess would be that the iPad Pro and MacBook Air use the same SoC, the MacBook Pro does… something (it'll still need integrated graphics, but do they really sell enough to justify a new die? On the other hand, they do make a die specifically for the iPad Pro, and I'd guess that's the lowest-selling iOS device versus the highest-selling macOS device; I don't know how the numbers compare!), and the iMac (Pro)/Mac Pro use chiplets.
Don't worry, Apple already tiers most of its hardware by soldering in the RAM / storage and charging an offensive, obviously price-gouging amount to upgrade, even though the maximum spec has a base cost to them of 1/4 to 1/6 of what they charge FOR AN UPGRADE.
The Mac line will start to look like the iOS line very quickly. Binning will be important and you'll likely see processor generations synchronized across the entire product base.
I've been thinking about this. I can't see Apple realistically being able to produce multiple variants (phone, tablet, laptop, speaker, tv) of multiple elements (cpu, gpu, neural accelerator, wireless/network, etc) packaged up on an annual cadence.
The silicon team is going to be very busy: they've got the A-series, S-series, T-series, H-series, W-series, and U-series chips to pump out on a regular roadmap.
The A-series (CPU / GPU / Neural accelerator) is the major work. It gets an annual revision, which probably means at least two teams in parallel?
The A-series X and Z variants seem to be kicked out roughly every second A-series generation, and power the iPads. The S-series seems to get a roughly annual revision, but it's a much smaller change than the main A-series.
I could see the Mac chips on a 2-year cycle, perhaps alternating with the iPad, or perhaps even trailing the iPads by 6 months?
The iOS line looks like the low end device using last year's chip. How does binning help with that? Are they going to stockpile all the low quality chips for two years before you start putting them in the low end devices? Wouldn't that make the performance unusually bad, because it's the older chip and the lower quality silicon?
If you think the bins are determined by yield rather than by fitting a supply/demand curve, I have a bridge to sell you.
Of course, yield is still a physical constraint, but Apple sells a wide range of products and shouldn't have any trouble finding homes for defect-ridden chips.
> CPUs mostly run idle. Even if you had a theoretical CPU using zero energy, most people are not going to get 30% battery life gains
I don't agree. Simply disabling Turbo Boost on my MBP16 nets me around 10-15% more battery life. Underclocking a CPU can even result in twice to thrice the battery life on a gaming laptop under the same workload.
I actually think total battery life will go up a fair bit and compile times will be much faster, 20-30%, while giving everyone full power even when not on the mains. The amount my MacBook throttles when on battery is startling, and stopping that while still giving huge battery life, say 6h at 80% CPU, will be a huge win. Apple wouldn't bother unless they knew the benefits they can bring over the next 10 years will be huge.
All of this is complete speculation of course, but I don't believe this one will be a financial decision; it'll be about creating better products.
Multi-core performance is not a strong suit of Apple's ARM architecture, I suspect you're going to see a mild to moderate performance hit for things like compilation.
The rumours are that they're doubling the number of high-performance cores for the laptop chips (so 8 high performance cores and 4 low-power cores). That + better cooling ought to boost the multi-core performance quite significantly.
Is their multi-core performance poor, or have they just made size/power trade-offs against increasing the number of cores? The iPad Pro SoCs are literally the only parts they've made so far with four big cores.
That’s mostly because desktop systems are built with more background services and traditional multitasking in mind. iOS has a different set of principles.
I was looking at the benchmarks of the latest MacBook Air here [1]. In GPU performance it's not competitive with the iPad Pro, and that's quite an understatement. For me the most obvious win of this migration to "Apple Silicon" will be that entry-level MacBook/iMac will have decent GPU performance, at long last...
“Apple can start leaning on their specialized ML cores and accelerators“
I think that hits the nail on the head. Since I only listened cursorily to both the keynote and the State of the Union I may have missed it, but I heard them mention neither "CPU" nor "ARM". The term they use is "Apple Silicon", for the whole package.
I think they are, at the core, but from what they said, these things need not even be ARM CPUs.
JS/ads and the wifi chipset seem to be the big culprits across laptops in general in this scenario. Even Netflix doesn't drain my battery as fast as hitting an ad-heavy site with lots of JS and analytics, and I can watch the power usage for my wifi chipset crank up accordingly. This happens across every laptop, iPad, Chromebook etc. that I own.
-the CPU will be a lot more powerful and faster, but it isn't really faster because it's like an accelerator or something.
-if you actually use your computer get some vague "Linux desktop" or something (which is farcical and borders on parody, completely detached from actual reality). Because in the real world people actually doing stuff know that their CPU, and its work, is a significant contributor to power consumption, but if we just dismiss all of those people we can easily argue its irrelevance.
My standards for comments on HN regarding Apple events are very low, but today's posts really punch below that standard. It's armies of ignorant malcontents pissing in the wind. All meaningless, and they're spraying themselves with piss, but it always happens.
I was going to follow up with an anecdote about how my computer has used less than 15 minutes of CPU time in the last 2 hours but then again I forgot to stop a docker container that automatically ran an ffmpeg command in the background consuming 70 min of CPU time.
> Apple is going to save $200-$800 cost per Mac shipped
Does Apple actually have its own silicon fab now or are they outsourcing manufacture? If the former, those are /expensive/ and they'll still be paying it off.
This seems very inaccurate to me. Most laptops do not have discrete GPUs, so tasks like rendering a youtube video do require CPU cycles. Zoom is very CPU intensive on basically any mac laptop, and people always have a ton of tabs open, which can be fairly CPU intensive.
In other words, there are definitely gains to be had. My ipad pro offers a generally more smooth and satisfying experience with silent and much cooler running CPU versus my MBP, and they offer similar battery life. Scale up to MBP battery size and I suspect we will be seeing a few hours battery life gain.
Here's an analogy to help explain the skepticism: ants have amazing efficiency - they can lift multiples of their own body weight. So why can't an ant lift my car? Well, because it's too small. So let's just take the same ant design and scale it up? Unfortunately, it doesn't work like that. A creature capable of lifting my car wouldn't be much like an ant.
There is no guarantee that a phone-scale CPU can just become 4x faster by 4x'ing the power/TDP/die area. If it were that easy, Intel would already have done it (and no, the x86 architecture isn't so terribly inefficient that they are leaving triple-digit percentage improvements on the table).
What I expect we'll see are ARM chips that are power and performance competitive with x86 chips only for specific curated use cases. Apple will extract an advantage by putting custom hardware acceleration into them, to cater for those specific tasks. They will not be able to achieve general purpose performance improvements wildly beyond what Intel can already do.
This is how the current iDevices achieve their excellent performance and battery life. Not through raw general-purpose CPU horsepower, but by a finely tuned synergy between hardware and software. Apple are taking their desktop down the same route. This will be the ultimate competitive advantage for their own software - they will be able to move key software components into hardware, and make it look like magic. But as a developer, you won't be able to participate in this unless you target Apple-blessed hardware instructions/APIs. Your Python script isn't going to start running 4x faster unless you can convince Apple to implement its inner loop in custom silicon.
I have no doubt that Apple will be leaning hard into ASIC territory as they build out their new CPUs. The endgame? Every software function you need, baked into perfectly optimised silicon by the monovendor.
> What I expect we'll see are ARM chips that are power and performance competitive with x86 chips only for specific curated use cases.
Sorry, but there is no justification for this. With the same thermal constraints there is every expectation that an Apple / Arm CPU would be more performant and efficient than a comparable x86 part. Why? Because aarch64 doesn't have the historical legacy that x86 has, and Apple has already shown what they can do in the iPad etc. Sure, it won't be triple digits, but it will be enough to be noticeable.
And, as you say, they will have the advantage of Apple's custom silicon for specific use cases. So best of both worlds.
The comparison is a bit unfair. x86 is like a decade older than ARM. Not that much in retrospect. aarch64 is as "free of historical legacy" as x86_64 is (that is: not at all free). There is a lot of cruft and there are even multiple ISAs in aarch64 (e.g. T32/Thumb).
And the CISC vs RISC arguments are questionable, seeing that Apple has done the migration in both directions by now.
I noticed that Apple made absolutely no mention of ARM in their keynote. Seems like they're trying to whitelabel it for brand benefits as well as to divorce themselves from any expectations around standards?
That was interesting! Surely not an accident. Possibly to:
- Emphasise the breadth of their silicon expertise across CPU / GPU / Neural Engines etc.
- Because Arm has little or no brand recognition (Apple > Intel > Arm in branding terms).
- Distinguish from any me-too moves to Arm by competitors.
There you go! Someone finally figured it out. Apple is moving to Apple Silicon, not ARM. Try to get LG to announce they’re offering a PC with Apple Silicon tomorrow.
The absence of any ARM mention is marketing, nothing more.
They scarcely ever have with regards to iOS either; architecture has never been a talking point for their CPUs. How long was it from iPhone announcement to knowing it was ARM? How long from the Apple A4 announcement to knowing it was ARM?
Yep, most of that old "cruft" is essentially unused and turned off. A lot of critics of x86 don't really know what they're talking about. x86 is inefficient because they don't really have much incentive to end the status quo where performance is more important than power to most customers. People are happy with 3-4 hours out of their laptops so Intel and AMD aim for that and sacrifice power for performance; quite often that is the tradeoff in the design.
> People are happy with 3-4 hours out of their laptops so Intel and AMD aim for that and sacrifice power for performance
Heck, I'm happy with 1 hour. I leave my laptop plugged in nearly 100% of the time. The point of the laptop is that it's easy to move, not that I want to use it while I'm in transit.
> Some is turned off but some still has to be dealt with (variable instruction lengths for example).
Modern x86_64 processors don't actually natively execute x86 instructions, they translate them into the instructions the hardware actually uses. The percentage of the die required to do that translation is small and immaterial.
> Intel tried to compete in mobile for a long time and failed even with a better manufacturing process.
Intel didn't understand the market.
I recently bought a new phone. On paper it's twice as fast as my old phone. I imagine that's true but I can't tell any difference. Everything was sufficiently fast before and it still is. I never use my phone to do anything that needs an actually-fast CPU. I have no reason to pay additional money for a faster phone CPU. But I do notice how often I have to charge the battery.
These are not atypical purchasing criteria for mobile devices, but that's not the market Intel was chasing with their designs and pricing, so they failed. It's not because they couldn't make an x86 CPU for that market, it's because they didn't want to, because it's a lower margin commodity market.
Faster CPUs become more power-efficient CPUs because they can race to sleep. So you really do want to pay more for that CPU, but not for the compute performance, rather for the battery life.
That's assuming the faster CPUs use the same amount of power. It's possible for a slower CPU to have better performance per watt. This is often exactly what happens when you limit clock speed -- performance goes down, performance per watt goes up.
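To make the "race to sleep" trade-off concrete, here is a tiny worked example with made-up numbers (not measurements of any real chip): energy per task is power times time, so the faster chip only wins if its extra power grows more slowly than its speed.

    E = P \cdot t
    \text{fast chip: } 4\,\mathrm{W} \times 1\,\mathrm{s} = 4\,\mathrm{J} \qquad \text{slow chip: } 2.5\,\mathrm{W} \times 2\,\mathrm{s} = 5\,\mathrm{J}

Here the fast chip finishes first, spends less total energy, and then drops to near-zero idle. But if it had needed 6 W instead of 4 W to get that 2x speed, it would burn 6 J for the same task and the slower, lower-clocked part would win on battery, which is exactly the perf-per-watt point above.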
> Intel tried to compete in mobile for a long time and failed even with a better manufacturing process.
They didn't fail because of performance, though, they failed because of app support & lack of a quality radio. The CPU performance & efficiency itself was otherwise fine. It wasn't always chart-topping good, but it wasn't bad either.
Agreed - CPUs (at the end at least) were fine. Also they were probably looking for bigger margins than were available.
The general point is that I think Arm has a small architectural advantage due to lack of cruft, but that other factors are usually more important, e.g. the resources and quality of the team behind the implementation.
Sorry, I meant A64 rather than AArch64, as I'm pretty sure Apple hasn't supported 32-bit for a while now (so no T32 or Thumb); the instruction set was announced in 2011 and is definitely cleaner than x86.
Agreed that CISC vs RISC is very questionable by now.
Provided that the software is correctly written, ARM's weaker memory model allows for more flexible instruction and I/O scheduling.
It seems most people feel that the DEC Alpha went too far in weakening the memory model to improve performance, but A64 seems to at least be near the sweet spot.
It's also not a huge amount of work that gets thrown away when decoding x86 instructions in parallel, but there's non-zero overhead introduced by having the start location of the next instruction depend on what the current instruction is.
Your wording makes it sound like ARM is still being used just for smaller devices and controllers with very well defined and limited uses. General purpose computing is already possible with iPads and iPhones. They're just artificially limited by the OS.
iDevices weren't really made with games in mind, but they can push out performance that beats handheld gaming devices. Artists (including myself) use iPads extensively, and the response time with the Apple Pencil beats just about anything else on the market. The only limiting factor is the tiny memory, which limits the file size and layer count in some programs. They're just fine for watching video, and even for multitasking with a video playing while working on something else. This is on a tiny device with no active cooling and long battery life, beating out most laptops in the same price range.
I don't believe there is any curated use case. They're already more than capable of being general purpose computers. I mean, Apple is already openly advertising that they're making iPad OS more desktop-like and operable with mice and keyboards. Literally the only things holding them back are the OS and Apple's refusal to put some decent memory inside.
The OS is the curated use case. Multitasking is an afterthought. Once the OS is no longer "holding them back" the Apple chip will run into similar problems that Intel CPUs run into.
Isn't that for specific benchmarks though such as some geekbench/specint or web browsing benchmarks? I worry about non-gpu floating point for example. There is so much hand-optimized AVX/SSE code out there in big apps.
There’s a fair bit of AVX/SSE code out there, but these days the vast bulk of AVX/SSE code is generated by the autovectorizer and that’s mostly going to work on NEON without a hitch. Clang enables the autovectorizer at -O2 by default.
I’d be interested in estimates of how much hand-written AVX/SSE your computer actually runs. The apps I’ve seen usually have a fairly small core of AVX/SSE code.
They're admittedly not applications most users run every day, but much of what multimedia applications do (audio processing, encoding, decoding) is done with hand-crafted intrinsics, and the same goes for video.
In an even more niche area (high-end VFX apps, like compositors, renderers) SSE/AVX intrinsics are used quite a bit in performance-critical parts of the code, and auto-vectorisers can't yet do as good a job (they're pretty useless at conditionals and masking).
But is the bulk of AVX code by time spent running, code that was generated by autovectorizer? The SIMD in openssl and ffmpeg is written by hand. I bet the code that spends a lot of time on the CPU, especially the code that runs a lot while humans are waiting, is written by hand.
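For a feel of what porting hand-written SIMD involves, here's a toy 4-wide float add written once with SSE intrinsics and once with NEON (purely illustrative, not taken from openssl or ffmpeg, whose kernels are far larger and often in assembly):

    #if defined(__SSE__)
    #include <xmmintrin.h>
    /* x86: add four floats at a time with SSE */
    void add4(const float *a, const float *b, float *out) {
        _mm_storeu_ps(out, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
    }
    #elif defined(__ARM_NEON)
    #include <arm_neon.h>
    /* arm64: the same operation with NEON intrinsics */
    void add4(const float *a, const float *b, float *out) {
        vst1q_f32(out, vaddq_f32(vld1q_f32(a), vld1q_f32(b)));
    }
    #endif

Every hand-tuned kernel needs this kind of per-ISA rewrite, though much of ffmpeg already has NEON ports from the mobile world, so it's work, but bounded work.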
Desktop productivity content creation apps have never before needed ARM versions, so many probably don't have ARM specific optimizations, and some probably have x86 specific code that is just enabled by default.
The memory model differences are going to be painful to debug, I think ("all-the-world's-a-VAX syndrome" is now "all the world's a Pentium/x86-64").
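A minimal sketch of the kind of bug that surfaces (hypothetical code, C11 atomics): the relaxed flag below is already a questionable pattern by the letter of the standard, but on x86's strong (TSO) ordering it tends to behave like acquire/release and "just work", whereas ARM's weaker model is allowed to reorder the stores or loads, so the reader can observe ready == 1 while data still reads 0.

    #include <stdatomic.h>

    int data = 0;
    atomic_int ready = 0;

    /* writer thread */
    void produce(void) {
        data = 42;
        /* should be memory_order_release */
        atomic_store_explicit(&ready, 1, memory_order_relaxed);
    }

    /* reader thread */
    int consume(void) {
        /* should be memory_order_acquire */
        while (atomic_load_explicit(&ready, memory_order_relaxed) == 0) { }
        return data;  /* can be 0 on arm64; "works" in practice on x86-64 */
    }

Code like this can sit latent in an x86-only codebase for years and only start failing intermittently when recompiled for arm64, which is exactly the debugging pain being described.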
Given that Amazon was able to get there, what makes you think Apple can't? I would struggle to believe that Annapurna Labs has any significant advantage over PA Semi given the track record PA has had since joining Apple, and the fact they had nearly a decade head-start.
>>> They already have a mobile chip that is as fast as an active-thermally cooled notebook chip.
>> "As fast" on specific curated use cases. Show me an Apple chip that beats any laptop on 7zip.
> Is this a joke, what kind of usage benchmark is 7zipping large numbers of files?
A benchmark that Apple is unlikely to have implemented specific optimizations for, which therefore is a better test of the general purpose performance of the chip.
The situation being claimed here is sort of like if someone cited a DES benchmark to claim that Deep Crack's DES cracking chips (https://en.wikipedia.org/wiki/EFF_DES_cracker) were faster than a contemporary 1998 Pentium II.
I believe it is what the poster you are replying to would call "a specific curated use case."
(Semi-seriously, I don't know anyone who uses a Unix(-like) system who uses 7zip, although I'm sure they're out there. For the record, I just unzipped a 120M archive on both my 2020 Core i7 MacBook Air and my 2018 (last-gen) iPad Pro and as near as I can tell the iPad was faster actually extracting the files, but had an extra second or so of overhead from the UI.)
Correct. 7zip is an LZMA compressor. The common equivalent command line tool on Linux is xz.
Linux distributions have been using xz compression for their packages (replacing gzip). So to the question of how relevant xz/LZMA/7zip performance is to day-to-day tasks: it's very relevant.
This is something that's looking at getting moved to storage controllers on motherboards, e.g. the PS5/Xbox consoles, so that compressed data can be streamed directly to the GPU. Hopefully we'll start to get this type of tech after it's been proven in the console space.
So could you provide some actual, open-source, basic benchmarks, and not strange, opaque Geekbench results...
I guess AMD is fine for me (as is my old Intel-notebook) and I'll just wait for POVray, GROMACS and Co.
EDIT:
And well, I noticed that Anandtech supposedly ran SPECint2006 on the A13 (and numerous other chips) - they ran it with WSL for x86 (because running on a dozen Android things is easier than running a standard benchmark on Linux/Windows, of course). You find the results here: https://images.anandtech.com/doci/14892/spec2006-global-over... - I guess (not sure, because it's for some reason not clearly marked and mentioned...) these are SPECint2006 results. So, let's check them for validity (because WSL is no problem and it matches Linux, of course); just looking at an i7-6700K (which is a little bit behind the i9-9900K they supposedly ran on):
https://www.spec.org/cpu2006/results/res2016q1/cpu2006-20160... - marginally worse performance than @anandtech in some benchmarks, but that's with an older CPU and an older arch! And then there are the 3 or 4 benchmarks which are just way off. Makes one wonder what they really did (because of course, installing CentOS and running SPEC on native Linux is too much of a hassle when you're already running and compiling on 8 ARM platforms!?)
EDITEDIT: it's even worse for SPECfp2006: https://www.spec.org/cpu2006/results/res2016q1/cpu2006-20160... [well, here the old 6700K is suddenly sometimes twice as fast as the 9900K and 3x as fast as the A13 (and yeah, the story of the 2.8GHz low-power chip running circles around a 4.5GHz high-frequency part just didn't sound convincing in the first place...)]
The official results from spec.org have a bunch of cheating, eg. exploiting undefined behaviour to run a benchmark improperly. AnandTech uses a consistent compiler (Clang, not ICC) without settings to exploit this, hence the divergence.
When I started at Google, I sat next to a guy who used to write compilers for DEC and Intel. I asked him, given the huge amount Google spends on hardware and electricity, if he thought that switching to ICC would be worthwhile. His answer was basically that ICC is tuned to maximally exploit undefined behavior for marketing purposes and he wouldn't want to use it in production, at least not without heavily tweaking flags to disable some optimizations. ICC gets most of its speed advantages by enabling optimizations that are present in GCC/Clang but deemed too dangerous to turn on by default.
Yeah, the last time they did that, with Nvidia graphics cards just as Adobe released its new rendering engine, everybody was really thrilled to learn that, thanks to that inherent Apple advantage, they could buy Apple's video editing software (which would not sht itself) instead of using the Adobe tools...
Intel can't shrink their process to match what TSMC/GlobalFoundries/Samsung can do, and they will never let those fabs manufacture their chips for IP/national security/etc. reasons.
> Apple is roughly one chip cycle ahead on perfomance/watt from any other manufacturer.
Eh? This is a flimsy claim. AMD's performance/watt is extremely impressive right now. Apple is ahead of Intel for sure, but Intel isn't the only other player here.
> So, I’m predicting an MBP 13 - 16 range with an extra three hours of battery life+, and 20-30% faster. Alternately a Macbook Air type with 16 hours plus strong 4k performance.
A slightly more efficient CPU doesn't get you this. You need significant efficiency improvements across a variety of aspects, including those Apple has already been optimizing for years like the display.
I'd say that you should take a look at a comparison of the power efficiency of Apple's "little" core in the A13 to a stock ARM little core.
>In the face-off against a Cortex-A55 implementation such as on the Snapdragon 855, the new Thunder cores represent a 2.5-3x performance lead while at the same time using less than half the energy.
AMD is tiny compared to intel, the fact that they are besting them goes to show how they have been stuck for ~5 years.
The real problem, though, is that Apple is actually designing a core 100% focused on the target market. Unlike Intel, for whatever reason, and AMD which didn't have the funds to run a dedicated design team for laptop/desktops.
So, I would expect the engineering tradeoffs for said laptop/desktop processor to show. AKA, things like hyperthreading are quite a win for servers, but at best are a wash for a desktop use case focused on extremely high single thread perf at the expense of throughput.
> AMD which didn't have the funds to run a dedicated design team for laptop/desktops.
Given the extremely impressive performance of the 4800H notebook cpus, I'd assume that might be a thing of the past.
> AKA, things like hyperthreading are quite a win for servers, but at best are a wash for a desktop use case focused on extremely high single thread perf at the expense of throughput.
This might be true for devices like the MacBook Air which are designed for relatively light usage like Office, but I don't see that argument working with their "Pro" lineup, including the iMac Pro and the MacBook Pro. These are devices specifically targeted to a professional audience like graphics designers, 3d artists, software developers or video editors. All of those tasks can be done with decent single-threaded performance, but lots of those tasks also benefit from multithreading.
I haven't owned a single MacBook so far and I doubt that'll change anytime soon. Nevertheless, it's exciting to see Apple make this move, and it'll be interesting to see how well their CPUs compare to mobile processors from Intel and AMD.
TDP is whatever you want it to be. The big cores in an A12Z will pull around 4w each. That means an "unchained" A12Z is a ~16W+ CPU. The 4800H is a 45W TDP, but also has 2x the fast CPU cores. And the binned 4800HS is a 35W TDP, still for 8 cores / 16 threads.
So ~2-3x the TDP for 2x the core count and 4x the thread count. Pretty interesting head to head when the Apple dev kits actually show up in people's hands, don't you think?
I'm unconvinced this comparison is very meaningful at all.
First of all, TDP is not the same thing as power consumption - it is a specification for the required performance of the heatsink/fan cooling solution.
For example: a Ryzen 3900X is a 105W TDP chip. Running at full speed on all 12 cores it consumes 146W; about 10W per core and the remainder for the rest of the package.
Secondly, it is entirely typical to run a single-threaded workload at a higher clock frequency (because if that's all you have to do, why not?), and chasing higher clock speeds is disproportionately expensive since it requires higher voltages, and dynamic power in a switching system increases with the square of voltage.
Again, taking the Ryzen 3900X: that's a nominal 3.8GHz processor. Running a single-threaded workload, it will typically boost up to 4.45GHz in testing. At that frequency, that single core is drawing nearly 18W - i.e. 80% more than at the nominal frequency achieved when all cores are busy and no boost headroom is available.
From what I've read about the A12/A13, the voltage/clock curves are particularly skewed at maximum clock speeds - something like 1.1V at 2.49GHz on the A12 and well under 0.8V at 2.3GHz - basically half the power to run at 93% of the clock speed.
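Plugging those A12 numbers into the usual dynamic-power relation (a rough model that ignores leakage, and taking 0.8 V as an upper bound) shows why giving up the last ~7% of clock is so cheap:

    P_{\mathrm{dyn}} \propto C\,V^2 f
    \frac{P_{2.3\,\mathrm{GHz}}}{P_{2.49\,\mathrm{GHz}}} \approx \left(\frac{0.8}{1.1}\right)^{2} \cdot \frac{2.3}{2.49} \approx 0.53 \cdot 0.92 \approx 0.49

Roughly half the power for about 93% of the clock, matching the figures quoted above.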
There are a lot of unknowns here, but I think there are more reasons for optimism than your analysis suggests.
That's either 4-5W per core or the uncore in an A13 is hugely power hungry. I'm rather positive it's not an extremely bad uncore, so the only other option here is a 4-5w per core power figure. Which also lines up with the voltage/frequency curve numbers: https://images.anandtech.com/doci/14892/a12-fvcurve.png
If you have data to support a different number I'm all ears, finding power draw figures in this space is rather difficult, but 4-5W per-core aligns with expectations here. A 1W consumption would be unheard of levels of good.
It’s a meaningless comparison. The 4800H could be a 200W chip if it was “unchained”. Peak burst performance is dynamic in modern CPUs, it’s what you can measure in the real world that matters.
Intel TDP doesn’t include the power usage of DRAM and other IO, or the screen, or WiFi or modem (which may have been disabled tbf).
Geekbench 5 multi core scores are roughly 7400 vs 3300. Let’s say for example that the Thunder cores are half the perf of the Lightning ones. So that 3300 score might be roughly the perf you could get from 4 x Lightning instead of 2 x Lightning and 4 x Thunder. 4800H has 8 cores. Getting a bit over 2x the performance.
But that’s at a TDP of 45W (let’s call it 40W to be more generous). 5W for A13 (well, A13 entire device) vs 40W 4800H. That’s 8x the power draw for 2x performance. Am I wrong?
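Writing that estimate out explicitly (same big caveats: whole-device power for the A13, TDP rather than measured draw for the 4800H, and Geekbench as the yardstick):

    \frac{\mathrm{perf}_{4800H}}{\mathrm{perf}_{A13}} \approx \frac{7400}{3300} \approx 2.2
    \qquad
    \frac{P_{4800H}}{P_{A13}} \approx \frac{40\,\mathrm{W}}{5\,\mathrm{W}} = 8

On those rough numbers the 4800H buys a bit over 2x the throughput for 8x the power, i.e. the A13 comes out around 3-4x ahead on perf/watt, which is what the replies below go on to argue about.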
The linked Anandtech data shows it using 4-5W in a single-thread test. That doesn't mean it will use 4-5W/core in a multithreaded test, but that's almost certainly only due to limitations in power delivery and thermals.
> All of those tasks can be done with decent single-threaded performance, but lots of those tasks also benefit from multithreading
Most of those tasks benefit from multiple processors. Multithreading is less clear-cut because you're trading the win for under-optimized code against increased pressure on shared resources (which is one of the reasons why it's opened some windows for security attacks). It's not hard to find pro workloads which perform better without multithreading enabled and considering that Apple will own the entire stack up to some of the most demanding apps they're well positioned to have both solid data on the tradeoffs and architectural changes.
Note for the downvoters who might be confused: I'm using multithreading in the hardware sense of simultaneous multithreading (SMT), which Intel refers to as Hyper-Threading:
Hyper-threading is indeed meant for languages like Python or JavaScript that use pointers everywhere. Once you have an optimized workload with little pointer chasing, the only other meaningful benefit of SMT comes from the fact that you can run floating point workloads alongside integer workloads. That's a pretty rare situation, but it does happen sometimes.
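For what it's worth, the pointer-chasing case looks like the (hypothetical) sketch below: nearly every iteration is a dependent load that can miss cache, so while one hardware thread stalls, its SMT sibling can use the otherwise idle execution units. A dense, vectorized loop keeps the core busy by itself, which is why SMT buys much less there.

    struct node { struct node *next; long value; };

    /* Sums a linked list: each step waits on the previous load,
       so the core spends most of its time stalled on memory. */
    long sum_list(const struct node *n) {
        long total = 0;
        while (n != NULL) {
            total += n->value;
            n = n->next;   /* dependent load, likely a cache miss */
        }
        return total;
    }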
That was basically my thought: there are plenty of programs which it can help (almost all business apps) but not all of those are limiting anyone’s work and the feature isn’t free. Having multiple true cores has been common for multiple decades now and I’d be really curious whether a modern chip design team would feel it’s worth investing in if they didn’t already have it. My understanding is that SMT has a power cost comparable to extra cores and given how well Apple’s CPU team has been executing I’d assume there’s been careful analysis behind not implementing it yet.
> AMD is tiny compared to intel, the fact that they are besting them goes to show how they have been stuck for ~5 years.
I'm having trouble following the reasoning.
> AMD which didn't have the funds to run a dedicated design team for laptop/desktops.
They have plenty of funds for R&D. Problem is, processor manufacture goes much beyond the processors themselves. You have to design the entire manufacturing chain, and spend billions on new foundries which will get obsolete in a few years.
AMD is fabless, they just didn't have any money for spinning up several different chips for different markets. They even had a single chip from desktop to high end servers (with all the tradeoffs that entails).
Because AMD has a 4 core low power processor with multiple times the memory bandwidth and multiple times the I/O and greater performance per core at the same power draw as Apple's 2 core processor with about a tenth of the I/O and a third the memory bandwidth.
You're correct that AMD's offerings are impressive, but that's vs an uncooled A12 chip. Add active cooling and a few more watts and there's no reason why they couldn't blow the doors off.
I think you're either underestimating how much power an A12 consumes or overestimating what actively cooled CPUs consume per-core.
The A12 will pull around 4w on a single-core workload to come close to 9900K in performance. That's a good number, but it's not unheard of. The 4800HS is also a 4w per core CPU, and also comes close to the 9900K in performance.
The problem is increasing single-core performance becomes non-linear. It's not just a few more watts to bump from 3ghz to 4ghz. It's a lot of watts.
Simply having 4 big cores on an A12 would push it into 10w actively cooled territory as well, same as the i5 in a macbook air (a CPU that's also 10w). Add a few watts to bump the single core performance while you're at it and suddenly it's a 20w chip. Make it 8-cores to compete with the current macbook pro CPUs and suddenly it's a 40w chip.
> "Simply having 4 big cores on an A12 would push it into 10w actively cooled territory as well"
The A12X/A12Z already has 4 big cores (and 4 little cores, and 7-8 GPU cores). I imagine the A14X will follow this pattern, but the cores will be 2 generations newer, with 2 generations of performance-per-watt improvements.
Power-performance is a very nonlinear curve, so it doesn't make sense to compare single-core TDPs with all-core TDPs. The iPhone XS is 2500 MHz when one core, but drops to 2380 MHz when both primary cores are in use, a 5% performance drop... but this drop lowers power from 3.85 W to 2.12 W, 45% less!
Hence it makes much more sense to treat the A series processors as 2 watt chips when looking at multi-core scores. They're targeting efficiently hitting these lower frequencies. You'll get a similar result for the 4800HS; it'll use a lot more than 4 W single-core.
I think you mean 2w cores not 2w chips? The A12X also cuts frequency when multiple are in use (as of course AMD and Intel CPUs do as well), but at 2w/core you're still talking 8w for a quad core, comparable in power to the quad core i5 in the current MacBook Air.
The problem here is a severe lack of quality data. The best we have right now is SPEC2006, which is unfortunately only single-core. You're absolutely right that it makes more sense to compare like for like in workloads, but we don't have any good multithreaded cross-platform benchmarks. There's Geekbench, but it's somewhere between mediocre and shitty. And then nothing else? There are no multithreaded benchmarks that also have measured power draw on an A12/A13.
A 4800HS at its 4W/core all-core load is also still clocking higher than an A12X. When Apple has the thermal budget to spend as well, they'd almost certainly do the same thing?
Yes, 2 W/core. The difference is that at this power level, the A series chip will be a lot closer to peak performance than the Intel chip.
> A 4800HS at its 4W/core all-core load is also still clocking higher than an A12X.
This doesn't mean that much, since the range of efficient clock speeds depends on design choices, so they aren't always 1:1 comparable between architectures. Apple might well increase clock speed on their desktop chips, but unless it's only a few percent, it won't be as simple as pumping more power into the same dies; they have to actually redesign the core to operate efficiently at those higher speeds.
This isn't unique to Apple, either. The "big" cores in ARM CPUs have been pulling 2-4W for years and years. That's why thermal throttling is such a major issue in mobile, especially mobile games.
> In virtually all of the SPECint2006 tests, Apple has gone and increased the peak power draw of the A13 SoC; and so in many cases we’re almost 1W above the A12. Here at peak performance it seems the power increase was greater than the performance increase, and that’s why in almost all workloads the A13 ends up as less efficient than the A12.
> The total power use is quite alarming here, as we’re exceeding 5W for many workloads. In 470.lbm the chip went even higher, averaging 6.27W. If I had not been actively cooling the phone and purposefully attempting it not to throttle, it would be impossible for the chip to maintain this performance for prolonged periods.
In other words, to get those good specint numbers, power was sacrificed to do it. 5W per-core power draw is right in line with a typical x86 laptop CPU, too. 4800HS sits at 35w, or 4.3w per core.
Yes. I saw that there were already other responses to your comment, but I'd like to add my own, quoted from the conclusion of the article:
"But the biggest surprises and largest performance increases were to be found in the A13's GPU. Where the new chip really shines and exceeds Apple’s own marketing claims is in the sustained performance and efficiency of the new GPU. Particularly the iPhone 11 Pro models were able to showcase much improved long-term performance results, all while keeping thermals in check. The short version of it is that Apple has been able to knock it out of the park, delivering performance increases that we hadn’t expected in what's essentially a mid-generation refresh on the chip manufacturing side of matters."
One of the key phrases is "...Apple has been able to knock it out of the park..."
The rest of the article is pretty clear - Apple gets it, and beats their competitors pretty soundly.
Apple is definitively competent at chip design but the end result won't be leagues ahead. They might be something like 10% ahead in terms of IPC and another 10-20% just because they get early access to 5nm compared to whatever "ancient" process Intel is using.
Everyone seems to think about cost and speed all the time.
I think that's only part of the story, which also needs to include the ability to add features and control the entire feature set across chips on all of their devices.
Encryption, ML, graphics, power management, security, etc. are all things that Apple can now add or remove as needed.
The level of optimization they can do is now well beyond just speed and price.
Right now absolutely everyone else is ahead of Intel in the process-node race, with some currently shipping chips two nodes ahead and several announced chips going so far as half the feature size of Intel's current node. This is partially why AMD has been able to offer laptop CPUs that rival Intel's desktop offerings for less cost.
That's fair. However, according to the same chart, it draws 1W at 2000 MHz, which should give 80% of the single-core performance. This is how Apple is getting 4x the perf/watt of Intel and AMD competitors in Geekbench and similar workloads. Apple is able to achieve very high multithreaded performance within a 5-6W TDP by running all the cores at around 80% of peak performance.
While similar curves also apply to Intel and AMD, their mobile parts are drawing 10-20 watts per core to achieve the very top results. When you're using all the cores together under a 6W TDP (as Apple is doing), Apple is able to achieve a much higher Geekbench result than Intel or AMD parts set to a comparable TDP, or even 3x the TDP. Compare multi-core Geekbench scores of Apple's parts running at a 6W TDP to Intel or AMD's most efficient parts running at a 15W TDP, and you'll see that Apple outperforms them while drawing 1/3 the power. Similar curves apply, but Apple can achieve far more at 1W per core than any x86 competitor.
Those curves also apply to Intel & AMD. As in, you can drop frequency on AMD to also achieve significant improvements in perf/watt. That's not a unique aspect of the A12. That curve is more "this is how TSMC's 7nm transistors behave" type of thing.
> 4x the perf/watt of Intel and AMD competitors in Geekbench
Geekbench only measures perf, not perf/watt. It does not try to achieve maximum perf/watt, nor has Apple tuned the A12/A13 to achieve maximum perf/watt in Geekbench either. Geekbench's single-thread numbers, where it "competes with Intel & AMD", are also at these ~5W per-core power figures.
I'm not sure where you're getting this random 4x better number from anyway?
Researching the very most efficient parts from Intel and AMD today, 3x the perf/watt would be more accurate. Operating at a 5-6W TDP, the 2018 Apple A12X gets a multicore Geekbench score of 4730. Operating at a TDP of 15W, the i7-1065G7 (Ice Lake) gets multi-core Geekbench score of 4865. This is on Intel's 10 nm process that's comparable to TSMC 7 nm. Near equal performance for 3x the power.
I'd expect a 2020 A14X or whatever it's called to comfortably beat what they could achieve in 2018, so getting 4-5x the perf/watt of Intel and AMD's best is what I'd expect when operating at similar points in the frequency/power curve. The A12X was around 4-5x the perf/watt of what Intel and AMD had out in late 2018.
Unfortunately I could only find this old chart [0] showing how power draw scales with frequency on Intel. However for the sake of demonstration it should be more than enough.
The chip needs around 25W at 2.5GHz and 200W at 4.7GHz: 8x more power for 1.88 times the performance. In other words, Intel chips running at 2.5GHz are 4.25 times more efficient than Intel chips running at 4.7GHz. No magic. Once Apple has chips that go this far they will suffer from the same problems.
Here is a slightly newer chart [1] that demonstrates a 57% increase in power consumption for a 500Mhz frequency gain (12% performance gain).
If it was that easy then everyone would do it. I call this the curse of the single number. There is this complicated machinery with lots of parts with different shapes, some are bigger some are smaller. However, the customer is not aware of the complexity and only sees a single number like 5W and maybe another number that showcases the performance score of the chip. Surely, since that is the only information we have about power consumption and performance it must be true in all situations. The reality is that those two numbers were measured during different situations and combining them into a meaningful calculation might actually not be possible.
For example. Geekbench measures peak performance of all cores at the same time and the power draw may go above 5W.
The 5W TDP may refer to normal day-to-day use, where one or two cores are active at a time for the duration of the user interaction (playing a game for 5 min or something), and once the user stops using the phone it will quickly go back to a lower power draw.
You're comparing peak single-core power with all-core TDP divided by core count. Zen 2 cores peak at about 10 watts per core... Apple probably has at least 30% frequency headroom... plus whatever bump they get when they move to 5nm. I expect at least a 50% per core performance increase when they refresh their laptop/desktop lines, and probably twice the cores. And they'll also be saving a few hundred dollars per laptop...
Increase frequency by 30% and the A12 will also be hitting 10 watts per core - the end of that graph is going real vertical real fast.
Since Zen 2 and A12/A13 are all on the same TSMC process this shouldn't be that surprising...
> I expect at least a 50% per core performance increase
Based on what evidence? That'd be an unheard-of improvement. TSMC isn't even claiming anything close to that in pure transistor switching frequency for 5nm. They are predicting 15% frequency gain (at the same complexity and power) or a 20% power reduction (at the same frequency and complexity) over their 7nm process.
> They are predicting 15% frequency gain (at the same complexity and power)
My intuition is that 50% might be overoptimistic. But going from iPad to Laptop thermal constraints, you'd expect a big increase in frequency just from clocking the thing higher, no?
> Apple would lose their edge over Intel because most of the efficiency gains come from the lower frequency.
Presumably that would only apply when the cores are actually running at full throttle, though. For casual use there could still be considerable gains if the processors are better at power management (which they most likely are, as they've had to hyper-optimise for this on phones).
AMD is small though. I have no data to back up my gut but anecdotally I feel like they don't have the manufacturing capacity to keep up with Apple's demands right now.
AMD is the provider of APUs/CPUs and Graphics of both PlayStation 5 and Xbox Series X.
AMD represents about 2/9 of processors on Windows and 3/10 on Linux in Steam's monthly hardware survey, and rising; in the same survey Windows represents 95% and macOS 4% of computers. https://store.steampowered.com/hwsurvey/processormfg/
I think they can manage the production to provide for all Apple CPU needs.
I saw speculation elsewhere that this change, along with AWS's addition of Graviton-based (their own ARM processors) instances at much more competitive price points relative to x86, are bound to spearhead the change to "ARM by default."
If your devs are already using ARM, and ARM's notably cheaper in the cloud, that's a compelling case. If you're already using Kubernetes / Docker heavily, you're probably already 80% of the way there. Linuxes that aren't supporting ARM as a "first class citizen" will soon, and undoubtedly that will be a speed bump at worst.
I'm interested to see the specs relative to the x86 Macs, but the only open question to me was whether or not we'd see the x86 emulation layer. Well, we did, and it may not be perfect but it certainly looks like they put a lot of effort into it. If it works as well as it looks, I think this transition is borderline inevitable. I think I've bought my last x86 hardware.
This claim doesn't really hold up. The problem here is the vast majority of non-Apple laptops & desktops that are in use. THOSE will still all be x86 for the foreseeable future as ARM CPUs not made by Apple all have terrible per-core performance. Graviton2 compensates by just throwing 64 cores at the problem, but that's not going to do anything for your Electron-based text editor that struggles to use 2 CPU cores in the first place. Or for a typical webpage, which struggles to use more than a single CPU core.
That's going to matter when a company is spec'ing out workstations to buy, which are unlikely to have an Apple option on the table at all in the first place, and Amazon isn't going to sell you Graviton2 CPUs to put under your desk, either.
This _could_ be the start of a bigger focus on ARM, definitely, but to really make inroads into what devs use you'll need someone other than Apple to step up to the plate. Or for Apple to become vastly larger than they are in the desktop space. Otherwise we'll all just keep cross-compiling like we have been for the last decade of mobile app development.
I can't agree with the characterisation here that only Apple can make decent Arm cores. Graviton is apparently pretty closely based on an Arm Neoverse N1 CPU and the 64 vs 32 core point is comparing a hyperthreaded part vs one that isn't. Plus Graviton seems to be materially more cost effective.
However, there is a real challenge here and that's who has the capability and incentives to make laptop and desktop Arm cores. Microsoft probably, but hard to see many other firms doing so.
So a scenario where Apple gains a material lead in desktop and laptop performance over everyone else and grows market share as a result seems quite credible.
> Graviton is apparently pretty closely based on an Arm Neoverse N1 CPU and the 64 vs 32 core point is comparing a hyperthreaded part vs one that isn't.
How does hyperthreading change the story here? The 32-core CPU is the one that has hyperthreading, while the 64-core one doesn't. Hyperthreading is widely regarded as being worth around +20% performance for multithreading-friendly workloads. Either way, the per-core performance of the 32-core x86 CPU is nearly 2x that of the 64-core ARM one. That's not a good look for being desktop-viable.
Especially when the 32-core x86 cpu also comes in a 64-core variant. And then a 2P 64-core variant even. You can have double the CPU cores that are each 2x faster than the Graviton 2 CPU cores.
Which gets back to only Apple has managed to get ARM to have good per-core performance so far.
> Plus Graviton seems to be materially more cost effective.
The c5a.16xlarge is the same price as the m6g.16xlarge. No cost effective difference in that head-to-head.
Disclosure: I work at AWS building cloud infrastructure
> The c5a.16xlarge is the same price as the m6g.16xlarge. No cost effective difference in that head-to-head.
c6g.16xlarge is more than 10% cheaper than m6g.16xlarge (and c5a.16xlarge). It also provides more EBS and network bandwidth, and provides 64 cores versus 32 cores with SMT.
I actually agree that x86 will dominate the desktop for quite a while yet. I also agree that EPYC has materially better performance than the Graviton - Rome is very impressive.
Just can't agree though that only Apple has the ability to make desktop / server Arm parts that don't have 'terrible' per core performance. The real issue is who has the economic incentive to build competitive desktop parts - I don't see anyone who would see it as worthwhile.
That's the fundamental problem with the Apple monopoly. I would be perfectly happy if I could use a non-Apple laptop with an outdated Apple SoC. However, since only Apple gets access to their SoCs, everyone is worse off.
c5a and m6g instances are the same price, but m6gs have twice as much memory. c6g instances are a better point of comparison for c5a – same vCPU count still, same memory, marginally better network at 8xlarge and up, and about 88% of the price.
> Graviton2 compensates by just throwing 64 cores at the problem, but that's not going to do anything for your Electron-based text editor that struggles to use 2 CPU cores in the first place. Or for a typical webpage, which struggles to use more than a single CPU core.
More cores will help your typical developer who's running 8+ apps at once, along with several browser tabs that are all running in separate processes.
Webapps are kinda like mobile phone apps. Only one tab is open and therefore only one process is actually running latency sensitive code. It's very unusual when a web app is using significant resources in the background since no rendering is taking place. Of course there are exceptions to the rule. One or two powerful cores are often all that's necessary.
Perhaps. But it's not at all uncommon for me to be running Chrome with devtools + Firefox + webpack + Sublime Text + xcode + a second webpack for react-native + an iphone simulator + Android Studio + flipper for debugging + Slack...
This definitely makes my computer run slowish (esp. Android Studio!). Of course I can shut things down and run fewer things at once, but it would definitely provide value to me not to have to.
I'm no expert on ARM vs x86 performance, and AWS's own language is careful to specify it's only significantly more cost-effective for certain workloads.
It'll be interesting to see how fast improvements are made in both Apple's and AWS's processors. That's another factor I see contributing to this: if Apple's pace of processor improvements continues as it has for iPhone and iPad, it'll be tougher year after year for competitors to stick with the status quo.
Apple chips are fast mostly because they have a lot of cache to spare.
Take for example the A12Z. It has 8MB of L2 (not L3, it's L2!) cache. An Intel Core i7-1068NG7 present in the latest Macbook Pros (that performs akin to the A12Z according to Geekbench) has only 2MB of L2 cache.
No other ARM CPU has this level of L2 cache. Apple chips are not "magical", Apple just can afford packing up lots and lots of cache because they are not in the silicon "race to the bottom" like Qualcomm and Intel are, for example. L2 cache is very expensive and Apple is just hacking its way up by packing as much L2 cache as they can.
Don't get me wrong, it's not that Apple is "right" or "wrong" for doing it. They just can, and did. However, it's needless to say that their CPUs are not so different from other ARM ones; they just happen to have a budget and a business model that lets them ignore the price/performance ratio when designing chips in order to achieve maximum performance.
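If you want to see what that extra cache actually buys, the classic experiment is a working-set sweep: chase random pointers through buffers of increasing size and watch the nanoseconds-per-access jump each time the buffer falls out of a cache level. A minimal, self-contained sketch (illustrative only; no warm-up and a coarse clock(), so treat the numbers as ballpark):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Chase a random cycle of n pointers 'steps' times; return ns per step.
       Latency climbs each time n * sizeof(void *) spills out of L1, L2, L3. */
    static double chase(size_t n, size_t steps) {
        void **buf = malloc(n * sizeof *buf);
        size_t *perm = malloc(n * sizeof *perm);
        for (size_t i = 0; i < n; i++) perm[i] = i;
        for (size_t i = n - 1; i > 0; i--) {            /* Fisher-Yates shuffle */
            size_t j = (size_t)rand() % (i + 1);
            size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
        }
        for (size_t i = 0; i < n; i++)                  /* link into one cycle */
            buf[perm[i]] = &buf[perm[(i + 1) % n]];

        void **p = &buf[perm[0]];
        clock_t t0 = clock();
        for (size_t i = 0; i < steps; i++) p = *p;      /* dependent loads */
        clock_t t1 = clock();

        volatile void *sink = p; (void)sink;            /* keep the chase alive */
        free(buf); free(perm);
        return (double)(t1 - t0) / CLOCKS_PER_SEC / (double)steps * 1e9;
    }

    int main(void) {
        for (size_t kb = 16; kb <= 32 * 1024; kb *= 2)
            printf("%6zu KiB: %5.1f ns/access\n", kb,
                   chase(kb * 1024 / sizeof(void *), 10 * 1000 * 1000));
        return 0;
    }

On a chip with 8MB of shared L2 the cheap region simply extends a lot further before you fall off into DRAM latency, which is the whole argument above.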
> Take for example the A12Z. It has 8MB of L2 (not L3, it's L2!)
It's not really that clear cut. You could also argue that the A12Z has 8MB of L3 and 0MB of L2. The L2 in the A12Z is shared while the L2 in the Intel CPU is not.
So it's not "traditional" L2 as you're familiar with it, it's more like an L2.5 or something. Although still accurately called L2 as it is the second level of cache, it's just that Apple went with a rather different cache hierarchy & latency structure than Intel did. But it's really not at all accurate to compare the 8MB of L2 on the A12Z to the 2MB of L2 on Ice Lake. Those are very different caches.
Further, your point is that Apple chips are faster because of easily replicated reasons. Intel is charging hundreds of dollars for their chip -- you should charge them some consulting fees and tell them how easy this is to boost their performance! This isn't even considering that your whole analysis is flawed to begin with and you're comparing apples and oranges.
I think you misunderstood the comment. Intel CPUs are already performing well and it is Apple that is using the same strategy to reach Intel level performance (or even go slightly above it). What you missed is the fact that other SoC vendors like Qualcomm don't follow this strategy and therefore end up with cheaper but also lower performance SoCs. Since Intel doesn't manufacture ARM chips, you are now forced to go with Apple if you want good performance from an ARM chip.
The comment was very literal in comparing the cache on the Apple chip to the Intel chip.
And just to be clear, cache is "expensive" in die size. They aren't putting an order in for L2 cache to Samsung or something.
That Apple chip has a die size less than half that of the Intel chips it outperforms. So the whole "expensive" claim is debunked before it even gets started.
Further we are very explicitly comparing Apple silicon to Intel because that is exactly the transition that's happening here.
The Apple chip doesn't have all big cores, and the A12 has only 6 cores total, while Intel and AMD (per CCD) have 8 big cores with SMT. Apple's multithreaded performance is accordingly slower. Once you account for these two big omissions, you'll likely find Apple takes as much or more die area than AMD's Zen 2 CCD for similar MT performance.
It'll probably be more die area for equivalent performance, which for Apple might not be an issue given its margins. Of all the ARM designs we've seen, cache is by far the unique factor in Apple's design, so comparing die size with equivalent cores+features makes complete sense.
Like others have mentioned, maybe Apple will just focus on implementing new instructions, but at that point, they will likely diverge enough from the ARM ecosystem that developers and users should be somewhat worried.
Amazing how quickly all of the goalposts are moving so people can desperately try to diminish whatever Apple does. Now it's die size? Or, odder still, die percentage.
Firstly, the A12Z has 8 full cores. The "small" cores aren't limited to a subset of instructions or something; they're simply designed for efficiency rather than clock-speed headroom. That is a 120mm2 die, versus 197mm2 for the Ryzen 7 3700X (8 cores).
Oh but wait, the 3700x has no integrated graphics, no video encoder/decoder, no 5TFlop neural network, no secure enclave... It's absolutely huge comparatively, and has a tiny fraction of the features.
This whole die size nonsense really isn't turning out, is it?
The 3700X is of course a faster chip (not in single-threaded work, but when all cores are engaged), but that's with active cooling and a 65W+ TDP, versus about 6W for the A12Z. Oh, and it's even a year newer than the Apple chip, which is only in the picture because it's powering a development kit.
Maybe we can prioritize based upon how many "Zen" codenames exist in the product. There the A12Z clearly falters!
The 3700X compute die (CCD) has 8 cores + 36MB of L2+L3 cache and is just 74mm2; the IO die has PCIe 4, DDR4 and other IO and is 125mm2 on 12nm. For a total of 199mm2.
If you want to compare CPU, graphics, video and NN, then look at the AMD 4800U: its die is 156mm2, it has 4MB more L2+L3 cache than the A12Z (12MB total), a much better GPU plus FP16 for ~4TFlops of NN, and full AVX2+SMT cores. The little A12 cores might be full ISA, but they're 1/3 the die area and lower performance. NEON is half the width of AVX2, and the GPU difference alone would likely push the A12Z past 156mm2. And there are 15W/45W versions of this chip going as low as 10W. The A12Z is likely around 10W+ too in the iPad Pro and the devkit, but I can't find sources on this.
Looking a lot more competitive now isn't it?
The Qualcomm 855 is 73mm2 and the A12 is 83mm2, so the performance gains Apple extracts there are impressive. Beyond that, it's the A12Z at 120mm2 vs the AMD APU at 156mm2, and it's starting to look like a much closer fight, and by no means a perf/watt or perf/$ advantage for Apple until we see real systems.
Die size is _the_ trade off Apple is making with their ARM/RISC+loads of L2 cache design. It's a trade off every chip makes, but it's especially important here with large cache sizes. I don't doubt in a couple of generations Apple can compete with an AMD 4800U CPU+GPU on real world multi-threaded tasks at 10W (assuming 15% increases/gen), but the 4800U is already a few months old now. Apple fanboys never learn. Sigh. Also, Apple fanboys are the new Intel fanboys when stressing single thread performance.
Just to be clear, you (and several others running the same playbook) are attacking Apple's entrant from every possible dimension, cherry picking specific micro-traits from various other systems (even if they aren't SoCs and have a tiny fraction of the functionality -- hey, if you can tease a dumb argument out of it...) and turning that into some sort of Voltron combined creation to claim..."victory"? And people impressed with Apple's progress based upon actual reality are the "fanboys"?
Again about cache. To repeat what has already been said, the A12Z doesn't have an L3 cache; its L2 effectively plays the role of an L3, given that it isn't per-core.
The A12Z has 8MB of this L2+L3 cache. The 855 has 7.8MB of L2+L3 cache. The 4800U has 12MB of L2+L3 cache. The 3700X has 36MB of L2+L3 cache. So tell me again how the A12Z is somehow hacking the system or cheating? This is an outrageously dumb argument that the, I guess, "AMD fanboys" have all fed each other to run around trying to shit on Apple, and it betrays a complete lack of knowledge -- just copy/pasting some bullshit.
Enough about the stupid cache nonsense because it has no basis in reality.
"Also, Apple fanboys are the new Intel fanboys when stressing single thread performance."
It is the single most important facet of a single-user performance system, or we'd all be using shitty MediaTek NNN-core designs.
And, I mean, the A12Z annihilates the 4800U at single thread performance, and equals it at multithread performance...for a little tablet chip, and despite that 4800U having that mega, super, giant hack of die size cache, and despite it boosting that single core to 4Ghz, versus "just" 2.49Ghz for the A12Z.
Oh, and that Apple core has a 5TFlop neural engine aside from the GPU. Separate hardware encoders/decoders (not as a facet of the GPU). Camera controllers. And on and fucking on.
What Apple has done is very impressive, and I imagine on their desktop/laptop chips they'll be a lot less conservative, likely with all "Big" cores. Maybe they'll even put dedicated L2 cache!
sidenote - you talked about the AMD chip being a "couple of months" old. The A12Z we are talking about is over two years old. You understand that we don't know what Apple is going to drop in their actual production designs, and we are talking about the A12Z because they happened to be confident enough to demo their systems on it.
Time for more corrections; I don't keep up with Apple stuff. The 855 has ~5-6MB of L1+L2+L3. The A12X/Z has ~18MB of L1+L2+system cache. That's ~2x the performance and ~3x the cache against the 855, and ~10% worse performance than the 4800U, where AMD has ~30% less cache at 12.5MB (L1+L2+L3). The 6-core A13 has 28MB of L2+system cache and is maybe 10% faster on single thread than the 15W 4800U with just 12.5MB(!) of cache.
You want to compare desktop systems with a mobile chip, but get blown out completely on multi-thread performance; then, when comparing against a laptop chip and people point out the cache amounts, it's "but look at the single-thread performance." Who is the fanboy here? Apple can spend the money on die size/cache if it wants for single-thread performance, but the rest of us care about a complete multi-core CPU+GPU system. More cache means somewhat lower clocks and power use too, big surprise.
AMD 4800U FP16 4TFlop is 8TFlop for FP8 which is what Apple has, so enough of that. The 8 AVX2 units in the 8 core 4800U will do another ~1TFlop of FP32 if needed in 15W. The A13's AMX seems to have about 1TFlop more of FP8, which is like dual core AVX2 and not 8 cores of AVX2.
Audio/Camera and Video decoders/encoders all do the same stuff anywhere and are basically a commodity for any number of standards, so enough of that too.
Just to be clear, you and other Apple fanboys just can't handle that what Apple currently has in CPU is in no real way better than a 4800U. Single-thread performance (with loads of cache!) is important for JS in the web browser, but by now even most AAA games do better with more cores, and most real-world tasks also do better with more cores. I'm just comparing reality, and you and other fanboys are the ones that aren't.
The 4800U is being generous for multicore CPU+GPU; the A12Z is about equal to the 12-watt, 4-core/4-thread Ryzen 2300U in multi-threaded+GPU tasks. That's a 2-year-old, cheaper processor, and Apple is currently selling the same performance in a $1000+ iPad; I guess this is only possible because of fanboys. Even this is impressive to me given it's an ARM processor with an in-house GPU and Apple has been making chips for all of a decade now, but I lost all respect for people touting single-threaded performance (with loads of cache!) 15 years ago when consumer dual cores first came out. The 2300U will run Shadow of the Tomb Raider at ~30FPS for reference.
The A12X/Z has 256KB of L1 cache per core, 6MB of L2/3 cache shared by all cores.
(256 KB * 8) + 6,144 KB ≈ 8.2 MB of L1+L2 cache.
It has no L3 cache. I don't know where you invented this so-called "system" cache, but are we now ridiculously adding GPU core caches or something absurd? Knowing this argument, probably.
The 855 has 512KB of L1 cache, 1,768KB of L2 cache, 5,120KB of L3 cache.
512 KB + 1,768 KB + 5,120 KB = 7,400 KB ≈ 7.4 MB of cache
You seem to be pulling numbers out of your ass, so refuting the rest of the bullshit you're inventing is a rather futile exercise. But keep on talking about "fanboys". LOL. You came straight from some sad AMD-rationalization website.
> Current A12z chips are highly performant; Apple is
> roughly one chip cycle ahead on perfomance/watt from
> any other manufacturer.
We haven't been able to compare them. Micro-benchmarks do not count because mobile versions of Apple chips haven't been designed for desktop requirements. People love comparing CPU cores with micro-benchmarks, but the hardest thing for a modern desktop/server chip is to feed data to many cores while maintaining cache coherence.
> So, I’m predicting an MBP 13 - 16 range with an
> extra three hours of battery life+, and 20-30% faster.
Before agreeing with your estimates, I want to play with a true 8-core Apple CPU with a large multi-level cache first. Building the Linux kernel with -j16 will be a fun exercise. Look, AMD is not stupid, they're on the same node Apple will be using, and they're not 30% faster than even a 5-year-old Skylake.
> Apple has to pay Intel and AMD profit margins for
> their mac systems. They are going to be able to put
> this margin back into a combination of profit and
> tech budget as they choose.
I wonder how their conversations with TSMC go. With Intel, at least they had AMD to use as a bargaining chip. With TSMC there's no alternative.
> One interesting question I think is outstanding -
> from parsing the video carefully, it seems to me
> that devs are going to want ARM linux virtualized
> vs AMD64.
That's the big one. The world's software is built for and runs in data centers, not laptops. Our machines are increasingly becoming nothing but thin clients, remote displays that happen to run Javascript. CPUs do not matter. And I suspect that's the real reason they're switching.
But from the developer's perspective, it's incredibly convenient to use the same platform (OS + instruction set) as the machine they're targeting, even for interpreted languages. Linus Torvalds wrote a well-articulated email about this a while ago; IIRC he was commenting on POWER, but I think his points are valid. At my company, devs keep struggling with Docker on a Mac. Add to that the ARM pain, and I wonder how many will finally get a Thinkpad. Developers will switch to ARM when the majority of AWS instance types go ARM.
P.S. I love how "old-tech" HN is, but for the love of god, give us a decent way to "reply with quote".
> P.S. I love how "old-tech" HN is, but for the love of god, give us a decent way to "reply with quote".
You have to copy and paste, but that's not too hard, even on mobile.
The main thing is to not use code formatting, and not break up a quoted sentence or paragraph into multiple lines.
Instead, do it the way I quoted your comment above, like this:
> *Entire quoted paragraph.*
That will render nicely on all devices regardless of the length of the paragraph. If you quote multiple paragraphs, add a blank line between each paragraph so they don't run together.
Wasn't Linus' email implying that if you were to run something like Docker natively on ARM, the images you build would be ARM-specific? You're not going to spend time and effort running the build on a separate x86 machine just to deploy on x86 servers; you'll just deploy your Docker images straight to an ARM server.
They're both making 7nm chips at the same TSMC fabs. Essentially, Apple will not have a "process advantage" over AMD. They may release their desktop chips on the latest 5nm TSMC process to get that "wow" product and make a good initial impression, but AMD will be right behind them with the equivalent desktop x64 chips. The current rumor is that late 2021 or early 2022 is when AMD will have a "Zen 4" on 5nm, which will almost certainly blow the pants off everything else on the market at the time.
> The current rumor is that late 2021 or early 2022 is when AMD will have a "Zen 4" on 5nm, which will almost certainly blow the pants off everything else on the market at the time.
You mean in terms of max performance, perf per watt, or... ?
Apple has an absolutely top-shelf team designing chips.
By hand (Qualifier: Not sure if they still do, but they did, while everyone else was using automation).
They also have a great deal of experience in repaving the highway while traffic is running at capacity. They mentioned it in their keynote. They've done it three times. I have been there, for each of those times.
I was also there for the one time they completely pooched it (Can anyone say "Copland"? Drop and give me twenty!).
It will be moderately painful. Not too bad. Quite manageable, and it will take at least a couple of years to transition.
I am in no hurry for one of those dev kits, though. They will be quite rough, and I have no compelling reason to use them.
I am looking forward to an entirely new Xcode. The current one is getting crashier every day. I'd also like to have one that can run on my iPad.
Not using automation isn’t “badass”, it’s a sign of a deeply screwed up engineering culture. It’s on par with forcing software developers to program exclusively in machine code.
Luckily, Apple uses the standard EDA tools pretty extensively so I don’t really think this applies to them. I also agree that Apple hardware engineers are generally extremely good.
Toe-May-Toe, Toe-Mah-Toe. Some site that does tear downs (not iFixit) tore open one of their chips, once, and defecated masonry. They said the chip design was obviously hand-designed, and stood head and shoulders above other ARM architectures.
They have done OK.
Sorry if I offended you. None was meant. I’ll edit that out.
The article likely meant they did a custom implementation of the architecture, not that they didn’t use automation during the design process. At least that’s what I’d assume without reading it (I’d be interested to read if you have link). It’s basically the difference between optimizing your application by rewriting the performance critical parts (good idea) and never using a compiler (bad idea).
Also, I don’t think what you wrote is offensive in any way - no need to edit unless you feel compelled to.
"So this is the first Apple core we’ve seen done with custom digital layout. In fact, with the exception of Intel CPUs, it’s one of the first custom laid out digital cores we’ve seen in years! This must have taken a large team of layout engineers quite a long time. The obvious question is, why? This is a more expensive and time-consuming method of layout. However it usually results in a faster maximum clock rate, and sometimes results in higher density. Certainly one possibility is that Apple could not meet timing on a automatically laid out block, and chose to go with a custom laid out block. Was this a decision at the architecture stage, or did timing fail late in the design cycle and a SWAT team of layout engineers brought in to save the day? We’ll probably never know, but it is fascinating, and also 2X faster (according to the below image)"
So layout; not design. My brother is the one that does this kind of thing; not me.
Hand layout is certainly impressive but also a very far cry from no automation. I'd also like to point out this was probably the result of a design problem that had to be fixed through sheer brute force rather than something to aspire to. The quoted part of the article sort of says as much, albeit in an oblique way.
Why do you feel that Rust is behind on ARM? I can't comment on the performance, but everything I used that was purely written in Rust compiled and ran perfectly on my PineBook Pro (with the exception of alacritty, but that's because the PBP doesn't support OpenGL 3.x).
Go does have the (platform) advantage (?) of preferring the "rewrite everything in Go" approach, so those projects just transition when the tooling supports a new architecture. Rust is intentionally going with an interop design, rather than telling people the only answer is to rewrite all their favorite libraries.
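To make the interop point concrete, here's a minimal sketch of my own (not from the Rust docs) of the bind-don't-rewrite approach; the declaration itself is architecture-neutral, and `cos` is just the standard C math-library function, so the same source builds for x86_64 or aarch64 targets as long as a C library exists for that target:

    // Minimal sketch of Rust's C interop: bind an existing C function instead of
    // rewriting the library in Rust. Nothing here is architecture-specific; only
    // the C library has to be available for the target (x86_64, aarch64, ...).
    extern "C" {
        fn cos(x: f64) -> f64; // from the C math library (libm)
    }

    fn main() {
        let y = unsafe { cos(0.0) };
        println!("cos(0.0) = {}", y); // prints 1
    }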
Rust and ARM is just fine. For example, Cloudflare famously keeps their entire stack cross-compilable to ARM, and even ships Rust on iPhones.
There are two areas where I believe you could call it second-class at the moment, though:
1. There are no ARM targets in Rust's "Tier 1" platform support.
2. std::arch doesn't have ARM intrinsics in stable.
For 1, ARM targets are in a weird space; they aren't Tier 1, but they're closer than most of the other Tier 2 targets. Several ARM targets are Tier 1 for Firefox, for example, so they get a bunch of work done there.
For 2, well, there hasn't been as much demand before. I expect that to change because of this.
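To illustrate point 2 with a sketch (my example; at the time of this thread the aarch64 intrinsics below still sat behind nightly/unstable feature gates rather than stable std::arch), you typically gate the intrinsic path on the target architecture and keep a portable fallback:

    // Sketch: use NEON intrinsics on aarch64, plain scalar code everywhere else.
    #[cfg(target_arch = "aarch64")]
    fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
        use std::arch::aarch64::*; // unstable when this thread was written
        unsafe {
            let va = vld1q_f32(a.as_ptr());  // load 4 lanes
            let vb = vld1q_f32(b.as_ptr());
            let vc = vaddq_f32(va, vb);      // one NEON add for all 4 lanes
            let mut out = [0.0f32; 4];
            vst1q_f32(out.as_mut_ptr(), vc); // store the result back
            out
        }
    }

    #[cfg(not(target_arch = "aarch64"))]
    fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
        [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]]
    }

    fn main() {
        println!("{:?}", add4([1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]));
    }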
I’m not sure why. As I said, they’ve been using Rust on ARM (as part of Firefox) for a long time; I’m not aware of them being unsatisfied with the current state.
I expect that the Rust project will end up benefiting from the extra interest though, and improving on the things above.
There has been very good progress on filling in the gaps to officially get Rust's AArch64 Linux toolchain triple to Tier-1 this year. We have CI for the Rust compiler test suites on native AArch64 silicon in a joint collaboration between Arm and the Rust lang core team and are converging on zero compiler test failures. Overall, we are pretty close to attaining Tier-1!
We haven't been very vocal about it just yet, but all the bits to enable CI etc. and the new t/compiler-arm Zulip stream are happening pretty much in the open.
I'll be pinging all the relevant folks soon-ish (we're pushing out fixes to the last remaining compiler test suite failures this week).
I've been working on an OS in Rust that has an aarch64 port, and I've for sure seen some... questionable output. It's all been valid code for the input, but not nearly as optimized as I've come to expect out of LLVM based compilers. I'm sure there's some low hanging fruit that needs attention is all.
I think most of the gripes described are really issues that come from moving from a macOS desktop to a Linux desktop; if you were moving from an x86 Linux desktop to the Pi the experience would have been much less painful.
I had an ARM Chromebook for a while (around 2016) that I customized with a 256GB SD card and Linux Mint. The software all worked well, but the WiFi card died after a year and effectively bricked the damn thing. Cross-compiling might be an issue, but that's not a primary use case.
You mean all the Android handsets? The history of Linux on ARM is colorful, full of corporate missteps and giant brands (namely HTC and to a lesser extent Samsung) that were created from that.
> I’m not highly conversant with ARM linux, but in my mind I imagine it’s still largely a second class citizen
In terms of distros maybe yes. Most distros are targeted at laptops, desktops or servers, and few of those have ARM processors.
In terms of architectural support by the kernel and low-level infra, I see no reason for that to be true at all. Open source kernels and (at least lower-level) userspace have for decades paid more attention to compatibility with various hardware architectures than proprietary operating systems.
Of course you'll have fewer drivers for hardware associated with a particular architecture if there's less interest for the hardware, or if the hardware is less available in form factors that most developers are interested in. But that applies at least equally to non-open source platforms. If MS or Apple don't have a commercial interest in maintaining support for a particular platform (and they usually have only one or two in mind), nobody's going to do it.
> Most distros are targeted at laptops, desktops or servers, and few of those have ARM processors.
I wouldn't bet my farm on that without a bunch of research and cross-checking, because embedded Linux is really common. Consider that manufacturers are producing very large numbers of embedded ARM micros with external memory interfaces. I think the majority of those are running Linux.
That's a fair point. I was referring to "most" distros in terms of the plain number of distros with reasonable mainstream visibility (and therefore possibly the majority focus from mainstream userspace developers), as a kind of a generous argument. The number of embedded Linux deployments is undoubtedly huge.
I am predicting the opposite. Apple isn't doing this for the Mac's performance or battery life alone.
They are going after expanding its market share.
There are close to 1 billion iPhone users, and most of them have never used a Mac. Many of them will need a second device for some tasks, and that will be either an iPad or a Mac. Out of the 1.5B total PC market, Apple has 100M Mac users. I'd say it's not too far-fetched that Apple wants to double the number of Mac users to 200M.
For every $100 going to Intel, Apple could knock $200 off its retail price while keeping the same margin.
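For what it's worth, that arithmetic only works if Apple holds its gross margin percentage constant; a rough sketch of that assumption (the ~50% figure is mine, not a published number):

    // At a constant percentage margin, price = cost / (1 - margin), so every
    // dollar of component cost removed allows a 1 / (1 - margin) dollar price cut.
    fn main() {
        let margin = 0.5_f64;    // assumed ~50% gross margin
        let saving = 100.0_f64;  // dollars no longer going to Intel
        let price_cut = saving / (1.0 - margin);
        println!("possible price cut at constant margin: ${}", price_cut); // $200
    }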
A $799 MacBook (the same starting price as the iPad Pro) will be disruptive; the premium is now small enough over the ~$500 PC notebook price.
In the longer term I think Apple is trying to reach 2 billion active devices, and it certainly can't do that with the iPhone alone. There is plenty of market space for the Mac to disrupt.
I haven't felt particularly constrained by CPU in a long time. My main issues have been with RAM (thankfully the new MacBooks finally started supporting 32GB of RAM), and the GPU, which has been miserable ever since Apple got into a fight with NVIDIA. It's not just that Apple doesn't use NVIDIA; it's that they won't allow NVIDIA to ship their own drivers for it.
I just want to plug in an eGPU with an RTX 2080 card. Instead, you have this incredibly limited set of officially supported cards that are also hyper expensive. Black Magic stopped making their eGPU Pro, so even if money is no object you can't get a great laptop GPU extension that supports their XDR displays.
Now, you might be saying: if you want a great GPU, why are you buying a laptop? Well, 1) even if I were to get the one model of Mac that allows me to do something interesting with GPUs (the Mac Pro), I still can't install the NVIDIA cards I want. And 2) laptop + great eGPU is a setup that's well supported in the non-Mac space, so it is not a bizarre request.
All of this to say: the ARM stuff is fine, but it won't really move the needle for me, and doesn't address any of my performance issues, and I would argue a lot of the performance issues a lot of people actually have (especially graphics artists).
Apple doesn’t control their machine learning stack. The models they ship are likely created and trained on PCs running Linux and NVIDIA GPUs. It’s entirely possible they’ll extend the Neural Engine to be useful for training but they’d still need to contribute or convince others to contribute to the existing tooling.
> those complaints are 100% down to being saddled with Intel.
And yet the new Macbook pro base models feature an 8th Gen Core i5. That is 2 generations behind the bleeding edge. I think some people might be experiencing the speed problems you described because their machine has an old gen processor in a shiny new box
Not to mention that Apple setting a standard of throttling at 100c and not giving the machines adequate cooling affects performance in a non-trivial fashion.
> One interesting question I think is outstanding - from parsing the video carefully, it seems to me that devs are going to want ARM linux virtualized, vs AMD64. I’m not highly conversant with ARM linux, but in my mind I imagine it’s still largely a second class citizen — I wonder if systems developers will get on board, deal with slower / higher battery draw intel virtualization, or move on from Apple.
It's in fairly good shape, and has an active community. With the current proliferation of IoT devices, both the kernel and userland are well-maintained, and plenty of distros are available. The kernel also benefits from much of the work done for Android and Chromebooks as well.
All of the usual FOSS software is ported and runs well. As of this moment, you could easily take your pick from any of Debian, Ubuntu, Fedora, Arch, Manjaro, Slack, or Alpine just for starters, plus a whole mess of specialty distros. Many of those offer both 32- and 64-bit ARM versions.
Also bear in mind that recent ARM CPUs do support hardware-assisted virtualization as well. KVM and Xen are both available for ARM today, and I'd be shocked if Apple's Hypervisor.framework doesn't roll out with ARM support in the new macOS version as well.
(I'm writing this from Firefox in Manjaro ARM on a PineBook Pro, that I use as my daily driver)
I agree with you, but I think we are going to see at least one A12Z consumer product (other than the iPad): probably a re-released MacBook with a better keyboard.
> One interesting question I think is outstanding - from parsing the video carefully, it seems to me that devs are going to want ARM linux virtualized, vs AMD64. I’m not highly conversant with ARM linux, but in my mind I imagine it’s still largely a second class citizen — I wonder if systems developers will get on board, deal with slower / higher battery draw intel virtualization, or move on from Apple.
Somewhat ironically, I think it's mostly the languages trying to be safer alternatives to C that are most behind on supporting ARM.
I've done a little bit of Lisp development on my Raspberry Pi (with SBCL and Emacs/Slime), and in most cases I don't have to change anything moving between my AMD64/Linux desktop, Intel/OSX MBP, and ARM64/Linux Raspberry Pi. And that's even when using CFFI bindings to C libraries.
I'm not sure SBCL's ARM backend is at the same level as the x86 backends, but it works well, and there's ongoing work on it.
I'm excited. My #1 wish is a 16" MacBook Pro that weighs 3 lbs or less. Take an iPad Pro, make it 16" instead of 12, add a keyboard, run macOS. LG already makes 15.6" Intel notebooks that weigh 3 lbs. Apple can do it too!
> One interesting question I think is outstanding - from parsing the video carefully, it seems to me that devs are going to want ARM linux virtualized, vs AMD64.
Hahaha, look at the user-agent at 1:44:26 :) They used an old Intel Mac for the virtualization demo.
The main thing Apple has done to improve their A-series chips has been massive L2 caches.
I still see major advantages in putting an A-series chip into a MacBook Pro.
1) There will be a much larger thermal and power-draw envelope available to a new A-series chip. I suspect we will see insane “boost” clock speeds.
2) Incredible “at idle” efficiency, well beyond what x86 can provide with on-die GPU cores, which means a bit better battery life for that screen.
3) More opportunity for tightly integrated on-die accelerators for codecs, ML, and other hardware acceleration methods exposed through Apple-only software libraries.
> Rust seems behind on ARM, for instance; I bet that will change in the next year or two. I don’t imagine that developing Intel server binaries on an ARM laptop with Rust will be pleasant.
Agree with you on some points; I’m really excited to see what’s next. I’m also betting on a new, and faster, MacBook, maybe with a discounted price to incentivize the migration?
About the virtualization, they will probably make it more efficient, resource wise? Some cloud providers are also offering ARM so...
Anxious to check the GPU performance too!
To add: Control Center on macOS and some other UI improvements hint at a Mac w/ touchscreen?
Could we finally see a true BYOD, like Dex or using the improvements in Handoff?
I'm not sure virtualising ARM on Intel platforms will ever be performant enough to be usable. They will probably have to ship an emulator, and even then there will be issues as it'll be very difficult to emulate the strictness of ARM CPUs on non-ARM architectures, for things like unaligned memory accesses and replicating the memory model.
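On the unaligned-access point in particular: x86 hardware quietly tolerates misaligned loads, while a strict ARM configuration (or an emulator faithfully reproducing ARM behaviour) may fault, which is why portable code has to spell the intent out. A small sketch of my own:

    // Read a u32 from an arbitrary byte offset. Dereferencing the pointer
    // directly would be undefined behaviour when it isn't 4-byte aligned;
    // read_unaligned makes the possibly-misaligned load explicit and safe on
    // every architecture.
    fn read_u32_at(buf: &[u8], offset: usize) -> u32 {
        let bytes = &buf[offset..offset + 4];
        unsafe { std::ptr::read_unaligned(bytes.as_ptr() as *const u32) }
    }

    fn main() {
        let buf = [0u8, 1, 2, 3, 4, 5, 6, 7];
        // Reads bytes 1..5; value is 0x04030201 on little-endian targets.
        println!("{:#010x}", read_u32_at(&buf, 1));
    }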
Won't this move result in more software compatibility issues on the developer side though? Like, why would you buy the upgrade if developers don't want to move to the new platform, or decide it's too big of a change now?
You've missed off possibly the biggest advantage of ditching Intel: a perfectly usable machine without jet-engine fans and a scalded lap. I don't use Macs, but I see this as good for the industry.
I think MacBooks can run heavy ML processes, but why not run those on separate devices with specific hardware for that? I'm thinking any kind of job you'd want to run on the GPU.
> Apple has a functional corporate culture that ships; adding complete control of the hardware stack in is going to make for better products, full stop.
It's easy to redesign iPhone models every year; it's not as easy to increase chip performance every year. There's a lot of R&D involved. I think in the long run it'll be better, but Apple will have to devote more resources to it. You can't just wish for specs; the manufacturers actually get the hard job of trying to make it.
It has a lot of historical baggage, most notably implicit interdependencies that make it hard/impossible to optimize and reorder stuff. We're entering an era of many-core processors, and IMHO we'll absolutely have to move to a RISC architecture, and the sooner we get that box checked, the better.
“Apple’s own pro apps will be updated to support the company’s new silicon in macOS Big Sur, and the company is hoping developers will update their apps. “The vast majority of developers can get their apps up and running in a matter of days,” claims Craig Federighi, Apple’s senior vice president of software engineering. []...
Microsoft is working on Office updates for the new Mac silicon, and Word and Excel are already running natively on the new Mac processors, with PowerPoint even using Apple’s Metal tech for rendering. Apple has also been working with Adobe to get these pro apps up and running on these new chips.“
So the bottom line is: “your previous tools won’t work, will have to be rewritten, the burden is on the developers so we can rake in more cash”
Great, customer focused, and completely altruistic move back in the days when they killed Nvidia cards on high-performance rendering and simulation machines (and everywhere else).
So, Apple has performance libraries that are better than what Intel has to offer? So, cross-platform applications are now again “passe”?
I don’t use my iPhone for work. Why would I want iOS apps on my computer? So I can install Apple Mail instead of Outlook?
You are aware that you will (most likely) lose all those pretty amazing optimisations that you tend to rely on if you develop software that is a bit more “sophisticated” (e.g., parallel programming leveraging IPP etc.). Can’t wait to see how fast that translation layer will be for my FFTs that have been super optimised for Intel chips.
I wonder: when I observed a 20+% performance loss for highly compute-intensive scientific tasks on Intel vs. AMD because I relied on IPP, how much performance loss will we see going from x86 to ARM?
For your average Text application you may not care about performance loss. But I bet you, that for video, image editing, science etc. you are easily 20+% worse off than before.
So what exactly is the benefit for the customer or the developer community?
I mostly worry about software that will never receive an update. I'm sure the tools you listed will be ported over to ARM simply because it means vendors can sell them again to the same customer.
Apple showed many pro apps ported to ARM, notably their own Logic Pro, plus Photoshop, and they were showcasing Maya on ARM. That is about as pro as it gets for the Mac.
That reads to me like Apple isn't going to keep Intel around for some high-end Pro machine. They intend to go all in with ARM, i.e. there will be a Mac Pro with a high-TDP ARM chip. I wonder what the owners of a Mac Pro are feeling now, having just spent $5K+ on an Intel Mac Pro.
Question is,
1. Are they going to design their own CPUs for the whole Mac range? Up to 10W for the MacBook, up to 45W for the MacBook Pro, ~150W for the iMac, ~250W for the Mac Pro? How is that financially feasible considering the volume of Mac Pros sold? Or do they intend to use those high-TDP chips in their server farms / iCloud?
2. What happens to the GPU? Will they have their own GPUs for the iMac and Mac Pro as well? Dual-GPU options, with the Apple GPU for power efficiency? This feels like additional complexity.
3. Would it be like the PowerPC era where you will get a new iMac once you finish with the development kit?
Finally while I am excited for ARM Mac, at the same time I am also feeling a little sad. Good bye x86.
> What happens to the GPU? Will they have their own GPUs for the iMac and Mac Pro as well? Dual-GPU options, with the Apple GPU for power efficiency? This feels like additional complexity.
I think GPU scaling will be much harder than CPU, so whereas Apple can surpass Intel CPUs for all but the highest segments, putting together a standalone GPU will be hard and very interesting to see. For an entry-level GPU? No issues. But what about a midrange (AMD RX 5700XT or Nvidia 2070S)? And not to mention the top-tier Nvidia 2080ti.
The other unspoken risk is that while Apple may be vertically integrating its SOC, it still relies on a fab like TSMC. Intel's recent problem is rooted in their inability to move off legacy 14nm fabrication process. TSMC may have done great in 7nm and now to 5nm transition, but what happens if/when they stumble? Would Apple also want to acquire them or build its own fabs to mitigate this risk?
> The other unspoken risk is that while Apple may be vertically integrating its SOC, it still relies on a fab like TSMC. Intel's recent problem is rooted in their inability to move off legacy 14nm fabrication process. TSMC may have done great in 7nm and now to 5nm transition, but what happens if/when they stumble? Would Apple also want to acquire them or build its own fabs to mitigate this risk?
Surely this is an advantage to being fabless? If TSMC stumble, they can evaluate other options. Same for AMD, where would they be now if they were still tied to GlobalFoundries?
Valid point as a GF-tied AMD would not be in the same position as today.
That said, what are the other options if not TSMC? Besides Intel, Samsung is only other cutting-edge option. Intel's 7nm would be technically on par with TSMC's 5nm (marketing names aside). https://en.wikichip.org/wiki/7_nm_lithography_process
There is a chance, however unlikely, that TSMC's 3nm push will run into issues and be delayed. Would create an interesting scenario where Apple would pay Intel to fab their SOCs.
There were rumours going around about its demise a few years ago, but a fair bit of that was simply their failure to ship 10nm parts on schedule, AFAIK. They're still doing some degree of third-party manufacturing, and I don't doubt that once they reach the point of having capacity for their first-party products on 10nm we might see them expand.
However, the inevitable flip-side of this is unlike TSMC/SS where Apple can bid the highest for the early production of a new node, Intel are highly likely to keep the new node for themselves to start with.
Intel currently doesn't have enough capacity to make their own chips and are rumored to be outsourcing to Global Foundries as a result ( https://wccftech.com/rumor-intel-moving-select-cpus-to-globa... - huge grain of salt on this one ofc, but the supply constraints on Intel's fabs are well known - they mentioned it even at their earnings call) - why would they stop making their own products to make Apple's instead? Apple'd have to pay an absurd amount for that to make sense.
I will tell you one thing for sure: it's impossible for Apple to acquire TSMC. TSMC has a lot of customers other than just Apple. I think it's logical for Apple to come up with their own fab, but honestly that is incredibly hard. Maybe in 10 years, I would say.
We only have mobile SOCs as a reference point so far, but Apple is doing very well on that metric.
>On the GPU side of things, Apple has also been hitting it out of the park; the last two GPU generations have brought tremendous efficiency upgrades which also allow for larger performance gains. I really had not expected Apple to make as large strides with the A13’s GPU this year, and the efficiency improvements really surprised me. The differences to Qualcomm’s Adreno architecture are now so big that even the newest Snapdragon 865 peak performance isn’t able to match Apple’s sustained performance figures. It’s no longer that Apple just leads in CPU, they are now also massively leading in GPU.
But landing troops (they can't march, it's an island, and the difference between amphibious and land based operations matters a lot) would be extremely difficult, and not obviously in the PLA's favor. See, for example:
The larger point is that it'd be way too messy for them to even try. Basically on the scale of US/SK invading NK.
Besides the significant casualties and economic damage to both sides (of course they'd eventually win), it'd be a geopolitical disaster that wouldn't end with the quelling of the armed forces.
I’d imagine China would more likely begin blockading Taiwan, at least until the US and the west responded. Given the strength of US naval power it’ll be a while before China consider even that.
If China merely wanted to destroy Taiwan's civilian society, they could do it. But taking out the defenses and making an amphibious landing across a large and dangerous sea is extremely hard. Perhaps possible only with cyber-warfare to disable the defenses.
My understanding is that the actual machinery and materials necessary to make and run fabs are made in the US, Japan and Europe, not in China or Taiwan. For example, photolithography machines are made by the likes of ASML (Netherlands), Nikon (Japan) and Canon (Japan).
Although it would assuredly take some time to ramp up, TSMC should be able to spawn fabs outside of Taiwan, out of CCP reach. They are already building one in the US, albeit with a small output.
Their whole supply chain is going to grind to a halt in that scenario even if TSMC's fabs were somewhere else. China would at minimum get sanctioned and there'd be component and raw material shortages for a while.
The United Nations would issue a strongly-worded letter. Markets would fluctuate for a week. Then everyone outside of Taiwan will pretend that nothing happened.
I heard from random sources that their GPUs are actually (relatively?) very powerful. Better sources/experience appreciated.
"Apple claims the GPU in the iPad Pro is equivalent to an Xbox One S, although how they came to thise conclusion is difficult to say since we know so little about the underpinnings of the GPU." [1]
Game console GPUs are mid-range at best. They compensate with a huge amount of hardware-specific optimization, since game developers only have to target a few models of hardware over a 5-year lifecycle, and devkits become available to game engine developers almost 2 years in advance.
Apple's Mac sales are much smaller than console volumes, Macs change generations more quickly, and the amount of optimization in GPU-intensive apps is nowhere close to what consoles get.
So they might be able to compete with Intel iGPU, but that's nowhere close to AMD or Nvidia offerings.
That's less true than it once was. Console APIs abstract way more of the system than they once did, so that console manufacturers can do perf refreshes halfway through the cycle, and the whole point of Vulkan et al. is to bring console-like programming techniques to full computers, since the GPU's MMU means all you'll do is crash your own process anyway. That all adds up to consoles needing pretty nice hardware to keep up.
I don't see how this helps Apple compete with AMD or Nvidia GPUs. It's much easier for Apple to just use AMD GPUs for their Pro devices; the Mac market is just too small to justify custom high-end GPUs.
Apple might have huge leverage on mobile with the iPhone because Android graphics drivers are a horrible mess, but on Windows GPU drivers are decent, and building a custom driver stack sounds like too much.
I guess because one can't just reuse the hardware. They can't just take the iPhone GPU, make it 5 times bigger, and get 5x more performance with 5x the power budget. So more likely they're just going to sell laptops with GPU performance closer to the iPhone's, which is not at all impressive.
GPU is actually relatively easier to scale by adding more cores/ALUs. The difficulty lies in figuring out how to handle peak current requirement of the whole IP.
Still a current-gen console grade GPU in a form factor like the iPad is pretty impressive. But then again maybe it wouldn't be able to sustain console-like performance due to thermal constraints
Agreed, plus benchmarks and measures like TFLOPS are always highly subjective.
I'd be curious whether Apple's non-mobile SoC roadmap emphasizes CPU development while still allowing for eGPU setups or even some built-in integration (e.g. a Mac Pro with an Apple CPU and AMD GPU) initially. Maybe in 5-10 years they'll shift focus to the GPU front and bring out their own dedicated GPU.
I don’t see how an integrated GPU can compete with the top-of-the-line chips from Nvidia and AMD. The discrete GPUs (Radeon Vega) in the Mac Pros have 13+ billion transistors and 1 TB/s of memory bandwidth with specialized memory.
The GPU in the Xbox One S was far from top of the line, it's pretty mediocre. A middle of the road GPU from that time frame was easily 200% faster than it.
That being said, frankly I'm amazed at what most modern mobile GPUs are capable of and for most people who are casual gamers, that level of performance will be more than enough. What Apple will bring to the table will certainly be better than that, it's already better than Intel GPUs and they can still support 3rd party GPUs if necessary.
The demo they showed of Shadow of the Tomb Raider running at 1080p looked great for a game, AND it also looked awful if you compared it to the PC version on Ultra settings.
Apple kept mentioning pro apps like Maya where you need high-end GPUs. Gaming isn’t a big use case on a Mac. Our video production team uses high-end Macs with discrete GPUs. If they don’t plan to offer solutions comparable to the current line of desktop machines, we’ll be forced to switch platforms. I don’t see how an integrated GPU can compete with a discrete GPU with specialized dedicated RAM and a memory bus that’s 10x that of a CPU using DDR4X.
I mean, it looked awful compared to what I run on my PC, but considering as well that it was running using Rosetta I thought it was genuinely quite impressive.
Yeah, did you notice they also ran it at 1080p and it really didn't look all that great?
I was actually a bit surprised they would break that kind of a demo out at WWDC -- I've seen that game not running in emulation and it was beautiful. That demo wasn't.
Yeah, I would agree with that. It definitely demonstrated that gaming is possible even under Rosetta, which is an accomplishment. Watching some other stuff from them this morning, it seems like they're passing the metal calls directly through to the GPU (which makes sense), even while translating CPU calls from x86 to ARM.
It seems (it wasn't really that clear?) that Maya was running under emulation (as in, an x64 binary)? I don't think Maya's viewport on macOS actually runs on Metal (it's still OpenGL), so I doubt it's a native port.
Did it do any CPU-intensive stuff (skinning, deformation), or was it just GPU-intensive viewing?
High-end VFX will be interesting here for Apple (Maya, Houdini, Nuke) - there was already quite a lot of anger at OpenGL being deprecated and Vulkan not being officially supported. Another instruction set in the mix for highly-optimised apps (lots of SIMD code) is going to be quite annoying, especially for the CPU renderers (Arnold, RenderMan, etc)...
As Apple's own GPUs do not run full OpenGL, does this in turn mean they didn't only create a x86 to ARM translation layer but also a full OpenGL implementation running on top of Metal? Similar to other projects implementing OpenGL on top of Vulkan? Or did they actually invest the time to implement OpenGL directly in their graphics drivers?
That seems a bit weird considering OpenGL has been deprecated in macOS already. I would have expected a full removal once the first ARM Macs ship.
Logic isn't worth much without plugins, and I expect many smaller developers not to port to ARM, and there's nobody to fill the gap in the first years. If that is indeed the case, Apple will begin losing market share where it currently reigns. When there's no pro software, the mac will be just an iPad with a keyboard. It's quite a gamble.
(Audio/DSP) plugins are a tricky thing, it is not uncommon for some of them to have assembly or processor specific instructions to squeeze out as much performance as possible. Your 'budget' in this domain is limited to only a few milliseconds ...
Even if it's precompiled and not at runtime, we don't really know what the performance looks like, especially in hand rolled assembly where something that isn't a one-to-one cycle match could have obvious effects.
> The system prevents you from mixing arm64 code and x86_64 code in the same process. Rosetta translation applies to an entire process, including all code modules that the process loads dynamically.
Apple has pushed XPC architecture for plug-ins for several years and has announced that these plug-ins will work in Rosetta for a Native host app. Audio Units will work as well.
The question is if those plugins are modules (shared objects/dynamic libraries), or if they are used via some sort of IPC to an external process. (or even an XPC service)
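If they do run as separate processes, the architecture restriction largely stops mattering, because each process only ever contains one architecture. A deliberately generic sketch of the out-of-process idea (this is not Apple's XPC or Audio Unit API, "./plugin_x86_64" is a hypothetical binary, and real audio plug-ins care far more about latency than a pipe round-trip like this):

    use std::io::{BufRead, BufReader, Write};
    use std::process::{Command, Stdio};

    fn main() -> std::io::Result<()> {
        // The host spawns the plug-in as its own process, so the plug-in binary
        // can be x86_64-under-Rosetta even if the host is native arm64.
        let mut child = Command::new("./plugin_x86_64")
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .spawn()?;

        // Send a request to the plug-in and read one line of response back.
        child.stdin.as_mut().unwrap().write_all(b"process-block 512\n")?;
        let mut reply = String::new();
        BufReader::new(child.stdout.take().unwrap()).read_line(&mut reply)?;
        println!("plugin replied: {}", reply.trim());

        child.wait()?;
        Ok(())
    }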
A lot of people do say that, including a lot of professionals, but in my opinion Logic with its stock plugins is already absolutely great. Some people buy a lot of plugins because they don't know how to use the builtin plugins, and some because they enjoy playing with new stuff more than making music (and yes, some who know what they're doing too).
Some of the built-in stuff is quite good, but there's a whole lot missing. For me, it would be the sample players (Kontakt, Play, SINE, ARIA, and whatever Spitfire's sample player is called), Pianoteq, spatialization, reverbs, some of the outlandish plugins, iZotope's stuff, a PSP equalizer, and a few synths. I know other people use a wider range of plugins, and they depend on them in their workflow. They are quite likely to bail to Cubase on Windows if their plugins don't get ported.
Professional users of Logic (but also Pro Tools) tend to be very conservative with their upgrades. People who use Logic for a living won’t be using this for years to come.
This is absolutely true. Many are still running on old-old Mac Pros with Snow Leopard. Upgrades are done when hardware can't keep up anymore, but they also tend to use big DSP rigs from Avid and UA that remove the CPU bottleneck for major tasks.
> Logic isn't worth much without plugins, and I expect many smaller developers not to port to ARM
I don't think they can afford not to - Mac users make up a significant share of their target audience.
Furthermore, there's always the emulation option, although it remains to be seen how performant that would be in such a demanding, time-critical application as AU/VST instruments and effects.
>I wonder what the owners of a Mac Pro are feeling now, having just spent $5K+ on an Intel Mac Pro.
Not much. If you bought that machine you didn't care much about price/performance: you bought older hardware at a premium price point. You bought it to use right now without fussing, and have accepted its obsolescence in 2-4 years, as is quite normal for studios.
If you bought it as an IT enthusiast, well... why would you even do that?
I don't see Apple coming out with an ARM Mac Pro within 3 years anyway. Why would they do that? No upside for them, that market has to be won back first. Slowly start with laptops and iMac, focusing on consumers, get the OS in shape and third-party vendors accustomed to the platform first.
iPad-like performance is already fast enough for almost all consumers, and I'm sure Apple doesn't want to break with Intel on everything right away.
The Mac Pro was seen as an assurance that Apple was going to support their Pro customers for the foreseeable future. It's not just the Mac Pro itself; it's the software and supporting hardware.
Also crucially, due to Apple's spat with NVIDIA, the Mac Pro doesn't support CUDA. This means software has to be modified to use Metal Compute to support the Mac Pro.
If you make pro software, Apple just sent a huge signal that the future of the Mac Pro is at best uncertain. So maybe hold off on that Mac support for the next two years.
Apple is not going to make an ARM Xeon. The resulting computer would be so expensive after you amortise all the R&D to create a single workstation class CPU for it, that nobody would be able to afford it. All the Pros who bought Mac Pro got played hard.
The Mac Pro serves such a tiny, tiny sliver of the pro market, I don't know if they got played. I mean, the big audio/editing/GFX studios that bought a Mac Pro will keep using them for years or swap to Windows if no powerful ARM Mac comes out.
All the important pro software supports Metal by now.
Anything that isn't a Hollywood studio will have to use an ARM iMac.
Enthusiasts and semi-pros who want a powerful, affordable and extensible Mac with state-of-the-art discrete GPUs can probably get lost, as is the case right now.
Then again, Apple is now only bound by their own operations, so who knows what they have planned.
How did they get played hard? They got the latest model that is still supported by lots of software. The only way you get shafted is by buying the ARM Mac Pro which is something the owners of a x86 Mac Pro luckily avoided.
It is only supported by software because Apple convinced big software developers to port everything to Metal. Apple no doubt made assurances to these developers that this was Apple's big re-entry into the Pro market and that they were in it for the long term.
Now, it turns out that Apple wants to complete a transition in 2 years... they are not keeping x86 around for the Mac Pro longer than that. It's not credible that Apple can just scale up their CPU into a workstation part. So if you are making pro software then you have to come to the inescapable conclusion that Apple is, in fact, not serious about the Pro market, and it's probably best to avoid expending further development resources on the Mac.
AFAIK, they're going to make their own GPU. A recruiter from Apple reached out to me a few weeks ago trying to poach me from my job at a large GPU manufacturer.
They already have their own, custom GPU on their A-Series SoCs, so them hiring people working on GPUs is hardly surprising? I'd expect them to replace where they use Intel's integrated parts with their own, which might push them a bit higher end than where they have been previously, but I doubt they're chasing after the dedicated side of things?
Re. the PPC era and whether it will be like that in the future: this is from a lowly web backend developer's perspective, so I might be naive, but hopefully we've learned to build generic solutions without sacrificing performance when it comes to things like this, and it doesn't devolve into a "code that only runs on X hardware" type of thing.
I think it's a more common problem to see things like the T2 security chip not being present on older hardware or hackintosh (unsupported) hardware, so if you're not running the right hardware you can't take advantage of a feature (AR on certain iPhones) or you won't have the performance advantage (FileVault encryption with the T2 chip vs. encryption done by the CPU).
1. They explicitly said yes, there would be a whole range of CPUs that they're making. This isn't just "throw the phone CPU on a desktop," they'll have laptop class and desktop class CPUs that are different.
2. They're saying integrated. Whether they also throw Radeons on high-end desktops is probably not yet determined. (They have a two year roadmap for this rollout, and I think it's pretty obvious they'll start with the portables/iMacs where the benefits of ARM are more likely to outweigh the pain of change for more people.)
Will they have a wide TDP range of designs? Yes. On feasibility, there are two major components to the cost - first is the engineering cost for design. You can bet that these costs are not massive; Apple has roughly $200 billion cash on hand right now. Most of the work will be scaled up as part of their overall design process.
The second cost component is the Non-recurring engineering to get a mask-set made for the chips. A 7nm start right now costs maybe in the range of $20mm? If the Mac Pro is required to stand on its own feet financially, perhaps that will push out timing / push up cost. On the other hand, if one considers the cost of the top end chip to be a marketing cost, it’s literally a rounding error.
GPU - Apple is going to have their own GPUs, and no longer worry about AMD and Nvidia. They’ll have complete control of the hardware and software stack and control their own destiny. I would be very surprised to see dual GPU options.
Keep in mind that this doesn't imply a different chip design. TDP is primarily a function of intended application and cooling capacity, not chip tech.
That said, they've already demonstrated ability to do this with A12/A12X/A12Z, which are the same chip at different CPU/GPU core counts. Clock rates are not impacted except where thermal limitations (TDP) impose a limit.
Which has nothing to do with why Apple took on debt to pay dividends.
If Apple took funds from a foreign subsidiary to pay shareholder dividends it would have to pay
1) Federal Corporate income tax of 20%.
2) CA state corporate income tax of 9%.
Then out of the 73% left over, the shareholder would pay state income tax + federal dividend tax. That means the shareholder ends up keeping between 50-60% of the funds Apple paid.
If Apple borrows the funds instead, it owes no federal or state corporate tax. The shareholder just needs to pay state income tax and federal dividend taxes, so after tax they end up with 70-85% of the funds Apple paid.
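Roughly, that's the back-of-the-envelope math below (my sketch; the ~73% figure only works out if the state tax is deducted before the federal rate applies, and the shareholder-side rates are assumptions rather than anything Apple has published):

    fn main() {
        let funds = 100.0_f64;

        // Repatriation path: state corporate tax, then federal tax on the rest.
        let after_state = funds * (1.0 - 0.09);         // 91.0
        let after_federal = after_state * (1.0 - 0.20); // 72.8, i.e. ~73% left over
        // Shareholder then pays dividend + state income tax, assumed 15-30% combined.
        println!(
            "repatriated: shareholder keeps ~{:.0}-{:.0}%",
            after_federal * (1.0 - 0.30), // ~51
            after_federal * (1.0 - 0.15)  // ~62
        );

        // Borrowing path: no corporate tax hit on the borrowed funds themselves.
        println!(
            "borrowed: shareholder keeps ~{:.0}-{:.0}%",
            funds * (1.0 - 0.30), // ~70
            funds * (1.0 - 0.15)  // ~85
        );
    }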
They said "we expect the transition to take two years [...] we've still got some Intel Macs to show you soon [sic] [roughly what they said]" during the keynote.
This isn't just going to replace low-end machines; it's every machine they sell, within two years. Probably starting with low-end, but moving up.
I'd assume that they'll still have AMD/NVidia GPUs as options if they truly plan to bring these CPUs to the Mac Pro market. There's no way that e.g. Animation studios will accept anything less than absolutely top-end performance. And upgradability too. I think Apple must know this.
It's definitely possible; they just need to expose the PCIe lanes in a sensible way (this has been rare to see on ARM-based machines so far) and have PCIe device manufacturers distribute drivers for ARM macOS.
PCIe is surprisingly robust for a high performance interface. fail0verflow mentioned doing PCIe over serial so they could mitm some of the communication when hacking the ps4.
> Apple showed many pro apps ported to ARM, notably their own Logic Pro, plus Photoshop, and they were showcasing Maya on ARM. That is about as pro as it gets for the Mac.
> That reads to me like Apple isn't going to keep Intel around for some high-end Pro machine.
That conclusion seems a bit too far fetched from my point of view.
There are many users who run Logic on an MBP, and with Photoshop it's even more common to use it on a laptop.
Sooner or later an ARM Mac Pro is coming, but if I had to guess I'd say that will take a while.
There were 6 years between the trashcan and the current cheese grater. So maybe 2025 then ;)
> 2. What happens to the GPU? Will they have their own GPUs for the iMac and Mac Pro as well? Dual-GPU options, with the Apple GPU for power efficiency? This feels like additional complexity.
The Nintendo Switch uses an Nvidia GPU with a very slow and outdated ARM processor. Yet this allows it to run Rocket League, The Witcher 3, etc.
Apple might go with discrete GPUs from Nvidia/AMD to pair with their ARM processors.
Some services are in Google Cloud, some in AWS, some in Azure, plus Apple has eleven data centers of its own. I believe that Apple is working on bringing the pieces that have been offloaded to AWS/Azure/Google back in house.
1) Before the demo even starts: all Apple apps, pro or casual? Already running. Microsoft Office? Already running. Adobe Creative Cloud? Already running. That's the vast majority of the Mac userbase right there.
2) No apparent hard cuts on the legacy. I was expecting them to not support x86 backwards compatibility if they could get away with it, but apparently they're committed. Even naming the technologies "Universal Binaries 2" and "Rosetta 2" is a confident been-there-done-that-will-do-it-again presentation. Unlike last time around, there also doesn't seem to be a major removal of macOS APIs?
3) Acknowledging what kind of x86 stuff machines are used for by showing VMs right away, and (trying to?) show Docker right away. Is that the first Linux demo in an Apple keynote presentation? It was a Linux desktop environment, even.
Now, it seems this ARM announcement was a bit rushed by design, flashing by the features without allowing a substantial look. So it's likely we're going to be disappointed by x86 performance and have to say goodbye to some APIs (this is for sure the end of OpenGL, right? edit: no [1]), but they do leave the impression of having their priorities straight on broad software support and as seamless a transition as you can get.
>Acknowledging what kind of x86 stuff machines are used for by showing VMs right away
I do believe they showed a Linux ARM VM, not an x86 VM.
x86 VMs are probably going to take a massive QEMU performance tax. The positive news is the boost this gives the Linux-on-ARM space.
Games on Mac? Games on Mac with Windows Bootcamp? Yeah... Maybe buy a console or a second PC...
Your comment made me rewatch that section of the keynote [1], and I believe you're right. Docker and Parallels were shown in a 'virtualization' subsection, not the Rosetta subsection. So that must indeed have been ARM64 Debian we saw there. Did I mention that their presentation was too fast? :)
That's going to be interesting in the end. Being able to build/smoke-test x86 containers on macOS will be important, at least for a while. So it's up in the air whether that will be addressed, although it's worth noting that Docker already supports cross-building images [2].
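For what it's worth, here's a minimal sketch of what that cross-building looks like today, wrapped in a small Python helper you might run from CI (the image tag is made up, and it assumes Docker with buildx and QEMU binfmt emulation already set up on the host):

    # Hypothetical CI helper: build an amd64 image on an ARM host via buildx.
    # Assumes Docker with buildx and QEMU binfmt emulation are installed.
    import subprocess

    def build_amd64_image(tag: str, context: str = ".") -> None:
        subprocess.run(
            ["docker", "buildx", "build",
             "--platform", "linux/amd64",  # target arch differs from the host
             "-t", tag,
             "--load",                     # load the result into the local daemon
             context],
            check=True,
        )

    if __name__ == "__main__":
        build_amd64_image("example/app:amd64")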
In the “Platforms State of the Union” video, which gets a little bit more technical, 30 minutes in, they are explicitly mentioning the ARM version of Debian Linux and show it off. So yes, we are talking about virtualization of an ARM system here.
I'm surprised we didn't get any performance numbers. Either raw power or at least power efficiency and projected battery life improvements. Seeing as this is a major reason for the transition (according to them), it feels very weird.
They're shipping a 'Development Transition Kit' Mac mini with an A12Z this week, so it's not like the numbers are going to stay private for a long time. Even if there's an NDA, someone's bound to break it.
There's no indication that the A12Z will be the chip that ships to consumers at the end of the year. So honestly it'd be a bit out of character to boast about the specific performance numbers of a pre-release dev kit chip - especially when that chip has already had Geekbench run on it for a while: https://browser.geekbench.com/ios_devices/ipad-pro-12-9-inch...
Last time around, the dev kits had Pentium 4 processors but the Intel Macs that launched used Core and Core 2 processors— a totally different microarchitecture with drastically different performance and power characteristics. It's a pretty safe bet that the first ARM Macs will be using SoCs that are at least a generational improvement over the A12Z. The higher-power Macs that will probably be released toward the end of the transition will likely use chips that are more drastically different from what's in an iPad Pro.
I'm honestly pretty excited to see what Apple can deliver. The A12Z is passively cooled, yet is on par with the 10th gen i5 in the new 13 inch MBP: https://browser.geekbench.com/macs/macbook-pro-13-inch-mid-2... Just imagine what they can do with active cooling!
While it's certainly possible the A12Z in the Developer Transition Kit is a true drop-in from the iPad, I would be surprised if it's not clocked significantly higher and actively cooled given the relaxed power and thermal restrictions.
I don't imagine the benchmarking software is being run through the App store process, so does the OS really make that big of a difference in the results? I'd think that if anything, the restricted nature of iOS would lower the benchmarks.
I just meant that we know very little about how the underlying technology we're testing actually works: how it prioritizes cores and resources, how instructions get optimized under the hood, etc.
To be picky, that's the hardware, not the OS. And the same argument still applies: If we're flying blind, the benchmarks may underrepresent performance (although I think you're overestimating the opacity of the i-architecture).
It was pretty much the same with PowerPC to Intel...
Steve Jobs demoed OS X first. Then he surprised everybody by saying OS X had lived a double life in a secret building for many years (with a photo), and that he had been running it on an Intel Pentium 4 during the demo all morning. Nothing about performance.
There was also a developer system back then: an Intel Pentium 4 in a Power Mac case.
In a lot of ways, this is far more ambitious, and could mean a lot more for Apple long term, but...
... The one thing that hit me the most was how impressed I was with Apple back then, and how excited I was that a company could do this. Steve Jobs presented it really well, but this time it felt quite flat.
... I really wish they'd work a bit on their showmanship. They rushed through so many small things, and the presentation felt unnatural. Like they have all over-rehearsed it but are still reading while presenting (you could even see the eye movements). It is just too smooth, too generic, and a bit too polished.
Please slow down, focus on only the most interesting bits, and give us time to digest it...
I think some of that stems from the fact that Steve Jobs was saying his own words, but everyone else is saying marketing's words. The most marketing could do to Steve was tell him he was using a competitor's brand name incorrectly. Everything else was his. So he could speak passionately in his own words.
Nailed it. With Steve Jobs his own passion and enthusiasm really shone through, whereas now we're seeing a rehearsed, scripted presentation. I don't think they're necessarily wrong to take this route, and I think with Apple's much bigger reach it's probably a fairly wise and safe bet, but it does sadden me a bit that we don't get Steve's showmanship anymore.
That wasn't their first processor change, either. Not even was it first in the Mac line.
The Apple I and II were MOS 6502 machines except for the Apple IIgs which was a 65c816. Then the early Macs were 680x0 machines. Then PowerPC. Then Intel.
They looked at Intel chips for the iPhone and settled on Arm before launch. I wouldn't be surprised if some very brittle, early development version of iOS was running on an Intel mobile platform at some point.
Your "buts" are dead-on. Everything felt so distant and unauthentic. They should require their execs and presenters to not read from somewhere else and do it live.
I don't understand why anyone would care about this. What difference could it possibly make? Are fewer people going to buy ARM Macbooks because the execs sounded a little wooden?
When I saw the PowerPC to Intel move, it felt like a company with ambition and vision who knew what they were doing with technology.
It was a confident CEO that used his own words to passionately live-demo the products his company was developing and selling. He almost apologetically told us that Apple had to make the change to deliver the notebooks he had promised two years before -- But couldn't with PowerPC. It made sense. And then, he showed us that all along Apple had the foresight to plan for this many years ahead.
It was inspiring, and I was really excited about it. As a user and computer scientist, it made me curious about OSX. As a developer I wanted to support their platform, and went on to work on iOS apps a couple of years later. Apple felt like the future.
This time, I feel unenthusiastic and am left wondering where to go next... despite the fact that I objectively think this has the potential to be far more significant.
Delivery with confidence and passion for the product always matters. A lot.
Meh. You're in tech. The CEO could be a dildo on a stick for all I care; the only thing that should matter is the product itself. Otherwise you're just buying into a cult.
I am sorry, but I really need confidence in the person leading the platform I am developing for. My income depends on it, so I need to feel confident that the platform will actually move forward in the right direction.
I don't have confidence that a dildo on a stick can make the right decisions... But what do I know... I suppose, I have heard stranger things.
> He almost apologetically told us that Apple had to make the change to deliver the notebooks he had promised two years before -- But couldn't with PowerPC. It made sense. And then, he showed us that all along Apple had the foresight to plan for this many years ahead.
And what is the difference with the current switch?
I mean, apart from the presentation, which I do not care about.
I'm sorry, but to me what you just described is a sales pitch. I consider it vitally important to see through them, even when evaluating technical decisions.
It wasn't a surprise to anyone paying attention; NeXTSTEP had run on 68K/x86/Sparc/PA-RISC. Removing architecture support would have been remarkable.
What's important, for those paying attention, is that Apple promoted PowerPC emulation with the first x86 Macs in OS X 10.4 and then removed it after 10.6. If you think Apple won't screw you again, well, go ahead, it's your money.
[Dis]claimer: I have no long or short in AAPL. Anyone posting or voting in this thread should similarly disclose.
The first Intel Macs were shipped in January 2006. Rosetta was dropped with the release of 10.7 in July 2011. Five years' support for a discontinued architecture seems rather generous.
(1) OS X 10.0 through 10.3 were released for PowerPC only. Apple first supported x86 in 10.4 and last supported PPC in 10.6.
(2) It's not 2030. If you're reading this in 2030, HN won't let you reply. That's a separate but related problem.
(3) "Time passed, so fuck you" is not a customer-centered philosophy. "Time passed, so I'm going to remove already-shipped capabilities" is a customer-centered philosophy only in the sense that it's centered on fucking the customer.
(4) I have an Apple IIe and a MacPro3,1 and a whole bunch in between: fool me once, shame on you; fool me fifteen times, shame on me.
[Dis]claimer: I have no long or short in AAPL. Anyone posting or voting in this thread should similarly disclose.
I still think that comments here, which require some minimal creative effort and are attached to identifiable user names, are usually somewhat legitimate, and more likely to be from fanbois than financiers. Voting to amplify or silence perspectives, on the other hand, entirely lacks accountability.
(I'm currently taking bids on an HN account with 4575 interweb points.)
It seems to me that the reverse is more likely at the scale of most people on HN: people emotionally invested in a brand use disposable income to buy shares of the brand (or, the reverse).
It wasn't just lack of performance numbers, there were no actual products announced. They would have had to tip their hand on a lot of info that is not helpful to customers or their ability to keep selling Intel stuff.
One big question though will be how this dev kit benchmarks against the current maxed-out Intel Mac mini. I'm curious whether GPU performance beats the current Blackmagic eGPU (RX 580).
I think concerns about how Apple will handle the transition can generally be addressed by the relatively smooth transition from PPC to Intel. Apple has literally done this before.
Apple transitions CPU architectures every 10-15 years.
6502 -> 68k in 1984 via hard-cutover [edit: see cestith's reply, there's more to this story than I knew]
68k -> PPC in 1994 via emulation
PPC -> x86 in 2006 via Rosetta JIT translation
x86 -> ARM in 2020 via Rosetta 2 static recompilation
You could even argue the transition from Mac OS 9 to Mac OS X was a transition of similar magnitude (although solely on PowerPC processors), with Classic Environment running a full Mac OS 9 instance [1]
I disagree that 6502 -> 68k was a "transition." The Apple II and Mac were two separate product lines. The three major early home computer companies (Apple, Atari, Commodore) all did this.
This is true, but note it was released in 1991, many years after the Mac's introduction. By that time, the Apple II was definitely on the way out. The last hold outs (schools...) probably needed encouragement.
Yes, in 8-bit mode. The IIgs runs with the processor in 16-bit mode from everything I've read about it. It might be able to swap modes to run older Apple software, but the IIgs is a 16-bit machine.
Not divulging their hand may be a thing. But they could at least have said something (rehashed) about the A12Z: "it performs better than the CPUs currently shipping in the Mac mini by X% in Y benchmark".
I'm not intrinsically excited for a new Apple product, but if they could have told me "we can deliver 50% extra battery life in your new MacBook at comparable performance," that would have built up some hype and maybe mindshare.
> not helpful to [...] their ability to keep selling Intel stuff.
I hope that that's it. If we're going through the pains of a platform transition, I'd like to get something out of it.
> Not divulging their hand may be a thing. But they could at least have
Let's say that the new numbers are mind-blowingly good. So then what? Nobody buys anything from them until next year because they're all waiting? Yikes. This way, fewer people will be scared off from buying something right now instead of waiting.
> But they could at least have said something (rehashed) about the A12Z: "it performs better than the CPUs currently shipping in the Mac mini by X% in Y benchmark".
It's a kit to allow developers to prepare for transitioning their applications to ARM, for future retail MacOS/ARM devices. It's not a new Mac Mini, and it doesn't make sense to compare the retail machines to this dev kit (which is probably running a yet-to-be-fully-optimised OS)
I'm guessing they're not planning on releasing any A12Z products. They kept going on about how "scalable" their platform is. I'm betting they launch with a significantly more powerful processor (they could easily double core counts and up clock frequencies for a laptop-class processor) on a next-gen process (i.e. 5nm). They probably don't even know what the performance will be like yet.
I think the lack of hardware and lack of benchmarks are related. Apple doesn't know yet what the thermal throttle will be on an A12Z MacBook until they start testing the cooling system.
There is no reason why the RX 580 would not be supported on ARM, or why there would be any meaningful performance delta. AMD does not have any kind of "secret sauce" driver for it; it is simply LLVM targeted to that architecture, converting HLSL/GLSL/SPIR-V into the architecture-specific code.
It's an integrated GPU so it isn't going to compete against serious dedicated GPUs, and no one should expect that. I imagine much like existing Apple devices (and Windows laptops) with dedicated GPUs it will switch as necessary. But at least the integrated GPU will be better.
I think the 5700 series is eGPU compatible on macOS.
These comparisons fail because Apple is orders of magnitude away from Nvidia and AMD in GPU performance, plus this is a chip with a very limited TDP. I think they will come in a bit slower than current AMD APUs.
I don't think they even used the term "ARM" at any point. They're calling it "Apple silicon," and they acknowledged it's the same as what the iPhone and iPad use. But I thought it was interesting how they seemed to avoid the term. It's probably just a matter of avoiding anything too "techy", plus marketing.
The first guy in the "lab" scene (Johny maybe?) mentioned plenty of other "techy" terms. I think that they want to distance themselves from other ARM manufacturers and put the focus on Apple's advantages over Qualcomm and others.
ARM doesn't really matter very much to Apple - Apple designs the micro-architecture and many (most?) of the other SOC components themselves.
With the technology moves Apple has made, they could probably switch to RISC-V at this point. However, being able to use ARM dev tools probably adds more value to Apple than any cost savings they would gain from moving away from ARM.
No, the custom silicon matters more. They've spent years building infrastructure to make it easy to change late stage code generation to multiple ISAs.
It doesn't. Ninety-nine point five nines of software is architecture-independent, and if you're an App Store sharecropper you'll never notice. It's the users with paid-for x86 binaries who will be screwed, like they were when Apple removed the ability to run PowerPC binaries in OS X 10.7.
[Dis]claimer: I have no long or short in AAPL. Anyone posting or voting in this thread should similarly disclose.
They mentioned running Linux in a VM at least twice in the keynote. I'm not sure why, unless it's an acknowledgement that OS X is no longer a usable development environment.
Linux, like any OS written in the past 30 years, is substantially architecture-independent. My day job involves coding for several devices with Linux kernels on ARM (32bit) and Aarch64 and I have no idea which is which, nor any need to.
[Dis]claimer: I have no long or short in AAPL. Anyone posting or voting in this thread should similarly disclose.
(‘ARM’ has become meaningless marketing drivel; there are physically existing pairs of 32-bit ‘ARM’ processors that have exactly zero physically existing machine instructions in common.)
[Dis]claimer: I have no long or short in AAPL. Anyone posting or voting in this thread should similarly disclose.
It's funny how that works. This story may be only tangentially related, but here goes: A few years back, I was visiting the local ARM offices in Trondheim, Norway. I happened to mention something about the iPhones using their processors, and my host immediately said, “I am not allowed to comment on that”. But everybody knows it, I said. In response, he said yes, but he still can't talk about it. Possibly, he broke the rule by admitting even that much.
I don't think it's strange at all. Consider for a moment, what competitive advantage does Apple have advertising that the iPhone uses an ARM CPU? The people who need to know the architecture can find out easily enough, the people who are just buying the latest iPhone know it has an Apple CPU.
Everyone and their dog has known that Apple uses Gorilla Glass on their devices since the original iPhone. I believe it was only recently (this year) that Apple executives acknowledged the relationship in any capacity (one of their SVPs visited the Gorilla Glass factory with press).
FAANG NDAs can be crazy. I've been in a position where, yes, everyone knew something, and the other company openly talked about it, but because of the way the NDA was written there'd be heavy financial penalties for us to talk about what everyone seemed to know anyway.
I also thought this might be a 'dry run' for a RISC-V strategy across their platforms, as RISC-V was designed to be expanded for specific applications. Most of the IP is no doubt in the peripherals, which are largely CPU-instruction-set independent. We'll see. I say 5 years, tops, to the first Apple RISC-V device.
RISC-V doesn't really get Apple anything. They were a very very early investor in ARM back in the early 90s, and the rumor is that they have a pretty much "do whatever you want" licence to ARM technology.
Maybe people thought I was making some kind of value judgement about web developers not being "real developers" because they're not kernel hackers. That's just my guess though - I doubt any of the downvoters will see or reply to this.
I believe Apple can make processors for laptops and normal desktops the A-series way, but I'm curious how they'll do it for the Mac Pro. By adopting a chiplet-style architecture?
I imagine they may use a large number of SoCs. Keep in mind that their laptop game plan may be to build CPUs to the max limits of the TDP and then use their (rather incredible) power management to drop power far below Intel chips for normal use. This is the iPad game plan: the chip is rarely ever fully hammered. You may see a 12-core MacBook Air capable of (unsustained) desktop-Ryzen-level performance, but it will generally run as 6 low-power cores and one or two high-power cores, with bursts of insanity when needed. If you can pool a few of those together, it would be easy to beat Intel's server offerings.
They said they'd be using Intel chips for a while; that could be specifically in regard to the Mac Pro.
Even if Apple could make a great server/high-performance chip, that seems like a lot of additional work for little gain in the market.
Keeping Intel chips for some computers may also make sense for keeping a toe in the water with Intel, in case they need to rekindle that relationship for future projects down the road.
The goal is to transition in two years. Given Apple's record at the high end, I believe that goal is optimistic.
I would be astonished if an Ax chip can match the media creation performance of the Xeons in the MacPro and iMacPro any time soon. The top Xeon has 28 cores, and matching that is going to be an interesting challenge.
I think it's more likely there will be further splits in the range, with prosumer iMacs running Ax and no-compromise pro models staying on Xeon for at least the next few years.
Intel support was announced during OS X Tiger, and 10.4.4 was the first public release with support for x86. 10.7.0 was the first release without support for PPC. So 10.5 Leopard and 10.6 Snow Leopard were the two major releases with support for both Intel and PPC.
Now, Apple tends to provide security updates for at least a few years for each OS release, so I can envision recent Intel Macs getting security updates for another 4-5 years.
Up until last month I would've said they would keep things around much longer - I mean, they supported old iPods and iPhones far longer than their competitors...
But the killing of OpenGL and 32-bit software is making me wonder about their previously amazing commitment to supporting older things.
Even if the goal to transition is 2 years that still means there will be a long tail of having to support Intel chips for future updates to the operating system. I have a 2011 Mac Pro that is chugging along perfectly fine with zero issues.
Possible; they've yet to drop the i-moniker of the iMac, after all, though resurrecting the iBook after so long, and while the MacBook remains "available", would be a bit odd.
I am interested in seeing real-world performance. I agree that there isn't a lot that Apple needs to do here; the most curious bit will be how much they're able to automate behind the scenes with Rosetta to help out the development community. For most of my workload I am sure that it'll be completely transparent. The only bit that'll likely be less performant will be testing in virtual machines for x86, but it isn't like I care too much about that performance. I'd take the 3+ hours on battery.
3. Osborne effect avoided by not crowing about specs - those who are cautious or dependent on x86 will continue buying Intel kit. Once the new product is available and it kicks the daylights out of the legacy hardware, they will already have their Pareto split of interest in their new offerings.
The A12Z is already shipping in the latest iPad Pro, so it's not like its performance is some unknown quantity: there are plenty of benchmarks... Although I guess this dev kit could run at a different frequency and have more or less memory bandwidth, performance should still roughly be that of the iPad Pro.
The A12Z is itself only a small update on the A12X from 2018, so it's basically two years behind whatever will ship in actual ARM Macs this fall...
It might be running at a somewhat higher frequency, because why not. But the DTK is not a production model, so I don't expect Apple to spend significant resources on it. After all the A12Z should be good enough as-is.
You have to consider that Apple engineers are busy designing new SoCs for the entire Mac line-up: optimizing the A12Z for the DTK is probably not their top priority at the moment. They will want to wow people with a new MacBook (Air? Pro? <blank>?) this fall: that SoC should be their priority...
Apple will almost assuredly have a dedicated event later this year where they will announce new hardware.
Apple has development kits running in modified Apple TV's. This is a chip that has essentially been out for a few years in iPad Pros. Why would Apple announce numbers based on this? It also assumes Apple will ship future laptops without fans or ports, which is how the development kit is coming out.
Apple will most likely have an A14X out later this year in at least one laptop. That's going to be significantly newer and more advanced than the A12Z in development kits.
No, it's not less dependency on third parties; it's more dependency on TSMC. Before, they were able to play Intel against TSMC; that's not the case anymore. Then you add the geopolitical issue of Taiwan vs. China, and the risk level keeps increasing.
> I'm surprised we didn't get any performance numbers.
It's the CPU in the iPad Pro, performance numbers are out there in the wild. The only big change is the RAM. This isn't a retail product, it's a developer kit. When they release retail Macs I'm sure there will be some performance numbers.
There's still a huge difference in TDP. The iPad probably has a TDP of maybe 10 watts? The Mac mini's Intel CPUs have 65-watt TDPs. They can deliver more power and cooling to the A12Z than in an iPad, and it should result in much higher performance.
They said they are planning on making a family of SoC for the Mac though, I doubt we are going to see iPad-level processors in the actual ARM-based Macs they will sell.
I wonder if they're holding them for the actual hardware release in Fall? They could still be deciding the tradeoff between battery life and raw power.
It's not a product release - it's an announcement of direction for the Mac product line and the Mac OS platform. Once they have hardware with ARM processors for purchase they'll be speaking to the processor specs and how much better they are at power management.
They had to announce this early to allow developers to get ready. If they could have gotten away with not announcing early they would have. Obviously (if all apps would automatically run natively on ARM without any developer involvement) they would have first announced this with an actual new Mac.
That, however, was not an option. So they have to tread carefully in what they say and they also have to be a bit careful about showing off too much.
They only had to tout the benefits of ARM insofar as to placate the fears of consumers (their Rosetta story plus virtualization story helped there) and to provide some reasonable justification to actually make devs at least a bit excited, even though they have to do additional work.
Plus: No ARM Mac (except the transition kit) currently exists. It's not even clear if the first Mac they will announce is even finished yet, if only internally. And even if it is finished: do you think going on stage now and talking about a new MacBook Air that has twice the performance and 50% more battery life than the current MacBook Air – oh, and you can get one in December – would be a good idea?
This is Apple’s tightrope walk to avoid too much of an Osborne effect. I think they are ok with some Osborne effect (if only because they know that even if no one buys an Intel Mac ever again during the next two years transition time they will not go bankrupt, so far from it) but you don’t have to provoke one, right?
I expect plenty of numbers and comparisons when they introduce the actual first ARM Mac.
It doesn't seem weird at all, and the fact that they're sending out devices on the A12Z seems to be intentional sandbagging: they know that people are going to benchmark and the results will likely be simply comparable to current hardware (from a performance perspective...energy efficiency will clearly be much better). When they release the actual devices, where their power and thermal profile is dramatically higher than an iPad Pro, it will actually wow.
That's my takeaway too. But imagine the impact if the speed is as good or faster. They're shipping it with 16 GB of RAM too, so it's at least not the typical 8 GB minimum.
I would also expect this hardware to be nerfed compared to what actually goes out to customers. That way, whatever is achieved on this, the real machines will be better for real users.
Those numbers would be meaningless without knowing what the actual geometry of the internals will be, because cooling is a major limiting factor for laptop processor performance.
Don't expect performance. The Intel DTK was Prescott-based (while AMD had great dual cores and Intel was lagging). Then they released their Core series, which started from mobile and had great performance.
I guess they did some homework before ditching Intel.
The big question is whether they have enough headroom for manufacturing reliable chips with sustained high power.
I was replying to the parent thread that observed no performance data was shared. If performance were, say, the same or slightly worse, then it would explain why Apple wouldn't release performance data. But they might still want their own chips if they increased profitability.
Unlikely - being able to have full control of your roadmap is a huge strategic advantage. Profits and revenue are nice, but if Apple was interested in that they could dual source x86 from AMD and drive cost down.
You don’t think companies like Oculus are envious of Apple’s flexibility from not having to rely on Qualcomm for their mobile SoCs? It’s not just about profit margin.
> Profits and revenue are nice, but if Apple was interested in that they could dual source x86 from AMD and drive cost down.
If Apple could do that and play it to their advantage, they would have done so a long time ago.
Higher margins and profits are the drivers in the end. Strategic control or not is just a way they use to achieve that. It is a publicly traded company, after all.
> You don’t think companies like Oculus are envious of Apple’s flexibility from not having to rely on Qualcomm for their mobile SoCs?
I doubt Oculus cares given their goal. There are pros and cons of vertically integrating an entire company into one.
Genuine question: unless you run a datacenter with thousands of CPUs, does it really matter?
Apple has zero presence in data centers.
I read people here writing "double the battery life" without any source, but even if that were the case: I own a laptop that does 2 hours on battery. I use it to run models on a discrete GPU, so power efficiency goes out of the window anyway; doubling its battery life is really not achievable.
The other one can handle average workloads for 12 hours and weighs a bit more than a kilo. If it were smaller or lighter it would be a much worse laptop than it is (if it's too light it has no stability and you fight constantly to keep it steady on your desk).
> Genuine question: unless you run a datacenter with thousands of CPUs, does it really matter?
I think it does. Other than double the battery life (which I wouldn't really need, but my Dad who travels a lot would absolutely love), the big thing is thermals (which were specifically mentioned in the keynote).
The biggest constraint on Laptop performance is thermal throttling. That's why gaming laptops have huge bulky fans, and a current MacBook has pretty decent performance for short bursts, but if you are running something (say a compiler) at full throttle for a few minutes then it gets significantly throttled.
Better thermals (heat output is directly proportional to power usage) could well be the key to unlocking close-to-desktop performance in a laptop form factor. Which could be a pretty big win for the MacBook Pro market.
> I'm surprised we didn't get any performance numbers.
The fact that all the demos were on their top-of-the-line, most expensive machines felt very weird to me. "Look at this amazing performance" would be great if the demo were on a MacBook Air.
They did have a Mac Pro there as a prop, which was interesting and potentially confusing. There was a brief moment where you could see that it was connected to something in a Mac mini case.
I really wonder what this says about the x86 platform going forward.
Mobile completely passed it by.
I’ve been seeing more hype about ARM servers for a while with AWS Graviton instances, the new #1 supercomputer in the TOP500, etc.
And of course today we see that Apple plans to transition their Macs to their own ARM chips. Even Microsoft made an ARM-based Windows/Surface product but it didn’t seem to amount to much. I wonder if they’ll want to make another stab at it seeing Apple’s direction with strong vertical integration.
While I don’t think x86 as a platform is going away anytime soon, I feel like its market share and by extension its relevance will slowly dwindle over the next decade or two. Interesting times.
I too don't see x86 disappearing soon but it feels like the world has changed and that change is not positive for Intel.
We've been used to x86 dominance on the desktop and servers for so long that I think a future where there are two architectures with critical mass and one of the architectures can be licensed by a number of firms is hard to imagine.
There will be some short term effort and pain but it must surely be a better competitive environment than we have now.
The historic / current Intel and AMD duopoly has surely not been healthy.
Intel has either squandered the tech and we will see drastic improvements soon, or they'll fall closer toward the dustbin of history. I suspect the latter, since they've been stuck at various points in their specs for a decade plus. Or they do something novel for once. I wouldn't count them out of a breakthrough into new architectures/fab processes entirely.
It's astonishing to me how far Intel has fallen in so short a time. I feel like it was only a couple years ago that Intel was understood universally to be the "heavyweight champion of the world" so to speak.
They completely missed the boat on mobile and AMD has leapfrogged them very recently on desktop. On top of that there were the Spectre vulnerabilities which shook confidence even further. This announcement is another huge blow given the extent to which the entire consumer electronics industry tends to follow Apple. I would be interested to hear an insider's perspective on such a rapid decline.
Not an insider, but as far as I know, it's mostly their fab that failed.
They're still on 14+++nm when AMD is on 7nm, with 5nm coming soon.
You can have the best architecture in the world (no idea if they have) and the best engineers, it's hard to compete when you're so far behind in transistor size.
Intel’s CEO, Brian Krzanich, was forced to resign in the middle of a major chip and manufacturing transition. [1]
That had to be a heavy blow because the politics in this company are ugly.
I feel like sometimes people look past the most obvious signs of why a company is struggling.
For example, Apple’s move to the mothership had a major impact on the company. It was one of the company’s biggest “product” releases. No one mentions this as a reason for anything.
They tried to jump too far with Intel 10nm. The process node names no longer represent actual transistor size; they just indicate a new generation. With 10nm, Intel took a risk and tried to shrink the die more than would normally happen in a generation jump. If it had worked, it would have put them a whole generation ahead of TSMC and secured process leadership. But physics bit them in the ass. It turned out that shrinking that much was far harder than they thought, and the cells they were using were not robust enough to handle it.
Instead of putting them way ahead it cost them years of recovery and let AMD sneak up from behind.
I agree with you, but keep in mind that those numbers represent the generation of the process node, rather than specific physical characteristics. Intel 10nm is roughly comparable to TSMC's 7nm.
The problem (in my non-insider view) is that Intel's 10nm just hasn't delivered. It was delayed substantially, and faced several problems even after rollout.
Well they have spent the last 4 years trying to fix wave after wave of side-channel vulnerabilities. A handful of those affected ARM and AMD as well, but a very large number of them were Intel-only, and were a direct side effect of Intel cutting corners on safety to get advances in their performance numbers.
My comment was meant to be more general than Intel; I'm even including AMD here. x86 has no presence in mobile (that ship has sailed), and again I'm seeing quite a lot of innovation on the ARM side with regard to server-class processors. Not sure how traditional PCs will play out, but Macs are going to split off from x86 within years.
I totally agree; I think it would be a good thing if x86 were to be replaced by something newer. Whether Intel could adapt and continue to play a leading role is another question.
I don't think they can ban emulators without just straight up banning competitors like AMD that make compatible chips. (which presumably they would have done if they could)
They were forced to allow AMD early on so there would be a second source to sell to .gov, and since then AMD has gained enough patents that the two need to cross-license.
I'd suggest avoiding either hyping or dismissing the new processors. We don't have performance numbers for a real model, and until we do, we know very little.
The thing we can talk about is Apple's strategic direction. The good version has Apple releasing a notably superior general purpose computer, maybe even gaining more marketshare in the process. The bad version has the Mac turning into an iOS development station. The fact they showed a game of all things does give some encouragement.
Key questions:
1. How open the new OS/models will be, and how much developer support we get. The more open, the more likely it will be a powerful general-purpose computer.
2. Whether Apple can keep riding the tiger regarding processors. I'm sure they did their due diligence, and the new processors will be powerful enough. But x86 isn't dead yet, AMD is capable and even Intel isn't dead - they still have a hand to play.
If Apple can keep at this, developers will flock in and we'll see nice stuff. If x86 (re)gains its momentum, Apple will be left behind, but they will be unlikely to switch back (unifying processors with the iPhone has a lot of advantages for Apple), and we end up with the bad future.
Apple took a chance today, we'll see whether it pays.
There's gonna be a weird moment soon where developing docker images for Armbian hardware (Pi and friends) is going to be more straightforward on a Mac than developing docker images for Intel servers.
I wonder what sort of involvement ARM server vendors could be offering during this period. I don't expect Apple 'needs' anything from them, but there might be some Docker work they could be jumping on.
I wonder what this will do to Electron. If the iOS apps are really 1:1 on macOS, then the need to maintain an Electron app will probably diminish. As long as they both support the same OS APIs, I can see devs who are willing to learn a new language (Swift) ditching Electron.
Apple had a list at the State of the Union of open source projects they had opened pull requests against to add ARM support. Electron was up there, as were Python 3, OpenJDK and Go, notably.
Electron already has support for ARM64, but no official releases yet, and it needs to be built from an x86 machine; no native compilation on ARM64 yet. I think with Apple moving to ARM, Google will add native ARM64 compilation for Chromium. This in turn will be picked up by Electron. Chromium has been running on ARM for a long time with Android and Chrome OS, so it has all the optimizations.
I'd wager for a significant amount of shops it's less about cross-platform support and more for being able to throw existing generalist or web frontend developers into native development. If a business wants to ship a desktop app quickly, it's hard to argue against electron because your existing teams can become productive without too much training.
Catalyst and SwiftUI are (were?) very immature technologies, not yet ideal for production software. I'd imagine that we're at least a year out from seeing real SwiftUI software in the wild.
Eh, Catalyst apps are as good or bad as the developer wants them to be. Voice Memos and "Find My" on the Mac are two fantastic Catalyst apps, and certainly better than Electron.
Anyways Catalyst won't even be relevant in the long term, once iPhone apps are written in SwiftUI
I often close the Twitter app and Apple's news and stock apps on my Mac mini because the performance is terrible. Hoping they've spent time tuning this more.
I am so much looking forward to using native Slack and Teams instead of their horrendous Electron apps, which don't even use GPU acceleration on iGPU MacBooks!
Hopefully MacOS support for x86 won't just be 3-5 years from whenever the new ARM models come out.
I have invested quite a bit into the Mac/Apple ecosystem. A big part of that reason was the longevity of the hardware, along with good resale values.
I hope they do right by their existing Mac customers. As of right now I don't have a strong reason to switch away.
I also hope that Apple does not blow this transition from a quality perspective. Their design choices and attention to detail have left quite a few things to be desired in the past few generations of hardware.
My suspicion is that new macOS releases will continue to support Intel-based Macs for at least three years after the last Intel-based Mac ships. New OS X releases only supported PowerPC for about two years after the last PowerPC Mac shipped, which is shorter than I'd expected, but Apple under Tim Cook has been a little less aggressive about ending support for old hardware, despite what people often seem to think. MacOS Big Sur supports hardware going back to 2013, and iOS 14 supports hardware back to 2015's iPhone 6s. (And iOS versions keep getting updates for a year after their replacements ship, while macOS versions tend to be updated for two.)
As for the quality, reply hazy, ask again later? The whole butterfly keyboard of laptops turned out to be a fiasco, and Apple's long-held tendency to push their industrial design right to the edge of thermal and material tolerances got kind of crazy-making in the last few years. Yet so far, I'm really liking my MacBook Air 2020, and the only thing I'd absolutely change about it if I were given a magic wand would be to add a third USB-C port on the right-hand side. I appreciated much of Jony Ive's design work, but I'm hopeful that with him gone, the drive to prioritize minimalism over functionality will be at least toned down.
> My suspicion is that new macOS releases will continue to support Intel-based Macs for at least three years after the last Intel-based Mac ships. New OS X releases only supported PowerPC for about two years after the last PowerPC Mac shipped, which is shorter than I'd expected
I agree with this. There's a big difference between Intel -> ARM and PPC -> Intel too: With PPC -> Intel, they were moving from their own special architecture towards the mainstream architecture that everybody else was using. In this case they're leaving an architecture that is likely to remain highly relevant outside of the Apple bubble for a long time to come.
I can't imagine how people who dropped serious cash for the 7,1 pro machines feel. I've used my 6,1 for 7 years, and I will use it until it is no longer receiving updates. So hopefully 10 years.
> I have invested quite a bit into the Mac/Apple ecosystem. A big part of that reason was the longevity of the hardware, along with good resale values.
Well, given that demand for x86 probably won't go away in 3-5 years and that Apple generally makes it impossible to downgrade OSX on new machines, dropping x86 support might actually increase the resale value of existing hardware.
I searched this thread for Thunderbolt and USB and found nothing.
The only way this can work is if they implement USB 4. That's earlier than expected, but not by much. USB 4 came out last September and everyone was guesstimating 18 months before the first systems ship with it. AMD shipping it with the Ryzen 4xxx laptop chips would've been a really big win, but apparently it was too early. And now, as with Thunderbolt, it seems Apple will be the first.
As far as I understand these things, you can connect a TB3 device to a full-featured USB 4 port (which Intel plans to market as TB4, just to increase confusion, as if there isn't enough of that around USB-C). Not necessarily the other way around: the USB 4 bus, while compatible with the TB3 bus, can carry USB packets; TB3 only carried PCIe and DP packets.
Areca has an amazing SAN-in-a-box product: a 24-bay 4U RAID box which allows as many as nine TB3 hosts to connect to it at the same time, and it's immensely popular with movie / TV show productions. It's less than 10K USD, which is an absolute bargain for the capabilities. Consider the 2K price of just one 40 Gbps Ethernet-to-TB3 adapter, which you'd need for a traditional SAN.
Devices like these make TB3 compatibility an absolute must for Macbooks.
You'll notice all the demos of macOS Big Sur were shown using a Pro Display XDR, connected to hardware that was later revealed to be the Mac mini + A12Z Developer Transition hardware. And during the Maya demo, it was a Pro Display XDR explicitly connected to the developer hardware.
Which is interesting, because the Pro Display XDR can only be driven over TB3 [1].
So Apple figured out the licensing for a TB3 controller chip to work in this design (which funny enough would be licensing that IP from Intel), or they are using USB4/TB4, or something else I'm not smart enough to think of.
Or they are bullshitting - even if the dev kit weren't ready to connect to the XDR, they wouldn't dare show it on anything else. Which monitor would they even show it on? A 10-year-old Cinema Display? A glossy plastic Acer? :)
There is one (as in, one) certified AMD motherboard, and I can't see Intel rushing to help Apple out with TB3 when Apple is ditching their CPUs. No, it will be USB 4.
Yes. That's because they do not compete directly with Intel. It's certainly a thawing of matters that an extremely high-end AMD motherboard that will sell about a thousand copies altogether got a cert. Good job, Intel. I doubt, however, they will give Apple a helping hand in moving away from their CPUs. Of course I could be wrong. But why would Apple even bother with this when they can just throw a giant wad of cash at ASMedia to accelerate their USB4 development a tiny bit, and then they are free of Intel, forever? To recap, ASMedia said in January they are working on a USB4 chip to ship this year.
I have to wonder if Apple even needs to worry about TB on the Apple Silicon computers moving forward.
Now that they're going down the path towards controlling everything in their computer hardware, I can see Apple creating a new proprietary high performance interface to replace TB and use USB for everything else that requires interoperability.
USB 3.1 Gen 2 is 10 Gbps, and USB and DisplayPort traffic don't mix: out of the four high-speed lanes, DisplayPort gets either two or all four. Full USB 4, by contrast, is a 40 Gbps bus with mixed PCI Express, DisplayPort and USB packets. It is not a small change.
Rosetta 2 - the interesting bit was that it is going to pre-translate binaries instead of translating at runtime. The implication for actual VM emulation is that Rosetta won't work for runtime environments like OS emulation. They touched on it briefly with the emulation-technologies bit, but it looks like that will be separate from, and likely much less performant than, Rosetta.
They explicitly said that it can perform both static and dynamic translation for JITs. I wouldn't be surprised if there is substantial hardware support too.
I was wondering why they mentioned virtualization. Let's see what technology they use, and whether it is going to be proprietary or something like Xen.
I meant that it supports dynamic translation in order to support JITs such as turbofan.
But to answer your question, yes a JIT can be static. JIT just means that the compilation happens at runtime, and "static" in this context means that the compilation is happening at the very start of runtime. You could imagine a JIT that compiles all bytecode to native code immediately on launch. The reason this technique is not used often is that it tends to lead to long startup times. But if the result is cached somewhere then it might be acceptable.
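To make the idea concrete, here's a toy sketch in Python of "compile everything up front at launch and cache it"; it only illustrates eager compilation plus caching (not native code generation), and the little programs in it are made up:

    # Toy illustration: compile every "program" once at startup and cache the
    # result, so later calls pay only a dictionary lookup plus execution.
    sources = {
        "square": "result = x * x",
        "double": "result = x + x",
    }

    # Eager ("static") step: all compilation cost is paid here, at launch.
    code_cache = {name: compile(src, name, "exec") for name, src in sources.items()}

    def run(name, x):
        env = {"x": x}
        exec(code_cache[name], env)   # no compilation here, only execution
        return env["result"]

    print(run("square", 7))   # 49
    print(run("double", 21))  # 42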
The emulation thing seemed to me to be Hypervisor.framework for ARM, as they showed Linux and Docker running (which both run on ARM), but not Windows (which an average user may be more interested in).
I wouldn't be so sure, there are a fair number of people who want a machine that will run x86 for various reasons. Windows support/ Linux support. Even considering how impressive x86 VMs looked in the demo, lots of people will prefer using intel silicon for guaranteed compatibility.
I could see the "final Intel Macs" having a value to folks; somewhat similar to the 2015 Macbook Pros which many considered the "last good Macbooks" before Apple fumbled things with the 2016-onward Macs and their gimpy keyboards.
In this case, I don't think the first ARM Macs will have undesirable hardware ala the first few years of Touchbar Macs, but there will be some straggler software whose ARM ports will be delayed or will never happen. For those who depend upon that software, the final Intel Macs will be invaluable.
> Yeah, the fact that they didn't show off Windows, but instead Debian of all things, was very telling.
I suspect developers running Linux VMs are far more common than developers running Windows VMs. Likely by an order of magnitude. Web developers want Linux VMs; Windows developers have Windows laptops.
You'd be surprised. It was the only option for those writing native apps who wanted one platform that could legally run all of their tooling if they shipped Mac/Windows/Linux.
And given the amount of time they spent talking about how large a part native apps play in the transition, that's an extremely strategically important segment for them.
Additionally, there's the Android/iOS crowd in the same boat, where emulation of non x86 in Android dev is pretty limited (but I can see that being rectified with the newer virtualization extensions).
I don't think there's a way to have a licensed Windows ARM copy right now on arbitrary hardware. I thought they only provided OEM versions on certified hardware.
The end goal isn't just using Windows for Windows sake, the reason people use Windows on mac hardware is to get access to apps that run on Windows. And most of those apps still run like garbage (if at all) on the ARM versions of Windows.
They showed Maya (an x86 binary) running on their chip. So there is some ability to run Intel binaries on Apple silicon through at least two options: emulation, and something that sounds like "JIT interpretation", for lack of a better word.
I think that is a slightly different use case, though. That demo was an x86 binary running on ARM MacOS via a translation layer. So if there is a MacOS x86 version of the app you want to run, that might be an option.
But I know a lot of people still run Windows because they want the Windows version of an app, either because it isn't available at all on Mac or just because the Windows version runs better (Excel was a classic example of this for a long time, might still be). In that case, I don't know if that same translation layer will have the same performance (if it can run at all outside of MacOS) when running an entirely different OS.
FWIW I remember running PowerPC binaries on Intel macs via Rosetta was pretty painless. They mentioned explicit support for linux/windows emulation so they know it’s an important use case.
I think this is called "transpiling" -- a version of compiling that's mainly translating from another architecture. And it didn't sound from their description like it was JIT -- it sounded like it would do the transpile when you first installed it (or maybe first ran it?) and keep the results.
Transpiling (as much as I hate that word, because the more you know about compilers, the more meaningless it is) is about source-to-source translation, not binary translation.
And they have a first-pass AOT translation, with a JIT fallback from the sounds of it, to support JITs like browsers, Node, and Java.
I found the same problem when I tried running off Pi OS 64-bit for a day—almost every app that _did_ have a Linux binary available was only able to run under x86_64, not on arm64... More here: https://www.jeffgeerling.com/blog/2020/i-replaced-my-macbook...
Zoom, BlueJeans, Dropbox: for pretty much all the popular apps I used where I could find a Linux version for my Dell laptop, I couldn't find a way (though I got close, at least with Dropbox) to run them on an ARM64 CPU.
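If you ever need to check which CPU a given Linux binary was actually built for, a small sketch like this works (it just reads the ELF header's e_machine field; it assumes a little-endian ELF, and the path is only an example):

    # Minimal sketch: read the ELF header's e_machine field to see which CPU a
    # Linux binary targets (0x3E = x86-64, 0xB7 = AArch64, 0x28 = 32-bit ARM).
    import struct

    def elf_machine(path):
        with open(path, "rb") as f:
            header = f.read(20)
        if header[:4] != b"\x7fELF":
            return "not an ELF binary"
        (e_machine,) = struct.unpack_from("<H", header, 18)  # offset 18 = e_machine
        return {0x3E: "x86-64", 0xB7: "AArch64", 0x28: "ARM (32-bit)"}.get(
            e_machine, hex(e_machine))

    print(elf_machine("/bin/ls"))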
Have you tried box86? https://github.com/ptitSeb/box86 Lets you run x86 (not x86_64, though) programs on ARM Linux. It does a neat thing where calls to x86 system libraries are redirected to native ARM versions of those libraries rather than emulated, for better performance.
For Dropbox, you could quite trivially get by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem.
...and we need new, additional hardware for CI/CD that's dedicated just to Apple OSes. Pretty sure we won't be able to use the 5 unused Raspberry Pis we have.
Like, a joke version of Windows, yes. As a developer, if my machine can't run Visual Studio then it's not interesting to me at all. I can see it being acceptable to people who work predominantly with tools that have an ARM binary though.
VSCodium works great on ARM64, I was testing it last week when I tried doing some dev work from a Raspberry Pi running the beta 64-bit OS: https://github.com/VSCodium/vscodium
Visual Studio Code has an ARM build now [0]. I don't know what regular VS is written in, but assuming it's .NET Framework, I expect a build will show up as the framework itself improves on ARM.
I know that in 2010 the Visual Studio UI was rewritten in WPF/C#; I don't know which UI system they were using in 2019. It is documented in the wiki entry and a few blog posts MS made.
Unless the new ARM chips can handle VirtualBox well, I’ll pick up the last intel model. I have to work with old ASP apps in IE VMs side-by-side with stuff on my Mac every day.
I know, that’s why I run it in compatibility mode in a VM. Just because it’s dead, doesn’t mean there aren’t organizations that have built mission critical web apps that only work in IE and those of us who have the burden of dealing with them.
Dunno. The free Windows VMs I have saved can be reloaded in VirtualBox and used pretty much forever. They can’t run any serious software very well (on my old MacBook anyway), but I’m just using them to support some legacy web apps.
I am – right now even – but I agree it's become impractical.
This announcement is the final nail in the coffin I suppose. I hope some company will come out with great quality, well-designed, minimal laptops. That run Linux perfectly.
If it ran Linux perfectly, it could only do so if the manufacturer maintained its own Linux distribution. Laptop hardware and power management is just too complicated these days for anything else to work well.
If the PowerPC to Intel transition was any indication, they'll do just fine. Most consumers won't care, or will want to wait until Apple works out the kinks.
To put it a different way: the initial sales of the ARM-based chips won't be as strong as you might think, or as widespread.
Having both Intel and ARM based systems in the market will even out the sales.
History is repeating itself, it's worth reviewing the previous transitions before making assumptions.
With the powerpc to x86 transition Apple was moving towards where everyone else already was. It was one less thing developers had to worry about when supporting Mac. With this migration it's the opposite, and MacOS is still a very small slice of the desktop/laptop market.
I remember Apple's m68k to PowerPC transition. Platform changes were a much bigger deal back then because it determined what kinds of apps you could run, but even so it went without a hitch. We lived with fat binaries for a while but otherwise it wasn't a big deal.
Apple's focus has always been the consumer market, and for most consumers, they're not likely to notice the platform change.
One might say, Apple gained a huge developer demographic when it moved to x64 from PowerPC, won't this matter to them? But most of those developers were creative types that don't work at the binary level anyway. If you're doing front-end web dev on VS Code and almost never fire up gcc/clang, your life isn't going to be impacted significantly.
Those whose lives were impacted (e.g. scientific computing folks who struggled to get certain Linux-specific software to compile on what was a BSD-ish platform) have probably moved off the Apple platform and on to Intel Linux years ago or at least they're likely to be running a Linux VM on their Macbook Pros. (homebrew doesn't really cut it)
Also unlike back in the day when most people were still running desktop apps, so much of what we do on a daily basis is on the browser nowadays. Calendar? Cloud-based. Email? Cloud-based. Word processor/spreadsheet/presentation? Cloud-based -- and Microsoft Office is being ported to ARM. We're living in much more platform-agnostic world than we were even 10 years ago.
I expect the x64 to ARM transition to be seamless to most people who are currently on the macOS platform.
A bunch of Python wheels might need to be recompiled though.
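As a rough illustration, you can see which wheel platform tags a given interpreter will accept; this little sketch assumes the third-party 'packaging' library is installed:

    # Minimal sketch: print the machine type and a few of the wheel tags pip
    # would try to match here (e.g. macosx_*_arm64 vs macosx_*_x86_64).
    import platform
    from packaging.tags import sys_tags  # assumes 'packaging' is installed

    print(platform.machine())         # 'arm64' on Apple silicon, 'x86_64' on Intel
    for tag in list(sys_tags())[:5]:  # tags used to pick compatible wheels
        print(tag)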
> I expect the x64 to ARM transition to be seamless to most people who are currently on the macOS platform.
I completely agree with you IF Apple contains this to just the MacBook market. If they push this to the iMac Pro / Mac Pro too, though? Or even the MacBook Pro? That's a different story. For developers it probably won't matter much, but for users currently relying on things like Pro Tools to make a living? There's a good chance this won't be a friendly transition for them in the short term.
The pro tools (like Pro Tools) are usually the tools that are being ported first. It was the same with all the other transitions.
If not, there will be Universal 2 binaries carrying both ARM and Intel code for a while (a rough sketch of how that looks from the build side follows below), plus an emulation mode for legacy binaries. Underoptimized at first? Maybe, but they work well enough and performance catches up eventually -- the transition is over 2 years. Same situation as before. Software that works will continue to work on existing and new hardware for a while. Nothing is going to stop working the moment the new processors come out.
We've seen all this happen before (m68k - PowerPC - Intel). And it's really not a problem.
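To make the fat-binary point concrete, here's a rough sketch of what it looks like from a developer's seat, assuming Apple's clang keeps the multi-arch flags it already supports today (the exact Universal 2 toolchain details aren't fully public yet; file names and commands here are just examples):

    /* hello.c -- trivial program that reports which slice of a
       universal (fat) binary it is actually running as. */
    #include <stdio.h>

    int main(void) {
    #if defined(__aarch64__) || defined(__arm64__)
        puts("running the arm64 slice");
    #elif defined(__x86_64__)
        puts("running the x86_64 slice");
    #else
        puts("running some other architecture");
    #endif
        return 0;
    }

    /* Hypothetical build, assuming today's flags carry over:
         clang -arch x86_64 -arch arm64 -o hello hello.c
         lipo -archs hello     # should list: x86_64 arm64
       The OS picks the native slice; Rosetta-style translation only
       kicks in if no native slice is present. */

Same idea as the m68k and PPC fat binaries: one file on disk, the right code path chosen at launch.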
It depends on your perception of "where everyone else already was" in this context.
Is the future more like Intel desktops or more like ARM-based mobile devices? If you think the latter, then Apple is definitely moving towards where people will be (insert Gretzky quote: "skate to where the puck is going to be, not to where it has been").
I’m pretty sure their point is that last time, the incentives/burden were more aligned between Apple and software producers, because it meant devs had one LESS architecture they had to care about. This time they will have one MORE, and it could very well be an architecture they have no familiarity with. And Apple isn’t exactly known for their technical documentation of new platforms.
I don't know that it's one more platform. I mean, sure, it's not exactly an iPhone, a Pine 64, a Raspberry Pi, or a tablet. It is, however, a very broadly supported processor family. More and more support is coming out for ARM all the time. If you're targeting desktop and anything smaller (phones, tablets, watches, SBCs, smart home devices), then that something smaller, if it's bigger than an Arduino, is probably ARM.
I think quite a few developers would be happy to use the same basic software stack on a phone app and a laptop app.
If I'm making a professional desktop app like Blender or whatever, then right now I only need to worry about x86 SIMD optimizations & behaviors. I don't care about iPhone/iPad/Raspberry Pi performance. I may care that it works, because why not? But I don't care about optimizing for it.
Now, though, suddenly I need to figure out how NEON works if I want to continue to support professional users on MacOS. I need to increase my hardware costs to ensure adequate test coverage. I need to use something other than Intel's glorious VTune for profiling & optimization. When I'm step-debugging my C++ or assembly tight-loops, I need to know twice as much assembly as I do today. I need to learn ARM's memory model & cache behaviors.
If I'm just writing some silly native app that could have been a website as is trendy then yeah this is all no big deal, who cares? But if you're really pushing professional app boundaries, where time is money? That's a different story.
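To put some flesh on that, here's a minimal sketch (a toy function of my own, not from any real product) of the per-ISA intrinsics duplication this adds for anyone hand-tuning hot loops:

    #include <stddef.h>

    #if defined(__x86_64__)
      #include <immintrin.h>   /* SSE/AVX intrinsics */
    #elif defined(__ARM_NEON)
      #include <arm_neon.h>    /* NEON intrinsics */
    #endif

    /* Add two float buffers. The scalar tail is shared; the vector
       body now has to exist (and be profiled and tested) twice. */
    void add_f32(const float *a, const float *b, float *out, size_t n) {
        size_t i = 0;
    #if defined(__x86_64__)
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);     /* 4-wide unaligned load */
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
        }
    #elif defined(__ARM_NEON)
        for (; i + 4 <= n; i += 4) {
            float32x4_t va = vld1q_f32(a + i);   /* 4-wide NEON load */
            float32x4_t vb = vld1q_f32(b + i);
            vst1q_f32(out + i, vaddq_f32(va, vb));
        }
    #endif
        for (; i < n; i++)                        /* scalar fallback/tail */
            out[i] = a[i] + b[i];
    }

Multiply that by every kernel in a large codebase, plus the differences in shuffles, FMA behavior and profiling tools, and the extra test matrix starts to look expensive.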
Many things in the world of software are neither webapps nor contain a lot of highly optimized hand-written assembly. That's the worst false dichotomy I've read so far today, and I read political Twitter.
Just because your use case is different from those who have C or C++ code and a good optimizing compiler doesn't mean you need to disparage every other type of developer on the planet.
There are a lot of people who write native ARM for phones and tablets for performance purposes, too. Some of them get down into ARM assembly. If they port to Intel and want to do assembly-level things there, they need to learn twice as much assembly coming the other direction.
I was waiting for Craig to say that every Mac with a T-series chip could be used as a dev platform, since they're pretty strong on their own. Oh well. Maybe to smooth things over they'll ship an A-series co-processor with those Macs.
Apple released a bunch of PPC Macs after they announced the switch to Intel, so this isn't unprecedented.
I'm in the market for a new laptop, and if I knew how long the Intel macs would be supported, I could see myself picking one up. Unfortunately, nothing was said about that so I guess I'm holding off.
Disagree. He said Intel Macs would be supported for "years to come" and that they had more in the pipeline. They'll be supported for significantly longer than 2 years.
On the contrary, I'm tempted to buy the latest Intel Macbook Pro so I can stick with Intel for a few more years. Whether or not I'll end up doing so depends a lot on how good the new ARM-based laptops are in terms of performance (esp. wrt virtualizing x86/x64 VMs). But the current lineup, however imperfect, is mature and a known quantity, in contrast to the unknowns surrounding the first generation of machines with the new architecture.
They said the same with the Intel transition, and also projected a two year timeline. The transition was over much sooner than that and there weren't any new PPC products to speak of, as far as I remember. And of course, PPC Macs only got one more version of OS X before support for them was dropped.
They're just trying not to Osbourne themselves too badly. They want you to keep buying Intel Macs, but who's to say how long they'll keep support.
PPC was complicated by coming after years of failing to have decent chips actually ship. In this case, Intel and AMD are both still heavily invested in pushing x86 forward and Apple doesn’t have to do all of the work maintaining things like compilers or the JVM so it’d be a lot easier to keep that option alive if there was a good enough reason.
And I reckon those of us who just bought a shiny new Intel Mac in the past few months might be questioning our decision. Should have held on to that 2010 MBP one more year?
Buying a Mac right now, need to replace my EOL Mac Pro. I definitely won't be getting a Pro now. Probably just an iMac with more RAM and SSD instead of Fusion. I really wish I knew when they were updating their iMac and Mac Pro lines.
Typing this on a 2010 MBP, bought a new MBP this weekend (was later canceled by the seller b/c of inventory issues) and thinking about waiting another year
Maybe it depends on the pricing for the x86 vs. ARM macs. If the ARM machines are significantly cheaper the x86 sales will probably suffer quite a bit. As someone else in this thread said, most people have no idea what processor is in their computer. They just care that it's cheaper and runs all of their stuff (browsers, photos, messages, etc.)
The Mac Pro (and possibly the iMac Pro) will still be Intel for a while. I expect the portable lineups to get the ARM treatment first (as they are the most similar with the iPhone/iPad in terms of power requirements), and then scale it up for actual high-power desktop CPUs.
Apple could continue to ship arm/x86 machines in parallel for years if the fat binaries+emulation work well. If say the desktop mac pro maintains an edge somehow, or Intel comes back from their current slump in 5 years this could go on indefinitely.
(That probably depends entirely on whether Intel actually designs a desktop chip for the first time in nearly two decades, rather than continuing down their current path of selling server chips as poorly designed desktops.)
Do you think most people know who makes the CPU inside their Apple machine? Go into an Apple store and take a survey. I almost guarantee no one will know.
Remember the MacBook Pro with a DVD drive lived on for years after all its siblings were discontinued. We can expect some x86 support to remain available for years after the last new ARM machine is launched.
I think that's the mid-2012 MBP? They were still selling them in 2015, I bought one of the last ones before it was pulled from sale. It remained in the lineup because it continued to be a best-seller.
Apple was already grumpy about servicing it by 2018 or so though, even with AppleCare. (I've switched to Windows now.)
I think it could be tied to large contracts and ensured availability. I suspect there will be an x86-based Mac available well past the two year transition that was announced.
What is really sad is that I don't think the majority of Apple consumers really understand what this means for the Intel macs in the long run. I fear a number of people will buy them not realizing they have a very limited lifespan.
> […] not realizing they have a very limited lifespan.
It may depend on one's definition of "limited": Rosetta was first released with Mac OS X 10.4 in 2005, [1][2] and was last available in Mac OS X 10.6, which was first released in 2009, [3] but whose last update was in 2011.
Six years of transitional support is not unreasonable.
They won't, necessarily. Fat binaries worked ok the last time around. Sure, Apple could stop making updates available for Intel Macs, but they can stop making updates available for any older model of Mac if they want to.
If those devices get 4-5 years' worth of use, then I think most people would be fine with the purchase. After 4-5 years most machines need upgrading anyway, for either performance or quality-of-life features.
Part of the motivation for the transition is the hope that progress will be much faster over the next 6 years. If that happens, those Intel Macs may age out sooner.
The Mac is the Mac. Apple Silicon doesn’t change that. You can set boot args, disable security, and other things the Mac has historically supported. In fact these things are required to develop 3rd-party kernel extensions which are still supported on Apple Silicon.
But will Apple let you run iOS apps in an environment where you can break iOS security guarantees? I suspect that this is the beginning of the end for sideloaded apps.
The Platform State of the Union presentation made a particularly big deal of “this is the Mac and the things you expect from the Mac will function as they always did”, and demonstrating things like showing the iOS app container file and moving it around.
I don't know how much clearer things can be. The Mac remains a Mac. It can run software from outside the App Store. That applies to Apple Silicon. This was specifically called out in the SOTU stream.
They are going to make the Apple ecosystem even more closed and turn the desktop into a consumer platform like the iPhone. Steve Jobs' dream coming true. I have never seen Linux running great on an ARM consumer device (like Android tablets and smartphones) except ChromeOS devices. The big issues are unavailable graphics and other drivers and the lack of ACPI (a devicetree has to be provided separately). Some attempts were able to get Linux onto a framebuffer with some X patches, but system upgrades required special care, and having just a framebuffer is slow and useless. And that is what will be achievable at most. ARM means freedom for manufacturers, not for consumers/end users! I suspect Apple will also put in multiple proprietary accelerators and specialized chips to make things faster by offloading some work, which won't be accessible from Linux, of course.
It is true that the boot process on ARM is more entangled with the platform than on x86. In contrast to x86, where you have to take additional steps to lock it down, on ARM additional work by the system integrator is required to keep it open/customizable. Not sure if we can expect Apple (or anyone else who is not interested in this business case) to invest in that (if they don't explicitly state that they will).
Edit: This touches a lot of areas, like auto-detection and configuration of hardware and sub-components (either connected or embedded into the SoC), or security-specific areas like ARM TrustZone, which has vendor-specific implementations and secure boot procedures.
At 1:34:04 of the presentation Federighi is opening up Xcode on the ARM Mac, and says, "we're using Xcode, like all our developers will" which might be construed as app store only.
Given that virtually everyone developing Swift or Objective-C applications for the Mac is using Xcode whether or not they're publishing to the App Store, that just seems to be a real leap to take, though.
It certainly might be nothing, but it's hard to imagine they would say "all of their developers" are using Xcode to describe the current and diverse IDE landscape. A lot of developers are currently using multi-platform game engines, open technologies like HTML, multi-platform frameworks etc.
It really isn't, though. There are a lot of IDEs and text editors out there, sure (I seem to collect them, myself). But if you're specifically writing Mac apps in Objective-C or Swift, the entire development chain is tied to Xcode, and all the developers writing apps for Mac or iOS that I know of -- not "developers writing apps on the Mac," but the specific subset of people writing apps for the Mac or iOS devices -- are using Xcode. The only other IDE I know of that's remotely competitive on this front is JetBrains' AppCode, but it has a tiny market share in comparison, and people developing native Mac apps in other languages are an even smaller group.
(Also, it's not really like Apple to acknowledge competition in this space, especially anyone using non-native toolkits. They're not going to mention people writing Electron apps in the WWDC keynote, right?)
The question is will I be able to install Debian on it? Is the bootloader going to be locked like iPhones? Is the ISA going to be different from Aarch64 on commodity ARM boards? Will the only supported compiler be Apple LLVM?
Running Linux is already not very straightforward on modern x86 Apple hardware; I would be extremely surprised to see ARM Apple hardware running Linux anytime soon.
It'll happen, eventually, when the open source community reverse engineers them and figures out how to "jailbreak" them.
Then we still have to deal with drivers; I don't see mesa supporting the A12 GPU anytime soon, for instance. I think this is the end of the line for Linux on Macs.
I think your expectations for what emulation is capable of are set a bit high. The fact that it is able to emulate a game that's a few years old at a decent frame rate is more than acceptable. You didn't see Microsoft demoing games for their Surface on ARM systems at all, and for good reason.
I mean, if I had things my way they wouldn't be switching to ARM at all and emulation wouldn't be necessary, so I don't think it's wrong to be skeptical.
> You didn't see Microsoft demoing games for their Surface on ARM systems at all and for good reason.
Those were also lower-end computers with poor GPUs.
> I'm going to assume that a dedicated GPU was being used for Tomb Raider—they would have said something otherwise.
They said exactly what SoC they were using, and it's not known to have spare PCIe lanes lying unused in existing products. Apple pretty much just demoed an x86 game running on an overclocked iPad Pro.
They only said the demos were running off Apple silicon, not that they were running off a Mac Mini DTK machine.
They probably have other systems more akin to Mac Pros that they use internally.
It's not like Apple can't change what connectors are available on the back of the mac Mini. The form factor may not have changed, but the ports available have in the past releases.
Don't be surprised if there is no Thunderbolt 3 at all, but just USB-C.
It may be that by the time these things are ready for an actual release they can be USB4, which is USB-C + Thunderbolt technology, but no longer Intel exclusive.
> Those were also lower-end computers with poor GPUs.
Were you under the impression this $500 developer kit shipping with an iPad Pro CPU/GPU is a high-end computer? While it's a decent chip, it's essentially the same silicon as the prior generation iPad Pro's A12X, with one additional GPU core enabled.
Shadow of the Tomb Raider is a PS4/XB1 game, not a PS360 game.
---
Edit 2: Please disregard my first edit, below—I was right the first time, then I got the games mixed up.
Edit 1: Oh wait, I forgot, SotTR actually did have an Xbox 360 port! It was one of the last big titles to have one. I think what they showed on screen looked better than the 360 version though, although it's admittedly hard to tell on a stream.
Is the target to match the performance of ten year old hardware? Then sure, that's matched. But it's not impressive. AMD FX CPUs have better performance than that, by a mile.
Yeah, but this is their existing CPU/GPU designed to fit into the constraints of the iPad form factor. They'll likely have something much more powerful for consumer hardware.
The iPad CPU/GPU is already thermally limited. An unconstrained A12Z is right at the TDP of a laptop chip, at ~25-30W (5W per big core per AnandTech, 4 cores, plus GPU and I/O; it's actually quite a generous estimate).
An unlocked A12Z is likely all you can get away with in a laptop, and inferior to SOTA x86 low power CPUs.
If the Tomb Raider game was actually running on a A12Z system (without any external GPU - note that this is the same CPU/GPU on the iPad Pro!), then that demo is actually really impressive, even if the game settings are set to low-quality and the framerate is a bit choppy.
It’s not about the game, it’s about running performance-dependent code written for x86 on an ARM chip. A lot harder to fake a stable frame rate in a game than in Photoshop.
On the other hand, it probably doesn't use much CPU just because it's single threaded. No game ever uses more than 10% of my 16-thread CPU. But that also means that emulation could seriously tank single-thread performance and ruin the game.
It might be the other way around. They might have gotten around the big issue with running arbitrary x86 code on ARM (the much weaker memory model) by pinning all x86 threads in a process to a single core. Which would be unfortunate.
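For anyone wondering what the memory-model issue actually looks like, here's the classic message-passing shape (a toy example of mine, nothing Apple has published): code that happens to be correct under x86's TSO ordering can misbehave on ARM unless the translator inserts barriers or confines the threads.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    /* Producer writes data, then a flag; consumer spins on the flag,
       then reads the data. With relaxed ordering this "works" on x86
       because TSO won't reorder the two stores, but ARM may make the
       flag visible before the payload. (Strictly, the plain access to
       `payload` is a data race at the C level; it's only here to show
       what the hardware allows.) */

    static int payload = 0;
    static atomic_int ready = 0;

    static void *producer(void *arg) {
        (void)arg;
        payload = 42;                                             /* store #1 */
        atomic_store_explicit(&ready, 1, memory_order_relaxed);   /* store #2 */
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        while (!atomic_load_explicit(&ready, memory_order_relaxed))
            ;  /* spin until the flag is set */
        /* On x86 this reliably prints 42; on ARM it is allowed to print 0
           unless release/acquire ordering (or a fence) is used. */
        printf("payload = %d\n", payload);
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

The other obvious fixes are inserting barriers at translation time (slow) or having the silicon offer a stronger, TSO-like ordering mode for translated code; which approach Apple actually takes wasn't spelled out in the keynote.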
Again, people need to dial back their expectations here. You aren't going to see cutting edge games running well through emulation. There is a reason Apple made such a huge emphasis on native apps, native is always going to run much faster.
They didn't demo gaming to suggest this is a great machine for gaming, they demoed it to show that it was possible at all. The previous version of Rosetta during the PowerPC->Intel transition was not known for performance.
If gaming is important to you and you want a Mac then you want an Intel Mac or whatever games are released for Mac ARM. Emulated games are not going to compete with native.
It was very odd seeing Lara walk through an area with dappled bright light, and her body remain uniformly lit. It may be that the game has a very basic lighting engine though.
It is like many triple A games in that it has a wide range of settings, all the way from full potato to RTX (it was ironically one of the first games to support that).
But does it run better than on the current intel mac mini with integrated graphics? All it needs to do is beat intel in comparable circumstances.
It doesn't really matter what the graphics performance is, on high end macs they'll still ship a dedicated GPU from AMD. What matters is that the game is GPU-limited instead of CPU-limited.
> But does it run better than on the current intel mac mini with integrated graphics? All it needs to do is beat intel in comparable circumstances.
Having gone and checked, no. Not even close.
(Nor would it be plausible to expect to. But it's clear Apple have made a choice here, and that is that if you're a user who wants legacy software or desktop gaming, Apple do not care about you compared to their margins. It's that simple.)
The maxed out mac mini cpu is a 6 core 3.2Ghz i7 with turbo boost to 4.6Ghz. I wonder if they can beat that with a newly ARM optimized MacOS? The current i7 has tons of power still as an 8th gen Intel cpu.
So, if I install numpy via conda on a Mac now, it's backed by Intel MKL and is thus amazingly fast. What will it be replaced with? Has anyone at Apple thought about use cases like this?..
I bet that they already have some low-level math library that uses ARM NEON intrinsics; you would definitely need them to port performance-demanding apps like Final Cut Pro / Maya / Photoshop / etc.
numpy is backed by BLAS and LAPACK. It just happens that on your macOS system those libraries are provided by MKL. There are other implementations of BLAS and LAPACK out there.
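To illustrate the interface-vs-implementation split: numpy's heavy lifting bottoms out in the standard CBLAS/LAPACK entry points, and any conforming backend can supply them; the backend is chosen at build/link time, not in numpy's own code. A rough sketch in C (the link lines are examples, not the exact commands Apple or conda use):

    #include <stdio.h>
    #include <cblas.h>  /* standard CBLAS interface; MKL, OpenBLAS and
                           Apple's Accelerate all implement it (with
                           Accelerate you'd typically include
                           <Accelerate/Accelerate.h> instead) */

    int main(void) {
        /* C = 1.0 * A * B + 0.0 * C for two 2x2 row-major matrices */
        double A[4] = {1, 2, 3, 4};
        double B[4] = {5, 6, 7, 8};
        double C[4] = {0, 0, 0, 0};

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2,        /* M, N, K       */
                    1.0, A, 2,      /* alpha, A, lda */
                    B, 2,           /* B, ldb        */
                    0.0, C, 2);     /* beta, C, ldc  */

        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 19 22 / 43 50 */
        return 0;
    }

    /* Example link lines, depending on which backend is installed:
         cc gemm.c -lopenblas
         cc gemm.c -framework Accelerate   (macOS)
       The calling code doesn't change; only the library behind it does,
       which is exactly numpy's situation. */

So the question for ARM Macs is less "will numpy work" and more "which BLAS will it be linked against, and how fast is that one."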
In the Platforms State of the Union, they specifically listed Numpy as one of the open source projects that they built for Apple Silicon (along with Python 3 and others). Go to 20 minutes and 35 seconds on that video.
This isn't just about hardware. It's about software, and an investment in making libraries that are fully optimized for the hardware. The problem here is the long tail of needs is very, very long.
SciPy did support Accelerate, but they dropped it in 2018 because Apple’s implementation of LAPACK was so out of date. Apple have been crap at updating this stuff which is why people don’t use it.
I'm not a developer myself, but when I engage with developers on various OSS projects I always feel Windows is treated as a second-class citizen in almost every aspect (smaller projects often have no tests, toolchain setup, or tutorials for Windows, or all of the above, for starters).
Also Apple is committing to a long term pipeline of Intel chips. This makes sense since so many apps will take years to transition. At the same time, they're willing to put their own chips side by side with Intel and they believe people will voluntarily switch. I'm looking forward to the benchmarks.
If the performance in the demo is accurate, and considering that the actual Mac chips will have a lot more power headroom, silicon die area, and thermal headroom than the A12Z, Intel will be swatted aside.
I was interested by the GPU that was mentioned in the slide, does this mean that Apple is thinking that they have the expertise to take on Nvidia and AMD in that space?
Whether or not they think that or they can, they have been developing GPUs for iOS devices. It'll be interesting to see what they do for their desktops.
I agree. I know they're currently using AMD GPUs, but how much longer will they keep using AMD if they believe that their own in house chips can do a better job? Also, I'm curious as to whether they would try to sell their GPUs (iGPU?) to gamers who traditionally don't stay within the Mac ecosystem at all. Having a 3rd major player in the GPU space would be wild.
The advantage Apple has is that Nvidia and AMD and Intel have to actually make money from their chips. Apple can break even or make a loss on the chips themselves.
I don't mean in pricing, but design. Other companies have to design chips that make sense in the market. Apple makes it for themselves, and they can make chips that would not make financial sense, but perform great.
And charge you a bunch for the whole package, of course.
Given the lackluster support Apple has for the AMD GPUs they currently ship I bet it will feel like an upgrade. Plus a lot of the product line uses the integrated Intel graphics so beating that isn't much of a challenge.
When I was a young engineer, I thought management didn't matter. Man, has the Apple/AMD/Intel saga over the last 10 years proven me wrong. 10 years ago Intel had a decisive lead in talent/architecture/process. Now it hasn't been able to ship a whole new architecture or process since 2015 and is behind both Apple and AMD. Wow.
Is this the end of Linux as the host OS on Mac hardware? It's been really difficult for many years anyway, so it essentially hasn't been practical for a long time. I know there are plenty of working ARM builds of Linux, so if the new Mac chips are ARM compliant then the ARM Linux builds should work. I would think Apple has their own proprietary extensions or something though, otherwise why make their own? Just very high manufacturing standards?
> so if the new Mac chips are ARM compliant then the ARM linux builds should work
That's not how it works. ARM is just the processor architecture. While Linux may very well support the processor, it's unlikely to support the rest of the hardware well, if at all.
Yeah, but driver support has been a challenge for years on Mac hardware (as I implied, poorly), and there are ways to run Linux if you tolerate some absurdities resulting from poor hardware support (like the fans running at 100% all the time, the battery draining from 100% to 0% in 60 minutes, no wifi, etc.). There are also workarounds for some things like Broadcom, where you can pull the firmware out of the Windows driver. So you don't necessarily need the Linux kernel to work perfectly with all the hardware.
But my question/point was: is there likely to be a total show stopper now that you can't just tolerate?
I believe Apple won't provide its own GPU drivers for Linux, and an OSS driver looks hard to implement (the GPU is built for Metal, not OpenGL). So Linux on Mac hardware looks hard unless it's a Radeon-equipped Mac.
Intel has been giving developers free libraries for many years which use code specifically optimized for Intel processors. It seems likely that any app that uses one of these libraries will break. Even if Apple's emulator or compiler can work around these optimizations, Intel might release new versions of the libraries that will not work on an ARM processor.
Also, even ordinary changes to MacOS tend to break music apps and plugins. I would expect a change to a new CPU architecture to cause havoc there.
Any app that uses assembly code optimizations that use unique Intel instructions seems likely to break as well. Maybe Apple has perfectly emulated all Intel CPU instructions, but that seems very complex and how often does new complex software work perfectly? Having to buy/upgrade apps to run on ARM may be necessary. And will drivers for USB and thunderbolt peripherals all just work on ARM?
I'm personally much more worried about the power Apple will gain from switching their hardware to ARM. This gives them such immense power, allowing them to create an even stronger walled garden environment. For all we know they could force the system to only run Mac OS?
Is this goodbye to choosing your own operating system now? Are the only people Apple cares about web and app developers? Are kernel engineers forgotten about?
What about low-level hackers who like to tinker with their hardware or poke around in UEFI?
Nothing here prevents you from doing this in a VM.
The problem with being able to tinker at will with a consumer system is that you have no way of choosing who is tinkering with your system.
If as a vendor you take the security and privacy of your customers' information seriously, you cannot deliver an "open" and "tinkerable" system. The two are fundamentally at odds.
This has been sufficiently obvious for long enough now that anyone still complaining about how they can't run their own kernel on their desktop is almost certainly a surveillance-state troll.
Not sure what the problem is. Apple is merely transitioning to a new CPU architecture. If you're a low level developer, you'll be working in ARM assembler rather than x86 assembler.
They may be moving to a new CPU architecture, but given Apple's track record of trying to lock you into their ecosystem, I wouldn't be surprised if they made your life a living hell if you tried to install anything other than Mac OS.
Think about android smartphones for instance. I can't simply load up a new operating system by downloading the .iso file into my phone and boot into the boot selection menu just by holding down a correct button combination. No, you have to go through custom ROMs, shifty looking Russian/Chinese forums, sometimes you even have to send a kind email to your phone manufacturer to get a key so that you can load a custom ROM onto your phone.
I mean, it's bad enough already on Android. But don't you know how absolutely impossible it is on iOS? I literally buy a computer (an iPhone), and I cannot write my own programs and run them on my new computer. Is this the company we can trust with such a major transition into a new platform and architecture? How on earth can I trust Apple so that they don't require me to literally jailbreak a macbook so that I can install another operating system?
Christ, look at how annoying newer versions of Mac OS are! Locked down to oblivion, everything requiring to be signed, no i-know-what-im-doing option in the settings anywhere to just disable everything and get on with your day. It's a real effort to get control of your own computer back.
There's a reason some of us still run Linux. Got a binary? Run it. Just like that. No messing about. I can never trust Apple to make it as simple as this.
And what, exactly, does all of this have to do with ARM vs x86?
> Locked down to oblivion, everything requiring to be signed, no i-know-what-im-doing option in the settings anywhere to just disable everything and get on with your day.
If you really know what you are doing, you should know how to disable and enable SIP.
>And what, exactly, does all of this have to do with ARM vs x86?
The transition from x86 to ARM gives them the opportunity to impose these restrictions. Back in the day, companies wouldn't even dream of trying to pull something as radical as this, but now that we've had a decade of smartphones proving how you can get away with not allowing customers to install their own operating system on their phone, the leverage is there. It's not ARM itself that worries me, it's the transition from x86 to anything (POWER9, ARM, etc.)
If it's like the switch to Intel then it will be $999 and you have to return it when the real versions come out but you'll get a replacement with one of those real versions. Except with the Intel switch you were shipped a slightly repackaged beige box... and that was 15 years ago so my guess is $1499.
And you most likely will have to be an established developer, meaning you’ll have to have an app in the App Store. Apple’s not gonna give these to people that just want to play around with a new machine.
"Submit a brief application for an opportunity to join the program. Selected developers will receive a link to order the Universal App Quick Start Program from the online Apple Store. Priority will be given to applicants with an existing macOS application, as availability is limited."
The terms of receiving a transition device are now available [0]. As expected, receivers of the device essentially sign away all rights to show/benchmark the device.
> Section 2.2 No Other Permitted Uses
> ... You agree that neither You nor Your Authorized Developers will:
> ...
> (d) display, demonstrate, video, photograph, make any drawings or renderings of, or take any images or measurements of or run any benchmark tests on the Developer Transition Kit (or allow anyone else to do any of the foregoing), unless separately authorized in writing by Apple;
> (e) discuss, publicly write about, or post any reactions to or about the Developer Transition Kit (or Your use of the Developer Transition Kit), whether online, in print, in person, or on social media, unless separately authorized in writing by Apple;
Apple has a chance here to go all in and make the Apple ARM platform what Intel is today, and what standard ARM never became.
If they came out and said here's an open Mac platform with the ability to run macOS, Windows or Linux, with open hardware specs, open source drivers and the ability to install whatever you want wherever you want it, I think most people would back it even though initially it'll be painful from a performance and transition standpoint.
But knowing the Apple of the recent past, it'll just be more and more closed, more limited, and an Apple-or-nothing deal.
Apple ARM would like to be where the Intel ISA, as a de-facto platform, is today - but I'm not sure what in my comment implied that Apple wants to be financially where Intel is today.
Also, that would not preclude them from being financially where they want to be: more Macs sold at an Apple premium, even if they run Windows and Linux, still means more Apple hardware sales and more money.
but the two things are inextricably tied - intel's open platform resulted in their financial performance to a hugely meaningful degree, and the same with apple's strategy vs. their performance. obviously it's not apples to apples in any number of ways, but apple's strategic decision to operate as a closed platform empowers many of the things they see as key to their performance. i understand what you meant, but i don't think apple views that as a goal they should pursue.
Mac is not currently a closed platform however and making it closed would be a gamble on Apple's part - far from certain they will succeed like they did with iOS. Your logic implies they are closing it down and that will lead to more monetization. That's not a given. We don't know that they will take that gamble yet - assuming they won't, the only way to make more money on the Macs is selling more of them - sales have been fairly stagnant on the Mac side btw.
The aarch64 ISA has less cruft and legacy than x64, which drags along suboptimal support for the complete history of SIMD evolution over several decades...
On one side, I think this switch is a good thing for the industry, as I always believed that ARM was a better choice and that we should get rid of the legacy at some point.
On the other hand, Apple is again tempted to play in their own garden, with a complete disregard of standards and interoperability. They're doing that with Metal, they will certainly do the same with their custom silicon.
Beware, this is an explicitly long migration. They mentioned it's supposed to take two years, but when Apple switched to Intel back in 2005, I remember buying my very first MacBook one year later and still many apps were not available, conversion was everywhere, etc. (I believe this is why they announced Office right away). I am really excited, but I wouldn't buy new full-Apple-SoC hardware until the end of next year, for the sake of compatibility and, most importantly, rock solid stability.
I hope it works out for them; at least they are pushing the envelope a bit more for a change.
The best fit is for Macbooks I think - low power and being able to run iOS software is pretty nice.
For real heavy lifting (6k or 8k editing in Resolve, with raw video codecs, or heavy duty 3D tasks) I don't think I'll be changing from a desktop with AMD Ryzen and top end Nvidia in the next 2 years. Apple high end GPUs in particular are a question mark.
But Apple will probably get there eventually with their own chips - would not put it past them.
Reminds me of a recent LTT video where Linus suspected that the reason Apple was severely underutilising Intel CPUs on their Macs with poor thermals could be that they wanted to release their own silicon for future Macs and have it compare favourably against previous Intel-based Macs.
The same video where he gains ~13% performance by turning the bottom of the laptop into a heatsink, rendering it a surface too hot to be placed on a lap. Also where chilled-water cooling had no performance gains over the un-lappable laptop.
If anything, he debunked his own theory with that experiment.
Looks like the devkits are $500 and need to be sent back at the end of the year. Also curious that the name ARM wasn't mentioned even once. I wonder if that's just marketing or whether they're plotting to do their own ISA at some point in the future also.
It may be to avoid appearing to be misleading. They'll almost certainly be able to virtualize ARM-based Windows, but that's not what normal users are looking for when they want to virtualize Windows.
Especially – and funnily enough – in business contexts, where you need to run software which are each only available on either Mac or Windows, but not both.
Do you have any examples? I was running VMware Fusion until about 2015 on my office laptop for Visio and Project, at which point, on my next refresh, I simply didn't request it thanks to cloud-based alternatives.
Yes, the NURBS (surface) modeler "Rhino" is one – or rather was; meanwhile it got a Mac version, but that one still has no feature parity, so you would still see this running in Parallels somewhere, I guess.
I used Parallels for a while, and it was great, but then we kind of standardized on VirtualBox at work. It was okay.
But then our dev environments got too big to run on laptops, so now they're all cloud-hosted VMs. We still run Docker (which uses a VM under the covers for Mac) but that uses Apple's Hypervisor framework and isn't really "user-facing" virtualization.
Keep in mind that current macOS runs ONLY 64-bit x86 apps and interestingly Windows on ARM emulates ONLY 32-bit apps. Since 64-bit extensions were designed by AMD, maybe they have some deal with them?
Perhaps it's just to temper expectations then. If the performance or user experience isn't comparable to what today's users expect, it's probably better not to celebrate that use case.
I noticed that too. Might have trademark or legal reasons though. Technically, if they're able to dynamically translate x86 to ARM as they've explicitly stated (with JIT of JS and the JVM as the example) they should be able to dynamically translate x86 VMs regardless of what OS they contain, which would allow x86 Windows to be virtualized with better-than-interpreted performance.
Or at least that's what I hope ;-) I'm relying on running x86 Windows and Linux within Parallels in addition to native MacOS Apps to do my daily work, which involves compiling and testing x86 binaries for these platforms, so whether this virtualization thing actually runs x86 OSes transparently is an absolute make-or-break feature for me to continue my usage of the MacOS platform for work.
I don’t think that would perform very well without some hardware support for it as well. Not an expert on this by any stretch but as I understand it modern virtualization is almost always hardware accelerated which I can’t imagine is a viable option if you’re translating the binaries with Rosetta.
Indeed - and I'm hoping they give some more details about that, whether it's ARM only virtualisation, or if their Rosetta 2 also supports x86 virtualisation.
I was seriously considering upgrading my 2012 MacBook to the new 16" model soon, but now I wonder what kind of longevity or support I can expect from that hardware...
Given Apple's track record of supporting older Mac hardware, I would expect Intel Mac hardware to be supported for many years and many OS versions after the last Intel Mac has been produced.
Also given Apple's recent MacBook track record, the first version of ARM Macs may not be as much of a slam dunk as people hope.
Which track record are you talking about? Apple discontinued OS support for PPC only 3 years after intel was announced. That means no more support for xcode and new commandline tools after 3.5 years.
I just got a 16" MBP...
Still, the waiting game sucks too. Apple said their first Arm mac will come out at the end of the year, but that could well be a very low end Mac mini, or Macbook Air. It's unknown when the pro line will come out.
That might explain why all but the top-line version of the barely-refreshed 13" MacBook Pro are missing Intel's 10th-gen processors.
Sandbag the upgrade for the 13" so it's not that desirable and you won't cannibalize your own demand for the release of the first Apple Silicon machine
My personal philosophy is to buy the third gen of every new type of product. The second gen tends to fix all the big issues in the first. The third irons out all the small issues left. After that there probably won't be many changes except for some performance bumps.
I think we’ve got some time. I’m on the last leg of a 2012 too. Just like the sibling comment said, since we don’t know if the pro line will get a refresh there could possibly be a longer runway.
Will BootCamp continue to be supported for the next ten years on my June 2019 MacBook Pro so I can install an operating system that continues to develop new features with the assumption I'm running on x86-64?
I mean, there's nothing too magical about boot camp—even if apple removed all support for it from macOS, you can probably install windows on a partition the normal way without too much trouble. I think the only thing that might break is if Windows stopped being compatible with the Mac hardware drivers, and Apple didn't update them.
What do you mean by "continue to be supported"? Your 2019 MBP will only need new drivers for Windows if Microsoft decides to change things. Otherwise, Apple doesn't have to do anything at all for your current MBP to continue to be able to run Windows. While Windows may be something of a moving target, your particular machine isn't.
Assuming they build the new Macs around the SoC - does this effectively kill any hope of dual boot support for Linux/Windows etc. It's not just the ARM processor in there, those SoCs have a bunch of Apple proprietary stuff (inc. GPU) that I very much doubt will have open source drivers.
I know the 2016-2020 macs are pretty much terrible for Linux anyway due to hardware issues (audio, keyboard, wifi....) so it's no surprise there - but I fear this shift to "Apple Silicon" effectively kills it.
Meanwhile, Intel can't seem to fix the issue with i915 causing GPU hangs. I guess Linux desktop users, while not exactly a massive market share, will move to AMD.
Interesting to see Big Sur + iOS merging. It's the opposite of what MS attempted in some ways. MS had a completely different look and feel on the phone and tried to shoe-horn it into Windows 8 in one release, whereas Apple have gradually moved things between the desktop and phone in both directions (dock, curved icons, notifications), and gradually aligned the two.
Maybe one more release until they're merged completely.
> Apple will release the first Mac with Apple silicon end of this year, and it expects the transition to take two years.
That's more than twice as long as the transition from PPC. Sounds like they've not yet figured out how to do high-end. Hopefully, they won't be as quick to drop support on the $6k 2019 Mac Pro as they were on the 2005 Power Mac G5 quad (<4 years from release to unsupported by OS X).
The real story here, I think, is that Apple is making moves to end the bifurcation of their product line. Having the same ISA in both lines means eventually we'll see the lines between them blur.
Whether that's good for consumers remains to be seen -- I fear it may lead to Macs whose architecture is locked down in a manner similar to iOS devices. That would be a worrying trend.
It's amazing that in their history of making computers Apple has used the 6502/65816, the 68k series, the PowerPC, and now ARM. And along the way there were backwards compatibility options for all of them (although not many people bought and used the Apple II compatibility cards for the Mac, they were really two separate user bases).
It is kind of satisfying in a way to see the ARM architecture come full circle and back to the 'home computer' segment it started in. I look forward to seeing someone port RISC-OS to a Mac, I'm sure it will happen. :-)
I was surprised to hear that the transition would take two years. I was expecting it to take a shorter duration considering Apple’s experience with transitions and the performance levels of Apple Silicon (AFAIK, at least single core performance has been leagues ahead of other ARM processors on smartphones and tablets).
I don’t know what other people consider as the lifetime of a Mac (meaning the number of years they’d use a still-working Mac before deciding to buy a new one), but I do wonder just how long Apple’s promise of “supporting Intel Macs for years to come” will turn out to be. Things are surely a lot different now than they were in 2005, especially with much better hardware (and the full transition to 64 bit everywhere).
The demo of Rosetta 2, however convoluted or staged it may have seemed to some, was quite impressive to me.
They may be sandbagging to give themselves some wiggle room. IIRC Jobs also said the Intel transition would take two years and it only ended up taking one.
I really want to know what will happen to x86 virtualization.
For me, running Windows in a VM was the killer Mac app. There's always one or two Windows applications that don't have good Mac equivalents. I spent a good portion of my career as a Windows developer working on Mac with a Windows VM.
Here is a bit of history of the migration to Intel based CPUs.
"Apple's initial press release indicated the transition would begin by June 2006, and finish by the end of 2007, but it actually proceeded much more quickly. The first generation Intel-based Macintoshes were released in January 2006 with Mac OS X 10.4.4 Tiger, and Steve Jobs announced the last models to switch in August 2006, with the Mac Pro available immediately and with the Intel Xserve available by October 2006.[2] The Xserve servers were available in December 2006.[3]"
What makes me sad is that while Apple has done very well with these major transitions, it leaves behind instances where (for me) a completely unique application gets left behind. I currently maintain a System 7-only computer for software that is incompatible with System 8 or 9, an MDD G4 running System 9 for software that won't run on OS X, and a Mac Pro running 10.6.8 to run PPC OS X applications (through Rosetta). I haven't upgraded to Catalina because I have some applications that will never make the transition to 64-bit, and now I'll have to maintain something to keep Intel compatibility on the table (assuming Apple EOLs Rosetta 2 like they did Rosetta).
This may be a naïve question, so feel free to educate me.
If more software developers now need to target ARM architecture, does that mean that it will be easier to port software to other hardware, such as the Raspberry Pi, or is it not that simple?
Not really. Native apps for macOS are targeting Apple-specific APIs. It might help for cross-platform applications that do low-level asm optimizations, though.
The transition plan (dev tools, universal binaries, Rosetta 2, performance under Rosetta) looked really solid if I'm being honest. We'll see how they execute it but I'm optimistic that this will be smooth.
I think this is RIP to the Mac Pro for anyone that cares about extensibility. I can't see Apple making a board with a socketed CPU of their own silicon. I'd even say something like RAM would be a stretch.
Yea, I wonder what the Mac Pro is going to look like in a couple of years. Then again, they could stuff it with 100 A19z processors and have a 1024 thread machine (or something equally outrageous).
I have to think it's going to be tiny, cool (as in temperature) and use a fraction of the electricity that the latest Mac Pro does.
As for the pro software that currently requires a physical dongle for DRM, I'm guessing those vendors will be forced into the Mac App Store with Apple managing access to the software for them.
I was about to go get myself a new MacBook Pro 16" w/ a few upgrades (~4k), and now I'm not even sure anymore... does it make sense to get the latest Intel-powered computer, or to wait for the ARM-based version?
Unless you want to build and test ARM MacOS apps next year you should be fine. Your MBP should still work fine and in two years you should replace it anyways.
It will be interesting to see if one very big company can keep the whole stack going... Even the PS4/Xbox went to the common architecture of x86. It could pay off big in price/performance, but it's risky.
After waiting for years for anyone to replace x86 with a better design, nobody could (not even Intel).
For me as a developer by far the biggest concern is definitely technical alignment with production infrastructure. It has been such a boon in the last few years to be able to fire up docker and run to such a large extent the exact stack that ships to production. It's very unclear to me how close an alignment it will now be possible to have on future MacBook Pro's in this way. Between that and lack of nVidia support, it's definitely going to give me a significant push towards a non-Apple laptop for my next purchase.
Commenters here seem to be focused on performance while doing what x86 chips do. They're missing the point. That performance is good enough.
Top left on the arch slide: audio engine. Other on-chip blocks: camera compute, neural engine, machine learning accelerator.
This is about advancing the state of the art in personal digital assistants. The Mac change is in service of the iPhone.
I'm excited. Digital assistants have stagnated for a long time, because of internet latency and lack of dedicated silicon (and perhaps privacy concerns). This is the way forward.
Apple isn’t being hostile to Hey; rather, Hey is being hostile to Apple. Imagine building a large new paid app that you expect Apple to host and distribute for free worldwide by gaming their payment system.
It’s entirely reasonable for a free service. Apple hosts and distributes terabytes of free apps.
But Hey! isn’t free, they just charge you using a different platform. Maybe 30% is too much for Apple to charge them, but zero percent is definitely too low.
Visa/MC don’t host and distribute petabytes of data for card holders. They don’t provide hundreds of store fronts with localized content and marketing. They don’t provide extensive developer kits and support anywhere on par with what Apple does.
The payment processor argument means you really don’t have any clue as to what the Apple App Store services are or do.
No one is saying they don’t make high margins on their App Store revenues, just that comparing the App Store to a mere payment processor is ridiculous and indicative of not even understanding what it does.
They talked about virtualization but they only showed Parallels Desktop. Is it just a virtualization layer for x86 on ARM, or are they going to provide something like Hyper-V on Windows?
PowerPC = best at the time because IBM was a researchy CPU engineering powerhouse, but then got outclassed by Intel. AMD forced Intel to release its 1 GHz CPU a year ahead of schedule and generally pushed x86 ahead of PPC.
Apple/ARM = best now, because Apple's vast capital from iPhone success has allowed it to build a world-class CPU engineering team from scratch. Building the end products at the same company gives them integration that can't be matched by other companies.
For those of us around long enough to remember, the switch to Intel was really smooth, with a good balance of quick adoption and solid emulation for legacy apps. I am not team Mac anymore, but IMHO they did it as smoothly as such a major change could be conducted. I have no idea where this will take the ecosystem. But apart from pissing off power users (developers mostly) and deprecating Intel-based hardware in the resale market, I would not worry too much.
Now Apple has reached the integration that Commodore had: they made all the processors (CPU, graphics chip, sound chip, interface adapters) for their Commodore 64 and other 8-bit machines, and for the AMIGA.
I’m worried it will bring even more close control by them of what we run and how we run it on those computers, but having lots of custom chips (many of them in one package) for all kinds of things with a unified memory architecture is amazing, a modern AMIGA.
Nobody gives a crap about the Qualcomm powered notebooks. They're a joke market segment. As long as Apple doesn't start selling A* chips to Samsung or other Android OEMs Qualcomm isn't going to worry.
> As long as Apple doesn't start selling A* chips to Samsung or other Android OEMs Qualcomm isn't going to worry
Even if Apple did sell their chips to other parties, Qualcomm will be still collecting its sweet patent royalty from both parties, so it couldn't care less.
The patent royalties are fairly small compared to selling a full chip. They would still be feeling the pinch.
Of course even if Apple did start selling those chips (which they won't) they would charge an arm and a leg for them, because they could. They would only be used in the top of the line flagship phones. Apple has shown zero interest in competing with the bargain basement hardware producers.
Different segmentation - Qualcomm powered laptops are the weakest ultrabooks, compared to Macbook Pros. I really wish Qualcomm chips were better though.
I almost didn't want to upvote this because it was at 486 points.
I'm sure anything close to the same performance as the Intel chips will have better battery life, especially since they can put in as fine-grained support for throttling and idling as they want. These should cost them loads less per unit than what Intel's selling them, too. They could cut prices a bit and still have higher margins.
This is likely a larger reason for the move than the perf/efficiency. Apple likes to ship "premium" products, which forces them into the premium/high margin price tiers with intel/etc.
So they will gain a bit with a product laser-focused on its market, but alongside that, they won't be paying the king's ransom Intel wants for a couple of additional cores or a modest clock bump. So they can simplify the binning into best in class and good yields for the volume product, and pocket the margins on the best-in-class product segment rather than giving it all to Intel.
I reckon this move will drive an arrow straight through the entire desktop market.
Why would I say that? Because the assumption desktop developers have made for decades is that Wintel/Lintel is the norm, everything else is a side dish. And Macs since their Intel shift have conformed to this, if not in terms of API, then certainly in the core architecture decisions. And that means that you design the software "for x86". If you try to run Linux software on a Chromebook you'll often have a nasty experience with these assumptions making things break. And nobody is paying much attention to Win10 on ARM chips.
But now you have a top-down mandate from Apple, which is in a strong position to direct the desktop platform again, possibly the strongest it's ever had. Everyone who follows Apple knows that they tend to obsolete things quickly; they have played this game before. The move they've made is to usher all the developers, not just the mobile ones, to ARM. And that means throwing out a substantial part of old codebases optimized for many generations of Intel chips. It pressures the Linux junkies to be more cross-platform, which in turn topples over some assumptions around Linux desktops themselves. The major distributions will scramble to support Apple hardware.
And a consequence of the assumptions in Linux changing is that you can start doing some ground-up rethinking too. "We're redoing this part of the codebase, let's change the design." And with Apple support may also come Surface support, Chromebook support - giving all the ARM platforms equal treatment. So it's likely that multiple operating systems will see a generational shift, not just Apple's. A big chain of events that will explode some projects and leave others untouched.
And then at the end of that, we might have RISC-V hardware coming over the horizon.
Hoping there's a way to switch to a more compact view, but as shown in the keynote it looks terrible. What is with the insistence on wasting space in modern operating systems? Were people getting too productive with new, large displays?
Already Apple is making it a headache to run non-signed apps on the Mac. I don't think they're going to slow down the "convergence" of iOS and MacOS at this point.
Also interesting that the dev kits use the same chip as the latest iPad Pros (A12Z). I’d love to see the two OSes merge to some extent because I love writing on the tablet but find iPadOS not the best for general dev work.
Oh, 1,000%. Even basic tasks like working on a file, then pasting said file into a cloud storage app and then into an email/Slack channel take an order of magnitude longer than they would on a computer. Long way to go before the iPad becomes a true Pro machine, but for some fields it's basically there. Still, simple quality-of-life actions that are mainstays/muscle memory on computers take time and animations and taps to execute.
I wonder what impact this will have for developers using non-Apple development tools. For example, I'm a C# .NET Core developer using Visual Studio. I'm hoping that the architecture shift won't be too disruptive to people who choose to use development environments other than Apple's Xcode.
Apple wants to put its users in chains and keep them locked to the platform for life. It is important to keep the system closed to third parties like independent repairers and other operating systems. Once they have their own CPU they can kill off virtualization and hackintosh even though they have no impact on their revenue and make it impossible to run other operating systems on their hardware. Push all software into an app store.
I thought the Mac was amazing back in the mid-80s. I was very impressed with the ipod touch and ipad. I owned an early unibody macbook pro. I can understand the benefits of the closed platform with everything micro-managed by Apple. But it is suffocating. I am more than willing to give up trivial advantages for relatively open hardware platforms and software.
Besides native apps from Apple, Microsoft, and Adobe (Universal 2), the demo included Maya and Shadow of the Tomb Raider (Rosetta 2), and Parallels Desktop (unspecified virtualization improvements). I couldn't tell if the guest OS was Debian for x86-64 or AArch64.
One of the About This Mac dialogs showed an A12Z with 16 GB, which is consistent with the developer transition kit, basically an iPad Pro in a Mac mini case: A12Z, 16 GB, 512 GB SSD.
Well, that new MBP I bought 4 weeks ago and planned to keep for 5 years just got a muuuuch shorter life span if I can run iOS apps on an ARM Mac. My wife is going to be livid.
I guess this also means Catalyst's life span is pretty short and SwiftUI will become the focus?
I'd be surprised if the iOS apps did not work on Intel based macOS. Project Catalyst already works on Intel macs and my guess is many iOS apps have been migrated already.
I'm sure there will be emulators. But I'm also fairly confident that they'll be relatively slow. Emulating across architectures is rarely performant, and if Apple had solved the problem they would be talking a lot about it right now. In the past they've gotten away with this because the architecture they're moving to was so much faster than the previous architecture that even with a 50% or 75% performance penalty the apps would run faster than they did on the old hardware. With this new hardware it is likely only going to be marginally faster than the old Intel chips since the focus is more on power efficiency, so emulated apps are probably going to feel sluggish.
The difference is that the entire (64bit?) iOS App Store back catalogue would likely be available to run as unmodified binaries without developers having to lift a finger.
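On the Rosetta point: Apple's developer docs describe a sysctl, sysctl.proc_translated, that a process can query to find out whether it is currently running translated, which is handy if an app wants to warn the user or disable its heaviest code paths under translation. A minimal sketch in C following that documented approach (untested on the new hardware, obviously):

    #include <errno.h>
    #include <stdio.h>
    #include <sys/sysctl.h>

    /* 1 = running under Rosetta translation, 0 = native,
       -1 = could not determine. */
    static int process_is_translated(void) {
        int ret = 0;
        size_t size = sizeof(ret);
        if (sysctlbyname("sysctl.proc_translated", &ret, &size, NULL, 0) == -1) {
            /* The sysctl does not exist on systems without Rosetta. */
            return errno == ENOENT ? 0 : -1;
        }
        return ret;
    }

    int main(void) {
        printf("translated: %d\n", process_is_translated());
        return 0;
    }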
My guess, and I hope I am wrong, is that it might be a world of pain for backend/cloud developers.
Virtualization gives us Linux ARM Docker images, which is nice, but that's not the same as running the exact Linux Docker images that run in the cloud. (ARM in the cloud can be an option, but that's a whole different topic.)
Developers will have to find and develop against ARM equivalents of their production x86 images, which pushes the local dev and production environments unnecessarily further apart.
The good news is that the Docker ecosystem has relatively broad ARM support, but it will still be a significant difference from prod environments.
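One small mitigation: recent Docker releases let you pin the image architecture (there is a --platform option for multi-arch images), and it's worth having a sanity check so you notice when a "local" container silently resolved to a different architecture than prod. A trivial sketch in plain C, nothing Docker-specific; under qemu-style emulation of an amd64 image it should still typically report x86_64:

    /* arch_check.c: print which architecture this container actually
       reports, so a mismatch with the production image is caught early. */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        /* Typically "x86_64" inside an amd64 image and "aarch64"
           inside an arm64 one. */
        printf("%s %s\n", u.sysname, u.machine);
        return 0;
    }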
I think gains in I/O will be more noticeable than gains in their CPUs. SSDs are being choked and held back by PCIe 3.0 interfaces at this point. I'm not sure that PCIe 4.0 is any better on latency. It would be interesting if Apple took a big leap forward with a low latency interface like OpenCAPI, or maybe some iteration of RapidIO. Something like Optane over OpenCAPI would be a huge leap in speed. Optane is wasted right now with PCIe.
The PS5 apparently has incredible disk I/O, possibly due to RAD's compression technology. A super fast compression codec could make a difference here too.
Okay, so Rosetta will be great at the beginning, but what about the long run? How hard will it be for large multi-OS code bases (think Photoshop/Maya/CAD etc...) to adopt and maintain?
I don't understand why folks are doubting the possible performance gains here. The latest iPads are already faster than the vast majority of PC laptops, including almost all MacBooks in single-core performance. And that's with the thermal constraints of an iPad. Can you imagine what that exact same chip can do with better cooling?
I guess we won't have to imagine long, though, since that's exactly what we'll see with the developer kit. Can't wait to see benchmarks of those.
I think one of Apple's main long-term goals for this switch is to grow its market share and, with it, its revenue from services.
Market share can grow through three drivers:
- More affordable computers,
- More vertically integrated hardware/software
- A larger ecosystem
Think about the iPhone SE and how it is supposed to strengthen Apple's bottom line in the long run; it's more about the recurring revenue from services than the one-shot revenue from hardware.
One thing I haven't seen mentioned that is extremely interesting with regard to performance: if all those demos were done on the developer transition kit, which is itself based on the Mac mini form factor, then the fact that these systems were all driving that 6K Pro Display XDR is notable.
For reference, no configuration of the current Mac mini can drive an XDR without an eGPU, thanks to its extremely limited integrated Intel graphics.
Native virtualization and the ability to run iPhone and iPad apps directly on the Mac. This could unlock a whole new level of focus and optimizations for Apple.
Anyone know what happened to Intel over the past 5+ years? They used to be unstoppable, relentless, the best. Have they had trouble recruiting the best engineers? How have TSMC and Apple been able to pull ahead and take the lead in CPU development?
I thought maybe Intel was losing ground to TSMC for cultural reasons – being the best while being based in the US is getting harder and harder. But I'm not sure.
Everybody seems to be debating the hardware, but to me the question is software. Will this mean Apple's desktop software becomes incompatible with the new architecture? Would this also mean less control for the end user on the laptop/desktop form factor? I'm picturing an iOS-style App Store-only model on those form factors... yuck.
I'm curious if (1) that first Mac to ship with "Apple Silicon" will pack an A14 -- A14X? -- chip and (2) if we'll get new branding to distinguish CPUs for phones, iPads, and macOS devices.
UPDATE: I don't mean the developer transition kit, the Mac mini with an A12Z; I mean the first consumer macOS device/MacBook.
> The biggest addition this move to ARM-powered chips brings is the ability for iOS and iPad apps to run natively in macOS in the future. “Most apps will just work,” says Apple, meaning you’ll be able to run native macOS apps alongside native iOS apps side-by-side for the first time.
Good time for anyone who was using macOS for gaming or anything else for that matter to switch to Linux. OpenGL bit rot, refusal to support Vulkan, dropping of 32-bit, dropping of x86_64 architecture - all that should have been a hint. Backwards compatibility is not even an option there (besides for emulation).
I started using MacBooks when they switched to Intel around 2008. Maybe this is a sign to move on. Currently using a mid 2014 model, I like the slim form factor, but I don't like the absence of an ethernet port. I don't care much for the retina display, my 2008 matte model was better. Been looking at the Lenovo T470. I need something that would be easy to replace quickly anywhere in the world. What should I get?
Lenovo ThinkPad laptops are pretty good, and in my experience run Linux very well. Get something with an AMD APU (there should be new models coming out soon with Zen 2 + Vega).
The only annoyance is their refusal to refund the Windows tax. But Lenovo has now partnered with Red Hat/IBM and started selling some laptops with Linux pre-installed (Fedora), so you can avoid the Windows tax there even if you don't plan to use Fedora. I hope that will eventually extend to all their models.
I think in the next 5 years this will be in Apple's favor. The issue is more of a 10-years-down-the-road question. Intel always seems to lag for a while and then start leapfrogging. I don't know if that cycle continues, but it might... which is a risk to Apple.
People in here are so excited, but I think Apple will take this as an opportunity to make installing custom programs even harder. The App Store might eventually be the only way to reliably install software, and Homebrew could become a thing of the past.
Maybe, but I honestly kind of doubt it. The Mac specifically serves as the development platform for iOS, so you obviously need to be able to install a compiler toolchain at least for that.
But developing for iOS also requires a lot more than the iOS toolchain itself. Mobile apps often require servers, and Apple doesn't have a server product any more, so it's in Apple's best interest to allow stuff like Docker (which, you'll remember, they specifically demoed).
iPhone 11 Pro Max's Geekbench multi-core score is ~3300.
An 8-core MacBook Pro's Geekbench multi-core score is ~6700.
Now imagine the iPhone's CPU with double the cores and a giant heatsink + fan. I bet it would double, maybe even triple, a maxed-out MacBook's score.
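Napkin math on the core-count half of that bet, using only the two scores quoted above; the scaling efficiencies below are pure assumptions, and this ignores the extra sustained clock speed a heatsink and fan would buy, which is where most of the remaining upside would have to come from:

    /* Back-of-the-envelope scaling of the Geekbench numbers quoted above.
       Everything except the two quoted scores is an assumption. */
    #include <stdio.h>

    int main(void) {
        const double iphone_multi = 3300.0; /* iPhone 11 Pro Max, multi-core */
        const double mbp_multi    = 6700.0; /* 8-core MacBook Pro, multi-core */

        /* Assume doubling the core count scales throughput at 70-90%
           efficiency, since memory bandwidth and caches are shared. */
        const double eff[] = { 0.7, 0.8, 0.9 };
        for (int i = 0; i < 3; i++) {
            double est = iphone_multi * 2.0 * eff[i];
            printf("2x cores at %.0f%% scaling: ~%.0f (vs. MBP %.0f)\n",
                   eff[i] * 100.0, est, mbp_multi);
        }
        return 0;
    }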
On the other hand, most software does not use ARM's NEON instructions (the counterpart to x86 AVX). In my tests [1], H.265 software encoding was 3.5x slower on ARM than on x86 in terms of frames encoded per watt of energy consumed.
I'd imagine Photoshop and other media apps are highly optimized for x86, so do they have Adobe and other developers on board to port code over to ARM? Will they provide Rosetta-like instruction set translation?
Too bad the hardware is going to run macOS, well, and Mac software, both of which just keep infuriating me at every turn. I wish they'd put as much effort into their software as they do hardware.
It kind of feels like Apple makes the MacBook Pro only so there is a platform for people to make apps for their main market - phones (and iPads?). So why not make the hardware almost the same, I guess.
Was planning on getting a new Mac laptop this year to replace my old 2013 Macbook Pro. Would it be better to get one of the last Intel Macs or wait a bit and get in early on the upcoming ARM Macs?
I wonder what they can do to keep a reasonable experience for locally run dev containers intended to eventually run on x64/Linux in prod. Isn't the experience already a little subpar?
Wow, after a couple of decades of pointing and laughing, I might finally have to buy a Mac. Assuming, of course, they don't stick their own Management Engine-alike in there.
As someone who's not fond of the Apple ecosystem, I'm both excited and afraid. The processor space can definitely use more competition, though I'd be more interested in actual GPU efforts, as CPUs are pretty boring unless you're running a server or working in a small CPU-intensive niche.
What scares me, though, is the expansion of Apple's walled garden. The Mac used to give back to libre culture at least a little and push some valuable standards, but with native iPhone apps and this closed architecture I'm afraid it will take away from libre and desktop culture.
Whether it's a good thing or a bad thing we'll have to see but I'm staying optimistic.
Does anybody know if it will still be possible to run Windows via BootCamp with the new apple processors?
I use a 16" MBP and am a heavy BootCamp user (Windows 10).
Probably not; there was no mention at all of Boot Camp, and this is a unique opportunity for Apple to lock everything down. Virtualisation will be the only option, I bet.
Seems like it's still possible if Microsoft gets on board. They've already made investments in getting Windows 10 to run on Qualcomm's ARM-based Snapdragon processor.
NEON isn't the answer for SIMD performance in many use cases. Even the fairly basic use case of CPU-encoding AV1 or H.265 video wants AVX2, a much newer SIMD instruction set. AFAIK ARM is nowhere near this.
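To make the porting cost concrete: SIMD kernels are written per instruction set, so every hand-tuned x86 path needs a separate ARM implementation. A toy sketch of the same vector add written twice; real encoder kernels use far more exotic instructions than this, which is exactly where the gap described above shows up:

    #include <stddef.h>
    #include <stdio.h>

    #if defined(__AVX__)
    #include <immintrin.h>
    /* x86 path: 8 floats per iteration in 256-bit AVX registers. */
    static void add_f32(const float *a, const float *b, float *out, size_t n) {
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
        for (; i < n; i++) out[i] = a[i] + b[i];
    }
    #elif defined(__ARM_NEON) || defined(__ARM_NEON__)
    #include <arm_neon.h>
    /* ARM path: 4 floats per iteration in 128-bit NEON registers. */
    static void add_f32(const float *a, const float *b, float *out, size_t n) {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            float32x4_t va = vld1q_f32(a + i);
            float32x4_t vb = vld1q_f32(b + i);
            vst1q_f32(out + i, vaddq_f32(va, vb));
        }
        for (; i < n; i++) out[i] = a[i] + b[i];
    }
    #else
    /* Portable fallback: whatever the compiler auto-vectorizes. */
    static void add_f32(const float *a, const float *b, float *out, size_t n) {
        for (size_t i = 0; i < n; i++) out[i] = a[i] + b[i];
    }
    #endif

    int main(void) {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        float out[8];
        add_f32(a, b, out, 8);
        printf("%.1f\n", out[0]); /* 9.0 on every path */
        return 0;
    }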
Good point. I wonder whether they'll eventually open up a dev/pro mode on iPadOS and give terminal access (as on macOS) while protecting system folders...
"What would keep Apple from shipping machines with both ARM and Intel CPUs in them? The ARM CPU would run the OS and decide when to ship jobs over to the Intel CPU. I can’t imagine that the home-grown ARM CPU would add much to the total price of the computer."
It is a lot of work, and now you have to buy two CPUs. No cost savings, possibly more power usage in the worst case, and maybe even thicker computers.
It would also mean you could never remove the Intel CPU. It is best to just emulate the Intel CPU for a while until everyone ports their code across.
Well, in theory both AMD and ARM use MOESI so it's not as crazy as it would be with Intel's MESIF. But that's a really scary rabbit hole to be going down and I'd be terrified of memory permissions attacks striking at the boundary there. So probably not worth the risk unless each processor gets its own pool of RAM.
Ah. I saw references to "dual architecture" online but looks like that means Apple will be releasing new Intel machines even as they are encouraging everyone to port their apps to ARM. The "dual" part just means two different lines of machines that will be sold contemporaneously for a little while, not two architectures inside one machine.
This level of asymmetric processing is very difficult to achieve, at nearly every possible level, from part sourcing, through hardware design, firmware, software to user experience.
If the FCC documentation is any indication (and it may not be), the developer kits may not ship until December. The reports made available today show a testing setup with what appears to be the DTK device, with internal and external photos not available until December 8, which follows the fairly typical pattern of keeping such pictures confidential until shortly before the ship date.
This probably leaves even less control to the user, kind of like the iPhone, where you can only dream of having root access to a recent device. Needless to say, this also gives up the compatibility story that x86 hardware offered (running Windows apps with Wine and CrossOver, dual-booting Linux and Windows alongside macOS). Is it going to be any better than an iPad plus keyboard at all?
I doubt it; the cost of building a chip fabrication line is apparently among the highest of any kind of manufacturing. This link, posted on HN last week, alludes to that: https://stratechery.com/2020/apple-arm-and-intel/
I bought mine a month ago, and I'm happy to keep it. It'll be supported for (guessing) 4-5 years, at least, and the new ARM-based ones won't be available until the end of the year (and you probably don't want to be an early adopter).
OK, so should I sell all my macOS hardware (except Hackintoshes) now? Is this exclusive or half-baked like MS Surface ARM, i.e. some running on Intel, some on ARM? If it is half-baked, what's the point of fragmenting macOS and apps it can run?
That's pretty subjective depending on operating system and use case. If you want Linux, Dell makes some really good options (XPS Developer Edition), System76 and Purism make some nice hardware for libre software purists.
For Windows laptops there are tons of options and it all boils down to thermal performance and power versus weight and expense. No notebook will defy the laws of physics so there are trade-offs, especially when you start wanting to play games, which I have generally found just isn't worth the trouble on a laptop. Dell's XPS and Lenovo's Thinkpad lines are solid in terms of providing options that fit a lot of different use cases.
I've found that the new Windows Subsystem for Linux 2 environment is great for a lot of the command line Linuxy stuff I took for granted in MacOS. Apple's hardware used to be my default option for computing on the go while I would use native Linux on my desktop, because so much of my development stuff requires some flavor of Linux compatibility. Now, with WSL2, Windows works perfectly for what I need. Microsoft's Linux support is so good that I would not buy another Apple laptop if I were in the market now (it feels really weird typing that) - there is just nothing justifying Apple's premium price anymore, for the type of work I do (YMMV).
The thing that's different about buying non-Apple computing hardware is that you have many more options. Apple takes kind of a dictatorial role in giving you one option at any given screen size or price point. Want a GPU? Buy a 16" Macbook Pro. Want a desktop class CPU? Buy an iMac (or spend $10k+ on a reasonably spec'ed Mac Pro). People put up with this in large part because of Apple's marketing. MacOS Catalina was a disaster but at this point a lot of people have low grade Stockholm syndrome. The relationship is totally different on the PC side where you can practically find anything to match whatever particular use case you have; there are a lot more options, and you have a more active role in figuring that out and purchasing what works for you.
Apple always goes all-in on something like this, so while they'll continue to make Intel Macs for a while, and support them for (my wild guess) 5 years, they're definitely going 100% to ARM.
Every other major hardware company plays it safe by supporting as much of their user base as they possibly can with a wide variety of configurations. Apple doesn't.
Does anyone know if Apple's processors have any anti-features resembling Intel's management engine coprocessor? If there isn't any, this may be a good route to a less backdoored PC for many users--not just Apple's existing customer base :)
Pretty much every SoC has cores equivalent to the management engine. Currently they have a handful of ARM cores on the southbridge that fulfill the same purpose. I imagine that won't change. "BridgeOS" is the term to search for if you want to learn more.
The thought experiment is probably moot anyway though, as Apple probably won't allow any kernels that haven't been signed by them to be booted like on their iOS devices.
I ended up doing a bunch of research into and asking around about the T2 chip (which seems to be the closest thing to an IME Apple advertises) today and got a variety of responses.
The general picture I've gotten is that the T2 is probably significantly less capable of surveillance than, say, the IME. This talk [1] for example suggests (but doesn't rule out explicitly as far as I can tell) that the T2 is not connected to the PCIe interfaces for network cards, which significantly reduces the extent to which the T2 could autonomously phone home what it could learn through its direct storage access and connection to the CPU.
And yikes! No unsigned kernels would be pretty bad. I certainly wouldn't be buying if that's the case :(
I'm curious about the future, especially games, if they want to take on consoles.
The PS4 and Xbox One are 7 years old, and while the next gen looks really good, Apple can refresh its product line much quicker (on the other hand, it's a good question whether people would really want to buy a new console every year, because Apple will surely be aggressive about pushing out new models much more frequently).
macOS's gaming story has always been pretty abysmal. They nuked a ton of games when they dropped 32-bit support, and they refuse to implement the graphics API that the gaming industry is standardizing around. I don't foresee any improvements in Mac gaming from this announcement.
> But now games like Fortnite, Minecraft, PUBG etc. that already exist on iOS can natively run on the Mac. So I think that's a pretty big thing
With touch controls on non-touchscreen devices.
It isn't big.
I think the fact they chose to show off a fairly poorly running version of Tomb Raider as their demo goes to show that they still fundamentally do not understand the gaming market.
How about zero, since they aren't doing anything they haven't done before, three times over, in the last twenty years? Didn't you catch their use of Rosetta 2 and Universal Binary 2 to differentiate them from the previous versions written in prior decades?
You can't always buy patents. If other players don't want Apple in the CPU market, they could just block Apple. Patents offer a monopoly on a technology after all. A different scenario would be if Apple built a war chest of CPU-related patents, which they could use for trading.
Because they have their own war chest of patents from acquiring just about every decent processor startup over the past 15 years, which would allow them to countersue, and microarchitectural details are under incredibly strict NDAs, to the point where it's an uphill battle to even prove that Apple is using any of those patented techniques in the first place.
Also, the PowerPC macs weren't their chips, those were IBM and Motorola for the most part (Apple did have some input into Altivec, but didn't do anything from the RTL down AFAIK).
You are contradicting yourself. Apple hasn’t been sued because of their patent portfolio, but they still have that. Little changes with this transition, they are still building computing devices around the ARM instruction set.
I wonder if this is even related to the original Rosetta (which was actually from an external vendor - QuickTransit by Transitive, later acquired by IBM). Most of the Transitive team left IBM to go to Arm and Apple.
There go Hackintoshes (after 5 years or so) and backwards compatibility. Throw everything useful away (hardware with special kexts, niche expensive software) and start from scratch. No thanks.
Do you think they rewrote Mac OS for ARM, or altered iOS to look like Mac OS? Or have they secretly been in sync for the past 10 years? It's been 9 years since they rewrote Final Cut, and 7 years for Logic Pro, both rewritten for this day.
I wonder how much code is shared between Mac OS and iOS; they can't be putting that much effort into Mac OS when it sells on 10x fewer devices than iOS, unless it's mostly shared.
It amazes me how much abuse loyal Apple customers are willing to take. How much money are you willing to spend. It's almost like this is some large-scale ritual hazing which serves to make you, the hazee, come out even more loyal on the other side.
Yes, and then you ‘upgraded’ the OS and it gratuitously stopped running older software. My final Mac Pro — yes, I was a buyer of Mac Pro grade hardware — still runs 10.6.8 for that reason.
I think this is exactly the wrong direction. First, no more GeForce for you. Then the RAM is not upgradeable anymore. Then there go all the ports except USB-C, and you need to carry a full case of dongles. Then the Escape key (it must have escaped the nonsense!). Then the SSD is soldered. Then the fancy, useless Touch Bar and the T2. Now you can't run Linux with VirtualBox or Parallels or dual boot to Windows. They are trying their hardest to drive away the people who built their brand by innovating. Where have the days gone when Apple's hardware edge was used to ship great features such as Target Display Mode? The smartest move Apple made was the move to Intel. An MBP was the best Windows experience ever, circa 2008. Why can't a $5,000 device have the best processor available that can be used for other OSes, plus some ARM silicon? I don't get the part about removing x64; we certainly haven't run out of address space.
Hello, thanks for the comment, I might. Do you have any suggestions hardware-wise? If I switch again, it will be to Linux as my daily driver, though. I've been happy with the MBP line, and my 2015 MBPs are still the best hardware I've owned. I like Apple's services; I use iCloud, and the calendar and contact sync with my iPhones and iPads just works. I like macOS because it's real Unix and integrates well with the rest I just mentioned. I run Ubuntu for GPU deep learning on a dedicated workstation; I wish I could do some small models on the go at better than CPU speed, instead of remoting in. I have several VMs in VirtualBox (which as of VB6 can finally do nesting, which is great: real containers AND KVM on macOS through Proxmox; but that won't work with ARM...). I still need some Win7 or Win10 for legacy stuff or for clients; VirtualBox does the trick for most of it, and if I need the native graphics performance in Windows I have a small Boot Camp partition.
I make extensive use of Target Display Mode with old 27-inch iMacs, and I LOVE that feature; a 27-inch monitor with a server for backend tasks right on my desk is hard to beat.
All this to say: to reproduce my setup with Linux on another brand of Intel-powered machine, I'd waste an amazing amount of time. Intel is not going anywhere. They might have missed a beat or two, but they will pump out the best CPUs again before long. AMD got the upper hand for a while back when the transition to x64 happened with the first real dual cores. Intel is not looking great right now, I'll give you that, but chances are they will come back. Maybe. In any case, this is a case of "if it works, don't fix it". This CPU change will force a bunch of people to fix something that worked.
I supported Apple so much when they chose to adopt the Bell Labs system interface. I loved Apple even more when they made it possible to run the C and Assembly code that PC users have always known and loved, using a platform known for its product excellence.
However Apple comes across as cruel when they make decisions like these, which break software distributability of things like native machine learning code. For example one build system that enables that use case is: https://github.com/jart/cosmopolitan
At best these new POWER or ARM architecture Apple PCs are going to be like a C-class Mercedes. So I'm honestly not that concerned. Having distributable open source executables support them shouldn't be that difficult; it's just that it'd add bloat and require trading away important parts of the von Neumann architecture, such as self-modifying code, in order to be done easily.
If Apple wasn't being goofballs, they would have taken into consideration that the x86_64 patents should expire around this year, so Apple could just as easily have adapted the POWER design that IBM divested itself of to support x86, by simply bolting on a code-morphing layer like Xed. It'd be pretty great to be able to buy an Apple PC that doesn't have MINIX lurking inside the chip, without making tradeoffs that could be most accurately described as breaking open source software to save money.
Is it just me, or did Apple bungle this entire announcement by not announcing a consumer-facing ARM macOS device, only a hot-rodded Mac mini with an iPad Pro chip inside?
How many devs actually have the setup in place to use a non-mobile device?
I also wonder if the current+last gen iPad Pro that has the new keyboard + trackpad case will gain the ability to run Xcode and native macOS apps in the near future.