According to someone who was briefed by Apple [1], this is Apple segmenting these processors. The M1 Pro and the M2 Pro were closer to their respective Max variants.
With the M3 series, the M3 Pro is more of a mainstream processor rather than a near-Max part.
What's strange about this strategy is that it's less cost-effective: Apple is now designing completely different Pro and Max dies, as opposed to just scaling up the base design as it did with the M1 and M2.
That is a lot of extra complexity they are inflicting on themselves, if accurate.
> What’s strange about this strategy is it’s less cost effective because Apple is now designing a completely different Pro & Max chiplet
That may be true for the M3, but they are undoubtedly already working on the M4 and M5, and they may have bigger differences planned for future releases that require the Pro and Max designs to be separate. It could be that they are just doing the initial work of splitting the design into two branches for the M3 so they can make more changes from the M4 onwards.
If this extra complexity is mitigated by hiring a larger, separate team (rather than straining existing engineers), then they might win overall because of the customer segmentation.
I was downvoted to oblivion for suggesting that this was the reason the last couple of years of Intel MacBooks had botched thermal solutions. They were really indefensibly bad, but then again: the jump to the M1 was enormous.
I have a hard time believing I was the only one conspiratorial enough to suggest something like that.
Three reasons make me disagree. First, M1/M2 MacBooks don't have great cooling either; the Air models use basically nothing but a thin sheet of metal, and maybe they'd run longer without throttling otherwise. Second, Intel MacBooks had subpar cooling (and power delivery) for much longer than just the last couple of years; a video on YouTube shows more than double the performance on a water-cooled 2015 MacBook. And third, let's not forget that other manufacturers are sometimes even worse than Apple: Microsoft once sold a Surface Pro where the upgraded i7 version was slower than the i5 model because the cooling was so terrible!
No, I don't see how that's more than a conspiracy theory.
I am using an M2 Air right now and it has 2-3x the performance of my older i7 ThinkPad while being completely silent and running 3x longer on battery. Apple didn't need to pull any tricks.
What I am saying is that the cooling solution for the Intel MacBooks was designed for chips with a much lower TDP, which made the M1 look a lot better. Maybe artificially so, since the Intel MacBooks throttled more or less instantly.
I am not saying the M1 wasn't amazing (I have one, I like it). I am saying that Apple had subpar cooling solutions for several years before the M1. They had ample time to fix it, yet the last Intel MacBook was probably the worst one.
I don't have the impression Apple would intentionally cut the performance like that if they can't also make you pay for an upgrade that alleviates the bottleneck. Want a reasonable RAM & storage capacity? Pay for the upgrade. Want more IO without dongles? Pay for the Pro model. Want a 120Hz screen for smoother drawing? Pay for the iPad Pro.
If it was intentional I'd expect Apple to have a more expensive Pro Max Ultra tier with actually good cooling, and then a year later they can pretend they invented a new paradigm of CPU cooling that works even in the cheaper models. There's no upside for them if cooling just sucks in general, and their switch to a more energy efficient architecture is clearly motivated by this problem.
The Intel MacBooks had bad thermals because Intel had promised more efficient processors and Apple had designed the new cases around that. Then Intel flubbed their die shrinks for several years in a row and Apple was stuck with the hotter chips from the larger process node.
Since they were already on the road to switching to Apple Silicon, it may have not been worthwhile to redesign the cases for the Intel machines to compensate.
Intel flubbed the die shrinks several years in a row, but from 2015 onward it was pretty obvious that whatever Apple was designing for wasn't enough. There were better thermal solutions in thinner, cheaper laptops. The fan didn't even draw air over the CPU in the last Intel MacBook Air. I opened one and really tried to figure out if there was some 4D chess going on, but no: the fan did nothing. It just spun up fast and made noise for no reason. I even did the old incense test; next to nothing was being pulled over the CPU heatsink.
It was spectacularly bad by design. It would not have been sufficient even if Intel had delivered.
The Pro models were better but still awful. It wasn't a matter of fitting things in; it was by design.
I own Apple products. I like them. But even being charitable, I can't come to any conclusion other than that they just didn't care to deliver a good product.
It's a weak theory, considering how the first M1 devices used the same thermal solutions as their Intel predecessors. The M1 Air even removed the fan from the existing design that already had a fan when it was powered by Intel.
The only reason the thermal solutions were indefensibly bad was because Intel's roadmap had nowhere to go but to add more watts and heat. The 2016 MacBook Pro 15" had a perfectly adequate thermal solution for its hardware. Still, in a few short years, Intel's newer processors pushed the design to its thermal limits when they should have been delivering more performance within the same heat and power constraints. Apple had to relent and make the product thicker and add more wattage for the 16" MacBook Pro (Intel).
It's not like Apple massively upgraded the M1 thermal solution and took a victory lap on an artificially unfair playing field. Apple dropped their chips into the same or worse thermal designs and still outperformed Intel. On some benchmarks, the 2020 M1 MacBook Air with no fan beats the 96W 2019 Intel 16" MacBook Pro. [1] For Intel, it was downright embarrassing.
We can't blame Apple for designing an inadequate thermal solution for the 2016-2019 MacBook Pro 15" when it was Intel's own product line that couldn't deliver improved performance without consuming more watts.
Computers are supposed to get smaller and faster over time, not bigger and hotter.
That is exactly what I am saying: they must have seen that the thermal solution wasn't enough. MacBooks had been throttling way too much since at least 2015.
The conspiracy theory is that they used a thermal solution that was adequate for the M1 on Intel chips that throttled while browsing the web.
What I'm saying is that the older models (like the 2016 MacBook Pro Touch Bar) didn't throttle very much and were thermally acceptable. Fast forward to Intel's newer chips in 2018-2019, and the same chassis was much more limiting for the newer processor SKUs.
> Apple's 2016 MacBook Pro chassis was designed at the latest, in early 2016. We got the first glimpse of it in a photograph in May of 2016. It looks like Apple is sticking with a four-year chassis design, so it's entirely possible that this is the last year of this enclosure.
> We aren't expecting a thicker machine.
> We've also said this before — we think Apple got hosed by Intel, when they were gearing up for the 2016 MacBook Pro enclosure in 2015. We know that in 2015, Intel was promising delivery of 10nm process Core chips well before now. With any luck, Intel will finally deliver on its promises for a die-shrink that was expected nearly three years ago which will help alleviate the situation further. Or, maybe the next will be ARM-based — we don't know.
Apple’s stop-gap 16” model would prove to be thicker, and the article correctly hinted at a future ARM system.
This quote is important context because Intel had been promising lithography advancements that were delayed.
In 2016 Apple designed a chassis that they wanted to keep around for the next several years, but Intel basically didn’t deliver on their stated roadmap. There was no way to update the system without pushing the thermal solution to its limits or partially redesigning the thing.
As far as thermal performance on 2015 systems (pre-USB-C), those designs are so old that I don’t think replacing Intel was on Apple’s near-term radar.
Comments from someone who speaks as if they always expect the worst of Apple, telling everyone that they expect the worst of Apple, are a dime a dozen and do nothing to add to the conversation.
They will compare to the generation of products that yields the largest expected number of potential buyers.
The largest expected number of potential buyers equals "the probability of upgrading when properly advertised to" multiplied by "the total number of macOS users currently owning the hardware of the generation in question".
"The probability of upgrading when properly advertised to" depends on how big the jump is, not only in performance but cumulatively across every other change made to the product. It also depends on how old the hardware is: people are more likely to make another expensive purchase if the previous one was a long time ago, distributing the financial burden over time.
And "the total number of macOS users currently owning the hardware of the generation in question" is simply measured, since Apple has direct access to those statistics at any time.
For instance, perhaps the probability that an M2 user will upgrade is only 10%, given how recently the M2 was released and how small the difference is between M2-based products (as a whole, not just SoC performance) and M3-based products. And perhaps there are 10 million M2 users. The expected number of potential buyers is 10M * 0.1 = 1M.
Then perhaps the probability that an M1 user will upgrade is 30%, and there are 20 million M1 users. Expected number of potential buyers: 20M * 0.3 = 6M.
Then perhaps the probability that an Intel user will upgrade is 40%, but there are only 10 million Intel users. Expected number of potential buyers: 10M * 0.4 = 4M.
Therefore: most focus goes to comparisons with the M1. Second-tier focus (mention in speech occasionally, and show on charts) goes to comparisons with Intel. Third-tier focus (never mention in speech, but do show on charts) goes to comparisons with the M2. Everything else gets zero focus, to keep the total amount of information presented manageable and avoid confusing and overloading people.
There is nothing malicious here. Just practicality which in this case, dare I say, benefits not only Apple themselves, but their users as well: those who are most likely to feel the need to upgrade as is will receive the most direct and clear information on what they will get out of it.
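The segmentation logic above is easy to sanity-check. Here is a minimal sketch of the expected-buyers calculation, using the purely hypothetical install-base sizes and upgrade probabilities from the example (not real Apple data):

```python
# Expected potential buyers per generation = install base * P(upgrade).
# All numbers are the hypothetical figures from the comment above.
owners = {          # installed base, in millions of users
    "M2": 10,
    "M1": 20,
    "Intel": 10,
}
p_upgrade = {       # assumed probability of upgrading when advertised to
    "M2": 0.10,
    "M1": 0.30,
    "Intel": 0.40,
}

expected_buyers = {gen: owners[gen] * p_upgrade[gen] for gen in owners}

# Rank generations by expected potential buyers, descending:
# this order decides which comparison gets the most marketing focus.
ranking = sorted(expected_buyers, key=expected_buyers.get, reverse=True)

print(expected_buyers)  # {'M2': 1.0, 'M1': 6.0, 'Intel': 4.0}
print(ranking)          # ['M1', 'Intel', 'M2']
```

With these assumed inputs the ranking reproduces the keynote's emphasis: M1 comparisons first, Intel second, M2 last.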
> Probably not because they will compare the M4 to the M2 (because M3 owners are less likely to upgrade).
We already had people in another HN discussion asking why on earth would owners of M2 MacBook Pros not mindlessly upgrade to M3 ones, as if not getting on that upgrade treadmill required any sort of justification.
It's not a good change for AI workloads, which require memory capacity and bandwidth (rather than cache). The generative AI boom (on local machines) started only about a year ago, so maybe it was too late to affect this chip design.
If you're serious about doing generative AI locally, the Max makes more sense anyway: significantly higher capacities, more total compute. For typical generative use cases the Pro's memory is still decently faster than a traditional system's but decently slower than a good GPU's, i.e. not really much different than before.
It's all about market segmentation. Most 'Pro' buyers' application mixes will not feel any significant effect from this restriction. However, those that do will now have to go for the 'Max', and they can afford the premium.
The answer to these sort of questions (OP) is always the same: it was done because it made all the sense in the world to do.
Most likely, going from 150 GB/s to 200 GB/s results in a fairly small improvement (when paired with a processor of M2 Pro / M3 Pro overall capability), and only in a fairly small and specific subset of GPU applications. In particular, it's well established that the extra bandwidth achieves nothing in CPU-bound applications, and that only some GPU applications benefit; I just can't attest to exactly how big (or rather, small) and important that subset is.
With every new process node nowadays the following happens:
1. The cost per unit of area increases substantially. Decades ago the cost per unit of area stayed practically flat, so a 2x smaller node made the same design roughly 2x cheaper, since it took up half the area. Not anymore. The cost per unit of area is now higher, so if a portion of the design doesn't shrink much, it's actually more expensive on the newer node. It takes a large shrink for a design to get cheaper, or even merely stay the same, in per-unit manufacturing cost.
2. IO shrinks very little. Going from 4nm to 3nm yielded only a ~10% IO shrink, versus ~1.25x for SRAM and ~1.7x for logic.
3. DRAM bandwidth is just the product of bus width and DRAM transfer rate, where bus width is really just number-of-DRAM-controllers * 32 bits.
4. DRAM controllers are IO. They barely shrink in area going from 4nm to 3nm, but 3nm is more expensive per unit of area to manufacture. Therefore DRAM controllers of the same design, the same count, and the same bandwidth now cost more money on 3nm.
Most likely, the very marginal and situational performance benefit in a subset of GPU applications that the M2 Pro saw going from 150 GB/s to 200 GB/s was still large enough to justify the relatively low (on 5nm) cost of 8 DRAM controllers (in traditional 32-bit-bus-per-controller terms). On the M3 Pro that performance gain probably dropped below the threshold and became unjustifiable against the increased cost of DRAM controllers, so their number was reduced to 6.
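The arithmetic in point 3 lines up with the marketed figures. A quick sketch, assuming both chips run LPDDR5 at 6400 MT/s (the commonly reported configuration; the transfer rate here is an assumption, not an Apple-published number):

```python
# Peak DRAM bandwidth = bus width in bytes * transfer rate in MT/s.
# Bus width = number of 32-bit DRAM controllers * 32 bits.
# 6400 MT/s LPDDR5 is an assumed transfer rate based on commonly
# reported specs for these chips.

def peak_bandwidth_gbps(controllers, mt_per_s=6400, bits_per_controller=32):
    bus_bits = controllers * bits_per_controller
    bytes_per_transfer = bus_bits // 8
    return bytes_per_transfer * mt_per_s / 1000  # GB/s

m2_pro = peak_bandwidth_gbps(8)  # 8 controllers -> 256-bit bus
m3_pro = peak_bandwidth_gbps(6)  # 6 controllers -> 192-bit bus
print(m2_pro, m3_pro)  # 204.8 153.6
```

The results, 204.8 and 153.6 GB/s, are exactly the "200 GB/s" and "150 GB/s" rounded figures quoted for the M2 Pro and M3 Pro, which is consistent with the dropped-controllers explanation.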
> the M3 Pro system on a chip (SoC) features 150GB/s memory bandwidth, compared to 200GB/s on the earlier M1 Pro and M2 Pro. As for the M3 Max, Apple says it is capable of "up to 400GB/s."
Genuine question, from someone that won’t upgrade from their M1 Air: what difference does it make for most users? Where would that extra 50GB/s be felt?
The M1 doesn't really benchmark much higher in pure CPU tests than the Intel chips it replaced... it was an incremental upgrade, not a generational upgrade; in other words, similar pure-CPU benchmark increases would have been realized when/if the next-gen Intel chips were used.
Instead, most of the felt responsiveness on the M1 comes from the insane memory bandwidth. Everything from launching apps to task swapping to garbage collection events in various languages gets a boost from the lower latency and higher bandwidth.
Okay, I really don't know why the M1 feels so amazingly responsive compared to my i7. But is a 25% loss in memory bandwidth that important for these kinds of tasks? It seems really hard to saturate, say, 100GB per second!
> Instead, most of the felt responsiveness on the M1 comes from the insane memory bandwidth.
False. The extra bandwidth exists for, and meaningfully improves, the performance of the GPU, not the CPU. The CPU cannot even access all of that bandwidth in Apple's M-series designs in the first place:
"While 243GB/s is massive, and overshadows any other design in the industry, it’s still quite far from the 409GB/s the chip is capable of. More importantly for the M1 Max, it’s only slightly higher than the 204GB/s limit of the M1 Pro, so from a CPU-only workload perspective, it doesn’t appear to make sense to get the Max if one is focused just on CPU bandwidth."
Furthermore, even within the bandwidth actually available, it's simply a matter of record that extra bandwidth barely ever makes a dent in CPU-limited (as opposed to GPU-limited) applications. See https://youtu.be/omumzW1AtGE?t=500 for one study on the subject.
> a boost from the lower latency
False. M-series systems have slightly _higher_ system-memory-to-CPU latency, and drastically higher system-memory-to-GPU latency (especially compared to other iGPU solutions).
See https://www.anandtech.com/show/16680/tiger-lake-h-performanc... for the same data for Intel i9-11980HK as a ballpark comparison point: 101 ns. Apple's results are basically inline with everything else in the industry, and are in fact slightly worse on latency.
"DRAM access takes more than 342 ns at the 128 MB test size. Going further sends latency beyond 400 ns, perhaps due to TLB misses. M2 Pro thus has higher DRAM access latency than AMD’s Phoenix, which has similar memory access latency to recent discrete GPUs."
You will see: DRAM latency from the M2 Pro's iGPU is up to 475ns, while the iGPU on AMD's 7840HS peaks (on the bad side) at 270ns. The M2 (non-Pro) does better than the M2 Pro, peaking at around 330ns, which is still roughly 20% worse than AMD.
I'm sorry, but the exact negation of everything you said is in fact true.
Oh good. I was planning to get a Mac for testing purposes, but I wanted something weak so that I could make sure my software is fast on slow devices! /s
Edit: my personal feeling is that this is just a cost-saving measure, like what they did with the base SSDs.