It's unified RAM. So that memory is also used by the GPU and the Neural Engine (which powers Apple Intelligence).
This is actually why companies moved away from the unified memory arch decades ago.
It'll be interesting to see, as AI continues to advance, whether Apple is forced to depart from its unified memory architecture due to growing GPU memory needs.
If it's the shift I think you're referring to, I find it strange that you compare computing decisions from the 50s and 60s to today. You're correct, but that was over half a century ago. The reasons for those decisions, such as bus speeds, high latency, and low bandwidth, no longer apply.
Today, the industry is moving toward unified memory. This trend includes not only Apple but also Intel, AMD with their APUs, and Qualcomm. Pretty much everyone.
To me, the benefits are clear:
- Reduced copying of large amounts of data between memory pools.
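That first benefit can be illustrated with a toy sketch in plain Python (this is not real GPU code; the buffer size is an arbitrary assumption, and the "copy" just stands in for a host-to-VRAM transfer):

```python
import time

SIZE = 64 * 1024 * 1024  # 64 MiB buffer (arbitrary, for illustration)
data = bytearray(SIZE)

# Discrete-GPU model: the buffer must be duplicated into a separate pool.
start = time.perf_counter()
vram_copy = bytes(data)            # stands in for a host-to-device copy
copy_time = time.perf_counter() - start

# Unified-memory model: CPU and GPU address the same physical pages,
# so "handing off" the buffer is just passing a reference.
start = time.perf_counter()
shared_view = memoryview(data)     # no bytes are moved
share_time = time.perf_counter() - start

print(f"copy: {copy_time * 1e3:.2f} ms, share: {share_time * 1e6:.2f} us")
```

The copy path scales with buffer size; the shared path is constant-time, which is the whole point of keeping one memory pool.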
>This is actually why companies moved away from the unified memory arch decades ago.
I don't understand: wouldn't the OS be able to do a better job of dynamically allocating memory between, say, the GPU and CPU in real time based on instantaneous need, as opposed to the buyer doing it one time when purchasing their machine? Apparently not, but I'm not sure what I'm missing.
The usual reasoning people give for it being bad is: the CPU and GPU share memory bandwidth, so under load both can end up starved for memory access.
Apple’s approach is to put the memory dies on the same package as the processor die and connect them with a stupid-wide bus so that everything has enough bandwidth.
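The "stupid-wide bus" claim checks out on the back of an envelope. A rough sketch, using the publicly quoted M1 Max figures (512-bit bus, LPDDR5-6400) as assumptions:

```python
# Back-of-envelope memory bandwidth: bus width (bits) x transfer rate.
# Numbers below are the publicly quoted M1 Max specs, used as assumptions.
bus_width_bits = 512        # M1 Max memory bus width
transfer_rate_gtps = 6.4    # LPDDR5-6400: 6.4 giga-transfers/s per pin

bandwidth_gbps = bus_width_bits / 8 * transfer_rate_gtps
print(f"{bandwidth_gbps:.1f} GB/s")  # prints "409.6 GB/s"
```

That lands right at Apple's advertised ~400 GB/s for the M1 Max, several times what a typical dual-channel desktop memory setup delivers, which is how the shared pool avoids starving the GPU.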
Depart? They just got there, didn't they? And on purpose. There's more memory bandwidth, and also no need to copy from main memory to VRAM. Why would they bail on it?
I think they moved away because system memory was lagging behind the memory used on video cards in speed?
And besides, what Apple is doing is placing the RAM really close to the SoC; I think they're even on the same package, which AFAIK was not the case on the PC either?
Apple has an Arm license, but they still buy memory from Samsung (and others). It's not on the M-series die itself; it's supplied by Samsung and then packaged alongside the die.
At this point it feels like (correct me if I'm wrong) Apple's AI is often performed "in the cloud". I suspect, though, that if Apple moves increasingly to on-device AI (as I suspect they will, if not for bandwidth and backend resource reasons then for privacy ones), Apple Silicon will adopt more and more specialized AI components, perhaps diminishing the need for off-board memory.
Last I checked, Apple was pretty much the only major player that runs everything it can on-device; that's their whole ethos behind it, no?
I mean, there is no need to speculate about any of this, they've put out a number of articles that outline their whole approach. I'm not really sure where the ambiguity lies?