>This is actually why companies moved away from the unified memory arch decades ago.
I don't understand - wouldn't the OS be able to do a better job of dynamically allocating memory between, say, the GPU and CPU in real time based on instantaneous need, as opposed to the buyer doing it one time when purchasing the machine? Apparently not, but I'm not sure what I'm missing.
The usual argument against it is that the CPU and GPU share memory bandwidth, so both can end up starved for memory access.
Apple’s approach is to put the memory dies on the same package as the processor die and connect them with a stupid-wide bus so that everything has enough bandwidth.
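To see why the wide bus matters, peak memory bandwidth is roughly bus width (in bytes) times transfer rate. A quick sketch, using publicly quoted figures as illustrative assumptions (a typical dual-channel DDR5-5600 desktop at 128 bits vs. an M1 Max-class 512-bit LPDDR5-6400 bus):

```python
def bandwidth_gbs(bus_width_bits: int, transfers_per_sec: float) -> float:
    """Peak theoretical bandwidth in GB/s: (bits / 8) bytes per transfer."""
    return bus_width_bits / 8 * transfers_per_sec / 1e9

# Dual-channel DDR5-5600 desktop: 128-bit bus, 5.6 GT/s
print(bandwidth_gbs(128, 5.6e9))   # ~89.6 GB/s
# M1 Max-class part: 512-bit LPDDR5-6400 bus
print(bandwidth_gbs(512, 6.4e9))   # ~409.6 GB/s
```

So the wide on-package bus gives the shared pool several times the bandwidth of a conventional desktop setup, which is what keeps the CPU and GPU from starving each other.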