I vaguely remember my computer architecture course in college describing how essentially everything we have today suffers from the von Neumann bottleneck (the limited channel between processor and memory), and that even if we implemented a lambda calculus machine, it would still suffer from that bottleneck. Has this been revisited in academia lately? I am a bit out of touch, but this has always interested me.
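
To make my own question concrete, here's a toy sketch (purely my own illustration, not from any paper or real machine): even a naive lambda-calculus evaluator ends up being a loop of heap allocations and pointer dereferences, which is exactly the processor-to-memory traffic the bottleneck is about.

```python
# A minimal, hypothetical sketch: a tiny lambda-calculus evaluator.
# Every beta-reduction below is just a chain of heap reads and fresh
# allocations -- i.e., traffic over the same processor/memory path.

from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

def substitute(term, name, value):
    # Each substitution walks the term graph: one memory read per node,
    # plus a fresh allocation for every rebuilt node.
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.param == name:
            return term
        return Lam(term.param, substitute(term.body, name, value))
    return App(substitute(term.fn, name, value),
               substitute(term.arg, name, value))

def reduce_step(term):
    # One beta-reduction: (\x. body) arg  ->  body[x := arg]
    if isinstance(term, App) and isinstance(term.fn, Lam):
        return substitute(term.fn.body, term.fn.param, term.arg)
    if isinstance(term, App):
        return App(reduce_step(term.fn), term.arg)
    return term

# (\x. x) y  reduces to  y -- but only after a round of loads and stores.
identity_applied = App(Lam("x", Var("x")), Var("y"))
print(reduce_step(identity_applied))  # Var(name='y')
```

So my (possibly outdated) understanding was that a "non-von-Neumann" evaluation model still ends up serialized through memory like this unless the hardware itself changes.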