Hacker News

Seems fair.

Idly speculating: if this architecture favours the characteristics of higher-level languages (I'm thinking of the widely reported measurements of the primitives used in automatic memory management), relatively disfavours straight-line, branchless, SIMD-ready streaming algorithms, and also ships with a matrix-friendly neural coprocessor... could that invite a change in the types of programs that perform best?

That is, is it possible that good algorithms nicely structured and straightforwardly written in higher-level languages with good separation of concerns might actually get the most benefit? Or is this a daydream?




Apple is in a good position to create a software-hardware symbiosis: they can tune their compilers to their own hardware so that the combination is more efficient than any other pairing of hardware and software. If that combination also rewards best practice in their own languages, that's a winning position.



