
For many linear-algebra-heavy workflows (numpy, R, Julia, etc.), I expect that AMD and especially Intel processors with AVX-512 will crush the M1 on real-world benchmarks. But this isn’t a reflection of RISC vs CISC, and Apple could choose to add hardware acceleration for wider instructions; hopefully it will in the future.
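A rough numpy sketch of the kind of BLAS-bound workload I mean, where wide vector units (AVX2/AVX-512) tend to dominate (the matrix size is arbitrary):

    # Dense matmul: ~2*n^3 FLOPs, dispatched to whichever BLAS numpy
    # is linked against (MKL/OpenBLAS on x86, Accelerate on the M1).
    import time
    import numpy as np

    n = 4096
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - start
    print(f"{2 * n**3 / elapsed / 1e9:.1f} GFLOP/s")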



Given that the Intel MBA 2020 shipped with AVX-512, this would even be a fair comparison.


Anandtech / Tom’s Hardware, if you’re there, please do this!!


It looks like simdjson doesn't support AVX-512, so there couldn't be a direct comparison with this article. I recently got a Tiger Lake laptop, though, so if anyone has a good means for comparison I'd be interested.
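If it helps, here's the rough harness I'd use for a cross-machine comparison, assuming the pysimdjson bindings (pip install pysimdjson); the input file is a placeholder for any large JSON document:

    import json
    import time

    import simdjson  # pysimdjson bindings over the C++ library

    data = open("twitter.json", "rb").read()  # placeholder input

    for name, parse in [("stdlib json", json.loads),
                        ("simdjson", simdjson.Parser().parse)]:
        start = time.perf_counter()
        for _ in range(100):
            parse(data)
        elapsed = time.perf_counter() - start
        print(f"{name}: {100 * len(data) / elapsed / 1e9:.2f} GB/s")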


> I expect that AMD and especially Intel processors with AVX-512

AMD processors do not support AVX-512 (yet?).


(AMD processors) AND (Intel processors with AVX-512)



IIRC it's 2x256 on the 3xxx series. I doubt a 1x512 is available on the 5xxx series.


AVX-512 isn't available on consumer CPUs, so it's not relevant here.


AVX-512 has been on some consumer CPUs since last year (e.g. Ice Lake in the Surface Laptop 3).


It’s available in the MacBook Air 2020 Intel version, and it works well.


Ice Lake, as for example found in the 13” Intel MBP Apple still sells, has AVX-512.


As well as Tiger Lake (though those aren't in Macs)
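Easy enough to check what a given machine actually exposes. A quick sketch for Linux (on macOS you'd grep `sysctl -a` for avx512 features instead):

    # Print whatever AVX-512 feature flags the CPU reports (Linux).
    with open("/proc/cpuinfo") as cpuinfo:
        flags = next(line for line in cpuinfo if line.startswith("flags")).split()
    print(sorted(flag for flag in flags if flag.startswith("avx512")))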


But then again, if your workflow is linear algebra heavy, shouldn't you be doing that on a workstation or a cluster and not your little MacBook? Given that you are probably working over Jupyter notebooks, SSH, or some cloud IDE anyway, wouldn't the new ARM MacBooks provide a better user experience?


> But then again, if your workflow is linear algebra heavy, shouldn't you be doing that on a workstation or a cluster and not your little MacBook?

Blender, Gimp / Photoshop, Video Editing, LTSpice / PSpice and Matlab come to mind. These are consumer-ish workflows that benefit from linear algebra, but people want to do them on their laptops.

Hell, people are doing video editing on their PHONES these days, due to the convenience.

----------

Workstations and clusters are not affordable for the vast majority of users.

GPUs probably are affordable, however. But these programs aren't really running on GPUs yet (I mean, Blender and some video editing programs are... but LTSpice / Matlab are still CPU-only).


Clusters are affordable, given that cloud hosting is a commodity now. Digital Ocean, for instance, charges nothing for traffic between nodes if they are hosted at the same data centre.

Julia, in particular, has the interesting-looking JuliaHub service in the pipeline: https://www.youtube.com/watch?v=JVUJ5Oohuhs&feature=youtu.be...
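For the numpy-style workflows upthread, that can look like a few lines of Dask from the laptop. A sketch, assuming a scheduler is already running on the cluster (the address is a placeholder):

    import dask.array as da
    from dask.distributed import Client

    # Placeholder address of a scheduler running on the cluster.
    client = Client("tcp://scheduler.example.com:8786")

    # The chunked matmul executes on the cluster's workers, not locally.
    x = da.random.random((50_000, 50_000), chunks=(5_000, 5_000))
    print((x @ x.T).sum().compute())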


That's not compute, and it has nothing to do with the latency and bandwidth you would need to get the same interactivity.

Digital Ocean moving data internally for free has nothing to do with offloading video editing from a laptop.


A lot of cloud GPU providers charge $1000 per node per month.

Also AWS and GCP are not providing a commodity service. They charge hefty margins.


I think we effectively agree with each other and the author. The M1 is a better day-to-day processor but not an AMD/Intel killer for (edit: some) compute-heavy workflows... (yet?). The discussion is more pertinent for the Mac Pro, where the current M1 would be worse than last-gen Intel for some common workflows on those devices.


The test will be the M1X Apple is apparently pushing for MBP or iMac usage[0].

0: https://news.ycombinator.com/item?id=25225764


An attached Intel processor for certain workflows is not unprecedented.

Macintosh Quadra 610 DOS Compatible: Technical Specifications https://support.apple.com/kb/SP227?locale=en_US

Pictures: http://www.applefool.com/applefool/Quadra_610_%28DOS_Compati...


I agree if you add the clarifying statement ...not an AMD/Intel killer for __some__ compute-heavy workflows...

It’s clear there are other workflows which some people characterize as “compute-heavy” where the M1 is superior.


Totally fair, edited


>if your workflow is linear algebra heavy, shouldn't you be doing that on a workstation or a cluster and not your little MacBook?

Not necessarily; such math & ML inference workloads are even done on a Raspberry Pi and other ARM SBCs for numerous CV and other projects requiring edge compute.
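For example, a minimal onnxruntime sketch of the kind of inference that runs fine on ARM boards (the model file and input shape are placeholders):

    import numpy as np
    import onnxruntime as ort  # ships ARM builds that run on a Pi

    session = ort.InferenceSession("model.onnx")  # placeholder model
    input_name = session.get_inputs()[0].name

    # Dummy input shaped for a typical 224x224 image classifier.
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = session.run(None, {input_name: x})
    print(outputs[0].shape)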


I do local work in numpy/pandas all the time on my MBP.

Large data size or ML training is where I use my cluster and/or GPU computers.



