
Zig actually has a very nice abstraction for SIMD in the form of vector programming. Vector lengths are independent of the underlying CPU architecture: the compiler (via LLVM) generates code using 128-, 256-, or 512-bit SIMD registers as available, and you just program with plain vectors.
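For anyone who hasn't played with it, a minimal sketch of what that looks like (assuming a reasonably recent Zig; @Vector, element-wise operators, and vector-to-array coercion are builtin behavior):

    const std = @import("std");

    pub fn main() void {
        const V4 = @Vector(4, f32);
        const a: V4 = .{ 1.0, 2.0, 3.0, 4.0 };
        const b: V4 = .{ 10.0, 20.0, 30.0, 40.0 };
        // Element-wise add; the backend lowers this to whatever SIMD
        // width the target actually has (or scalar code if it has none).
        const c = a + b;
        const out: [4]f32 = c; // vectors coerce to fixed-size arrays
        std.debug.print("{any}\n", .{out});
    }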


Rust has that too: nalgebra if you want the arbitrarily sized tensors that scientific computing calls for, or glam and similar crates if your needs are more modest, as in graphics. In all cases they're SIMD-accelerated.


I do generally like their approach. It's especially well suited given how easily comptime lets you metaprogram against the target's register size.
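Something like this is what I have in mind, as a rough sketch (assuming a recent Zig where std.simd.suggestVectorLength exists; older releases called it suggestVectorSize, and @splat used to take an explicit length):

    const std = @import("std");

    // Pick a lane count for the build target at comptime; fall back to a
    // single lane if the target reports no usable vector width.
    const lanes = std.simd.suggestVectorLength(f32) orelse 1;
    const Vf = @Vector(lanes, f32);

    fn sum(data: []const f32) f32 {
        var acc: Vf = @splat(0.0);
        var i: usize = 0;
        while (i + lanes <= data.len) : (i += lanes) {
            const chunk: Vf = data[i..][0..lanes].*;
            acc += chunk;
        }
        var total: f32 = @reduce(.Add, acc);
        while (i < data.len) : (i += 1) total += data[i];
        return total;
    }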

I wish it had a few more builtins for commonly supported operations so I didn't have to write inline assembly (e.g., runtime LUTs are basically untenable for implementing something like Bolt [0] without inline asm; see the sketch below), but otherwise the abstraction level is about where I'd like it to be. I usually prefer it to GCC intrinsics, fully hand-written inline asm, and other such shenanigans.

[0] https://arxiv.org/abs/1706.10283
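To make the LUT complaint concrete: @shuffle covers permutations whose lane mapping is known at comptime, but a pshufb-style lookup where the indices are runtime data (which is what Bolt's kernels need) has no portable builtin today, so you're left with inline asm or per-target intrinsics. Rough sketch of the distinction:

    const V16 = @Vector(16, u8);

    // Fine: the permutation is comptime-known, so @shuffle expresses it
    // and the backend can pick the right instruction.
    fn reverse(v: V16) V16 {
        const mask: @Vector(16, i32) =
            .{ 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 };
        return @shuffle(u8, v, undefined, mask);
    }

    // Not expressible portably: here the indices are runtime data, i.e. a
    // table lookup per lane. On x86 that's a single pshufb, but getting it
    // from Zig's vector builtins means dropping to inline asm.
    // fn lookup(table: V16, idx: V16) V16 { ... }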


Isn't that what std::simd is for in Rust?


But Zig lacks intrinsics support, and not every single SIMD instruction set is exposed through the abstraction.


Yeah, the article overlooked library support for SIMD. nalgebra had a decent writeup on their ability to squeeze out autovectorization for their vector and matrix types.



