I think you're being paranoid here :-). I encourage you to download Mojo and try it out. This code is all OSS, so go nuts validating it yourself. If you'd like to know how Mojo works, there is a lot of information on the Modular blog: https://www.modular.com/blog

e.g. these might be interesting:

https://www.modular.com/blog/mojo-llvm-2023 https://www.modular.com/blog/what-is-loop-unrolling-how-you-...

If you still have doubts, you could join the 20,000+ people on Discord chatting about Mojo stuff: https://discord.com/invite/modular

-Chris

Chris, how would you respond to the remark that the article is comparing a flawed Mojo implementation against a more correct Rust implementation? https://news.ycombinator.com/item?id=39296559

> > The TL;DR is that the Mojo implementation is fast because it essentially memchrs four times per read to find a newline, without any kind of validation or further checking. The memchr is manually implemented by loading a SIMD vector, comparing it to 0x0a, and continuing if the result is all zeros. This is not a serious FASTQ parser. It cuts so many corners that it isn't really comparable to other parsers (although I'm not crazy about Needletail's somewhat similar approach either).

> > I implemented the same algorithm in < 100 lines of Julia and was >60% faster than the provided needletail benchmark, beating Mojo. I'm confident it could be done in Rust, too.

As far as I know, the Mojo implementation uses the same algorithm as the baseline Rust implementation; the person commenting on that is complaining about the Rust impl as well.
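
For readers following the thread, here is a minimal, hypothetical sketch in Rust of the newline-scanning technique the quoted comment describes: load 16 bytes, compare every lane against 0x0a, and skip ahead whenever the comparison mask is all zeros. It assumes an x86_64 target with SSE2 and is only an illustration of the idea, not the actual Mojo, needletail, or baseline Rust code.

    // Hypothetical sketch only: scan 16 bytes at a time for '\n' (0x0a),
    // as the quoted comment describes. Assumes an x86_64 CPU with SSE2.
    // Not the actual Mojo, needletail, or baseline Rust implementation.
    use std::arch::x86_64::*;

    fn find_newline(haystack: &[u8]) -> Option<usize> {
        const LANES: usize = 16;
        let mut i = 0;
        unsafe {
            let needle = _mm_set1_epi8(0x0a); // broadcast '\n' into all 16 lanes
            while i + LANES <= haystack.len() {
                // Load 16 bytes and compare each lane against '\n'.
                let chunk = _mm_loadu_si128(haystack.as_ptr().add(i) as *const __m128i);
                let mask = _mm_movemask_epi8(_mm_cmpeq_epi8(chunk, needle));
                if mask != 0 {
                    // Lowest set bit = first matching byte in this chunk.
                    return Some(i + mask.trailing_zeros() as usize);
                }
                i += LANES; // mask is all zeros: no newline here, keep scanning
            }
        }
        // Scalar fallback for the final partial chunk.
        haystack[i..].iter().position(|&b| b == b'\n').map(|p| i + p)
    }

    fn main() {
        let record = b"@read1\nACGTACGTACGTACGTACGT\n+\nIIIIIIIIIIIIIIIIIIII\n";
        assert_eq!(find_newline(record), Some(6)); // newline after the header line
        println!("first newline at byte {:?}", find_newline(record));
    }

A production FASTQ parser would also validate the record structure rather than only hunting for newlines, which is exactly the corner-cutting the quoted comment objects to.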