Llama2.rs: One-file Rust implementation of Llama2 (github.com/srush)
60 points by sshroot on Aug 5, 2023 | hide | past | favorite | 10 comments


Very nice! I wanted to do something like this, but then I would miss out on proper CUDA acceleration and lose performance compared to using libtorch.

I wrote a forgettable llama implementation on top of https://github.com/LaurentMazare/tch-rs (a Rust binding to PyTorch's libtorch). Still not ideal, but at least you get the same GPU performance you would get with PyTorch.

...And then I spotted Candle, a new ML framework by the same author: https://github.com/huggingface/candle

It's all in Rust and self-contained. A huge undertaking, but it looks very promising. They already have a llama2 example!


(since you asked for a code review)

For timing benchmarks, use `Instant` (or a similar monotonic clock) instead of `SystemTime`.

The original C code makes the same mistake, using CLOCK_REALTIME instead of CLOCK_MONOTONIC.

This means the benchmarks will be wrong if the program runs while NTP is adjusting the clock. That can happen right after the system gets internet access, or periodically when it checks for skew. Some systems also slew the clock gradually to apply NTP corrections, which means 1 second of calendar time is not 1 second of monotonic time over a long period.

At least it won't be affected by daylight saving time. But it's not airtight.
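A minimal sketch of what the suggestion looks like in Rust; the loop body is a made-up stand-in for the token-generation work being timed:

```rust
use std::time::Instant;

fn main() {
    // Instant is a monotonic clock: it never goes backwards, even if
    // NTP steps or slews the wall clock while the benchmark is running.
    // SystemTime, by contrast, tracks calendar time and can jump.
    let start = Instant::now();

    // Stand-in for the real work (e.g. the token-generation loop).
    let mut acc = 0u64;
    for i in 0..1_000_000u64 {
        acc = acc.wrapping_add(i);
    }

    let elapsed = start.elapsed();
    println!("work took {:?} (acc = {})", elapsed, acc);
}
```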


anyone have suggestions about where to learn about the stuff going on in this (and llama2.c repo)?

like the file formats, the extra files such as tokenizer.bin, and the terminology in the source comments: logits, transformers, etc.


This is my first Rust project, so if you are an expert I would love a code review!

Seeing a few uses of `unsafe` and a few of `expect`. Wonder if you can mmap the binary model in without unsafe?


> Wonder if you can mmap the binary model in without unsafe??

Some operating systems do provide the proper guarantees to make mmap safe, but Rust decided it was best to assume it's unsafe and keep a uniform API. Which is probably a good call: it is notoriously difficult to make even a read-only mmap entirely safe on Linux.

https://docs.rs/mmap-rs/latest/mmap_rs/struct.MmapOptions.ht...
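If avoiding `unsafe` matters more than the memory savings of mapping, one option is to read the whole file into an owned buffer with safe std APIs. A sketch (the flat little-endian f32 layout here is a simplifying assumption, not the actual llama2 file format):

```rust
use std::fs;
use std::io;

/// Load a binary file of little-endian f32 weights into a Vec,
/// entirely in safe Rust. Unlike mmap, this copies the data, so
/// later changes to the file on disk can't invalidate our memory.
fn load_weights(path: &str) -> io::Result<Vec<f32>> {
    let bytes = fs::read(path)?;
    // chunks_exact + from_le_bytes sidesteps the alignment and
    // aliasing hazards of casting a byte pointer to *const f32.
    Ok(bytes
        .chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect())
}

fn main() -> io::Result<()> {
    // Hypothetical usage: write a tiny demo file, then load it back.
    let demo: Vec<u8> = [1.0f32, 2.0, 3.0]
        .iter()
        .flat_map(|f| f.to_le_bytes())
        .collect();
    fs::write("demo.bin", demo)?;
    println!("{:?}", load_weights("demo.bin")?);
    Ok(())
}
```

The tradeoff is doubling peak memory while loading and losing lazy paging, which is exactly why the mmap approach is attractive for multi-gigabyte models.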


It’s not safe under Rust’s definition of safe.

If you map the file into read-only memory and then get references to it, the underlying memory can still mutate (e.g., by modifying the file itself).


This has two dependencies. Notably, it depends on rayon.


Is that bad? A lot of high-performance apps rely on rayon, don't they?


A lot of these single-file LLM ports, like the original llama.cpp and the llama2.cpp posted here a bit ago, are fully self-contained.


I think that with C++ projects the only goal is to keep the build simple, since building C++ sucks. That's also the reason there are so many header-only libraries.

Stack Overflow, "Benefits of header-only libraries": https://stackoverflow.com/questions/12671383/benefits-of-hea...



