
A question for anyone who has tried this model: what has performance been like? I'm running on an old CPU, and for some reason llama2-13b generates roughly twice as fast as mistral-7b; I'm not sure why.
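In case it helps compare numbers: here's a minimal way to measure tokens/sec, assuming you're running GGUF quants through llama-cpp-python (the model filenames and thread count below are just placeholders, swap in whatever you're actually using).

  import time
  from llama_cpp import Llama

  for path in ["llama-2-13b.Q4_K_M.gguf", "mistral-7b.Q4_K_M.gguf"]:  # hypothetical filenames
      llm = Llama(model_path=path, n_threads=4, verbose=False)
      start = time.perf_counter()
      out = llm("Explain KV caching in one paragraph.", max_tokens=128)
      elapsed = time.perf_counter() - start
      n_tokens = out["usage"]["completion_tokens"]
      print(f"{path}: {n_tokens / elapsed:.1f} tokens/sec")

Posting the tokens/sec for both models (and which quantization you used) would make it easier to tell whether the gap is the model itself or something like quant format or thread settings.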


