Hacker News
syntaxing | 6 months ago | on: Benchmark Framework Desktop Mainboard and 4-node c...
Kinda bummed. I get why he used Ollama, but I feel like using llama.cpp directly would provide better and more consistent results.
RossBencina | 6 months ago
I heard that ik_llama.cpp performs better for CPU use: https://github.com/ikawrakow/ik_llama.cpp/
mkl | 6 months ago
As the article describes, most of this was done with llama.cpp, not Ollama.
syntaxing | 6 months ago
Ahh, good catch. I didn't notice that if you scroll lower, he has the llama.cpp results. The ollama-benchmark repo name is a misnomer.
geerlingguy | 6 months ago
I'm slowly migrating all my testing to https://github.com/geerlingguy/beowulf-ai-cluster