
Any agentic dev software you could recommend that runs well with local models?

I’ve been using Cursor and I’m kind of disappointed. I get better results just going back and forth between the editor and ChatGPT

I tried localforge and aider, but they are kinda slow with local models





I used Devstral today with Cline and OpenHands. Worked great in both.

About 1 minute of initial prompt processing time on an M4 Max.

Using LM Studio because the Ollama API breaks if you set the context to 128k.
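For reference, this is roughly what asking for a ~128k context through Ollama's generate endpoint looks like; treat it as a sketch, the model tag and default localhost port are just assumptions about your setup:

    import requests

    # Sketch: request a ~128k context window via the Ollama generate API.
    # "devstral" and the default port are assumptions; swap in your own.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "devstral",
            "prompt": "Summarize this project's build steps.",
            "stream": False,
            "options": {"num_ctx": 131072},  # ~128k tokens of context
        },
        timeout=600,
    )
    print(resp.json().get("response", resp.text))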


How is it great that it takes 1 minute for initial prompt processing?


Haha, great as in surprisingly good at some simple things that nothing else has been able to do locally for me.

The 1-minute time to first token sucks and has me dreaming of the day we get 3-4x the bandwidth.


That time is just for the very first prompt. It is basically the startup time for the model. Once it is loaded, it is much, much faster at responding to your queries, depending on your hardware of course.

Have you tried using MLX or Simon Willison's llm?

https://llm.datasette.io/en/stable/

https://simonwillison.net/tags/llm/
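If you want to kick the tires, a minimal sketch of llm's Python API with a local model; this assumes the llm-ollama plugin is installed and that a model like "devstral" shows up in your `llm models` listing:

    import llm

    # Sketch of llm's Python API against a local model.
    # "devstral" is an assumption; use whatever `llm models` lists for you.
    model = llm.get_model("devstral")
    response = model.prompt("Write a one-line commit message for renaming a module.")
    print(response.text())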


In LM Studio I was using MLX.

You can use Ollama in VS Code's Copilot. I haven't personally tried it, but I am interested in how it would perform with Devstral.


Do you have any other interface for the model? What kind of tokens/sec are you getting?

Try hooking Aider up to Gemini and see how the speed compares. I have noticed that people in the LocalLLaMA scene do not like to talk about their TPS.
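If anyone does want a number to share, a rough way to pull one out of Ollama: the non-streaming generate response reports eval counts and durations (in nanoseconds). The model tag below is just an example, and the prompt-eval fields can be missing if the prompt was fully cached:

    import requests

    # Rough TPS measurement from Ollama's response stats.
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "devstral", "prompt": "Explain a Makefile in two sentences.", "stream": False},
        timeout=600,
    ).json()

    prompt_tps = r["prompt_eval_count"] / (r["prompt_eval_duration"] / 1e9)
    gen_tps = r["eval_count"] / (r["eval_duration"] / 1e9)
    print(f"prompt: {prompt_tps:.1f} tok/s, generation: {gen_tps:.1f} tok/s")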


The models feel pretty snappy when interacting with them directly via Ollama; not sure about the TPS.

However, I've also run into two things: 1) most models don't support tools, and it's sometimes hard to find a version of a model that correctly uses them; 2) even with good TPS, since the agents are usually doing chain-of-thought and running multiple chained prompts, the experience feels slow. That's true even with Cursor using their own models/APIs.
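For 1), a quick way I'd try to probe whether a given Ollama model actually emits tool calls; the model name and the "read_file" tool here are placeholders for illustration, not anything standard:

    import requests

    # Rough probe: send one OpenAI-style tool definition to Ollama's /api/chat.
    tools = [{
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read a file from the workspace",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }]

    r = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "devstral",
            "messages": [{"role": "user", "content": "Open README.md and tell me the project name."}],
            "tools": tools,
            "stream": False,
        },
        timeout=600,
    ).json()

    # Models without tool support usually return an error here; models that do
    # support tools should include message.tool_calls in the response.
    print(r.get("error") or r.get("message", {}).get("tool_calls"))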


People have all sorts of hardware, and TPS is meaningless without the full spec. The GPU is not the only thing that matters: CPU, RAM speed, memory channels, PCIe speed, inference software, partial CPU offload, RPC, even the OS, all of these things add up. So if someone tells you the TPS for a given model, it's meaningless unless you understand their entire setup.

I’ve been playing around with Zed; it supports local and cloud models, is really fast, and has a nice UX. It does lack some of the deeper features of VS Code/Cursor, but it's very capable.


ra-aid works pretty well with Ollama (I haven't tried it with Devstral yet, though).

https://docs.ra-aid.ai/configuration/ollama/



