
> If you were trying and failing to use an LLM for code 6 months ago, you’re not doing what most serious LLM-assisted coders are doing.

This sounds like the "No true Scotsman" fallacy.

> People coding with LLMs today use agents. Agents get to poke around your codebase on their own.

That's a nonstarter for closed source, unless everything is running on-device, which I don't think it is?

> Part of being a senior developer is making less-able coders productive

Speak for yourself. It's not my job.




You can run the agents on your own infrastructure (all the way down to a Mac Mini sitting on your desk), or Microsoft, OpenAI and I'm pretty sure Anthropic can sell you an Enterprise service that guarantees a certain level of confidentiality. I work in aerospace, one of the most paranoid industries, and even we got a Copilot subscription that met our needs...


> You can run the agents on your own infrastructure (all the way down to a Mac Mini sitting on your desk)

How does that work exactly? Do you have a link?

> Microsoft, OpenAI and I'm pretty sure Anthropic can sell you an Enterprise service that guarantees a certain level of confidentiality

These companies hoovered up all of our content without notice, permission, or compensation, to train their models. I wouldn't trust them one bit. My personal opinion is that it's foolish to trust them.

> I work in aerospace, one of the most paranoid industries

Paranoid about what exactly?


If you're not going to trust them when they say "here is a contract that guarantees we won't train on your data" because they trained on a scrape of the web, you're never going to get the benefit from these tools. I guess that's your call. I choose to believe companies when they contractually oblige themselves not to do things.


Microsoft and Google are, of course, both famously known for studiously obeying contracts and the law, and for never stabbing their partners in the back when it goes against their monetary interests.


> How does that work exactly? Do you have a link?

https://ollama.com lets you run models on your own hardware and serve them over a network. Then you point your editor at that server, e.g. https://zed.dev/docs/ai/configuration#ollama
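As a rough sketch of that setup (the model name is just an example, and 11434 is Ollama's default port; nothing here is specific to the parent's configuration):

```shell
# Fetch a model and serve it locally. The ollama daemon listens on
# http://localhost:11434 by default (the desktop app usually starts it for you).
ollama pull qwen2.5-coder:7b   # example model; pick whatever fits your hardware
ollama serve

# Sanity-check that the server answers before wiring up your editor:
curl http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5-coder:7b", "prompt": "hello", "stream": false}'
```

Your editor (Zed, per the link above) then just needs the server URL in its settings; nothing ever leaves the machine.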


Don't use Ollama; use llama.cpp instead.
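For reference, llama.cpp ships its own small HTTP server that exposes an OpenAI-compatible API, so the same "point your editor at localhost" workflow applies. A minimal sketch (the model path and port are placeholders, not anything from the thread):

```shell
# llama-server is built as part of llama.cpp; it loads a GGUF model
# and serves an OpenAI-compatible API on the given port.
llama-server -m ./models/some-model.gguf --port 8080

# Editors or clients that speak the OpenAI API can then target
# http://localhost:8080/v1
curl http://localhost:8080/v1/models
```

The main practical difference from Ollama is that you manage the model files yourself rather than pulling them from a registry.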



