
Claude Code isn’t a complete agent - it cannot open PRs autonomously AFAIK



Yeah it can, either through MCP or by running git/gh via Bash. It's a glaring omission and calls the data into question. How is attribution done? If it's via the agent crediting itself in commit messages, that's a problem: Claude Code, for example, has a config parameter that lets you tell it not to credit itself. With Claude Code completely missing, I'd say this is wildly inaccurate.
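
For reference, the self-attribution switch lives in Claude Code's settings file (~/.claude/settings.json, or a project-level .claude/settings.json). If I'm remembering the field name right, setting it to false suppresses the "Co-Authored-By: Claude" trailer it otherwise appends to commits:

    {
      "includeCoAuthoredBy": false
    }

So any study that attributes commits by that trailer undercounts everyone who flipped this off.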


The problem with Claude Code is it doesn't let you walk away. You have to press yes yes yes yes yes yes yes 500 times.

Glad it’s missing until they fix this.


It has a fine-grained permissions configuration file, and every permission prompt has three answer options: "yes", "yes, and don't ask again", and "no". There's also the '--dangerously-skip-permissions' flag. Out of the 20+ AI coding tools I've tried, Claude Code has the best permission options.
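
A minimal sketch of what the allow/deny rules look like in .claude/settings.json, going from the public docs (the tool-matcher syntax may vary by version):

    {
      "permissions": {
        "allow": [
          "Bash(git diff:*)",
          "Bash(npm run test:*)"
        ],
        "deny": [
          "Bash(curl:*)"
        ]
      }
    }

With rules like these pre-approved, the yes-yes-yes loop mostly disappears. The flag is the blunt alternative: claude --dangerously-skip-permissions.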


Actually, you can enable everything, or have fine-grained control over specifics. Or you just manually approve and ask not to be prompted again. Sounds like you're more of a dabbler.


Or you stick it in Docker. Or actually configure your permissions. RTFM.
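
A minimal sketch of the Docker route, assuming API-key auth and the npm package name from the docs; mount only the repo, so the skipped permissions can't touch anything else on the host:

    docker run --rm -it \
      -e ANTHROPIC_API_KEY \
      -v "$PWD":/workspace -w /workspace \
      node:20 bash -c \
      "npm install -g @anthropic-ai/claude-code && claude --dangerously-skip-permissions"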


Yeah, sorry you got downvoted, but my inclination is pretty much the same: RTFM. Honestly though, I'm very excited by how few developers are using the most powerful tools available. Huge opportunity for the rest of us willing to adapt to having our cheese moved, and willing to put in the work.


I do love when the reaction to "here's a tool that can do everything when asked correctly" (i.e., a compiler for arbitrary human artifacts) is to not read the manual. I remember a dude on this site complaining that 4o-mini had only superficial opinions when analyzing a particular poem; it turned out the fellow hadn't even supplied the LLM with the text of the poem. His argument was that this was like criticizing someone for their hammer being 2.7mm off center. Utterly ridiculous; LLMs are not psychic, they just have approximate knowledge of many things.

People seem to love setting them up to fail. My favorite "demonstration" is showing LLMs messing up multiplication of large numbers. If only the LLMs had access to some sort of machine that could do multiplication well...
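
That machine is one tool definition away. In the Anthropic Messages API's tool-use format, for example, an exact-multiplication tool looks roughly like this sketch (the model emits the arguments; your code does the arithmetic and returns the result):

    {
      "name": "multiply",
      "description": "Multiply two integers exactly",
      "input_schema": {
        "type": "object",
        "properties": {
          "a": { "type": "integer" },
          "b": { "type": "integer" }
        },
        "required": ["a", "b"]
      }
    }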


I feel your pain lol. Just gotta let people learn on their own time, I suppose.


Oh. I thought they had fixed it. Nothing new. CC still not ready for prime time.


If the gh CLI tool is installed, it will do it without any special prompts/requests/instructions, no problem.
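
Under the hood it's nothing exotic; the agent just shells out to something like the following (branch name and messages are placeholders):

    git checkout -b fix-null-deref    # hypothetical branch name
    git add -A && git commit -m "Fix null deref in parser"
    git push -u origin fix-null-deref
    gh pr create --title "Fix null deref in parser" --body "Generated with Claude Code"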



