
Isn’t this the thing AI is going to claim to solve? A project exists, a user writes a feature request, the AI codes up the changes, pushes a new release, and everyone is happy. That’s the sales pitch.

The big issue with this, even if it works perfectly every time, is that there is no one at the core of the project with some vision and taste, who is willing to say “no” to bad ideas or things outside the scope of the project. We’d end up seeing a lot of bloat over time. I’m sure AI will claim to solve that too: just have it code up a new, lightweight project. The project sprawl will be endless.



> The big issue with this, even if it works perfectly every time, is that there is no one at the core of the project with some vision and taste, who is willing to say “no” to bad ideas or things outside the scope of the project.

Why would any user ever care about the scope of the project or how you feel about their ideas? If they want your open source software to also play MP3s and read their email, they'll just ask an AI to take your code and add the features they want. It doesn't impact anyone else using your software. What you'll probably end up with, though, is a bunch of copies of your code with various changes made (some of which might even have already been available as options, but people would rather ask an AI to rewrite your software than read your docs), some listed as forks and others not mentioning you or the name of your software at all.

Most people aren't going to bother sharing the changes they made to your code with anyone, but eventually you'll have people reporting bugs against weird versions of the software that an AI screwed up.


Why bother getting the LLM to write code to listen to MP3s? Just get it to write a new song that sounds the same as the one you want to listen to.

Hopefully you get how this is analogous.


> there is no one at the core of the project with some vision and taste, who is willing to say “no” to bad ideas or things outside the scope of the project.

That can literally be a system prompt.

"Here are the core principles of this project [...]. Here is some literature (updated monthly?). Project aims to help in x area, but not sprawl in other areas. Address every issue/PR based on a careful read of the core principles. Blah blah. Use top5 most active users on github as a voting group if score is close to threshold or you can't make an objective judgement based on what you conclude. Blah blah."

Current models are really close to being able to do this, if not fully capable already. Sure, exceptions will happen, but this seems reasonable, no?
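
To make that concrete, here is a rough sketch of what that triage loop could look like (everything below is hypothetical: call_model() and ask_for_votes() stand in for whatever LLM and GitHub APIs you'd actually wire up, and the threshold numbers are invented):

    ACCEPT_THRESHOLD = 70   # invented numbers
    ESCALATE_MARGIN = 10

    MAINTAINER_PROMPT = """\
    Core principles of this project: [...]
    The project aims to help in area X; do not let it sprawl into other areas.
    Score every issue/PR 0-100 against a careful read of the core principles
    and reply as JSON: {"score": <int>, "reason": "<one paragraph>"}.
    """

    def call_model(system_prompt: str, user_text: str) -> dict:
        """Hypothetical LLM call; swap in your provider's chat API."""
        raise NotImplementedError

    def ask_for_votes(voters: list[str], pr_text: str) -> bool:
        """Hypothetical: poll the top 5 most active users and tally votes."""
        raise NotImplementedError

    def triage(pr_text: str, top_voters: list[str]) -> bool:
        result = call_model(MAINTAINER_PROMPT, pr_text)
        # Close calls go to the human voting group instead of the model.
        if abs(result["score"] - ACCEPT_THRESHOLD) < ESCALATE_MARGIN:
            return ask_for_votes(top_voters, pr_text)
        return result["score"] >= ACCEPT_THRESHOLD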


Here is my PR; it aligns perfectly with the project goals. It also contains a backdoor as a binary blob that will be loaded dynamically upon execution. The models are nowhere near catching this, and it would get merged. Even more simply: a subtle bug leading to a vulnerable release. They do not have enough logic to catch this stuff.
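
To illustrate with an invented example, the check below reads like path-traversal protection and would plausibly sail through a model's review, yet it protects nothing:

    import os

    UPLOAD_ROOT = "/srv/app/uploads"

    def read_upload(name: str) -> bytes:
        path = os.path.join(UPLOAD_ROOT, name)
        # Looks like traversal protection, but it checks the unnormalized
        # path: name = "../../../etc/passwd" still starts with UPLOAD_ROOT
        # here, and open() resolves the ".." segments right past the root.
        if not path.startswith(UPLOAD_ROOT):
            raise ValueError("outside upload root")
        with open(path, "rb") as f:
            return f.read()

    # A correct check compares the resolved path instead:
    #   os.path.realpath(path).startswith(UPLOAD_ROOT + os.sep)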


Why in the world would you arrange things in that way?

1. A project exists

2. A user forks the project

3. A user writes a feature request

4. The AI codes up the changes and puts it into the fork

5. The original project is left untouched
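
In git terms the whole flow never has to touch upstream at all. A sketch (URLs and names are placeholders):

    import subprocess

    def git(*args: str) -> None:
        subprocess.run(["git", *args], check=True)

    # 2. Fork: clone upstream, keep it as a remote, point "origin" at your copy.
    git("clone", "https://example.com/upstream/project.git")
    git("-C", "project", "remote", "rename", "origin", "upstream")
    git("-C", "project", "remote", "add", "origin",
        "https://example.com/you/project.git")

    # 3-4. The AI lands the requested feature on a branch in the fork.
    git("-C", "project", "switch", "-c", "ai-feature")
    # ... generated patch gets committed here ...
    git("-C", "project", "push", "-u", "origin", "ai-feature")

    # 5. The original project never sees any of it.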


Everything will look like PHP functions.



