
Not OP but I've been playing with similar technology as a hobby.

>1. Are the sources used by the AI's learning phase trustworthy? (e.g. When will models be sophisticated enough to be trained to avoid some potentially problematic solutions?)

Probably not, but for most domains reviewing the code should be faster than writing it.

>2. How would an AI-generated solution be maintained over time?

I would imagine you don't save the original prompts. Rather, when you want to make changes you just give the AI the current project and a list of changes to make. Copilot can do this to some extent already. You'd have to do some creative prompting to get around context size limitations, maybe giving it a skeleton of the entire project and then giving actual code only on demand.
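The skeleton idea can be sketched concretely. A minimal, hypothetical example using Python's `ast` module to reduce a file to its top-level signatures (the `Foo`/`bar` names are toy input, not a real project), so the full project outline fits in the context window and actual code is supplied only on demand:

```python
import ast

def skeleton(source: str) -> str:
    """Return class/function signatures only, bodies elided with '...'."""
    tree = ast.parse(source)
    lines = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}: ...")
    return "\n".join(lines)

# Toy demonstration input
print(skeleton("class Foo:\n    def bar(self, x):\n        return x * 2\n"))
# → class Foo: ...
#   def bar(self, x): ...
```

The skeleton of every file goes in the prompt; when the model asks about a specific function, you paste in just that body.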

>3. When can my company host a viable trained model in a proprietary environment?

Hopefully soon. A fine-tuned LLaMA would not be far off GPT-3.5, though nowhere close to GPT-4. And even then there are licensing concerns.




Ok, a couple of derivative "fears" around this...

1> Relying on code reviews raises concerns, IMO. For example, how many engineers actually review the code in their dependencies? (But I guess it wouldn't take much to develop an adversarial "code review" AI?)

2> Yes, agreed, that would work. Provided the original solution had viable tests, the second (and any subsequent) rounds would have something to keep the changes grounded. In fact, perhaps the existing tests are enough? Making the next AI version of the solution truly "agile"?
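To illustrate what "grounded" means here: the test suite is the contract, and any AI-generated rewrite must still pass it. A minimal, hypothetical sketch (the `slugify` function and its tests are invented for illustration):

```python
def slugify(title: str) -> str:
    """Hypothetical function under test; an AI may rewrite its body."""
    return title.lower().strip().replace(" ", "-")

# Regression tests: these stay fixed across AI-generated revisions,
# so a rewrite that changes observable behavior fails immediately.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_whitespace():
    assert slugify("  Padded  ") == "padded"

test_slugify_basic()
test_slugify_strips_whitespace()
print("all regression tests passed")
```

As long as the suite covers the behavior you care about, you can accept or reject each regenerated version mechanically.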

3> So, at my age (yes, getting older) I'm led to a single, tongue-in-cheek / greedy question: How to invest in these AI-trained data sets?



