
What prompt did you use and how much did you have to code on your own?


Probably quite a few prompts.

Should we be checking our prompt history into version control as a kind of source code or requirements spec? It seems a history of prompt revisions and improvements would be valuable to keep.
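If you wanted to try that, a minimal sketch might just append each prompt to a log file that gets committed alongside the code it produced (the helper name and file layout here are my own invention, not an established convention):

    # hypothetical helper: append each prompt to a log that is committed
    # alongside the code it produced
    import datetime
    import pathlib

    def log_prompt(prompt: str, log_path: str = "prompts/history.md") -> None:
        path = pathlib.Path(log_path)
        path.parent.mkdir(parents=True, exist_ok=True)
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        with path.open("a") as f:
            f.write(f"## {stamp}\n\n{prompt}\n\n")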


It's not even that. Those showcasing the power of LLMs to code up a whole project should be disclosing their prompts. Otherwise it's quite meaningless.


The responses to those prompts are not independently reproducible without the weights, the seed and software used to run the LLM, and in many cases, none of those are available.
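For what it's worth, hosted APIs only get you part of the way there. A minimal sketch with the OpenAI Python client (its seed parameter is documented as best-effort, not a guarantee):

    # pin an exact model snapshot, a seed, and zero temperature; even then,
    # determinism is best-effort and can break when the backend changes
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o-2024-08-06",  # exact snapshot, not a floating alias
        messages=[{"role": "user", "content": "Write a breakout clone."}],
        seed=12345,
        temperature=0,
    )
    # if system_fingerprint differs between two runs, the serving stack
    # changed and identical output is no longer expected
    print(resp.system_fingerprint)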


So that's all part of the "source code" for projects like this.

It's amusing that on one hand there's been a push for "reproducible builds", where we try to make sure that some set of inputs (source code files, configuration, libraries) yields an identical output. On the other hand, we have what we see here, where, without a huge amount of external context, no two "builds" will ever be identical.


Unless the AI model is being run at build time to generate inputs to the build, I disagree that the model, its prompts, or its weights and seeds constitute part of "the build" for reproducibility purposes. Static source files generated by an AI and committed to source control are indistinguishable, from a build perspective, from source files generated by a human and committed to that same source control.

We (reasonably) don't consider the IDE, the auto-complete tools, or the reference manuals consulted to be requirements for a "reproducible" build, and I don't think the AI models or prompts are any different in that respect. They might be interesting pieces of information, and they might be useful for documenting the intent of a given piece of code, but if you can take the same source files, run them through the same compile process, and get the same outputs, that's "reproducible".
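If you do want to keep that information around, one lightweight option (purely a convention I'm sketching, not a standard) is a provenance header in the generated file; it documents intent without becoming a build input:

    # Generated-by: <model name and exact version/snapshot>
    # Prompt: prompts/history.md (entry for this change)
    # Reviewed-by: <human who read and accepted the diff>
    #
    # The header above is documentation only; the build consumes just the
    # committed source below, same as with hand-written code.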


Not OP, but if you are asking for your own reference or projects, I've found this "prompt flow" to be very effective:

https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/

The reasoning models seem to respond quite well to "ask me one question at a time, building on my previous answer" as a way of coming up with a solid blueprint for the project, then building from there. I used this method to make a nice little inventory/RMA tracker for my job using Rails 8, and it got me to MVP in a matter of hours, with most delays being me refining the code on my own since some of it was a bit clunky. I used o3-mini to generate the initial prompts, then fed those to Claude.
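For reference, the kickoff prompt is along these lines (a paraphrase of the kind of prompt the post describes, not its exact wording):

    Ask me one question at a time so we can develop a detailed spec for
    this idea. Each question should build on my previous answers. When we
    have enough detail, compile everything into a step-by-step blueprint
    I can hand to a code-generation model.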

The hallucinations/forgetfulness were relatively minor, but if I did not have prior Ruby/JS knowledge, I doubt I would have caught some of the mistakes. So as much as I was impressed by the method outlined in the blog post, I am not at all saying that someone with no knowledge of the language(s) they wish to use is going to create a great piece of software with it. You still have to pay attention and course-correct, which requires a working understanding of the dev process.


Not the OP, but I did something similar for a breakout-style game. I just interactively provided detailed specifications: there is a ball, a paddle, bricks, and powerups; what they look like and how they behave; how collision is handled; and so on. For example, if you get a multi-ball powerup, it adds two additional balls in flight. If you get a shooter powerup, the paddle can shoot bullets for 10s. I didn't even have to write any code to get exactly what I thought up.
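To give a flavor of the spec-to-code translation, here's a toy sketch of those two powerup rules (the class and field names are illustrative, not the actual generated code):

    # a toy sketch of the powerup rules described above
    import time
    from dataclasses import dataclass, replace

    SHOOTER_DURATION = 10.0  # shooter powerup lasts 10 seconds

    @dataclass
    class Ball:
        x: float
        y: float
        angle: float  # direction of travel, degrees

    @dataclass
    class Paddle:
        x: float
        can_shoot_until: float = 0.0

    def apply_powerup(kind: str, balls: list[Ball], paddle: Paddle) -> None:
        if kind == "multiball":
            # multi-ball adds two extra balls, split off from one in flight
            src = balls[0]
            balls.append(replace(src, angle=src.angle + 30))
            balls.append(replace(src, angle=src.angle - 30))
        elif kind == "shooter":
            # the paddle can shoot bullets for the next 10 seconds
            paddle.can_shoot_until = time.monotonic() + SHOOTER_DURATION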


Did the same but for Asteroids. Had a working wireframe version after only a couple of prompts! I think using the term "Asteroids" in my original prompt was a bit of a cheat/shortcut.

My lessons learned for prompting were to have the project broken down into clearly defined modules (duh...), and to constantly feed the latest module source back in as context along with the prompts. This helps ground the responses so they only adjust the code related to the prompt and don't break stuff that was already working.
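That feedback loop is simple enough to sketch. Assuming an OpenAI-style chat API (the file path, model name, and system message are illustrative):

    # re-send the current module source with every change request so the
    # model edits in place instead of regenerating from a stale memory
    from openai import OpenAI

    client = OpenAI()

    def revise_module(path: str, instruction: str) -> str:
        with open(path) as f:
            source = f.read()
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content":
                    "Modify only the code related to the request; "
                    "do not touch parts that already work."},
                {"role": "user", "content":
                    f"Current module:\n\n{source}\n\n"
                    f"Change request: {instruction}"},
            ],
        )
        return resp.choices[0].message.content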



