The author uses smol-developer for this. I've had mixed luck using smol-developer for actual development, but reading the code and prompts it uses, and the general workflow, is pretty fascinating, and I've been playing around with adjusting it (Issue #34 https://github.com/smol-ai/developer/issues/34).
In short:
- smol-developer submits the user prompt and basically says: "Give me a list of files that would need to be written to accomplish this".
- "Now give me a list of shared dependencies".
- For each file: "Ok, write the code for $FILENAME, keeping in mind $SHARED_DEPENDENCIES" -- save file.
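The three steps above can be sketched as a small loop. This is just an illustration of the control flow, not smol-developer's actual code: `ask_llm` is a hypothetical stand-in for a real chat-completion API call, stubbed here with canned replies so the flow is runnable.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call (stubbed)."""
    if "list of files" in prompt:
        return "main.py\nutils.py"
    if "shared dependencies" in prompt:
        return "constants: API_URL; helpers: fetch()"
    return f"# code for: {prompt[:60]}"

def smol_style_run(user_prompt: str) -> dict:
    # Step 1: which files need to be written?
    files = ask_llm(
        f"Give me a list of files that would need to be written "
        f"to accomplish: {user_prompt}"
    ).splitlines()
    # Step 2: what do those files share (names, schemas, message formats)?
    deps = ask_llm(
        f"Now give me the shared dependencies between: {', '.join(files)}"
    )
    # Step 3: generate each file, keeping the shared context in the prompt.
    return {
        filename: ask_llm(f"Write the code for {filename}, keeping in mind: {deps}")
        for filename in files
    }
```

The key trick is step 2: the shared-dependencies answer is threaded into every per-file prompt, which is the only thing keeping the independently generated files consistent with each other.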
One could imagine breaking it down even further ("what functions need to be in file X"), which might get it closer to what the author of this post says about GPT being better at smaller chunks. But you also want to preserve some shared context.
I broke out smol-developer's prompts and modified them to include "ask any clarifying questions you need to build the [list of files/dependencies/source]" and then was able to provide some more feedback during the development process.
I also started off with that "You are an AI prompt engineer, gather requirements and ask any follow-up questions that are necessary to generate the perfect prompt, optimized to feed into GPT, to achieve the user's goals." trick, did some back and forth during which it asked some fascinating questions, including at least one that wasn't even on my radar. I then used the generated prompt in the following steps with smol-developer.
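That warm-up step can be sketched as a short multi-turn loop that runs before anything is handed to the file-generation steps. Everything here is an assumption for illustration: `chat` stands in for a real chat API, and the `FINAL PROMPT:` marker is an invented convention for detecting when the model has stopped asking questions.

```python
# Assumed system prompt, paraphrasing the trick quoted above.
ENGINEER_SYSTEM = (
    "You are an AI prompt engineer. Gather requirements and ask any follow-up "
    "questions necessary to generate the perfect prompt, optimized to feed "
    "into GPT, to achieve the user's goals."
)

def refine_prompt(chat, goal: str, answer_fn) -> str:
    """Run the back-and-forth until the model emits a final prompt.

    `chat` takes a message history and returns the assistant's reply;
    `answer_fn` supplies the human's answer to each clarifying question.
    """
    history = [
        {"role": "system", "content": ENGINEER_SYSTEM},
        {"role": "user", "content": goal},
    ]
    while True:
        reply = chat(history)
        history.append({"role": "assistant", "content": reply})
        if reply.startswith("FINAL PROMPT:"):  # assumed "done" convention
            return reply[len("FINAL PROMPT:"):].strip()
        # Otherwise the model asked a clarifying question; answer it and loop.
        history.append({"role": "user", "content": answer_fn(reply)})
```

In interactive use `answer_fn` is just you typing replies; the returned prompt is what then gets fed into the smol-developer steps.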
> One could imagine even breaking it further down ("what functions need to be in file X")
Based on my manual usage, I usually guide it through a top-down process, something like:
- give an overview of how you would approach the problem
- describe the main steps
- do you see any problems with this approach? can you think of any alternatives?
- taking into account everything that we have discussed so far, carefully review and check your approach, and propose a revised approach
- only then would I get it to generate a list of source files and implement them one by one
- now, given your implementation as a whole, review it and note any errors or improvements that could be made
- fix the code
...
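The sequence above is essentially a fixed script of follow-up prompts run in one conversation. A minimal sketch, assuming a `send` helper that appends to the chat and returns the reply (not any real API):

```python
# The top-down stages described above, as literal follow-up prompts.
STAGES = [
    "Give an overview of how you would approach the problem.",
    "Describe the main steps.",
    "Do you see any problems with this approach? "
    "Can you think of any alternatives?",
    "Taking into account everything discussed so far, carefully review "
    "your approach and propose a revised approach.",
    "Generate a list of source files, then implement them one by one.",
    "Given your implementation as a whole, review it and note any errors "
    "or improvements that could be made.",
    "Fix the code.",
]

def run_top_down(send, task: str) -> list:
    """Feed the task, then each stage prompt, into one ongoing chat."""
    transcript = [send(task)]
    for stage in STAGES:
        transcript.append(send(stage))
    return transcript
```

The point of keeping it all in one conversation is that each stage can refer back to "everything discussed so far" without re-pasting context.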
Once I have code in hand I'll paste it into a new chat with a prompt like "You are our resident compiler guru. Please review the following code:..." It helps to be more specific about the reviewer's background and what you want. Rinse and repeat until it responds with high praise. Python works better than C++. GPT-4 is much better than 3.5, of course.
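That "rinse and repeat" is a simple fixed-point loop: review in a fresh chat, revise, and stop when the reviewer has nothing left to criticize. A sketch under stated assumptions: `review` and `revise` are hypothetical LLM-call wrappers, and the "looks good" substring check is a crude stand-in for detecting high praise.

```python
# Assumed reviewer persona, as in the prompt quoted above.
REVIEWER = "You are our resident compiler guru. Please review the following code:"

def review_until_clean(review, revise, code: str, max_rounds: int = 5) -> str:
    """Iterate review -> revise until praise or the round budget runs out."""
    for _ in range(max_rounds):
        feedback = review(f"{REVIEWER}\n\n{code}")
        if "looks good" in feedback.lower():  # crude stand-in for "high praise"
            break
        code = revise(code, feedback)
    return code
```

Starting each review in a fresh chat matters: the reviewer sees only the code, not the conversation that produced it, so it can't be anchored by earlier justifications.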