
These two projects were almost entirely written with LLMs:

https://github.com/williamcotton/search-input-query

https://github.com/williamcotton/guish

Both are non-trivial but fit within the context window, so they're not large projects. However, they are easily extensible thanks to the architecture I instructed the LLM to follow as I was building them!

The first contains a recursive descent parser for a search query DSL (and much more).
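
For anyone wondering what that looks like in practice, here's a hedged sketch of the recursive descent idea on a toy AND/OR query grammar. It's an illustration only, not the actual grammar or types from the repo: one function per grammar rule, each calling the rules below it.

    // Toy query AST: terms (optionally field-scoped) combined with AND/OR.
    type Expr =
      | { type: "and"; left: Expr; right: Expr }
      | { type: "or"; left: Expr; right: Expr }
      | { type: "term"; field?: string; value: string };

    // Recursive descent parser for queries like `title:foo AND (bar OR baz)`.
    function parseQuery(input: string): Expr {
      const tokens = input.match(/\(|\)|[^\s()]+/g) ?? [];
      let pos = 0;
      const peek = () => tokens[pos];
      const next = () => tokens[pos++] ?? "";

      // orExpr := andExpr ("OR" andExpr)*
      function orExpr(): Expr {
        let left = andExpr();
        while (peek() === "OR") {
          next();
          left = { type: "or", left, right: andExpr() };
        }
        return left;
      }

      // andExpr := primary ("AND" primary)*
      function andExpr(): Expr {
        let left = primary();
        while (peek() === "AND") {
          next();
          left = { type: "and", left, right: primary() };
        }
        return left;
      }

      // primary := "(" orExpr ")" | field ":" value | value
      function primary(): Expr {
        if (peek() === "(") {
          next();
          const inner = orExpr();
          next(); // consume ")"
          return inner;
        }
        const tok = next();
        const i = tok.indexOf(":");
        return i > 0
          ? { type: "term", field: tok.slice(0, i), value: tok.slice(i + 1) }
          : { type: "term", value: tok };
      }

      return orExpr();
    }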

The second is a bidirectional GUI for bash pipelines.

Both operate at the AST level, guish powered by an existing bash parser.
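
The bidirectional part is easier to picture with a toy example. This is a deliberately naive sketch with hypothetical types, not how guish works; it leans on a real bash parser precisely because naive splitting like this falls apart on quoting, subshells, redirects, etc.

    // Toy pipeline AST and a round trip: text -> AST -> edited AST -> text.
    type Command = { name: string; args: string[] };
    type Pipeline = { commands: Command[] };

    function parsePipeline(src: string): Pipeline {
      return {
        commands: src.split("|").map((part) => {
          const [name = "", ...args] = part.trim().split(/\s+/);
          return { name, args };
        }),
      };
    }

    function printPipeline(p: Pipeline): string {
      return p.commands.map((c) => [c.name, ...c.args].join(" ")).join(" | ");
    }

    // A GUI edit is just an AST edit followed by re-printing:
    const ast = parsePipeline("cat log.txt | grep error | wc -l");
    ast.commands[1].args = ["-i", "error"];
    printPipeline(ast); // "cat log.txt | grep -i error | wc -l"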

The READMEs have animated gifs so you can see them in action.

When the LLM gets stuck I either take over the coding myself or come up with a plan to break the requests into smaller chunks with more detail about the steps to take.

It takes a certain amount of skill to use these tools well: skill with how the tool itself works, and definitely the expertise of the person wielding it!

If you have these tools code to good abstractions and good interfaces, you can hide implementation details. Then you expose those interfaces to the LLM, which makes it much simpler to build on top of them.
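
A hypothetical example of what I mean, reusing the toy Expr type from the parser sketch above (names are made up, not from either repo): the LLM only ever needs to see the interface, and the messy parts stay out of the prompt.

    // The surface the LLM builds against:
    interface QueryBackend {
      // Parse a raw query string into the shared AST.
      parse(input: string): Expr;
      // Run a parsed query and return matching document ids.
      execute(query: Expr): Promise<string[]>;
    }

    // Implementation details (tokenizer quirks, SQL generation, caching)
    // live behind the interface and never have to enter the prompt.
    class SqlQueryBackend implements QueryBackend {
      parse(input: string): Expr {
        return parseQuery(input); // from the sketch above
      }
      async execute(query: Expr): Promise<string[]> {
        // ...translate the AST to SQL and run it; elided here...
        return [];
      }
    }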

Like, once you've got an AST it's pretty much downhill from there to build tools that operate on said AST.
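
E.g., on the toy Expr type above, useful tools are just small recursive functions. These are hypothetical examples, not code from either project:

    // Collect every field name referenced anywhere in a query.
    function collectFields(e: Expr): string[] {
      switch (e.type) {
        case "term":
          return e.field ? [e.field] : [];
        case "and":
        case "or":
          return [...collectFields(e.left), ...collectFields(e.right)];
      }
    }

    // Rewrite one field name to another, returning a new tree.
    function renameField(e: Expr, from: string, to: string): Expr {
      switch (e.type) {
        case "term":
          return e.field === from ? { ...e, field: to } : e;
        case "and":
        case "or":
          return {
            ...e,
            left: renameField(e.left, from, to),
            right: renameField(e.right, from, to),
          };
      }
    }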




I think there’s often a disconnect between what laypeople hear when someone says “I built an app using AI” and the reality.

It seems like a lot of people assume the process is that you give the AI a relatively high-level prompt describing the features, and you get back a fully functioning app that does everything you outlined.

In my experience (and I think what you are describing here), the initial feature-based prompt will often give you a (somewhat impressively) basic functioning app. But as you start iterating on that app, the high-level feature-based prompts stop working well pretty quickly. It then becomes more of an exercise in programming by proxy: you basically tell the AI what code to write and what changes are needed at a technical level, in smaller chunks, and it saves you a lot of time by actually writing out the syntax. The thing is, you still have to know how to program to accomplish this (arguably, you have to be a fairly decent programmer who can already break complicated tasks down into small, understandable chunks).

Furthermore, if you want the AI to write good code with a solid architecture, you pretty much have to tell it what to do at a technical level from the start. For example, here I imagine the AI didn’t come up with structuring things to work at the AST level on its own; you knew that would give you a solid architecture to build on, so you told it to do that.

As someone who’s already a half-decent programmer, I’ve found this process to be a pretty significant boon to my productivity. On the other hand, beyond the basic POC app, I have a hard time seeing it living up to the marketing hype of “Anyone can build an app using AI!” that’s being constantly spewed.


The usual workflow I see skeptical folks take is to throw a random sentence at the LLM and expect it to correctly figure out the end result, and then to keep sending small chunks of code, expanding the context with poor instructions.

LLMs are tools that need to be learned. Good prompts aren’t hard, but they do take some effort to build.



