Using AI to help locate the right library function, or as a kind of documentation lookup, has been effective at speeding up my development. But I really dislike using it to autocomplete whole functions or large swathes of code, because then I have to spend time reading and understanding code I didn't write. That doesn't ultimately feel "faster" than writing code I own myself.
I'm using AI for grepping, analyzing flow, finding cross-project dependencies, etc. It provides a significant speedup. But the generated code is mediocre at best. Changes look like patches on fabric, not woven threads. AI generates too much redundant code.
One other use case I've found effective for it is assisting with API development and defining API specs. For example, I've uploaded API definitions in YAML along with instructions about the API standards I want to impose, to keep things consistent as I build new endpoints. Having it just validate the spec and fix any minor inconsistencies saves some of the effort of the writing -> linting -> fixing -> linting-again loop.
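To give a concrete flavor of the kind of consistency check I mean, here is a rough sketch. It assumes an OpenAPI-style YAML spec, PyYAML, and two made-up house rules (kebab-case path segments, camelCase schema properties); the rules and names are illustrative, not my actual standards.

```python
# Rough sketch of the spec-consistency check described above.
# Assumes an OpenAPI-style YAML spec and two made-up house rules:
# kebab-case path segments and camelCase schema property names.
import re
import sys

import yaml  # PyYAML

KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")
CAMEL = re.compile(r"^[a-z][a-zA-Z0-9]*$")


def lint_spec(path: str) -> list[str]:
    with open(path, encoding="utf-8") as f:
        spec = yaml.safe_load(f)
    problems = []

    # Rule 1: literal path segments (not {params}) should be kebab-case.
    for route in spec.get("paths", {}):
        for segment in route.strip("/").split("/"):
            if segment and not segment.startswith("{") and not KEBAB.match(segment):
                problems.append(f"path segment '{segment}' in {route} is not kebab-case")

    # Rule 2: schema property names should be camelCase.
    for name, schema in spec.get("components", {}).get("schemas", {}).items():
        for prop in schema.get("properties") or {}:
            if not CAMEL.match(prop):
                problems.append(f"property '{prop}' in schema '{name}' is not camelCase")

    return problems


if __name__ == "__main__":
    for issue in lint_spec(sys.argv[1]):
        print(issue)
```

The LLM version of this is fuzzier, of course; the win is that it also proposes the fixes.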
I started with Repomix as an MCP server plus a system prompt to reduce the scope to single packages, but it still consumed too many tokens (and polluted the context with useless information). With Gemini, context size wasn't an issue, but it was too expensive. Now I just use Cursor, which has built-in indexing with embeddings (I assume).
I had a similar situation: I once wanted to grab some web pages and parse them in Python, planning to fetch them with Python's built-in libraries and parse them with BeautifulSoup. But then I realized I'd have to read enormous code bases I didn't write, which felt like it would take forever.
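For context, the sort of thing I had in mind is only a few lines; the URL here is just a placeholder.

```python
# Fetch a page with the standard library, parse it with BeautifulSoup.
# The URL is a placeholder.
from urllib.request import urlopen

from bs4 import BeautifulSoup

with urlopen("https://example.com/") as response:
    html = response.read()

soup = BeautifulSoup(html, "html.parser")
for link in soup.find_all("a"):
    print(link.get("href"))
```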
(Obviously, this post is tongue-in-cheek, but I'm making a real point: almost all code we use is code we didn't write. I don't think that's what differentiates vibe-coded code.)
> Using AI to assist in locating the right library function
Works mildly OK, until it invents new functions or libraries for you and wastes your time. Or worse: you find the library does exist, but only because of slopsquatting (enterprising scammers realized that LLMs like to recommend the same non-existent libraries and snatched up the names).
I’ve had good results using Claude Code by specifically coaching it on how its implementation should be. It’s not always perfect and sometimes I do have to try again, but it’s remarkably effective when given enough guidance.
If only I could figure out how to reliably keep it from adding useless comments or preserving obsolete interfaces for “backward compatibility”…
Me too. I'm using Roo Code and have substantially updated the system prompt to describe the project standards, the correct way to restart the services and inspect the logs, and specific hints about the frameworks I'm using, how to use DI and our patterns, and how to think about what kind of code it's writing and where it belongs in the project structure. There are also admonitions not to do certain things it would otherwise do (in Python, for example, it commonly generates imports inside functions, which can cause runtime errors; see the sketch below). I'm experimenting with a tree-sitter MCP to see if I can make it aware of the structure of the entire project in a more compact way, without all of the code in the context window; we'll see where that goes.
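Here is a minimal illustration of that import issue, with a hypothetical module name:

```python
# The habit in question: the import is hidden inside the function, so a
# missing or broken dependency only blows up when the function is called,
# possibly deep into a request. The module name is hypothetical.
def export_report(data):
    import report_lib  # hypothetical; ImportError deferred to call time
    return report_lib.render(data)


# Preferred: import at module level so a missing dependency fails at startup.
# import report_lib
#
# def export_report(data):
#     return report_lib.render(data)
```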
Anyway, a year or two ago the state-of-the-art models couldn't do math and the image models couldn't render hands or text, and those problems are now broadly fixed. I pretty much expect vibe coding to improve dramatically in the next year or two.
I haven't used Claude Code, but I have used Windsurf, Cursor, and Continue. They all do well with their own "rules" files, which I understand as essentially a system prompt sent before each chat session. I even have pretty specific styling rules that are unique to me, and it generally follows those.
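To give a flavor, the entries in such a file are just plain-language instructions, something along these lines (illustrative wording, not my actual rules; each tool has its own file name and location for them):

```
- Prefer early returns over deeply nested conditionals.
- Use descriptive names; single letters only for loop indices.
- Do not add comments that restate what the code already says.
- Match the existing import ordering and formatting of the file being edited.
```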
It's also worth asking the model what rule it would need in order to follow the rule. On occasion, a rule I've added isn't quite followed, so I'll respond immediately, pointing out what it did and that the rule is in the file, and then ask it to tell me how I should modify, or add to, the rule to make it easier to follow.
I'd imagine Claude Code has something similar that might be worth looking into.
Another thing Claude loves is fixing type errors by vomiting up conditionals 10 levels deep that check for presence, and type, and time of day, and age of the universe before it fixes the actual type issue.
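A caricature of the pattern in Python terms (names made up), next to what was actually wanted:

```python
# The "fix" the model tends to produce: pile on runtime checks (names made up).
def total_price(order):
    if order is not None:
        if hasattr(order, "items"):
            if isinstance(order.items, list):
                if all(isinstance(i, (int, float)) for i in order.items):
                    return sum(order.items)
    return 0


# What was actually wanted: fix the type at the source and trust it.
def total_price_fixed(order_items: list[float]) -> float:
    return sum(order_items)
```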