
This is what I heard from many people online.

I have tried being specific. I have even gone as far as feeding it a full requirement document for a feature (1000+ words), and it did not seem to make any significant difference.



You need the Wittgenstein approach: “What can be shown cannot be said.” LLMs need you to show them what you want; the model has seen nothing of what you actually want. Build up a library of examples. Humans work better this way too. It's just good practice.
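
In practice, "showing" often means few-shot prompting: putting a couple of worked examples in front of the request so the model imitates them. A minimal sketch in Python (the tasks and snippets here are made up for illustration):

    # Sketch: "show, don't tell" as few-shot prompting. The example tasks
    # and snippets below are hypothetical, not from any real codebase.
    EXAMPLES = [
        ("Parse '2024-01-05' into a date",
         "from datetime import date\n"
         "d = date.fromisoformat('2024-01-05')"),
        ("Read a CSV file into a list of dicts",
         "import csv\n"
         "with open('data.csv') as f:\n"
         "    rows = list(csv.DictReader(f))"),
    ]

    def build_prompt(task: str) -> str:
        """Prepend worked examples so the model imitates their style."""
        shots = "\n\n".join(f"Task: {t}\nCode:\n{c}" for t, c in EXAMPLES)
        return f"{shots}\n\nTask: {task}\nCode:"

    print(build_prompt("Write 'hello' to a file"))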


I think that's why Cursor works well for me. I can say "write this thing that does this, in a similar way to other places in the codebase," and give it files for context.
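
Mechanically, "giving it files for context" boils down to inlining those files into the prompt. A rough sketch of that idea (paths and wording are hypothetical; real tools layer retrieval and ranking on top):

    # Sketch: bundle reference files into the prompt so the model can copy
    # their structure. File paths here are hypothetical.
    from pathlib import Path

    def prompt_with_context(instruction: str, context_paths: list[str]) -> str:
        """Inline each reference file, then state the task."""
        sections = "\n\n".join(
            f"--- {p} ---\n{Path(p).read_text()}" for p in context_paths
        )
        return (
            "Use these files as a reference for style and structure:\n\n"
            f"{sections}\n\nTask: {instruction}"
        )

    print(prompt_with_context(
        "Write a retry handler like the existing ones",
        ["handlers/email_retry.py"],  # hypothetical path
    ))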


You are experiencing the Dunning-Kruger effect of using AI: you've used it enough to think you understand it, but not enough to really know how to use it well. That's okay; even if you ignore or avoid it for now, you'll eventually have enough experience to understand how to use it well. Like any tool, the better you understand it, and the better you understand the problems you're trying to solve, the better job you will do. Give an AI to a product manager and their code will be shit. Give it to a good programmer, and they're likely to ask the right questions and verify the code a little bit more so they get better results.


> Give an AI to a product manager and their code will be shit. Give it to a good programmer, and they're likely to ask the right questions and verify the code a little bit more so they get better results.

I'm finding the inverse: programmers who are bullish on AI are actually just bad programmers. AI use is revealing their own lack of skill and taste.


You can literally stub out exactly the structure you want, describe the algorithms you want, the coding style you want, etc., and get exactly what you asked for with modern frontier models like o3/Gemini/Claude 4 (at least for well-represented languages/libraries/algorithms). The fact that you haven't observed this is an indicator of the shallowness of your experience.
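
Concretely, "stubbing out the structure" can mean handing the model a skeleton like the one below and asking it to fill in only the body (a sketch; every name and constraint here is invented for illustration):

    # Sketch: the stub fixes names, signatures, and algorithm constraints;
    # the model is asked only to implement the body. All of this is made up.
    from dataclasses import dataclass

    @dataclass
    class Interval:
        start: int
        end: int

    def merge_intervals(intervals: list[Interval]) -> list[Interval]:
        """Merge overlapping intervals.

        Constraints for the model:
        - Sort by start, then do a single sweep (O(n log n) total).
        - Do not mutate the input list.
        - Treat touching intervals (a.end == b.start) as overlapping.
        """
        raise NotImplementedError  # ask the model to replace exactly this line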


> modern frontier models like o3/Gemini/Claude 4 (at least for well-represented languages/libraries/algorithms). The fact that you haven't observed this is an indicator of the shallowness of your experience.

I'm not chasing the AI train to stay on the bleeding edge, because I have better things to do with my time.

Also, I'm trying to build novel things, not replicate well-represented libraries and algorithms.

So... maybe I'm just holding it wrong, or maybe it's not good at doing things you can't copy and paste from GitHub or Stack Overflow.



