
How are you using AI to double your coding productivity? Are you using ChatGPT, or Claude, or GitHub Copilot? I am an AI-skeptic, so I am curious here. Thanks!


I've tried various AI coding solutions and have found at best a mild boost but not the amazing multipliers I hear about online.

Copilot gives you some autofill that can sometimes be helpful, but often isn't that helpful. I think the best it did for me was with something repetitive where I was editing a big list of things in the same way (like adding an ID to every tag in a list); it took over and finished the task with a little less manual clicking.

ChatGPT has helped with small code snippets, like writing a regular expression. I never got to 100% regex mastery; usually I would have to look up a couple of things to write one, but GPT can shortcut that process. I get a little paranoid about AI-provided code not actually working, so I end up writing a large number of tests to check it, which could be a good thing but can feel tedious.
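
For example, if GPT gives me a regex for pulling an id out of a string, I'll pin it down with a few cases along these lines (a rough sketch in Dart; the pattern and inputs are made up):

  import 'package:test/test.dart';

  void main() {
    // Hypothetical GPT-suggested pattern for "id=<digits>" fragments.
    final idPattern = RegExp(r'id=(\d+)');

    test('captures the numeric id', () {
      expect(idPattern.firstMatch('id=42')?.group(1), equals('42'));
    });

    test('does not match a non-numeric id', () {
      expect(idPattern.hasMatch('id=abc'), isFalse);
    });
  }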

I'm also curious how other people are leveraging them to get more than I am. I honestly don't try too hard. At one point I did try really hard to get AI to do more heavy code lifting but was disappointed with my results so I stopped... but maybe things have improved a bit since then.


It can get a little tedious if you are just using ChatGPT or Claude as-is. You are also limited by the lack of context on your existing codebase.

That’s why there are a lot of tools that help set up a proper workflow around these LLMs.

For a terminal-based workflow, you can check out aider or plandex.

For a GUI-based workflow, you can try 16x Prompt (I built it).


I don’t know if I’ve done something wrong, but my Copilot is so wrong that I just turned it off. I don’t understand the appeal at all.

I don’t remember the last time I thought one of its suggestions was useful. For me LSP has been the real game changer.


> I get a little paranoid about AI-provided code not actually working, so I end up writing a large number of tests to check it, which could be a good thing but can feel tedious.

This is a good thing. We need more tests in critical places like regexes, because they can be finicky and non-obvious. Tedious or not, we are not artists; the job must be done. Kudos for sticking to good practices.


OK, I jumped on Copilot when it first came out, so I have been using it for a long time.

Since I have been using it so long, I have a really good intuition of what it is “thinking” in every scenario and a pretty good idea of what it can do for me. So that helps me get more use out of it.

So for example, one of the projects I’m doing now is a Flutter project - my first one. So I don’t remember all the widgets. But I just write a comment:

// this widget does XYZ

And it will write something that is in the right direction.

The other thing it knows super well is rote code, and for context it reads the whole file. Dart, for example, is awful at JSON: you have to write a “toMap” for each freaking class, spelling out key-value pairs to build a map that can be turned into JSON. Same goes for fromMap. So annoying.

But with Copilot? You just write “toMap” and it reads all your properties and suggests a near-perfect implementation. So much time saved!
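
For a throwaway class it suggests something roughly like this (made-up fields, just a sketch of the shape of the completion):

  import 'dart:convert';

  class User {
    final String name;
    final int age;

    User({required this.name, required this.age});

    // The kind of completion Copilot offers: one entry per property.
    Map<String, dynamic> toMap() {
      return {
        'name': name,
        'age': age,
      };
    }

    // And the reverse direction for deserialization.
    factory User.fromMap(Map<String, dynamic> map) {
      return User(
        name: map['name'] as String,
        age: map['age'] as int,
      );
    }

    String toJson() => jsonEncode(toMap());
  }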


I don't think you need an LLM just to parse class properties and turn them into a map. Not that familiar with Dart, but that's the kind of thing IDEs have been able to do for a while now just by parsing syntax the old-fashioned way.
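
From a quick search, Dart seems to have a json_serializable package for exactly this kind of deterministic codegen; roughly something like the following, though treat it as an untested sketch (the User class and its fields are made up):

  import 'package:json_annotation/json_annotation.dart';

  part 'user.g.dart';

  @JsonSerializable()
  class User {
    final String name;
    final int age;

    User({required this.name, required this.age});

    // The actual conversion code is generated into user.g.dart
    // by running: dart run build_runner build
    factory User.fromJson(Map<String, dynamic> json) => _$UserFromJson(json);
    Map<String, dynamic> toJson() => _$UserToJson(this);
  }

No LLM involved, and the output is the same every time.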


The thing is, when you dig into the claims many people make when they say they get a 10x productivity boost using “AI”, it’s usually some basic task that either generates boilerplate code or performs fancy autocomplete, and while those are great, they in no way support the original claim.

I think people just want to be part of the hype and use the cool new technology whenever possible. We've seen this over and over again: Machine Learning, Blockchains, Cryptos, "Big Data", "Micro Services", "Kubernetes", etc.

I just don’t think the current design of “AI” will take us there.


> they get a 10x productivity boost using “AI”, it’s usually some basic task that either generates boilerplate code or performs fancy autocomplete, and while those are great

And those are just a tiny upgrade over what IDEs can already do. When I used Android Studio, the code basically wrote itself because of all the boilerplate surrounding the business logic. Once I had a basic structure down, I felt like I was only writing 5 to 10 characters per line (fewer for data types). And the upgrade cuts both ways: it comes down to luck whether you actually get good suggestions.


I’m not the OP and I wouldn’t say that AI has doubled my productivity, but the latest Claude models in particular have made me less of a skeptic than I was a few months ago.

I’m an experienced backend dev who’s been working on some Vue frontend projects, and it’s significantly accelerated my ability to learn the complexities of e.g. Vue’s reactivity model. I can ask a complex question that involves several niche concepts and get a response that correctly synthesizes those concepts. I spent an hour the other night trying to understand a bug in a component to no avail; once I understood the problem well enough to explain it in a few sentences, Claude diagnosed the issue and explained it with more clarity than the documentation and various stack overflow answers.

My default is no longer to assume that the model has a coin flip’s chance of producing bs. I still verify and treat answers with a certain degree of skepticism, but I now reach for it as my first tool rather than a last resort or a gimmick.


I want to double tap this point. In my experience, Claude significantly outperforms GPT-4o, Llama 3.1, and Gemma 1.5.

I have accounts for all three and will generally try to branch out to test them with each new update. Admittedly, I haven’t gotten to Grok yet, but Claude is far and away the best model at the moment. It’s not even close really.


Exactly. It’s insanely helpful when you are a dev with experience in another language. You know what you want; you just don’t know the names of the functions, etc. So you put a comment:

// reverse list

And it writes code in the proper language.
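
In Dart, for instance, that comment typically comes back with something like (trivial sketch):

  void main() {
    // reverse list
    final items = [1, 2, 3];
    final reversed = items.reversed.toList();
    print(reversed); // [3, 2, 1]
  }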


> I now reach for it as my first tool

The manual is my first tool.


In this case, the manual is rather poor, so a tool that can cobble together an answer from different sections of the documentation plus blog posts and stack overflow is superior to the manual.


I don't use AI at work at all.

I pay for Leetcode, which usually gives editorial examples in Python, Java, and such. I paste one into ChatGPT and say "translate this to a language I am more familiar with" (actually, I have other programs that have been doing some language-to-language conversions for years, without AI). Then I say "make it more compact". Then again: "make it more compact". Soon I have an O(n) time, O(1) space solution to Leetcode question #2718 or whatever in a language I am familiar with. Sometimes it becomes too compact and unreadable, and I back it up a little.

Sometimes it hallucinates, but it has been helpful. In the past I had problems with it, but not recently.


What does it mean to be a skeptic here? Have you tried ChatGPT? Copilot?


Perhaps I should have said "AI hype-skeptic"? I am just not seeing the productivity gains that others claim ITT.


Got it. Are you using the latest models, like GPT-4o? I find it significantly more useful when I'm stuck than Copilot's autocomplete.


AI is a tool; if you don’t know how to use a tool, you can’t expect to get good results with it. That means both knowing how to interact with the AI and knowing how to structure your code to make the AI’s generations more accurate.


If all you’ve got is an LLM hammer, then every problem looks like a nail.


GPT-4o is the greatest hammer ever invented.



