Amazed at all the negative rhetoric around coding with LLMs on HN lately. The coding world is deeply split about their utility.
I think part of that comes from the difficulty of working with probabilistic tools that need plenty of prompting to get things right, especially for more complex tasks. To me, it's a training issue for programmers, not a fundamental flaw in the approach. They have different strengths, and it can take a few weeks of working closely with them to get to a level where it starts feeling natural. I personally can't imagine going back to the pre-LLM era of coding for me and my team.
Yes, the coding world is deeply split about their utility, since the coding world encompasses everything from a 13-year-old modifying CSS files to a senior engineer building satellite guidance systems, and everything in between. The 13-year-old with no knowledge, doing simple, well-documented, non-critical things where quality engineering isn't a concern, would find AI assistance absolutely mind-blowingly amazing, whereas the senior dev at the other end of the spectrum, doing hard, critical work where high quality is required, may find it nearly useless.
Since developers working at places like Parkes Observatory use LLMs regularly, it seems like experience ("13-year-olds" versus "senior engineers" at the two extremes) doesn't explain this gap as well as you imply.
The other hypotheses in this thread (e.g. that it's largely a matter of programming language) seem much more plausible.
When it comes to full-on vibe coding (Claude Code with accept-all edits), my criterion is whether I will be held responsible for the complexity introduced by the code. When I've been commissioned to write backend APIs, "the buck stops with me" and I have to be able to personally explain potentially any architectural decision to technical people. On the other hand, for a "demo-only" NextJS web app that I was hired to do for non-technical people (meaning they won't ever look at the code), I can fully vibe code it. I don't even want to know what complexity and decisions the AI has made for me, but as far as I'm concerned that will remain a secret forever.
Everyone can use these tools to deepen knowledge and enhance output.
But there is a difference between using LLMs and relying on LLMs. The hype is geared toward this idea that we can rely on these tools to do all the work for us and fire everyone, but it's bollocks.
It becomes an increasingly ridiculous proposition as the work becomes more specialized, in-depth, cross-functional, regulated, and critical.
You can use it to help at any level of complexity, but nobody is going to vibe code a flight control system.
I'm surprised this doesn't get brought up more often, but I think the main explanation for the divide is simple: current LLMs are only good at programming in the most popular programming languages. Every time I see this brought up in the HN comments section and people are asked what they are actually working on that the LLM is not able to help with, inevitably it's using a (relatively) less popular language like Rust or Clojure. The article is a good example of this; before clicking, I guessed correctly that it would be complaining about how LLMs can't program in Rust. (Granted, the point that Cursor uses this as an example on their webpage despite all of this is funny.)
I struggled to find benchmark data to support this hunch; the best I could find was [1], which shows 81% performance with Python/TypeScript vs 62% with Rust, but this fits with my intuition. I primarily code in Python for work, and despite trying I didn't get that much use out of LLMs until the Claude 3.6 release, where it suddenly crossed over that invisible threshold and became dramatically more useful. I suspect that for devs who are not using Python or JS, LLMs have just not yet crossed this threshold.
As someone working primarily with Go, JS, HTML and CSS, I can attest to the fact that the choice of language makes no difference.
LLMs will routinely generate code that uses non-existent APIs, and has subtle and not-so-subtle bugs. They will make useless suggestions, often leading me on the wrong path, or going in circles. The worst part is that they do so confidently and reassuringly. I.e. if I give any hint to what I think the issue might be, after spending time reviewing their non-working code, then the answer is almost certainly "You're right! Here's the fix..."—which either turns out to be that I was wrong and that wasn't the issue, or their fix ends up creating new issues. It's a huge waste of my time, which would be better spent by reading documentation and writing the code myself.
I suspect that vibe coding is popular with developers who don't bother reviewing the generated code, either due to inexperience or laziness. They will prompt their way into building something that on the surface does what they want, but will fail spectacularly in any scenario they didn't consider. Not to speak of the amount of security and other issues that would get flagged by an actual code review from an experienced human programmer.
If you've sufficiently mastered performing the skill directly, the idea of going the "difficult" and circuitous route of asking a probabilistic tool in just the right way makes no sense. It would slow you down. It's also a training issue that I wouldn't be able to code well on a Dvorak keyboard, but I don't plan on making that switch either.
"Mastering" is a strong term and is even misleading, especially when talking about tools that give you leverage. I mean if someone masters running, does it mean you never use a car? There are thousands and thousands of instances in everyday programming where AI is going to be 10x-100x faster than any human, especially at the function level and even file/script level.
I can give you a concrete example, since things can sometimes get so philosophical. The other day I needed an LIS (longest increasing subsequence) implementation with some very specific constraints. It would've honestly taken me a few hours to get it right, as it's been a while since I coded that kind of thing. I was able to generate the solution with o3 in around 10 minutes, with some back and forth. It wasn't one-shot, but took me 2-3 iteration cycles. I got highly performant code that worked for a very specific constraint. It used Fenwick trees (https://en.wikipedia.org/wiki/Fenwick_tree), which I honestly hadn't programmed myself before. It felt like a science fiction moment to me, as the code certainly wasn't trivial. In fact, I am pretty sure most senior programmers would fail at this task, let alone be fast at it.
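For anyone who hasn't seen the trick, here is a minimal Python sketch of the generic O(n log n) LIS-length computation using a Fenwick tree over prefix maxima. It's only an illustration of the technique, not the constrained variant (or the o3 output) described above; the function name and sample input are made up for the example.

    # Minimal sketch: LIS length in O(n log n) with a Fenwick tree (BIT)
    # storing prefix maxima of dp values, indexed by compressed value rank.
    def lis_length(nums):
        sorted_vals = sorted(set(nums))
        rank = {v: i + 1 for i, v in enumerate(sorted_vals)}  # 1-based ranks
        m = len(sorted_vals)
        tree = [0] * (m + 1)  # tree[i] = max dp value over the range it covers

        def update(i, val):
            while i <= m:
                tree[i] = max(tree[i], val)
                i += i & (-i)

        def query(i):  # max dp value over all ranks <= i
            best = 0
            while i > 0:
                best = max(best, tree[i])
                i -= i & (-i)
            return best

        best_overall = 0
        for x in nums:
            r = rank[x]
            cur = query(r - 1) + 1  # extend the best LIS over strictly smaller values
            update(r, cur)
            best_overall = max(best_overall, cur)
        return best_overall

    print(lis_length([3, 1, 8, 2, 5]))  # -> 3 (e.g. 1, 2, 5)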
As a professional programmer, I deal with 20 examples every day where using a quality LLM saved me significant time, sometimes hours per task. I still do manual surgery a bunch of times every day, but I see no need to write most functions anymore or do multi-file refactors myself. In a few weeks, you get very good at applying Cursor and all its various features intelligently, like an amazing pair programmer who has different strengths than you. I'll go so far as to say I wouldn't hire an engineer who isn't very adept at utilizing the latest LLMs. The difference is just so stark - it really is like science fiction.
Cursor is popular for a reason. A lot of incredible programmers still get incredible value out of it; it isn't just for vibe coding. Implying that Cursor can be a net negative for programmers based on one example is a lot of fear mongering.
>It would've honestly taken me a few hours to get it right, as it's been a while since I coded that kind of thing. I was able to generate the solution with o3 in around 10 minutes, with some back and forth.
>which I honestly hadn't programmed myself before.
How can you be sure it is correct if you haven't mastered the data structure yourself?
It is fair game to criticize Cursor's marketing copy, and I don't think it is fear mongering to point out that it is not exactly confidence-inspiring when the first thing they show you as an example of its utility is both low quality and wrong.
I mean, the headline "Net-Negative Cursor" is a pretty far-reaching conclusion. The article does try to generalize from a code snippet to implications for AI-powered programming. The headline isn't "The example on Cursor's website is incorrect".
Do you really look at the title of this piece and think “damn that’s a far reaching conclusion”? I look at it and think “here is an instance of Cursor not delivering on its marketing promises”
That said, this article is very obviously not rhetoric. It seems almost dumb to argue this point. Maybe we should ask an AI if it is or not. I mean, I don’t know the author nor do I have anything to gain from debating this, but you can’t just go calling everything “rhetoric” when it’s clearly not. Yes there’s plenty of negative rhetoric about LLMs out there. But that doesn’t make everything critical of LLMs negative rhetoric. I’m very much pro-AI btw.
"But then I look at what these tools actually manage to do, and am disillusioned: these tools can be worse than useless, making us net-negative productive." It starts from this premise right in the first paragraph. And goes on to illustrate an example that proves their point ("Let's pick one of the best possible examples of AI-generated code changes.").
anyways, it doesn't matter that much :) we could be both right.
Those of us that consider software engineering to be “engineering” do not like LLMs, you are correct. Engineering requires that you face reality, evaluate the problem, and choose a solution in a deterministic way, then later return to evaluate the efficacy of the solution, changing the solution if required.
Those of us that consider software development to be “typing until you more or less get the outcome that you want” love LLMs. Non-deterministic vibes all around.
This is also why executives love LLMs; executives speak words and little people do what was asked of them, generally, sometimes wrong, but are later corrected. An LLM takes instructions and does what was asked, generally, sometimes wrong, and is later corrected, but much faster than unreliable human plebs who get sick all the time and demand vacation and time to mourn deaths of other plebs.
Curious. Do you write deterministic code? Because I don't think I can write the same code for any non-trivial task twice. Granted, I would probably remember which algorithm or design pattern I used before, and I can try and use the same methods, but you can also prompt that information to an LLM.
Another question: Can you hire software developers who write code in a deterministic way? If you give the same task to multiple developers with the same seniority level, do you always get the same output?
> "typing until you more or less get the outcome that you want”
For the record, I don't use LLMs for anything that is beyond auto-completion, but I think you are being unfair to them. They are actually pretty good at getting atomic tasks right when prompted properly.
Yes, I write deterministic code. Given the same input, I work hard to make sure the functions I write do the same work and give the same output every time.
Now, if you’re going to hold me to some comp-sci definition of “deterministic” then I don’t know if I do or not but I can tell you that I don’t think I’ve ever come across a problem in recent times where randomness was a desirable property.
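For what it's worth, here's a tiny Python sketch of the distinction under the usual comp-sci definition (same input, same output, no hidden state); the function names and values are made up for illustration.

    import random
    import time

    def total(prices):
        # Deterministic: the result depends only on the argument.
        return sum(prices)

    def total_with_jitter(prices):
        # Not deterministic: the result also depends on the clock and the RNG,
        # so two calls with the same input can return different values.
        surcharge = 0.05 if time.localtime().tm_hour >= 18 else 0.0
        noise = random.random() * 0.01
        return sum(prices) * (1 + surcharge) + noise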
Do I write code deterministically? I don’t know. I approach problems that look alike in like ways, though. LLMs definitely give you different results for the same problem on different days, which says that LLMs are not fully aware of the context of what you’re doing, or they are ignoring that context.
Any solution which does what is needed of it is going to be fine, so long as the solution doesn’t consume too much RAM, CPU, or time to implement, which is why we see a lot of distinct bridges in the world. And I would not trust an LLM to design a safe bridge. I would like for software engineering to adopt practices of other engineering fields. I want performance and reliability of the software I use to go WAY up. As long as we are using LLMs to author things, we will never get there, because they have no idea what they are doing. LLMs are made to make you think they know what they’re doing.
I am not sure what you're implying. The first sentence makes no sense. LLMs aren't giving you non-deterministic code. The code is shown to you, and you have complete control over how it looks and operates. Not understanding the mechanics of how the code is generated by the LLM doesn't make the output non-deterministic.
If you choose to accept bad code, that's on you. But I am not seeing that in practice, especially if you learn how to give quality prompts with proper rules. You have to get good at prompts - there is no escaping that. Now programmers do suck at communicating sometimes and that might be an issue.
But in my experience, it can write far higher quality code than most programmers if used correctly.
It all makes sense. You just don’t understand what I am saying, probably because I am not being 500% obviously clear on the internet, and without that no one knows what you’re getting at.