
Does anyone feel like the biggest selling point of LLMs so far is basically for programmers? Feels like most of the products that look like they could generate revenue are for programmers.

While you can see them as a productivity-enhancing tool, in times of tight budgets they can be useful for laying off more programmers, because a single one is now way more productive than pre-LLM.

I feel that LLMs will increase the barrier to entry for newcomers while also making it easier for companies to lay off more devs, as you don't need as many. All in all, I expect salaries for non-FAANG devs to decrease while salaries for FAANG devs to increase slightly (given the increased value they can now make).

Any thoughts on this?




I see worrying trends in my office.

Developers (often juniors) use LLM code without taking time to verify it. This leads to bugs, and they can't fix them because they don't understand the code. Some senior developers also trust the tool to generate a function and don't take the time to review it and catch the edge cases that the tool missed.

They rely on ChatGPT to answer their questions instead of taking time to read the documentation or a simple web search to see discussions on Stack Overflow or blogs about the subject. This may give results in the short term, but they don't actually learn to solve problems themselves. I am afraid that this will have huge negative effects on their careers if the tools improve significantly.

Learning how to solve problems is an important skill. They also lose access to the deeper knowledge that enables you to see connections, complexities and flows that the current generation of tools cannot. By reading the documentation, blogs or discussions you are often exposed to a wider view of the subject than the laser-focused answer of ChatGPT.

There will be less room for "vibe coders" in the future, as these tools increasingly solve the simple things without requiring as much management. Until we reach AGI (I doubt it will happen within the next 10 years) the tools will require experienced developers to guide them for the more complex issues. Older experienced developers, and younger developers who have learned how to solve problems and have deep knowledge, will be in demand.


> They rely on ChatGPT to answer their questions instead of taking time to read the documentation or a simple web search.

Documentation is not written with answers in mind. Every little project wants me to be an expert in their solution. They want to share with me the theory behind their decisions. I need an answer now.

Web search no longer provides useful information within the first few results. Instead, I get content farms who are worse than recipe pages - explaining why someone would want this information, but never providing it.

A junior isn’t going to learn from information that starts from the beginning (“if you want to make an apple pie from scratch, you must first invent the universe.”) 99.999% of them need a solution they can tweak as needed so they can begin to understand the thing.

LLMs are good at processing and restructuring information so I can ask for things the way I prefer to receive them.

Ultimately, the problem is actually all about verification.


> Documentation is not written with answers in mind. Every little project wants me to be an expert in their solution. They want to share with me the theory behind their decisions. I need an answer now.

I have an answer now, because I read the documentation last week.


This is kind of dismissive.

As a real example, I needed to change my editor config last month. I do this about once every 5 years. I really didn’t want to become an expert in the config system again, so I tried an LLM.

Sad to report, it told me where to look but all of the exact details were wrong. Maybe someday soon, though.


It can be dismissive but also true.

I used to make fun of (or deride) all the "RTFM" people when I was a junior too. Why can't you just tell me how to do whatever thing I'm trying to figure out? Or point me in the right direction instead of just saying "it's in the docs lol"?

Sometime in the last few years, as I've started doing more individual stuff, I've started reading documentation before running npm i. And honestly? All the "RTFM" people were 100% right.

Nobody here is writing code that's going to be used on a patient on the surgical table right now. You have time to read the docs and you'll be better if you do.

I'm also a hypocrite because I will often point an LLM at the root of a set of API docs and ask how to do a thing. But that's the next best thing to actually reading it yourself, I think.


I'm in total agreement, RTFM does wonders. Even if you don't remember all of it you get a gist of what's going on and can find things (or read them) faster.

In Claude I put in a default prompt[1] that helps me gain context when I do resort to asking the LLM a specific question.

[1] Your role is to provide technical advice in developing a Java application. Keep answers concise and note where there are options and where you are unsure on what direction that should be taken. Please cite any sources of information to help me deep dive on any topics that need my own analysis.
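
For reference, the same kind of default prompt can also be wired in through the API: here is a minimal sketch assuming the anthropic Python SDK, with the model name and the user question as placeholders.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    SYSTEM_PROMPT = (
        "Your role is to provide technical advice in developing a Java application. "
        "Keep answers concise, note where there are options or where you are unsure, "
        "and cite sources so I can deep dive on my own."
    )

    response = client.messages.create(
        model="claude-3-5-sonnet-latest",   # placeholder model name
        max_tokens=1024,
        system=SYSTEM_PROMPT,               # the "default prompt" travels as the system prompt
        messages=[{"role": "user", "content": "How should I structure the service layer?"}],
    )
    print(response.content[0].text)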


Ah yes, LLM is very good at giving me information from documentation that was out of date 15 years ago instead of using the documentation from 2025.


Most LLMs, especially the paid tiers, will fetch updated information. This was a valid complaint perhaps 8-12 months ago.


You mean they will DOS the servers of the open source projects? That's even worse!


Funny that that comment is itself out of date by approx. 15 months ><


Mostly made up information in my experience.


it's been enormously useful for my Qt3 work though, it really understands it well.


The same could be said for every language abstraction or systems-layer change. When we stopped programming kernel modules and actually found a workable interface, it opened the door to so many more developers. I'm sure at the time there was skepticism because people didn't understand the internals of the kernel. That's not the point. The point is to raise the level of abstraction to open the door, increase productivity and focus on new problems.

When you see 30-50 years of change you realise this was inevitable, and in every generation there are new engineers entering with limited understanding of the layers beneath. Even the code produced. Do I understand the lexers and the compilers that turn my code into machine code or instruction sets? Heck no. That doesn't mean I shouldn't use the tools available to me now.


No, but you can understand them if given time. And you can rely on them to be reliable to a degree approaching 100% (and when they fail, it will likely be in a consistent way you can understand with sufficient time, and likely fix).

LLMs don’t have these properties. Randomness makes for a poor abstraction layer. We invent tools because humans suffer from this issue too.


> it opened the door to so many more developers. [...] That's not the point. The point is to raise the level of abstraction to open the door, increase productivity and focus on new problems.

There are diminishing returns. At some point, quoting Cool Hand Luke, some men you just can't (r|)teach.


This is just one more turtle up. In college, I took a class where they taught us how to code in assembler. I hadn't looked at assembly again until this morning, and here is a summary of my 5 minutes of work.

Here's an overview of what we've done:

1. *Created Assembly Code for Apple Silicon*: We wrote ARM64 assembly code specifically for your Apple M1 Max processor running macOS, rather than x86 assembly which wouldn't work on your architecture.

2. *Explained the Compilation Process*: We covered how to compile and link the code using the `as` assembler and `ld` linker with the proper flags for macOS on ARM64.

3. *Addressed Development Environment*: We confirmed that you don't need to install a separate assembler since it comes with Apple's Command Line Tools, and provided instructions on how to verify or install these tools.

4. *Optimized the Code*: We refined the code with better alignment for potential performance improvements, though noted that for a "Hello World" program, system call overhead is the main performance factor.

5. *Used macOS-Specific Syscalls*: The assembly code uses the appropriate syscall numbers and conventions specific to macOS on ARM64 architecture (syscalls 4 for write and 1 for exit).

This gives you a basic introduction to writing assembly directly for Apple Silicon, which is quite different from traditional x86 assembly programming.


Aren't the insufficiencies of the LLMs a temporary condition?

And as with any automation, there will be a select few who will understand its inner workings, and a vast majority that will enjoy/suffer the benefits.


> Developers (often juniors) use LLM code without taking time to verify it. This leads to bugs, and they can't fix them because they don't understand the code

Well... is this something new? Previously the trend was to copy and paste Stack Overflow answers without understanding what they did. Perhaps with LLM code it's an incremental change, but the concept is fairly familiar.


So the scope of answers is a single function or a single class? I have people nearby who are attempting to generate whole projects; I really wonder how they will ensure anything about them besides the happy paths. Or maybe they plan to have an army of agents fuzzing and creating hotfixes 24/7...


> Or maybe they plan to have an army of agents fuzzing and creating hotfixes 24/7

There are absolutely people who plan to do exactly this. Use AI to create a half-baked, AI-led solution, and continue to use AI to tweak it. For people with sufficient capital it might actually work out halfway decent.

I've had success with greenfield AI generation but only in a very specific manner:

    1. Talk with the LLM about what you're building and have it generate a detailed technical specification. Iterate on this until you have a good, human-readable explanation of the entire application or feature.
    2. Start a completely new chat/context. If you're using something like Gemini, turn temperature down and enable external search.
    3. Have instructions¹ guiding the LLM; this might be the most important step, even more so than #1.
    4. Create the base/blank project as its own step. Zero features or config.
    5. Copy features one at a time from the spec to the chat context OR have them as separate documents and say things like "we're creating Feature 3A.1" or whatever.
    6. Iterate on each feature until you're happy then repeat.
¹ https://www.totaltypescript.com/cursor-rules-for-better-ai-d...


I have a hypothesis for this.

1. Developers are building these tools/applications because it's far faster and easier for them to build and iterate on something that they can use and provide feedback on directly without putting a marketer, designer, process engineer in the loop.

2. The level of 'finish' required to ship these kinds of tools to devs is lower. If you're shipping an early beta of something like 'Cursor for SEO Managers' the product would need to be much more user friendly. Look at all the hacking people are doing to make MCP servers and get them to work with Cursor. Non-technical folks aren't going to make that work.

So then, once there is a convergence on 'how' to build this kind of stuff for devs, there will be a huge amount of work to go and smooth out the UX and spread equivalents out across other industries. Claude releasing remote MCPs as 'integrations' in their web UI is the first step of this IMO.

When this wave crashes across the broader SaaS/FAANG world I could imagine more demand for devs again, but you're unlikely to ever see anything like the early 2020s again.


The shift feels real. LLMs don't replace devs, but they do compress the value curve. The top 10% get even more leverage, and the bottom 50% become harder to justify.

What worries me isn't layoffs but that entry-level roles become rare, and juniors stop building real intuition because the LLM handles all the hard thinking.

You get surface-level productivity but long-term skill rot.


> juniors stop building real intuition because the LLM handles all the hard thinking. You get surface-level productivity but long-term skill rot.

This was a real problem pre-LLM anyway. A popular article from 2012, How Developers Stop Learning[0], coined the term "expert beginner" for developers who displayed moderate competency at typical workflows, e.g. getting a feature to work, without a deeper understanding of lower levels, or a wider high-level view.

Ultimately most developers don't care, they want to collect a paycheck and go home. LLMs don't change this; the dev who randomly adds StackOverflow snippets to "fix" a crash without understanding the root cause was never going to gain a deeper understanding, the same way the dev who blindly copy&pastes from an LLM won't either.

[0] https://daedtech.com/how-developers-stop-learning-rise-of-th...


> Ultimately most developers don't care, they want to collect a paycheck and go home. LLMs don't change this; the dev who randomly adds StackOverflow snippets to "fix" a crash without understanding the root cause was never going to gain a deeper understanding, the same way the dev who blindly copy&pastes from an LLM won't either.

I read this appraisal of what "most devs" want/care about on HN frequently. Is there actually any evidence to back this up? e.g. broad surveys where most devs say they're just in it for the paycheck and don't care about the quality of their work?

To argue against myself: modern commercial software is largely a dumpster fire, so there could well be truth to the idea!


> I read this appraisal of what "most devs" want/care about on HN frequently. Is there actually any evidence to back this up? e.g. broad surveys where most devs say they're just in it for the paycheck and don't care about the quality of their work?

https://en.wikipedia.org/wiki/Sturgeon%27s_law

Almost every field I've ever seen is like that. Most people don't know what they're doing and hate their jobs in every field. We managed to make even the conceptually most fulfilling jobs awful (teaching, medicine, etc).


You could say the same sort of thing about compilers, or higher-level languages versus lower-level languages.

That's not to say that you're wrong. Most people who use those things don't have a very good idea of what's going on in the next layer down. But it's not new.


I think everything will shift more towards winner takes all.


Complex technology --> Moat --> Barrier to entry --> regulatory capture --> Monopoly == Winner take all --> capital consolidation

A tale as old as time. It's a shame we can't seem to remember this lesson repeating itself over and over and over again every 20-30-50 years. Probably because the winners keep throwing billions at capitalist supply-side propaganda.


> All in all, I expect salaries for non-FAANG devs to decrease while salaries for FAANG devs to increase slightly (given the increased value they can now make).

I find it interesting how these sorts of things are often viewed as a function of technological advancement. I would think that AI development tools would have a marginal effect on wages as opposed to things like interest rates or the ability to raise capital.

Back to the topic at hand, however: assuming these tools do get better, it would seemingly greatly increase competition. A highly skilled team with such tools could prove to be formidable competition to longstanding companies. This would require all companies to up the ante to avoid being outcompeted, requiring even more software to be written.

A company could rest on its laurels, laying off a good portion of its employees and leaving the rest to maintain the same work, but it runs the risk of being disrupted itself.

Alas, at the job I'm at now my team can't seem to release a rather basic feature, despite everyone being enhanced with AI: nobody seems to understand the code, all the changes seem to break something else, the code's a mess... maybe next year AI will be able to fix this.


LLMs are a solution in search of a problem.

The first problem they have gained traction on is programming autocomplete, and it is useful.

Generating summaries: pretty marginal benefit (personally I find it useless). Writing emails: it's quicker just to type "FYI" and press send than to instruct the AI. More problems that need solving will emerge, but it will take time.


This is a bad take to have, because it blinds you to the reality that is happening. LLMs are autocomplete for pros, but full-on programmers for non-tech folk. Like when GUIs first came out, the pros laughed and balked because of how much more powerful the CLI was. But look where the world is today.

At my non-tech job, I can show you three programs written entirely by LLMs that have allowed us to forgo paid software solutions. There is still a moat, IDEs are not consumer-friendly, but that is pretty solvable. It will not be long before one of the big AI houses offers a direct code-to-offline-desktop-app IDE that your grandma could use.


Deep research has saved me weeks' worth of man-hours in the last couple of months…


Out of curiosity, which vendor? The deep research is somewhat new to me but I am open minded.


OpenAI on o3 and Gemini 2.5. Like the user below I use multiple providers.


I use two or three at a time and then have another LLM merge and synthesize the output


How did you measure this?


I've been using LLMs as learning tools rather than simply answer generators. LLMs can teach you a lot by guiding your thinking, not replacing it.

It's been valuable to engage with the suggestions and understand how they work—much like using a search engine, but more efficient and interactive.

LLMs have also been helpful in deepening my understanding of math topics. For example, I’ve been wanting to build intuition around linear algebra, which for me is a slow process. By asking the LLM questions, I find the explanations make the underlying concepts more accessible.

For me it's about using these tools to learn more effectively.


I think it's an enabler for everyone.

So many people benefit from basic things like sorting tables, searching and filtering data etc.

Things where I might just use Excel or a small script, they can now use an LLM for.

And for now, we are still in dire need of more developers, not fewer. But yes, I can imagine that after a golden phase of 5-15 years it will start to go downhill once automation and AI get too good / better than the average Joe.

Nonetheless, the good news is that coding LLMs also enable researchers, people who often struggle to learn to code.


When a company lays off a chunk of the workforce because the increased productivity due to LLMs means they don't need as many people, how is it an enabler for the laid-off people?

What happens when most companies do this?

During the 10s, every dev out there was screaming "everyone should learn to code and get a job coding". During the 20s, many devs are being laid off.

For a field full of self-professed smart and logical people, devs do seem to be making tons of irrational choices.

Are we in need of more devs, or in need of more skilled devs? Do we necessarily need more software written? Look at npm: the world is flooded with poorly written software that is a null-reference exception away from crashing.


> What happens when most companies do this?

It also means it becomes easier to start a new company and solve a problem for people.


People get laid off when money is expensive. When money is expensive, running companies is harder. Starting a new company is even harder. Without capital, all you can offer is a broken demo of your v1 prototype and some sweet words. You can't start a company with just that when money is expensive.


Right now we don't have enough software developers, at least based on surveys.

So now LLMs help us with that.

In parallel, all the changes due to AI also need more effort for now. That's what I call the golden age.

After that, I can imagine fundamental change for us developers.

And at least where I live, a lot of small companies never got the chance to properly modernize because the good developers earn very good money somewhere else.


I like to think that AI is to code what digital electronics were to analog electronics: a step backward in terms of efficiency and 10 steps forward in terms of flexibility.

Some of us will always maintain code, but most will move higher in the stack to focus on products and their real world application.


You'll commonly see new technologies utilized by people who have the ability to make use of that technology for their own gain. Programmers are (for the most part) the only ones who can unlock LLMs to solve very specific personal problems. There are workflow automation tools that give non-programmers the ability to build workflows, but that's only one way to utilize them, and it will always be constrained by the already-developed integrations and the constraints of the workflow platform.

In regards to jobs and job losses, I have no idea how this is going to impact individual salaries over time in different positions, but I honestly doubt it's going to do much. Language models are still pretty bad at working with large projects in a clean and effective way. Maybe that will get better, but I think this generational breakthrough of technology is slowing down a lot.

Even if they do get better, they still need direction and validation. Both of which still require some understanding of what is going on (even vibe coding works better with a skilled engineer).

I suspect there are going to be more "programmers" in the world as a result, but most of them will be producing small boutique single-webpage tools and designs that are higher quality than the "made by my cousin's kid" sites a lot of small businesses have now. Companies over ~30 people with software engineers on staff seem to be using it as a performance enhancer rather than a work-replacement tool.

There will always be shitty managers and short-sighted executives looking to replace their human staff with some tool, and there will be layoffs, but I don't think the overall pool of jobs is going to shrink. For the same reason, I don't think there will be significant pay adjustments, but rather a dramatic increase in the long tail of cheap projects that don't make much money on their own.


I don't get why making engineers more productive would decrease their salaries. It should be the reverse.

You could argue that it makes the bar lower to be productive so the candidate pool is much greater, but you're arguing the opposite, increasing the barrier to entry.

I'm open to arguments either way and I'm undecided, but you have to have a coherent economic model.


> I don't get why making engineers more productive would decrease their salaries. It should be the reverse.

You need fewer engineers to do the same work; demand gets lower, supply remains as high.


But they're more productive. Your assumption is that there is a fixed amount of engineering work to do, so you need to hire fewer programmers, which is untrue. Every organization I worked at could have invested a lot more in engineering, be it infrastructure, analytics, automation, etc.

Even if there were a fixed amount of work to do and we're already near that maximum, salaries still wouldn't necessarily go down. Again, they're more productive. Farming employed around 40% of the US workforce in the early 1900s. Now farmers are more productive and they're only 2% of the workforce. Do farmers today earn a lower salary adjusted for inflation than 100 years ago? Of course not, because they're much more productive now with tools.

Generally wages track productivity. The more productive, the higher the wage.

Another example is bank tellers. With the advent of the ATM, somehow bank teller salaries didn't drop in real terms.

Show me an example of where this played out: someone was made much more productive through technology and their salary dropped considerably.


> Your assumption is that there is a fixed amount of engineering work to do, so you need to hire fewer programmers, which is untrue. Every organization I worked at could have invested a lot more in engineering, be it infrastructure, analytics, automation, etc.

True. The problem is that investment is a long-term action (cost now, for gains later). Literally every company can benefit from investment. The key question is how valuable the gains are over a given time period relative to the cost you are incurring between now and the moment the gains are actualised.

LLMs wouldn't have helped Meta/Microsoft/Google lay off fewer people in the last 2 years. In fact, you could argue that they would have helped lay off MORE people, as with LLMs you need fewer people to run the company. Do you think Zuckerberg would have INCREASED expenses (that's what productivity investments are) when their stock was in freefall?

Companies can't afford to spend indefinite amounts of money at any time. If your value has been going down or is going down, increasing your current expenses will get you fired. Big problems now require solutions now. The vast majority of the tech companies in the world chose to apply a solution now.

Maybe you are right, but a look at the tech world in the last 3 years should tell you that your decision would have been deeply unpopular with the people who hold the moneybags. And at the end of the day, those are the people you don't want to anger, no matter how smart you believe yourself to be.


In the real world experiment we're living through you're being proven wrong. Tech companies have been laying off engineers continuously for several years now and wages are down.


Layoffs started before the rise of LLMs and all the tooling around coding with LLMs; they were never used as a justification. What happened was Musk bought Twitter, cut 80% of headcount, and the site stayed up, which showed you can be leaner, and other tech CEOs took note. That, and the stock market crashed as we came out of the COVID bubble.


Layoffs track with the end of ZIRP, so that is a possible confounder.


> Does anyone feel like the biggest selling point of LLMs so far is basically for programmers? Feels like most of the products that look like they could generate revenue are for programmers.

No, you're in a tech bubble. I'm in healthcare, and you'd think AI note-takers and summary generators were the reason LLMs were invented and account for the lion's share of use. I get a new pitch every day: "this product will save your providers hours every day!" They're great products, and our providers love ours, but it's not saving hours.

There's also a huge push for LLMs to work in search and data-retrieval chatbots. The push there is huge, and now Mistral just released Le Chat Enterprise for that exact market.

LLMs for code are so common because they're really easy to create. It's Notepad plus ChatGPT. Sure, it's actually VS Code and Copilot, but you get the idea; it's actually not more complicated than regular chatbots.


People forget that software engineers are already speculated to come in 10x and 100x variants, so the impact that one smart dedicated person could make is almost certainly not the problem and not changed at all by AI.

The fact is you could be one of the most insanely valuable and productive engineers on the planet and might only write a few lines of code most days, but you'll be writing them in a programming language, OS, or kernel. Value is created by understanding direction and by theory-building, and LLMs do neither.

I built a genuinely new product by working hard as a single human while all my competitors tried to be really productive with LLMs. I'm sure their metrics are great, but at the end of the day I, a human working with my hands and brain and sharpening my OWN intelligence, have created what productivity metrics cannot buy: real innovation.


Imagine the problem is picking a path across an unexplored, desolate desert wasteland. One guide says he's the fastest: runs, not walks; at a fork in the way, always picks a path within 5 seconds. He promises you that he is the fastest guide out there by a factor of two.

You decide to get a second opinion, and find an old, wizened guide who says they always walk, not run, never pick a path in less than 5 minutes, and promise that, no matter what sales pitch the other guide gives, they can get you across the desert in half the time and at half the risk to your life.

Both can't be true. Who do you believe and why?


My father pays for ChatGPT and it’s his personal consultant/assistant for everything: from troubleshooting appliance repair, to finding the correct part to buy, to guiding him step by step to track down lost luggage and drafting the email to the airline asking for compensation (which he got).

It does everything for him and it gives him results.

So no, I don’t think it’s most useful for programmers. In fact, I feel people who are not very techy and not good at Googling for solutions benefit the most, as ChatGPT (and LLMs in general) will hand-hold them through every problem they have in life, and is always patient and understanding.


I learned to program as a child in the 1960s (thanks Dad!) so I have some biases:

Right now there seem to be two extremely valuable LLM use cases:

1. sidekick/assistant for software developers

2. a tool to let people rapidly explore new knowledge and new ideas; unlike an encyclopedia, being able to ask questions, suggest references and get summaries, etc.

I suspect that the next $$$ valuable use case will be scientific research assistants.

EDIT: I would add that AI in K-12 education will be huge, freeing human teachers to spend more 1-on-1 time with kids while AIs patiently teach them, providing extra time and material as needed.


The most valuable LLM use case right now is allowing people who don't know how to program to get their computer to do what they want it to do.

They might not be aware of this, and they don't know how to use an IDE, but the hardest part, the code-writing part, is solved.

Every week, Rachel in [small company] accounting manually scans the same column in the same structured Excel documents for amounts that don't align with the master amount for that week. She then creates another Excel document to properly structure these findings. Then she fills out a report to submit it.

Rachel is a paragraph-long prompt away from never having to do that again; she just hasn't been given the right nudge yet.
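
To make the "paragraph prompt" concrete, here is a minimal sketch of the sort of script an LLM might hand Rachel. It assumes pandas (with openpyxl) is installed; the file names, sheet name, column name, master amount and tolerance are all invented for illustration.

    import pandas as pd

    MASTER_AMOUNT = 12_500.00   # the week's master amount (made up)
    TOLERANCE = 0.01            # treat sub-cent differences as aligned

    # Read the structured workbook she scans by hand every week.
    df = pd.read_excel("weekly_amounts.xlsx", sheet_name="Amounts")

    # Keep only the rows whose amount doesn't align with the master amount.
    mismatches = df[(df["Amount"] - MASTER_AMOUNT).abs() > TOLERANCE]

    # Write the findings to a new, properly structured workbook for the report.
    mismatches.to_excel("mismatch_report.xlsx", index=False)
    print(f"{len(mismatches)} rows differ from the master amount")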


It is a bit like how incandescent light was the early selling point of electricity.

Stable odourless on-demand light was in short supply, so it helped to jump-start a new industry and network.

The real range of possible uses is near endless for the tech available today. It is just a coincidence that coding is what's in short supply today.


It can backfire though.

There is some mental overhead in switching projects. Meaning even if a developer is more efficient per project, he won't get more money (usually less, actually) while his mental load increases (more projects, more managers, more requirements, etc.).

Will be interesting to watch


> I expect salaries for non-FAANG devs to decrease while salaries for FAANG devs to increase slightly (given the increased value they can now make).

Are you implying that non-FAANG devs aren't able to do more with LLMs?


I'm non-FAANG and I'm so much more productive now. I am a full-stack dev; I use them for help with emails to non-tech individuals, analyzing small datasets, code review, code examples... it is wild how much faster I can develop these days. My job is actually more secure because I can do more, and OWN more mission-critical software, vs. outsourcing it.


> Feels like most of the products that look like they could generate revenue are for programmers.

Don’t discount scamming and spreading misinformation. There’s a lot of money to be made there, especially in mass manipulation to destroy trust in governments and journalists. LLMs and image generators are a treasure trove. Even if they’re imperfect, the overwhelming majority of people can’t distinguish a real image from a blatantly false one, let alone biased text.


LLMs don't increase programmer productivity. In fact, they actively harm it.

Programmers aren't paid for coding; they're paid for following a formal spec in a particular problem domain. (Something that LLMs can't do at all.)

Improving coding speed is a red herring and a scam.


In my 30 years of software development, maybe 5 of them were in places where getting people to provide a formal spec was ever an option.

It's also irrelevant whether LLMs can follow them; the way I use Claude Code is to have it get things roughly working, supply test cases showing where it fails, then review and clean up the code or go additional rounds with more test cases.

That's not much different to how I work with more junior engineers, who are slower and not all that much less error-prone, though the errors are different in character.

If you can't improve coding speed with LLMs, maybe your style of working just isn't amenable to it, or maybe you don't know the tooling well enough; for me it's sped things up significantly.


You don't understand.

The fact that getting a formal spec is impossible is precisely why you need to hire a developer with a big salary and generous benefits.

The formal spec lives only in the developer's head. It's the only way.

Does an LLM coding agent provide any value here?

Hardly. It's just an excuse for the developer to waste time futzing around "coding" when what they're really paid to do is cram that ineffable but very much important formal spec into their heads.


> The formal spec lives only in the developer's head.

You and I have different ideas of what a formal spec is.


Programming language code is a kind of formal spec.



Nonsense.

It works just fine to use an LLM coding agent in cases like this, but you need to be aware of what you're actually trying to do with them and be specific instead of assuming they'll magic up the spec from thin air.


I don't know. The other day I wanted to display an Active Directory object to the user. The dict had around 20 keys like "distinguishedname" and "createdat", with timestamps like 144483738. I wanted friendly display names in a sensible order and binary values converted to human-readable values.

Very easy to do, sure, but the LLM did this in one minute, recognized the context and correctly converted binary values, whereas this would have taken me maybe 30 minutes of looking up standards and docs and typing in friendly key names.

I also told it to create five color themes and apply them to the CSS. It worked on the first attempt and it looks good, much better than what I could have produced by thinking of themes, picking colors and copying RGB codes back and forth. Also, I'm not fluent in CSS.

Though I wasn't paid for this, it's a hobby project, which I wouldn't have started in the first place without an LLM performing the boring tedious tasks.
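
As a rough illustration of what that one-minute result amounts to, here is a minimal sketch of the mapping-and-conversion code involved. The attribute names, labels and ordering are invented for illustration; only the FILETIME conversion (100-nanosecond ticks since 1601-01-01 UTC) and the userAccountControl "disabled" bit reflect real AD conventions.

    from datetime import datetime, timedelta, timezone

    FRIENDLY = {                       # assumed display order and labels
        "displayname": "Name",
        "distinguishedname": "Distinguished Name",
        "pwdlastset": "Password Last Set",
        "useraccountcontrol": "Account Status",
    }

    def filetime_to_iso(ticks: int) -> str:
        """FILETIME values count 100-ns intervals since 1601-01-01 UTC."""
        epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
        return (epoch + timedelta(microseconds=ticks / 10)).isoformat()

    def to_display(ad_obj: dict) -> list[tuple[str, str]]:
        rows = []
        for attr, label in FRIENDLY.items():
            if attr not in ad_obj:
                continue
            value = ad_obj[attr]
            if attr == "pwdlastset":
                value = filetime_to_iso(int(value))
            elif attr == "useraccountcontrol":
                # Bit 0x2 of userAccountControl means the account is disabled.
                value = "Disabled" if int(value) & 0x2 else "Enabled"
            rows.append((label, str(value)))
        return rows

    print(to_display({"distinguishedname": "CN=Jane,DC=example,DC=com",
                      "useraccountcontrol": 512}))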


Yes, these sorts of tasks (classification and summarizing and generally naming things) are where LLMs are exceedingly useful.

But I was talking specifically about coding agents.

(A.k.a. spend four hours micromanaging prompts and contexts to do what can be done in 15 minutes manually.)


It depends on what you consider "coding".

For me it's mainly adding quick and dirty hooks to WordPress websites at the behest of berating marketing C-suites, for websites that are going to disappear or never be visited again in less than a few months.

For that, whatever Claude spits out is more than enough. I'm reasonably confident I'm not going to write much better code in the less-than-30-minutes I'm allowed to spend to fix whatever issue comes up.


It's very Marmite. I used to hate it when it was VS Code's crappy Copilot. Now, with Cursor and Windsurf, after some onboarding, I find it indispensable. I have used AI for coding in 3 separate roles: freelancer, CTO, and employee.

And in all 3 cases, AI has increased my productivity, and I could ship things even when I'm really sleepy. If I have very little time between things, I can send a prompt to an agent and review the result, and then when I have more time, I can clean up some of the mess.

Now my stance is really at "Whoever doesn't take advantage of it is NGMI"

You're specifically very wrong that LLMs "can't follow a formal spec in a particular problem domain". It does take skill to ensure that they will, though, for sure.

TLDR: Skill issue


> TLDR: Skill issue

What's the skill set here? Spending four hours to massage prompts to painstakingly do what can be done manually in 15 minutes?


No, it's more like knowing the strengths and weaknesses, and if the work is good, accepting it, and if not good, directing it in the right way. The latter may take some time to learn, for sure, but not that much, and once you know, it gets faster and faster.



