Hacker News | demorro's comments

Absurd take. The response was completely measured, and even if it wasn't, The Document Foundation has no obligation, either legal or moral, to present as professional. They are not a business.

As a Brit, this is so embarrassing; I wish they would stop.

Doesn't really seem like there's an anti-authoritarian party available to us either.


The most embarrassing part of that article is the comment section. Not a single dissenting voice.

You're missing the point, and also demonstrating it. This blog isn't about personal experience, and it makes no claims about LLM capability at all. It is simply about whether code, in either volume or quality, should be used as a proof claim.

> LLMs entice us with code too quickly. We are easily led.

That arguably _is_ your argument: that people aren't doing the above, and it's causing problems. You probably agree that just spinning up Claude Code on the regular plan without doing the above can still generate a fuck-ton of code, but that shouldn't be used as evidence either for or against AI effectiveness.


Hey, I like your writing. You got an RSS feed or anything?



Thanks!


Would you entertain the idea that "work was never the bottleneck", or even "building products was never the bottleneck"?

We need to address Jevons' Paradox somehow.


I love Jevons’ paradox too, but if we apply it here don’t we still end up with more software?

Definitely would entertain -- I do agree with your framing. I just think the article undersells the impact of fast+cheap codegen.

Lowering the cost of implementation will expose (and already has exposed) new bottlenecks elsewhere. But imho many of those bottlenecks probably weren't worth serious investment in solving before. The codegen change will shift that.


I think that's where a heck of a lot of the frustration on this topic is coming from. Some engineers claim to have solved the code generation issue well enough that it hasn't been the bottleneck in their local environment, and have been trying to pivot to widening the new bottlenecks for a while now, but have been confounded by organisational dynamics.

Seeing the other bottlenecks start to be taken seriously now, but (if I'm to be petulant) all the "credit" for solving the code bottleneck going to LLM systems, is painful, especially when you're in a local domain where the codegen bottleneck doesn't matter very much and hasn't for a long time.

I suspect engineers that managed to solve the code generation bottlenecks are compulsive problem solvers, which exacerbates the issue.

That isn't to say there aren't domains where it still does matter, although I'm dubious that LLM codegen is the best solve there. I am not dubious that it is at least a solve.


A well-considered article, despite the author categorizing it as a rant. I appreciate the appendix quotations, as well as the acknowledgement that they are appeals to authority.

Whilst the author clearly has a belief that comes down on one side of the debate, I hope folks can engage with the "Should we abandon everything we know?" question, which I think is the crux of things. Evidence that AI-driven development is a valuable paradigm shift is thin on the ground, and we've done paradigm shifts before which did not really work out, despite massive support for them at the time. (Object-Oriented-Everything, Scrum, etc.)


I didn't set out to teach you anything, change your behavior, or give you practical takeaways, so it's a rant (: Emotions can be expressed with citations.

I am fully on board with gen AI representing a paradigm shift in software development. I tried to be careful not to take a stance on other debates in the larger conversation. I just saw too many people talking about how much code they're generating as proof statements when discussing LLMs. I think that, specifically---i.e., using LOC generated as the basis of any meaningful argument about effectiveness or productivity---is a silly thing to do. There are plenty of other things we should discuss besides LOC.


I guess I over-diagnosed your stance, apologies.

I wonder if you have a take on measuring productivity in light of the potential difficulty of achieving good outcomes across the general population?

You mention in the second appendix (which I skipped on my first read) that you are a rather experienced LLM user, with experience in all the harnesses and context management which are touted as "best practice" nowadays. Given the effort this seems to take, do you think we're vulnerable to mis-measuring?

My mind is always thrown to arguments about Agile, or even Communism. "True Communism has never been tried" or "Agile works great when you do it right", which are still thrown about in the face of evidence that these things seem impossible, or at least very difficult, to actually implement successfully across the general population. How would we know if AI-driven-development had a theoretical higher maximum "productivity" (substitute with "value", "virtue", "the general good", whatever you want here) than non AI-driven-development, but still a lower actual productivity due to problems in adoption of the overall paradigm?


Measuring productivity in software development is a hard problem, beyond the typical categorizations used in computer science. Unfortunately, I think my best answer is to go read the book I linked in the conclusion: https://link.springer.com/chapter/10.1007/978-1-4842-4221-6_...

That is an unsatisfying answer. I can point to anecdotes that suggest AI is hurting productivity or improving it, but those don't make an argument. And the extremes on either side make it very difficult to consider. How do you weigh "An LLM deleted my production database" against "I built a business on the back of AI-assisted software"?

I think we have to wait and see. And we should revisit questions of cost and value continuously, not just about LLMs, but generally in life. Most of my motivation (though not an overwhelming majority) around using LLMs right now is a mix of curiosity and wanting to avoid the fate of the steam shovel.


That’s my entire issue with AI: how quickly people are pushing adoption without the evidence to back it up. My buddy works for Block, and he said they fired 70% of their engineers in a bid to force the remaining 30% to use AI in order to keep up.

My very large tech company has made it a goal for each engineer to spend their salary in tokens.

You can make a big bet on AI without risking the entire company. How about we wait for some evidence that shows measurable productivity increases before betting the farm?


There's plenty of evidence of this line of thinking even from before the turn of the millennium. The Mythical Man-Month, No Silver Bullet, Code Complete: they all gesture at this point.


Writing the code can definitely feel like the bottleneck when it's a single-person project and you're doing most of the other hard parts in your head while staring at the code.


Seconding this. Revelation happens subtly, often far removed from what you might later unpick as its "primary source". Immediate interpretations tend to be plastic and shallow.


Also, it might be hard to grasp for most of us, used as we are to constant stimulation and a lack of space for contemplation and incorporation of information (I recommend the works of the philosopher Byung-Chul Han on the matter), with as-yet-unknown effects on our psyche and creative output. It takes days or weeks to sit and digest novel viewpoints; asking a machine to skip all that work for us is just another example of seeking instant gratification. I have no time to think, do it for me, so I can scroll to the next post already.


I quit a job 8 years ago because I learned my code had been deployed inside missiles. Many of my colleagues had similar red lines. I doubt many would now.


> Q: "Isn't it your job as an open-source maintainer/developer to foster a welcoming community?"

The answer to this implies that the requirement to be welcoming only applies to humans, but even in this hostile and sarcastic document, it doesn't go far enough.

Open-source maintainers can be cruel, malicious, arbitrary, whatever they want. They own the project, there are no job requirements, and you have no recourse. Suck it up, fork the thing, or leave.


The bigger issue is that that kind of statement is highly manipulative, and indicates someone who is playing politics instead of focusing on results.

The better response is to call the bluff, something along the lines of: "Running an open-source project is quite time consuming. Please don't waste our time with emotional manipulation to get your way. Instead, take the time to understand why your LLM-generated pull request is not useful. You can start by understanding that we have access to LLMs too, and realize that a significant amount of work needs to happen after an LLM proposes changes."

