I feel for people who say that "AI has taken the fun out of programming", but at the same time I think to myself: is it about doing, or is it about getting things done? Like I imagine someone in the past loved their job walking each night through their city, lighting up the gas-powered street lights. And then one day someone else implemented electric street lights, and the first person lost the job they loved. But in the end, it's about providing light to the city streets, no?
For the great majority of work, it is not about fun, but about doing something other people need or want.
For me, AI allows me to realize my ideas, and get things done. Some of it might be good, some of it might be bad. I put in at least as much time, attention, and effort as the "real" programmers do, but my time goes into thinking and precisely defining what I want, cutting it up into smaller logical modules, testing, identifying and fixing bugs, iterating all the time.
Who says the thing is done? There is a massive danger now, with the sheer amount of complexity & speed brought by AI, in that it's increasingly hard to verify / do proof-of-work.
>> AI allows me to realize my ideas
Sure, for a personal/pet project. However, when working for a customer/client, they have ideas, needs, and wants, and usually have their own users and shareholders to satisfy - they need proof.
>> lighting up the gas-powered street lights
OK, no. This metaphor may well be loved by AI companies, but it doesn't actually work on so many levels. For one, AI (as actually provided) is not electricity or a physical system, a brain, or a mind; it's software (I use it very selectively). Second, the job being done (lighting, or coding) is ultimately to produce the desired outcome for whoever ordered it - a solution to a problem. Failing that, it's just work and wages for the worker but no effective solution (lighting the dark side of the moon, kinda).
I agree with the OP: as system complexity goes up, the ability to keep up goes down.
> Like I imagine someone in the past loved their job walking each night through their city, lighting up the gas-powered street lights. And then one day someone else implemented electric street lights, and the first person lost the job they loved. But in the end, it's about providing light to the city streets, no?
Lighting or extinguishing a gas lamp does not allow for creative expression.
Writing a program does.
The comparison is almost offensive.
> For the great majority of work, it is not about fun, but about doing something other people need or want.
Some of us write code for reasons that are not related to employment. The idea that someone else might find the software useful is speculative, and perhaps an expression of hubris; it's not the source of motivation.
> I put in at least as much time, attention, and effort as the "real" programmers do, but my time goes into thinking and precisely defining what I want, cutting it up into smaller logical modules, testing, identifying and fixing bugs, iterating all the time.
Ok but then none of this is about you. People still make art even though artists don’t make any money, and that is wonderful. Improving productivity for actual work could let everyone have more time for creative self expression. Hasn’t seemed to work out that way in practice but maybe this time is different!
Under any current/capitalist system, by design, it won't be different, because managers/capital want to capture the gains, which contradicts letting you keep them.
>For the great majority of work, it is not about fun, but about doing something other people need or want
The essence of this, I think, is that a sense of craftsmanship and appreciation for craft often goes hand in hand with the ethos of learning and understanding what you are working with.
So there is the issue of who rightly deserves to get the satisfaction out of getting things done. But there's also the fact that satisfaction goes hand in hand with craft, with knowledge. And that informs a perspective of being able to do things.
I finally read Adrift: Seventy-Six Days Lost at Sea, a fantastic book about surviving in a life raft while drifting across the ocean. But the difference between life and death was an individual with an abundance of practical survival, sailing, and navigation knowledge. So there's something to the idea of valuing the ability to call on hard-earned deep knowledge, and a relationship to knowledge that doesn't abstract it away.
Almost paralleling questions of hosting your own data or entrusting it to centralized services.
> It sounds like the person who made this repo didn’t need help but used the help anyway and had a bad time.
TBH, it would've taken me 10x the time. The docs are not very obvious, the RP2350 is fairly new, and its RISC-V side is not used as much and is an afterthought. If I were writing it for ARM it would've been much easier, as the ARM SWD docs are very clear.
I am also new to the pico world.
It is not easy to make myself do something when I know it's going to take 10 times longer and be 10 times harder, even if I know I will feel 10 times better.
You know when they say "find what for you is play and for others is work"? Well...
Well, for what it's worth (maybe nothing), I think you can feel relatively good about your accomplishment.
The technical leader who essentially dictated to me how to build one of my recent deliverables, down to nearly the exact architecture, was basically treating me like an AI. If they didn't have that deep knowledge, I would have also taken 10x longer to arrive at the endpoint. I followed their architecture almost exactly, and thanks to their much deeper knowledge than mine, I encountered very few issues with that development process. Had I been on my own, I would have probably tried multiple things that simply didn't work.
That person also has to be a little bit willfully ignorant about the code that I am going to produce. They don't know what I'm going to write or if it's going to suck, and maybe they won't even understand it because it's spaghetti. And they won't actually have the time to fix it because they have a zillion management-level priorities and multiple layers of reporting chain below them.
Is this AI world kind of shitty and scary, in how it might just screw our industry over and be bad for the world? It might be; we might be like the last factory workers before Ford Motor Company goes from 100,000 workers on the line to 10,000, or 1,000.
But like every cordless drill given to engineers, it's tough not to use it.
> I've never even been able to make a mobile app before. My skillset was just a bit too far off and my background more in the backend.
> Now I have a complete app thanks to AI. And I do feel a sense of accomplishment.
AI is such an existential threat to many of us since we value our unique ability to create things with our skills. In my opinion, this is the source of immediate disgust that a lot of people have.
A few months ago, I would've bristled at the idea that someone was able to write a mobile app with AI as that is my personal skillset. My immediate reaction when learning about your experience would've been, "Well, you don't really know how to do it. Unlike myself, who has been doing it for many, many years."
Now that I've used AI a bit more, like yourself, I've been able to do more that I wasn't able to before. That's changed my perspective of how I look at skills now, including my own. I've recognized that AI is devaluing our unique skillsets. That obviously doesn't feel great, but at the same time I don't know if there's much to be done about that. It's just the way things are now, so the best I can do is lean into the new tools available and improve in other ways.
It's entirely possible that this will turn us all into much less of a special highly-compensated profession, and that would suck.
Although when you say "AI is devaluing our unique skillsets," I think it's important to recognize that even without AI, it's not our skillsets that ever held value.
Code is just a means to translate business logic into an automated process. If we had the same skillset but it couldn't make the business logic do the business, it would have no value.
Maybe this is a pedantic distinction, but it's essentially saying that the "engineer" part of "software engineer" is the important bit - the fact that we are just using tools in our toolbox to get whatever "thing" needs to get done.
At least for now, it seems like actually possessing a skillset is helpful and/or critical to using these tools. They can't handle large context, and even if that changes, it still seems to be extremely helpful to be able to articulate on a detailed level what you want the AI to develop for you.
An analogy: put your customer in front of a development team to tell them how to make the application, versus putting a staff engineer or experienced product manager in front of them. The AI might be able to complete the project in both cases, but with that experienced person it's going to avoid a lot of pitfalls and work faster/better.
This analogy reminds me of a real-life instance where I built something that someone higher than director level basically spelled out exactly, essentially dictating the architecture to me that I was to follow. They don't really see my code, they might even hate my code, I am like an AI to them. And indeed, by dictating to me a very good architecture, I was able to basically follow that blindly and ran into very few problems.
> Now I have a complete app thanks to AI. And I do feel a sense of accomplishment.
It's the sense of accomplishment of a toddler who sits on his daddy's neck while all the aunties around make round eyes and babble about how tall our boy is.
Re: craft vs git 'er dun, I don't think these have to be mutually exclusive. AI-boosted development is definitely different from the old ways, but the craft approach is a mindset and AI is just another tool.
In some ways, I find that agent-assisted development opens doors to producing even higher quality code. Little OCD nitpicks, patterns that appear later in development, all the nice but not really necessary changes...these time-consuming refactors are now basically automated in 1-shot by an agent.
People who rush to ship the minimum were writing slop long before LLMs. At least now we have these new tools to refactor the slop faster and easier than ever.
I truly enjoy programming, but the most frustrating part for me was that I had many ideas and too little time to work on everything.
Thanks to AI, I can now work on many side projects at the same time and, most importantly, just (as you mentioned) get stuff done quickly, most of the time with good enough (or sometimes excellent) results.
I'm both amazed and a bit sad, but the reality is that my output has increased significantly - although the quality might have dropped a bit in certain areas.
Time is limited, and if I can increase my results in the same way as the electric street lights, I can simply look back at the past and smile that I lived in a time where lighting up gas-powered street lights was considered a skill.
As you perfectly put it, it's not about the process per se, it's about the result. And the result is that now the lights are only 80% lit. In a few months/years we'll probably reach the threshold where the electric street lights are brighter than the gas-powered ones, and you'd be a fool to still light them up one by one.
8h of work, 1 or 2h of commute, then a little bit of self-care, etc. - there is not much time to work on "side projects", unfortunately. AI is a superbooster here, as it allows one to move forward much quicker than before.
AI Coding has the same problem as "self driving cars".
Until the car can be completely trusted to drive itself and never need human intervention, the human has to stay in a weird state of not driving the car, but being completely alert and attentive and ready to resume control in an instant. This can be more tiring and stressful than just driving yourself.
Vibe coding is very similar. The AI can generate code at an astounding rate. But all of it has to be examined carefully for strange errors that a human would be very unlikely to make.
In both cases, it's very questionable whether there is significant savings in the time or attention of the human still in the loop vs just performing the activity completely by herself.
Making things is often not just about making the thing right in front of you, but about building the skills to make bigger and better things. When you consider the long view, the struggle that makes it harder to make the thing at hand is well worth it. We have long considered taking shortcuts that don’t build skills to be detrimental in the long term. This pretty much only stops being the case when the thing you are short cutting becomes totally irrelevant. We have yet to see how the atrophying of programming skills will affect our collective ability to make reliable and novel software.
In my experience, I have not seen much new software that I’m happy about that is the fruit of LLMs. I have experienced web apps that I’ve been using for years getting buggier.
I feel that too much reliance on LLMs will leave engineers with at best a linear increase in skill over time, compared to the exponential returns of accumulated knowledge. For some I fear they will actually get negative returns when using AI.
Historically, many master painters used teams of assistants/apprentices to do most of the work under their guidance, with them only stepping in to do actual painting in the final details.
Similar with famous architects running large studios, mostly taking on a higher level conceptual role for any commissions they're involved in.
Traditionally in software (20+ years ago) architects typically wouldn't code much outside of POC work, they just worked with systems engineers and generated a ton of UML to be disseminated. So if we go back to that type of role, it somewhat fits in with agentic software dev.
That's where we're at a marked disagreement. "It's just a way to get paid" reduces all human knowledge to a monetary transaction, so any type of learning is only worth what is being paid for it.
Thankfully the people that came before us didn't see it that way otherwise we wouldn't even have anything to program on.
> they just worked with systems engineers and generated a ton of UML to be disseminated. So if we go back to that type of role, it somewhat fits in with agentic software dev.
I've never met one of those UML slingers that added much value.
I fully and absolutely agree, the future is bright. Soon we can outsource both the work and the ideas to LLMs. Make a fully automated system to produce complete novels, music, movies, videos, and software. Just prompt AI to make a movie, book, music, or even a SaaS. No humans involved. Absolutely superior system. Just instruct the LLM to start producing programs and monetizing them. No ideas needed, no effort. No thought.
You can even source ideas from it. No need to think or have any personal input anymore.
And then we can have a second LLM read and digest the book for us. In fact, we can create a pipeline, where LLM writes, LLM reads, and then LLM leaves reviews and reddit comments, all without any human input or oversight on any of these steps, while you can do the fun stuff, like uhm, washing dishes or something.
The thing is that a large portion of what people are using AI (and tech in general) to do simply doesn't need to be done. We don't need a "smart" dental floss dispenser, or something that automatically buys toilet paper for you, or little Clippy-the-paper-clip bots popping up everywhere to ask if you need help. A lot of the tech that's coming out is a through-and-through waste of everyone's time and energy --- its users' as well as its makers'.
What if the real reason for recent softness in software engineer hiring is that we have almost all the software we really need?
I feel like it's been a while since I even saw some software and thought "oh, I really need that!" vs "here is something we will force you to download and install on your phone in order to do something that previously didn't require software". Like online menus in restaurants, or event tickets, or parking meter apps.
I think we've had almost all the software we need for years now. Most new software is small variations on existing software. In terms of software that would make you go "oh I really need that" because it's a genuinely novel type of functionality. . . it's hard for me to think what's the most recent software I use that falls into that category. Actually more common is that I use an old, working program but then it stops working for whatever reason (e.g., not compatible with latest upgrades) and then I need to look for "new" software to do the same old thing, which it often doesn't do as well as the old software.
Programming really is fascinating as a skill because it can bring so much joy to the practitioner on a day-to-day problem-solving level while also providing much value to companies that are using it to generate profit. How many other professions have this luxury?
As a result, though, I think AI taking over a lot of what we're able to do has the dual issue of making your day to day rough both as a personally-enriching experience but also as a money-making endeavor.
I've been reading The Machine That Changed the World recently and it talks about how Ford's mass production assembly line replaced craftsmen building cars by hand. It made me wonder if AI will end up replacing us programmers in a similar way. Craftsmen surely loved the act of building a vehicle, but once assembly lines came along, it no longer made sense to produce cars in that fashion since more unskilled labor could get the job done faster and cheaper. Will we get to a place where AI is "good enough" to replace most developers? You could always argue that craftspeople could generate better code, but I can see a future where that becomes a luxury and unnecessary if tools do most of the work well enough.
How people derive utility varies from person to person and I suspect is the root cause of most AI generation pipeline debates, creative and code-wise. There are two camps that are surprisingly mutually exclusive:
a) People who gain value from the process of creating content.
b) People who gain value from the end result itself.
I personally am more of a (b): I did my time learning how to create things with code, but when I create things such as open-source software that people depend on, my personal satisfaction from the process of developing is less relevant. Also, getting frustrated with code configuration and writing boilerplate code is not personally gratifying.
As much as I dislike not having a good mental model of all the code that does things for me, ultimately, I have to concede the battle to get things done. This is not that different from importing packages that someone else wrote, or relying on codebases of my colleagues.
That said, even if I have to temporarily give up on understanding, I don't believe there's any reason to completely surrender control. I'll call a technician when things need fixing right away, but that doesn't mean I shouldn't learn (some of) the fixes myself.
> is it about doing, or is it about getting things done?
It's both. When you climb a mountain, the joy is reaching the summit after the hard hike. The hike is hard but also enjoyable in itself, and makes you appreciate reaching the top even more.
If there's a cable car or a road leading to the summit, the view may still be nice, but I'll go hiking somewhere else.
This reminds me of the debate around Soylent when that came out. Are meals for enjoyment, flavour, and the experience or are they about consuming nutrients and providing energy?
I'd say that debate was largely philosophical, with proponents on both sides. And really, the answer might be that both things are true for different people at different times. Though I also observe that Soylent did not, by and large, end up replacing meals for the vast majority.
Daniel Pink's book "Drive" explains that true motivation comes from intrinsic factors: autonomy, mastery, and purpose. It’s not about external rewards or doing every task yourself, but about having the freedom to direct your work, the drive to improve your skills, and a meaningful purpose behind what you do. In programming, AI can free us from routine tasks, letting us focus on creative problem-solving and realizing our ideas - this aligns perfectly with what Pink calls the deeper, more fulfilling motivation to get things done in a way that matters. So, it’s less about losing fun and more about shifting to meaningful engagement and impact.
I was reflecting on this yesterday, as I have often hated AI for generating emails and other written text, but kinda am loving it for writing code.
One realization was what you said about me just wanting the code done so I can use the app.
The second was that, for me, I care about the output of the code, not the code itself. Whereas with the written word, I care about the word. Perhaps if I used AI to summarize what someone wanted in the email then I would care less about the written word coming from a human, but right now I still want to read what they've written. You can say that there are programmers who want to read the code from someone else, but I don't think there's the equivalent of code abstracted away into a UI that exists for the written word (open to that being challenged).
The last and maybe biggest realization is that computer language exists as multiple levels of abstraction. Machine language, assembly language, high-level language, etc. I'm not sure human languages have as many layers of abstraction, or if they do, they exist within the same language.
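To make that concrete, here is the same operation at a few of those levels (an illustrative sketch; the x86-64 lines are shown as comments, not executed):

    # The same multiplication at three levels of abstraction.
    price_cents, qty = 1999, 3   # high-level language: names, syntax, types
    total = price_cents * qty    # one line of Python...

    # ...which at lower levels might look something like:
    #   assembly:     imul rax, rbx   ; multiply two registers
    #   machine code: 48 0F AF C3    ; the raw bytes the CPU decodes

    print(total)  # 5997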
I'll keep reflecting, just my short two cents for now.
The correct analogy would be that half of the lights randomly wouldn't light up, and then you'd have to go out anyway, but in a hurry and only to certain ones, just to discover you need to go back 20 minutes later because there is another problem with the same light, and your boss would expect you to do everything much faster, and you'd end up even more frustrated.
> is it about doing, or is it about getting things done?
No, this is a false dichotomy and slippery slope dangerous thinking.
It’s about building a world where we can all live in and find meaning, joy, dignity, and fulfillment, which requires a balance between pursuing the ends and preserving the means as worthwhile human pursuits.
If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.
Human society and civilization is for the benefit of humans, not for filling checkboxes above all else.
The amount of pushback against "civilization, life is for the benefit of humans and not for filling checkboxes" is bewildering. Drones not questioning what cancerous thinking they have. Life is for living, not for the economy or whatever technocratic utility function you think you're optimizing for. They are all tools, not the destination.
I'm sadly unsurprised, but me ranting about silicon valley mentality on HN feels like yelling at a cloud. Best we can do is keep trying to make people's lives better even if it is not in the best interest of the shareholders :)
Funnily enough, the OG silicon valley vibe includes "let's make this world a better place to live", the hippie stuff imho. Nowadays it's more like "let's maximize shareholder value and extract what we can" and that's lazy. Bring back the old school SV and take more acid!
>If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.
So much infrastructure is built by people having a less than good time.
An Engineer might get the jollies designing a bridge, but the workers who work on it don't.
The goal is to give lots of people happiness from not having to drive 100km out of their way.
If we solve a lot of problems for a lot of people and all it costs is the happiness of a few software engineers, well I am not convinced they were happy to begin with. Fund it.
Same here. My father was a bricklayer. Backbreaking work. On weekends he drove us to the houses he had worked on. We didn't appreciate it as young children, but he was definitely very proud of what he built.
And why should the workers who work on the bridge be denied happiness and satisfaction from their work? Building and creating physical stuff is incredibly rewarding in concept for so many people - especially in a culture that values/glorifies physical and manual labor, like parts of the US. I mean, Bob the Builder is a popular kids' show, and "all boys are fascinated by big trucks and construction projects" is both an incredibly common stereotype and, to a significant extent, just a true statement.
> An Engineer might get the jollies designing a bridge, but the workers who work on it don't.
I think work becoming more abstract and not seeing anything concrete like a bridge or a road or a building after the work is complete is the source of a lot of mental illness, melancholy, and even suicidal ideation in modern society.
How do I know this very comment wasn't written by someone who was having a bad time, though? The tone is frustrated and critical. I'd put the odds at maybe 1 in 5.
Where do we draw the line where we have to delete our own grouchiness from the Internet for fear of letting others consume something we created in anger?
>If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.
What if the people, miserable or not, getting paid to make the meal have a fundamentally opposed world view to you and will use some amount of their wealth to try and enforce it on you in roundabout ways.
Because I assure you, some guy in a warehouse in Des Moines filling out some bullshit web form just so he can swipe his employee key card and start his forklift, doesn't want to enrich you, or me, or just about anyone else on HN. And his boss who felt compelled to buy the crap just to save a buck on insurance probably feels about the same.
I was a bit disappointed by your response because, from the way you started it, I was expecting a stronger argument. I do agree with your point, but I think a key aspect of the false dichotomy is that there is evidence that AI is not actually "getting things done".
>there is evidence that AI is not actually "getting things done"
But there is also evidence that AI is actually getting things done, right?
Most of the evidence that AI can't get things done that I've seen tends to be along the lines of asking it to do a large job and it struggling. It seems like a lot of people stop there, and don't investigate problems where it might be a better fit.
> It seems like a lot of people stop there, and don't investigate problems where it might be a better fit.
The AI sceptics do think deeply about where AI might be a better fit. But for every hypothetical use case they could come up with, they had to conclude that
- AI has to become much much more reliable to be suitable for this use case
- the current AI architectures (as "the thing that bigtech markets") will likely, in principle, never be able to achieve this kind of reliability
This is exactly why these people got so sceptical about AI, and also why they got so vocal about their opinions.
I've put far more time than I should have into trying to get AI to successfully complete tasks of varying sizes in our codebase at work. It simply cannot do things reliably and adequately when working in a large codebase. It lacks sufficient context, it ignores established conventions, and worst of all, it often ignores instructions (endless unnecessary comments being my personal biggest peeve).
So I think I have, in fact, tried my best to use it.
It's great for little tiny things. Give me a one-off script to transform some command's output, translate some code from Python to TypeScript, write individual unit tests for individual functions. But none of that is transforming how I do my job; it's saving me minutes, not hours.
Nobody at my company is getting serious quantities of programming done with AI, either, and not for lack of trying. I've never been one to claim it's useless, just that its usefulness (i.e. "how much is getting done") is drastically overblown.
I think we're largely in agreement here, though I wouldn't go so far as to say it's limited to "little tiny" things, but I guess that's a matter of scale. I use it for a lot of tooling, which is typically in the 500-5,000 line range, and it works really well for these sorts of things. A lot of them it will just one-shot and not break a sweat.
I have cases where it saves hours for sure, but they are fewer and further between. Last week we used it to resolve 600+ linting warnings in 25-year-old code, which probably saved me the better part of a day. It did a fantastic job of converting %-format strings to f-strings. I created a skill telling it how to test a %-to-f conversion in isolation, and it was able to use that skill to flawlessly convert all of our strings to modern usage.
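To give a flavor, here is roughly the kind of check that skill describes (a minimal sketch; the helper name and sample values are illustrative, mine rather than the skill's):

    # Render the old %-style template and the proposed f-string replacement
    # against the same sample values; require byte-for-byte identical output.
    def check_conversion(old_render, new_render, samples):
        # old_render/new_render are callables taking a dict of sample values
        for sample in samples:
            before = old_render(sample)
            after = new_render(sample)
            assert before == after, f"mismatch: {before!r} != {after!r}"

    samples = [{"name": "disk0", "count": 3}, {"name": "", "count": 0}]
    check_conversion(
        lambda s: "%s has %d items" % (s["name"], s["count"]),
        lambda s: f"{s['name']} has {s['count']} items",
        samples,
    )
    print("conversion verified on all samples")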
> But there is also evidence that AI is actually getting things done, right?
Is there? I haven't seen a single AI success story that rang true, that wasn't coming from someone with a massive financial interest in it being successful. A few people subjectively think AI is making them more productive, but there's no real objective validation of that; they're not producing super-impressive products, and there was that recent study that had senior developers thinking they were being more productive using AI when in fact it was the opposite.
You seem to be setting a high bar (AI success stories don't ring true), while taking the study as fact. This feels like a cognitive bias.
I believe you are talking about the study: Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. It is an interesting data point, but it's far from conclusive. It studied 16 developers working on large (1MLOC+) codebases, and the AI tooling struggles with large codebases (new tools like Brokk are attempting to improve performance there). The authors acknowledge that participants dropped hard AI-disallowed issues "reducing the average AI-disallowed difficulty". Some of the selected developers seem to have been inexperienced at AI use.
Smaller tools and codebases and unfamiliar code are sweet spots for the AI tools. When I need a tool to help me do my job, the AIs can take a couple-sentence description and turn it into working code, often on the first go. Monitoring plugins, automation tools, programs of a few thousand lines of code, writing tests - these are all things the AIs are REALLY good at. Also: asking questions about the code.
A few examples: Last night I had Claude Code implement a change to the helix editor so that if you go back into a file you previously edited, it takes you back to the spot you were at in your last editing session. I don't know the helix code, nor Rust, at all. Claude was able to implement this in the background while I was working on another task and then watching TV in the evening. A few weeks ago I used Claude Code to fix 600+ linting errors in 20-year-old code, in an evening while watching TV; these easily would have taken a day to do manually. A few months ago Claude built me an "rsync but for block devices" program; I did that one as a comparison of writing it manually vs vibe coding it with Claude, and Claude had significant advantages.
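For a flavor of the "rsync but for block devices" idea: the core loop is conceptually simple (a hypothetical sketch of the approach, not the program Claude actually wrote; the chunk size and names are placeholders):

    # Read source and destination in fixed-size chunks and rewrite only the
    # chunks that differ, so unchanged regions cost reads but no writes.
    import sys

    CHUNK = 1024 * 1024  # 1 MiB per chunk; an arbitrary choice

    def sync_devices(src_path, dst_path):
        copied = total = 0
        with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
            while True:
                block = src.read(CHUNK)
                if not block:
                    break
                existing = dst.read(len(block))
                total += 1
                if block != existing:
                    dst.seek(-len(existing), 1)  # back to the chunk start
                    dst.write(block)
                    copied += 1
        print(f"rewrote {copied} of {total} chunks")

    if __name__ == "__main__":
        sync_devices(sys.argv[1], sys.argv[2])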
But, I'm guessing these will fall into the "does not ring true" category, probably also "no real objective validation". But to me, personally, there is absolutely evidence that AI is actually getting things done.
> You seem to be setting a high bar (AI success stories don't ring true), while taking the study as fact. This feels like a cognitive bias.
I think it's interesting that you jump to that. I consider a study, even a small one, to be better evidence than subjective anecdotes; isn't that the normal position that one should take on any issue? I'm not taking that study as gospel, but I think it's grounds to be even more skeptical of anecdotal evaluations than normal.
> Some of the selected developers seem to have been inexperienced at AI use.
This seems to be a constant no-true-Scotsman argument from AI advocates. AI didn't work in a given setting? Clearly the people trying it were inexperienced, or the AI they were testing was an old one that doesn't reflect the current state of the art, or they didn't use this advocate's super-awesome prompt that solves all the problems. I never hear these objections before someone tries to evaluate AI, only after they've done it and got a bad result.
> But, I'm guessing these will fall into the "does not ring true" category, probably also "no real objective validation".
Well, yes. Duh. When the best available evidence shows little objective effectiveness from AI, and suggests that people who use AI are biased to think it's more effective than it was, I'm going to go with that, unless and until better evidence comes along.
>I consider a study, even a small one, to be better evidence than subjective anecdotes
We're coming at it from very different places is the thing. The GenAI tooling is allowing me to do things that I otherwise wouldn't have time to do, which objectively to me is a clear win. So, I'm going to look at a study like that and pick it apart, because it doesn't match my objective observations. You are coming from a different angle.
> The GenAI tooling is allowing me to do things that I otherwise wouldn't have time to do, which objectively to me is a clear win. So, I'm going to look at a study like that and pick it apart, because it doesn't match my objective observations.
Ahh, we've reached the point in the discussion where you're arguing semantics...
"With a basis in observable facts". I am observing that I am getting things done with GenAI that I wouldn't be able to otherwise, due to lack of time.
While you were typing your message above, Claude was modifying a 100KLOC software project in a language I'm unfamiliar with to add a feature that'll make the software have one less rough edge for me. At the same time, I was doing a release of our software for work.
Feels pretty objective from my perspective. Yes, I realize from your perspective it is subjective.
> When the best available evidence shows little objective effectiveness from AI, and suggests that people who use AI are biased to think it's more effective than it was, I'm going to go with that, unless and until better evidence comes along.
Well you're in luck, a ton of better evidence across much larger empirical studies has been available for a while now! Somehow they just didn't get the same amount of airtime around here. You can find a few studies linked here: https://news.ycombinator.com/item?id=45379452
But if you want to verify that's a representative sample, do a simple Google Scholar search and just read the abstracts of any random sample of the results.
No, subjective experience is not reliable and is the whole reason humanity invented the scientific method to have a more reliable method of ascertaining truth.
There's not much in the way of what you'd call strong positive evidence. Lots of user testimonials, which are, as always, kinda useless.
The few serious studies attempting to actually measure it (vs asking people "do you think this helped you?"; again, that's not useful evidence of anything) seem to have come out anywhere from "a wash" to "mildly detrimental".
I guess you are a vegan too, right? I get this take, but it is naive. Not everything must pass the morality purity test.
Did mass processed food production stop people from cooking or enjoying human made food? No it did not. The same is true in almost all domains where a form of industrialization happens.
> Did mass processed food production stop people from cooking or enjoying human made food?
Yeah but what if I'm getting pitted against my coworkers who are vibe coding and getting things done faster than I am. Some people write code with pride because it's their brainchild. AI completely ruins the fun for those people when they have to compete against their coworkers for productivity.
I'm not in disagreement with you or the GP comment, but it is super hard to make nuanced comments about GenAI.
That is an issue that exists regardless of AI, but I do get it. Most furniture is not handmade. But that doesn't preclude people from enjoying buying or making handmade furniture.
The fact that I think people need to get over is that you are blessed beyond measure to have a fun job that gives you creative joy and satisfaction. Losing that because of AI/a new tool is not some unprecedented event signaling the end of creativity. A job is a job.
What amuses me is that I have just as much fun clacking away with some AI help as I did before. But then again, I like the problem-solving process more than writing the same code in one specific programming language.
They are wrong, however. To take the food example, the existence of processed food production creates artifacts like food deserts. If you are privileged, these things don't affect you as much, as you get more agency.
Just the existence of quick-to-eat-and-prepare foods is going to put limits on how long you are going to be given for lunch and dinner. Even if you wanted to prepare fresh food, the system is going to make it difficult, since it becomes an unsupported activity in terms of time allowances and market access.
I made no judgement about the quality of processed food, or where the different options rank in terms of access to calories and nutrition, or what is actually feasible. It was simply about how changes can become mildly to severely obligatory for certain populations in our economic system.
> If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.
So every restaurant you go to, you head to the back to run a purity test on the political beliefs and “happiness” of the people making your food to make sure they line up exactly with what you believe?
This just screams luxury beliefs to me, and historically, Utopianism like this has been the actual dangerous slippery slope. Like…tens of millions of people starving dangerous.
I just don’t think this fully automated luxury communism thing you are fantasizing about will make you happy. Seeking pleasure 24/7 is pathological and means you stop feeling it, and doing things for the benefit of others instead of yourself is miserable…but ultimately more fulfilling.
> So every restaurant you go to, you head to the back to run a purity test on the political beliefs and “happiness” of the people making your food to make sure they line up exactly with what you believe?
Does their argument get invalidated if they don't verify *every* restaurant ever? Nobody has the time or the resources to follow their moral standards with 100% precision, but if we're doing our best, I'd argue we can still take that moral stance.
Recently a slave labour scheme was dismantled in my country in which some wineries were keeping slaves to produce grape juice. The companies were on the news, and although I do love some grape juice I will never ever buy from them again. Do I check *every* single source of the products I consume? Of course no. Can they eventually do some marketing tricks and fool me into buying from them again? Maybe. But I do my best and I feel like this is sufficient to claim this is a good moral stance nonetheless.
This is dumb. Every fast food meal you eat was prepared by someone having a miserable time. Guess what, they'd be more miserable without the job. Getting stuff done is what benefits humans, not feel-good jobs.
What strikes me about this exchange is no one is talking about the money. In the past, you could do either and no one had to care except you. Now a lot of jobs that people could find fulfilling aren't because the economy is so distorted, so how are we supposed to honestly look at this? I guess let's walk these people off the plank and get this over with...
Are you seriously and earnestly arguing that harm-minimisation is useless and we should all just open the human-suffering throttle, or did you just not think that far ahead?
I am hoping the latter. Being foolish is far more temporary a condition than being cruel.
How are you “minimizing harm” by pearl clutching about not eating fast food? The front line people you are interacting with at the fast food restaurant or the grocery store have it easiest in the chain of events that it takes food to get to you. Do you think that fast food workers have it harder than the people at the grocery store?
Also, the core point is about people being able to find meaning in their work. That you've decided to laser in on this specific point to go on a tangent of whattaboutism is largely irrelevant.
The fact is that most of the 3-4 billion+ people on earth don’t “find meaning in their work” and they only work because they haven’t overcome their addiction to food and shelter. If the point was irrelevant to your argument, why make it?
I didn't actually make the point initially. I was challenging the reply's point that:
a) just because some people are miserable at work, doesn't mean we shouldn't care that other people might become miserable at work
b) Someone saying they prefer their food to be made without suffering is clearly a hypocrite in all cases because... there are miserable people in fast food jobs?
People who work in fast food may not be “passionate” about their job. But they aren’t “suffering”. You aren’t relieving anyone’s “suffering” by not eating fast food or even if there was no fast food. They aren’t “suffering” anymore than people working at the grocery store.
Cry me a river for software developers (been delivering code professionally for 30 years and before that as a hobbyist) because now we have something that makes us more efficient.
I don't know if you're intentionally being obtuse or you just failed third grade reading comprehension, but can you please go argue with the people actually making these points (rather than me, a random person who has replied to them)?
So exactly what point are you trying to make? That software developers - at least the employed ones - “are suffering” because of AI? That you don’t eat fast food because you believe the employees are being exploited? What exactly is your point?
Increasing productivity is how we minimize harm. Many people hate their job but are happy to have it because it allows them to consume things. More production = less suffering
I really don't see how this helps the fast food workers. When fewer people eat their food, they lose jobs and become even more miserable. Sure, if you hire them as a private chef you're helping them out, but if you just cook yourself you haven't done a thing to improve their life.
> If I am eating a delicious meal but the people preparing it had a miserable time, or it was prepared entirely by robots controlled by nefarious people using the profits to harm society, I don’t want it.
Unfortunately I feel compelled to express the doomer take here, but I don't think most people care how their fast fashion or iPhones are made. And very few find it practically doable to boycott a company like Nestle. People trying to go full Stallman (sans the problematic stuff, rather along the lines of FSF) also find it just difficult.
Most people are just happy that the boot is on the other foot or someone else's back and that they have enough convenience and isolation from the rest of the world not to care. Or honestly it's hard to get by for them as well and all of those trinkets and unethically made products help them get through the day.
> Human society and civilization is for the benefit of humans, not for filling checkboxes above all else.
I really wish that was the case, instead of for the extraction of what little wealth we have by corpos and the oligarchs (call them whatever you want), to push us more towards a rat race of sorts where we get by just barely enough to keep consuming but not enough to effect meaningful change most of the time. Then again, could be better, could be worse - it's cool to see passionate people choosing to make something just for the sake of the experience and creating something unique, not always with a profit in mind.
Every programmer occasionally, when nobody’s home, turns off the lights, pours a glass of scotch, puts on some light German electronica, and opens up a file on their computer. It’s a different file for every programmer. Sometimes they wrote it, sometimes they found it and knew they had to save it. They read over the lines, and weep at their beauty, then the tears turn bitter as they remember the rest of the files and the inevitable collapse of all that is good and true in the world.
This file is Good Code. It has sensible and consistent names for functions and variables. It’s concise. It doesn’t do anything obviously stupid. It has never had to live in the wild, or answer to a sales team. It does exactly one, mundane, specific thing, and it does it well. It was written by a single person, and never touched by another. It reads like poetry written by someone over thirty.
Every programmer starts out writing some perfect little snowflake like this. Then they’re told on Friday they need to have six hundred snowflakes written by Tuesday, so they cheat a bit here and there and maybe copy a few snowflakes and try to stick them together or they have to ask a coworker to work on one who melts it and then all the programmers’ snowflakes get dumped together in some inscrutable shape and somebody leans a Picasso on it because nobody wants to see the cat urine soaking into all your broken snowflakes melting in the light of day. Next week, everybody shovels more snow on it to keep the Picasso from falling over.
You don't really get that Good Code with AI that much, or at least I haven't felt that way looking at it. Then again, I could say that about most code written by other people, not sure what that means. Maybe I just have an odd taste in code that so little of it seems pleasant.
Are you from the middle ages, or are you so out of touch with blue-collar work that you're under the impression the average sewer worker has to manually handle waste?
> is it about doing, or is it about getting things done?
For me it is getting things done while also understanding the whole building, from its foundation up. Only with such a comprehensive mental model can I predict how my code will behave in unanticipated situations. I've only ever achieved this mental model by doing.
Succinctly, "it is about doing" to guarantee I'm "getting things really done".
> my time goes into thinking and precisely defining what I want
I'm reminded of the famous quote "Programs must be written for people to read, and only incidentally for machines to execute." [1]
A programming language is exactly the medium that lets me precisely define my thoughts! I think the only way to achieve equivalent precision using human language is to write them in legalese, just as a lawyer does when poring over the words and punctuation in a legal contract (and that depends upon so much case law to make the words really precise).
> For me, AI allows me to realize my ideas, and get things done.
More power to you! Bringing our ideas to life is what we're all after.
I am reminded of Dijkstra's remark on Lisp, that it "has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts."
(I imagine that this is not limited to Lisp, though some languages may yield more or less results.)
If we consider programming entirely as a means to and end, with the end being all that matters, we may lose out on insights obtained while doing the work. Whether if those insights are of practical value, or economic value, or of no value at all, is another question, but I feel there is more likely to be something gained by actually doing the programming, compared to actually lighting the street lamps.
(Of course, what you are programming matters too. Many were quick to turn to AI for "boilerplate"; I doubt many insights are found in such code.)
While I agree with your point that it's sometimes about getting things done, your example is flawed. Your example about gas-powered street lights is arguing for technology evolution. But the people who say "AI has taken the fun out of programming" are fighting for craftsmanship and love.
Nobody ever found craftsmanship or pleasure in lighting up gas-powered street lights. But there are a lot of programmers who value "doing" programming because it's their craft or art form.
I have never had a programming job. But I program all day to serve my customers for the products I created. Because it's my art-form. I love "doing" it (my way!).
It will get done. I just want to be the person to do it.
I agree wholeheartedly that it's about getting things done, and that it's what the universe cares about. As individuals we enjoy being in flow, and when the nature of the work changes we may lose our flow and shake our fists in frustration...
Change can be painful, but that's because it takes energy.
From particles to atoms to cells to people to civilizations, it seems like the whole point is to get more stuff done. Why? Probably because getting stuff done is more interesting than the alternative.
Once again, no one is capable of coming up with a good analogy. The analogy here would be that someone comes up with occasionally exploding electric lights that sometimes create black holes that suck up all the surrounding light for a block, and that really work as intended under 60% of the time. But the city rushes to implement them as recklessly and quickly as possible, because of promises and lies. Also, the whole time it's happening, they keep saying not a single gas-lighter will lose their job, because the black holes need to be fed human flesh sometimes... so we will get them to do that.
It's about productivity, and increasing it to be more competitive against other nations. Look at South Korea: no natural resources, so the post-war plan was to base the future on human capital. It's why the density and workforce are concentrated in Seoul.
So far the economy is not built on getting things done alone.
The promise of progress is that not having to do chores will make us happier; it's partly true, and partly false.
People hate doing too much of too-harmful things. Beside that, if you need me to redo your shelves, or help you get milk in the morning, I'm happy to oblige.
But back to the point of things getting done and the march of progress: we're entering a potential Kurzweil runaway, where computers understand and operate on the world faster, better, and longer than us, leaving us with nothing to do. So we'll see, but I'm not betting a lot on that; it's gonna be toxic (the big 4 becoming our main dependency, instability, and a potential depression frenzy).
Look at how often people say "I wanna do something that matters", "I wanna help others"... It's a bit strange to say, because we spend our lives maintaining the world to be comfortable, but having everything done for you all the time might not be heaven on earth (even in the ideal best case).
I'm a bit late to the conversation but I'm on month 4 (?) of building a (greenfield) desktop app with Claude Code + Codex. I've been coding since Pulp Fiction hit theaters, and I'm confident I could have just written this thing from scratch without LLMs with a lot fewer headaches, but I really wanted to get my hands dirty with new tools and see what they are and aren't capable of.
Some brief takeaways:
1. I'm on probably the 10th complete-restart iteration; I had a strong vision for what it was going to be, with a very weak grasp on how to technically achieve it, as well as a tenuous-at-best grasp on some of what turned out to be the most difficult parts (clever memory management, optimizations for speed, wrangling huge datasets, algorithms, etc) -- I started with a CLI-only prototype thinking I could get it all together reasonably quickly and then move onto a hand-crafted visual UI that I'd go over with a fine-toothed comb.
I'm still working on the fundamentals LOL with a janky UI that I'll get to when the foundation is solid.
2. By iteration 4 or 5, I realized I wanted to implement stuff that was incompatible with the less-complicated foundations already laid; this becomes a big issue when you vibe code and have it write docs, and then change your mind / discover a better way to do it. The amount of sprawl and "overgrowth" in the codebase becomes a second job when you need to pivot -- you become a glorified hedge trimmer trying to excise both code AND documentation that will very confidently poison the agents moving forward if you don't.
3. Speaking of overconfidence, I keep finding myself in situations where the LLMs (due to not being able to contextualize the entire codebase at any single time) offer solutions/approaches/algorithms that work (and work well!) until you push more data at it. For validation purposes, I started with very limited datasets, so I could hand-check results and audit the database. By the time you're at a million rows, spot-checking becomes really hard, shit starts crashing because you didn't foresee architectural problems due to lack of domain experience, etc. You start asking for alternative solutions and approaches, you get them, but the LLM (not incorrectly) also wants to preserve what's already there, so a whole new logic path gets cut, and the codebase grows like a jungle. The docs get stale without getting pruned. There's conflicting context. Switch to a different LLM and sometimes naming conventions mysteriously shift like it's speaking a different dialect. On and on.
Are the tools worth it? Depends. For me, for this one, on the whole, yes; it has taken an extremely long time (in comparison to the promises of 10x productivity) to get to where I've been able to try out a dozen approaches that I was unfamiliar with, see first-hand what works and what doesn't, and get a real working grasp of how off-the-rails agentic coding can take you if you're just exploring.
I am now left with some really good, relevant code to reference, a BUNCH of really misguided code to flush down the shitter, a strong mental map of how to achieve what I'm building + where things are supposed to go, and now I'm starting yet another fresh iteration where I can scaffold and piece together the whole thing with refactored / reformatted / readable code. And then actually implement the UI I've been designing lol.
I get the whole "just bully the LLM until it seems like it works, then ship it" mentality; objectively that's not much different than "just bully the developer until it seems like it works, then ship it" mentality of a product manager. But as amazing as these tools are for conjuring something into existence from thin air, I really think the devil is truly in the details, and if you're making something you hope to ever be able to build upon and expand and maintain, you have to go far beyond "vibes" alone.