You cannot make a similar argument for just any tool that makes jobs easier, because the argument depends on an attribute unique to LLMs: providing wrong answers confidently.
There are lots of tools that give wrong solutions that appear correct, and the easier tools tend to do it the most.
Plenty of people who really needed a dev team to design an application hopped on Dreamweaver and were suddenly able to bumble their way to some interface that looked impressive but would never scale (not even to the originally intended level of scale, mind you).
-
Any time you have a tool that lowers the barrier of entry to a field, you get a spectrum of people from those who have right-sized expectations and can suddenly do the thing themselves, to people who massively overestimate how easy the field is to master and get in over their heads.
This isn't even a programming thing; off the top of my head, SharkBites get this kind of rep in plumbing.
You could argue the RPi did this to hardware, where people use a Linux SBC to do a job a 555 timer could handle and call that hardware design.
Point-and-shoot cameras, then smartphone cameras, did this too, and now a lot more people think they can be photographers based on shots where their phone spends more processing power per image than we got to the moon on.
This comment seems not to appreciate how changing the scope of impact is itself a gigantic problem (and the one that needs to be solved immediately).
It's as if someone created a device that made cancer airborne and contagious, and you came in to say "to be fair, cancer existed before this device; the device just made it way worse." Yes? And? Do you have a solution for the cancer? If not, pointing that out really isn't doing anything. Focus on getting people to stop using the contagious aerosol first.
This both has nothing to do with the linked article (beyond the use of brain rot in the title, but I'm certain you must have read the thing you're commenting on, surely) and is simply incorrect.
Brain rot in this context is not a reference to slang.
People are not cogs in a machine. You cannot simply make enough rules and enough legislation and magically have them act the way you want. Humans deserve autonomy, and that autonomy includes making poor decisions about their own body/existence.
ChatGPT didn't induce suicidality in this individual. It provided resources they could seek out for help. People advocating for higher guardrails are simply using this as a Trojan horse to inject more spying, constrict the usefulness of the tool, and make a worse experience for everyone.
Really interesting breakdown. What jumped out to me wasn’t just the bugs (CORS wide open, incorrect Basic auth, weak token randomness), but how much the human devs seemed to lean on Claude’s output even when it was clearly off base. That “implicit grant for public clients” bit is wild: it’s deprecated in OAuth 2.1, and Claude just tossed it in like it was fine, and then it stuck.
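For anyone who hasn't run into the weak-token-randomness bug class before, here's a minimal sketch in Python (the function names are mine, not from the article) of the predictable-token mistake versus the standard fix:

    import random
    import secrets

    # Weak: random uses the Mersenne Twister PRNG. Its output is
    # predictable once an attacker observes enough of it, so it should
    # never be used to mint session or OAuth tokens.
    def weak_token() -> str:
        return "".join(random.choice("0123456789abcdef") for _ in range(32))

    # Standard fix: the secrets module draws from the OS CSPRNG.
    def secure_token() -> str:
        return secrets.token_urlsafe(32)  # 32 random bytes, ~43 URL-safe chars

The point being that "random-looking" and "unguessable" are different properties, and LLM output tends to conflate them exactly the way the article describes.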
Oh, I got an email invitation to try it out this morning... This post reminded me to give it a go. I don't remember asking for an invitation -- not sure how I got on a list.