I'm concerned that there's no real way to "opt out" of an AI future realistically. Is this something that people are seriously thinking they'll be able to do and successfully stay gainfully employed and contributing to the world?
> Is this something that people are seriously thinking they'll be able to do and successfully stay gainfully employed and contributing to the world?
No. I resisted for a bit but have started using it at work, mostly because I believe usage is now being monitored. I'm in a very high-scale engineering environment involving both greenfield and massive brownfield codebases, and the experience is largely a net loss in productivity. For me and some others I've spoken to in my org, opting in is theater we're required to engage in to keep employment, not a genuine evolution of our craft.
These tools struggle with context once you get deep into a codebase with many, many millions of lines of code and sprawling dependencies. Even for isolated Python scripts or smaller, supporting .NET apps, the time spent correcting subtle bugs or bullshit, or just verifying the bullshit, often exceeds the time it would take to have written it from scratch.
Regardless, what I've observed is that these tools do nothing for the actual bottlenecks of software engineering: requirements gathering (am I writing the right thing?) and verification (does it work without side effects?). Because LLMs are great at generating text, they're actively exacerbating these issues by flooding our process with plausible looking noise.
Agreed. I think the starting comparison actually works here. It's a bit like the automobile. The advice of "just don't" doesn't work for cars. Opting out takes a deliberate effort at every scale of society; it's not something an individual can just do and succeed at. An American can't just not have a car the way someone from the Netherlands might be able to.
There isn't. Just like with climate change and governments, we're all effectively in one big boat together. You can stop paddling towards the waterfall, but you can't stop everyone else from paddling and you can't get off the boat.
Over hundreds of hours of actively using AI for basically every area of my life, it has just never actually achieved anything besides giving me the feeling of productivity.
Ideas are mediocre. Plans are arbitrary. Research is untrustworthy. But telling it "generate me 100 ideas for X" feels really productive.
I think a version of me with no access to AI would not just stay competitive, but even outcompete the version of me with unlimited access to AI.
I'm not an OAI fanboy by a long shot - but I'd view lots of experiments that didn't work out as a healthy thing, especially for a company trying to find its footing in a new industry.
Right, depends on your use cases. I was looking forward to the model as an upgrade to 2.5 Flash, but when you're processing hundreds of millions of tokens a day (not hard to do if you're dealing in documents or emails with a few users), the economics fall apart.
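A back-of-the-envelope sketch of why the economics fall apart at that scale. The per-million-token rates below are hypothetical placeholders, not real model prices; the point is only that at hundreds of millions of tokens a day, a small per-token difference compounds into a large monthly bill:

```python
# Rough cost model for high-volume document/email processing.
# NOTE: the rates used below are HYPOTHETICAL, chosen only to
# illustrate how volume amplifies per-token price differences.

def daily_cost(tokens_per_day: int, usd_per_million_tokens: float) -> float:
    """Estimate daily spend at a flat per-million-token rate."""
    return tokens_per_day / 1_000_000 * usd_per_million_tokens

# 300M tokens/day is easy to hit when piping documents and email
# threads through a model for even a handful of users.
tokens = 300_000_000

cheap = daily_cost(tokens, 0.10)    # hypothetical budget-tier rate
pricier = daily_cost(tokens, 0.60)  # hypothetical upgraded-model rate

print(f"budget tier: ${cheap:,.0f}/day  (~${cheap * 30:,.0f}/month)")
print(f"upgraded:    ${pricier:,.0f}/day (~${pricier * 30:,.0f}/month)")
```

Under these made-up rates the "upgrade" goes from roughly $900/month to $5,400/month, which is the kind of gap that makes a nominally better model a non-starter for bulk workloads.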
Context is everything. Ultimately you have to use your own judgement about what makes sense because no one can see all ends. Generalized advice from someone without skin in the game is at best a weak datapoint for any significant life decision.
That said, let me give mine. Persistence generally pays more dividends than constantly chasing quick wins. The modern information economy has cheapened success and skewed perceptions of how much effort and luck is behind outlier winners. The success I've had in startups was not quick, was not a straight line, and honestly probably didn't net me as much as if I had joined Google or Facebook early in my career, but the benefits in terms of broad skills and success that I can credibly claim on a personal level are actually more valuable to me than a larger number in my bank account.
Reading an AI-generated blog post (or reddit post, etc.) just signals that the author doesn't actually care that much about the subject, which makes me care less too.
I read it as rolling with her own joke and lightening the load on the B+ rating (obviously also expressed as a loving ribbing given the context around it)
I agree that high turnover is a real constraint. That’s why the answer isn’t “10 years of apprenticeship” but designing scaffolds that combine learning with contribution in a shorter timeframe. Things like short rotations, micro-credentials, or mentorship stipends let juniors add value while they’re still on the job. Even if they leave after a few years, the investment isn’t wasted — both sides still capture meaningful returns.
Interesting thought — long-term contracts could indeed align incentives for growth and stability. The challenge, as you note, is trust: few employees or companies are willing to bind themselves for 5–10 years in today’s fluid market.
That’s why governance frameworks (whether in labor or in AI) matter: they provide external guarantees of trust where bilateral promises may not hold.
nobody in my life feeds me as many positive messages as Claude Code. It's as if my dog could talk to me. I just hope nobody takes this simple pleasure away
Incidental gatekeeping by leaving it on the black market isn't the way to keep it safe; quite the opposite, that poses serious risks.
Bringing it into the light under thoughtful consideration, and openly discussing and encouraging harm prevention, is the only way to make this safe. Everyone should have the right to explore this if they want to, and there should be plenty of open discussion, research, and education. I really appreciate the open-source approach here; the spirit of this movement feels like the right thing for humanity.
Thanks to your father for his contribution to my childhood, I used QModem every single day until my parents screamed, and it changed my life. I made friends on some local BBSes that I still have 30+ years later.