People's minds will become even lazier due to prolonged daily use of LLMs. They literally won't be able to think for themselves without AI assistance (that's why OpenAI won't fall, btw). Attention spans will drop even lower, causing severe psychological problems. Think 'Digital Dementia 2.0.'
Later, LLMs will be portrayed as something evil, yet everyone will still use them. Parents will use them, while telling their kids not to do so.
Leetcode is already standard for SWE interviews, but other industries will need to adopt similar tests to verify that an applicant's brain is functioning correctly and that they're capable of doing the job. Maybe a formal confirmation from a psychologist specializing in 'fried brains' will be required.
My knowledge and engineering skill have only gone up over time. I read significantly more and higher-quality technical information, and I debug significantly harder problems.
I think AI will in general make everyone a lot smarter. Maybe the people who use AI as a companion will melt? I'm sure there's some kind of repetitive addiction loop that could melt your brain, just like anything else, though.
I understand it. For example, with AI you don't need to remember stuff. Like, there are commands in macOS (two, actually) to flush the DNS cache. I used to have them memorized because I needed them like twice a week. These days I can't remember them; I just tell Copilot to flush the cache for me. It knows what to do.
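For reference, the usual pair on recent macOS versions is, if memory serves:

    sudo dscacheutil -flushcache
    sudo killall -HUP mDNSResponder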
And it's like that for many things. Complicated Git commands that I rarely need: I used to remember them at least 50% of the time, and if not, I looked them up. Now I just describe what I need to Copilot. Same for APIs that I don't use daily. All that stuff I used to know is gone, because I don't even look it up anymore; I just tell Copilot or Claude what to do.
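To give a concrete example of the kind of rarely-needed Git incantation I mean (my example, not anything specific from above):

    # recover a branch deleted by accident
    git reflog                        # find the commit it pointed at
    git checkout -b restored <sha>    # recreate the branch there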
Is that really a bad thing? It's like saying Google Maps makes you lazier, because you don't have to learn navigation. And, heck, why stop there: cars are just insanely lazy! You lose all the exercise benefits of walking.
Why is losing the ability/interest in navigating through a paper map by hand bad, though?
Humanity has adopted and then discarded skills many times in its history. There were once many master archers; for hundreds of years now, nobody outside of one crazy Danish guy has mastered archery. That isn't bad, nobody cares, nothing of value was lost.
You can still use pencil and paper for the difficult things. In fact, you'll have more time for doing so, because you don't have to use pencil and paper for the simple things.
Hm, perhaps a way to export all your chats from whatever AI providers you use, then send them back to an LLM to summarize all the commands you use into a text file you can reference?
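As a rough sketch, assuming the provider exports chats as JSON with message text under a "parts" field (ChatGPT's conversations.json looks roughly like that); the file name and the command list here are made up:

    jq -r '.. | .parts? // empty | .[]? | strings' conversations.json \
      | grep -E '^ *(sudo|git|kubectl|docker) ' \
      > commands.txt

Then you can paste commands.txt into any LLM and ask it to dedupe and annotate.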
Like, I am starting to use Etherpad a lot recently, and although I have Proton Docs and similar, I just love Etherpad for creating quick pads of information.
Or, to be honest, I just search the internet, and DDG's AI feature gives me a short answer (mostly to the point). But I think there are definitely ways to keep our own knowledge base in case an outage happens.
lol, I also had all sorts of k8s and pandas commands memorized that I don't remember at all now. But let's all be honest: was it valuable to constantly look up how to make a command do what you want?
I wasted so much time searching the dumbass pandas documentation when I should have been building. AI is literally the internet; all you are doing is querying the internet 2.0.
I often kept vast ugly text documents filled with random commands because I always forgot them.
I can only speak from personal experience, but as a 21-year-old I'd definitely say that AI has made me so much more unproductive and reduced my attention span immensely. My brain was already fried from social media, and now there is always an "easy" way out of annoying but very educational tasks. And amongst my peers, especially those without a background in IT, misunderstanding and anthropomorphising AI have made this even worse. I think for people who already have great skills, AI will probably be helpful, not harmful. But for my generation, which has been through Covid and social media and now has to figure out healthy AI usage, this is a fight already lost.
Eh, there's always something like this being said and doom-and-gloomed over. When I was 21 I heard almost the identical statement you just made, Covid being the exception.
The only thing that matters is whether YOU care. Do you like software? Do you want to learn and make something that was unattainable to you a year ago?
There's also a major difference between college and work so you shouldn't sweat it so much.
It's like saying in the 1970s that people would become dumber because of calculators. It's a tool. You can use it lazily and not learn much, or you can use it actively as something that propels you further along in your learning.
No, it's not: with calculators, it was up to the person using the tool to figure out which formula to use and which values to plug into the calculator.
The human has to understand the problem well enough to know which math to apply in the first place. That requires understanding and discernment. It works the brain, and that mental work strengthens our problem-solving ability.
Whereas with AI, you just tell it the problem and it gives an easy answer, thus involving no further work from the human brain, which causes it to atrophy just like any other underused muscle.
What atrophies with calculator usage is the ability to do long division, for example, or arithmetic with large numbers in your head.
The way you describe AI (tell it the problem and get an easy answer) sounds identical to anecdotal complaints I've heard before: that Google search providing an answer to everything means no one has to learn anything, or that everyone just copies code from Stack Overflow. At the end of the day it's still another tool with pros and cons, tradeoffs, etc., and it will be used, misused, and abused by different people in different ways.
You give a calculator a problem and it gives an easy answer, thus involving no further work from the human brain, which causes it to atrophy just like any other underused muscle.
Just like a calculator can easily solve some problems, so can AI. Sure, the set of problems it can solve is bigger than a calculator's. AI just enables you to work on bigger problems, because you're not spending so much time “calculating by hand”.
If you delegate all your thinking to AI, or you use AI when the point of the activity is to do the activity (i.e. homework at school), then I think you'll see problems. Just like using a calculator when you're supposed to be learning how to add will stunt your growth.
Have you considered that you are in the minority, leveraging LLMs for your own personal gain? The parent is referring more to the general population, who are already doomscrolling away and would LOVE a service that generates prompts for them, given the hassle prompting represents for them.
You do know this reads the same as every pessimistic commentary on technology ever, right? So many people were convinced that television was going to fry our brains.
I just read on a Polish automotive portal that the government has concerns about cybersecurity in Chinese cars. I wouldn't be surprised if Chinese cars were entirely banned for some businesses in the future.
I swear the most recommended way of creating a bootable Windows USB on Linux changes every year, and it usually doesn't work. I keep an old Windows laptop around just so I can create bootable Windows USBs whenever needed.
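For reference, the currently blessed incantation seems to be WoeUSB, something like this (the ISO and device names are illustrative; double-check the device before writing anything):

    sudo woeusb --device Win11.iso /dev/sdX

No guarantee it's still the recommended tool next year, which is rather the point.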
Making custom Windows install media is insanely painful, even from Windows. I went through the process of creating non-interactive install media for Windows once, and was astonished at how awful it is compared to building custom Linux live media. (Not least of all because of the churn in the XML you have to maintain that basically represents clicking through all the installer menus.)
It depends on what customizations you'd like to use.
I've also had a very hard time creating automated install media for an appliance running Windows IoT... The worst part was the (LLM-generated?) PowerShell scripts in the documentation, which didn't work at all.
Microsoft's tooling for customizing images amounts to several gigabytes to download and install just to get started.
The Windows approach is based on a mix of relatively limited offline modifications and automating clicks and keystrokes (AutoUnattend.xml, OOBE.xml) and recording or forgetting manual changes (Audit Mode, Sysprep). Both are insanely kludgey.
New development of the tooling always comes to dism.exe first rather than the DISM PowerShell module, so you may need to use DOS commands instead of the (very lovely) modern shell that Microsoft maintains.
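For example, the same query through both interfaces (both commands are real; this is just an illustration of the split):

    rem the classic executable, where new switches land first:
    dism /Online /Get-Features

    rem the PowerShell module equivalent, which lags behind:
    powershell -Command "Get-WindowsOptionalFeature -Online"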
Depending on what kind of stuff you're trying to install, you might need to do half a dozen reboots in the course of recording your manual changes.
Mounting/unmounting a WIM file can take more than a minute (wtf?) and if you're working on modifying one of the installer images from upstream, you need dozens of gigabytes of free disk space.
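The offline round trip looks roughly like this (paths are illustrative):

    dism /Mount-Image /ImageFile:install.wim /Index:1 /MountDir:C:\mnt
    rem ...make offline changes, e.g. inject drivers:
    dism /Image:C:\mnt /Add-Driver /Driver:C:\drivers /Recurse
    dism /Unmount-Image /MountDir:C:\mnt /Commit

Each of those mounts and unmounts is one of the minute-plus waits.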
If you don't just want install media, but a bootable repair environment, everything is even worse. Hardware recognition is bad, boot is slow, and only some programs can actually run in a WinPE environment.
Have you ever customized bootable Linux media?
When I had to make some custom NixOS install media for an aarch64 VPS, it required only a few lines of code in the exact same environment as I use to customize running systems, and it's completely declarative, non-interactive, requires no special toolkit, doesn't require dozens of gigabytes of scratch space, never requires me to boot anything...
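As a sketch, building a standard x86 ISO looks like this (the aarch64 image was the same idea with a different output attribute):

    # iso.nix imports the stock installer module plus your own options;
    # a single command then builds the image:
    nix-build '<nixpkgs/nixos>' -A config.system.build.isoImage \
      -I nixos-config=./iso.nix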
Teenage interns can also shovel manure, but that doesn't make it pleasant or painless!
Gemini is about 10x cheaper per token, but for some reason it uses about 8x more input tokens than CC. They also have this thing called cached tokens, which are much cheaper than uncached tokens: a hot cache of your context on Google's side, populated automatically.
So at the end of the day you don't know how much you'll pay: 10x cheaper per token times 8x the tokens mostly cancels out, and the cache discount only applies while the cache is hot.
Models
Google is good for very complex topics and when the conversation is short. But both models are great. I prefer Claude, and Sonnet 4.5 is great all around.
CLI tools
Gemini CLI is in its very early days. It doesn't support hooks or subagents, and it often runs into loops it can't break out of: it essentially gets stuck, but you still pay for the tokens.
Claude is just great. It allows you to write complex workflows the way they are supposed to be written, and it handles hooks and subagents. One MD file can reference another MD file, so you can keep your files DRY.
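For example, inside CLAUDE.md you can just write something like this (the file names are made up) and the referenced files get pulled into context:

    See @docs/architecture.md for the module layout
    and @docs/conventions.md for the style rules.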
Nested plan mode works weirdly: sometimes the agent gets stuck when it asks for plan approval and thinks it's executing the plan, but displays nothing... So plan mode is not fully supported in subagents.
A nice thing is that the .claude directory is automatically understood by Codex and Cursor, so you should be able to run your Claude commands using OpenAI models via Codex, or maybe even other providers via Cursor.
Summary
Overall Claude is the best all around, but the tokens are crazy expensive and the subscription model is a joke. You don't know how many tokens you get when you're subscribed; it's 'something', and last week they changed the limits, so it's suddenly half of 'something'...
Opencode (https://github.com/sst/opencode) provides a CC-like interface for Copilot. It's a slightly worse tool, but since Copilot with Claude 4 is super cheap, I ended up preferring it over CC. Almost no limits, cheaper, you can use all the Copilot models, and GH is not training on your data.