My personal problem with this is that I have no confidence in the answer provided by an AI. If I search the internet for help with some command line operation, I can try to find a reputable source that provides not only the command I'm looking for, but an explanation of the syntax that a) gives me confidence that it will do what I expect it to and b) allows me to learn how to construct that command so I can do it myself next time. This is, of course, less true today than it used to be, now that Google is likely to show me some convincing AI slop at the top of my results.
This isn’t the use case for the majority of people. Most are amateur developers who copy/paste the first thing they see on StackOverflow without understanding what they’re doing or reading the explanation, because they just want to move forward on whatever project they’re actually working on (e.g., building a REST API but being unable to install the Python packages the YouTube tutorial they’re following uses). LLM-powered guidance here is tremendously useful, and if the person is curious, they can ask the LLM to explain the command, which it’s actually very good at based on my experience with GPT-4 and GPT-4o.
I think saying there’s “no confidence in the answer provided by an AI” is an overstatement that underestimates the value AI can have for the majority of users, because it overestimates how much value “a reputable source that provides not only the command I’m looking for, but an explanation of the syntax” actually holds for those same users. Reputable sources and thorough explanations are great in theory, but very few people have the patience to go through all that for a single CLI command when they want to make visible progress on the bigger-picture project they actually care about.
> I think saying there’s “no confidence in the answer provided by an AI” is an overstatement that underestimates the value AI can have for the majority of users, because it overestimates how much value “a reputable source that provides not only the command I’m looking for, but an explanation of the syntax” actually holds for those same users. Reputable sources and thorough explanations are great in theory, but very few people have the patience to go through all that for a single CLI command when they want to make visible progress on the bigger-picture project they actually care about.
These are exactly the people who shouldn't be running code written by a random number generator.
> These are exactly the people who shouldn't be running code written by a random number generator.
The beauty of technology and the internet is that there's near-zero gatekeeping. Anyone with enough interest and time can learn to build anything without seriously harming anyone. Dismissing LLMs as a random number generator is very clearly an overstatement, given the value they're already able to provide. Ideally, how would you suggest a new developer learn to build something such as a REST API in Python?
That's why I was explicit in calling it "my personal problem" in the very beginning of my comment, and specified that I don't have confidence in the answer returned by AI. I apologize if I inadvertently appeared to speak for anybody but myself.
No worries—and it's clear that it's your personal problem. I think it's always valuable to present a counterpoint in these comment threads so that less-informed readers know the issue isn't totally clear and that the truth likely lies somewhere between your point and mine. I apologize if I came across as argumentative—my goal is to expand the scope of the discussion.
> I have no confidence in the answer provided by an AI
You don’t need to have confidence; these are shell one-liners. You can generate one, run it, and verify the results in the time it would take you to open a browser window or scroll halfway down a man page.
And if it doesn’t work, then you can still fall back on those.
I verify the results by looking at the output and seeing if it did what I wanted it to do.
I don’t mean to be dismissive or reductive. If I need to do something more complex then I’m probably not trying to do it in a shell one-liner.
I used this feature earlier today and prompted it with something like “given piped input, use awk to filter just the 2nd and 5th columns, and truncate the 5th column to 25 chars”. I don’t use awk often but I know that it exists, and I could have figured out how to do this in about 20-40 seconds with a google search or man page, but this was 10x faster.
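For context, a one-liner along these lines would do it (this is my reconstruction rather than the exact answer it gave, and it assumes whitespace-separated columns piped in on stdin):

  # print field 2 and the first 25 characters of field 5
  awk '{ print $2, substr($5, 1, 25) }'

Piping a few sample lines through it makes it immediately obvious whether it did what I asked.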
Is this a real opinion people have, or am I misunderstanding...
It sounds like you're recommending that people not try to understand the shell one-liners they're pasting into their CLI, and instead just verify after the fact that the expected outcome occurred?
That's pretty much the opposite of what I've understood to be "good terminal practice" for my entire life, which can be summed up as "don't execute anything in the terminal that you don't understand".
Is "just paste it and hope you don't lost data/just installed a virus" the new modus-oprandi?
> It sounds like you're recommending that people not try to understand the shell one-liners they're pasting into their CLI, and instead just verify after the fact that the expected outcome occurred?
Yes
> Is "just paste it and hope you don't lost data/just installed a virus" the new modus-oprandi?
No
Somewhere in between those two extremes is knowing enough to know that the command you’re pasting in isn’t going to install a virus, even though you don’t fully understand what the -g flag is doing
I do have this confidence, if by AI we mean GPT-4. I already routinely use it to generate small scripts and complex command-line incantations. My memory is better spent elsewhere, imo.