
I think the interactivity you describe might be a different thing from what your parent is talking about.

From what I understand, your parent talks about how the commands are built iteratively, with some kind of trial-error loop, which is a strength that is supposedly not emphasized enough. And I agree by the way. Nothing to do with how things are input.




That's correct. Articles, tutorials, or an evangelizing fan often show only the end result: the cool command/pipeline that does something useful. The obvious question from someone unfamiliar with Unix upon seeing something like the pipeline in this article:

    comm -1 -3 <(ls -1 dataset-directory | \
                 grep '\d\d\d\d_A.csv'   | \
                 cut -c 1-4              | \
                 python3 parse.py        | \
                 uniq                      \
                 )                         \
               <(seq 500)
is "Why would I want to write a complicated mess like that? Just use ${FAVORITE_PROG_LANG:-Perl, Ruby, or whatever}." For many tasks, a short paragraph of code in a "normal" programming language is probably easier to write and is almost certainly a more robust, easier to maintain solution. However, this assumes that you knew what the problem was and that qualities like maintainability are a goal.
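(As an aside, the ${FAVORITE_PROG_LANG:-...} joke is real shell syntax. A minimal sketch of default-value parameter expansion; the variable name here is just the one from the joke:

```shell
# ${VAR:-default} expands to $VAR if it is set and non-empty,
# otherwise to the literal default text. $VAR itself is untouched.
unset FAVORITE_PROG_LANG
echo "Just use ${FAVORITE_PROG_LANG:-Perl}"   # prints "Just use Perl"

FAVORITE_PROG_LANG=Ruby
echo "Just use ${FAVORITE_PROG_LANG:-Perl}"   # prints "Just use Ruby"
```

The related ${VAR:=default} form also assigns the default back to the variable.)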

Bernhardt's (and my) point is that sometimes you don't know what the goal is yet. Sometimes you just need to do a small, one-off task where a half-assed solution might be appropriate... iff it's the right half of the ass. Unix shell gets that right for a really useful set of tasks.

This works because you are free to utilize those powerful features incrementally, as needed. The interactive nature of the shell lets you explore the problem. The "better" version in a "proper" programming language doesn't exist when you don't yet know the exact nature of the problem. A half-assed bit of shell code that slowly evolved into something useful might be the step between "I have some data" and a larger "real" programming project.

That said, there is also wisdom in learning to recognize when your needs have outgrown "small, half-assed" solutions. If the project is growing and adding layers of complexity, it's probably time to switch to a more appropriate tool.
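To make the trial-and-error loop concrete, here's a sketch of how a pipeline like the one above might actually evolve at the prompt. The directory name and regex are illustrative, and printf stands in for whatever normalization parse.py does:

```shell
# step 1: just look at what's there
ls -1 dataset-directory

# step 2: keep only the files matching the naming scheme
ls -1 dataset-directory | grep '^[0-9]\{4\}_A\.csv$'

# step 3: keep just the 4-digit prefix, dedupe
ls -1 dataset-directory | grep '^[0-9]\{4\}_A\.csv$' | cut -c 1-4 | sort -u

# step 4: diff against the expected range to find the missing ids
comm -13 <(ls -1 dataset-directory | grep '^[0-9]\{4\}_A\.csv$' | cut -c 1-4 | sort -u) \
         <(printf '%04d\n' $(seq 500))
```

Each step is a small refinement of the previous one, verified by eyeballing the output before adding the next stage.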


Just yesterday I needed to extract, sort, and categorize the user agent strings for 6 months' traffic to a handful of sites (attempting to convince a company to abandon TLS 1.0/1.1).

The first half of the job was exactly the process you described: start with one log file, craft a grep for it, craft a `grep -o` for the relevant part of each relevant line, add `sort | uniq -c | sort -r`, switch to zgrep for the archived rotated files, and so on.
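A hedged sketch of roughly that shape, assuming combined-log-style lines where the user agent is the last double-quoted field (the path and regex are illustrative, not the exact commands used):

```shell
# zgrep -h reads both plain and .gz rotated logs, suppressing filenames;
# grep -o keeps only the matched text: here, the last double-quoted
# field on each line (the user agent in combined log format);
# sort | uniq -c | sort -rn counts and ranks the distinct values.
zgrep -h '' /var/log/nginx/access.log* |
  grep -o '"[^"]*"$' |
  sort |
  uniq -c |
  sort -rn |
  head -20
```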

The other half of the ass was done in a different language, using the output from the shell, because I needed to do a thousand or so lookups against a website and parse the results.

Composable shell tools are a very under-appreciated toolbox, IMO.


To be fair, it's possible to make this block simpler and more readable than what you have there. The problem with a lot of bash scripts I've seen is that they just duct-tape layer after layer of complexity on top of each other, instead of breaking things into smaller, composable pieces.

Here's a quick refactor for the block that I would say is simpler and easier to maintain.

  function xform() {
    local dir="$1"
  
    ls -1 "$dir"           |
    grep '\d\d\d\d_A.csv'  |
    cut -c 1-4             |
    python3 parse.py       |
    uniq
  }
  
  comm -1 -3 <(xform dataset-directory) <(seq 500)
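For anyone puzzled by the outer command rather than the pipeline: comm compares two sorted streams, and -1 -3 suppresses the "only in the first" and "in both" columns, leaving the lines unique to the second stream; the <(...) process substitutions let command output stand in for the files comm expects. A minimal self-contained demo:

```shell
# which numbers in 1..5 are missing from 1..3?
# -13 is shorthand for -1 -3
comm -13 <(seq 3) <(seq 5)
# prints:
# 4
# 5
```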


> is "Why would I want to write a complicated mess like that? Just use ${FAVORITE_PROG_LANG:-Perl, Ruby, or whatever}."

Some discussion on the pros and cons of those two approaches, here:

More shell, less egg:

http://www.leancrew.com/all-this/2011/12/more-shell-less-egg...

I had written quick solutions to that problem in both Python and Unix shell (bash), here:

The Bentley-Knuth problem and solutions:

https://jugad2.blogspot.com/2012/07/the-bentley-knuth-proble...


That's not a “discussion on the pros and cons of those two approaches”; that's a skewed story about just one part of a particular review of an exercise done in a particular historical context. (More on that here: https://news.ycombinator.com/item?id=18699718)

Not that there isn't some merit to McIlroy's criticism (I know some of the frustration from trying to read Knuth's programs carefully), but at least link to the original context instead of a blog post that tells a partial story:

https://www.cs.tufts.edu/~nr/cs257/archive/don-knuth/pearls-...

https://www.cs.tufts.edu/~nr/cs257/archive/don-knuth/pearls-...

(One of the places where McIlroy admits his criticism was "a little unfair": https://www.princeton.edu/~hos/mike/transcripts/mcilroy.htm)

BTW, there's a wonderful book called “Exercises in Programming Style” (a review here: https://henrikwarne.com/2018/03/13/exercises-in-programming-...) that illustrates many different solutions to that problem (though as it happens it does not include Knuth's WEB program or McIlroy's Unix pipeline).


Thanks.

>(More on that here: https://news.ycombinator.com/item?id=18699718)

>BTW, there's a wonderful book called “Exercises in Programming Style” (a review here: https://henrikwarne.com/2018/03/13/exercises-in-programming-...) that illustrates many different solutions to that problem (though as it happens it does not include Knuth's WEB program or McIlroy's Unix pipeline).

I'm the same person who referred to my post with two solutions (in Python and shell) in that thread, here:

https://news.ycombinator.com/item?id=18699656

in reply to which, Henrik Warne talked about the book you mention above.


Ah, good luck. Please consider all the viewpoints when linking to that blog post; else we may keep having the same conversation every time. :-)


>Please consider all the viewpoints when linking to that blog post;

It should have been obvious to you, but maybe it wasn't: nobody always considers all viewpoints when making a comment, otherwise it would become a big essay. This is not a college debating forum. There is such a thing as "caveat lector", you know:

https://www.google.co.in/search?q=caveat+lector

>else we may keep having the same conversation every time.

No, I'm quite sure we won't be. Nothing to gain :)


Let me put it this way: the last time the link was posted, I pointed out many serious problems with the impression it gives. Now, if the same link is posted again with no disclaimer, then either:

1. You don't think the mentioned problems are serious,

or

2. You agree there are serious problems but don't care and will just post it anyway.

Not sure which one it is, but it doesn't cost much to add a simple disclaimer (or at least link to the original articles). Else as long as I have the energy (and notice it) I'll keep trying to correct the misunderstandings it's likely to lead to.


FYI, you don't need the backslash if you end the line with a pipe... it's implied in that case.
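In other words, a line ending in | (or && / ||) already implies continuation, so these two forms are equivalent (illustrative commands):

```shell
# explicit continuation with backslashes
printf 'b\na\nc\n' | \
  sort | \
  head -2

# implicit continuation: after a trailing pipe the shell
# keeps reading the next line for the rest of the pipeline
printf 'b\na\nc\n' |
  sort |
  head -2
```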


I was generalizing interactivity to the Unix that most people seem to be familiar with.

“The interactive nature of the shell” isn’t that impressive in this day and age. Certainly not shells like Bash (Fish is probably better, but then again that’s very cutting edge shell (“for the ’90s”)).

Irrespective of the shell this just boils down to executing code, editing text, executing code, repeat. I suspect people started doing that once they got updating displays, if not sooner.


Some people figure out the utility of this right away. Many don't. Whenever I show my coworkers the 10-command pipeline I used to solve some ad-hoc one-time problem, many of them (even brilliant programmers and sysadmins among them) look at it as some kind of magic spell. But I'm just building it a step at a time. It looks impressive in the end, even though it's probably actually wildly inefficient and redundant.

But none of that is the point. The end result of a specific solution isn't the point. The cleverness of the pipeline isn't the point. The point is that if you are familiar with the tools, this is often the fastest method to solve a certain class of problem, and it works by being interactive and iterative, using tools that don't have to be perfect or in and of themselves brilliant innovations. Sometimes a simple screwdriver that could have been made in 1900 really is the best tool for the job!


> Irrespective of the shell this just boils down to executing code

Bernhardt's stated goal with that talk was to get people to understand this point (and hopefully use and benefit from the power of a programmable tool). "If [only using files & binaries] is how you use Unix, then you are using it like DOS. That's ok, you can get stuff done... but you're not using any of the power of Unix."

> Fish

Fish is cool! I keep wanting to use it, but the inertia of Bourne shell is hard to overcome.


> Fish is cool! I keep wanting to use it, but the inertia of Bourne shell is hard to overcome.

Back when I tried Fish, some 5 or 6 years ago I think, I was really attracted by how you could write multiline commands in a single prompt. I left it, though, when I found out that its pipes were not true pipes: the second command in a pipeline did not run until the first finished. That sucked and made it useless when the first command was never meant to finish on its own, or when the second command should have been the one to determine when the first finished.
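The classic way to see the difference is a pipeline whose producer never terminates on its own; with true streaming pipes, the consumer decides when the pipeline ends (a sketch, using yes as a stand-in for a never-ending command, and an illustrative log path):

```shell
# yes prints "hello" forever; with real pipes, head reads
# 3 lines and exits, and the resulting broken pipe stops yes
yes hello | head -3

# same idea with a growing log file:
#   tail -f server.log | grep --line-buffered ERROR
# this is only useful if grep sees lines as they arrive,
# since tail -f never finishes on its own
```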

It seems they've fixed that, but now I found that you can also write multiline commands in a single prompt in zsh, and I can even use j/k to move between the lines, and have implemented tab to indent the current line to the same indentation as the previous line in the same prompt. Also, zsh has many features that make code significantly more succinct, making it quicker to write. This seems to go right against the design principle of fish of being a shell with a simpler syntax, so now I don't see the point of even trying to move to it.


I feel that more succinct and quicker to write does not mean simpler.

Fish tries to have a cleaner syntax and probably succeeds in doing so. It may even be an attempt to bring some change to the status quo that is the POSIX shell syntax.

I didn't switch to fish anyway, because I like not having to think about translating when following some tutorial or procedure on the Web. Zsh just works for that, except in a few very specific situations (for a long time, you could not just copy-paste lines from SSH warnings to remove known hosts, but this has been fixed recently by adding quotes).


> I feel that more succinct and quicker to write does not mean simpler.

Indeed, it does not. They're design trade-offs of each other.

> Fish tries to have a cleaner syntax and probably succeeds in doing so. It may even be an attempt to bring some change to the status quo that is the POSIX shell syntax.

Indeed, it does, and it is (attempting to, though maybe not doing).

The thing is that, for shell languages, which are intended to be used more interactively for one-off things than for large scripting, I think being more succinct and quicker to write are more valuable qualities than being simpler.


If you are interested, I use a configuration for zsh that I see as "a shell with features like fish and a syntax like bash".

In .zshrc:

    ZSH="$HOME/.oh-my-zsh"

    if [ ! -d "$ZSH" ]; then
        git clone --depth 1 https://github.com/robbyrussell/oh-my-zsh.git "$ZSH"
    fi

    DEFAULT_USER=jraph

    plugins=(zsh-autosuggestions) # add zsh-syntax-highlighting if not provided by the system

    source "$HOME/.oh-my-zsh/oh-my-zsh.sh"

    PROMPT="%B%{%F{green}%}[%*] %{%F{red}%}%n@%{%F{blue}%}%m%b %{%F{yellow}%}%~ %f%(!.#.$) "
    RPROMPT="[%{%F{yellow}%}%?%f]"

    EDITOR=nano

    if [ -f /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh ]; then
        source /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh # Debian
    elif [ -f /usr/share/zsh/plugins/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh ]; then
        source /usr/share/zsh/plugins/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh # Arch
    fi

Relevant packages to install: git zsh zsh-syntax-highlighting

Then:

    zsh
WARNING: it downloads and executes Oh My Zsh automatically using git. You may want to review it beforehand.

If it suits you:

    chsh
Works on macOS, Arch, Debian, Ubuntu, Fedora, Termux and probably in most places anyway.

You may need this too:

    export TERM="xterm-256color"


For posterity (I actually needed to install this today):

You need to install zsh-autosuggestions by installing the package from your distribution (Debian, Arch) and sourcing it, or just run:

    git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
in zsh, then run exec zsh.


How is that not impressive for vast majority of developers?

For the past couple decades, the only other even remotely mainstream place where you could get a comparable experience was a Lisp REPL. And maaaybe Matlab, later on. Recently, projects like R, Jupyter, and (AFAIK) Julia have been introducing people to interactive development, but those are specific to scientific computing. For general programming, this approach is pretty much unknown outside of the Lisp and Unix shell worlds.


The author is an MS student in statistics. Seems that Unix is well-represented in STEM university fields.

Old-timey Unix (as opposed to things like Plan 9) won. When does widespread ’70s/’80s computing stop being impressive? You say “unknown” as if we were talking about some research software, or some old and largely forgotten software. Unix shell programming doesn’t have hipster cred.


> When does widespread ’70s/’80s computing stop being impressive?

When the majority adopts it, or at least knows about it.

> You say “unknown” as if we were talking about some research software, or some old and largely forgotten software. Unix shell programming doesn’t have hipster cred.

It's unknown to those who are only experienced in working in a GUI, which I believe is still the majority of developers. In my experience, those people are always impressed when seeing me work in my screen filled with terminals, so it does seem to have some "hipster cred". :)


> When does widespread ’70s/’80s computing stop being impressive? You say “unknown” as if we were talking about some research software, or some old and largely forgotten software.

That's precisely what I'm talking about. The 70s/80s produced tons of insights into computer use in general, and programming in particular, that were mostly forgotten, and are slowly being rediscovered, or reinvented, every couple of years. Unix in fact was a step backwards in terms of capabilities exposed to users; it won because of economics.


Had Bell Labs been allowed to explore UNIX commercially, none of us would be having this discussion.


I'll toss a link to repl.it here.

It supports a large number of languages. I started using it while I was working through SICP. I've used the python and JS environments a little as well.


Smalltalk transcript, VB immediate window, Oberon (which inspired Plan 9), CamlLight, come to mind as my first experiences in such tooling.


The alternative is to write a program or script that does part of the job, run it (compiling if necessary), and see what happens. Then modify.

This loop is definitely slower than the shell or other REPL though.


It's much slower, and doesn't lend itself as well to building the program up from small, independently tested and refined pieces. The speed of that feedback loop really matters: the slower it is, the larger chunks you'll be writing before testing. I currently believe the popularity of TDD is primarily a symptom of not having a decent REPL (though a REPL doesn't replace unit tests, especially in terms of regression testing).

BTW, there's another nice feature of Lisp-style interactive development: you're mutating a living program. You can change the data or define and redefine functions and classes as the program is executing them, without pausing the program. The other end of your REPL essentially becomes a small OS. This matters less when you're building a terminal utility, but it's useful for server and GUI software, and leads to wonders like this:

https://www.youtube.com/watch?v=gj5IzggEWKE

It's a different way of approaching programming, and I encourage everyone to try it out.


> leads to wonders like this

The engineer in me that learned about computers on a 286 with 4MB of RAM and a Hercules graphics card screams in shock and horror at the thought of letting a Cray-2's worth of computing power burn in the background. The hacker in me thinks the engineer in me should shut up and realize that live-editing shader programs is fun[1] and a great way to play with interesting math[2].

[1] http://glslsandbox.com/e#41482.0

[2] http://glslsandbox.com/e#52411.1


> The hacker in me thinks the engineer in me should shut up and realize that live-editing shader programs is fun[1] and a great way to play with interesting math[2].

Yeah, sure. My point is, I assume you're not impressed by shader technology here (i.e. it's not new), but the remaining parts are Lisp/Smalltalk 70s/80s stuff, just in the browser.


> I think the interactivity you describe might be a different thing from what your parent is talking about.

Actually no, they're not different things; both refer to the same activity of a user analyzing the information on the screen and issuing commands that refine the available information iteratively, in order to solve a problem. (I would have bought your argument had you made a distinction between "solving the problem" and "finding the right tools to solve the problem").

The thing is that the Unix shell is terribly coarse-grained in terms of what interactivity is allowed, so that the smaller refinement actions (what you call "input") must be described in terms of a formal programming language, instead of having interactive tools for those smaller trial-error steps.

There are some very limited forms of interactivity (command line history, keyboard accelerators, "man" and "-h" help), but the kind of direct manipulation that would allow the user to select commands and data iteratively is mostly absent from the Unix shell. Emacs is way better in that sense, except for the terrible discoverability of options (based on recall over recognition).


One of the dead ends of Unix UX is all the terse DSLs. I feel that terse languages like Vi's command language [1] get confused with interactivity. They can certainly be terse, but having dozens of tiny languages with little coherence is not interactive; it's just confusing and error-prone.

One of these languages is the history expansion in Bash. At first I was taken by all the `!!^1` weirdness. But (of course) it’s better—and actually interactive—to use keybindings like `up` (previous command). Thankfully Fish had the good sense to not implement history expansion.

[1] I use Emacs+Evil so I like Vi(m) myself.


> select commands and data iteratively ... Emacs is way better in that sense

Bind up/down to history-search-backward/history-search-forward. In ~/.inputrc

    # your terminal might send something else for the
    # for the up/down keys; check with ^v<key>
    # UP
    "\e[A": history-search-backward
    # DOWN
    "\e[B": history-search-forward
(note that this affects anything that uses readline, not just bash)

The default (previous-history/next-history) only step through history one item at a time. The history-search- commands step through only the history entries that match the prefix you have already typed. (i.e. typing "cp<UP>" gets the last "cp ..." command; continuing to press <UP> steps through all of the "cp ..." commands in ${HISTFILE}). As your history file grows, this ends up kind of like smex[1] (ido-mode for M-x that prefers recently and most frequently used commands).

For maximum effect, you might want to also significantly increase the size of the saved history:

    # no file size limit
    HISTFILESIZE="-1"
    # runtime limit of commands in history. default is 500!
    HISTSIZE="1000000"
    # ignoredups to make the searching more efficient
    HISTCONTROL="ignorespace:ignoredups"
    # (and make sure HISTFILE is set to something sane)

[1] https://github.com/nonsequitur/smex/


Terminals, shells, and Emacs all suffer from the problem that you have to configure them out of their ancient defaults.


I also like setting my history so that it appends to the history file after each command, so that they don't get clobbered when you have two shells open:

  shopt -s histappend
  export PROMPT_COMMAND="history -a"





