
I wouldn't say that they're badly designed. They make it easy to deal with text, the bread and butter of the shell, while neglecting other data structures, which is fine 99% of the time, unless you're using these shell languages as general-purpose ones.

They also neglect parallel, concurrent, and async programming. Which again is fine for a shell language. The shell does have use cases for simple parallel processing, and you can delegate to dedicated Unix tools for that, and it works great. If you need more, you're expected to jump into a more powerful environment.

So they're poor in data structure variety and in more interesting (non-sequential) ways of executing programs, which, again, is fine for a shell language, but those were exactly the things I wanted. I wanted to pipe and tee a bunch of programs into one "executable graph" a la dataflow programming using bash or something similar, but quickly understood that due to the above reasons I need a different environment. I'm still on the lookout for good dataflow programming environments.




My issue is not that shell scripting languages have constraints and are not general purpose programming languages.

My issue is their tons of subtle points (e.g. the behaviour of [ vs [[ blocks), blatant disregard for the principle of least surprise, lack of some very basic items (not parallel, async, etc. programming -- a mere list and a mere map with an obvious syntax and no BS caveats would do), inelegant ad-hoc accretion of features, bad error handling, and so on. Those things can (and frequently do) trouble programmers even in a one-liner.
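As a concrete illustration of the [ vs [[ point, a minimal bash sketch (the variable name is made up): with an unset variable, [[ evaluates to a clean false, while the single-bracket test word-splits the operand away entirely and dies with a syntax error instead:

```shell
#!/usr/bin/env bash
unset var

# [[ ]] does not word-split, so an unset $var is simply a non-match.
[[ $var == x ]]; echo "[[ ]] exit status: $?"       # 1 (plain false)

# [ ] word-splits $var away, leaving `[ == x ]`, which is a syntax
# error inside the test builtin, not a false result.
[ $var == x ] 2>/dev/null; echo "[ ] exit status: $?"   # 2 (error)
```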

I don't wish that e.g. something like bash supported writing a full 10K-line program.

But I wish it didn't make a 100-line shell script so clumsy and frail.


You can definitely do parallel / concurrent / async programming directly in the shell, without any helper programs. It's not exactly ergonomic, but with a few patterns it's not too difficult.

Async:

    long_running_cmd > results.txt &
    cmd_pid=$!
    # other stuff
    wait $cmd_pid
    # do something with results, if you want
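One detail worth knowing about the pattern above: `wait` on a single PID also propagates that command's exit status, so you can branch on failure. A sketch, with a subshell standing in for the real long-running command:

```shell
#!/usr/bin/env bash
( exit 3 ) &            # stand-in for long_running_cmd
cmd_pid=$!
if wait "$cmd_pid"; then
  echo "command succeeded"
else
  echo "command failed with status $?"   # command failed with status 3
fi
```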
Parallel:

    for script in "${scripts[@]}" ; do
      "$script" &
    done
    wait
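The bare `wait` in that loop discards the individual exit statuses. If you care which jobs failed, save each PID and wait on them one at a time -- a sketch where `( exit N )` subshells stand in for real scripts:

```shell
#!/usr/bin/env bash
pids=()
for n in 0 1 2; do
  ( exit "$n" ) &     # dummy jobs: one succeeds, two fail
  pids+=("$!")
done

failed=0
for pid in "${pids[@]}"; do
  wait "$pid" || failed=$((failed + 1))
done
echo "$failed of ${#pids[@]} jobs failed"    # 2 of 3 jobs failed
```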
Dataflow:

    mkfifo my_channel
    mkfifo another_channel
    my_cmd > my_channel &

    tee >(cmd1) \
      >(cmd2 > another_channel) \
      < my_channel > /dev/null &
 
    another_cmd < another_channel
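One practical wrinkle with the FIFO approach: the named pipes stick around after the pipeline exits. A small sketch of the usual remedy (the temp dir and trap are this example's additions), using mktemp and an EXIT trap:

```shell
#!/usr/bin/env bash
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT    # FIFOs are removed even if the script fails

mkfifo "$tmpdir/channel"
printf 'hello\n' > "$tmpdir/channel" &   # writer blocks until a reader opens
read -r line < "$tmpdir/channel"
echo "got: $line"                        # got: hello
```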

But if you want a useful dataflow environment, I really recommend checking out [Nextflow](https://www.nextflow.io/). I use it at work for constructing bioinformatics pipelines, and it's really natural. The "preview" DSL2 is worth looking at, as well.


Thank you for the recommendation, Nextflow looks exactly like what I was looking for! Thanks for the bash examples as well.


I was dismissing PowerShell for a great deal of time until I eventually looked into it more, and I have to say, I'm impressed by it. Though it can be used on Linux, it doesn't feel like a first-class citizen there. Nonetheless, I would like to have an object-based shell environment for Linux, with the old tools rewritten for it (e.g. top would output a list of objects with properties like CPU usage, name, etc., and a later grep could be used to sort them by whatever we want; no more awk/sed-the-nth-column hacks).


IMO a shell which requires developers to write code like this [1] to implement mere syntax highlighting for the command you're entering has a bad design.

I don't know what level of intellect is required for a newcomer to fix a bug here.

[1]: https://github.com/zdharma/fast-syntax-highlighting/blob/mas...


I burst out laughing. I've not seen spaghetti code like that in a serious program before. Why is syntax highlighting being implemented in a shell language anyway?


It's syntax highlighting for shell (as you type on the command line).



