
Mad genius stuff, this.

However... scripting requires (in my experience) a different ergonomic than shippable software. I can't quite put my finger on it, but bash feels very scriptable, go feels very shippable, python is somewhere in the middle, ruby is closer to bash, rust is up near go on the shippable end.

Good scripting is a mixture of OS-level constructs available to me in the syntax I'm in (bash obviously is just using OS commands with syntactic sugar to create conditionals, loops and variables), and the kinds of problems where I don't feel I need a whole lot of tooling: LSPs, test coverage, whatever. It's languages that encourage quick, dirty, throwaway code that lets me get that one-off job done that the guy in sales needs on a Thursday so we can close the month out.

Go doesn't feel like that. If I'm building something in Go I want to bring tests along for the ride, I want to build a proper build pipeline somewhere, I want a release process.

I don't think I've thought about language ergonomics quite like this before; I'm curious what others think.



Talking about Python being "somewhere in the middle" - I had a demo of a simple webview GTK app I wanted to run on a vanilla Debian setup last night.. so I did the canonical-thing-of-the-month and used uv to instantiate a venv and pull the dependencies. Then I attempted to run the code.. mayhem. Errors indicating that the right things were in place but that the code still couldn't run (?) and finally Python core dumped.. OK. This is (in some shape or form) what happens every single time I give Python a fresh go for an idea. Admittedly, Golang is more verbose (and I don't particularly like the go.mod system either), but once things compile.. they run. They don't merely attempt to run, and they don't require some xyz OS-specific hack.


Gtk makes that simple python program way more complex since it'll need more than pure-python dependencies.

It's really a huge pain point in python. Pure-python dependencies are amazingly easy to use, but there are a lot of packages that depend either on C extensions that need to be built or on OS-level dependencies. It's gotten better with wheels and manylinux builds, but you can still shoot your foot off pretty easily.


Python is near the top of the list of languages that have given me trouble in other people's production software. Everything can be working fine, and then one day the planets fall out of alignment or something, the Python portion of the software breaks, and the fix is as clear as mud.


I'm pretty sure the gtk dependencies weren't built by Astral, which, yes, unfortunately means that it won't always just work, as they streamline their Python builds in... unusual ways. A few months ago I had a similar issue running a Tkinter project with uv; all was well when I used conda instead.


Yeah.. this is exactly the overall reality of the ecosystem, isn't it? That being said, I do hope uv succeeds in their unification effort; there's nothing worse than relying on a smattering of different package managers and build streams to get basic stuff working. It's like a messy workshop: it works, but there's an implicit cost in terms of the lack of clarity and focus for the user. It's a cost I'm not willingly paying.


It may not be the grand unifier if they aren't willing to compromise. Currently I'd say conda is the "grand unifier", giving users 100% of what they ask for artifacts-wise, albeit rather slowly. On the other hand, uv provides things super fast, but those things may break 5% of the time in unusual ways on unusual configs. I have no issue using both for the fullest experience.


I have never experienced something like that. How did you use uv? What errors did you get?


How were the dependencies specified? What kind of files were provided for you to instantiate the venv?


I haven't had the same issue with anaconda. Give it a try.


I had similar issues with anaconda, once upon a time. I've hit a critical roadblock that ruined my day with every single Python dependency/environment tool except basic venv + requirements.txt, I think. That gets in the way the least, but it's also not very helpful; you're stuck with requirements.txt, which tends to be error-prone to manage.


I know what you mean.

For me, the dividing line is how compact the language representation is, specifically whether you can get the job done in one file or not.

I have no doubt that there are a lot of Go jobs that will fit in a 500-line script, no problem. But the language is much more geared towards modules of many files that all work together to define user-defined types, multi-threading, and more. None of that's a concern for bash, and Python ships enough native types to do most jobs without needing custom ones.

If you need a whole directory of code to make your bang-line-equipped Go script work, you may as well compile that down and install it to /usr/local/bin.

Also, the lack of bang-line support in native Go suggests that everyone is kinda "doing it wrong". The fact that `go run` just compiles your code to a temporary binary anyway points in that direction.


I think there is still space for an alternate Go syntax for scripting with the following constraints:

* a single file exposing a "main" package and its "func main()"

* import syntax that merges requires from go.mod (import both packages and modules)

* simplified error handling (just ignore returned errors in code, and they are caught by the transpiler to be handled as fatal)

Those are my ideas to go beyond my own goeval, which already lets you run Go one-liners. https://github.com/dolmen-go/goeval


bitfield/script has some nice abstractions for bash builtins and coreutils

https://github.com/bitfield/script


> bash obviously is just using OS commands with syntactic sugar

No, bash is technically not "more" OS than e.g. Python. It just happens that bash is (often) the default shell in the terminal emulator.


Have to disagree. "Technically" yes, both are interpreted languages, but the ergonomics and mental overhead of doing certain things are wildly different:

In python, doing math or complex string or collection operations is usually a simple one-liner, but calling shell commands or other OS processes requires fiddling with the subprocess module, writing ad-hoc streaming loops, etc. - don't even get started on piping several commands together.

Bash is the opposite: As long as your task can be structured as a series of shell commands, it absolutely shines - but as soon as you require custom data manipulation in any form, you'll run into awkward edge cases and arbitrary restrictions - even for things that are absolutely basic in other languages.
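
To make the contrast concrete, here's roughly what both halves look like in Python - just a sketch (it assumes an access.log file in the working directory), but the asymmetry is the point:

    import subprocess

    # The "simple one-liner" side: string/collection work needs no external tools.
    orders = {"alice": [3, 5], "bob": [7]}
    totals = {name: sum(qty) for name, qty in orders.items()}

    # The "fiddling with subprocess" side: even one external command means an
    # import, an argument list and decoding (roughly `wc -l < access.log`).
    with open("access.log") as f:
        result = subprocess.run(["wc", "-l"], stdin=f, capture_output=True, text=True)

    print(totals, result.stdout.strip())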


> In python, ..., calling shell commands or other OS processes requires fiddling with the subprocess module, writing ad-hoc streaming loops, etc - don't even start with piping several commands together.

You inspired me to throw something simpler together - https://pypi.org/project/shell-pilot/


That looks really cool!


The subprocess module is horrendous, but even if it were great, bash would be simpler. Just think about trying to create a pipe of processes in Python without the danger of blocking.
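
For reference, the usual non-blocking way to wire up something like `ps aux | grep python` with subprocess is roughly this (a sketch of the pipeline pattern from the subprocess docs; error handling omitted):

    import subprocess

    ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
    grep = subprocess.Popen(["grep", "python"], stdin=ps.stdout, stdout=subprocess.PIPE)
    ps.stdout.close()            # let ps get SIGPIPE if grep exits early, instead of hanging
    out, _ = grep.communicate()  # drain the pipe so neither process blocks on a full buffer
    print(out.decode())

Several lines and one easy-to-forget close() for what bash spells with a single pipe character.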


I love Python and dislike Bash, but just look at the difference between listing a folder in Bash vs Python, for example.
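
For example (a minimal sketch): `ls` is two characters in bash, while the closest Python equivalent is already a small program:

    from pathlib import Path

    # Roughly `ls`: name-sorted entries of the current directory.
    for entry in sorted(Path(".").iterdir()):
        print(entry.name)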


Right. Figure out if other users are logged in, using Python. Get human-readable remaining space on each partition.

It’s not like you can’t do it in Python, but it’s a whole lot more work than typing <10 characters directly into the shell.
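
To illustrate just the disk-space half, here's a rough Python sketch - and it still covers only a single mount point, where `df -h` covers them all (enumerating mounts via /proc/mounts or psutil would be yet more work):

    import shutil

    def human(n: float) -> str:
        # Format a byte count roughly the way `df -h` would.
        for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
            if n < 1024:
                return f"{n:.1f} {unit}"
            n /= 1024
        return f"{n:.1f} PiB"

    total, used, free = shutil.disk_usage("/")
    print(f"/: {human(free)} free of {human(total)}")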


bash to me is the C++ of scripting. There are a bunch of arcane rules which all have to be followed; forget a single one of them and you've got a vulnerability.


Maybe the ergonomics of writing code are less of a problem if you have a quick way of asking an LLM to do the edits? We can optimize for readability instead.

More specifically, for the readability of code written by an LLM.



