Hacker News

Slightly tangential but I've worked for several companies now that use `make` as a simple command runner, and I have to say it's been a boon.

Being able to drop into any repo at work and expect that `make init`, `make test` and `make start` will by convention always work no matter what the underlying language or technology is, has saved me a lot of time.




I've worked on a few projects that apply this pattern of using a Makefile to define and run imperative commands. A few people develop the pattern independently, then it gets proliferated through the company as part of the boilerplate into new repositories. It's not a terrible pattern, it's just a bit strange.

For many junior colleagues, this pattern is the first time they've ever encountered make -- hijacked as some kind of imperative command runner.

It's quite rare to run into someone who is aware that make can be used to define rules for producing files from other files.

I find it all a bit odd. Of course, no-one is born knowing about middle-aged build tools.
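For anyone who hasn't seen the classic usage, here's a throwaway sketch (the file names and the `tr` recipe are made up for illustration; GNU make assumed):

```shell
# Demo of make's core idea: rules that build files from other files,
# skipping the work when the output is already up to date.
dir=$(mktemp -d) && cd "$dir" || exit 1

# printf keeps the required TAB before the recipe line intact
printf 'output.txt: input.txt\n\ttr a-z A-Z < input.txt > output.txt\n' > Makefile

echo hello > input.txt
make    # builds output.txt from input.txt
make    # does nothing: output.txt is newer than input.txt
```

The second invocation is the whole point: make compares timestamps and decides the target is already up to date.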


> It's quite rare to run into someone who is aware that make can be used to define rules for producing files from other files.

Is it, though?

That's literally what Make does as part of its happy path.

GNU Make even added support for pattern rules, as this use case is so pervasive.

What do you think people think make is about?


oh i agree, that's why i find the situation odd!

i'm talking working on projects with people whose first encounter with make is in a project where someone else has defined a Makefile to wrap imperative actions, e.g. `make run-unit-tests`, `make deploy`. If they think about make at all, there's a good chance they think make is for performing imperative actions, and has nothing specifically to do with producing files from other files using rules and a dependency graph, or the idea of a target being a file, or a target being out of date.


This is what I do for all non-rust projects. I knew what it was supposed to do, but wow if it took me forever to figure out how to do it (the connection between rule name and file name is really poorly documented in tutorials, probably should've just read the man page)


Yeah, I did this too. It's not that surprising considering that typically end-users only interact with phony targets (all, clean, install, etc).


This was the nicest thing about blaze at google. I'm a big believer that having a single standard tool for things is a huge value add, regardless of what the tool is. I didn't really like blaze particularly, and I don't really like make particularly, but it's amazing to just have a single standard that everybody uses, no matter what it is.


Rust's Cargo has the same appeal. There are 90,000 libraries that support cargo build/doc/run/test without fuss.


No, I believe GP is advocating against language-specific tools being the standard. In the ideal world you would have a Makefile that calls cargo, so `make` works identically to how it does in the js or python or golang repos.


yup, that's the appeal - in a multi-language environment you get a common call, without having to futz with `cargo run`, `npm run start`, `python3 -m something`, etc


Conventions are great, but that doesn't look like anything specific to make; a shell wrapper could do that:

    #!/bin/sh
    
    case $1 in
        init)
             ... do whatever for each project init
        ;;
        start)
             ... do whatever for each project start
        ;;
        test)
             ... do whatever for each project tests
        ;;
        *)
            echo "Usage: $0 init|start|test" >&2
            exit 1
        ;;
    esac
In my home/personal projects I use a similar convention (clean, deploy, update, start, stop, test...), I call those little sh scripts in the root of the repo "runme".

The advantage could be, maybe, no need to install make if not present, and no need to learn make stuff if you don't know it.

Sometimes they don't match the usual words (deploy, start, stop, etc) but then I know that if I don't remember them, I just type ./runme and get the help.

For my scenario, it's perfect because of its simplicity.


A shell wrapper could do that, but Makefiles are a DSL to do exactly that with less boilerplate.


And have a nice inbuilt graph runner if you decide one task depends on another...


Complete with automatic parallelization if you ask for it! And automatic KEY=VALUE command line parsing, default echoing of commands (easily silenced), default barf on subprocess failure (easily bypassed). The variable system also interacts reasonably sensibly with the environment.

I've never rated Make for building C programs, but it's pretty good as a convenient cross-platform shell-agnostic task runner. There are also several minimal-dependency builds for Windows, that mean you can just add the exe to your repo and forget about it.
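A quick illustration of a couple of those (a hypothetical `greet` target; GNU make assumed):

```shell
dir=$(mktemp -d) && cd "$dir" || exit 1

# KEY=VALUE on the command line becomes a make variable, and each
# recipe line is echoed before it runs (prefix it with @ to silence).
printf 'greet:\n\techo "hello $(NAME)"\n' > Makefile

make greet NAME=world    # echoes the command, then prints: hello world
```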


To tell the truth, make sucks incredibly for building modern C programs. There are just too many targets. It's why all of them generate their makefile with some abomination.

But it is still a great task runner.


Tbf, that particularity is easily achieved in shell scripts too:

    task1() {
        echo hello
    }

    task2() {
        task1
        echo world
    }

    "$@"


But now update it to not re-run tasks unnecessarily - it's already wordier than a shell script right now.

Meanwhile, in Make that's

    task1:
        echo hello

    task2: task1
        echo world


True, that's where Make shines. Though given the popularity of so many Make alternatives (the strictly subset of command runner variety, like just[1]) who keep its syntax but not this mechanism, I wonder if for command runner unnecessarily re-running dependencies is really a big deal. Because quite often the tasks are simple and idempotent anyway, and then it's a bit of a hassle to artificially back the target by a dummy file in Make (which your example doesn't do here e.g.).

[1] https://github.com/casey/just


> I wonder if for command runner unnecessarily re-running dependencies is really a big deal.

I've used it in the past with python/django roughly like so (untested from memory, there may be a "last modified" bug in here that still makes something run unnecessarily):

  .PHONY: runserver

  environ:
    python -m venv $@

  environ/lib: requirements | environ
    . environ/bin/activate && pip install -r $<
    touch $@

  runserver: | environ/lib
    . environ/bin/activate && python manage.py runserver [foo]
Setting up these prerequisites takes a while, and doing it every time you start the dev server would be a pain, but not doing it and forgetting when requirements were updated is also a pain. This would handle both.


What is the | doing in the dependencies?


Specifies order without a hard more-recent-than dependency on everything after the |. So if the timestamp on "environ" is updated, that won't cause the "environ/lib" recipe to run, but if they both have to run then it ensures "environ" happens before "environ/lib".

It might not be necessary for this example, but I've found being more liberal with this feature and manually using "touch" has been more reliable in stopping unnecessary re-runs when the recipe target and dependency are directories instead of just files.
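A minimal, made-up demo of the difference (GNU make assumed):

```shell
dir=$(mktemp -d) && cd "$dir" || exit 1

# 'build/out' depends on 'dep' normally, and on the 'build' directory
# order-only (after the |): build/ must exist before the recipe runs,
# but a change to its timestamp won't force a rebuild.
printf 'build/out: dep | build\n\tcp dep build/out\nbuild:\n\tmkdir build\n' > Makefile

echo v1 > dep
make           # mkdir build, then cp dep build/out
touch build    # bump the order-only prerequisite's timestamp...
make           # ...and nothing is rebuilt
```

Had `build` been a normal prerequisite, that `touch` would make `build/out` look stale and the `cp` would run again.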


But now put .PHONY everywhere.

I don't think Makefiles are a bad way to go but a bash script is likely more accessible and easily reasoned about in most places.


make gives you autocomplete more easily for free. One reason I use it always.


+1 for this, free autocomplete is the reason that I love using Make as the top-level tool, even if the actual heavy lifting is done by CMake or something.


You can make "make init" work on Windows and Unix if you work at it, out of the same Makefile.

The above won't.


This is standard in Node.js ecosystem and I love it. Each package has scripts in package.json that you can run with npm run [name], and some of these like start, test or build (and more) are standardized. It's really great DX.


But it's npm, so when you switch to a java project, for example, you have different commands.


Quite a few companies I've contracted with have lifted the pattern up into Bazel or Gnu Make - for node projects `make lint` can be a pass through.

In the project repo, either work.


I did this for a while but make isn't well suited for this use case. What I ended up doing is have a shell script with a bunch of functions in it. Functions can automatically become a callable command (with a way to make private functions if you want) with pretty much no boilerplate code or arg parsing. You can even auto-generate a help menu using compgen.

The benefit of this is it's just shell scripting so you can use shell features like $@ to pass args to another command and everything else the shell has to offer.

I've written about this process at https://nickjanetakis.com/blog/replacing-make-with-a-shell-s... and an example file is here https://github.com/nickjj/docker-flask-example/blob/main/run.
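For the curious, the skeleton of that pattern looks roughly like this (the function names here are made up; the real script is in the links above):

```shell
#!/usr/bin/env bash
set -o errexit -o pipefail -o nounset

lint() {
  echo "running linters..."
}

serve() {
  echo "starting dev server..."
}

_helper() {
  # leading underscore: a "private" function, hidden from the help menu
  echo "internal"
}

help() {
  # compgen -A function lists every defined function;
  # filter out the private ones
  compgen -A function | grep -v '^_'
}

# Dispatch: run whichever function was named, defaulting to help.
"${@:-help}"
```

Invoking `./run lint` calls the `lint` function, and extra arguments flow through `"$@"` untouched.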


Nice shell script. It’s rare to see one written so well. I’ll add you to my list of people I can still count on one hand that properly quote variables.

If I had to pick one nit, and it’s a stylistic choice, you use braces around variable names where they aren’t strictly needed.

I also like to add “set -u”.


I agree whole-heartedly with OP's use of "braces around variable names where they aren’t strictly needed". I have two reasons. First, consistency is nice. Second, they aren't needed now, but invariably you will end up coming back and adding to the script, and will end up needing them.

OK, maybe I have 3 [I mean 4] reasons. 3) if you always put the braces in, it won't break your script when they aren't required. However, if you don't put the braces in when they are required, it will break your script. 4) often putting the braces in when they are not required makes the script easier for me to read. I often use spacing that is not required for the same reason.

I'm not saying I never break my own "rules" (they are really more guidelines than rules). You will find variable names used in my shell scripts that have no surrounding braces, but I probably use more of the "unnecessary" ones than a lot of people do. And yes, I'm aware that sometimes not having them makes me less consistent. Everything has a balance, people just differ on what style provides the balance they prefer.
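For what it's worth, the case where the braces stop being optional is when the variable name butts up against text that could be part of the name:

```shell
name="log"
echo "$name_file"    # expands the (unset) variable 'name_file': prints nothing
echo "${name}_file"  # braces delimit the name: prints 'log_file'
```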


Thanks.

My thought process around using braces when they're not needed is mainly around consistency. If you pick and choose when to add them then you need to make a decision every time you add a variable. I've written about that here: https://nickjanetakis.com/blog/why-you-should-put-braces-aro...

That is a good call about `set -u`, it's something I've been using more recently but I haven't added it into that script yet but thanks for the reminder, I will soon. I ended up making a post about that here: https://nickjanetakis.com/blog/prevent-unset-variables-in-yo...

Another small thing I've been doing recently is defining options like this:

    set -o errexit
    set -o pipefail
    set -o nounset
It's a little more explicit on what each option does. It might be just enough context to avoid having to look up what something does. Philosophy wise that's also something I've been doing semi-recently which is to use long form flags over short flags in scripts https://nickjanetakis.com/blog/when-to-use-long-word-or-shor....


It adds too much visual noise for me, especially since you already need to double-quote the variables to protect against whitespace expansion. The rules around when braces are needed are simple so I leave them off when they aren't necessary. The rules around when double-quotes are needed are much more subtle, so I almost always use double-quotes, even when they aren't needed. e.g.

   foo=$bar  # quoting not needed but I'd still do this:
   foo="$bar"
A bugbear of mine is unquoted variables especially with braces, even when using them for optional args:

   ${TTY}
Using your original script as an example, I'd prefer this:

    dc_exec=(docker-compose "${DC:-exec}")
    dc_run=(docker-compose run)
    if [[ ! -t 1 ]]; then
      # we have no TTY so we're probably running under CI
      # which means we need to disable TTY docker allocation
      dc_exec=("${dc_exec[@]}" --no-TTY)
      dc_run=("${dc_run[@]}" --no-TTY)
    fi

    _dc() {
      "${dc_exec[@]}" "$@"
    }

    _build_run_down() {
      docker-compose build
      "${dc_run[@]}" "$@"
      docker-compose down
    }
Of course, this uses bash arrays and isn't POSIX. But the ergonomics of using an array for constructing a long command are so much nicer than backslashes or long lines that I use them all the time now.

   cmd=(
      /usr/bin/some-command
      --option1="foo"
      --option2="bar"
      --option3="baz"
   )
   "${cmd[@]}"

I also prefer using self-documenting long-options like above when writing shell scripts.

Another thing that goes along with set -u is to fail early. So for example, your script seems to require global POSTGRES_USER, etc, so why not:

   set -o nounset
   # fail early on required variables
   : "${POSTGRES_USER}"
   : "${POSTGRES_PASSWORD}"
Happy to code-golf shell scripts. There's usually nothing but hostility toward them around here. :-)


Thanks.

The postgres variables are sourced on the line above where I use them, so they will always be set. It's a hard requirement of the Docker postgres image. I did end up pushing up the nounset change and it didn't complain about it since it's set from being sourced.


Doh, I missed that.

Aside, but I gotta say, lots of good stuff here:

https://nickjanetakis.com/blog/

I'm not personally a fan of videos, but I have plenty of colleagues who are that I'm going to happily start pointing at your videos. Some of these will be very handy references I can add to code reviews.


Thanks, I really appreciate it. Word of mouth helps a lot.


$ make run-dev

That command (to run an Angular/nodejs dev instance) has staved off carpal-tunnel syndrome for me for maybe another 5 years.



