One big reason I rely on Make: it's old. Make has been well maintained for nearly 50 years and is deeply integrated into the programming ecosystem as a whole.
I can start a new project that relies on Make, and be extremely confident that Make will continue to work and be maintained for the lifetime of my project. 20+ years from now, Make will still work. My Make knowledge will be relevant for my entire career.
New and shiny replacements like Just are tempting, but you have to consider the real cost of:
1) Learning a new build system
2) Onboarding new devs with an unfamiliar build system
3) Dealing with the eventual deprecation of the new shiny, once something newer and shinier comes out. Rewrite your build scripts, GOTO 1.
I've yet to find a shiny replacement that is half as well thought out as make.
I'll look at Just, but is there anything else actually in make's space (scripting-language agnostic, target-language agnostic, auto-parallelized, declarative, ergonomic syntax that is not XML, JSON, etc.)?
Many things claim to be a make replacement, but don't meet those basic requirements.
I've been working on a tool called Knit (https://github.com/zyedidia/knit) that I think is similar to what you are looking for. Essentially, a Knitfile is a Lua program with Make's declarative rule syntax baked in. Or in other words, it is like Make (with some additional changes inspired by Plan9 mk), but where Make's custom scripting language is replaced with Lua (but it still keeps the declarative rules language). It's still in progress (I'm currently using it in some projects, and then will likely make some more changes based on my experiences), but I hope to release a stable version in the next few months. If you or others also have feedback, please let me know!
There is nothing as general purpose as make because make already fills that space well. OTOH, there are special purpose replacements that do a much better job in their own domain.
eg: I wouldn't want to maintain a large Java project with make. Gradle has a lot of helpful built-ins for managing the complexity of subprojects/dependencies and also runs on the JVM which makes debugging for Java-programmers a little more in-reach than make.
All of that being said, I do use make for some fun personal projects. Automatic failure (a more-powerful "set -e" equivalent in common shells) and dependency tracking mean I can write really powerful automation scripts which can easily parallelize for slow steps.
Do note that GNU make is still being developed. It even had breaking changes in the recent 4.4 release. Also GNU make 4.3 has at least one bug (e.g. on Ubuntu 22.04) where
tgt:
	gcc hello
errors when you have a directory with the name gcc in your PATH. (This is a bug in gnulib, that’s fixed, but gnulib is a rolling release typically vendored by every project on an arbitrary commit, sigh)
Also, system GNU make on Ubuntu 20.04 has a bug in --output-sync. It doesn't sync.
And finally GNU make on macOS is ancient by default. Like over a decade old. So what works on Linux may not always work on macOS.
It's this old and yet it can't do the most basic of things. Where's the switch to dump what it is executing and why? Why can't it time what it is doing? Why is it simultaneously so "simple" yet no workable alternative implementations exist?
It's honestly a terrible tool to use even at medium complex tasks.
This seems like an example of not reading the manual and then blaming the tool.
The switch to dump what is executing is -d or --debug, the manpage has this listed in the list of flags you can pass, and they are even sorted alphabetically which means this is displayed near the top.
As for the time thing, it's not totally clear what you want, but you can probably just use the time command in the rule you want to measure. Most systems have an actual time command that is not a shell built-in at /usr/bin/time if for some reason the builtin doesn't work for this purpose.
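For what it's worth, a quick sketch of both (GNU make; `target` here is just a placeholder):

make --debug=b target        # "basic" debugging: print each target considered and why it's remade
make -d target               # full debug output (extremely verbose)
/usr/bin/time make target    # wall-clock and CPU time for the whole invocation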
The best thing about make is that it’s available everywhere on POSIX systems. macOS comes with it pre-installed, and every Linux distro or *BSD does too, or offers a package for it that’s often depended on by everything else devtools-wise. This means make skills are super transferable; possibly more so than bash/shell skills.
Sure there are more user-friendly tools written in the programming language du jour, but those will always need some special snowflake setup process for all of your developers. If you're cross-platform, now you need to encode that in a setup script before your project can build… Maybe you can offer a Makefile just to install `better-make`?
I had to resort to tricks to make Makefiles which work both on Linux and macOS. I had to mandate the use of GNU make on Macs in other cases.
This is on top of having BSD vs GNU coreutils for things like grep, awk, etc, and the ancient bash on macOS.
I really wish there was a self-contained tool that could work like make (building a dependency graph and only doing the needed things), with reasonable string / list / map processing built in. (In a limited way, GNU make is such a tool.)
Does ninja fit your needs? It’s available on just about all of the Linux distros and it’s extremely fast with very few bells and whistles. The language is (for better or worse) designed to be generated by a higher-level tool, so it strips out most of the complexity of GNU make, but it might go too far if you’re looking to do list/map processing in it.
People always claim ninja is fast, but I can't figure out what they mean by that. A typical C++ project build uses 10-10,000 CPU core minutes, and make takes (maybe, in some pessimal situation) 100 milliseconds to schedule and coordinate the build invocations.
Even if ninja is 100x faster, it really, really doesn't matter, at all.
They probably mean incremental builds, where the actual compilation doesn't overshadow the "coordination" work.
In my (artificial) benchmark, make scaled poorly, taking 70 seconds to process 100k C files worth of dependencies, vs. ninja's 1.5 seconds: https://david.rothlis.net/ninja-benchmark/
Most of make's time was spent processing the ".d" files containing header dependencies (Ninja has a special optimisation for these files, where it reads them the first time they're created, inserts the dependency information into a binary database, then deletes them so it doesn't have to parse them in future invocations).
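For reference, that optimisation is opted into per rule; a sketch of the usual gcc setup (not taken from any particular project):

rule cc
  # the compiler writes header deps to a .d file; "deps = gcc" tells ninja to
  # parse it once, store the result in its binary .ninja_deps database, and
  # delete the .d file
  command = gcc $cflags -MMD -MF $out.d -c $in -o $out
  depfile = $out.d
  deps = gcc

build foo.o: cc foo.c
  cflags = -O2 -Wall

(Ninja also hashes each edge's full command line in its build log, so changing cflags re-runs the affected compiles; that's the built-in flags-change detection the comments below mention.)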
In real world projects, you often end up "abusing" make to add behaviour such as detecting if the compilation flags have changed, and this can make your makefiles slower; whereas ninja has those features built in. Apparently this made a big difference in build times for Chromium (where ninja was born). See this comment by the ninja's author: https://news.ycombinator.com/item?id=23182469
Ninja isn't necessarily faster in the slow path of "rebuilding the whole project" vs make, but it's often significantly faster in the fast path of "incremental rebuild given a small change in the input code" vs make. Which is what you're doing most of the time. You also do not have to abuse ninja for it to record certain changes; for example tracking CFLAGS as a dependency in Make can be awkward (e.g. write it to a file and all the associated overhead from the filesystem), but in Ninja it's "just" a variable binding, and the usage of variables in commands is tracked as a dependency, and so changing that variable and re-computing the needed set of commands to run is much, much faster. Those things add up in large builds.
I have personally had Ninja turn multi-second long no-op rebuild times (e.g. run 'make' with no changes) into the 10s of milliseconds range. The no-op build is often the most extreme case but closest to the average case, which is "recompile after a small edit." The difference in interactivity is quite large in these scenarios.
If your project builds in under like 5 minutes from scratch on a modern laptop it probably is not large enough to see huge benefits (outside of pathological cases), but probably some benefit; but for larger projects the difference can be very pronounced, very quickly.
If you have a slow filesystem and you're compiling C, make has a "lot" of overhead due to implicit rules. There are many more rules than you write, unless you disable the built-in ones.
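A minimal sketch of doing that (GNU make):

# turn off the built-in implicit rules and variables
MAKEFLAGS += --no-builtin-rules --no-builtin-variables
# clear the suffix list so old-style suffix rules don't fire either
.SUFFIXES: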
One possible culprit is that ninja parallelizes builds by default, whereas make requires you to explicitly specify the number of jobs with a flag, although I've run into issues with this occasionally if the project happens to do manual parallelization (e.g. with the `parallel` tool). I've seen people not realize this or just happen to forget the flag often enough that I wouldn't be that surprised if a decent number of those claims stem from this.
Ninja also allows different level of parallelism for different stages which is useful if the process itself is already parallelized internally or you need to limit it for other reasons (ld consuming huge amount of memory is a typical usecase)
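For reference, that's ninja's "pool" feature; a sketch:

pool link_pool
  depth = 2

rule link
  # linking is memory-hungry, so cap it at 2 concurrent jobs even when the
  # rest of the build runs at full -j parallelism
  command = g++ -o $out $in
  pool = link_pool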
If you build something much smaller, a difference between 5 sec and 0.5 sec is pretty noticeable for interactive work, even though 5 sec is not prohibitively long at all.
Maybe tup would interest you? https://gittup.org/tup/examples.html I've been considering it for the next time I do a project that needs this kind of ordered construction.
tup comes packed with a lua parser that gets executed first, so if you need something fancy it can be expressed in a lua file with lua's tools https://gittup.org/tup/ex_lua_examples.html
The macOS version of GNU Make is stuck at 3.81, which I discovered does not print the information needed by vim's quickfix feature when traversing subdirectories using `make -C`. Installing the latest version of GNU Make (4.4) using `brew install make` fixed that problem.
The frozen macOS build also has some weird issue with my makefiles where it sometimes finishes the job but then sits there spinning at 100% CPU forever :(.
Nope, you have to install developer tools to get it. Usually that means Visual Studio. Historically they also shipped compilers and command-line dev tools as a separate package; I'm not sure if that's still a thing.
Nmake is also not very compatible with GNU or BSD make.
Note that while the download page and tools don't perform license checks, you are supposed to have a visual studio license to use that package[1]. It is intended as an easy way for VS customers to install tools on build machines and the like without doing a full VS install, not a way for non-customers to get a free compiler. Individuals can get a free license to VS community that would include this package[2], but commercial use requires a paid license.
* It's dead simple. It encodes dependency graphs and does stat() calls to check and compare timestamps. From that, you can have desirable features: minimal rebuilds of changed files, parallel build, etc ... Sure it's not perfect at this, timestamps can skew or whatever, dependencies can be improperly specified. But it does enormous heavy lifting with a very "dumb" implementation.
* It's extremely influential. Even if you're not using makefiles, chances are some build tool is stat()ing files based on some representation of dependencies. In some form or another they probably got that expectation from make.
* The original version was written by one guy over a weekend. It shows that our industry can have enormous, industry-defining contributions from a small team.
I'd love to look into a parallel universe where Make didn't make so many really basic mistakes by 2023 standards. For instance, it has a lot of the "was there an error? eh, just keep going" philosophy from early days. I'd like it to be an error if a make rule claims to make a certain dependency and it fails to do so. That one change in a single stroke would eliminate a lot of Make's hostility when trying to first understand it. There's a series of similar things that could be done.
All of its replacements generally involve such a paradigm shift that it's no longer a comparable, to use a real estate term.
I've got a whole list:

- $ being used for both make and shell replacements, leading to $$$$(var) abominations and the general lack of clarity as to which variables are for which things.
- Many places where lists were clearly bodged in as afterthoughts when they should have been designed in from the beginning.
- Tabs as meaningful whitespace: a classic, but still a problem.
- .PHONY being a rather ugly hack when there should be a clear distinction between "a command I want to provide" and "this is how to make a dependency".
- And, while this may not be a user-visible behavior change, taking away all the C defaults so the strace is no longer a nightmare of make trying every implicit C rule before actually trying the rule I want it to try.
- At the very least, a mode of invoking the shell that looks more like a programming language, where I pass a clean array of strings rather than the shell language (already a nightmare of string interpolation on its own terms) buried in another string-interpolation language sitting on top of it.
- A thought-out solution to recursive versus included makefiles.

And I never became a make master, but by my 21st-century standards, trying to use make is just a fractal of bad 1970s ideas hitting me at every turn, so I'm quite sure that if I had to work with it a lot more I could go on for quite a while. As it is I think I've forgotten some.
I think people, not entirely illegitimately, have trouble separating the essential concept of make, which I think was quite solid, from the accidental comedy of errors that we have as a received standard. So a lot of make replacements end up running from both of them, and often end up with a much more annoying essential concept at their core, such as a super-heavy-duty enumeration of "these are the exact things you may ever be interested in doing", which puts them behind the eight ball no matter how good their accidental choices may be.
> For instance, it has a lot of the "was there an error? eh, just keep going" philosophy from early days.
I don't know what you're talking about. make stops when a command it has launched terminates with a non-zero exit code, and if anything, "just keep going" is rather a thing in today's needlessly-async JavaScript tool chains.
Well, I'd be talking about things like "I'd like it to be an error if a make rule claims to make a certain dependency and it fails to do so.", like I said in the next sentence.
While we're at it, undefined variables should be errors, not turned into empty strings, and like I said, were I working with it routinely I'm sure I could come up with more. There's a lot more to make than just its shell invocations.
Furthermore, embedding shell's default error handling behavior into make isn't exactly a comforting thing. It's way quirkier than a lot of people understand, and unfortunately it all comes to the surface when people start using make. "It stops at a non-zero exit code!" is, unfortunately, far, far from the simple thing it sounds like.
And encountering the "errors? pshaw, whatever" attitude in multiple languages is precisely why I know it's such a bad idea. Were it just Perl or something, I wouldn't be able to tell if it's a bad idea or if Perl is just a bad implementation, but after the decades I've been using these languages, I've come to the conclusion it's just a bad idea everywhere I encounter it.
Once you step off the beaten path, you find that errors from things like:
false | true
get silently swallowed by bash (this is configurable, but the default ignores such errors). Also, the point about not noticing that a rule didn't create its target is a good one. (That behavior should be configurable; I don't think it is.)
Anyway, with -j, make is as async as pretty much anything else out there.
To be more precise... "make" isn't "bash". There's no problem with "make" here and it has no way to see inside bash's internals. It's a bit like asking "make" to understand python, javascript, and java code and runtimes.
I think the desire would be for that statement to fail. The pipe looks like a logical or, but it's actually for chaining IO. Always a good idea to turn on the pipefail option in your bash scripts.
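In a Makefile, the corresponding knobs look roughly like this (a minimal sketch; `.SHELLFLAGS` needs GNU make 3.82+):

SHELL := bash
.SHELLFLAGS := -eu -o pipefail -c

check:
	false | true    # with pipefail + -e this recipe now fails instead of silently succeeding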
> $ being used for both make and shell replacements, leading to $$$$(var) abominations and the general lack of clarity as to which variables are for which things
There was another comment asking why not just use a shell script? If I don't care about dependencies that's what I would do, but I often do care about dependencies. So I call a shell script from my Makefile to do all the weird and wild things I want to do. Make checks the dependencies and I avoid having to use $$ in my Makefile.
I imagine many of Mike's points would be addressed just as well by Just or most any other task runner... but I thought his main point of "Makefile as documentation" was valuable.
After reading this way back in 2015 I decided to give it a try for a not-code-related task: downloading a book from the internet archive, copying out all the images, and running some adjustments and conversions on them with ImageMagick:
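Something roughly shaped like this (a reconstruction for illustration; the URL, file names, and ImageMagick flags are made up, not the original):

book.pdf:
	curl -L -o $@ 'https://archive.org/download/some-item/some-book.pdf'

# burst the PDF into one PNG per page, using a marker file as the target
pages/.done: book.pdf
	mkdir -p pages
	convert -density 300 $< pages/page-%03d.png
	touch $@

# run the adjustments/conversions on each extracted image
adjusted/.done: pages/.done
	mkdir -p adjusted
	for f in pages/*.png; do \
	  convert "$$f" -deskew 40% -level 5%,95% "adjusted/$$(basename $$f)"; \
	done
	touch $@

.PHONY: all
all: adjusted/.done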
"To see more real-world examples of makefiles, see my [World Atlas](https://github.com/mbostock/world-atlas) and [U.S. Atlas](https://github.com/topojson/us-atlas) projects, which contain makefiles for generating TopoJSON from Natural Earth, the National Atlas, the Census Bureau, and other sources."
I checked those repositories because the descriptions of the makefiles sound interesting, but I couldn't find the makefiles. Am I looking wrong?
It kind of makes the whole article irrelevant. Like a house of cards that is built on a foundation of Make, but when you get to the bottom there are no actual Makefiles there.
It just means when the article was published, the need was real and make was useful. Context matters.
Based on that commit they don't need to download the data to generate the .json file, so they don't, and Make became irrelevant. If anything this shows that a tool can be really useful but you don't need to marry it. Don't use it if you don't have to.
I always put a Makefile in all my projects, especially because I often change languages and tooling.
I always create a make help task which gives me a list of commands, and I have a few conventions: make build, make release, make server (dev server). Most of the time the Makefile is very simple and just calls npm, cargo, webpack, ...
Being able to enter my blog written in hugo or a phoenix project or a ruby on rails project and just hit make server has been a real help to me.
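For anyone curious, a minimal sketch of that kind of Makefile (the targets and underlying commands here are just illustrative):

.PHONY: help build server release

help:      ## list available commands
	@grep -E '^[a-zA-Z_-]+:.*##' $(MAKEFILE_LIST) | awk -F':.*## ' '{printf "  %-10s %s\n", $$1, $$2}'

build:     ## build the site/app
	hugo --minify

server:    ## run the dev server
	hugo server -D

release:   ## build and publish
	./scripts/release.sh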
That being said, I would not use make alone as a build tool for a complex project, it gets very cryptic when complexity is added. But as a "entry point" it serves me well.
This is the way. I also use it like that. It is nice to know that I can work on any project at work and home, and have a consistent dev UX, no matter the toolchain, programming language, etc. A very useful interface that lowers friction.
> Note: use tabs rather than spaces to indent the commands in your makefile. Otherwise Make will crash with a cryptic error
Using this as an example, are there more modern equivalent tools that may be a bit more user friendly? I appreciate make and I get its age and the complexity and all, it's just sometimes I need something that's simple and explicit, without the historic baggage.
Edit: It's not about spaces/tabs obviously, but about "Otherwise Make will crash with a cryptic error" which I used as an example
Tabs are absolutely a valid criticism. I can't tell you how many times the editor I'm using put in 4 spaces instead of a proper tab.
.PHONY is truly a bizarre construction. There's nothing like it in modern code systems. It's only by familiarity with make that it gains some semblance of normalcy.
> Using this as an example, are there more modern equivalent tools that may be a bit more user friendly? I appreciate make and I get its age and the complexity and all, it's just sometimes I need something that's simple and explicit, without the historic baggage.
Indenting with tabs too complicated? Let me introduce you to YAML.
Both views come from the same place: the idea that the semantic scoping should match the visual scoping, rather than counting tabs for indentation but not counting visually indistinguishable spaces (make) or ignoring your indentation entirely in favour of what the braces said (perl).
YAML prohibits tabs outright. That sounds a lot simpler to me, since there's no need to be aware of the difference between a tab and the equivalent number of spaces when the former is not a possibility at all.
I started from ninja_syntax.py and then improved it based on what I needed. Ninja is fast and simple, but it's (intentionally) lower level than you might want.
It does seem like there should be a "standard" thing that is maybe not so C++ specific (like CMake and Meson which both seem to use Ninja), i.e. for the use cases in the blog.
How is OSTree working out? Are you still using it?
It seems cool, but for some reason I don't see it mentioned very much, maybe because it's focused more on embedded system image uses, rather than "the cloud" which seems to be more popular on this site?
I would like to see Docker/OCI containers "broken up" into just file formats and just content-addressed data/networking. Not a weird registry with a different local API than remote API. It should be more like git!
Not sure if OSTree fits there -- as far as I remember, it's inspired by git but it's focused on a different use case. This recent project seemed interesting but it's also for more of a machine learning use case: https://news.ycombinator.com/item?id=33969908
OSTree is still working very well for us. At the time I wrote the article we had been using OSTree (and the build system I described in the article) for 4 years; 3 years later not much has changed. In those 3 years we have started building images for different architectures and our build system / OSTree handled it just fine, as you'd expect (it already handled cross-compilation, but for a single architecture).
Our build system also builds Docker containers (for our cloud services) but instead of a Dockerfile we use a custom yaml format that lists the base image (like "FROM" in a Dockerfile), the apt packages to install, and the commands to run (like "RUN"). Then we create a lockfile of all the apt packages, download them (into a local OSTree repository, but that's an implementation detail), and install them with a custom "docker run" command + "docker commit". We end up with the base layer + the apt layer + a single layer produced by concatenating all the RUN commands + a layer with our compiled Rust binaries and Python files. We use apt2ostree to generate the lockfile (really it's our patched version of aptly doing the work) but we use docker (not OSTree) to build the Docker layers. We use Docker's standard push/pull mechanisms to deploy these containers.
To hook this up to Ninja we use "marker files" (a file in the build directory) to track whether this work has been done (e.g. you need to regenerate the apt layer if that layer's "marker file" is older than the lockfile).
import yaml  # copen, docker_pull, docker_apt_install, docker_mod are helpers
             # defined elsewhere in our build scripts (see surrounding comments)

def build_docker_container(name):
    yamlfile = f"{name}/docker-base-image.yaml"
    with copen(yamlfile) as f:
        data = yaml.safe_load(f)
    ubuntu_version = data["ubuntu_version"]
    image = docker_pull(f"ubuntu:{ubuntu_version}")
    image = docker_apt_install(image, data.get("apt_dependencies"),
                               lockfile="%s/Packages-%s-amd64.lock" % (
                                   name, ubuntu_version),
                               ubuntu_version=ubuntu_version)
    cmd = "..."   # RUN commands from yaml
    deps = [...]  # dependencies from files listed in yaml
    return docker_mod(image, cmd, deps)
(Where "copen" is like "open" but it tracks which files have been read by "configure" itself, to detect if we need to re-run "configure").
What I really like about the Python+Ninja combo is that you can pass these targets around as Python variables — you don't have to come up with an explicit filename for each one (the target name is generated by each helper function, e.g. `docker_pull` returns "_build/docker/${name}"). This makes it so convenient to compose these build rules. And if you ever need to debug your build system, everything is very explicit in the generated ninja file.
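To make that concrete, the pattern looks something like this (a rough sketch rather than the real build script; only ninja_syntax.Writer/rule/build are the actual ninja_syntax API):

import ninja_syntax

n = ninja_syntax.Writer(open("build.ninja", "w"))
n.rule("docker_pull", command="docker pull $image && touch $out")

def docker_pull(image):
    # emit a ninja build edge and hand back its output path, so callers can
    # chain targets around as ordinary Python values
    out = "_build/docker/" + image.replace(":", "-").replace("/", "-")
    n.build(outputs=[out], rule="docker_pull", variables={"image": image})
    return out

base = docker_pull("ubuntu:22.04")   # "base" is just the marker-file path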
We don't have a ton of these containers so it has worked well enough for us because the apt layer changes rarely, and the layer above it is small/fast. The main thing we get (that you don't get from vanilla docker/apt) is lockfiles and reproducibility.
I realised this is another example of "trees" as first-class citizens in a build system. In my comment above the tree we're passing around is a docker layer; in my LWN article it's an OSTree ref. We use the former for our cloud containers, the latter for our embedded device rootfs and systemd-nspawn containers.
I suppose we could use systemd-nspawn on our cloud servers too, instead of docker, but when we wrote the build system we were already using docker so it was the expedient thing to do at the time.
I'd be interested in seeing the Python config and Ninja output, to see how it works. Right now it looks to me like the dependencies are more implicit than explicit, e.g. with your copen example
---
The system I ended up with is more like Bazel, but it's not building containers, so it's a slightly different problem (although I guess Bazel can do that now too). But I'm interested in building containers incrementally without 'docker build'.
I made some notes on #containers in the oilshell Zulip about incorrect incremental builds with docker build. It's also slow and has a bunch of ad hoc mechanisms to speed it up
I like the apt lockfile idea definitely ... However I also have a bunch of other source tarballs and testdata files, that I might not want to check into git. How do you handle those -- put them in OSTree?
I don't understand why, and don't know when, spaces started to be used for indentation. It never made sense to me. Your code editor surely supports setting tab width if you are not happy with the default.
If the answer is vertical alignment I will weep and offer pity, but no sympathy. Even if that is the case, you can still indent with tabs and align with spaces, if you like to torture yourself, others, and your RCS' history.
I have preferred spaces to tabs since the mid nineties because everyone sees them the same. With tabs, people use 2, 4, and 8, which can make my tendency to over-indent code cause more trouble.
I will say I have been doing go recently and have been forced into tabs and camelCase, and have found consistency really is better than The Right Way.
For make, since using protobufs and code gen and being allergic to checking generated code in, have found make to be nicer than I remember it.
An additional reason that drives me to use make on most new projects is the polyglot nature of many code repos. There are language/ecosystem-specific build tools: grunt, rake, etc., but often real-world projects are a mix of different languages, and doubling down on just one language-specific tool feels unnecessarily constraining.
Having a build tool like make that is more closely aligned with the system level feels more natural for orchestrating build/test/deploy tasks that by their very nature contain more cross-cutting concerns.
The Makefile syntax is beyond cryptic. And I'm speaking as someone who used the autoconf/automake chain for years to build software. The author only made a simple example that requires downloading, transforming and uploading a file. Try and do something a bit more complex, deal with m4 macros, maybe the autoconf syntax, and you'll feel like you're back in the 1970s.
Don't get me wrong: make has been a loyal friend for most of my life. I've built a lot of software using make and the whole auto* build chain. And the author doesn't even mention its biggest strength: portability. Every single machine with a UNIX-like system has make. But I have to admit when some technology is ready for retirement. More modern alternatives exist today. CMake is the most obvious; Bazel is also a promising one. Even simple shell scripts can do the job for easy things.
please don't perpetuate the idea that autotools is somehow an extension or part of make. autotools is a frankly insane model of nested scripts that attempts to be magic and fails. make is a really interesting declarative scripting framework that collapsed from a lack of minimal language features a long time ago.
The make syntax is basically as simple as it gets; its grammar is even shorter than the JSON spec itself... what you do with it is another story, but you don't have to use autotools.
> Try and do something a bit more complex, deal with m4 macros, maybe the autoconf syntax, and you'll feel like you're back in the 1970s.
I have been in the club of those who used the word "Autohell", and while I have used it I would most definitely _never_ write a "GNU style" Autoconf script that would check even the system's max supported argv length; but frankly, just about everything that has come since these "1970s" is plain worse. These days I'm lucky if the build system doesn't have me learning languages that are _significantly_ more complex than m4 (e.g. bastardized copies of Tcl), or even truly general-purpose languages like Python or JS. Have you tried debugging a failing "yarn build" for a random project recently? It really makes me wish for the Autohell days.
Simple grammar does not mean easily understood syntax - particularly given that make is essentially dependent on the entire shell it's operating in (at a minimum, usually GNU tooling).
Plus - At least in my experience it tends to deteriorate into many developers playing "code golf" with the commands: using obscure flags and inscrutable commands to save a few lines in the file.
Worse, while splitting into several makefiles is possible - a lot of the tooling stops working particularly well (ex: autocomplete no longer works on tasks that are "included")
Basically - I find that the only consistent way to use makefiles on a moderately sized team is a rather strict "one script per task" rule, with the script ideally written in the same language as the project. But at that point I might as well just use a native task runner in that language anyways.
I haven't had that problem, even with big Makefiles.
I think the key is to acknowledge that it's a program. Just like any other program, instead of writing something that looks complicated, write it in a way that's easy to understand later. In particular, no code golf please.
Also... just use GNU make, and use its extensions. The POSIX specification is so impoverished that you end up with complicated code (it has no functions for example). There are other tools, but I find make often does the job.
The syntax of lambda calculus is quite simple as well. However, I find it hard to say that any project of reasonable size would be anything but cryptic.
A short grammar does not imply that the resulting language is comprehensible.
Make is all about producing a file (so you need PHONY for tasks that should still run if such a name already exists).
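i.e. a toy example:

.PHONY: deploy
deploy:
	./scripts/deploy.sh    # runs even if a file or directory named "deploy" exists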
But for many modern tasks that are not producing a file, like "deploy" -- you're right. Although I'd argue that then it's easier to have multiple bash scripts, each with its own "set -euo pipefail" on top. Easier to read.
Make is old enough to have in its implementation a great number of the ideas that are needed to make build systems work or to describe various kinds of build tasks. If you invent another tool, you're probably going to reinvent those ideas eventually as you hit the problems they were invented for in make, and if you haven't designed for them in advance your solutions might end up being just as unsatisfactory.
Examples just for the sake of it:
- phony targets - we want something to happen that has no result as a file or we want to group a lot of targets so we can refer to them easily in other lists of dependencies.
- deferred expansion assignment (you're able to say that something is X where you don't know what X is yet and will only know at the time that commands are being executed)
- pattern rules - a way to not have to write rules for every single object
- order-only prerequisites - X must happen before Y if it's happening but a change in X doesn't trigger Y
This is just a small selection and there are missing things (like how to handle rules that affect multiple targets).
It's all horrible and complex because like a lot of languages there's a manual listing the features but not much in the way of motivations for how or why you'd use them so you have to find that out by painful experience. e.g. deferred expansion is horrible in small makefiles and freaks most people out but ends up being essential in big ones quite often - especially where you're generating the makefile and don't at "this moment" know what the precise location or name of some file will be until later on in parsing.
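To make the deferred-expansion and order-only points concrete, a small sketch (file names made up):

# ':=' is immediate: expanded right now, at parse time
CC := gcc

# '=' is deferred: expanded only when used, so SRCS can be filled in later
# (e.g. by generated or included makefiles)
OBJS = $(SRCS:%.c=build/%.o)
SRCS = main.c util.c

# '| build' is an order-only prerequisite: the directory must exist before we
# compile into it, but its ever-changing timestamp never forces a rebuild
build/%.o: %.c | build
	$(CC) -c $< -o $@

build:
	mkdir -p $@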
It's also very difficult to address the warts and problems in (GNU) make because it's so critical to the build systems of so many packages that any breaking change could end up being a disaster for 1000s of packages used in your favorite Linux distribution or even bits of Android and so on.
I find pattern rules useless because I cannot limit their scope in the ways I need. There's no chance of changing that though, as it would break other makefiles.
So it's in a very constrained situation BECAUSE of its "popularity".
Make is also not a good way to logically describe your build/work - something like Meson would be better - where you can describe on the one hand what a "program" is, as a kind of class or interface, and on the other an implementation of the many nasty operating-system-specific details of how to build an item of that class or type.
Make has so many complex possible ways of operating (sometimes not all needed) that it can be hard to think about. You have to develop a mental model of it and in the end it can come down purely to how GNU make operates at the code level. It doesn't feel very well defined - you're essentially learning the specific implementation rather than a standard.
The things that Make can do end up slowing it down as a parser such that for large builds the time to parse the makefile becomes significant.
Make models a top-down dependency tree - when builds get large one starts to want an inverted dependency tree. I.e. instead of working out what the aim of the build is, and therefore which sub-components need to be checked for changes, we start with what changed, and that gives us the list of actions that have to be taken. This sidesteps parsing a huge makefile full of build information that is mostly not relevant to the things that have changed. Tup is the first tool I know of that used this approach, and having been burned hard by make and ninja when it comes to parsing huge makefiles (ninja is better but still slow), I think Tup's answer is the best.