They usually do incremental builds. And a lot of work goes into reproducible builds so that incremental builds can be trusted. But even clean builds typically take minutes of wall-clock time, not hours.
Of course, part of the trick is that the work is distributed across a cluster, so far more total CPU time was spent compiling than the wall clock shows...
I've seen tens of thousands of lines of code compile in 30 minutes.
I've seen a hundred thousand lines of code compile in 2 minutes.
It really depends on how the project is laid out. If you're using lots of header-only libraries and try to compile monolithically (like soooooo many C++ applications do these days), you're going to have a bad (compile) time.
CMake helps a good bit, ccache goes further, and ninja helps a bit too. But none of them can help with templated header-only libraries that have complex interdependencies. It gets particularly bad when the developer #includes a convenience header to bring in _everything_ in the library instead of just what they need.
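To make that concrete, here's the shape of the problem (Boost.Asio is used purely as an example of a large header-heavy library; any similar library behaves the same way):

    // Convenience header: pulls in essentially all of Asio, and every
    // translation unit that includes this pays the full parse cost.
    #include <boost/asio.hpp>

    // Narrower includes: only the pieces this file actually uses.
    #include <boost/asio/io_context.hpp>
    #include <boost/asio/ip/tcp.hpp>

Every TU that includes the convenience header re-parses the whole library on every rebuild of that TU, so the parse cost gets multiplied across the codebase.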
Agreed. When you have large external header-only libraries, good use of PImpl is also generally recommended to keep the dependency graph clean.
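A minimal sketch of the PImpl pattern (the Widget class and the Eigen dependency are made up for illustration):

    // widget.h -- what the rest of the codebase includes; no heavy headers.
    #include <memory>

    class Widget {
    public:
        Widget();
        ~Widget();   // must be defined in widget.cpp, where Impl is complete
        void draw();
    private:
        struct Impl;                  // forward declaration only
        std::unique_ptr<Impl> impl_;
    };

    // widget.cpp -- the only translation unit that pays for the heavy header.
    #include "widget.h"
    #include <Eigen/Dense>            // example header-only dependency

    struct Widget::Impl {
        Eigen::Matrix4d transform = Eigen::Matrix4d::Identity();
    };

    Widget::Widget() : impl_(std::make_unique<Impl>()) {}
    Widget::~Widget() = default;

    void Widget::draw() {
        // ... use impl_->transform here ...
    }

Now a change to the Eigen-using internals rebuilds only widget.cpp, instead of everything that includes widget.h.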
Also, from what I've seen, large header-only codebases tend to be external and thus don't change that much, so if you can limit your own internal dependency tree, ccache can still save you.
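If you're on CMake, wiring ccache in is a few lines (CMAKE_CXX_COMPILER_LAUNCHER is a standard CMake variable; this sketch assumes ccache is on your PATH):

    # Route every compiler invocation through ccache if it's available.
    find_program(CCACHE_PROGRAM ccache)
    if(CCACHE_PROGRAM)
      set(CMAKE_CXX_COMPILER_LAUNCHER "${CCACHE_PROGRAM}")
    endif()

Since the external headers hash the same from build to build, the cache hit rate stays high as long as your own changes don't ripple through too many TUs.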