At least one C++ library that I rely on (that produces an incredible number of template instantiations in the object files) ultimately results in a 2 GB shared object file for the debug version (generating DWARF info for 300,000+ objects takes up a lot of space). The test suite for this project compiles and runs about 5,000 executables that link, in some way, to that giant shared object library. Statically linking the test suite would consume on the order of terabytes (I am not sure the exact number; I have never tried) of disk space which just is not feasible for a workstation, so dynamic linking is the only reasonable option. Dynamic linking also makes testing easier since I do not have to recompile tests as often.
That said, DLL hell is definitely real (as anyone who has ever used Windows knows). Things are generally better (though not perfect) with OSs like Debian, where dependencies are centrally managed across the whole system. In general, doing C++ development, I have found it advantageous to dynamically link: recompiling one library does not necessitate recompiling everything that depends on it.
> Statically linking the test suite would consume on the order of terabytes (I am not sure the exact number; I have never tried) of disk space which just is not feasible for a workstation
The MSVC linker also supports removing unused functions and data from the output binary: https://msdn.microsoft.com/en-us/library/bxwfs976.aspx. These options can be applied irrespective of compiler optimizations, so you should still get good PDB debug symbols.
Haven't tried it personally, but it's worth a try.
Why don't you run your test suite with dynamic linking, then? I've certainly worked at large, successful companies with enormous C++ codebases where unoptimized tests are run with dynamic linking and production binaries optimized, stripped, and linked statically.
I would be interested to hear if this is a workflow that suits people who write mathematics papers or other complicated TeX documents. My TeX code is very macro-heavy (I know some mathematicians who define almost everything through macros so that they can rewrite the paper quickly) and, as a result, I've never really seen the benefit of Unicode mathematics.
To go one step further: I think writing Greek letters directly (or, for that matter, any Latin letter that does not have a standard mathematical meaning) is an anti-pattern in mathematical prose. It is much, much better to write \coercivenessSymbol than \gamma or its Unicode variant, for the same reasons I would not name a floating point variable gamma.
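To make that concrete, here is a minimal sketch of the kind of macro I mean (binding the semantic name to a particular glyph in one place; the choice of \gamma as the glyph is only an illustration):

```latex
% Bind the symbol to its meaning once; the prose then uses the semantic
% name, and changing the glyph later is a one-line edit.
% (\gamma here is just an example choice of glyph.)
\newcommand{\coercivenessSymbol}{\gamma}

% Usage in the document body:
% ... the bilinear form is coercive with constant $\coercivenessSymbol > 0$ ...
```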
I mean having any macro name you want, even Unicode ones.
So you can even have a \γ macro. But if you want to write \coercivenessSymbol (even on the blackboard), we cannot stop you - after all, whatever gives you pleasure.
SugarTeX is tweakable. You can write your own Panflute Pandoc filter that defines additional non-standard replacements, like the ones already defined in SugarTeX. But I don't know if you need it: LaTeX macros are valid SugarTeX too, after all, and they are quite powerful.
Rounding issues in the quadratic formula have bitten me several times: in particular, the inverse of a 2D bilinear mapping requires solving a quadratic equation which (if things are skewed) can be very badly conditioned. On the other hand, if one uses a bilinear map on a parallelogram then the quadratic term is gone and one root is zero: also annoying.
In practice I just check the coefficients and then use appropriate versions of the quadratic formula. I guess that I am luckier than Mr. Cook here since I only need one root.
This set of cases (which can probably be further improved) is equivalent to his first answer. I don't think it's possible to do better in this test case due to the catastrophic cancellation in subtracting the square root of the discriminant from b.
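For reference, a sketch of the usual rearrangement (the textbook trick, not my exact case analysis; the degenerate cases, e.g. a = 0 or b = 0, still need the separate handling mentioned above):

```cpp
#include <cmath>
#include <utility>

// Solve a*x^2 + b*x + c = 0, assuming a != 0, b != 0, and a nonnegative
// discriminant: compute the root whose formula adds quantities of the
// same sign, then recover the other root from the product x1*x2 = c/a.
std::pair<double, double> solve_quadratic(const double a, const double b,
                                          const double c)
{
  const double discriminant = b * b - 4.0 * a * c;
  // copysign picks the sign of b, so b and the square root never cancel:
  const double q = -0.5 * (b + std::copysign(std::sqrt(discriminant), b));
  return {q / a, c / q};
}
```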
The inverse bilinear map is a good example where, even though there is an analytical solution, it's often more robust to just do a couple of Newton iterations until you get down to machine precision.
An added benefit is that this carries over to 3D (although with a larger Jacobian, of course).
You are absolutely right re: robustness (and that code falls back to the iterative method), but IIRC the Newton scheme was at least one order of magnitude slower than the formula. It was also slightly less accurate simply because it did more floating point operations.
I have not yet figured out how to make the 3D formula work so we always do the iterative method in 3D.
A nice trick: we approximate the (bi/tri)linear map as affine (usually quite accurate) to get a good initial guess.
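In case it is useful, here is a rough 2D sketch of that kind of scheme (my own conventions for vertex ordering and tolerance, not the actual library code):

```cpp
#include <array>
#include <cmath>

struct Point { double x, y; };

// Invert the bilinear map x(xi, eta) = a + b*xi + c*eta + d*xi*eta on the
// quadrilateral with vertices v[0..3] listed counterclockwise, so that
// v[0], v[1], v[2], v[3] sit at (0,0), (1,0), (1,1), (0,1) in reference
// coordinates. Returns the reference coordinates (xi, eta) of p.
std::array<double, 2>
invert_bilinear(const std::array<Point, 4> &v, const Point &p,
                const unsigned int max_iterations = 10)
{
  const Point a{v[0].x, v[0].y};
  const Point b{v[1].x - v[0].x, v[1].y - v[0].y};
  const Point c{v[3].x - v[0].x, v[3].y - v[0].y};
  const Point d{v[2].x - v[1].x - v[3].x + v[0].x,
                v[2].y - v[1].y - v[3].y + v[0].y};

  // Initial guess: drop the bilinear (d) term and solve the resulting
  // affine 2x2 system. For mildly distorted cells this is already close.
  const double det0 = b.x * c.y - b.y * c.x;
  double xi  = ((p.x - a.x) * c.y - c.x * (p.y - a.y)) / det0;
  double eta = (b.x * (p.y - a.y) - (p.x - a.x) * b.y) / det0;

  for (unsigned int iteration = 0; iteration < max_iterations; ++iteration)
    {
      // Residual F(xi, eta) = x(xi, eta) - p:
      const double fx = a.x + b.x * xi + c.x * eta + d.x * xi * eta - p.x;
      const double fy = a.y + b.y * xi + c.y * eta + d.y * xi * eta - p.y;
      // Absolute tolerance; fine for O(1)-sized cells.
      if (std::abs(fx) + std::abs(fy) < 1e-14)
        break;

      // 2x2 Jacobian of the map and one Newton update (xi, eta) -= J^{-1} F:
      const double j00 = b.x + d.x * eta, j01 = c.x + d.x * xi;
      const double j10 = b.y + d.y * eta, j11 = c.y + d.y * xi;
      const double det = j00 * j11 - j01 * j10;
      xi  -= ( j11 * fx - j01 * fy) / det;
      eta -= (-j10 * fx + j00 * fy) / det;
    }
  return {xi, eta};
}
```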
The proposed C++ tricks are useful, and I think it's reasonably well-known (at least to people who do numerics) that a C++ compiler, with the right flags, can beautifully unroll loops when array sizes are known at compile time.
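As a small illustration (not taken from the article): with the trip counts fixed at compile time, g++ or clang at -O2/-O3 will typically unroll the loops below completely for small N and keep everything in registers.

```cpp
#include <array>
#include <cstddef>

// Fixed-size matrix-vector product: N is a template parameter, so both
// loop trip counts are compile-time constants and the compiler is free
// to unroll them fully.
template <std::size_t N>
std::array<double, N> matvec(const std::array<std::array<double, N>, N> &A,
                             const std::array<double, N> &x)
{
  std::array<double, N> y{};
  for (std::size_t i = 0; i < N; ++i)
    for (std::size_t j = 0; j < N; ++j)
      y[i] += A[i][j] * x[j];
  return y;
}
```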
> brute force Cramer's rule solution
This isn't really a fair comparison: trusty old DGESV will yield far more accurate results for badly conditioned problems since it does row swaps (partial pivoting). It might also rescale values; I don't remember the details. It would be much more reasonable to compare against a hypothetical DGEC (double precision general matrix Cramer's rule) routine.
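For concreteness, this is the sort of "brute force" Cramer's rule kernel being compared (my own 2x2 sketch, not the article's code). Note that nothing here reorders rows or rescales, whereas DGESV performs an LU factorization with partial pivoting.

```cpp
#include <array>

// Direct Cramer's rule solve of A x = b for a 2x2 system: fast and easy
// to unroll, but with no pivoting everything rides on the accuracy of
// the determinant and the two numerators.
std::array<double, 2> cramer_2x2(const std::array<std::array<double, 2>, 2> &A,
                                 const std::array<double, 2> &b)
{
  const double det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
  return {(b[0] * A[1][1] - A[0][1] * b[1]) / det,
          (A[0][0] * b[1] - b[0] * A[1][0]) / det};
}
```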
My biggest criticism of this work is that it does not consider C++. Most of the big HPC projects today are written in C++, not C, and to the best of my knowledge this transition happened in the 90s.
> Most of the big HPC projects today are written in C++
My experience is limited to one single field, but all weather prediction and climate models that I know of are in Fortran. And I would count them as big HPC projects.
I have not done a census of the various libraries around, but all of the free finite element libraries in common use (deal.II, libMesh, and FEniCS come to mind first) are written in C++. I suspect Fluent and Abaqus are written in C (maybe Fortran?) but they are a bit older.
Trilinos is another C++ example, while PETSc and HYPRE are written in C.
My tentative conclusion from this is that newer projects tend to use C++ and slightly older projects use C.
Yes, there are things like ODEPACK, QUADPACK, and FFTPACK, but those are not under development anymore (as far as I know). The only widely used Fortran library still under development I can think of is LAPACK.
I did not count the occasional 'bespoke' code. There are still some Fortran applications in development for particular purposes (like MOM), but those are harder to survey.
I do not think free finite element libraries are representative of commercial software. Commercial codes usually include a mix of different languages, and that includes lots of Fortran, which is used both in the core (in solvers and linear algebra libraries, for example) and at the highest level for user subroutines. I have seen people using either Fortran or C++ with Abaqus, LS-Dyna, or MSC, but nobody writing C (except for some very experimental solvers).
Most users are usually fine with whatever the GUI allows and a bit of Python, but for those of us developing new models, Fortran is probably the most useful language, followed by C++.
There is still a lot of Fortran being written, but a lot of it is on legacy projects (scientific codes have a habit of lasting a long time). While there are some cool things like coarray Fortran and some groups require that everything be done in Fortran, C++ is a favorite for new projects afaik.
It's easier and more precise to say something like "run the quantum Monte Carlo code" than "run the quantum Monte Carlo model" or "run the quantum Monte Carlo software."
A quantum Monte Carlo code will of course include a model, but I think people don't want to call it "software" because it's so research-grade and janky. "Program" seems better, but I think that implies that it's a static thing (not in a constant state of development).
The plural "codes" is used because usually a research team has historically implemented a bunch of models into disparate codebases.