If I'm understanding this article correctly, the basic proposal is that symbol aliasing lets you make ABI breakages essentially painless. However, the article seems to be ignoring one very important issue with ABI breakages: libraries tend to pass types around.
The actual most painful part of ABI breakage is if you have, say, library A that's compiled against old-ABI and library B that's compiled against new-ABI and library A passes ABI-varying types to library B. Like, say, std::string (which changed ABI as a result of C++11). gcc managed to make it work--but the process was so painful (and required adding new features to C++ to do so!), that they have said that they do not want to have to ever do that again.
It's relatively easy to completely change the functionality of a library function and use this sort of magic to always select the right function. But when you broach changing something like intmax_t or time_t, you have to be cognizant that the real issue is different libraries (neither of which is the standard library) communicating with each other using the same type, when neither is prepared for that type to change.
The author then argues that it is at least a step forward:
> For those of us in large ecosystems who have to write plugins or play nice with other applications and system libraries, we’re generally the last to get the benefits. But, at least we’ll finally have the chance to have that discussion with our communities, rather than just being outright denied the opportunity before Day 0.
> We could just deprecate std::regex and propose std::regex2, but the moment that proposal comes out everyone gets annoyed about the name and “why can’t we just fix regex?!” and then the ABI people pipe up and around and around the circle goes
This is the core of the issue. I continue to maintain that these issues are not insurmountable, but we have a "people problem" that makes them effectively insurmountable. There are too many people with deep expertise in one area of C++ (compiler implementation, language design, etc.) who don't have a big-picture view, yet will make their opinions known as though they know everything. And it's not just C++; all open-source languages have this problem. Design-by-committee never did anyone any favors. It's precisely WHY we keep getting half-assed implementations like regex and nested functions approved in the standard library or as GCC extensions, and it's WHY no one is ever permitted to fix them.
The only solution is for people to abandon their egos, admit that someone may know more than them (shock and horror!), and try to do what's right for the users.
There's nothing half-assed about gcc nested functions - they work exactly as they should. The problem is that the feature was implemented back when runtime constraints were very different - in particular, there were no limits on dynamic codegen, not even on the stack. Using thunks for callbacks was a common technique back then - and why not, given the perf benefits? On top of that, ironically, it was also the only way pointers to nested functions could be transparently compatible with the C ABI.
So I don't think your criticism is valid in this case. It's more a general problem with all language extensions, and how they interplay with language evolution.
I'd just like to clarify that it's specific _compilers_ that produce code that complies with an ABI specification (or don't). Neither the C nor the C++ language standards specify an ABI format.
ABIs are not only a quirk of each compiler, rather the ABI can be covered by its own standards document. See for instance the SysV Amd64 ABI specification. One should not have the idea that ABIs are magical or impossible to specify, if it is desired to do so.
Out of interest (I'm not tracking the C++ standard much these days), such as? For example, I know about the order of struct members and bit fields, but that has always been the case in C, and so also in C++.
How they live inside of a structure is far into implementation-defined land. No ABI (afaik) says where bits live if they do not fill an octet. In practice, the compilers agree with themselves over time, but that is "code as spec" which is everyone's favorite thing for interop.
It's well defined in the System V ABI, isn't it? Compilers do follow that. Otherwise no layout is defined by the C and C++ standards, even outside of bitfields.
edit: It's under "Bit-Fields" in [1]. I'm not entirely happy with the wording, but it's there.
Microsoft's current policy of not adding new symbols to an existing CRT DLL is justified IMO. Any possible way of messing up a Windows machine will happen on some poor user's machine somewhere, including downgrading libraries. So if Microsoft is able to make their systems more robust in the face of misguided users (edit: and misguided third-party developers), that's good.
I recently tried to get MSI Afterburner to run on a pretty standard new-ish windows 11 gaming pc.
Two hours later, I gave up. The mysterious side-by-side manifest error wasn’t solvable. I installed all the redists, and even installed a neat redist installer I randomly found on GitHub which brute-force-installs every possible redist known to humanity.
Didn’t matter. The side by side error was just insurmountable.
Now, as a former gamedev who grew up on windows, I have a soft spot for Microsoft. I love me some visual studio. Yum yum, that IDE was cool a decade before Jetbrains showed everyone why IDEs are cool.
But it’s been… eight years now since I switched to macOS, and more generally exited from the MS development ecosystem.
I don’t know, man (or woman or they). Microsoft’s current policies are hopelessly complicated compared to unix. I’ve been programming for almost two decades. If I can’t analyze and fix an error that clearly shows exactly which manifest entry is missing, well… I think we’ll just agree to disagree that Microsoft is making nice decisions about ABIs and related ideas.
Here’s a wild idea: just let a program run, and trap errors into an error log. Jetbrains does it. Their early access IDE happily lurches along even with dozens of Frankensteinesque errors coming up.
Refusing to start because of an ABI mismatch (or indeed, dying at the first error) was one of the most unfortunate decisions in OS design.
When you get the SxS error, the message itself tells you exactly where to look: "Please see the application event log or use the command-line sxstrace.exe tool for more detail". The application event log in Event Viewer will have the detailed error message, and sxstrace will inspect a given file and tell you what it depends on. At no point does anything suggest that shotgun-installing every redistributable you can find is going to help; it could very well be that you already have the right redistributable (which is likely, since they are packaged in the Afterburner installer itself) but something about the system is borked, preventing it from loading.
Nothing about this, or the way redists are managed, seems hopelessly complicated. Also, there is nothing it can do but die at the first error in this case: it's not side modules or components failing to load, it's stuff like memcpy and iostream for the main process of the app.
That's no longer the case. There's a ton of software that simply doesn't run anymore. Last time I used Windows years ago, there were at least 10 old games in my Steam library that wouldn't even start. I wouldn't be surprised if at some point Wine becomes better at running these applications than Windows itself.
If I'm not misremembering, many years ago (Windows 7 era) I had machines with 3 different 2010/2012 redistributables of the same bitness (32 or 64-bit) in Programs and Features. Was that actually a thing, or did I read it wrong at the time?
That was a thing until VS2015, as far as I can tell; the older versions let you install all minor releases in parallel.
And many Windows developers choose to rather install the specific minor version they used globally, instead of just vendoring it in the program folder (even though the license allows it, hence "redistributable").
Right now my machine has no less than 4 versions of the 2008 redist installed globally, for each x86 and x64.
Speaking of what Microsoft's VC runtime license does and doesn't allow, are there any restrictions on statically linking the runtime instead of redistributing the DLLs?
This is a well-reasoned proposal to introduce a standard, zero-cost way to add ABI indirection to C, which would enable much more rapid adoption of safer and more performant implementations.
That is a very long article that requires lots of other things to understand. But what I am confident about is that compatibility with everything is... just about the only good thing about C.
Why mess with it at all? It's portable assembly. Why are we trying to use it as more than that? Even the kernel doesn't need it or only needs tiny bits of it. See Redox OS and all the other similar projects.
Why do we still talk about ABIs? We can statically link a whole binary, and still use far less disk and ram than we do now with containers.
We have high level languages that "link" at the level of names and modules, not bits and bytes.
C works well enough for what it's used for. Rust and the others are up and coming.
Instead of saving C, why not improve Rust and JavaScript and the others?
The trouble with Zig is that it seems to be far too much like C, especially at the moment. Like, look at all the stuff it can't catch, that Rust can.
If we incrementally evolve like that, it might take 50 years to get to the kind of environment that Rust seems to be aiming for, where everything is safe, high-level abstractions are cheap and available anywhere, and we have official package management.
With the same antique fossilized ABI everyone's been using this whole time. It's clearly good enough for extremely simple low level stuff. For now, until more and more of those libraries move to some future language.
That would be the C ABI, hence why we still talk about it. And new types (like int128_t) keep appearing from time to time, that need to be reflected in that ABI.
Conceptually, yes. Practically, is there a reason why ELF symbol versioning needs to be this complicated beast requiring non-trivial loader support instead of, well, the trivial name hack being proposed here (and already used, among other places, in Musl for time64 support)? I’m honestly curious, I’ve read Drepper’s description[1] multiple times and I still can’t figure out the “why” of the whole thing.
ETA: Hmm, maybe semantic interposition / the global namespace of ELF dynamic linking is at fault here? As in, if a shared object declares an import of printf and a glibc version, it should get printf@@GLIBC_whatever if the import actually ends up being from glibc but plain printf otherwise? Not sure that’s a good interpretation of what versioning should mean in this context but it’s a possible one that would force this unpleasantness.
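To be concrete, the "trivial name hack" I have in mind is just an asm label on the declaration, which is a GCC/Clang extension; here's a minimal sketch with made-up names:

    /* Minimal sketch of the "name hack": the source-level name stays the
       same, but the linker-level symbol is redirected via an asm label on
       the declaration (a GCC/Clang extension). All names are made up. */
    #include <stdint.h>

    typedef int64_t lib_time_t;   /* the new, wider time type */

    /* Newly compiled callers keep writing lib_get_year() but bind to the
       "lib_get_year_time64" symbol; old binaries keep the old symbol. */
    extern int lib_get_year(lib_time_t t) __asm__("lib_get_year_time64");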
I think the equivalent of the alias solution is feature macros used to enable and disable different interfaces, like it was done ages ago on Linux to switch off_t from 32 bits to 64.
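For example, a minimal sketch of the glibc-style mechanism (the feature macro is real; the rest is just commentary):

    /* Opt in to the 64-bit off_t interfaces on 32-bit glibc systems by
       defining the feature macro before any system header is included. */
    #define _FILE_OFFSET_BITS 64
    #include <stdio.h>
    #include <sys/types.h>

    /* With the macro set, off_t is 64 bits wide and calls such as fseeko()
       and ftello() are transparently redirected to their 64-bit
       implementations by the headers. */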
On the contrary Microsoft has happily broken the C++ ABI early and often with MSVC for a very long time. They started to guarantee more stability recently because users demanded it.
In fact GCC (and indirectly Red Hat and the other Linux distros) has been the most vocal about not breaking the ABI, as many still have the scars of past ABI breaks.
Part of it is that dynamic linking and globally shared libraries have been a norm on Linux for so long. On Windows, it is still more common to either statically link the stdlib or install it side by side.
Dynamic linking has been a thing on Windows since the 16-bit days, and is central to COM; static libraries are seldom used.
The big difference to UNIX shared objects (with exception of Aix), is that Windows dynamic libraries are namespaced and symbols are private by default.
Dynamic linking is there, yes. But the only globally shared libraries tend to be the OS ones - everything else, even the CRT (before it also became part of the OS), is normally installed side by side with the .exe, and every app has its own copy. Thus C++ ABI compatibility was not really an issue, except when you had to use closed-source C++ libraries.
Yes, the other big difference is that every DLL has its own namespace, so you can actually load two CRTs into the same process just fine - so long as you don't pass pointers/handles from one runtime to the other. But this kind of multi-runtime arrangement doesn't happen often in practice, except in plugin/extension scenarios (where different extensions might bring in different runtimes into the same process).
These days, most COM libraries that aren't OS APIs are also installed side by side with the app, with information necessary for activation provided in .manifest - this feature has existed since WinXP. Although I would also say that third-party COM libraries are just not particularly common anymore.
I always thought that C ABI stability allows you to freely use vendor dlls even legacy dlls for legacy hardware. Won't all of that break if you change the ABI?
There is no "The C ABI"; it is determined entirely by implementors. ISO C does not define the term "ABI", nor does it define anything like it under a different name.
Some platforms have a documented ABI across the board which covers all their APIs together with the principal compiler family.
Others, like Windows, have an ABI for some core system interfaces. Development tools can have different ABIs in their ecosystem.
The rest of the article seems to be the complaint that there are problems linking code from different compilers on Microsoft Windows*, which is completely unsurprising in the light of my above remark.
The fix is: don't mix code from different compilers on Windows without adding the calling convention declaration specifiers (a language extension) and ensuring these are in the header files that are mutually used.
> The fix is: don't mix code from different compilers
It’s more than that: individual compilers can’t evolve (for example, change the size of intmax_t), and therefore certain desired ISO C features couldn’t be implemented, because that would break ABI compatibility with code compiled by earlier versions, which would prevent users from upgrading their compiler until all their dependencies have been upgraded, which (a) may never happen and/or (b) may be a newer version with other compatibility breaks. It may be okay in an everything-is-open-source-recompile-the-world setting, but in general it’s absolutely not practical.
> It may be okay in a everything-is-open-source-recompile-the-world setting
That is mostly a myth, except maybe in some embedded situations.
> individual compilers can’t evolve (for example change the size of intmax_t)
Compilers can provide __my_intmax_t which is decoupled from the standard intmax_t.
If there are system libraries and third party libraries using intmax_t, you don't want to change it; that isn't really a valid form of evolution.
Long before 64 bit systems were common, compilers had local types like __int64_t. Programs could detect (or assume) their presence, and typedef them to something nicer.
GCC has __int128_t on platforms where intmax_t is 64 bits.
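Something along these lines, say (a sketch; my_int128 is just an illustrative name):

    /* Detect the compiler-provided 128-bit type and typedef it to something
       nicer. __SIZEOF_INT128__ is predefined by GCC and Clang on targets
       that support __int128; "my_int128" is just an illustrative name. */
    #if defined(__SIZEOF_INT128__)
    typedef __int128          my_int128;
    typedef unsigned __int128 my_uint128;
    #else
    #error "no native 128-bit integer type on this target"
    #endif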
> Compilers can provide __my_intmax_t which is decoupled from the standard intmax_t.
That either means breaking compatibility with existing sources (which have to be changed to use __my_intmax_t instead of intmax_t), or not conforming to the standard (if __my_intmax_t is now the one with the semantics of ISO C202x intmax_t). Compiler vendors want neither.
I think I'm not qualified here, since my 33 years of continuous and ongoing C experience prevent me from seeing the problem. I'm like a C doctor, whereas this is some naturopathic or chiropractic issue that is outside of my field. The patient just needs someone to acknowledge and understand their feelings of having a problem, and go through the motions of some treatment that is entirely justified by the patient believing in its efficacy and necessity.
The whole point of intmax_t is that it's an integer type that's at least as large as any other integer type the implementation has. The utility of this is obvious: it lets you write generic code that can work with any integer type regardless of implementation. But it does mean that intmax_t either has to change every time an implementation adds a new integer type, or the implementation has to disregard the standard (and thus generic code is no longer generic).
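The classic example is printing: any signed integer type can be funneled through intmax_t and one format specifier (a minimal sketch):

    #include <inttypes.h>
    #include <stdio.h>

    /* Generic "print any signed integer": widen to intmax_t and use %jd.
       This only stays generic if intmax_t really is the widest type. */
    static void print_any_int(intmax_t v) {
        printf("%jd\n", v);
    }

    int main(void) {
        short s = -7;
        long long ll = 1234567890123LL;
        print_any_int(s);   /* implicit widening to intmax_t */
        print_any_int(ll);
        return 0;
    }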
Is C a binary standard or is it a source-level standard? And if there is a fixed relationship, what should it be? Any scope? Any historical legacy issues?
Btw, the sample break is between two long longs and one int128; I wonder why they have to be the same.
The binary standard is defined in the gABI. They are independent.
I want to fix something in it, so I need to fix C++, C, and the gABI. Three separate processes, each needing about 3 years, though C and C++ are aligned now.
Probably worth checking that the compiler alias implementations do sane things when the function has a different calling convention to the default (due to some attribute on it)
Is there no way to make the asm labels be named specifically according to the return type + arguments? It doesn't fix the issues with changing the order of fields/virtual functions (for C++) in types, but it at least provides a "default" that may or may not be sane, and I think it fixes some of the problems they're worried about. And if that fails, then sure, Bob, go ahead and craft your own labels.
That said, I think this idea is great, especially the fact that literally the libraries already craft their own labels, this just makes it a part of the language.
That's specifically one of the examples they gave in the video the article references at the top. It wouldn't be breaking ABI if the symbol differs, because that is the "correct" kind of API breakage: one that callers have to be aware of, and one that will generate an error when the application tries to load the shared library and does not find the matching symbol. The entire issue is when things change on the binary level but the application is not aware of it, and continues a subroutine call with wrong arguments or misinterprets the return value.
Also, as the video states, the issue can happen even if the type of one of the arguments is the same but the fields in the type are rearranged, leading to a different offset during dereferencing.
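For example (illustrative layouts only; the symbol and prototype stay identical, so nothing fails to link):

    #include <stdint.h>

    /* Layout the already-compiled caller was built against: */
    struct widget_v1 { int32_t id; int64_t created; };

    /* Layout after the library "harmlessly" rearranged the fields: */
    struct widget_v2 { int64_t created; int32_t id; };

    /* The old caller still does the equivalent of
           int32_t id = *(int32_t *)((char *)w + 0);
       which now reads the low half of 'created' instead of 'id'. */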
How does this _Alias feature differ from simply defining the local "aliased" name as a static const pointer-to-function (at file-wide scope)? Why wouldn't that work already?
Since &f is of type int_func *, an alias to f must behave such that taking its address still results in an int_func *. Otherwise, existing code is broken, as shown above, since f cannot be replaced with f_ptr in the expression.
This is what _Alias is meant to solve: it's perfectly transparent, in that the end-user has no way to distinguish whether the name comes from the original declaration or an _Alias. This allows libraries to modify the external name of a function, without breaking code that depends on the syntactic name of the function.
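A sketch of the difference (all names hypothetical; the _Alias line is the proposed syntax and is shown commented out, since no compiler accepts it today):

    /* The library wants the source-level name "do_stuff" to bind to the new
       symbol "do_stuff_v2" for newly compiled code. */
    typedef int do_stuff_fn(long long);

    int do_stuff_v2(long long value);   /* the new implementation */

    /* Attempt 1: a file-scope function-pointer "alias". Calls through it
       work, but the name no longer behaves like a function: &do_stuff is a
       pointer to this pointer object (do_stuff_fn *const *), not the
       do_stuff_fn * that code taking the function's address expects. */
    static do_stuff_fn *const do_stuff = do_stuff_v2;

    /* Attempt 2 (the proposal): a transparent alias. do_stuff remains a
       function name in every respect -- &do_stuff is still a do_stuff_fn * --
       it merely binds to the do_stuff_v2 symbol.

       _Alias do_stuff = do_stuff_v2;
    */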
I mean the usage sort of is similar, but this is specifically about the labels of the functions at the binary (asm, or really instruction) level, which is in a way a bit lower level. Also, the callers, which in this case are library users, either need to know what the different implementations are, which could get hairy and difficult for users, or the library would have a bunch of function pointers in memory that it would assign. That means you'd have to init libraries and couldn't just call functions from the library, which makes almost all libraries a little heavier (even if it's only a few instructions), on top of opening a nice new vector for hackers to exploit (I'll just make the pointer point to "evil_strlen:").
Why wouldn't it be zero cost, if _Alias can be? (Note, the fact that the function pointer would be declared as "static const" at file scope is relevant in replicating the _Alias featureset.)
Among other things, because the ways to bind to a dynamic symbol are multitudinous and frequently painful.
For example, IIRC, ELF dynamic linking occasionally forces the compiler and/or linker through spectacular contortions in order to ensure two pointers to a single function compare equal (as required by the C standard and as happens with static libraries) even if the values being compared originate from two different shared objects, the function itself resides in a third one, and the compiler has no idea which functions are external to the shared object its output will be linked into.
(Windows tells the standard to go take a hike, or more charitably doesn’t attempt to badly emulate the semantics of static linking in dynamic linking. I’m actually with Windows here, even if their executable format is a horrible underdocumented mess.)
Those contortions are not required when the function is only ever called and never has its address taken, except compilers can be pretty naive about that part. Not that they have to be, but at least it’s not an obvious sure thing.
There is an extra degree of indirection the optimizer has to get rid of. There is no extra pointer in the _Alias version. Whether this minor difference is relevant I'll leave to better informed people.
Is there, though? AIUI there's no symbol created and no pointer when you declare a "static const int" at file scope. It's simple substitution, like a #define. So the question is why wouldn't a function label be the same.
That's only a common optimization, not a guarantee, and it breaks down when you let that object or a pointer to that object leave the translation unit like you're suggesting.
So fun fact: if you use CGO, your binary effectively ends up with two versions of libc. It statically compiles musl into the binary itself for Go code and it dynamically links in libc for any C code. I'm not sure why there isn't an option with CGO to skip statically linking musl. You're depending on the platform libc regardless and at least then you end up with a smaller binary.
"To Save C" C is not going to 'die' over these issues. And frankly "new features" in the upcoming C standard will not make or break C, unless you consider C to only be the ISO standard efforts.
I get less confident in my C working with each standard release. The march towards pointer provenance combined with optimisation passes written for C++ means the pointer hackery I use C for has a finite lifespan.
I don't know what happens in the microsoft world, but here in unix-land we learned to address this issue way back when types started to work seriously in C (like the mid 1980s): we put a declaration of "do_stuff()" in a common include file (let's call it an "ABI definition") and include it both into the place where do_stuff is defined and wherever it is used - if one is different, we expect the compiler to barf.
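i.e. something like this (do_stuff is just the example name used elsewhere in this thread):

    /* do_stuff.h -- the shared "ABI definition": one declaration, included
       both where do_stuff() is defined and wherever it is called. If the
       definition drifts away from this prototype, the compiler barfs. */
    #ifndef DO_STUFF_H
    #define DO_STUFF_H

    long long do_stuff(long long value);

    #endif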
The problem is that, as-is, there's no way to add a 128-bit integer type (which'd be very useful!!) to the C standard (among other things like time_t or something), because that'd require changing intmax_t to 128-bit, and that'd break all existing dynamically linked code. The problem of being forced to have a "common [never ever ever ever possible to be in any way shape or form changed] include file" is exactly what this solves.
It's not that hard. You have libc.so.42 with 64 bit intmax_t. You change stdint.h to say intmax_t is 128 bits. You compile a new libc and install libc.so.43. New programs and libraries that get compiled link with the bigger intmax_t, existing programs continue loading the older libc. But there's a strange resistance to omg how can we possibly have two versions of libc installed at the same time.
Having two versions of libc installed is not the problem. The problem is when you link to two different libraries in your app that link to two different (and ABI-incompatible) versions of libc.
Versioning doesn't solve that problem. I call time() in my new code. I call some library which eventually calls futimes(). Everybody along the way needs to agree on the size of time_t. The library can't correctly use the old symbol even if it's available.
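Sketch of the transitive problem (illustrative names and sizes; in real code both sides just spell the type "time_t", so there's no cast and no diagnostic):

    #include <stdint.h>

    /* What the already-compiled library believes (32-bit time_t era).
       "somelib_touch" is a made-up function that forwards the timestamp
       to futimes() internally. */
    typedef int32_t lib_time_t;
    extern void somelib_touch(const char *path, const lib_time_t *t);

    /* What my freshly rebuilt code believes (64-bit time_t): */
    typedef int64_t app_time_t;

    void update(const char *path) {
        app_time_t now = 1700000000;
        /* The library reads only 4 of these 8 bytes -- silently wrong. */
        somelib_touch(path, (const lib_time_t *)&now);
    }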
On Linux, you're correct, but only because the symbol namespace is global.
On Windows, every DLL with a different name also has its own distinct symbol namespace. Thus, the conflict you describe can only arise if your code explicitly propagates some time_t* value from one library to the other.
Why can't intmax_t stay intmax_t when bigintmax_t is introduced? What code actually needs to know the size of the largest integer type supported by the current compiler?
Also, there's already the possibility that someone has defined a struct containing two ints to act as a 128-bit integer; intmax_t is already sometimes smaller than the largest integer type.
Is intmax_t supposed to be the largest integer in the standard or supported natively by the platform? If it's the second, leaving it unchanged when introducing larger ints wouldn't be a problem.
having a "bigintmax_t" would...... work, but it's absolutely horrible and defeats the purpose of intmax_t being.. um.. the maximum integer type.
A struct of two integers couldn't be used in regular & bitwise arithmetic, added to pointers, used to index arrays, cast up and down to other integer types, etc.
As-is, you can pass in & out intmax_t as a default case and, worst-case, you just waste space. But "uint128_t a = ~UINTMAX_C(0)" not making a variable of all 1s bits would be just straight up broken.
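Spelled out, using GCC/Clang's unsigned __int128 to stand in for a future uint128_t on a target where uintmax_t is still 64 bits:

    #include <stdint.h>

    void widest_type_promise(void) {
        /* Intent: an all-ones value of the widest unsigned type. */
        unsigned __int128 a = ~UINTMAX_C(0);
        /* Reality: ~UINTMAX_C(0) is computed in 64-bit uintmax_t and then
           zero-extended, so only the low 64 bits of 'a' are set. */
        (void)a;
    }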
Right, and modern languages make it idiomatic to place even more stuff inside whatever their equivalent to a shared header is (in C++, it's either headers or modules) precisely because the raw C ABI has very limited semantics.
This is such an exhausting article to read. Useless introduction that assumes I'm familiar with their previous articles, so many long segues and needless anecdotes, it seemingly never gets to the point. I got about halfway through and gave up looking for where the actual point is.
Could someone explain what they're actually driving at?
> Just a type change! Shouldn’t change the assembly too much, right?
What reasonable person would think that? You're changing something from 64 bits wide to 128 bits wide. Of course the compiled code from the 64-bit version is looking at a single 64-bit register, with the 128-bit version looking at multiple registers. Why is this unexpected?
Who would ever expect that you could just change the width of the inputs and return type of a function and expect it not to break the ABI?
> Okay, so in C we can break ABI just by having the wrong types on a function and not matching it up with a declaration
Of course. Why wouldn't you think that?
> What if I told you this exact problem could happen, even if the header’s code read extern long long do_stuff(long long value);, and the implementation file had the right declaration and looked fine too?
Alright, I'm almost intrigued enough to keep reading. I hope they get to the point soon though.
> [pages and pages about linux package maintainers and red herrings about python and C++]
> [going on and on about "the problem" without having ever made it explicit]
At this point I give up trying to find the author's point. What needs to be saved about the ABI specifically? I haven't found the problem they're talking about and I've given up trying to skim for it. I certainly am not going to read this whole thing verbatim because there's just too much time-wasting cruft here.
It seems as though academic writing manages to simultaneously use too many words and too many references. Few papers get to the point and cover the point convincingly when they do.
I liked Adleman's DNA computing paper. It had tons of jargon and references but was only ~ two pages long. It's a super dense two pages if you're new to molecular biology though. I think that's what most academic writers should really be aiming for, but they get trained by professors/TAs measuring their papers in terms of length rather than content.
What is currently not possible, which alias will fix? That’s what I couldn’t get out of the article.
If the answer is “the ability to change the types of library functions without changing their name” (which is what his first few examples were showing to be the “problem”), then of course you can’t do that, and I’m not convinced we should waste any time or effort trying to get that to work. If you want to change the types accepted/returned by a function, make a new function with a new name.
Of course, if there’s something I’m missing here I’d love to know… the article has done a terrible job outlining it if that’s the case.
The main issue is that even though C has typedefs like intmax_t that are supposed to help implementations modify them depending on compiler and platform support, in practice, ABI requirements force these typedefs never to change if programs ever link to external functions with them. This can also be seen in time_t, which could not easily be switched to 64 bits due to existing interfaces that expect 32-bit time_t values.
For how transparent aliases help solve this, suppose that there is some ancient library function called get_year:
    typedef int32_t time_t;
    int get_year(time_t time);
    // document time_t and get_year
Then, library users will call get_year with 32-bit time_t values. But suppose that eventually, as Y2038 draws near, the library writers want get_year (and related functions) to instead take 64-bit time_t for all newly compiled programs that use get_year. Previously, this was impossible: users would have to modify their programs to call a new version of every function, or link to a new version of the library. But with transparent aliases, library writers can simply replace the headers with:
    typedef int64_t time_t;
    int get_year_v2(time_t time);
    _Alias get_year = get_year_v2;
    // document time_t and get_year
Now, existing compiled programs continue to call get_year in the library, which remains implemented for compatibility. Meanwhile, newly compiled programs instead call get_year_v2, without having to modify their source code at all! This enables types such as time_t and intmax_t to be transitioned without breaking any code.
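The caller's side is the key part: the same source works against either header (here "thelib.h" just stands for whichever version of the header above is installed):

    /* Caller source, unchanged across library versions: */
    #include "thelib.h"

    int year_of(time_t t) {
        /* Against the old header this binds to the symbol "get_year";
           against the new header the same call binds to "get_year_v2". */
        return get_year(t);
    }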
The problem I see with that is what happens if you have dynamic library C (globe) and application A that’s rebuilt, but library B (which sits between A and C) hasn’t been. Now you have a linking error. What happens if you rebuild B and C but A hasn’t been rebuilt? That means B still needs to know to set up aliases for back compat (which you won’t find out about until runtime). What if library B is yielding the changed ABI type and A is feeding it to C?
I think it’s a useful tool but I don’t think it’s an ABI versioning panacea (even c++ adoption of inline namespaces within libraries is limited with the standard library being the only place I’ve seen it used in a meaningful way).
IMHO, the real solution is to avoid exposing opaque types from library C in the interface of library B. Obviously, some libraries do this anyway, so your criticism is valid. Library C's headers would likely need some #define to link to the old functions and types for compatibility. But transparent aliases are still very effective in the case of shallow dynamic-library dependency graphs (e.g., most open-source projects), even if, as you note, they aren't a panacea. (I'm not even sure there is one: there'll always be a Y2038 bug or TLS deprecation or whatever that library C needs to make a breaking change to fix.)
You should mention that libraries already do this! That is a major argument for this proposal. Some libs already have these levels of indirection; this just adds it to the language as opposed to having to use pragmas or __attribute__((alias("_blah"))) everywhere.
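For reference, the definition-side spelling with today's GNU extensions looks roughly like this (all names made up):

    /* The new implementation gets the real definition... */
    long long do_stuff_v2(long long value) {
        return value * 2;
    }

    /* ...and the old name is emitted as another symbol for the same code,
       via the GCC/Clang alias attribute (target must be in the same TU). */
    extern long long do_stuff(long long value)
        __attribute__((alias("do_stuff_v2")));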
Then you raise the number of libraries in your program to some power > 1, due to the proliferation of versions. Hope you're using compile-time LTO with code deduplication!
Most libraries are not shared, even when they are ostensibly "shared" libraries. Plus disk is cheap, it seems dumb to optimize for it in the 21st century.
A number of libraries are in fact shared. And RAM is not cheap when considered across the entire OS, and if every process has their own copy of the library, that’s a lot of RAM eaten up by needless duplication.
It's not nearly enough to cover scalar arguments. You've got to cover composite arguments: time_t, fileno_t, fpos_t, etc. may appear in structs, arrays, etc.
Backwards compatibility is holding the evolution of C and C++ back. Design improvements can't get approved if they affect ABI. Many of those improvements affect performance. The ecosystem needs a way to move forward without breaking ABI.
This doesn't just apply to the stl; this also means breaking boost or w/e, and you're stuck with the same problem: users are mad and pitted against library writers.
In C++ you do this with inline namespaces (change the default inline namespace version, keep the old one around, and everything continues to work correctly in theory).
The point is to get rid of the old crappy function, replacing it with the new better function, in new code while allowing old code to keep working.
Old functions should not have first mover advantages on names, nobody wants to riddle their function calls with *_v2 all over the place, and neither do library maintainers want people to keep using *_v2 when *_v3 fixes issues present in *_v2.
You're willing to type a multi-paragraph complaint about the article being too long-winded and yet balk at the TL;DR. A little patience could help; ctrl-f'ing for alias and skimming a little more to find out why it's needed might be enough.
I agree there's a lot of fluff in the article but complaining even more when someone goes out of their way to appease you is just too much.