I don't know what happens in the Microsoft world, but here in unix-land we learned to address this issue way back when types started to work seriously in C (around the mid 1980s): we put a declaration of "do_stuff()" in a common include file (let's call it an "ABI definition") and include it both where do_stuff is defined and where it is used. If the two disagree, we expect the compiler to barf.
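A minimal sketch of that arrangement, with a hypothetical do_stuff signature and file names:

    /* do_stuff.h -- the shared "ABI definition" (hypothetical) */
    #ifndef DO_STUFF_H
    #define DO_STUFF_H
    long do_stuff(int count, const char *name);
    #endif

    /* do_stuff.c -- the definition includes the same header,
       so the compiler checks it against the declaration */
    #include "do_stuff.h"
    long do_stuff(int count, const char *name) {
        (void)name;          /* placeholder body */
        return count;
    }

    /* caller.c -- callers include it too; a mismatched prototype
       becomes a compile-time error instead of silent ABI breakage */
    #include "do_stuff.h"
    long use_it(void) {
        return do_stuff(3, "example");
    }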



The problem is that, as-is, there's no way to add a 128-bit integer type (which'd be very useful!) to the C standard (among other changes, like widening time_t), because that'd require changing intmax_t to 128 bits, and that'd break all existing dynamically linked code. The problem of being forced to have a "common [never ever ever possible to be changed in any way, shape or form] include file" is exactly what this solves.
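A rough illustration of why widening intmax_t is an ABI break, using imaxabs() (a real libc function that takes and returns intmax_t) as the example:

    #include <inttypes.h>   /* intmax_t, imaxabs() */

    /* Compiled while intmax_t was 64 bits: the caller passes a
       64-bit argument and expects a 64-bit return value. */
    intmax_t old_caller(intmax_t x) {
        return imaxabs(x);
    }

    /* If a newer libc redefines intmax_t as 128 bits, its imaxabs()
       reads a 128-bit argument and writes a 128-bit result.  The
       already-compiled caller above still uses 64 bits, so the two
       sides of the dynamic-library boundary disagree on the calling
       convention -- that's the breakage being described. */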


It's not that hard. You have libc.so.42 with a 64-bit intmax_t. You change stdint.h to say intmax_t is 128 bits. You compile a new libc and install it as libc.so.43. Newly compiled programs and libraries link against the bigger intmax_t; existing programs continue loading the older libc. But there's a strange resistance along the lines of "omg, how can we possibly have two versions of libc installed at the same time".
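A sketch of what the header bump might look like (the version macro and the use of __int128 are made up for illustration; real stdint.h headers are structured differently):

    /* stdint.h fragment shipped with the hypothetical libc.so.43 */
    #if LIBC_SONAME_VERSION >= 43          /* made-up macro */
    typedef __int128          intmax_t;    /* 128-bit (GCC/Clang extension type) */
    typedef unsigned __int128 uintmax_t;
    #else
    typedef long              intmax_t;    /* 64-bit, as with libc.so.42 */
    typedef unsigned long     uintmax_t;
    #endif

    /* Binaries built against the old header keep loading libc.so.42;
       newly built ones record libc.so.43, and both can be installed
       side by side. */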


Having two versions of libc installed is not the problem. The problem is when your app links against two different libraries that themselves link against two different (and ABI-incompatible) versions of libc.


Versioning doesn't solve that problem. I call time() in my new code. I call some library which eventually calls futimes(). Everybody along the way needs to agree on the size of time_t. The library can't correctly use the old symbol even if it's available.
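A rough sketch of that call chain (the library and its function are hypothetical; time() and futimes() are the real calls):

    #include <time.h>       /* time(), time_t */
    #include <sys/time.h>   /* futimes(), struct timeval */

    /* Hypothetical third-party library routine, compiled separately --
       possibly against headers with a different time_t width. */
    int somelib_touch(int fd, time_t when) {
        struct timeval tv[2] = {
            { .tv_sec = when, .tv_usec = 0 },
            { .tv_sec = when, .tv_usec = 0 },
        };
        return futimes(fd, tv);
    }

    /* My new code: time() hands back a time_t of whatever size *my*
       headers say; the library expects whatever size *its* build used.
       If they disagree, the value crossing this boundary is silently
       misinterpreted -- no versioned symbol fixes that. */
    int touch_now(int fd) {
        return somelib_touch(fd, time(NULL));
    }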


On Linux, you're correct, but only because the symbol namespace is global.

On Windows, every DLL with a different name also has its own distinct symbol namespace. Thus, the conflict you describe can only arise if your code explicitly propagates some time_t* value from one library to the other.


Why can't intmax_t stay intmax_t when bigintmax_t is introduced? What code actually needs to know the size of the largest integer type supported by the current compiler?

Also, there's already the possibility that someone has defined a struct containing two ints to act as a 128-bit integer, so intmax_t is already sometimes smaller than the largest integer type a program uses.

Is intmax_t supposed to be the largest integer type in the standard, or the largest supported natively by the platform? If it's the latter, leaving it unchanged when introducing larger ints wouldn't be a problem.


having a "bigintmax_t" would...... work, but it's absolutely horrible and defeats the purpose of intmax_t being.. um.. the maximum integer type.

A struct of two integers couldn't be used in regular & bitwise arithmetic, added to pointers, used to index arrays, cast up and down to other integer types, etc.

As-is, you can pass intmax_t in and out as a default case and, worst case, you just waste space. But "uint128_t a = ~UINTMAX_C(0)" not producing a variable with all bits set would be just straight-up broken.
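A sketch of both points, using GCC/Clang's unsigned __int128 as a stand-in for a hypothetical uint128_t on a platform whose uintmax_t stays 64-bit:

    #include <inttypes.h>   /* UINTMAX_C, PRIx64, uint64_t */
    #include <stdio.h>

    /* A struct of two halves is not an integer type: no ~, +, <<,
       pointer arithmetic, or array indexing with it. */
    struct fake_u128 { uint64_t hi, lo; };

    int main(void) {
        unsigned __int128 a = ~UINTMAX_C(0);

        /* ~UINTMAX_C(0) is all-ones only in 64 bits; zero-extending it
           to 128 bits leaves the top half clear, so 'a' is NOT all ones. */
        printf("high 64 bits: %016" PRIx64 "\n", (uint64_t)(a >> 64)); /* 0000000000000000 */
        printf("low  64 bits: %016" PRIx64 "\n", (uint64_t)a);         /* ffffffffffffffff */
        return 0;
    }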


Right, and modern languages make it idiomatic to place even more stuff inside whatever their equivalent to a shared header is (in C++, it's either headers or modules) precisely because the raw C ABI has very limited semantics.



