Hacker News

And it even wastes cycles on MCUs with a 16-bit size_t.



Now that you mention it, at least on Wintel, compiler vendors did not preserve the definition of `int` during the transition from 16-bit to 32-bit. I started in the 386 era myself, so I have no frame of reference for porting code from the 286. But Windows famously retains a lot of 16-bit heritage, such as defining `DWORD` as 32 bits, making it now a double anachronism. I wonder if the decision to model today’s popular 64-bit processors as LP64 is related to not wanting to go through that again.

Edit: of course, I completely forgot that Windows chose LLP64, not LP64, for x86_64 and AArch64. Raymond Chen has commented on this [1], but only as an addendum to reasons given elsewhere which have since bitrotted.

[1]: https://devblogs.microsoft.com/oldnewthing/20050131-00/?p=36...


Some of the 8-bit MCUs I started with defaulted to a standards-noncompliant 8-bit int. A 16-bit int was an option, but it was slower and took much more code.


Is there any MCU where `size_t` is 16 bits but `int` is 32 bits? I'm genuinely curious, I have never seen one.


The original 32-bit machine, the Manchester Baby, would've likely had a 32-bit int, but with only 32 words of RAM, C would be rather limited, though static-stack implementations would work.


Me neither, but it wouldn’t be unreasonable if the target has 32-bit ALUs but only 16 address lines and no MMU.




