> [...] Char is not a byte. Byte is not a char. [...]
In this paragraph you seem to be confusing C's "char" with the much fuzzier idea of a "character", which has something like 13 valid definitions.
C's "char" is abstractly defined as the smallest addressable unit of memory available on the machine (required to have at least 8 bits), and historically there have existed 8-bit, 9-bit, 16-bit, or even 36-bit chars. In today's practice it is universally taken synonymous for (8-bit) bytes since all hardware is 8-bit by now. Some people like to be pedantic about the distinction between byte and char, but I most often do not, especially since char is the generally interoperable type in C (with respect to type punning etc.), while uint8_t to my knowledge is not.
"Character" is sometimes understood as "Unicode codepoint" (typically represented as a 32-bit entity, or even as a UTF-8 encoded slice of bytes) or in some cases understood as "Unicode glyph" (probably represented as a slice of codepoints), sometimes understood as even other things.
There is a prominent counterexample to the 8-bit char: DSPs. TI ships several DSPs with 16-bit chars, and I think I've encountered one with a 32-bit char. These boards are actually pretty common in industrial settings.
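If your code quietly assumes 8-bit chars, a compile-time guard is cheap insurance; a small sketch using C11's _Static_assert:

    #include <limits.h>

    /* Fail the build on platforms (e.g. some TI DSPs) where char is wider
       than 8 bits, instead of silently miscomputing sizes. */
    _Static_assert(CHAR_BIT == 8, "this code assumes 8-bit chars");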