In ASCII, the concept of a character is overloaded with many properties. Unicode breaks "character" out into separate concepts such as code unit, code point, and grapheme cluster.
Because of this, some of your statements are invalid. For example: "Along comes unicode which has variable bytes per character. (Yes, even for utf-32, which is why no-one uses utf-32)."
UTF-32 has a fixed number of bytes per code point. It doesn't have a fixed number of bytes per grapheme cluster.
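A minimal sketch in Python (not from your post, just an illustration) of the distinction: one user-perceived character can be several code points, so even UTF-32 needs a variable number of bytes per grapheme cluster.

```python
# "é" written as 'e' plus U+0301 COMBINING ACUTE ACCENT:
# one grapheme cluster, but two code points.
s = "e\u0301"

print(len(s))                       # 2 code points
print(len(s.encode("utf-32-le")))   # 8 bytes in UTF-32 (4 per code point)
```

Counting grapheme clusters requires the Unicode segmentation rules (e.g. a library that implements UAX #29); the length of the encoded bytes alone can't tell you how many "characters" the user sees.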