UTF-16 is variable-length in that a Unicode code point can take up one or two UTF-16 code units. However, for backward-compatibility reasons, charAt() is defined to return a UTF-16 code unit (regardless of whether that unit is a useless half of a code point), so it's effectively O(1) indexing.
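A quick sketch of what that looks like in practice (assuming any modern JavaScript engine; U+1D11E is just an arbitrary character above 0xFFFF):

```js
// U+1D11E (MUSICAL SYMBOL G CLEF) is above 0xFFFF, so it takes
// two UTF-16 code units: the surrogate pair \uD834 \uDD1E.
const s = "\u{1D11E}";

s.length;                      // 2 -- counts code units, not code points
s.charAt(0);                   // "\uD834" -- a lone high surrogate, half a code point
s.charCodeAt(0).toString(16);  // "d834" -- same code-unit view
s.codePointAt(0).toString(16); // "1d11e" -- the ES2015 addition that sees the full code point
```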
Why wouldn't UTF-8 offer the same O(1) indexing? I still think they should fix regex, charCodeAt(), etc. to support the newer characters above 0xFFFF, as demonstrated; switching would bring dramatic memory savings and remove the need for hacks to detect surrogate pairs.
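For reference, this is the kind of surrogate-pair detection hack I mean (a sketch; the helper name codePointAtIndex is my own):

```js
// Manually decode a surrogate pair starting at code-unit index i --
// what you had to write by hand before codePointAt() existed.
function codePointAtIndex(str, i) {
  const hi = str.charCodeAt(i);
  if (hi >= 0xD800 && hi <= 0xDBFF && i + 1 < str.length) {
    const lo = str.charCodeAt(i + 1);
    if (lo >= 0xDC00 && lo <= 0xDFFF) {
      // Recombine the two code units into the real code point.
      return (hi - 0xD800) * 0x400 + (lo - 0xDC00) + 0x10000;
    }
  }
  return hi; // not a surrogate pair: the code unit is the code point
}

codePointAtIndex("\u{1D11E}", 0).toString(16); // "1d11e"
```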