As per my earlier comment, the dimension isn't fixed. The usual use case (storing embeddings) is instructive as to the range of values. For token embeddings, the embedding is often generated via a lookup: a fixed vocabulary maps each token to a token ID, and that ID indexes into the embedding table. So if your vocab is words, the value is a word ID, which would obviously be an integer, not a real, while the embedding it points to is a vector of reals. Here's an intro to word embeddings https://wiki.pathmind.com/word2vec and here's one for positional embeddings (the new hotness given how zeitgeisty GPTs are) https://theaisummer.com/positional-embeddings/
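
To make the lookup concrete, here's a minimal sketch. The vocabulary, sizes, and random embedding matrix are all made up for illustration; real systems learn the matrix and use far larger vocabularies:

```python
import numpy as np

# Toy vocabulary: each word maps to an integer token ID (illustrative only).
vocab = {"the": 0, "cat": 1, "sat": 2}

# The embedding dimension is a free choice, not fixed by anything.
embedding_dim = 4

# One real-valued row per token ID (randomly initialised here for the sketch).
embedding_matrix = np.random.rand(len(vocab), embedding_dim)

# The lookup is just: word -> integer ID -> row of the matrix.
token_id = vocab["cat"]              # an integer, not a real
embedding = embedding_matrix[token_id]  # a vector of reals

print(token_id, embedding)
```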