I'm interested in this as I maintain a little floating-point RNG project, but I suspect you are somehow misled on this.
Simple RNGs can discard significant bits during math operations to help produce random-like data, e.g. the "middle-square method", or the lower bits of Math.sin(seed++) can be used as practically random. The little RNG I developed and tested erratically discards 0 to 3 MSBs in its state as part of its generation scheme. But getting a library random function to return [1,2] and then subtracting 1 can barely affect the overall uniformity of the distribution. At best it's just discarding 1 bit of potential entropy for a more regular, and I suspect louder, noise floor.
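For what it's worth, the [1,2]-then-subtract-1 pattern is usually done at the bit level: write random mantissa bits under a fixed exponent so the double lands in [1,2), then subtract 1 to get [0,1). A minimal sketch (the function name and the split into two 32-bit halves are my own, purely illustrative):

```javascript
// Sketch: pack 52 random mantissa bits under the fixed exponent 0x3FF,
// which yields a double in [1,2); subtracting 1 gives a value in [0,1).
// hi20 supplies the top 20 mantissa bits, lo32 the remaining 32.
function bitsToUnit(hi20, lo32) {
  const buf = new DataView(new ArrayBuffer(8));
  buf.setUint32(0, 0x3FF00000 | (hi20 & 0xFFFFF)); // sign 0, exponent for [1,2), top mantissa bits
  buf.setUint32(4, lo32 >>> 0);                     // low 32 mantissa bits
  return buf.getFloat64(0) - 1;                     // shift [1,2) down to [0,1)
}
```

All-zero bits map to exactly 0, and all-one bits map to just under 1, so the output spacing is a uniform 2^-52 across the whole range, which is why this trick keeps the distribution uniform while, as noted above, giving up some of the finer spacing floats could represent near zero.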