Yeah, aside from the fact that this kind of compositionality is relatively unique to word2vec, research on the biases pre-trained models express is readily available. I've linked a few papers below for those interested. Most of the issues come down to the same phenomenon discussed here in the context of ImageNet: the input texts were biased, and the algorithm learned that bias.
Doctor - Man + Woman = ?
What actually comes out is "Nurse". What "they" think should come out is "Doctor"!
By "they" I mean the people who get upset by this.
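To make the arithmetic concrete, here's a toy sketch of how that analogy query works. The 2-D vectors below are made up purely for illustration (real word2vec embeddings are ~300-D and learned from text, which is exactly where the bias comes in); the point is just the "subtract, add, find nearest neighbor by cosine similarity" mechanic.

```python
import numpy as np

# Made-up 2-D "embeddings" for illustration only -- NOT real word2vec vectors.
vecs = {
    "doctor":  np.array([0.90, 0.80]),
    "surgeon": np.array([0.95, 0.90]),
    "nurse":   np.array([0.90, -0.80]),
    "man":     np.array([0.00, 1.00]),
    "woman":   np.array([0.00, -1.00]),
}

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c):
    """Answer 'a - b + c = ?' by nearest cosine neighbor, excluding the inputs."""
    target = vecs[a] - vecs[b] + vecs[c]
    candidates = {w: v for w, v in vecs.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cos(candidates[w], target))

print(analogy("doctor", "man", "woman"))  # -> nurse (by construction of the toy vectors)
```

With a real pretrained model you'd do the same thing via something like gensim's `KeyedVectors.most_similar(positive=["doctor", "woman"], negative=["man"])`; the biased answer falls out of the training corpus, not the arithmetic.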