OpenAI came up with the term Generative Pre-Training in the paper introducing what is now called GPT-1.
Not arguing that they should own a trademark for it, but saying they are "hijacking" the term is disingenuous.
No, they took the pre-existing term "Generative Pretraining" [0] and applied it to Transformers (a Google innovation [1]) to get "Generative Pretrained Transformers" [2]. And even if you look at the full name: that paper doesn't use "GPT" or "Generative Pretrained Transformers" at all, as far as I can tell; this commenter [3] claims the name was first used in the BERT paper.
It's a pretty dumb acronym, though, since it's doubly redundant.
The "T" is the only descriptive bit. The transformer is inherently a generative architecture, i.e. a sequence predictor/generator (see the sketch below), so "generative" adds nothing to the description. All current neural net models are trained before use, so "pretrained" adds nothing either.
It's like calling a car an MPC: a mobile, pre-assembled car.
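To make the "sequence predictor/generator" point concrete, here's a minimal sketch in PyTorch (the TinyDecoder model, the sizes, and greedy decoding are all invented for illustration, not anyone's actual GPT code): "generation" with a transformer is just next-token prediction called in a loop.

```python
import torch
import torch.nn as nn

VOCAB, DIM, CTX = 256, 64, 32  # made-up toy sizes

class TinyDecoder(nn.Module):
    """A toy causal transformer (positional encodings omitted for brevity)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):                        # tokens: (batch, seq)
        # Causal mask: each position attends only to earlier positions,
        # which is what makes the model a next-token predictor.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.blocks(self.embed(tokens), mask=mask)
        return self.head(h)                           # logits: (batch, seq, VOCAB)

@torch.no_grad()
def generate(model, prompt, steps=16):
    # "Generation" is just the predictor called repeatedly:
    # predict the next token, pick one, append, repeat.
    tokens = prompt
    for _ in range(steps):
        logits = model(tokens[:, -CTX:])              # truncate to context window
        nxt = logits[:, -1].argmax(-1, keepdim=True)  # greedy choice
        tokens = torch.cat([tokens, nxt], dim=1)
    return tokens

model = TinyDecoder()                                 # untrained, so output is noise
print(generate(model, torch.randint(0, VOCAB, (1, 4))))
```

Nothing in that loop is GPT-specific; it's the bog-standard autoregressive decode that the "generative" in the acronym refers to.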
Shilling for a multi-billion-dollar corporation makes you look like a fool, all the more so when they are clearly on the wrong side of the argument here.