LLMs are trained on language as a whole, not merely mediocre language. That is why a model can be fine-tuned in one language and show the benefits in other languages. How much longer will this fundamental misunderstanding of these models continue, and how often will it be put forth by people who are worried that a task X they enjoy will be replaced?