Only murmurs for now, admittedly, but I heard that one reason IBM is training models (e.g. https://research.ibm.com/blog/granite-code-models-open-sourc...) is that they provide LLM-based systems to help enterprise customers work on ancient legacy codebases in languages like COBOL. If true, I could definitely see how that might boost productivity as fewer people remain fully trained in the details of such old systems and languages.
I don't have any insider knowledge, but maybe IBM could get its hands on a lot more legacy code for weird, arcane systems than GitHub could? That would at least make their models more specialized than ones trained on a corpus that's 50% Python.