There is a great opportunity for totalitarian and authoritarian regimes (so far, China and the UAE) to create commercially usable, free LLMs that work significantly better than the alternatives, backed by large amounts of government money.
Over time, as they get used in more and more products, these LLMs can become more 'aligned' to these regimes' way of thinking.
There are no Chinese companies that are not part of the Chinese government.
This would be more interesting for discussion if the comment described an actual threat scenario instead of a vague hypothetical. Even in the hypothetical, there’s no actual described consequence, only that China gains some undefined level of soft power which means nothing on its own.
Some ideas:
- How are these LLMs being used? Who is the end user and what are they using the application for?
- If a state-level threat actor wanted to compromise an LLM, how would they do it? What would their goals be? How would they then use the attack vector to accomplish their goals?
- What benefit would the actor get from doing so? What are the costs? What are the consequences if they fail or are discovered?
- How would a target detect if they’ve been compromised? How easily could they recover?
I didn't think it was that vague, but if you're looking for ideas:
1. A high-quality LLM is made free for commercial use.
2. The LLM is used in many places because it is the best available.
3. The LLM is aligned to subtly promote the interests of the threat actor.
There is no 'compromise'. It is not hacking software, only wetware.
A concrete example...
An LLM created by an Advanced Persistent Threat (APT) is used in educational software aimed at kids. Over time, as the kid interacts with it, the LLM promotes a way of thought that either aligns with the APT or undermines the ideology of the society the kid lives in. There is no moment that can be pointed to and called out: "look, they are trying to hack us!", but decades later you have adult members of a foreign society more open to your way of thinking.
The weights, computed through very intensive training, are what hold the knowledge in LLMs; the source code just executes them. These products could simply update/patch their weights periodically, and no one would complain, because a weight update is not suspicious per se.
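The point can be sketched in a toy example (hypothetical names, a deliberately trivial "model"): the inference code is identical in both runs, and only the numbers loaded as weights change the output. Nothing in the code would flag a routine weight update as malicious.

```python
def run_model(weights, prompt_features):
    # The "source code" is just this fixed computation; all of the
    # model's behavior lives in the numbers it multiplies by.
    return sum(w * x for w, x in zip(weights, prompt_features))

# Original release of the weights.
original = [0.5, -0.2, 1.0]
# A later "routine update" with one subtly shifted value.
patched = [0.5, -0.2, 1.4]

features = [1.0, 2.0, 3.0]
print(run_model(original, features))
print(run_model(patched, features))  # same code, different behavior
```

Auditing the code tells you nothing here; only inspecting the model's behavior before and after the update would reveal the shift.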
> Over time, as they get used in more and more products, these LLMs can become more 'aligned' to these regimes' way of thinking.
> There are no Chinese companies that are not part of the Chinese government.
This is a new kind of cultural soft power.