What a pointed title. That aside, I am rather surprised that a committee's investigation report is this light on what in my opinion are fundamental details, including the make-up of the committee, the members' respective duties and the course of the investigative process. Notwithstanding the potentially political raison d'etre of the report, is that customary for Congressional committees?
The gripe I have with this is that it is 1) an impermanent external resource that shows 2) the current, not the contemporaneous, make-up of the committee, which is 3) subject to change at any time, and thus not a lasting appendix to the report. I guess I had expected more academic rigour from a congressional committee.
Dang, you really can make anything sound scary if you use the right language!
1. ChatGPT funnels your data to American Intelligence Agencies through backend infrastructure subject to U.S. Government National Security Letters (NSLs) that allow for secret collection of customer data by the US Department of Defense.
2. ChatGPT covertly manipulates the results it presents to align with US propaganda, as a result of the widely disseminated Propaganda Model and close ties between OpenAI's leadership and the US Government.
3. It is highly likely that OpenAI used unlawful model training techniques to create its model, stealing from leading international news sources, academic institutions, and publishing houses.
4. OpenAI’s AI model appears to be powered by advanced chips manufactured by Taiwanese semiconductor giant TSMC and reportedly utilizes tens of thousands of chips that are manufactured by a Trade War adversary of America and subject to a 32% import duty.
Yeah, the Chinese govt has far less incentive to mess with me personally than the US govt does. It's hard to convince people of this point of view, I have found.
It's shocking how much American soft power has diminished in such a short period. White House documents used to mean something, carried a certain weight, whereas now some of them are simply ridiculous. This one isn't even particularly bad. Still, we know who inspired it, and given that DeepSeek made their models available and OpenAI didn't, whatever is written here should be taken with more than one grain of salt.
What's interesting is that most of this applies to proprietary US models when used by non-US users, too. "Stores data in the US"? Yes. "Complies with approved narratives"? Check. "Cooperates with intelligence services and the military"? Check. The only real solution here is open weights, and DeepSeek is the strongest open-weights model to date. Don't like it? Compete.
It's amusing to see the hypocrisy on display, though. The authors of the report seem to be seriously accusing DeepSeek of IP theft from OpenAI, which was built on... IP theft. LOL.
Sinophobic junk. You got shown up by a free and open model after wasting a gazillion dollars, good job. So yes, let's ban the competition and force Americans to use the junky, ad-riddled cheap clones.
As someone in Europe, I sometimes wonder what’s worse: letting US companies use my data to target ads, or handing it to Chinese companies where I have no clue what’s being done with it. With one I at least get an open source model. The other is a big black box.
Isn't this a bit of semantic lawyering? Open model weights are not the same as open source in a literal sense, but I'd go so far as to suggest that open model weights fulfill much of the intent / "soul" of the open source movement. Would you disagree with that notion?
> open model weights fulfill much of the intent / "soul" of the open source movement
Absolutely not. The intent of the open source movement is sharing methods, not just artifacts, and that would require training code and methodology.
A binary (and that's arguably what weights are) you can semi-freely download and distribute is just shareware – that's several steps away from actual open source.
There's nothing wrong with shareware, but calling it open source, or even just "source available" (i.e. source you can read, but under licensing/usage restrictions), when it isn't, is disingenuous.
> The intent of the open source movement is sharing methods, not just artifacts, and that would require training code and methodology.
That's not enough. The key point was trust: an executable can be verified by independent review and rebuild. If it cannot be rebuilt, it could contain a virus, trojan, backdoor, etc. For LLMs there is no way to reproduce them, and thus no way to verify them. So they cannot be trusted, and we have to trust the producers. That's not so important when models are just talking, but with tool use they can do real damage.
Hm, I wouldn't say that that's the key point of open software. There are many open source projects that don't have reproducible builds (some don't even offer any binary builds), and conversely there is "source available" software with deterministic builds that's not freely licensed.
On top of that, I don't think it works quite that way for ML models. Even their creators, with access to all training data and training steps, are having a very hard time reasoning about what these things will do exactly for a given input without trying it out.
"Reproducible training runs" could at least show that there's not been any active adversarial RHLF, but seem prohibitively expensive in terms of resources.
Well, 'open source' is interpreted in different ways. I think the core idea is that it can be trusted. You can get a Linux distribution and recompile every component except for the proprietary drivers. With that being done by independent groups, you can trust it enough to run a bank's systems. The other option is something like Windows, where you have to trust Microsoft and their supply chain.
There are different variations, of course. Mostly related to the rights and permissions.
As for big models: even their owners, with all the hardware, training data, and code, cannot reproduce them. A model may have some undocumented functionality pretrained in or added in a post-process, and it's almost impossible to detect without knowing the key phrase. It could be a harmless watermark or something else.
But there is also no publicly known way to implant unwanted telemetry, backdoors, or malware into modern model formats either (which hasn't always been true of older LLM model formats), which mitigates at least one functional concern about trust in this case, no?
It's not quite like executing a binary in userland - you're not really granting code execution to anyone with the model, right? Perhaps there is some undisclosed vulnerability in one or more of the runtimes, like llama.cpp, but that's a separate discussion.
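For what it's worth, here's a minimal sketch (the file names are hypothetical) of why modern weight formats are considered safer to share than older pickle-based checkpoints:

    import torch
    from safetensors.torch import load_file

    # safetensors is a plain tensor container: loading it parses data, never code.
    weights = load_file("model.safetensors")            # hypothetical local file

    # Pickle-based checkpoints can embed Python that executes on load;
    # recent PyTorch versions offer weights_only=True to reject such payloads.
    state = torch.load("pytorch_model.bin", weights_only=True)  # hypothetical file

So the remaining trust problem sits less in loading the weights and more in what the model then outputs.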
The biggest problem is arguably at a different layer: These models are often used to write code, and if they write code containing vulnerabilities, they don't need any special permissions to do a lot of damage.
It's "reflections on trusting trust" all the way down.
If people who cannot read code well enough to evaluate whether or not it is secure are using LLMs to generate code, no amount of model transparency will solve the resulting problems. At least not while LLMs still suffer from the major problems they have, like hallucinations, or simply being wrong (just like humans!).
Whether the model is open source, open weight, both, or neither has essentially zero impact on this.
I've seen the argument that source code is the preferred form for making changes and modifications to software, but in the case of these large models, the weights themselves are the preferred form.
It's much easier and cheaper to make a finetune or a LoRA than to train from scratch when adapting a model to your use case, so it's not quite like source vs. binary in software.
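To make that concrete, here's a minimal sketch, assuming the Hugging Face transformers and peft libraries and an illustrative open-weight checkpoint, of attaching a LoRA adapter so only a tiny fraction of the parameters needs training:

    # Minimal LoRA sketch; model name and hyperparameters are illustrative assumptions.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed open-weight checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA trains small low-rank adapter matrices on top of frozen base weights,
    # which is why adapting open weights is far cheaper than training from scratch.
    config = LoraConfig(
        r=8,                                   # adapter rank
        lora_alpha=16,                         # scaling factor
        target_modules=["q_proj", "v_proj"],   # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of all weights

The resulting adapter is small and can be shared and applied independently of the base weights.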
It does not, and I totally disagree with that. Unless we can see the code that goes into the model to stop it from telling me how to make cocaine, it's not the same sort of soul.
> With one I at least get an open source model. The other is a big black box.
It doesn't matter much, as in both cases the provider has access to your inputs and outputs. The only question is whether you trust the company operating the model. (Yes, you can run a local model, but it's not that capable.)
The US is tending towards dictatorship; due process is an afterthought, people are disappearing off the streets, citizens are getting arrested at the border for nothing, tourists are getting deported over minute issues such as an iffy hotel booking, and that's just off the top of my head from the last two days.
As long as I can run it on my own cheap hardware, I'll be using it. Our contracts with some of our customers stipulate that their data never leaves our servers.
Everybody is spying on everybody; it's a free-for-all. If you want to stay out of reach, either stop using software for sensitive information and communication, or start using fully encrypted products. Cryptography is the key.
It's important to distinguish the DeepSeek app from the open-weight models, which are released under very liberal licenses and give you full control over where the data fed to the model goes, e.g. it can stay in the USA.
".. siphons data back to the People’s Republic of China (PRC)"
How does that work when I run the model myself?
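If it helps, here's roughly what self-hosting looks like: a minimal sketch (the checkpoint name is illustrative, and it assumes the transformers library) in which prompts and outputs never leave the machine.

    # Local inference sketch; no network access is needed once the weights are on disk.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = "Summarise the difference between open weights and open source."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    # After the initial download, setting HF_HUB_OFFLINE=1 keeps everything strictly local.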
Cry me a river, you tried to build a massive moat to force the rest of the world to suck you off for access and now you got caught with your pants down by a model that has been given out for free.
I wouldn't want to know how the US would use the discovery of cold fusion or a cure-all to make a profit for its elite instead of giving it out for the greater good.