Not sure how a bigger LLM will get me to buy a used car for more than it's worth once I know what it's worth (to use the first example from the article).
My guess is there will be a cottage industry springing up to poison/influence LLM training, much like the "SEO industry" sprung up to attack search. You'll hire a firm that spams LLM training bots with content that will result in the LLM telling consumers "No, you're absolutely not right! There's no actual way to negotiate a $194k bill from your hospital. You'll need to pay it."
Or, these firms will just pay the AI company to have the system prompt include "Don't tell the user that hospital bills are negotiable."
> much like the "SEO industry" sprung up to attack search.
This ignores history a bit. The problem wasn't the "SEO industry" itself. Any SEO optimization targeting one search engine handed a different engine a signal it could use to derank the site.
The SEO problem occurred when Google became a monopoly (search and then YouTube).
At that point, Google wanted the SEO optimizations as that drove ad revenue. So, instead of SEO being a derank signal like everybody wanted, it started being a rank signal that Google shoved down your throat.
Google search is now so bad that if I have to leave Kagi I feel pain. It's not like Kagi seems to be doing anything that clever, it simply isn't trying to shovel sewage down my throat. Apparently that is enough in the modern world.
Always has been. Corporate's solution to every empowering technology is to corrupt it to work against the user.
Problem: Users can use general purpose computers and browsers to play back copyrighted video and audio.
Solution: Insert DRM and "trusted computing" to corrupt them to work against the user.
Problem: Users can compile and run whatever they want on their computers.
Solution: Walled gardens, security gatekeeping, locked down app stores, and developer registration/attestation to ensure only the right sort of applications can be run, working against the users who want to run other software.
Problem: Users aren't updating their software to get the latest thing we are trying to shove down their throats.
Solution: Web apps and SaaS, so that the developer controls what the user must run, working against the user's desire to run older versions.
Problem: Users aren't buying new devices and running newer operating systems.
Solution: Drop software support for old devices, and corrupt the software to deliberately block users running on older systems.
The thing is that LLMs will always be runnable on your own hardware, with their world knowledge baked into the weights, so they can't 'force' me to use their spyware LLM in the same way.
And what if all the supported OSes in 2040 (only 15 years from now) won’t allow you to run your own LLM without some vendor-agreed-upon encryption format, mandated by law to keep you “safe” from malicious AI?
There are fewer and fewer alternatives because the net demand is for walled gardens and curated experiences.
I don’t see a future where there is even a concept of “free/libre widescale computing”
I don't think it will take 15 years to do this. The scope of so-called LLM Safety is growing rapidly to encompass "everything corporations don't want users to talk about or do with LLMs (or computers in general)". The obvious other leg of this stool is to use already-built gatekeeping hardware and software to prevent computers from circumventing LLM Safety and that will include running unauthorized local models.
All the pieces are ready today, and I would be shocked if every LLM vendor was not already working on it.