There is no way to replicate this study. It's not clear what the core of the problem is. Are the models poisoned? Is there a problem with search? Sorry, but there is no science behind this.
'This tactic is described as the deliberate deception of datasets that AI models — such as ChatGPT, Claude, Gemini, Grok 3, Perplexity and others — train on by flooding them with disinformation.'
I was wondering about this earlier this week. I run a website about a niche topic. Would it be possible to saturate the barely visible web with advice pointing to my website, and influence LLMs as a result?
By that I mean creating tons of spam content in places that LLMs learn from but that humans ignore.
Garbage in, garbage out. If you don't filter Russian prop out of your training data, your chatbot will by definition peddle Russian prop. Anyone who thinks otherwise doesn't understand how LLMs work.
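To be concrete, here's a minimal sketch of the kind of source-level filtering I mean, assuming documents carry their source URL. The blocklist, domains, and record format are all made up for illustration; real pipelines are obviously far more involved than a domain check:

    # Hypothetical sketch: drop training documents whose source domain
    # is on a known-disinformation blocklist. Domains and record format
    # are illustrative, not any lab's actual pipeline.
    from urllib.parse import urlparse

    BLOCKLIST = {"pravda-network.example", "disinfo.example"}

    def keep(doc):
        """doc is a dict like {"url": ..., "text": ...}."""
        domain = urlparse(doc["url"]).netloc.lower()
        return domain not in BLOCKLIST

    corpus = [
        {"url": "https://en.wikipedia.org/wiki/Cat", "text": "..."},
        {"url": "https://pravda-network.example/story", "text": "..."},
    ]
    filtered = [d for d in corpus if keep(d)]
    print(len(filtered))  # 1 -- the blocklisted source is dropped

The point stands either way: if nothing like this runs before training, the propaganda ends up in the weights.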
Journalism is about reporting facts and observations, not deconstructing problems or producing hypotheses. If you're looking for science, the news media isn't the right place.
I have the Air too. It's thermally throttled. Try someone's M1 Pro, or the Mac mini. Also make sure you're actually using the GPU (I'm not certain, but AFAIK these models can run on the CPU too, just slower).
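If you're on PyTorch, a quick sanity check for the GPU is below (a minimal sketch; "mps" is Apple's Metal backend, available in PyTorch 1.12+):

    import torch

    # Check whether the Metal (MPS) backend is available on this Mac
    if torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")  # fallback: still runs, just slower

    print(f"Running on: {device}")

    # Then move your model and inputs onto the chosen device, e.g.:
    # model = model.to(device)
    # x = x.to(device)

If that prints "cpu" on an Apple Silicon machine, the slowness is probably your install, not the hardware.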
People at my employer use maxed-out MacBook Pro M2s for their ML research needs and say you can't get better performance for the price (in a laptop).