Quite a while ago I was entertained by a particular British tabloid article that had been "AI edited". The article was partially correct, but it went badly wrong because its subject was recent political events that had happened some years after the cutoff of the LLM's training data. As a result, the article contained several AI-generated contextual statements about the state of the world that had been true two years earlier, but not anymore.
They quietly fixed the article only after I pointed its flaws out to them. I hope more serious journalists don't trust AI so blindly.