Every business tolerates some rate of bullshit output. The question is exactly how often it happens and how much harm the bullshit can cause.
My point was that spam is the perfect use case for this tech.
Of course there are other possible use cases, but spam and fake-news content creation are the perfect fit. AI lets anyone easily clone the writing style of any publication, insert whatever bullshit content they like, and keep up with the publishing cycle with almost zero workforce.
Want a flat-earther version of The New York Times (The New York Flat Times)? Done. Want a just-slightly, insidiously fascist version of NPR? Done. Want a pro-NATO version of RussiaToday (WestRussiaToday)? Done.
And we already know people share stuff without first checking it for veracity or reliability.
Summarizing existing text is the *only* safe and serious use case for LLMs.