> As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services.
Just like I've predicted many times on HN.
Text generation is low on the list of things needed to successfully engage in automated spam. Social media is built on reputation, not on who can write generic, believable text the quickest. And funnily enough, the ex-OpenAI people (mostly Helen) calling for government regulation said GPT should not have been released to the public because of this very risk.
In a way, I find your comment unintentionally hilarious. Sure, this may actually be true. Consider the source, however: congratulating yourself by citing the tobacco company's own research into the (lack of) adverse effects of tobacco seems a tad ill-conceived. The talk of reputation makes it doubly so, since on this topic Sam Altman, or any official OpenAI post, has no credibility whatsoever.
> Social media is built on reputation, not who can write generic believable text the quickest.
This is not the whole picture at all; I'd even say it's mostly incorrect. You can definitely influence public opinion with a social media presence consisting entirely of bots, provided the public isn't aware of it.
A lot of social consensus is formed when huge threads on Twitter or Instagram show a massive number of supporters behind a topic. So when bots get better at saying the same thing in seemingly human voices, it trains us to believe that the position is popular, and likely that it is right.