This was possible before LLMs were even invented. Creating hundreds of variants of hate speech and emailing them out is trivial and doesn’t need any machine learning at all.
I’m not the person you’re replying to, but I disagree that it’s trivial. If I wanted to, say, send a personalized phishing email to every member of a thousand-person organization, based on whatever information was publicly available on their Facebook, Instagram, and LinkedIn profiles, and the profiles of their peers, it would take me a long time to research and craft each one.

Or let’s think bigger. Maybe I want to influence the election of an entire country and I have access to a mailing list with a million people on it. Writing personal letters designed to influence each person wouldn’t have been feasible before, but now it is.

Or maybe you don’t use email or letters at all. Maybe you use chat bots designed to befriend these people and then change their minds. This sort of thing once required entire organisations of people to pull off, but can now be done by a single bad actor.
I’m probably still not thinking big enough here, either. People are going to find nefarious uses that I can’t even imagine right now, on scales that I find difficult to comprehend. I’m personally terrified.