Anecdotal, but this is the exact consensus I saw among my non-tech peers. They find it fun for a few days or weeks, then basically never touch it again once the novelty wears off. The only normies I know still using LLMs are students using them to write papers.
I know a lot of people who went through the "Oh, wow - wait a minute..." cycle. Including me.
They're somewhat useful in some contexts. But those contexts are limited. And when facts or code are involved, both require manual verification.
They're ideal for bullshit jobs - low-stakes corporate makework, such as mediocre ad copy and generic reports that no one is ever going to read.
Now you're the bullshit machine. No one said that. We expect basic reliability and reproducibility. A $4 drugstore calculator has that to about a dozen 9s, every single time. These machines will give you a correct answer and then walk it right back if you respond the "wrong" way. They're not just wrong a lot of the time; they have no idea even when they're right. Your strawman is of no value here.
Clearly generative AI can currently only be used where verification is easy. Software is a good example. Not sure why you think I claimed otherwise.