It's occurred to me lately that, even though they rarely produce anything better than "adequate", LLMs being used as a cheaper alternative to human output means we'll be settling for more, lower-quality content. Economics aside, I'm loath to think of the day that ChatGPT is used to describe food allergens, hazard warnings, voltage ratings, or any other quantitative information with potentially lethal consequences if wrongly conveyed, where errors get overlooked because an LLM gets it right most of the time.
The frustrating thing is that companies will likely get away with stating falsehoods and/or not following through on what the AI claimed they'd do.
It's like how in the old days, if you sent a company a contract offer and they accepted it, they were bound by it. Today, if you send a company's server a request they didn't expect and they confirm the contract, they can still renege later since you "hacked them" by removing some client-side constraints.
In the same way, I imagine companies will claim no responsibility for what their AI says, making the experience much worse for consumers. Today, if a customer service rep states something, the company is mostly bound to it. In the future, they'll just claim it was an "AI bug" or a "prompt injection attack" and won't honor it.
> The frustrating thing is that companies will likely get away with stating falsehoods and/or not following through on what the AI claimed they'd do.
This has been a key feature of digitization generally: delegate responsibility to a computer and remove the autonomy of the human worker. We all just accept that computers screw up, so we don't expect better.
I expect the reverse: our tolerance for bland and generic messaging is already too high. With ChatGPT filling our world with it, there'll come a renaissance in clear, pointed, punchy text. Even if it's ChatGPT 5 or 6 generating it.
There's definitely going to be a niche of ironic text that humans can tell is not AI-generated, that looks AI-generated, and that subtly makes fun of AI at the same time.
Yeah, I see this as an extension of what's happened to the Internet, but in everything.
Searching for a product now just turns up a bunch of SEO crap with affiliate links. It didn't take anything intelligent to create those pages: just rename "2022" to "2023 (Updated)" and voilà. The cognitive overhead of parsing out solid reviews, finding trusted sources, etc., is pretty high now.
And now we're going to have video, audio, music, and entire conversations we think might be authentic but aren't sure about, that are just regurgitating patterns with some guesses mixed in. Feels like the next phase will be constantly trying to parse out real from fake.
Anything compliance-related is going to come under the most scrutiny. I'd expect that to be the last thing ChatGPT takes over completely, even if it is involved.
It’s one thing to hallucinate while writing shitty copy, it’s another to get sued to oblivion over a small error that your LLM made up. Not to mention those voltage rating, hazard warnings are fairly straightforward and standardized, they are much more suited to the automation we already had, ie, reference a template and fill in the relevant data.
And so do humans. Don't forget that there's plenty of SEO copywriting out there about all kinds of dangerous things that is wrong because the writer didn't understand the topic, couldn't be bothered to copy correctly, changed the meaning while "rewriting" a text, etc.
The same goes for quality, imho. Everybody thinks they're a terrific writer, but most writing on websites sucks, most documentation sucks, and most FAQs are severely lacking and use vague language in answers that adds confusion rather than removing it. I'm not sure LLMs will really lower the average quality there.
I share your concern about correctness, I just wanted to emphasize that "a human wrote this" does not indicate "this is well-researched and was factually correct at the time of writing".
> most writing on websites sucks, most documentation sucks, and most FAQs are severely lacking and use vague language in answers that adds confusion rather than removing it. I'm not sure LLMs will really lower the average quality there.
That's mostly the case because software developers write a lot of these FAQs and docs, and most aren't good at writing.
Companies with dedicated documentation teams usually have well-written documentation. It's just not the norm.
For a while, sure, but complacency tends to set in with automated procedures that usually work. It's the one-in-ten-thousand errors that'll sneak under the nose of the tired prompt engineer who believes his model is accurate enough for that sort of thing not to happen.
Don't forget: technology improves. Everything we see now is just the beginning.
Additionally, ChatGPT is in fact already superior to most people at many kinds of writing. For example, composing a rhyming poem about some obscure topic: ChatGPT will likely blow most of us out of the water in terms of quality.
Sure, technology evolves, and the solution to the Current Problem Is Only One Paper Away, but it's extrapolating to the problems arising from the next paper that matters more. ChatGPT citing fictitious articles as backup for fictitious conclusions in response to prompts about real people wasn't a pervasive problem until recently, if memory serves.
Novel problems arise from novel solutions to yesterday's novel problems, and the cascade shall continue.
> Additionally, ChatGPT is in fact already superior to most people at many kinds of writing. For example, composing a rhyming poem about some obscure topic: ChatGPT will likely blow most of us out of the water in terms of quality.
Sure, it can eventually settle on some nice output, but knowing it was created by a statistical model makes it about as impressive as a hyper-realistic painting of a human face displayed on a webpage.