When I read comments today, I wonder whether a human being wrote them or an LLM did.
That, to me, is the biggest difference. Previously I was mostly sure that something I read couldn’t have been generated by a computer. Now I’m fairly certain that I would be fooled quite frequently.
Mm. To me, ChatGPT has a certain voice; not sure about the other LLMs.
But perhaps I'm wrong. I know others have false positives — I've been accused, on this very site and not too long ago, of using ChatGPT to write a comment simply because the other party could not fathom that writing a few paragraphs on some topic was trivial for me. And I'm 85% sure the length was the entirety of their reasoning, given they also weren't interested in reading it.
> You’re definitely right about that. ChatGPT is almost too accurate/structured.
I think a lot of the training material came from standardized testing.
That very structured writing style, with many paragraphs, each discussing one aspect and finished by a conclusion, is the classic style taught for (American, at least) standardized testing, be it the SAT, GRE, TOEFL, et al.
Was going to post something similar. There may be a need for a way to confirm (not detect, which is its own field) organic content. I hate the thought, because I assume I know where that goes privacy-wise.
> Mm. To me, ChatGPT has a certain voice; not sure about the other LLMs.
How long will it be before humans, reading mostly LLM output, adopt that same writing style? Certainly people growing up today will be affected.
I remember an HN comment six months or so ago by someone who said they were intentionally modeling their writing on ChatGPT's style. The person said that they were not confident about writing and that they were trying to get better by imitating AI.
One of the many surprising things to me about ChatGPT when it was first released was how well, in its default style, it imitated the bland but well-organized writing style of high school composition textbooks: a clearly stated thesis at the beginning, a topic sentence for each paragraph, a concluding paragraph that often begins "In conclusion."
I mentioned that last point—the concluding "In conclusion"—as an indicator of AI writing to a university class I taught last semester, and a student from Sweden said that he had been taught in school to use that phrase when writing in English.
If I see HN comments that have final paragraphs beginning with "In conclusion" I will still suspect that an LLM has been used. Occasionally I might be wrong, though.
I was taught in high school that using "In conclusion" to open your conclusion was cliche and really almost like an unnecessary slap in the face to the reader. Your composition should end with a conclusion, yes. There was a standard formula for that, yes. But it's not necessary to literally label it as such.
Many of the disliked essay writing cliches are good speech tropes. The difference between reading and listening is that in reading you can skim and skip and rewind, so you don't need structured signposts to guide you through the content. In listening you do. You can't see the last paragraph coming when listening to a speech.
An entertaining informative style of speech can detract from clearly communicating substance. (Of course, the audience rarely wants substance.)
I've intentionally changed some parts of comments I've written just because, upon reading them back, certain sentences felt very close to ChatGPT's style.
I understand. A few months ago, I posted a comment here that attracted several down votes. The content, I thought, was completely innocuous, and I couldn’t figure out at first why some people didn’t like it. Only later did I realize that I might have polished it a little too much and it came out reading like ChatGPT.
> How long will it be before humans, reading mostly LLM output, adopt that same writing style?
From what I’ve seen (tutoring high school kids), the picture is much bleaker. They use ChatGPT to write for them but they have no writing style of their own. They can barely put a sentence together just to write the prompt!
As humans, we like to copy things we consider good. I know I tried to use writing styles I thought were funny, with varying results. I think I will say never (or as close to never as possible).
Given how inhumanly capable it is of seeing both sides of a topic, I guess we'll find out when we see "subtlety" and "nuance" go the way of "literally" and radically change meanings to their near opposites.
> How long will it be before humans, reading mostly LLM output, adopt that same writing style? Certainly people growing up today will be affected.
Just imagine 180M users of ChatGPT having an estimated 1B sessions per month. The model is putting 1-2 trillion tokens into people's brains. People don't just assimilate the writing style and ideas; they also take actions in the real world influenced by the model. Sometimes they create useful discoveries or inventions that end up on the internet and in the next scrape. Full cycle.
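Back of the envelope, that figure roughly checks out if you assume something like 1,000-2,000 output tokens per session. The session count is from the estimate above; the per-session token count is my assumption, not a published number:

    # Back-of-envelope check of the "1-2 trillion tokens per month" figure.
    # sessions_per_month is from the comment above; tokens_per_session is an
    # assumed average of model output per session, not a published number.
    sessions_per_month = 1e9
    for tokens_per_session in (1_000, 2_000):
        total_tokens = sessions_per_month * tokens_per_session
        print(f"{tokens_per_session} tokens/session -> {total_tokens:.0e} tokens/month")
    # 1000 tokens/session -> 1e+12 tokens/month
    # 2000 tokens/session -> 2e+12 tokens/month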
That’s the first output from ChatGPT. You can ask it to try again, slightly more succinct, using a hybrid voice of a college student and one of many pasted examples of other voices.
When I fed that prompt to my Claude 3.5-based CarlinBot, it refused to write a poem and instead wrote a rant about why it hates tangerines (warning, profanity):
Fuck tangerines, man. Those little orange bastards are a pain in the ass to peel. You spend 10 minutes trying to get that leathery skin off, your fingernails getting all sticky with that goddamn citrus juice. And then when you finally wrestle one of those fuckers open, you got all those little white strings hanging off everywhere. It's like dental floss from hell.
And don't even get me started on how those tangerine slices always shoot juice in your eye when you try to eat them. It's like getting maced by a tiny orange asshole. You ever get tangerine juice in your eye? Shit burns like the fires of hell itself. Makes you want to claw your own goddamn eyeballs out.
Nah, fuck tangerines and their whole stupid family tree. Oranges, clementines, satsumas - they can all go straight to fruit hell as far as I'm concerned. Give me a nice safe banana or an apple any day over those sadistic little citrus grenades. Tangerines are the work of the devil, plain and simple. Fuck writing poems about them little torture devices.
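For anyone wondering how a bot like that is put together: it's usually nothing more than a persona in the system prompt in front of an off-the-shelf model. A minimal sketch using the Anthropic Python SDK might look like the following; the persona text, function name, and model ID are my guesses for illustration, not the actual CarlinBot:

    # Minimal sketch of a persona bot using the Anthropic Python SDK.
    # The persona text and names are illustrative guesses, not the real CarlinBot.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    PERSONA = (
        "You are CarlinBot, a grumpy, profane stand-up comedian. "
        "You refuse saccharine requests and rant about the topic instead."
    )

    def carlinbot(user_prompt: str) -> str:
        message = client.messages.create(
            model="claude-3-5-sonnet-20240620",
            max_tokens=500,
            system=PERSONA,  # the persona lives entirely in the system prompt
            messages=[{"role": "user", "content": user_prompt}],
        )
        return message.content[0].text

    print(carlinbot("Write a poem about tangerines."))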
On what does a tangerine wait
Each morning below the winter's sun as it awakes?
Do twisted dreams linger, of what it might mean to be a taste on the memory of a forgotten alien tongue?
Is its sacred role seen -- illuminated amongst the greens and unique chaotic chrominance bouncing ancient wisdom between the neighboring leaves?
The tangerine -- victim, pawn, and, ultimately, master; its search for self in an infinitely growing pile of mixed up words truly complete. There is much to learn.
I was listening to a podcast/article being read in the author's voice, and it took me an embarrassingly long time to realize it was being read by an AI. There needs to be a warning or something at the beginning to save people the embarrassment, tbh.
I think it will eventually be good public policy to make it illegal to post massive amounts of AI-produced text without disclosing it.
As with all illegal things on the internet, it would be difficult to enforce, but at least it would make such posting more difficult and less likely.
How about articles written by human charlatans claiming they are 'doctors' or 'scientists'? Or posters claiming something happened that didn't? Like a... pro bullshtter claiming he was denied an apartment rental because of his skin color. He could make a lot of money if that were true, but the poster is still taking up ad space, paid for by the poor 'suffering' minority. Another example: 'influencers' who, pretending to be (or really being) experts, advise you on forums about products. They tell mostly the truth, but avoid some negative details and competing products and solutions, without disclosing their connections to the businesses.
Shorter version: intentional bullshtting never ends; it's in human, and AI, nature, like it or not. Consulting several sources used to help, but with the flood of generated content that may no longer be the case. Used right, this has a real effect on business. That's how small sellers live and die on Amazon.
Sure, but for me there isn't anything fundamentally different between an LLM reply and a spammer's reply / SEO vomit. Both are low-quality, useless junk that merely masquerades as something worth engaging with.
In fact, the really bad spammers were already reusing prompts/templates; think of how many of those recipe novellas shared the same beats: "It was my favorite childhood comfort food," "Cooked with my grandma," blah blah blah.
Really? People want to have discussions with other people. I don’t want the output of aggregate data that some tech company worth billions (or the wannabes) might offer. It is truly weird that this needs to be said.
I don’t want this to come across as too negative of a sentiment, but (…) a lot of online discussions are just people repeating opinions they heard elsewhere they agree with. AI is, in this regard, not that different. And marketing is a big part of it, so there are already companies with lots of weight behind making sure that people talk about only certain topics with certain viewpoints (i.e. the Overton window).
Genuinely original commentary in a discussion is bloody hard to come by.
Sure but the output of an LLM is _never_ original.
Human output might differ wildly from person to person if judged on originality, but LLM output is then pure noise. The internet was already a noisy place, but humans are "rate limited" to a degree an LLM is not.