
This writing feels so strongly LLM flavored. It's too bad, since I've really liked Alexander Mordvintsev's other work.



Are you sure you aren't just falling into the "it's all llm" trap? A lot of common writing styles are similar, and the most common ones are what LLMs imitate. I'm often accused of llm writing. I don't publish llm text because I think it is a social harm, so it's pretty demoralising to have people call my writing out as "llm slop". OTOH, I have a few books published and people seem to find them handy, so there's that.


Don't take it too hard. I've seen someone accused of using an LLM to write something because they correctly used an Oxford comma. It's definitely the trap.


Yup, I independently noticed passages with phrases and word choice mimicking llms. Certainly just used for assistance though, the writing is too good overall.


Which portion of the text gave you that impression?


> To answer this, we'll start by attacking Conway's Game of Life - perhaps the most iconic cellular automata, having captivated researchers for decades

> At the heart of this project lies...

> his powerful paradigm, pioneered by Mordvintsev et al., represents a fundamental shift in...

(Not only is this clearly LLM-style, I doubt someone working in a group w/ Mordvintsev would write this)

> Traditional cellular automata have long captivated...

> In the first stage, each cell perceives its environment. Think of it as a cell sensing the world around it.

> To do this, it uses Sobel filters, mathematical tools designed to numerically approximate spatial gradients

Mathematical tools??? This is a deep learning paper my guy.
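(For what it's worth, the "perception via Sobel filters" step the quoted passage is describing really is just two fixed 3x3 stencils. A minimal NumPy sketch of the idea, not the article's actual code:)

```python
import numpy as np

# Sobel kernels: finite-difference stencils that approximate the
# x and y spatial gradients of a 2D field.
SOBEL_X = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
SOBEL_Y = SOBEL_X.T

def conv2(state, kernel):
    """3x3 cross-correlation with wrap-around (toroidal) boundaries."""
    out = np.zeros_like(state)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * np.roll(state, (1 - i, 1 - j), axis=(0, 1))
    return out

def perceive(state):
    """Stack each cell's state with its local gradients: the 'each cell
    senses the world around it' step, before the network update."""
    return np.stack([state, conv2(state, SOBEL_X), conv2(state, SOBEL_Y)])

print(perceive(np.random.rand(8, 8)).shape)  # (3, 8, 8)
```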

> Next, the neural network steps in.

...

And it just keeps going. If you ask ChatGPT or Claude to write an essay for you, this is the style you get. I suffered through it b/c again, I really like Mordvintsev's work and have been following this line of research for a while, but it feels pretty rude to make people read this.


The reason LLMs write like that is, unsurprisingly, that some people write like that. In fact many of them do - it's not uncommon.

If you have proof - say, token logits that are statistically significant for LLM output - that would be appreciated; otherwise it's just arguing over style.
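(The kind of likelihood test being asked for looks roughly like this. This is a toy unigram stand-in, not a real detector - actual detectors score text with a language model's token logits the same way, flagging text whose likelihood is suspiciously high. The function name and corpus here are made up for illustration:)

```python
import math
from collections import Counter

def avg_log_prob(text, reference):
    """Average per-word log-probability of `text` under a unigram model
    fit on `reference`, with Laplace smoothing so unseen words don't
    zero out the score. Higher = more 'typical' of the reference."""
    counts = Counter(reference.split())
    total = sum(counts.values())
    vocab = len(counts)
    words = text.split()
    return sum(math.log((counts[w] + 1) / (total + vocab + 1))
               for w in words) / len(words)

common = "the model the data the results the model the data"
print(avg_log_prob("the model", common) > avg_log_prob("zebra xylophone", common))  # True
```

The statistical claim would then need a significance test over many samples, not a single score - which is exactly why one-off style impressions don't settle it.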


Yeah, it’s disheartening that people often think my writing (most of it predates gpt3) is llm, and some of my favourite writers also fall under this wet blanket. LLMs just copy the most common writing style, so now if you write in a common way you are “llm”.


I’ve also had my writing misidentified as being LLM-produced on multiple occasions in the last month. Personally, I don’t really care if some writing is generated by AI if the contents contain solid arguments and reasoning, but when you haven’t used generative AI in the production of something it’s a weird claim to respond to.

Before GPT3 existed, I often received positive feedback about my writing and now it’s quite the opposite.

I’m not sure whether these accusations of AI generation are from genuine belief (and overconfidence) or some bizarre ploy for standing/internet points. Usually these claims of detecting AI generation get bolstered by others who also claim to be more observant than the average person. You can know they’re wrong in cases where you wrote something yourself but it’s not really provable.


I've read a _lot_ of deep learning papers, and this is extremely atypical. I agree with you that if there were any sort of serious implications then it'd be important to establish proof, but in the case of griping on a forum I think the standard of evidence is much lower.


> in the case of griping on a forum I think the standard of evidence is much lower.

Uh, no. Human “slop” is no better than AI slop.

There is no good purpose for a constant hum of predictable, poorly supported "oh that's LLM" gripes, if we care about the quality of a forum.


Research papers are written like this, and LLMs are trained on arxiv.


A lot of these are very close to stuff I have written. Not saying this piece did or didn't get a pass through an LLM, I have no idea, but it really makes me wonder how many people accuse me of using an LLM when it's just how I write.

I feel awful for anyone going to school now, or who will be in the future. I probably would have been kicked out, seeing how easily people say "LLM" whenever they read some common phrasing, a particular word, the structure of the writing, etc.


Ran the entire text through Claude 3.7 to evaluate style. Anyone on HN can do the same.

I’d rather hear about the content instead of this meta analysis on editorial services. Writers used to have professional copy editors with wicked fine-tipped green pens. Now we expect more incompetence from humans. Let me add some more typos to this comment.



