This is a genre of article I find particularly annoying. Instead of writing an essay on why he personally thinks GPT-5 is bad based on his own analysis, the author just gathers up a bunch of social media reactions and tells us about them, characterizing every criticism as “devastating” or a “slam”, and then hopes that the combined weight of these overtorqued summaries will convince us to see things his way.
It’s too slanted to be journalism, yet not original enough to be analysis.
For some reason AI seems to bring out articles that fundamentally lack curiosity, opting instead for gleeful mockery and scorn. I like AI, but I'll happily read thoughtful articles from people who disagree. Not this, though: this article has no value other than dunking on the opposition.
I tend to think HN's moderation is OK, but these sorts of low-curiosity articles need to be off the front page.
> For some reason AI seems to bring out articles that fundamentally lack curiosity, opting instead for gleeful mockery and scorn
I think it's broader than AI; it applies to all tech. It all started in 2016 after it was deemed that tech, especially social media, had helped sway the election. Since then a lot of things became political that weren't in the past and tech got swept up w/ that. And unfortunately AI has its haters despite the fact that it's objectively the fastest-growing, most exciting technology of the last 50 years. Instead of engaging with it, they're dissecting some CEO's shitposts.
Fast forward to today and pretty much everything is political. Take this banger from the NY Times:
> Mr. Kennedy has singled out Froot Loops as an example of a product with too many ingredients. In an interview with MSNBC on Nov. 6, he questioned the overall ingredient count: “Why do we have Froot Loops in this country that have 18 or 19 ingredients and you go to Canada and it has two or three?” Mr. Kennedy asked.
> He was wrong on the ingredient count, they are roughly the same. But the Canadian version does have natural colorings made from blueberries and carrots while the U.S. product contains red dye 40, yellow 5 and blue 1 as well as Butylated hydroxytoluene, or BHT, a lab-made chemical that is used “for freshness,” according to the ingredient label.
I think you are missing the forest for the trees here.
> It all started in 2016 after it was deemed that tech, especially social media, had helped sway the election. Since then a lot of things became political that weren't in the past and tech got swept up w/ that
The 2016 election was a symptom of broader societal changes, and yeah, I'd also say it exhibited a new level of psychological manipulation in election campaigns. But pointing to the election being "deemed" influenced by technology and media (sure it was, why not?) as the cause of political division seems very far-fetched. Regarding the US healthcare politics farce, I don't understand your point or how it relates to the beginning of your comment.
Political division and propaganda inciting outrage are flourishing, yes. Not because of what you describe about the 2016 election, though, IMO. What's the connection? Do you mean that if nobody had assessed social media campaigns after that election, politics would be more honest and fact-based? Why? And what did you want to say with the NY Times article about the US secretary of health and American junk food / cereals?
> Political division and propaganda inciting outrage are flourishing, yes.
My concern is that there is no antidote for this on the horizon; it's just more and more stupidity getting traction all the time. You have to put a lot of faith in common sense to stay optimistic.
I agree with this; Neil Postman's "Amusing Ourselves to Death" is still a good read in 2025.
The only antidotes I can imagine are honesty and real community. Make of that what you will; it should be obvious by now that global cut-throat capitalism does not lead to democracy, or to efficient resource usage (externalization...), or to equality.
Honestly, my concern is WW3 fueled by propaganda. The ONLY reason I think this might not happen is that people are too amused to even bother fighting a war now. I'm not joking.
I mean, what's political about having former NSA heads on your "exciting technology" board?
Or what's political about lining up together as the front row at the despot in chief's inauguration?
And what's so political about lobbying to and sequestering large amounts of public funds for your personal manufactured consent machine?
These things are literally software that runs on technology developed in the last 50 years, but according to your clearly apolitical, unbiased, absolutely thoughtful, well-reasoned, fully researched insight, they are in fact "the most exciting technology in the last 50 years".
The Snowden scandal? Cambridge Analytica? YouTube's ContentId in 2009? Microsoft's behaviour with IE?
A lot of these issues are not general to infocoms either, but specific to platforms, to USian (/Russian/Chinese) companies, to companies grown too big, and to a failure of antitrust...
Why should the excitement of a technology have anything to do with my critical view of it? Are we toddlers playing with toys, or are we trying to make a better world here?
I don’t like AI and I think this type of article is very boring. Imagine having one of the most interesting technological developments of the last 50 years unfolding before your eyes and resorting to reposting tweet fragments…
I think this is the fundamental trend of all "commentary" in the digital age.
Thoughtful, nuanced takes simply cannot generate the same audience and velocity, and by the time you write something of actual substance the moment is gone and the hyper-narrative has moved on.
This is well earned by the likes of OpenAI, which is trying to convince everyone it needs trillions of dollars to build fabs for super-genius AIs. These super-genius AIs will replace everyone (except billionaires) and act as magic money printers (for billionaires).
Meanwhile their super-genius precursor AIs make up shit and can't count letters in words, all while being laughably sycophantic.
There's no need to defend poor innocent megacorps trying to usher in a techno-feudal dystopia.
This is exactly what I'm talking about. The immediate issue with GPT-5's launch was that some people were getting rerouted to 4o, and there was no control over which version of GPT-5 you were getting. Whatever he's talking to there looks nothing like a reasoning model.
I said clearly and explicitly in my comment that I am happy to read thoughtful anti-AI material. That you disregarded this to write your comment says more about your ability to "form better opinions" than mine.
This seems wrong -- I mean, it's certainly possible that such things happen, but anecdotally, most of my friends, university-educated, are various shades of anti-AI, especially those in artistic, writing, humanities, and musical fields. I'm not that anti-AI myself and have discussed the issue with them at length.
But Marcus is decidedly NOT anti-AI, he's just against the idea that LLMs are the be-all end-all of AI. He may be right about that, or he may be wrong, but describing him as anti-AI is just wrong.
What I'm saying is it's easy for folks like him to fall into the trap of writing for an undiscriminating audience because their feedback is so positive and strong.
He was (and I think still is) a believer in psychological nativism, which I think is an intellectual dead end. You can't really believe in LLMs while also being a nativist.
But nobody really cares about the axe he's grinding (nativism), yet when he writes about the "slopocalypse" he gets positive feedback.
Is there any single sentence in my comment that you think is factually incorrect as written?
And do you not see that it's a common form of intellectual dishonesty to pretend that a "there exists" statement is a "for all" statement? Three people have now tried to do this without adding anything at all to the discussion.
Just copy the arguments you see a lot into a spreadsheet and track them.
There are fewer than 10 that come up repeatedly; it's not hard to do (rough code sketch below).
Ones I see on HN frequently are the "slop" critique, the claim that AI produces inferior content, and the claim that customers hate it and it's being forced on them. These all go back at least to the Luddite movement and have been used in every anti-tech movement I've looked into. I'm sure you're capable of tracking the other talking points.
There are other ones that don't go back to the Luddites but which are nonsensical, like the critique that LLMs are statistical.
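For what it's worth, here's a rough Python sketch of that tallying idea, in place of the spreadsheet. The comment list, argument categories, and keywords are all illustrative assumptions of mine, not a real taxonomy:

    # A rough sketch: tally recurring anti-AI arguments across a list
    # of comments. Categories and keywords are illustrative only.
    from collections import Counter

    ARGUMENT_KEYWORDS = {
        "slop": ["slop", "slopocalypse"],
        "inferior content": ["inferior", "low quality", "soulless"],
        "forced on users": ["forced on", "nobody asked for"],
        "just statistics": ["stochastic parrot", "fancy autocomplete"],
    }

    def tally_arguments(comments):
        """Count how many comments contain each recurring argument."""
        counts = Counter()
        for comment in comments:
            text = comment.lower()
            for label, keywords in ARGUMENT_KEYWORDS.items():
                if any(k in text for k in keywords):
                    counts[label] += 1
        return counts

    comments = [
        "More AI slop forced on users who never wanted it.",
        "It's a stochastic parrot, fancy autocomplete at best.",
    ]
    print(tally_arguments(comments))
    # Counter({'slop': 1, 'forced on users': 1, 'just statistics': 1})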
You said people underestimate the amount of astroturfing, clearly implying there is more of it than people think.
You seemed to imply there was a small amount of genuine anti-AI posting.
You then said the posts came in identifiable waves, and followed that by explaining how people organize on Discord to post such things.
All of that added up sounds to me like you are implying the vast majority of anti-AI sentiment is organized astroturfing. Yeah, you never said every single one; I generalized on that front.
You say that’s not what you meant. Sorry, that was how I read your post. It sounded to me like a blatant dismissal of all critiques as fake or insincere.
People tend to think everything is organic. Whereas in reality there are forums and channels dedicated to raiding parties. And that's just the human participants, not including any bot activity or troll farm activity that's coordinated off of public forums.
That does not mean that all content is inorganic. In any culture war or wedge issue or agitprop campaign you seed with influencers and rely on amplification by ordinary people. But the influencers frame the debate, decide what the priorities are, etc.
It takes time and money to coordinate everyone's opinion on something. The natural tendency is toward entropy, and fighting against that requires resources. So when large numbers of people believe the same thing and repeat the same lines, that's a candidate for inorganic content. You can then investigate further and decide whether it's a coincidence or not.
Sometimes there's just a trend that catches on. But people also underestimate how much work and coordination goes into making something catch on.
This is a big part of why influencers and advertisers try to avoid the impression that they are paid to pretend to like something and why there are regulations requiring that you disclose paid content. People are easily fooled into thinking it's all just happening naturally and serendipitously.
It's a blog post about whether GPT-5 lived up to the hype and how it is being received, which is a totally legitimate thing to blog about. This is Gary Marcus's blog, not BBC coverage; of course it's slanted toward the opinion he is trying to express.
Yeah which is exactly what the post you’re responding to is commenting on.
It’s a classic HN comment: asking for nuance while discrediting Gary. It claims Gary is always following classic mob mentality, so of course it’s not slanted at all, merely commenting on the accuracy of the post.
So, ironically, you’re saying Gary’s shit is supposed to be that way while criticizing the HN comment for objecting to it; and now I’m criticizing you for criticizing that comment, because HN comments ARE supposed to be the opposite of Gary’s bullshit opinions.
I expect to read better stuff on HN, not this type of biased social-media violence and character takedowns.
Sure, that's fine, but the question is whether or not that's interesting to the HN crowd. Apparently it is, as it made it to the front page. But I agree with GP's criticism of the article; if I wanted to know what reddit thought about GPT-5, I'd go to reddit. I don't need to read an article filled with gleeful vitriol. I gave up after a couple paragraphs.
That's not at all what Marcus is saying. He admits that it does remarkably well, but says (1) it's still not trustworthy; and (2) this version is not much better than the previous one. Both points are in support of his claim that just scaling isn't ever going to lead to general AI.
I agree. A better article would dive into the economics of why they didn't release the model that won gold at the 2025 International Math Olympiad. The answer there is (probably) that it cost a million dollars in inference compute just to take the test. If you follow that line of reasoning, they're onto something, possibly something that is AGI, but it's still much too expensive to commercialize.
If AI gains now come from spending OOMs more on inference than on training compute, it means we're in slow-takeoff-istan, and they're going to need to keep the hype train going for a long time to keep investment up.
Can they get there? Is it a matter of waiting out the clock on Moore's law? Or are they on the sigmoid curve with inference-based gains as well?
That's the question of our time and it's completely ignored.
I think it's a broad problem across every aspect of life: it has gotten increasingly difficult to find genuine takes. Most people online seem to be relaying a version of someone else's take, and we end up with unnecessarily loud, high-volume, shallow content.
To be fair, Gary Marcus pioneered the "LLMs will never make it" genre of complaining. Everyone else is derivative [1]. Let the man have his victory lap. He's been losing arguments for 5 years straight at this point.
[1] Due credit to Yann for his "LLMs will stop scaling, energy-based methods are the way forward" obsession.
I mean, his conceptual arguments are pretty good: ML/LLMs/statistical learning methods have real problems with out-of-distribution inputs and samples, and there's really no way to work around this (that I know of).
I get what you mean, though; even if he's right, it must've been pretty annoying hearing the same thing all the time.
100% agree. I feel like this is a symptom of Dead Internet Theory as well: as a negative take starts to spiral out of control, we get an absolute deluge of repurposed, directionally negative sound bites, and it honestly feels like bot canvassing.
I think, from the author's perspective, LLM hype has been mostly the exact same thing you're accusing him of doing. People with very little technical background claiming AGI is near, and all these CEOs pushing nonsense narratives, are getting old. People are blindly trusting these figures and offloading all their thinking to a sophisticated stochastic machine. Is it useful? Yes. Super cool? Yes. Some god-like power that brings us to AGI? Probably not. I can't blame him. I am sick of the hype. Grifters are coming out of the woodwork in a field with too many grifters to begin with. All these AI/LLM companies are high on their own supply, and it's getting old.
> That’s exactly what it means to hit a wall, and exactly the particular set of obstacles I described in my most notorious (and prescient) paper, in 2022. Real progress on some dimensions, but stuck in place on others.
The author includes their personal experience; I recommend reading to the end.
I did read to the end before commenting. The author alludes to a paper they wrote 3 years ago while self-importantly complimenting themselves on how good it was (always a red flag). They don't really say much beyond that in the post.
I don't think the complaint is ultimately against the post; if someone wants to post whatever on their blog, that is fine. The complaint is more targeted at the people upvoting it because ... it is hard to speculate what their motivations are, but their ability to recognize a low-content article when they see one is limited.
Any journalism (or anything that resembles it) which contains the words "devastating", "slam", or the many equivalents is garbage. Unless it's about a natural disaster or professional wrestling.
99% of content about AI nowadays is smug bullshitting with no real value. Is that new?
I'm eagerly awaiting more insightful analyses about how AI will create a perpetuum mobile of wealth for all of humanity, independent from natural resource consumption and human greed. It seems so logical!
Even some LessWrong discussions, however confused, are more insightful than most blog posts by AI company customers.
I will never understand the "bad diagram" kind of critique. Yes, maybe it can't build and label a perfect image of a bicycle, but could it list and explain the major components of a bike? Schematics are a whole different skill, and do we need to remind everyone what the L in LLM stands for?
Listing and explaining is essentially repeating what someone else has said somewhere. Labeling a schematic requires understanding what you're saying (or copy-pasting a schematic, so I guess we can be happy that GPT-5 doesn't do that). No one who actually understood the function of the major components would mislabel the schematic like that, unless they were blind.
It's true if you expect general intelligence, and expecting general intelligence is forgivable given the hype. But there's no real reason (other than hype) we should have expected a language model to create a perfect bicycle schematic. And I'm not sure that imagery is actually a required format for demonstrating intelligence.
I bet it could generate assembly instructions and list each part and help diagnose or tune. And that's remarkable and demonstrates enough fake understanding to be useful.
"fake understanding" is exactly the right term. And the image is just fine, it's the labeling that's bonkers. What it illustrates is that the LLM can repeat words, but it has no idea what it's saying. Whereas any pre-1817 engineer, reading descriptions of bicycles (which GPT-5 obviously has access to), could easily have labeled a picture of one. (1817 is the date the first real bicycle is believed to have been invented, but driven by the rider's feet on the ground. Bicycles with chain drives weren't invented until decades later. But an engineer would have understood the principle.)
I did get a strong sense of jilted nerd: Why didn't they give ME those billions in research funding? Nobody sees that I am the smartest boy because they're just a bunch of dopes. Opinion people are something I think we could all do with fewer of.
We as a community have decided to absolutely drench the front page with low-effort hot takes by non-practitioners about one of many areas of modern neural network ... progress.
This low-effort hot take is every bit as "valid" as all the nepobaby vibecode hype garbage: we decided to do the AI thing. This is the AI thing.
What's your point? This one is on the critical side of the argument that was stupid in toto to begin with?