Hacker News | foldr's comments

>Painstaking scientific testing by a team of international experts has been able to debunk a rumour on whether Hitler had Jewish ancestry (he didn't)

Isn't this Elizabeth Warren and the Cherokee all over again? Jewishness isn't an identity based on genetic tests. The results might show that it's relatively likely or unlikely that some of his ancestors were Jewish, but they can't give any definitive answer on that point.

I mean, ok, there is stuff like the following, but it's somewhat controversial, no? https://www.theguardian.com/lifeandstyle/2019/jun/12/what-do...

I've been running about the place defending the BBC in other comments, but maybe they could manage not to uncritically accept racial essentialism in an article about Hitler.


Cursor’s killer feature is that you can use it to edit diffs or restore a known good version? That is basic version control functionality. I understand what you’re referring to (I use Zed, which also has an interface for partially applying AI-generated code changes), but it’s very weird to me that this basic functionality would be considered some kind of competitive advantage.

Same for VS Code Copilot. It's a basic feature at this point.

I guess people just don't try other tools often.


No, this works fine with a single Zed instance.

I stand corrected.

> The riot had already started before he gave his speech

Sort of. The riot started before the speech finished, but about 50 minutes after the speech began. Here’s the relevant timeline from the Wikipedia article:

>At noon, Trump began an over one-hour speech at the Ellipse, encouraging protesters to march to the U.S. Capitol. At 12:49 p.m., Capitol Police responded to reports of an explosive device, later identified as a pipe bomb. At 12:53 p.m., eighteen minutes before Trump's speech ended, rioters overran police on the west perimeter of restricted Capitol grounds.


Here is the BBC news website:

https://bbc.co.uk/news

It is not 90% American politics.


Even in this hypothetical scenario, you’re radically rearchitecting your entire product to save “months of work” and the cost of some beefier dev machines. How can that be rational?

Neither was Jeremy Corbyn, but he would have been Prime Minister if enough people had voted for him. Say what you will about him (I am not a fan), but he is not “establishment approved”.

That's why he "didn't pass all the tests" and was undermined by the media, by his own party, and by various establishment interests.

You said:

> because people were only given a couple of establishment approved choices

Unless you think Jeremy Corbyn was establishment approved (!), this is clearly not true.

I’m not really sure what to make of your latest comment. Is your preferred world one where the media never criticize your favored politicians and the left wing of the Labour Party ruthlessly crushes internal dissent? If that’s what it would have taken to make Corbyn Prime Minister, then count me out.


The accusations against the BBC in this case are extremely weak, though. The main thing is the somewhat misleading edit of Trump’s speech in one documentary (which was such a big deal that no-one noticed till a year later), and then some general grumbling about too many positive portrayals of trans people. This latter complaint would be bizarre if it was made with reference to any other group of people. (Imagine if the BBC were accused of showing too many positive portrayals of Irish people and was then required to broadcast programs where people who hate the Irish were also given space to air their views.) Finally there are some complaints about coverage of the Israel-Palestine conflict, which is of course impossible for any news organization to cover without enraging someone or other.

This paints a pretty awful picture of the BBC in other areas: https://archive.is/mBcPq

So does this:

https://cfmm.org.uk/bbc-on-gaza-israel-one-story-double-stan...

It’s quite a fun game to match every accusation of BBC bias with its exact opposite. That’s not to say that BBC reporting is perfect and that no individual criticisms are valid. But the BBC simply could not report on Israel/Palestine at all without being accused of bias in all directions. Show me the media organization whose reporting on this issue is agreed to be unbiased by all relevant parties.

(Also, it’s a little ironic to link to a Telegraph article, of all things, while complaining about media bias. On anyone’s analysis, the Telegraph is a vastly more partisan media organization than the BBC.)


If you read the article, it lists things that the BBC itself concluded and cites specific examples. Ignore who wrote the article, or even the article itself, and just look at what the BBC itself found. Also note that it's about BBC Arabic in comparison to the regular BBC.

I’m sure the BBC gets lots of stuff wrong in its news coverage, just like any other news organization you could name. But if we’re talking about systemic bias in reporting on the Israel/Palestine conflict, it’s worth noting that “both sides” (much as I hate this expression) frequently claim that the BBC is biased against them. The whole issue is so contentious that it’s highly unlikely that any major news organization will escape criticisms of bias. Again, I wonder if you can name a different major news organization that regularly reports on Israel/Palestine and that nobody considers to have any bias in its reporting.

I agree with you that they all have bias. However, as the article points out, this is egregious bias by its own sister publication. So it is unique in that it all falls under the same roof and it's being called out by itself.

That’s unique in a good way, no? Would it be better if the BBC were incapable of self-critique?

I’ll give them +10 points for hiring somebody to measure their own accuracy, and −100 points for ignoring their reports and broadcasting deliberate lies anyway.

Who are you comparing the BBC to? I can’t think of a news service of comparable scope that clearly has a better track record when it comes to accuracy. The edit of Trump’s speech in that one documentary was indeed misleading, but it’s being blown out of all proportion here.

Meanwhile, the President who is a stickler for news accuracy and wants to sue the BBC for a billion dollars is busy broadcasting libelous nonsense to his Twitter followers on a daily basis (e.g. the absurd claim that Barack Obama has been receiving millions in Obamacare “royalties”).

The hypocrisy would be astounding if we weren’t already so used to it.


Excusing the BBC for lying just because other people are lying too doesn’t really work.

I'm not excusing the BBC for broadcasting a documentary with a misleading edit of Trump's speech. They've rightly admitted their error in that respect.

This is a politically motivated attack on the BBC by people who, as referenced above, don't care a jot for accuracy in news reporting. By combing through the BBC's enormous output they have, unsurprisingly enough, managed to find one or two legitimate grievances.


> just like any other news organization you could name

Excuse.

> Show me the media organization whose reporting on this issue is agreed to be unbiased by all relevant parties.

Excuse.

> it’s being blown out of all proportion

Excuse.

> the President […] is busy broadcasting libelous nonsense to his Twitter followers

Excuse.

> managed to find one or two legitimate grievances.

Wow, ok. Maybe you didn’t actually read the report?

https://www.telegraph.co.uk/news/2025/11/06/read-devastating...

The BBC has been pushing specific viewpoints for years, and burying any story or article that might contradict them or offer a competing viewpoint. A quote:

    As virtually all shows had lost their own reporters, programme editors had to make requests to News if they wanted a correspondent to cover a story. I was told that time and time again the LGBTQ desk staffers would decline to cover any story raising difficult questions about the trans-debate.
    
    The allegation made to me was stark: that the desk had been captured by a small group of people promoting the Stonewall view of the debate and keeping other perspectives off-air. Individual programmes had come to lack their own reporters as a counterweight.
    
    What I was told chimed with what I saw for myself on BBC Online - that stories raising difficult questions about the ‘trans agenda’ were ignored even if they had been widely taken up and discussed across other media outlets.
    
    There was also a constant drip-feed of one-sided stories, usually news features, celebrating the trans experience without adequate balance or objectivity.

You see? The BBC isn’t trustworthy. It’s literally not worth trusting anything they say. The Trump thing is just the tip of the iceberg. It’s the big obvious lie that everyone can see because it’s so easy to find the truth.

If the people who have resigned could point to even one action that they’ve taken to fix the BBC’s culture of unethical behavior then they would not have had to resign.


I addressed the other claims in the report in my original comment. Anyone who writes a report complaining about a ‘trans agenda’ clearly has more of an axe to grind than anyone at the BBC. Your last comment is bordering on a rant (e.g. the far out claim that “it’s literally not worth trusting anything that [the BBC] says”), so I will leave it here.

I think you’re ignoring the problem. Regardless of anyone’s opinion on the matter, the BBC’s strategy of supporting one viewpoint and burying another is unethical. If you want people to trust the media then the media must report the facts even when the individuals within the media dislike them. If they don’t do that then they’re not trustworthy.

The BBC doesn’t have a “strategy” of doing that, but any news service will have a detectable lean towards some viewpoints and away from others. When it comes to Israel/Palestine or trans rights, there is little general agreement as to what the facts are, so you cannot please everyone with some simplistic notion of “purely factual” reporting.

In fact the BBC, following the general transphobic climate in the UK, has given a lot of airtime to people trying to create a moral panic around trans people. They’re using virtually identical tactics to those used to stir up panic about gays in the 80s. This won’t look good with hindsight any more than the 80s and 90s “debate” about homosexuality does now. (Will the Telegraph demand that the BBC give airtime to homophobes in the interests of fairness and balance? No, they have quietly forgotten about that issue, and moved on to the next vulnerable minority group.)

In short, you’re holding the BBC to a standard that you don’t apply to any comparable broadcaster.


> One of the major beliefs of this view is that LLMs are essentially impossible because there's not enough information in language to learn it unless you have a special purpose language-learning module built into the brain by evolution. This is Chomsky's "poverty of the stimulus" argument

The argument is that there is not enough information available to a child to do this. So even if we grant the dubious premise that LLMs have learned to speak languages in a manner analogous to humans, they are not a counterexample to Chomsky’s poverty of the stimulus argument because they have been trained on a vast array of linguistic data that is not available within a single human childhood.

If you want to better understand Chomsky’s position, it’s easiest to do so in relation to other animals. Why are other intelligent animals not able to learn human languages? The rather unsurprising answer, in Chomsky’s view, is that humans have a built-in linguistic capacity, rather in the way that e.g. bats have a built-in capacity for echolocation. The claim that bats have a built-in capacity for echolocation is not refuted by the existence of sonar. Likewise, our ability to construct machines that mimic some aspects of human linguistic capacity does not automatically refute the hypothesis that this is a specific human capacity absent in other animals.

Imagine if sonar engineers were constantly shitting on chiropterologists because their silly theory of bats having evolved a capacity for echolocation has now been refuted by human-constructed sonar arrays. The argument makes so little sense that it’s difficult to even imagine the scenario. But the argument against Chomsky from LLMs doesn’t really make any more sense, on reflection.

Chomsky hasn’t helped his case in recent years by tacking his name on some dumb articles about LLMs that he didn’t actually write. (A warning to us all that retirement is a good idea.) So I don’t blame people who are excited about LLMs for seeing him as a bit of a rube, but the supposed conflict between Chomsky and LLMs is entirely artificial. Chomsky is (was) trying to do cognitive science. People experimenting with LLMs are not, on the other hand, making any serious effort to study how humans acquire language, and so have very little of substance to argue with Chomsky about. They are just taking opportunistic pot shots at a Big Name because it’s a good way to get attention.

For the record, Chomsky himself has never made any very specific claims about a dedicated module in the brain or about the evolutionary origins of human linguistic capacity (except for some skeptical comments on it being a gradual adaptation).


There was a large literature on language acquisition prior to the invention of LLMs that showed that Chomsky's argument likely wasn't correct. This is in addition to the fact that he significantly underestimated the amount of linguistic input children receive.

There's too much to hash out here on HN. You can try to save the LAD argument by a strategic retreat, but it's been in retreat for decades now and keeps losing ground. It's clear that neural networks can learn the rules of grammar without specifically baking grammatical hierarchies into the network. You can retreat to saying it's about setting hyperparameters or priors, but even the evidence for that is marginal.

There are certainly features of the brain that make language learning easier (such as size) but POS doesn't really provide anything to guide research and is mostly of historical interest now. It's a claim that something is impossible, which is a strong claim. And the evidence for it is poor. It's not clear it would have any adherents if it were proposed anew today. And this is all before LLMs enter the picture.

The research from neuroscience and learning theory and machine learning etc have all pointed toward a view of the brain as significantly different from the psychological nativism view. When many prominent results in the nativist camp failed to replicate during the replicability crisis, most big name researchers pivoted to other fields. Marcus is one of the remaining vocal holdouts for nativism. And his beliefs about AI align very closely with all the old debates about statistical learning models vs symbolic manipulation etc.

> Why are other intelligent animals not able to learn human languages?

Animals and plants do communicate with each other in structured ways. Animals can learn to communicate with humans. This is one of those areas where you can choose to try to see the continuities with communication, or you can try to define a vision of language that isolates human language as completely set apart. I think human language is more like an outlier in complexity relative to the communication animals do than a fundamentally different thing. In that sense there's not much of a mystery, given brain size, number of neurons, sociality, etc.

> The argument is that there is not enough information available to a child to do this

Yes, but children are the humans who learn language in the typical case. So you can replace "child" with "human", especially with all the hedging I did in my first post (e.g. "essentially"). As I said above, Chomsky is known to have underestimated the amount of input babies receive. Babies hear language from the moment they're born until they learn to speak. Also, as a parent, I often correct grammatical and other mistakes as toddlers learn to talk. Other parents do the same. Part of the POS argument is based on the premise that children don't get their grammar corrected often.


Yes, lots of people have argued that Chomsky is wrong about various things for various reasons and at various times. The point of my post was not to get into all of those historical arguments, but to point out that recent developments in LLMs are largely irrelevant. But I'll briefly respond to some of your broader points.

You mention 'neural networks' learning rules of grammar. Again, this is relevant to Chomsky's argument only to the extent that such devices do so on the basis of the kind of data available to a child. Here you implicitly reference a body of research that's largely non-existent. Where are the papers showing that neural networks can learn, say, ECP effects, ACD, restrictions on possible scope interpretations, etc. etc., on the basis of a realistic child linguistic corpus?

Your 'continuities' argument cuts both ways. There are continuities between human perception and bat perception and between bat communication and human communication; but we still can't echolocate, and bats still can't hold conversations. The specifics matter here. Is bat echolocation just a more complex variant of my very slight ability to sense whether I'm in an enclosed location or an outdoor space when I have my eyes closed? And is the explanation for why bats but not humans have this ability that bat cognition is just more sophisticated than human cognition? I'm sure neural networks can be trained to do echolocation too. Humans can train an artificial network to do echolocation, therefore it can't be a species-specific capacity of bats. << This seems like a terrible argument, no?

Poverty of the stimulus arguments don't really depend at all on the assumption that parents don't correct children, or that children ignore such corrections. If you look at specific examples of the kind of grammatical rules that tend to interest generative linguists (e.g. ACD, ECP effects, ...) then parents don't even know about any of these, and certainly aren't correcting their children on them.

Chomsky has never made any specific estimate of the 'amount' of input that babies receive, so he certainly can't be known to have underestimated it. Poverty of the stimulus arguments are at heart not quantitative but rather are based on the assumption that certain specific kinds of data are not likely to be available in a child's input. This assumption has been validated by experimental and corpus studies (e.g. https://sites.socsci.uci.edu/~lpearl/courses/readings/LidzWa...)

> Babies hear language from the moment they're born until they learn to speak

I can assure you that this insight is not lost on anyone who works on child language acquisition :)


A realistic child linguistic corpus for a 2-year-old starting to form sentences would be about 15 million words heard over the course of their life so far. Converted to LM units, that's maybe about 20 million tokens. There are small language models trained on sets that small.

Some LMs are specifically trained on child-focused small corpora in the 10 million range, e.g. BabyLM: https://babylm.github.io.
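
For concreteness, here's a rough sketch of what training a model on a corpus of that size might look like. This is not BabyLM's actual pipeline; the corpus file, model size, and hyperparameters are all illustrative assumptions.

    # Minimal sketch: train a small GPT-2-style model on a ~10M-word corpus.
    # "child_directed_corpus.txt" is a hypothetical plain-text file of
    # child-directed speech; every hyperparameter here is illustrative.
    from datasets import load_dataset
    from transformers import (GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

    dataset = load_dataset("text", data_files={"train": "child_directed_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=128)

    tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

    config = GPT2Config(n_layer=6, n_head=8, n_embd=256)  # deliberately tiny model
    model = GPT2LMHeadModel(config)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="baby-lm", num_train_epochs=10,
                               per_device_train_batch_size=32),
        train_dataset=tokenized["train"],
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()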

Keep in mind that before age 2, children are using individual words and getting much richer feedback than LMs are.

Humans can and do echolocate: https://en.wikipedia.org/wiki/Human_echolocation. There are also anatomical differences, not cognitive ones, that affect abilities like echolocation. For example, the positioning and frequency response of sensors (e.g. ears) can affect echolocation performance.


Yes, humans can echolocate to a limited extent, just as some animals have very limited analogs of human language. That was the point of the comparison. It is no more sensible to view human language as just a more complex variant of vervet monkey calls than it is to view bat echolocation as just a more complex variant of whatever limited capacity humans have in that area. There is continuity viewed from the outside, if you squint a little, but that's unlikely to correspond to continuity in terms of the underlying cognitive mechanisms. Bats, for example, can make precise calculations of distance based on a built-in reference for the speed of sound: https://www.pnas.org/doi/10.1073/pnas.2024352118
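
As a back-of-the-envelope illustration of that last point (my own numbers, not taken from the paper), the distance computation only needs the echo delay plus a fixed internal reference for the speed of sound:

    # Rough sketch of echo ranging, assuming sound travels at ~343 m/s in air
    # (the "built-in reference"); the 10 ms example delay is made up.
    SPEED_OF_SOUND = 343.0  # m/s

    def echo_distance(delay_s: float) -> float:
        # The pulse travels out and back, so halve the round-trip distance.
        return SPEED_OF_SOUND * delay_s / 2

    print(echo_distance(0.01))  # 10 ms echo delay -> about 1.7 m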

Children don't get 'rich feedback' at all on the grammatical structure of their sentences. I think this idea is probably based on a misconception of what 'grammar' is from a generative linguistics perspective. When was the last time that a child got rich feedback on their misinterpretation of an ACD construction? https://www.bu.edu/bucld/files/2011/05/29-SyrettBUCLD2004.pd...

LLMs trained on small datasets don't perform that well from the point of view of language acquisition – even up to 100 million tokens. There's not a very large literature on this because, as I said, there are many more people interested in making a drive-by critique of generative linguistics than there are people who are genuinely interested in investigating different models of child language acquisition. But here is one suggestive result: https://aclanthology.org/2025.emnlp-main.761.pdf See also the last paragraph of p.6 onwards of https://arxiv.org/pdf/2308.03228

The other point that's often missed in evaluations of these models is their capacity for learning completely non-human-like languages. Thus, the BabyLM models have some limited success in learning (for example) some island constraints, but could just as easily have acquired languages without island constraints. That then leaves the question of why we do not see human languages without such constraints.


>Children don't get 'rich feedback' at all on the grammatical structure of their sentences.

They probably do get parents and the like correcting them or giving an example. Kid says "we goed fish", adult says "yeah, we went fishing". I taught English as a foreign language a bit, and people learn almost entirely from examples like that rather than talking about ellipsis or any sort of grammar jargon.

It seems brains / neurons / LLMs are good at pattern recognition. Brains are probably quicker on the uptake than LLM backpropagation, though.


That particular example is irrelevant to poverty of the stimulus arguments because no-one has ever suggested that kids acquiring English lack evidence for the irregular past tense of ‘go’.

See above for some examples of the kinds of grammatical principles that can form the basis of a poverty of the stimulus argument. They’re not generally the kind of thing that parental corrections could conceivably help with, for two reasons:

1) (The main reason) Poverty of the stimulus arguments relate to features of grammatical constructions that are rarely exemplified. As examples are rarely uttered, deviant instances are rarely corrected, even assuming the presence of superlatively wise and attentive caregivers.

2) (The reason that you mention) Explicit instruction on grammatical rules has almost no effect on most people, especially young children. So corrections at most add a few more examples of bad sentences to the child’s dataset, which they can probably obtain anyway via more indirect cues.

If corrections were really effective, someone should be able to do a killer experiment where they show an improved (i.e. more adult-like) handling of, say, quantifier scope in four year olds after giving them lots of relevant corrections. I am open minded about the outcome of such an experiment, but I’d bet a fairly large amount of money that it would go nowhere.


How are you getting $166k take-home for a $200k salary? If I use one of those online tax calculator things for New York, I get a take-home of $135,000 from $200k. OK, so let’s try a lower-tax state that still has a lot of economic opportunity. For Texas I get $150,000 take-home. Once you start adding on the costs associated in the US with having a serious medical condition, it’s far from clear that you’re going to be saving a lot of money from lower taxes. You’ll save a bit if you stay healthy and pay a lot more if you don’t.
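
To make the arithmetic explicit, here are the effective tax rates implied by each of those figures (the dollar amounts are just the ones quoted above; actual liability also depends on filing status, deductions, FICA, city taxes, and so on):

    # Effective tax rates implied by the quoted take-home figures.
    gross = 200_000
    net_figures = {
        "New York (calculator)": 135_000,
        "Texas (calculator)": 150_000,
        "claimed take-home": 166_000,
    }
    for label, net in net_figures.items():
        rate = 1 - net / gross
        print(f"{label}: ${net:,} net -> {rate:.1%} effective tax")
    # New York: 32.5%, Texas: 25.0%, claimed: 17.0% -- lower than even a
    # state with no income tax.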
