
The article heavily quotes the "AI Security Institute" as a third-party analysis. It was the first I heard of them, so I looked up their about page, and it appears to be primarily people from the AI industry (former Deepmind/OpenAI staff, etc.), with no folks from the security industry mentioned. So while the security landscape is clearly evolving (cf. also Big Sleep and Project Zero), the conclusion of "to harden a system we need to spend more tokens" sounds like yet more AI boosting from a different angle. It raises the question of why no other alternatives (like formal verification) are mentioned in the article or the AISI report.

I wouldn't be surprised if NVIDIA picked up this talking point to sell more GPUs.


I would be interested in which notable security researchers you can find to take the other side of this argument. I don't know anything about the "AI Security Institute", but they're saying something broadly mirrored by security researchers. From what I can see, the "debate" in the actual practitioner community is whether frontier models are merely as big a deal as fuzzing was, or something significantly bigger. Fuzzing was a profound shift in vulnerability research.

(Fan of your writing, btw.)


It's less that I think they would take the other side of the argument, than that they would lend some credence to the content of the analysis. For example, I would not particularly trust a bunch of AI researchers to come up with a representative set of CTF tasks, which seems to be the basis of this analysis.

Yeah, you might be right about this particular analysis! The sense I have from talking to people at the labs is that they're really just picking deliberately diverse and high-profile targets to see what the models are capable of.

> but they're saying something broadly mirrored by security researchers.

You might well be right; it's not an area I know much about or work in. But I'm a fan of reliable sources for claims. It is far too easy to make general statements on the internet that appear authoritative.


They are a UK government unit: "The AI Security Institute is a research organisation within the Department of Science, Innovation and Technology."

Unfortunately, they fit straight lines to graphs whose y-axis runs from 0 to 100% and whose x-axis is time, which is not great for bounded data. They should fit a logistic curve instead.
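To illustrate the point (this is a toy sketch with synthetic data, not the AISI's actual analysis): a percentage bounded at 100% follows an S-curve, and a straight-line fit to the same data extrapolates past 100%, which is impossible.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, t0):
    """Logistic curve saturating at 100%: slow start, steep middle, plateau."""
    return 100.0 / (1.0 + np.exp(-k * (t - t0)))

# Synthetic "benchmark score vs. time" data that saturates near 100%.
t = np.linspace(0, 10, 50)
rng = np.random.default_rng(0)
y = logistic(t, k=1.2, t0=5.0) + rng.normal(0, 2, t.shape)

# Fit the bounded logistic model; it recovers the true parameters.
(k_hat, t0_hat), _ = curve_fit(logistic, t, y, p0=[1.0, 5.0])

# A straight-line fit to the same data extrapolates above 100%.
slope, intercept = np.polyfit(t, y, 1)
print(f"logistic fit: k={k_hat:.2f}, t0={t0_hat:.2f}")
print(f"linear fit predicts {slope * 20 + intercept:.0f}% at t=20")
```

The linear extrapolation blows past the 100% ceiling almost immediately after the observed window, while the logistic fit plateaus, which is why the choice of model matters for any "capability vs. time" trend line.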


If true, that's naked, shameless and brutal capitalism.

Seems much like those secretly tobacco-industry-funded reports claiming tobacco was safe.


In that case, the US was worried about espionage, not violation of civil liberties.

I work in an esoteric compiler domain (compilers for fancy cryptography) and we've been eyeing e-graphs for a bit. This article is super helpful seeing how it materialized in a real-world scenario.

An interesting move in this direction is the Tamagoyaki project: https://github.com/jumerckx/Tamagoyaki that supports equality saturation directly in MLIR.


It reads to me as a cogent and measured response to a very clickbaity advertisement about the result.

> Not tested. Proved. For every possible input.

Finding inputs that crashed, and then saying "be clear about the scope of what you proved", is interesting and factual.


Self-harm (especially when depicting minors) has special standards. The recent court ruling on child safety against Meta probably led directly to this decision.

I don't think it particularly does in other media. Plenty of books have that as a theme. On netflix, 13 reasons why was one of their big hits.

https://www.npr.org/2019/07/16/742386829/netflix-edits-out-c...

And that was in 2019.

Explicit depictions are the target in most of these controversies.


Self-harm (especially when depicting minors) has special standards. The recent court losses on child safety for Meta and YouTube probably led to this.

Completely absurd. If it's not safe for children just slap an age rating on it.

I don't like this trend of every technology assuming I'm a child that needs to be protected from the world while simultaneously assuming I'm an adult with infinite disposable income who must be shown ads all the time. This is insincere. Children need to be "protected" only when it's convenient and allows the platform to exercise unchecked control. Nobody is protecting children from ads, because that would be inconvenient.


To be clear, I'm not advocating for the behavior here, just explaining that, for most tech companies, the risk of liability is a huge motivator. Liability for poor use of ad targeting would induce similar behavior (and I think that'd be a win for everyone involved).

I feel like this is going to be really dangerous, tbh, especially if it starts blocking informational content.

How often are you playing as black?

As often as the system decides that I should play as black.

Ha. I thought mine was broken on iPhone for a second.

A comment not about the article, but rather about the perceived quality of the HN comments.

X suppresses posts from people you follow in favor of algorithmically boosted posts, so at scale the follow counts don't matter as much.

My favorite class of HN comment: bringing concreteness to a vibes fight.

Perhaps unfortunately, vibes are part of being human. Ignore them at your peril.

Vibes can be safely ignored when they are disproven by easily accessible facts

Facts are ultimately just vibes that pass a cultural filter.
