Hacker News | bigbadfeline's comments

> Cheap stuff to buy vs. employed natives.

Who's going to employ them? AI?


> Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit. Just fyi. Be glad it was disclosed at all.

I'm so glad these so-called "researchers" aren't totally evil. I'm so grateful they're only half evil, give them a lollipop.

Whatever, the way they disclosed it isn't much different from no disclosure at all - the exploit would have been identified in the wild and fixed soon thereafter.

"Researchers"...


the way they disclosed it is the industry standard. think of the biggest security research teams you know (e.g. google), and they follow the same process.

non-security people always seem to get up in arms about it, but there are very good reasons why the industry has landed on the process it has, which has been hashed out over a few decades.


> Having tens of thousands of decentralized, independently moderated servers would result in an order of magnitude more CSAM being shared than having a few oligopolies.

It doesn't matter how many are shared but how many are viewed. On a small server, community policing works just fine: bad actors are easier and faster to block, and to top it off, the smaller reach of each server makes it unprofitable to target multiple servers, fish for their weak points, etc. The dirty jobs become unprofitable, which is what matters most.

With the help of AI, small players can do a better job at removing CSAM.


> With the help of AI, small players can do a better job at removing CSAM.

Chicken/egg. How do you expect that AI to be able to detect CSAM without appropriate training, which requires appropriately classified training data?


> So it's >90% desktop browser and OS, plus >30% mobile OS. > Yes, I think it's very safe to say "browsers and operating systems are increasingly expected to gain access to language models."

Doesn't follow. Every case you listed justifies LLM inclusion with a similar "everything is expected to be defiled by LLMs" argument. Mine is a better wording, but it's still evasively passive, and the "expected" part is still nonsense.

Just don't tell me LLM inclusion is justified by "expected" all the way down, like the bottomless money pit it is.


> the example in my first comment, project zero, is still active today.

So? Many smaller players actually contribute more.

It's not about a single contribution but about what is better - a lot of power in the hands of a large corp which can afford to obstruct with impunity and do the opposite of "do no evil" versus several smaller players who have to actually compete and are concerned about their image.


>So? Many smaller players actually contribute more.

the claim was that no one should expect google to do anything good for the web or humanity "EVER". the existence of even one good thing is enough to refute that point.

but your sibling comment is probably correct. people say "EVER" but don't mean it literally, or something. it's very confusing to me.


> Does requiring a VIN on a car mean that cars are banned?

Of course, Chinese cars in the US are effectively banned by regulations. Proof that VIN can be used for banning.


> You gave it capability to delete emails. Why did you expect it not to do that at least some of the time?

Because of the I in AI, of course. Would you call it false advertising and go after the providers?


This reminds me of the conversation the other day about the deleted production database at Railway: "this person obviously didn't follow best practice of being hyper-distrusting of LLM agents", and the response, "yeah but every company is marketing it as safe. someone is gonna fall for it".

(Well-regulated) free markets are sort of built on the principle of educated consumerism. Your choice matters; it's not up to the government to make every non-optimal product illegal. However, we do expect some minimum level of safety.

What does that mean for llms? Their nondeterminism does seem to incline them toward a legal safety requirement. Can you buy a fire extinguisher that 1/1000 times burns your house down? Or can your car brakes instead increase acceleration in rare cases?

I'm using LLMs much more than I used to, but I still can't shake the fundamentally stochastic nature of the technology.


Wherever I'm going, I'll be there to apply the formula. I'll keep the secret intact. It's simple arithmetic. It's a story problem. If a new car built by my company leaves Chicago traveling west at 60 miles per hour, and the rear differential locks up, and the car crashes and burns with everyone trapped inside, does my company initiate a recall? You take the population of vehicles in the field (A) and multiply it by the probable rate of failure (B), then multiply the result by the average cost of an out-of-court settlement (C). A times B times C equals X. This is what it will cost if we don't initiate a recall. If X is greater than the cost of a recall, we recall the cars and no one gets hurt. If X is less than the cost of a recall, then we don't recall.

Chuck Palahniuk, Fight Club
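The formula in the quote is plain expected-value arithmetic. A minimal sketch in Python; every number here (A, B, C, and the recall cost) is made up purely for illustration, not drawn from any real case:

```python
def expected_settlement_cost(vehicles_in_field, failure_rate, avg_settlement):
    """X = A * B * C: the expected payout if the company does NOT recall."""
    return vehicles_in_field * failure_rate * avg_settlement

A = 1_000_000              # vehicles in the field (hypothetical)
B = 0.0001                 # probable rate of failure (hypothetical)
C = 2_000_000              # average out-of-court settlement (hypothetical)
recall_cost = 300_000_000  # cost of initiating a recall (hypothetical)

X = expected_settlement_cost(A, B, C)  # ~2.0e8 with these numbers
decision = "recall" if X > recall_cost else "no recall"
print(X, decision)
```

With these invented numbers X comes out below the recall cost, so the formula says "no recall", which is exactly the perverse incentive the quote is pointing at.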


But intelligent beings are fundamentally fallible? That's kind of the nature of doing leaps of reasoning: sometimes those leaps are amazing, sometimes they're wrong. It's what's advertised.

You could do a whole thesis on how industrialization and the invention of bureaucracy are efforts to get reproducible results out of fallible humans.

We don't yet have the luxury of several thousand years of work trying to get LLMs to be less fallible.


> But intelligent beings are fundamentally fallible?

Not fundamentally, only until they're compelled to learn from it. The current crop of AI understands neither compelling nor learning.


I is in the I of the beholder :)

> you chose that instance because you're OK with that admin making choices for you

Nobody chooses instances for that; very few know anything about the admin. People just like the content until... in >70% of cases, a bait and switch follows.

That's why Mastodon is such an incredible mess: it creates the conditions for serious problems, then goes, "you chose what you knew nothing about, nor is there any way to know anything, therefore... you are the problem".


Yes, if you willingly participate in an ecosystem where you know large swaths will be actively against you and try to defederate with you, that's kind of on you. Don't participate in that ecosystem if you don't like it. The ones already inside this ecosystem (unlike me) seem to be OK with it, and others outside of it (like me) seem to be OK with them having their own ecosystem where they can do these things.

Maybe I'm lucky in the instance I chose or the content I like being uncontroversial, but this isn't my experience at all.

I've heard of instances carrying a lot of Nazi content being banned, and of instances choosing not to re-host adult media (which makes the interface a bit worse, but doesn't actually block you from getting that). But most admins from what I've seen are pretty clear on this in the about page of the instance.

70% seems like a wild claim.

I have had content I like being removed from major social media platforms, like reddit and tumblr.

Also, if you choose an instance and it gets shut down, you just start another account. This isn't serious business, it's social media. To me, complaining about having to choose an instance to start is like complaining about having to choose a class at the start of an RPG.

Personally, I really love mastodon as a platform and I don't understand all the hate it gets here.


> now, they are to best of my knowledge fantastic people,

They could be the next "don't do evil" people but practice shows that doesn't last for long. And then the messy license terms become very handy for what comes next.

If they went to the trouble of specifying all of their rights over your data, their glossing over what they can do with it is a solid reason to push for complete clarity or pull out completely.


> my main qualm with Ed is his analysis on the financials is decent, but he absolutely refuses to admit that the technology is useful

Yeah, I find that sort of criticism causes more harm than good. The economic case for closed-source AI isn't there: in a macroeconomic sense, and accounting for all costs, it's more expensive than the value it provides. There's data to back that up, so focus on the economics.

On the other hand, hallucinating about what AI can or cannot do is useless, only research can provide the answer.

