Hacker News | zbentley's comments

> other things (alcohol) that cause problems and are not being restricted

Alcohol is heavily restricted, though. You can't sell it to minors, younger minors can't drink it in public, you can't sell/buy/make it above a certain proof, you can only resell it from authorized distributors, it is taxed, and so on.

Sure, banning cigarettes for a specific generation is a much more stringent restriction, but plenty of other restrictions exist.


What if they told you your kids would never be allowed to have a drink?

I’m having a hard time coming up with a better way. Simply banning all manufacturing and import is not going to work when it’s heavily addictive. In the case of alcohol, quitting cold can kill you.

Banning it today and expecting people to cope, or attempting to fund recovery efforts for a whole nation, would completely misunderstand the addict's mind. If you don't want to quit, you never will.

Instead we have a total ban that is timeboxed to allow the addicts the rest of their lives to quit one way or another.


This is a great idea. To slightly sidetrack things: I think updating computer UI text-selection behavior in general, so that double-click/snap-to-word selection doesn't break words on colons that lack padding spaces, would be a good thing.

"A: B" would still click-select either "A:" or "B", but "1:2" (a ratio) would select the whole thing, as would "small:med:large" or an ipv6 address. In other words, I think that, in practice, English writing has assigned semantic significance to space-less colons in enough cases that text selection systems should reflect that.

Though I'm not sure RFCs are going to drive general GUI behavior--they won't "MUST" it, because that's overstepping, and I'm not sure GUI/OS-text-selection-functionality maintainers will be persuaded otherwise.


ZFS snapshots can be transmitted over the network, with some diff-only and deduplication gains if the remote destination has an older instance of the same ZFS filesystem. It’s not perfect, and the worst case is still a full copy, but the tooling and efficiency wins for the ordinary case are battle-tested and capable.
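For context, the incremental workflow described here is plain `zfs send`/`zfs receive`; the dataset, snapshot, and host names below are placeholders.

```shell
# Snapshot, then ship only the delta since a snapshot the destination
# already has (dataset/host names are placeholders).
zfs snapshot tank/pgdata@daily-2
zfs send -i tank/pgdata@daily-1 tank/pgdata@daily-2 \
  | ssh backup-host zfs receive backup/pgdata

# Worst case (no common snapshot on the destination): a full send.
zfs send tank/pgdata@daily-2 | ssh backup-host zfs receive backup/pgdata
```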

Yes, for sure, and stuff like this is really useful when rebalancing storage nodes, for example.

My point is that for the use case of offering a Postgres service with CoW branching as a key feature, you can't really escape some form of separation of storage and compute.

Btw, don't really want to talk too much about it yet, but our proprietary storage engine (Xatastor) is basically ZFS exposed over NVMe-OF. We'll announce it in a couple of weeks, and we'll have a detailed technical blog post then on pros/cons.


This other front-page submission links to the first-party docs: https://news.ycombinator.com/item?id=47835735

Or burglars.

I mean, I'm down to rip on JS/NPM any day of the week, but this specific issue isn't related to any JS/NPM-isms: it's a deserialization library that unmarshals language-specific objects from bytes using a variant of eval().

Any platform with eval (most implementations of Python, Perl, Lisp, Scheme, PHP, Ruby, Erlang, old editions/specific libraries of Java, Haskell, and many others) seems at risk for this type of issue.

Indeed, ser/de systems in those languages--all of them--have a long history of severe CVEs similar to this one.

It's also worth noting that this vuln has to do with the library's handling of .proto schema files, not data. The unsafe eval happens when a Protobuf schema file which itself describes the format of wire/data types is uploaded, not when the wire/data types themselves are deserialized. The majority of uses of Protobuf out there (in any language) handle the schema files rarely or as trusted input.
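To illustrate the class of bug generically (this sketch is mine and deliberately simplified; it is not protobuf.js's actual code): a codegen-style decoder that splices schema-supplied names into source text and then eval()s it lets a hostile schema inject arbitrary expressions.

```python
# Hypothetical sketch of eval()-based codegen, NOT any real library's code:
# the "schema" contributes a field name that is spliced into generated source.
def make_decoder_unsafe(field_name):
    src = f"lambda data: {{'{field_name}': data}}"
    return eval(src)  # schema text becomes executable code

# A benign schema behaves as expected:
decode = make_decoder_unsafe("user_id")
assert decode(42) == {"user_id": 42}

# A malicious "field name" escapes the string literal and smuggles in
# an arbitrary expression (here, an os call) that runs on every decode:
evil = "x': __import__('os').getpid(), 'y"
decode = make_decoder_unsafe(evil)
```

The fix is the same in any language: generate code only from a strictly validated grammar, or build decoders from data structures instead of splicing untrusted text into source.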

That doesn't make it safe/mitigated by any means, but it's worth being specific.


Very true. So many regulated/government security contexts use “critical” or “high” sev ratings as synonymous for “you can’t declare this unexploitable in context or write up a preexisting-mitigations blurb, you must take action and make the scanner stop detecting this”, which leads to really stupid prioritization and silliness.

At a previous job, we had to refactor our entire front-end build system from Rollup (I believe it was) to a custom Webpack build because of this attitude. Our FE process was completely disconnected from the code on the site, existing entirely in our Azure pipeline and on developer machines. The actual theoretically exploitable aspects were in third-party APIs and our .NET ecosystem, which we obviously fixed. I wrote something like three different documents and presented multiple times to their security team on why this wasn't necessary and we didn't want to take their money needlessly. $20,000 or so later (with a year of support for the system baked in), we shut up Dependabot. Money well spent!

Very early in my career I'd take these vulnerability reports as a personal challenge and spend my day/evening proving they weren't actually exploitable in our environment. And I was often totally correct: they weren't.

But... I spent a bunch of hours on that. For each one.

These days we just fix every reported vulnerable library, turns out that is far less work. And at some point we'd upgrade anyway so might as well.

Only if it causes problems (incompatibilities, regressions) do we look at it, analyze exploitability, and make judgment calls. Over the last several years we've only had to do that for about 0.12% of the vulnerabilities we've handled.


That’s basically my experience as well. Just upgrading is much easier and cheaper.

Of course, with the latest supply-chain failures, we don't update right away or automatically.

If it is RCE in a component that is exposed then of course we do it ASAP. But those are super rare.


My favorite: a Linux kernel pcmcia bug. On EC2 VMs.

In a similar vein:

Raising alarms on a CVE in Apache2 that only affects Windows when the server is Linux.

Or CVEs related to Bluetooth in cloud instances.


Or raising the alarm on a CVE in the Linux mlx5 driver on an embedded device that doesn't have a PCIe interface.

ReDoS at CVSS 8+ ... in the configuration file parsing of a bundler.

"If you use that installed Python version to start a web server and use it to parse PDFs, you may encounter a potential memory leak"

Yeah, so: 1) not running a web service, 2) not parsing PDFs in said nonexistent service, 3) congrats, you are leaking memory on my dev laptop.


I refused to refer to the whole vulnerability reporting / tracking effort as "security", always correcting people that it was compliance, not security.

I'll top that: wireless-regdb out of date. Against an EC2-specific kernel.

Kernel headers out of date -> kernel vulnerability... in a container.

Okay. You win.

Yep. And cloud providers could eat any slippage cost (enforcing, say, every 5 minutes by stopping service) without even a rounding error on their balance sheets.

The fact that they don’t indicates that there’s no market reason to support small spenders who get mad about runaway overages, not that it’s technically or financially hard to do so.


> Initially, we anticipated that the edge case would have minimal impact, given Prometheus’s widespread adoption and proven reliability in diverse environments. However, as we migrated more users, we started seeing this issue more frequently, and it stalled migration.

That's a very professional way of saying "Wait, everyone just lives with this? What the fuck?!"

Many such cases in the Prometheus ecosystem.


> we know how to structure societies with very few corrupted people

We do?


Sure. There are plenty of theoretical ways to do it, and even examples of small communities that have put them into practice.

It looks very similar to the situation of formally proven code: it just never reached mass adoption and fails to win at scale when crappier alternatives can propagate faster and occupy the ecological niche, which then alters the ecosystem in ways that make it even less likely the sounder approach could gain enough traction and momentum to scale.


I'm doubtful. Which small communities that did this are you referring to? And is the thing that made them successful something that's just hard, or is it something innate to their being very small?

If it's the latter, I don't think that checks out; I interpreted "we know how to build societies that don't do this" as "we know how to build large-scale human systems that avoid these trends; systems that could exist at scale on earth today".

Otherwise the claim just ends up being "we know how to do this if we start tabula rasa" (fun thought experiment, can't happen) or "we know how to do this if we get rid of 99.9% of the population and go back to village-scale economies" (not worth it, and the process of getting there would be exploited).

