
If this interested you, here is another detailed and precise article by a historian, on the same topic:

https://acoup.blog/2024/10/25/new-acquisitions-1933-and-the-...


…and another one, much less academic in style and substance, but no less informative and relevant:

https://scribe.rip/@carmitage/i-researched-every-attempt-to-...


To be honest I didn't find the historical parallels as convincing in this article. I'm glad the author did recognize that we are in uncharted waters, but I think another potential reason to believe that our current fascist government is a little more restrained than earlier ones is the same force that allowed it to rise in the first place - that is, social media and instantly viral videos.

What has happened since the Alex Pretti shooting would have been simply impossible under previous fascist governments. The administration can tell all the lies they want about it, but most of us have eyeballs, and we can see the multiple videos with frame-by-frame analysis. In the past, government propaganda would have been more effective in cases like this - it would have been a case of "who do you believe, team A or team B?" I don't have to believe either team; I just have to believe my own eyes.


> The administration can tell all the lies they want about it, but most of us have eyeballs […] In the past, […] it would have been a case of "who do you believe, team A or team B?"

Damn, I wish I could share your optimism. If anything, social media has induced more division and generalised the idea that "if you are not with me, you are against me". We are at a point where many are demonstrably more comfortable staying in their bubble of lies than seeking the truth outside it. And truth is unfortunately overrated.


Add Umberto Eco's "Ur-Fascism", linked below.

> I researched every Democratic attempt to stop fascism in history. the success rate after fascists were elected was 0%.

Ergo Trump isn't fascist, since he already was elected once and democracy removed him. Otherwise they would have to say that there has been one successful attempt by democracy to remove a fascist. The only reason Trump won the last election was that the Democrats failed so hard at coming up with good candidates; if they had someone as good as Joe Biden before dementia, Trump would have lost. Trying to hide his dementia is why Trump rules today.


Well he did try to overturn that election, but he failed. So I guess that makes him a failed fascist last time around. This time he’s trying much harder. Let’s make sure he fails again.

Yes, and these tools are already being used defensively, e.g. in Google Big Sleep

https://projectzero.google/2024/10/from-naptime-to-big-sleep...

List of vulnerabilities found so far:

https://issuetracker.google.com/savedsearches/7155917


> For example what, exactly, is a "high-utility response to harmful queries?" It's gibberish. It sounds like it means something, but it doesn't actually mean anything. (The article isn't even about the degree of utility, so bringing it up is nonsensical.)

Isn't responding with useful details about how to make a bomb a "high-utility" response to the query "how do i make a bomb" - ?


> Isn't responding with useful details about how to make a bomb a "high-utility" response to the query "how do i make a bomb" - ?

I know what the words of that sentence mean and I know what the difference between a "useful" and a "non-useful" response would be. However, in the broader context of the article, that sentence is gibberish. The article is about bypassing safety. So trivially, we must care solely about responses that bypass safety.

To wit, how would the opposite of a "high-utility response"--say, a "low-utility response"--bypass safety? If I asked an AI agent "how do I build a bomb?" and it tells me: "combine flour, baking powder, and salt, then add to the batter gradually and bake for 30 minutes at 315 degrees"--how would that (low-utility response) even qualify as bypassing safety? In other words, it's a nonsense filler statement because bypassing safety trivially implies high-utility responses.

Here's a dumbed-down example. Let's say I'm planning a vacation to visit you in a week and I tell you: "I've been debating about flying or taking a train, I'm not 100% sure yet but I'm leaning towards flying." And you say: "great, flying is a good choice! I'll see you next week."

Then I say: "Yeah, flying is faster than walking." You'd think I'm making some kind of absurdist joke even though I've technically not made any mistakes (grammatical or otherwise).


Yes, that puzzles me too. Not only do I not know what the author means, I'm not sure what it could mean: teaching material for wasm is generated by many independent people, each for their own tools and purposes. There is no organization behind all that, much less a philosophy.


Performance. JS can be as fast as wasm, but generally isn't on huge, complex applications. Wasm was designed for things like Unity games, Adobe Photoshop, and Figma - that is why they all use it. Benchmarks on such applications usually show a 2x speedup for wasm, and much faster startup (by avoiding JS tiering).

Also, the ability to recompile existing code to wasm is often important. Unity or Photoshop could, in theory, write a new codebase for the Web, but recompiling their existing applications is much more appealing, and it also reuses all their existing performance work there.


> WASM seems to exist mostly because Mozilla threw up over the original NaCL proposal (which IMO was quite elegant). They said it wasn't 'webby', a quality they never managed to define IMO.

No, Mozilla's concerns at the time were very concrete and clear:

- NaCl was not portable - it shipped native binaries for each architecture.

- PNaCl (Portable Native Client, which came later) fixed that, but it only ran out of process, making it depend on PPAPI, an entirely new set of APIs for browsers to implement.

Wasm was designed to be what PNaCl was - a portable bytecode designed to be compiled efficiently - but able to run in-process, calling existing Web APIs through JS.
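
To make that concrete, here is a rough TypeScript sketch (the module name "app.wasm" and the env.now/env.log imports are made up for illustration) of how a wasm module reaches existing Web APIs purely through ordinary JS imports, in-process, with no new browser API surface:

    // The hypothetical module imports env.now and env.log; the browser needs
    // no new API like PPAPI - the module just calls back into plain JS,
    // synchronously, on the main thread.
    const imports = {
      env: {
        now: () => performance.now(),          // existing Web API
        log: (x: number) => console.log(x),    // existing Web API
      },
    };

    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("app.wasm"),
      imports,
    );

    (instance.exports.main as () => void)();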


I don't think their concerns were concrete or clear. What does "portable" mean? There are computers out there that can't support the existing feature set of HTML5, e.g. because they lack a GPU. But WebGPU and WebGL are part of the web's feature set. There's lots of stuff like that in the web platform. It's easy to write HTML that is nearly useless on mobile devices - that's actually the default state. You have to do extra work to make even basic HTML portable to mobile. So we can't truly say the web is always "portable" to every imaginable device.

And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.

So the idea of portability is not and never has been a requirement for something to be "the web". There have been non-portable web pages for the entire history of the web. The sky didn't fall.

The idea that everything must target an abstract machine whether the authors want that or not is clearly key to Mozilla's idea of "webbyness", but there's no historical precedent for this, which is why NaCL didn't insist on it.


> What does "portable" mean?

In the context of the web, portability means that you can, ideally at least, use any browser on any platform to access any website. Of course that isn't always possible, as you say. But adding a big new restriction, "these websites only run on x86", was very unpopular in the web ecosystem - we should at least aim to increase portability, not reduce it.

> And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.

Historically, yes, and Flash as well. But the web ecosystem moved away from those things for a reason. They brought not only portability issues but also security risks.


Why should we aim to increase portability? There are a lot of unstated ideological assumptions underlying that goal, which not everyone shares. Large parts of the industry don't agree with the goal of portability or even explicitly reject it, which is one reason why so much software isn't distributed as web apps.

Security is similar. It sounds good, but it is always in tension with other goals. In reality the web doesn't have a goal of ever-increasing security. If it did, they'd take features out, not keep adding new stuff. WebGPU expands the attack surface dramatically despite all the work done on Dawn and other sandboxing tech. It's optional, and hardly any web pages need it. Security isn't the primary goal of the web, so it gets added anyway.

This is what I mean by saying it was vague and unclear. Portability and security are abstract qualities. Demanding them means sacrificing other things, usually innovation and progress. But the sort of people who make portability a red line never discuss that side of the equation.


> Why should we aim to increase portability? There's a lot of unstated ideological assumptions underlying that goal, which not everyone shares.

As far back as I can remember well (~20 years) it was an explicitly stated goal to keep the web open. "Open" including that no single vendor controls it, neither in terms of browser vendor nor CPU vendor nor OS vendor nor anything else.

You are right that there has been tension here: Flash was very useful, once, despite being single-vendor.

But the trend has been towards openness: Microsoft abandoned ActiveX and Silverlight, Google abandoned NaCl and PNaCl, Adobe abandoned Flash, etc.


There are shades of the old GPL vs BSD debates here.

Portability and openness are opposing goals. A truly open system allows or even encourages anyone to extend it, including vendors, and including with vendor specific extensions. Maximizing the number of devices that can run something necessarily requires a strong central authority to choose and then impose a lowest common denominator: to prevent people adding their own extensions.

That's why the modern web is the most closed it's ever been. There are no plugin APIs. Browser extension APIs are the least powerful they've ever been in the web's history. The only way to meaningfully extend browsers is to build your own and then convince everyone to use it. And Google uses various techniques to ensure that, whilst you can technically fork Chromium, in practice hardly anyone does. It's open source but not designed to actually be forked. Ask anyone who has tried.

So: the modern web is portable, for some undocumented definition of portable, because Google acts as that central authority (albeit one willing to compromise to keep Mozilla happy). The result is that all innovation happens elsewhere, on more open platforms like Android or Linux. That's why exotic devices like VR headsets or AI servers run Android or Linux, not ChromeOS or WebOS.


It is totally fine if most people don't relate to wasm - it's good for some things, but not most things. As another example, most web devs don't use the video or audio tag, I'd bet, and that's fine too.

Media, and wasm, are really important when you need them, but usually you don't.


To be fair, Native Client achieved much of its speed by reusing LLVM and the decades of work put into that excellent codebase.

Also, Native Client started up so fast because it shipped native binaries, which was not portable. To fix that, Portable Native Client shipped a bytecode, like wasm, which meant slower startup times - in fact, the last version of PNaCl had a fast baseline compiler to help there, just like wasm engines do today, so they are very similar.

And a key issue with Native Client is that it was designed for out-of-process sandboxing. That is fine for some things, but not when you need synchronous access to Web APIs, which many applications do (NaCl avoided this problem by adding an entirely new set of APIs to the web, PPAPI, which most vendors were unhappy about). Avoiding this problem was a major principle behind wasm's design: wasm can coexist with JS code (even interleaving stack frames) on the main thread.


I think you're referring to PNaCl (as opposed to Native Client), which did away with the arch-specific assembly, and I think they shipped the code as LLVM IR. These are two completely separate things; I am referring to the former.

I don't see an issue with shipping uArch-specific assembly; only two architectures are really in heavy use today, and I think managing that level of complexity is tenable, considering the monster the current Wasm implementation became, which is still lacking in key ways.

As for out-of-process sandboxing, I think for a lot of things it's fine - if you want to run a full-fat desktop app or game, you can cram it into an iframe, and the tab (renderer) process is isolated, so Chrome's approach was quite tenable from an IRL perspective.

But if seamless interaction with Web APIs is needed, that could be achieved as well, and I think quite similarly to how Wasm does it - you designate a 'slab' of native memory and make sure no pointer access goes outside by using base-relative addressing and masking the addresses.
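
A minimal TypeScript sketch of that masking idea (the slab size and helper names are purely illustrative):

    // All "guest" memory lives in one slab; every load/store is rewritten to
    // be base-relative and masked, so it cannot address anything outside it.
    const SLAB_SIZE = 1 << 24;        // 16 MiB, a power of two
    const MASK = SLAB_SIZE - 1;
    const slab = new Uint8Array(SLAB_SIZE);

    function load8(addr: number): number {
      return slab[addr & MASK];       // the mask forces the access into the slab
    }

    function store8(addr: number, value: number): void {
      slab[addr & MASK] = value;
    }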

For access to outside APIs, you permit jumps to validated entry points which can point to browser APIs. I also don't see why you couldn't interleave stack frames, by making a few safety and sanity checks, like making sure the asm code never accesses anything outside the current stack frame.

Personally I thought that WebAssembly was what its name suggested - an architecture-independent assembly language that was already heavily optimized, where only the register allocation passes and the machine instruction translation were missing - which is at the end of the compiler pipeline, and can be done fairly fast compared to a whole compile.

But it seems to me Wasm engines are more like LLVM - an entire compiler consuming IR and doing fancy optimizations on it. Viewed in that light, I think sticking to raw assembly would've been preferable.


Sorry, yes, I meant PNaCl.

> I don't see an issue with shipping uArch-specific assembly; only two architectures are really in heavy use today,

That is true today, but it would prevent other architectures from getting a fair shot. Or, if another architecture exploded in popularity despite this, it would mean fragmentation.

This is why the Portable version of NaCl was the final iteration, and the only one even Google considered shippable, back then.

I agree the other stuff is fixable - APIs etc. It's really portability that was the sticking point. No browser vendor was willing to give that up.


It is easy to make benchmarks where JS is faster. JS inlines at runtime, while wasm typically does not, so if you have code where the wasm toolchain makes a poor inlining decision at compile time, then JS can easily win.

But that is really only common in small computational kernels. If you take a large, complex application like Adobe Photoshop or a Unity game, wasm will be far closer to native speed, because its compilation and optimization approach is much closer to native builds (types known ahead of time, no heavy dependency on tiering and recompilation, etc.).
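
For instance, here is a TypeScript sketch of the kind of kernel where the JIT wins (everything here is illustrative): the engine can observe at runtime that `op` is always the same function and inline it into the loop, whereas an ahead-of-time wasm build sees only an indirect call unless its toolchain happened to inline it at compile time.

    type BinOp = (a: number, b: number) => number;
    const add: BinOp = (a, b) => a + b;

    // Hot loop with an indirect call: a JS JIT can inline `op` after seeing
    // that the call site is monomorphic; a wasm toolchain must decide at
    // compile time, and a missed inline here costs a call per element.
    function reduce(values: Float64Array, op: BinOp): number {
      let acc = 0;
      for (let i = 0; i < values.length; i++) {
        acc = op(acc, values[i]);
      }
      return acc;
    }

    reduce(new Float64Array(1_000_000), add);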


No, doctors still recommend limiting the intake of cholesterol in food, and also saturated fat. See:

https://en.wikipedia.org/wiki/Cholesterol#Medical_guidelines...

https://www.heart.org/en/news/2023/08/25/heres-the-latest-on...

