
The author cites this to justify the need for Records:

> Most Java objects set every field to be private and make all fields accessible only through accessor methods for reading and writing.

> Unfortunately, there are no language-enforced conventions for defining accessors; you could give the getter for foo the name getBar, and it’ll still work fine, except for the fact that it would confuse anybody trying to access bar and not `foo`.

Scala supports pattern matching on objects implementing the `unapply` method.

Is this considered harmful? Why didn’t Java follow this route?


It's a matter of standardisation again. Java's process is, like C++'s, ponderous. The record patterns JEP indicates in its final footnotes that something like unapply may be in the works, so all hope is not lost.


Yes, it will.


How?


The whole point of WEI is that the site can choose to block any combination of browser and OS it sees fit, in a reliable way (currently, browsers can freely lie). curl and friends will almost immediately be branded as bots and banned; that's the stated objective.


It is more severe than that. The design favors a whitelist approach: only browsers that can get an attestation from a "trusted source" are allowed. Browsers that cannot are not.


How?

The page must first load; then it requests an attestation using JS and sends it back to the server for further use (like a reCAPTCHA token).

So for something like curl, nothing might change.

https://github.com/RupertBenWiser/Web-Environment-Integrity/...


WEI won’t even stop the bad bots. They will simply use "legitimate" devices.


Consider this scenario:

- Content sites implement Web Integrity API to block bots

- But they still allow Google crawlers, because Google is their source of traffic

- Google competitors are locked out

How do attesters solve this problem?


Not only adblocking, but also crawling. They want to kill competition.


How does that work? The binary format embeds variants of the same program?


Yes, here is an example of how it works for GCC.

https://gcc.gnu.org/onlinedocs/gcc-13.1.0/gcc/Function-Multi...


On Linux distros, the package manager downloads different binaries based on your CPU: Skylake would get x86-64-v3 and Zen 4 would get x86-64-v4, for example.

And there are different schemes for multiple architectures in the same program, like hwcaps.


Isn’t this going to get very unmanageable very soon? Intel seems to add extensions every other year or so.


The extensions can be roughly broken down into four levels: basically ancient (v1), old (v2, SSE4.2), reasonably new (v3, AVX2; Haswell/Zen 1 and up), and baseline AVX-512 (v4).

https://developers.redhat.com/blog/2021/01/05/building-red-h...

There is discussion of a fifth level. Someone in the Intel Clear Linux IRC said a fifth level wasn't "worth it" for Sapphire Rapids because most of the new AVX512 extensions were not autovectorized by compilers, but that a new level would be needed in the future. Perhaps they were thinking of APX, but couldn't disclose it.


AVX10/APX does sound like a good baseline for v5.


Except that it doesn't support full AVX-512, making the whole idea of backward compatibility between these levels meaningless. "It's Intel!!!"


Well, that's an even better justification, as an x86-64-v5 level would be needed for the newer CPUs.

We can throw away any hope of v4 being a standard baseline.


It’s easy to fully automate and storage is relatively cheap these days.


I'd think the issue would be more the build infra: every new variant means you have to build the world again.


Again, compute is surprisingly cheap these days.

Work out what it would cost to compile - say - a terabyte of C code at typical cloud spot prices.

A large VM with 128 cores can compile the 100 MB Linux kernel source tree in about 30 seconds. So… 200 MB/minute, or 12 GB/hour. That would take about 80 hours for a terabyte.

A 120-core AMD server is about 50¢ per hour on Azure (Linux spot pricing).

So… about $40 to compile an entire distro. Not exactly breaking the bank.


You'd have to separate out compiling and linking at a bare minimum to get even a semi-accurate model. Plus, a lot of userspace is C++, which is much, much slower to compile.


Yes. Also, test it.


That can also be largely automated.


LTO does occasionally break things in hard-to-detect ways, but I have never heard of an -march-related x86 compilation bug.


In the end it will be like any other modern hardware appliance:

the hardware is the same design for cost-saving purposes, but different features are unlocked for $$$ by a software license key.

You want AVX-512? Pay up to unlock the feature in your CPU, and you can now use it. This could also enable a pay-as-you-go licensing scheme for CPUs, creating recurring revenue for Intel.

From the hardware perspective: the same silicon, but different features sold separately.


Maybe JIT compilers can take advantage of this immediately, since they target a single machine?


Yup. It's one of their theoretical advantages that's about to become a lot less theoretical. Historically it hasn't made much difference, because optional instructions were hard for JIT compilers for most languages to use (in particular, high-level JITed languages tend not to support vector instructions very well). But a doubling of registers is the sort of extension that any kind of code can immediately profit from.

Arguably it will be only JITed languages that benefit from this for quite a while. These sorts of fundamental changes are basically a new ISA, and the infrastructure isn't really geared up to make that easy. Everyone would have to provide two versions of every app and shared library to get the most benefit, and you maybe even get combinatorial complexity if people want to upgrade the inter-library calling conventions too. For AOT-compiled native code it's going to be a mess.


As far as the JVM, ART, and the CLR are concerned, it is quite practical, even if there is room for improvement.


Gentoo users will finally get to be smug again, once GCC/clang have support for them.


All the more reason that Wasm should be the bottom of software :)


IBM and Burroughs/Unisys have already been doing that for decades, with bytecode-based executables for their mainframes/micros.

Or Xerox PARC, with their microcoded CPUs loading the desired interpreter on boot.

I guess it is an idea that keeps being revalidated.


Google would do anything to make it harder for others to crawl the web. Killing RSS was part of that strategy.

News sites will implement these DRMs, but of course they will still allow Google because it is their source of traffic. Alternative search engines and good bots will be locked out.


>Killing RSS was part of that strategy.

Oh please.

I get that it's more satisfying to blame Google than the faceless masses who had zero interest in RSS and who had a variety of alternatives to Reader in any case.

I guess they also had a strategy to kill social media by axing Google+ and user-created encyclopedias by killing Knol.


Not only Reader, but also the RSS support in Chrome and Firefox (for which Google used to be the primary source of funding). And FeedBurner.


> Firefox (for which Google used to be the primary source of funding)

Google's deal with Mozilla was always about Firefox being the default search engine, and that's it. They never had any power to cut or add features to the project.


Officially, sure, but you shouldn't pretend that Google's funding isn't Mozilla's main lifeline as an entity, or that there isn't pressure there.


Note: Brave (Chromium-based) has RSS support. It's pretty good.


You can "Oh please" all you want, but Google will never live that one down.

It'll live on in the history of the internet ... foreverrrrrrrrrrr.


Not just Google, Cloudflare is working hard on it too.


Cloudflare works hard but Google works harder.


Honestly, credit to CF: in terms of actual damage to the current internet they're pretty equal, even if Google has had to work much harder for their share.


> Killing RSS was part of that strategy.

Oh boy. RSS died because it was "only for nerds". Never have I met a person outside my tech bubble who had used RSS or even knew what it was. That's not how the average Joe uses the internet.


The article mentions race conditions


Alternative interpretation: people with larger brains tend to nap more

