The author cites this to justify the need for Records:
> Most Java objects set every field to be private and make all fields accessible only through accessor methods for reading and writing.
> Unfortunately, there are no language-enforced conventions for defining accessors; you could give the getter for `foo` the name `getBar`, and it'll still work fine, except for the fact that it would confuse anybody trying to access `bar` and not `foo`.
Scala supports pattern matching on objects implementing the `unapply` method.
Is this considered harmful? Why didn’t Java follow this route?
It's a matter of standardisation again. Java's process, like C++'s, is ponderous. The record patterns JEP indicates in its closing footnotes that something like `unapply` may be in the works, so all hope is not lost.
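For context, record patterns have since landed in Java 21 (JEP 440): a `switch` can deconstruct a record into its components, which covers much of what Scala gets from `unapply`, with the accessors enforced by the language. A minimal sketch (the class and method names here are my own, not from any of the cited material):

```java
// Records get language-enforced accessors (x(), y() -- no getBar-style
// mismatches are possible) and can be deconstructed with record patterns.
record Point(int x, int y) {}

public class RecordPatternDemo {
    static String describe(Object obj) {
        return switch (obj) {
            // The record pattern binds x and y straight from the record's
            // components, much like a Scala unapply-based match.
            case Point(int x, int y) -> "Point(" + x + ", " + y + ")";
            default -> "not a point";
        };
    }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p.x());        // accessor is x(), not getX()
        System.out.println(describe(p));  // Point(3, 4)
    }
}
```

The difference from `unapply` is that the deconstruction is tied to the record's canonical components rather than to an arbitrary user-written extractor, which is the standardisation trade-off being discussed.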
The whole point of WEI is that the site can choose to block any combination of browser and OS it sees fit, in a reliable way (currently, browsers can freely lie). curl and friends will almost immediately be branded as bots and banned; that's the stated objective.
It is more severe than that. The design favors a whitelist approach: only browsers that can get an attestation from a "trusted source" are allowed; browsers that can't get one are not.
On Linux distros, the package manager can download different binaries based on your CPU. Skylake would get x86-64-v3 and Zen 4 would get x86-64-v4, for example.
And there are different schemes for multiple architectures in the same program, like hwcaps.
The extensions can be roughly broken down into four levels: basically ancient (v1, the baseline), old (v2: SSE4.2), reasonably new (v3: AVX2, Haswell/Zen 1 and up), and baseline AVX-512 (v4).
There is discussion of a fifth level. Someone in the Intel Clear Linux IRC said a fifth level wasn't "worth it" for Sapphire Rapids because most of the new AVX512 extensions were not autovectorized by compilers, but that a new level would be needed in the future. Perhaps they were thinking of APX, but couldn't disclose it.
Work out what it would cost to compile - say - a terabyte of C code at typical cloud spot prices.
A large VM with 128 cores can compile the 100 MB Linux kernel source tree in about 30 seconds. So… 200 MB/minute or 12 GB/hour. This would take 80 hours for a terabyte.
A 120 core AMD server is about 50c per hour on Azure (Linux spot pricing).
So… about $40 to compile an entire distro. Not exactly breaking the bank.
You'd have to separate out compiling and linking at a bare minimum to get even a semi-accurate model. Plus a lot of userspace is C++, which is much, much slower to compile.
In the end it will be like any other modern hardware appliance:
the hardware is the same design for cost-saving purposes, but different features are unlocked for $$$ by a software license key.
You want AVX-512? Pay up, unlock the feature in your CPU, and you can now use it. This could also enable a pay-as-you-go license scheme for CPUs, creating recurring revenue for Intel.
From the hardware perspective it's the same silicon, but with different features sold separately.
Yup. It's one of their theoretical advantages that's about to become a lot less theoretical. Historically it hasn't made much difference, because optional instructions were hard for most languages' JIT compilers to use (high-level JIT'd languages in particular tend not to support vector instructions very well). But a doubling of the register count is the sort of extension that any kind of code can immediately profit from.
Arguably, only JIT'd languages will benefit from this for quite a while. These sorts of fundamental changes amount to a new ISA, and the infrastructure isn't really geared up to make that easy. Everyone would have to ship two versions of every app and shared library to get the most benefit, and you might even get combinatorial complexity if people want to upgrade the inter-library calling conventions too. For natively AOT-compiled code it's going to be a mess.
Google would do anything to make it harder for others to crawl the web. Killing RSS was part of that strategy.
News sites will implement this DRM, but of course they will still allow Google, because it is their source of traffic. Alternative search engines and well-behaved bots will be locked out.
I get that it's more satisfying to blame Google than the faceless masses who had zero interest in RSS and who had a variety of alternatives to Reader in any case.
I guess they also had a strategy to kill social media by axing Google+ and user-created encyclopedias by killing Knol.
> Firefox (for which Google used to be the primary source of funds)

Google's deal with Mozilla was always about making Google the default search engine in Firefox, and that's it. They never had any power to cut or add features to the project.
Officially, sure, but you shouldn't pretend that Google's funding isn't the main lifeline for Mozilla as an entity, or that there isn't pressure there.
Oh boy. RSS died because it was "only for nerds". I've never met a person outside my tech bubble who used RSS or even knew what it was. That's not how the average Joe uses the internet.