Hacker News | codedokode's comments

Honestly, it seems that most Web Standards are used mostly for fingerprinting. I suspect only a small number of websites use IndexedDB (who even needs it?) for actually storing data rather than for fingerprinting.

That's why expanding web standards is the wrong direction. Browsers should provide minimal APIs for interacting with the device, and features like IndexedDB could be implemented as WebAssembly libraries, leaking no valuable data.

For example, if canvas provided only access to the picture buffer, with no drawing routines that call into platform-specific libraries, it would become useless for fingerprinting.


You can use a browser extension like "Local Storage Editor" to see the contents of a website's Local Storage. So far, I've seen it used for caching long-lived images (like on Gmail), or as another way to do logins instead of cookies.

> You can use a browser extension like "Local Storage Editor" to see the contents of the Local Storage of a website.

Or just open dev tools


I'm with you up to the bit about canvas. The problem there is that if you want hardware acceleration then either you can't permit services to read back what was rendered (why do they need to do that again?) or else you're inevitably going to leak lots of very subtle platform specific details. Personally I think reading back the content of a canvas should be gated behind a permission dialog.

You can put hardware acceleration behind permission.

You can hire people to test your product and provide analytics, but you can't just siphon the data for free.

I'm not taking a side on whether a product should add telemetry. I'm rejecting the absurd notion that these suggestions are at all giving the same information.

Then pay for the data if you need it so bad.

You could hire people to be testers and pay them for the analytics; I think they would even allow you to record the screen if you paid well enough. The problem is that you do not want to pay or get consent, you want to grab the data for free, without permission and without people realizing what you do. People like that deserve much worse treatment than they get today.

Why do you need to collect a hardware fingerprint, IMEI, phone number, geolocation, list of nearby wifi access points, list of installed applications, selfie and passport photo when you can simply count how many times a server route was called?

My comment explicitly uses "how many people clicked the secondary button on the third tab" as an example, not any of that nonsense -- you are not responding in good faith.

That's a slippery slope and we both know it. Telemetry does not automatically include those things.

Indeed, it's not fair in the context of this discussion, so I wonder if it was meant as a statement on the ills of telemetry as a whole.

Analytics is wrong. I never click any ads, but they keep showing them. I avoid registering or enter fake emails, but they keep showing full-screen popups asking for my email. I always reject cookies, but they still ask me to accept them. And YouTube keeps pushing those vertical videos for alternatively gifted kids despite me never watching them. What's the point of this garbage analytics? It seems that its only goal is to annoy people.

All of those are affected by analytics.

Ad slots will be filled whether or not you click. If you never click, you'll tend to match with either very low quality ads or ads that pay per impression (display ads).

Email registration is highly valuable for a business, so analytics won't be used to decide whether to show the modal but rather to test different versions of it.

Cookies are too valuable to not push on users, because without them only the previously mentioned low quality ads can be shown. High quality and display ads match on interest or demographic labels.

The business decision to keep vertical videos is highly likely to be affected by analytics, and of course the choice of which videos to show is based on recommendation models trained on interaction logs.

The priority isn't making your experience better, though that is often an incidental result -- it's driving the business.


You can have strings by using relative pointers ("string starts 123 bytes before this").

You can also just use an array with a fixed maximum capacity, and either a null terminator or a separate size field.

In practice you probably want to have both, and choose what's most practical based on the message.


I disagree. Big endian is long dead and not worth worrying about, and code pages too. What is more important is dealing with schema changes, when you add new fields to requests and responses.

There are niches where those matter.

but yes, schema changes are what's most likely to get you today


What you use is perfect for short-range communication (an application and a child process talking over shared memory), but not good for long-range communication (over the Internet), because an old client can be talking to a new version of a server, so you would have to add version numbers and write code to parse outdated formats. Protobuf has compatibility built in, so you do not need to write anything to support outdated clients. Also, protobuf uses tricks like varints to compress data and use less network traffic. So it is clearly made for long-range communication; you probably do not need that, and you send 7 zero bytes for every small number.

TL;DR protobuf has version compatibility and compact number encoding.


I already said you can use UUIDs and schemas, and even dynamic conversion between mismatched schemas.

Doing plain C structs doesn't prevent any of this.


It requires extra effort to write a conversion algorithm for an older version of the data structure.

The converter is generated automatically based on the differences between the two schemas.

Takes zero effort other than CPU cycles.


As I understand it, protobuf has compatibility (it stores field ids), so a new service can read requests from older clients, and vice versa, so you do not need to refactor anything. Also, it is made for long-range communication and is inefficient for inter-process or inter-thread messaging.

Presumably, OP refers to the generated Rust types, which depend on the specific protobuf framework.

I had the same issue when looking to adopt ConnectRPC for Go, which uses a custom wrapper type to model requests.

