
Yes, so the vendor must not store it. The privacy policy usually says something along those lines. If you don't trust the vendor to honor that, then don't opt in to sending data, or better yet, don't use the vendor's software at all.


Sometimes we have to, or simply want to, run software from developers we don't know or fully trust. That just means the developer needs to be treated as an attacker in your threat model, with mitigations to match, e.g., denying the program network access, as in the sketch below.
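
A minimal sketch of that kind of mitigation, assuming a Linux host with util-linux's unshare and unprivileged user namespaces enabled ("./vendor-app" is a hypothetical binary): run the untrusted program in a fresh network namespace so telemetry and phone-home attempts have nowhere to go.

    import subprocess

    # Run an untrusted vendor binary in a new network namespace.
    # --net gives it a namespace containing only a loopback
    # interface, so any telemetry upload or phone-home request
    # cannot leave the machine.
    # --map-root-user makes this work without root by mapping our
    # UID to root inside a new user namespace.
    result = subprocess.run(
        ["unshare", "--map-root-user", "--net", "./vendor-app"],
        check=False,  # report the app's exit status, don't raise
    )
    print(f"./vendor-app exited with status {result.returncode}")

Sandboxing tools like firejail or bubblewrap offer more complete versions of the same idea.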

I would argue that users can't inherently trust the average developer anymore. Ideas about telemetry, phoning home, conducting A/B tests and other experiments on users, and fundamentally, making the software do what the developer wants instead of what the user wants, have been thoroughly baked into many, many developers over the last 20 or so years. This is why actually taking privacy seriously has become a selling point: it stands out because most developers don't.


I can't argue that you are wrong, but I can argue that, for myself, if I don't trust a developer not to screw me over with telemetry, I can't trust them not to screw me over with their code. I can't think of a scenario where this trust isn't binary: either I can trust them (with telemetry AND code execution), or I can't trust them with either. Could you describe what scenario I am missing?


You’re not missing anything. In general, I don’t think you can really trust the vast majority of software developers anymore. Incentives are so ridiculously aligned against the user.

If you take the next step, "do not use software from vendors you don't trust," you severely limit the amount of software you can use. Each user gets to decide for himself whether that is a feasible trade-off.



