Techniques like differential privacy do not work for some types of data models, including many of the more interesting/risky ones. I've never seen a technique that can deliver an analytical model at scale that is simultaneously analytically effective, anonymous, and robust against sophisticated de-anonymization attacks. There is no good theoretical foundation suggesting such a thing is possible.

Most modern anonymization techniques rest on assumptions that won't constrain sophisticated blackhats. They are good policy in a legal ass-covering sense, and they raise the cost of de-anonymization, but that is about it.
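
To make that concrete, the classic failure mode is a linkage attack: join the "anonymized" release against a public auxiliary dataset on quasi-identifiers. A toy sketch in Python (all names and values invented for illustration):

    # Toy linkage attack: re-identify an "anonymized" release by joining
    # it against a public auxiliary dataset on quasi-identifiers.
    anonymized = [
        {"zip": "02139", "age": 34, "sex": "F", "diagnosis": "hypertension"},
        {"zip": "94110", "age": 51, "sex": "M", "diagnosis": "diabetes"},
    ]
    public_records = [
        {"name": "Alice", "zip": "02139", "age": 34, "sex": "F"},
        {"name": "Bob",   "zip": "94110", "age": 51, "sex": "M"},
    ]

    quasi_identifiers = ("zip", "age", "sex")
    index = {tuple(p[k] for k in quasi_identifiers): p["name"]
             for p in public_records}

    for row in anonymized:
        name = index.get(tuple(row[k] for k in quasi_identifiers))
        if name is not None:
            print(f"{name} -> {row['diagnosis']}")  # re-identified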


Can you give an example? Global DP systems generally offer excellent privacy protection relative to the analytical distortion they introduce. Global DP often isn't deployed simply because of workflow inconveniences, but otherwise it's great for analytical use cases.
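
For anyone unfamiliar, the core of global DP is a trusted curator adding noise calibrated to each query's sensitivity before releasing the answer. A minimal sketch of the Laplace mechanism on a counting query (the epsilon and toy data here are made up):

    import random

    def dp_count(rows, predicate, epsilon):
        # Counting queries have sensitivity 1: adding or removing one
        # person's row changes the true count by at most 1, so Laplace
        # noise with scale 1/epsilon gives epsilon-differential privacy.
        true_count = sum(1 for r in rows if predicate(r))
        # Difference of two iid Exp(epsilon) draws is Laplace(0, 1/epsilon).
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

    ages = [23, 45, 31, 67, 52, 38, 41]
    print(dp_count(ages, lambda age: age > 40, epsilon=0.5))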


If your scale is large enough and you don't need to identify individuals, synthetic data achieves that combination fairly effectively.
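
Concretely: fit a generative model to the real data and release samples from the model instead of the rows themselves. A toy version, using a multivariate Gaussian as a stand-in for whatever model you'd actually fit (note that a plain fit like this carries no formal privacy guarantee, which is the thread's original point):

    import numpy as np

    def fit_and_sample(real, n_synthetic, seed=0):
        # Fit a multivariate Gaussian to the real data and sample synthetic
        # records: means and covariances survive, individual rows do not.
        rng = np.random.default_rng(seed)
        mean = real.mean(axis=0)
        cov = np.cov(real, rowvar=False)
        return rng.multivariate_normal(mean, cov, size=n_synthetic)

    real = np.random.default_rng(42).normal(size=(10_000, 3))  # stand-in dataset
    synthetic = fit_and_sample(real, n_synthetic=10_000)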
