> It's the norm in healthcare (HIPAA), disclosure is required for breaches that affect 500+ persons, and even <500 person breaches have to be reported annually to HHS and to the individual at the time of discovery.

> https://www.cms.gov/Outreach-and-Education/Medicare-Learning...

> edit: less-than sign wrong way*

Breaches, not vulnerabilities. The question here is not whether breaches should be disclosed[0], but whether newly discovered, believed-to-be-unexploited vulnerabilities should be.

[0]: They should, of course, after a reasonable period in which to patch the vulnerability that was used.




Easy fix: just design your system so that you can't confirm whether there ever was a breach, because you deleted all the old data.


If the implication is that Google deletes logs to avoid having to disclose breaches, you've got it completely backwards. The default is to delete disaggregated log data as soon as possible after collection. There is a very high bar for retaining log data at Google, and generally speaking it's easier to get approval to log something if it's set up to be deleted after a short time span. I'm not sure that's what happened here, but that would be my guess.
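For a concrete picture of what that default looks like, here is a minimal Python sketch: every log sink declares a time-to-live, short retention is the default, and anything longer is an explicit exception. Sink names and TTLs are invented for illustration, not Google's actual configuration.

    import datetime

    # Hypothetical policy: every log sink declares a TTL. Short retention is
    # the default; anything longer needs an explicit, approved exception.
    RETENTION = {
        "api_access_log": datetime.timedelta(days=14),      # short default (cf. the two-week window below)
        "billing_audit_log": datetime.timedelta(days=365),  # approved exception
    }

    def is_expired(record_time, sink, now=None):
        """Return True if a record is past its sink's TTL and should be purged."""
        now = now or datetime.datetime.now(datetime.timezone.utc)
        return now - record_time > RETENTION[sink]

Under a scheme like this, a scheduled job purges expired records, which is why any later analysis window can't reach back further than the sink's TTL.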


> believed-to-be-unexploited vulnerabilities

You cannot (realistically) prove the negative. If you have a vulnerability, you must treat it as though it has been exploited.


That is the kind of argument that carries a lot of force on a message board, but it is not at all aligned with how the world actually works. In reality, almost nobody operates under the norm of "any vulnerability found must have been exploited", and even fewer organizations disclose as if they did.

You can want the world to work differently, but to do so coherently I think you should explicitly engage with the unintended consequences of such a policy.


Sometimes you can, if you have comprehensive logs that cover it.

edit: Within reason, anyway. Obviously, if your vulnerability includes write access to the logs or something, then you're poked.
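As a minimal sketch of what checking those logs could look like (assuming, per the caveat above, that the logs themselves are trustworthy): replay the retained access logs and surface every request that exercised the vulnerable code path during the exposure window. The log file name, record schema, and endpoint path below are all invented for illustration.

    import json

    # Hypothetical path of the vulnerable endpoint; logs are assumed to be
    # newline-delimited JSON with "endpoint" and "caller_id" fields.
    VULNERABLE_ENDPOINT = "/v1/people/get"

    def requests_to_review(log_path):
        """Yield access-log entries that hit the vulnerable endpoint."""
        with open(log_path) as f:
            for line in f:
                entry = json.loads(line)
                if entry.get("endpoint") == VULNERABLE_ENDPOINT:
                    yield entry

    # Review every caller that touched the endpoint while it was vulnerable:
    #   for entry in requests_to_review("access.log"):
    #       print(entry["caller_id"], entry["timestamp"])

If that sweep comes back empty over the whole exposure window, you have actual evidence of non-exploitation rather than just an absence of retained data.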


I think in this particular case, the policy statement in the sister article on the Google blog indicates they couldn't really say that.

> We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API.

^ the above statement, but couched with this:

> We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any Profile data was misused.


I wasn't arguing one way or the other on the issue, just reframing it so everyone's on the same page.

Devil's advocate: Do you believe that proactive security assessments would still be performed if each vulnerability found were required to be disclosed as though it had been exploited?



