>> Open source code is much more likely to already have been audited better.
> It just happens to not be true.
I think it's true: free and open source code, on average, is more likely to have been audited to a greater extent. I think most people conflate "audited to a greater extent", "coverage", and "security". The paradox here is: merely increasing the chance of finding and fixing bugs does not automatically guarantee security by itself, nor does it guarantee sufficient code coverage.
For example, suppose a codebase has 10 serious RCE vulnerabilities. If it's a binary-only program, pentesters may be able to find 3; if it's FOSS, the community might be able to find 5. Yet the remaining vulnerabilities are still exploitable. And paradoxically, a project can have fewer exploits than its binary-only alternative by a numerical measurement, yet the mere discovery of a new exploit can create a huge storm and lead to the perception that the software is less secure, even if that is objectively false by the same numerical measurement.
My opinion is: free and open source code, in general, often objectively reduces the number of exploits compared to its binary-only alternative. It doesn't eliminate exploits. The important question here is code coverage - a group of random hackers browsing code can never replace a systematic review (organized by the community or otherwise). Nor does it make the software inherently secure: a program that uses suboptimal programming techniques is more prone to exploits, and more reviews cannot change that fact. However, the exploits discovered and fixed by a group of random hackers are still real security improvements.
For example, OpenSSL, even before Heartbleed, was attacked by researchers around the world; some exploits involved advanced side-channel and timing attacks. The bugs they discovered and fixed are real. Now imagine a binary-only OpenSSL in a parallel universe, called CloseSSL (while other conditions - such as an underfunded development team - remain the same). In that universe, fewer exploits are discovered and fixed, and CloseSSL may be more vulnerable to timing attacks than the OpenSSL in our universe, so our OpenSSL is objectively more "secure". But both are vulnerable to Heartbleed. In other words, being more "secure" by a numerical measurement does not translate to real-world security; on the other hand, the numerical measurement showing the superiority of FOSS is nevertheless real. Of course, real-world programs do not behave like an ideal model: being FOSS or not also correlates with other variables, such as the size of funding or audit coverage. My argument is only an ideal model that treats all variables as independent.
I call it the anti-Linus's law: given more eyeballs, not all bugs are shallow, unless there are enough eyeballs. But more eyeballs are always better than fewer.
> Most vulnerabilities are found by techniques like fuzzing, not by combing through thousands of lines of code.
Having the source code available allows pentesters and auditors to use compiler-based instrumentation for fuzzing, which is more efficient than binary fuzzing.
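To make that concrete, here is a minimal sketch of a libFuzzer harness (parse_record is an invented stand-in for whatever code is actually under test). With source in hand, clang's -fsanitize=fuzzer,address compiles coverage feedback and memory-safety checks straight into the target; a binary-only target instead has to be fuzzed through emulation or binary rewriting (e.g. QEMU-mode AFL++), which is typically far slower per execution.

    /* fuzz_target.c - illustration only; parse_record is a hypothetical
     * parser with a deliberate out-of-bounds write. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    static int parse_record(const uint8_t *data, size_t size) {
        uint8_t buf[16];
        if (size < 2) return 0;
        size_t len = data[0];              /* attacker-controlled length byte */
        if (len > size - 1) len = size - 1;
        memcpy(buf, data + 1, len);        /* overflows buf when len > 16 */
        return buf[0];
    }

    /* libFuzzer entry point; clang links its own main(). */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_record(data, size);
        return 0;
    }

    /* Build and run with source-level instrumentation (assumes clang):
     *   clang -g -O1 -fsanitize=fuzzer,address fuzz_target.c -o fuzz_target
     *   ./fuzz_target
     * AddressSanitizer flags the stack overflow as soon as the fuzzer
     * finds an input with a length byte greater than 16. */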
> Having the source code available allows pentesters and auditors to use compiler-based instrumentation for fuzzing, which is more efficient than binary fuzzing.
I will concede that is a pretty valid point. My argument is basically that there is a false sense of open source code being "more secure" because of an assumption that "the community" is checking it thoroughly. Most people will just grab it off of GitHub and run it without giving it a second thought. Generally speaking, you don't get high-quality full code audits for free; pentesters and auditors generally like to get paid and aren't out there testing GitHub codebases out of the goodness of their hearts.