
Source code availability makes it a lot easier to find vulnerabilities. Open source code is much more likely to already have been audited better. Closed source code often depends more heavily on security by obscurity, and unexpected source release can definitely make vulnerabilities immediately apparent that weren't known prior.



> Open source code is much more likely to already have been audited better.

Common wisdom. It just happens to not be true. People just aren't auditing random code on github for fun. Auditing code is hard and time-consuming. Most vulnerabilities are found by techniques like fuzzing, not by combing through thousands of lines of code.


I used to do it: every time I installed a new package/game/service I'd look at the code. That resulted in a whole bunch of security reports.

I still do it for fun, but not methodically, and not regularly. It's a great way to look at code, to learn, and sometimes it pays off.

e.g. reporting a bunch of trivial predictable-filename issues in GNU Emacs, including something referring to the (ancient) Mosaic support:

https://bugs.debian.org/747100
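
For anyone unfamiliar with that bug class, here's a minimal C sketch of the pattern: a predictable name under /tmp that an attacker can pre-create as a symlink, versus mkstemp(). This is a hypothetical illustration, not the actual code from the Emacs report.

    /* Hypothetical illustration of the predictable-filename bug class;
       not the actual Emacs code. If the name is guessable, an attacker
       can plant a symlink there first and redirect the write. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        /* Vulnerable: the name is trivially guessable from the PID. */
        char bad[64];
        snprintf(bad, sizeof bad, "/tmp/report-%d.txt", (int)getpid());
        /* fopen(bad, "w") here would happily follow a planted symlink. */

        /* Safer: mkstemp() creates the file atomically (O_CREAT|O_EXCL)
           and replaces the XXXXXX with an unpredictable suffix. */
        char good[] = "/tmp/report-XXXXXX";
        int fd = mkstemp(good);
        if (fd == -1) { perror("mkstemp"); return 1; }
        write(fd, "hello\n", 6);
        close(fd);
        return 0;
    }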

Fuzzing is definitely useful, and I've reported issues in awk, etc., but fuzzing tends to be used when you have a specific target in mind. I'd rarely make the effort to recompile a completely random/unknown binary with instrumentation for that.


That is awesome.


The point that "Open source code is much more likely to already have been audited better" is actually true, but with the caveats that 99% of code isn't audited at all and that the 'better' claim is dubious. Security-focused devs audit OSS projects for practice, for bounties, for the glory of finding something in a popular codebase, and just to contribute their skills. It does happen.

In the closed source world, very few companies will pay for their source code to be audited, because it's expensive and time-consuming, and most only do it if they're required to.


> very few companies will pay for their source code to be audited, because it's expensive and time-consuming, and most only do it if they're required to.

And even when they do, in my experience, they usually end up buying an expensive automated report that provides little or no real insight.


> they usually end up buying an expensive automated report that provides little or no real insight.

That's totally what we are in the process of doing (with two separate tools, even, for even more wasted time!)


Code intended to be closed and released unexpectedly seems like the worst of both worlds though.


Just adding my 2c from a few (horrible) years of WordPress development:

When writing integrations between third-party modules, the documentation was rarely enough, so with open-source ones I generally went through at least half of each module's (backend) source code to find all the hooks I needed, and I'd semi-often find fairly standard security issues and report them.

In contrast, if a plugin was closed-source and obfuscated, I would just go bother their support, and so their code was never looked at by anyone other than the two core devs. When I inevitably had to reverse-engineer parts of the code anyway and discovered issues, I got far more "hey, you broke the EULA!" responses than "thanks for the report".


Developing open source software comes with some incentives to write cleaner code.

Speaking from personal experience, here are my thoughts on it:

First, with open source software you (potentially) expose what you write to the whole world; as a consequence, you don't want to look ridiculous by publishing atrociously bad code. In contrast, in a corporate environment the code is seen by few eyes, if any, and even when it is read, the readers are roughly a known quantity.

Second, you have far more time/freedom, especially if it's a personal project, to think about design/code architecture and to rework things if required. You can also spend time on things like unit tests, fuzzing, etc. Basically, you can more easily work on all those things that are "valuable" but difficult to "quantify/measure".

Third, people working on OSS projects are generally a bit more motivated, either because it's their subject of interest, or because of the general appeal of OSS.

Fourth, with OSS you often have more resources/services available to help with code quality. CI, static analysis, dependency auditing, etc. are one click/integration away. In a corporate environment, procurement for such services can be off-putting, integration setup can be an uphill battle, and there can be strong restrictions regarding external services.

Just as an example, at a previous job I tried to get budget for a Jenkins server and never got it, so I ended up "stealing" an old abandoned server from another project; even then, I ran into issues because I couldn't configure the post-commit hooks to trigger a CI build. Heck, at that job even my home desktop was a 3 or 4 times better development machine than my work laptop.

Things are changing slowly: good code coverage is more and more a goal, if not a company policy; code review processes are more and more common; and companies are a bit less paranoid about trusting external services for CI/analysis/fuzzing. But from what I have seen in most of my jobs, proprietary code bases tend to be lower quality than most OSS projects. Even the code I've produced in my free time tends to be better than the code I've written at work.

It's not absolute, however; you can still have terrible OSS code bases. It's just that OSS projects gear you a bit more towards better code quality.


> People just aren't auditing random code on github for fun

Yes, they do: https://www.fsf.org/blogs/community/who-actually-reads-the-c...


Maybe the person you are replying to should have qualified “popular open source repositories”.


Like OpenSSL? Rhetorical question; OpenSSL was both open source and broadly used, and it took over two years to identify Heartbleed.

Plus, many companies, Microsoft included, open up their source code to partners.

The openness of source code has little correlation to its security.


I know it's not quite that simple, but isn't OpenSSL exactly an example of how a bug in open source software was found and fixed? Of course it took a while, and the software was already extremely widely used at that point, but bugs happen, and at least it's not just lying around unfixed. I can't remember bugs in closed software getting the same kind of exposure.


I'm not sure if Heartbleed is a good example here, given that it was basically a new class of exploit.


Wasn't Heartbleed a fairly typical buffer overflow?


The typical buffer overflow would have been caught by OpenBSD's protective malloc.

> [...] OpenSSL adds a wrapper around malloc & free so that the library will cache memory on it's own, and not free it to the protective malloc. [...] So then a bug shows up which leaks the content of memory mishandled by that layer. [...]

https://marc.info/?l=openbsd-misc&m=139698608410938&w=2
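
To make the quoted point concrete, here is a simplified C sketch of that kind of caching wrapper; it illustrates the mechanism only and is not OpenSSL's actual code. Because free() is never reached, a hardened allocator never gets a chance to poison, guard, or unmap the memory, and stale data survives into the next allocation.

    /* Simplified freelist-style malloc wrapper, in the spirit of what
       the quoted post describes; not OpenSSL's actual implementation. */
    #include <stdlib.h>

    enum { BUF_SIZE = 4096 };
    struct node { struct node *next; };
    static struct node *freelist;      /* private cache of "freed" buffers */

    void *cached_alloc(void) {
        if (freelist) {                /* reused without clearing: the    */
            struct node *n = freelist; /* previous contents are still in  */
            freelist = n->next;        /* there for the next caller       */
            return n;
        }
        return malloc(BUF_SIZE);       /* only this path touches the real */
    }                                  /* allocator                       */

    void cached_free(void *p) {
        struct node *n = p;            /* free() is never called, so      */
        n->next = freelist;            /* use-after-free and guard-page   */
        freelist = n;                  /* checks in malloc never fire     */
    }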


I don’t think the vulnerability was in malloc'd memory; it was some buffer on the stack. I’ve actually patched OpenSSL to stop Heartbleed as an exercise, and IIRC the fix was in fact just preventing a typical buffer overflow.


Seems like that commenter is also saying that it would’ve been caught as a regular buffer overflow bug?

> OpenSSL is not developed by a responsible team.


I've always thought of buffer overflow as writing beyond the intended bounds of the buffer.

Heartbleed is reading beyond the intended bounds remotely. I don't think there were similar attacks beforehand, but I could be wrong. I only have a base-level knowledge here.
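
For reference, the core of Heartbleed was exactly that: a memcpy() whose length came from a field in the attacker's own message. A stripped-down C sketch of the pattern (not OpenSSL's actual code):

    #include <stddef.h>
    #include <string.h>

    /* msg: received record, laid out as [1-byte type][2-byte len][payload] */
    size_t build_reply(const unsigned char *msg, size_t msg_len,
                       unsigned char *reply) {
        size_t claimed = (size_t)((msg[1] << 8) | msg[2]); /* attacker-controlled */

        /* BUG: copies 'claimed' bytes even if the record only carried a
           few, so whatever sits next to the payload in memory (keys,
           cookies, ...) leaks back to the attacker. */
        memcpy(reply, msg + 3, claimed);

        /* The fix was essentially a bounds check along these lines:
           if (claimed + 3 > msg_len) return 0;  (discard, don't reply) */
        return claimed;
    }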


Infoleaks are nothing new.


> People just aren't auditing random code on github for fun

No, just the important code that everyone is running.


You don't have to audit that. It's so popular, someone else must have done a thorough review already!


AFAIK it had the opposite effect for OpenSSL. Not only was the code so bad that it would crash if run with a secure malloc implementation, but because it was free and open source nobody felt the need to donate[1], with only one developer employed to work on it full time.

[1] https://arstechnica.com/information-technology/2014/04/tech-...


Well, eventually someone looked at it. And Heartbleed had probably been used for a long time before it was published.


I have to confess that I have run afl on random code on github.


>> Open source code is much more likely to already have been audited better.

> It just happens to not be true.

I think it's true: free and open source code, on average, is more likely to have been audited to a greater extent. I think most people confuse "audited to a greater extent", "coverage" and "security". The paradox here is: merely increasing the chance of fixing bugs does not automatically guarantee security by itself, nor does it guarantee sufficient code coverage.

For example, suppose a codebase has 10 serious RCE vulnerabilities: if it's a binary-only program, a pentester may be able to find 3, but if it's FOSS, the community might be able to find 5. Yet the remaining vulnerabilities are still exploitable. And paradoxically, a FOSS project can have fewer remaining exploits than a binary-only alternative by numerical measurement, yet the mere discovery of a new exploit can create a huge storm and lead to the perception that the software is less secure, even if that's objectively false by numerical measurement.

My opinion is that free and open source code, in general, often objectively reduces the number of exploits compared to a binary-only alternative. It doesn't eliminate exploits. The important question here is code coverage: a group of random hackers browsing code can never replace a systematic review (organized by the community or otherwise). Nor does openness make the software inherently secure; a program that uses suboptimal programming techniques is more prone to exploits, and more reviews cannot change that fact. However, the exploits discovered and fixed by a group of random hackers are still real security improvements.

For example, even before Heartbleed, OpenSSL was attacked by researchers around the world; some exploits involved advanced side-channel and timing attacks. The bugs they discovered and fixed are real. Now imagine a binary-only OpenSSL in a parallel universe, called CloseSSL (with other conditions, such as an underfunded development team, staying the same). In that universe, fewer exploits are discovered and fixed, and CloseSSL may be more vulnerable to timing attacks than the OpenSSL in our universe, so our OpenSSL is objectively more 'secure' by the numbers. But both are vulnerable to Heartbleed. In other words, being more 'secure' by numerical measurement does not translate to real-world security; on the other hand, the numerical measurement showing the superiority of FOSS is nevertheless real. Of course, real-world programs do not behave like an ideal model: being FOSS or not also correlates with other variables, such as the size of funding or audit coverage. My argument is only an ideal model that treats all variables as independent.

I call it the anti-Linus's law: given more eyeballs, not all bugs are shallow, unless there are enough eyeballs. But more eyeballs are always better than fewer.

> Most vulnerabilities are found by techniques like fuzzing, not by combing through thousands of lines of code.

Having the source code available allows pentesters and auditors to use compiler-based instrumentation for fuzzing, which is more efficient than binary fuzzing.
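
As a concrete sketch: with source in hand you can give the compiler a tiny harness and let it insert the coverage instrumentation that guides the fuzzer, whereas a binary-only target limits you to slower approaches like emulation. parse_record() below is a hypothetical stand-in for whatever function is being audited.

    /* fuzz_target.c: a minimal libFuzzer harness.
       Build with clang:
           clang -g -O1 -fsanitize=fuzzer,address fuzz_target.c parse.c
       The compiler inserts coverage counters that the fuzzer uses to
       guide its mutations; that is the instrumentation you only get
       when you can recompile from source. */
    #include <stddef.h>
    #include <stdint.h>

    int parse_record(const uint8_t *data, size_t size);  /* hypothetical */

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_record(data, size);  /* ASan flags any out-of-bounds access */
        return 0;
    }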


> Having the source code available allows pentesters and auditors to use compiler-based instrumentation for fuzzing, which is more efficient than binary fuzzing.

I will concede that is a pretty valid point. My argument is basically that there is a false sense of open source code being "more secure" because of an assumption that "the community" is checking it thoroughly. Most people will just grab it off of github and run it without giving it a second thought. Generally speaking, you don't get high-quality full code audits for free; pentesters and auditors like to get paid and aren't out there testing github codebases out of the goodness of their hearts.


I have to believe, given the sheer size of these communities, that the source code being available only helped to confirm what was already known. The panic seen here hearkens back to the days when companies made similarly ridiculous claims about the security of open source software compared to proprietary software.


That seems like quite a stretch. The difference between having the source code and not having it is night and day as far as exploring potential vulnerabilities goes... which is one of the strengths of open source, as you point out, but this code was not intended to be (or written as) open source, hence the panic. Feels like you missed the mark on this one.


> Open source code is much more likely to already have been audited better.

Worth keeping in mind this isn’t a silver bullet. OpenSSL with Heartbleed comes to mind.


Very true, but OpenSSL in particular is rather infamous. Unfortunate given that so much relies on it. https://news.ycombinator.com/item?id=7556407


This is why the assumption that “open source code is more likely to be closely audited for vulnerabilities” is not true (even for incredibly core/important projects with a wide scope) and is potentially dangerous to rely on.


> This is why the assumption that “open source code is more likely to be closely audited for vulnerabilities” is not true...

That is a safe assumption; otherwise you'd have to believe that non-open-source code is more closely audited, at greater expense, because businesses secretly prioritize security.


It's not 100% true in every case, but it is in practice. Especially with an unexpected leak.


Shellshock still outperforms any security issue of OpenSSL in terms of time in the wild.


Open source code being more secure is a myth.


The whole "open source is audited better than closed source" idea is nothing but a myth, and I'm actually quite surprised to see this statement appear on HN.


Every statement you just made is speculation and not backed up by any meaningful data. While it’s obviously “easier” to find bugs when you can view the source code, making it one or the other doesn’t bestow any magical protections on the software.


"Time and effort required" in order to find vulnerabilities is not a magical protection. It is a legitimate protection. Not one that should be relied on, but very much something that factors in. Open sourcing software doesn't immediately improve security, but it drastically lowers the barrier of entry for researchers to start looking into it.



