Writing widely-deployed infrastructure (web browsers, networking stacks, device drivers, cryptographic libraries like OpenSSL) in C will be less and less justifiable over time. Rust is far from optimized right now. Rust is probably not going to be a language that you will script in. But it is a language that will possibly give Mozilla the world's most secure web browser by a significant margin if they succeed in writing the most bug-prone components in Rust.

It takes more thought to write Rust programs. If you want to write safer widely-deployed infrastructure, you should learn about it. It may not be your best option right now, but its ideals are ones we absolutely need to strive for. Acting macho about C and the ability to write it safely does not lead to fewer RCEs in the world. It leads to more, because newcomers may treat the people who can do so as examples to emulate long before they themselves are capable of it.

Note that no serious critic claims that Rust will not prevent some bugs, and no serious proponent claims that Rust will prevent all bugs. It is an effort that strives to make progress in an area we can all agree on: the safety of our programs, with minimal performance degradation.




> But it is a language that will possibly give Mozilla the world's most secure web browser by a significant margin if they succeed in writing the most bug-prone components in Rust.

A memory-safe language alone isn't enough. Look at Java. Isn't that supposed to be a memory-safe language? And yet it's constantly got security issues.


I would suggest that the security issues in Java are due to bugs in the various JVM implementations, which are likely written in C, right?

That's not to say that Java is going to eliminate all bugs. However, it should eliminate the class of bugs where, e.g., reading or writing out of the bounds of an array in application code results in RCE. Reading or writing out of the bounds of an array will still be a bug, but not one that allows arbitrary code execution (at least not in the way it can in a language like C).
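
To make that concrete, here is a minimal sketch of the guarantee in question. It's in Rust rather than Java, since Rust is the language under discussion, but Java behaves analogously by throwing an ArrayIndexOutOfBoundsException:

    fn main() {
        let buf = vec![1u8, 2, 3];
        let i: usize = 10; // imagine an attacker-controlled index

        // Checked access: out of bounds is an ordinary, recoverable error.
        assert_eq!(buf.get(i), None);

        // Plain indexing is still bounds-checked at runtime. It panics with
        // "index out of bounds" instead of reading or writing neighbouring
        // memory, so the bug cannot escalate into arbitrary code execution
        // the way an unchecked C array access can.
        let _ = buf[i];
    }

The worst case is a crash of the offending process (a denial of service), not a corrupted heap that an attacker can turn into code execution.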


And many of them, from a quick glance, are actually due to complicated sandboxing policies. That is, RCE was an explicit feature, but they found it hard to get right.

In fact, one of the first real security holes I ever found was in the 1.0 version of the CLR. You could load a function pointer to an un-JIT'd function, then later use that to jump to other code. This is technically an RCE, but the user has to be executing your code first anyways. That is, if the CLR and Java hadn't set out to run "partially trusted" code with no help from the OS, these wouldn't have been problems at all.

Other languages like C, Ruby, etc. simply don't have this "partially trusted" concept, so there's nothing to attack. (Well, I guess things like Native Client or VMware qualify, but they're at a better abstraction layer than JVM or CLR permissions.)


When I was an undergrad at UW, I engineered a vacuum bug that took advantage of a JDK 1.1 bug in constant pool verification to suck out the user's in-memory environment configuration (PATH variables and such).

http://www.cs.cornell.edu/People/egs/kimera/flaws/vacuum/

But most security holes these days are not flaws in the JVM/CLR, but logical errors made in the application; e.g. allocating a buffer and reusing it, which says nothing about memory safety at the VM level at all!
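
To make that concrete, here is a contrived Rust sketch (the handler and the data are made up) of exactly that kind of application-level flaw. Every access is bounds-checked and memory-safe, yet it still discloses one caller's data to another:

    // A handler that reuses one scratch buffer across requests, but forgets
    // to clear it between callers.
    fn echo(scratch: &mut Vec<u8>, request: &[u8]) -> Vec<u8> {
        // Missing: scratch.clear();
        scratch.extend_from_slice(request);
        // ...so the reply contains the previous contents plus this request.
        scratch.clone()
    }

    fn main() {
        let mut scratch = Vec::new();
        let _first = echo(&mut scratch, b"secret-session-token");
        let second = echo(&mut scratch, b"ping");
        // The second reply still carries the first caller's data:
        assert!(second.starts_with(b"secret-session-token"));
    }

No VM or borrow checker can know that this is a disclosure bug; the program never touches memory it doesn't own.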

And at that, no one really trusts JVM security anymore, preferring to sandbox the entire operating environment in a heavyweight or lightweight VM.


> But most security holes these days are not flaws in the JVM/CLR, but logical errors made in the application; e.g. allocating a buffer and reusing it, which says nothing about memory safety at the VM level at all!

But most applications are written in memory-safe languages, so it's not surprising that most vulnerabilities are found in non-memory-safety related areas. The more interesting statistic is the number of critical security bugs that are memory-safety related in non-memory-safe languages.


That doesn't address what I said in the sentence you quoted at all. We were discussing Java and the vulnerabilities that arise from flaws in the JVM. You are talking about something completely different.


Neat! In 2003 I was writing an obfuscator for .NET, leading me to explore quite a bit. It was fun.

But app-level bugs aren't what gives Java a bad name, are they? When someone says, like the GP, "what about Java, that's got tons of security issues", that's almost certainly from its use as a browser plugin. Otherwise everyone would be saying the same about every language out there.


We found lots of bugs in Microsoft's Java implementation at the time; they offered us lots of money for our test suite but Brian wanted to do a startup :) Anyways, if you want to break something, you can usually get there with fuzz testing (but these days, most sane organizations will fuzz themselves).

Ya, when people say Java is insecure, they usually mean the Java plugin has insecure interfaces. As the browser + Java combination becomes increasingly rare, it fades from our memory. There is nothing insecure or secure about the language; memory safety actually works... but it's only one small part of having a secure environment.

I worry about Rust: it pushes the safety card much more aggressively, but in reality, without full-on aggressive dependent typing, they'll only be able to guarantee a few basic properties. The language won't magically make your code "secure", just a bit easier to secure.


> I worry about Rust: it pushes the safety card much more aggressively, but in reality, without full-on aggressive dependent typing, they'll only be able to guarantee a few basic properties.

Our analysis of the security benefit of Rust comes from two very simple, empirical facts:

1. Apps written in memory-safe languages do not have nearly the same numbers of memory safety bugs (use after free, heap overflow, etc.) as apps written in C and C++ do.

2. Memory safety issues make up the largest number of critical RCE bugs in browser engines.

> The language won't magically make your code "secure", just a bit easier to secure.

Of course it won't magically make your code secure. Applications written in Rust will have security vulnerabilities, some of them critical. But I'm at a loss as to how you can claim that getting rid of all the use-after-free bugs (just to name one class) inside a multi-million-line browser engine in a language with manual memory management is easy. Nobody has ever succeeded at it, despite over a decade of sustained engineering effort on multiple browser engines.
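
For reference, here is a sketch of the shape of bug in question: hold a reference into a container, then mutate the container so it reallocates. The C++ analogue compiles happily and later reads freed memory; the Rust version below is shown for illustration only, because it deliberately does not compile:

    fn main() {
        let mut nodes = vec![String::from("<body>")];
        let first = &nodes[0];              // immutable borrow of an element
        nodes.push(String::from("<div>"));  // error[E0502]: cannot borrow
                                            // `nodes` as mutable because it
                                            // is also borrowed as immutable
        println!("{}", first);              // the borrow is still live here
    }

In C++, the equivalent pattern (a reallocation invalidating a pointer that is used later) is exactly the kind of use-after-free we keep finding in engines.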


> But I'm at a loss as to how you can claim that getting rid of all the use-after-free bugs (just to name one class) inside a multi-million-line browser engine in a language with manual memory management is easy

I didn't claim it was "easy", just that bugs would still exist and the browser's "securedness" would only increase marginally. The problem is not that this wouldn't be an achievement; it's one of managing expectations (e.g. saying "Rust is secure", which doesn't make sense).

> Nobody has ever succeeded at it, despite over a decade of sustained engineering effort on multiple browser engines.

It is a nice goal, but the question is: where is the world afterwards? Will Firefox all of a sudden become substantially more secure and robust vs. its competition, to the extent that it can outcompete them and increase its market share significantly?

My brief experience at Coverity makes me guess that you could be getting rid of one class of bugs without necessarily improving the product in any noticeable way...that ya, those bugs were common but not particularly easy to exploit or hard to fix once found.

Not that the effort isn't worthy at all. I'm just a bit cynical when it comes to seeing the tangible benefits.


> I didn't claim it was "easy", just that bugs would still exist and the browser's "securedness" would only increase marginally.

I disagree with the latter. Based on our analysis, Rust provides a defense against the majority of critical security bugs in Gecko.

> It is a nice goal, but the question is: where is the world afterwards? Will Firefox all of a sudden become substantially more secure and robust vs. its competition, to the extent that it can outcompete them and increase its market share significantly?

You've changed the question from "will this increase security" to "is improved security going to result in users choosing Firefox en masse". The latter question is a business question, not a technical question, and not one relevant to Rust or this thread. At the limit, it's asking "why should engineering resources be spent improving the product".

Rust is a tool to defend against memory safety vulnerabilities. It's also a tool to make systems programming more accessible to programmers who aren't long-time C++ experts and to make concurrent and parallel programming in large-scale systems more robust. The combination of those things makes it a significant advance over what we had to work with before, in my mind.

> My brief experience at Coverity makes me guess that you could be getting rid of one class of bugs without necessarily improving the product in any noticeable way...that ya, those bugs were common but not particularly easy to exploit or hard to fix once found.

It is true that exploitation of UAF (for example) is not within the skill level of most programmers and that individual UAFs are easy to fix. But "hard for most programmers to exploit and easy to fix" doesn't seem to be much of a mitigation. For example, the Rails YAML vulnerability was also hard to exploit (requiring knowledge of Ruby serialization internals and vulnerable standard library constructors) and easy to fix (just disable YAML), but it was rightly considered a fire-drill operation across Web sites the world over. The "smart cow" phenomenon ensures that vulnerabilities that start out difficult to exploit become easy to exploit when packaged up into scripts, if the incentives are there to do so. Exploitable use-after-free vulnerabilities in network-facing apps are like the Rails YAML vulnerabilities: "game-over" RCEs (possibly when combined with sandbox escapes).


>the browser's "securedness" would only increase marginally

The developers' claim is that more than half of all security bugs are due to memory safety issues and that Rust will solve these. More than halving the number of bugs doesn't sound marginal to me.


I'm not sure why you say this. Go look over Microsoft's CVEs for the past two years. I did, and, apart from the CLR-in-a-browser scenario, nearly every single critical CVE was a direct result of a memory-safety bug.

In other words, if we magically went back in time and wrote all MS products in Rust instead of C++, their CVE count for RCEs, their famous worms, etc. would all disappear (except in the cases where they explicitly opted into unsafe features).


Those worms would disappear, but you can't say for sure that the crackers wouldn't just find other vulnerabilities to focus their efforts on. That is to say, having gone through the cracking process myself (for research purposes, of course!), you find the lowest-hanging fruit you can, and once that fruit is gone you move on to the next-lowest.

Back in the 90s and early 00s, a lot of the low-hanging fruit was buffer overflows or forged pointers. Then we got serious about fuzz testing and static analysis, and now they are picking at other fruit (which is why Heartbleed was so weird).


OK, then look at the CVEs for the last couple of years. The reward for finding a 0-day RCE in an MS product is so high that I don't think it's accurate to say it's just the low-hanging fruit.


That idea of "easier to secure" led me to this scoring system. http://deliberate-software.com/programming-language-safety-a...

I'd love to see Rust shown too.


Many of the Java security bugs are in Java code. Relevant to this discussion: "Jetbleed", the many other SSL breaks in Java, and a variety of issues involving deserialization of untrusted data, a la the Rails YAML bug. Bugs in the JVM itself are more the exception than the rule.


What is jetbleed? Searching google for it brings me back to your comment!



I don't think Java proves your point. The most impactful bugs in Java itself tend to surround the idea of running arbitrary code inside a complicated sandbox. The embedded CLR-in-the-browser suffered many such bugs as well (in fact, out of all the severe MS CVEs that aren't memory-related, most were sandbox escapes). So that's probably more of an indication not to build complicated sandboxes that rely on fine-grained classloading permissions systems.

The other Java bugs are ones that'd plague any language: SQL injection, rules-engines-gone-wild, etc.


> So that's probably more of an indication not to build complicated sandboxes that rely on fine-grained classloading permissions systems

Web sandboxes are full of holes too. That's why modern browsers have sandboxes within sandboxes. I don't think an HTML5 sandbox is less complicated than the JVM sandbox.


> A memory-safe language alone isn't enough. Look at Java. Isn't that supposed to be a memory-safe language? And yet it's constantly got security issues.

Sure. Nobody is claiming (or should claim) that memory safety is a solution to all security problems. That's independent of the claim that memory safety is an effective defense against common vulnerabilities in C and C++ programs.


A large number of the Java CVEs you're thinking of are bugs in the Oracle JVM, meaning they are bugs in C++ code, which is not in a memory-safe language.


Actually, I think a lot of the CVEs are in the shipped Java classes. But it's sometimes hard to tell, with such fine descriptions as

    Unspecified vulnerability in Oracle Java SE 6u85, 7u72, 
    and 8u25 allows local users to affect confidentiality, 
    integrity, and availability via unknown vectors related 
    to Deployment.


Yeah, I started to look through OpenJDK commit history to get a better sense of the proportions here, but I haven't gotten far enough to have any useful data.


> A memory-safe language alone isn't enough. Look at Java. Isn't that supposed to be a memory-safe language? And yet it's constantly got security issues.

Has it? I can't remember the last time I saw a security advisory for Tomcat or Jetty (there are some but they're rare), in stark contrast to Apache or Nginx.


> Rust is probably not going to be a language that you will script in.

No; you have D for that.

EDIT: Seriously, for the grumpy downvoter: I've seen quite a few programmers write that they use D even for things that are normally considered scripting tasks.


As to scripting in D: I don't write a lot of D code anymore, but thanks to snappy compile speeds and the 'rdmd' compiler driver (which automatically tracks dependencies), it really does have a sweet spot for writing little programs where you'd like the benefits of 'scripting languages' (no makefile hassle, short edit-run cycles) but need just a bit more runtime performance.


Unfortunately, this point of view has little merit.

Security is all or nothing. You can't have a little bit of this and a little bit of that. Unless the parts of the web browser that can be "influenced" by an external attacker (directly or indirectly) are written 100% in a memory-safe language, you have no real security, only the illusion of it.

And this is how that hypothetical browser fails, and why it will never amount to anything re: security, since it's gonna end up using a gazillion C libraries, all of them full of bugs and possibly vulnerable to security exploits.

One could say that Rust also fails by allowing "unsafe" code in its core design, but it's still too early to see how that will play out.
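
For reference, this is what the opt-out looks like in practice; the argument in Rust's favour is that such regions are explicit and greppable, though whether that holds up at the scale of a browser plus a gazillion C dependencies is exactly the open question:

    fn main() {
        let bytes = [0x41u8, 0x42, 0x43];

        // Ordinary Rust: bounds-checked, no way to read past the end.
        let first = bytes[0];

        // The escape hatch has to be spelled out. Anything that can break
        // memory safety (raw-pointer arithmetic and dereference here) must
        // sit inside an `unsafe` block, so audits can concentrate on these
        // regions instead of the entire codebase.
        let second = unsafe { *bytes.as_ptr().add(1) };

        assert_eq!((first, second), (0x41, 0x42));
    }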


Security is not all or nothing. You can definitely say that X is more secure than Y even if both have bugs, so long as X's bugs are less critical and less frequent.

As an example, I would happily claim that nginx is more secure than WordPress or the average PHP website written with mysql_query in the 90s. Does nginx have bugs? Probably somewhere in there. Are they as likely to be found, exploited, or (when exploited) to lead to issues as serious? I doubt it.

Security is often about many, many layers. A good example of this is Chrome, with its sandboxing, operating-system memory randomization, and user privileges. When someone finds a bug in V8, turning it into root on the box requires bugs in all those layers as well (see the write-ups for Pwn2Own).

Generally, an improvement in security at any layer will reduce the impact of bugs at other layers. I'd absolutely rather have a browser written 20% in Rust than 0% in Rust.


As someone who works in information security: Security is a spectrum. There is never all, and there is rarely nothing.



