
C is unsafe.


Changing well-tested code is unsafe.


Not changing working code to prevent issues is unsafe.

We can go in circles all day with blanket statements that are all true. But we have ample evidence that even when we think some real-world C code is safe, it often is not, because humans are extremely bad at writing safe C.

Sometimes it's worth preventing that more strongly, sometimes it's not. Evidently they think that software used by a truly gigantic number of humans and machines is an area where it's worth the cost.


Believing that rewriting in Rust will make code safe is unsafe :) Of course it will be safer, but not safe. Safety is a marketing feature of Rust and no more. But a lot of people really believe in it and will zealously try to prove that Rust is safe.


If the code is brittle to change, it must not have been particularly safe in the first place, right?

And if it's well-tested, maybe that condition is achieved by the use of a test suite which could verify the changes are safe too?


A test suite will never catch every bug (otherwise it would be a proof), and any change has some probability of introducing a new bug, regardless of how careful you are. Thus, changing correct code will eventually result in incorrect code.
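
A minimal sketch of that argument, assuming each change independently introduces a bug with some fixed probability p > 0:

    P(still correct after n changes) = (1 - p)^n  ->  0  as n -> infinity

Even a tiny per-change risk compounds toward certainty; only the rate varies.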


I'm not sure if that's how probability works.


I mean, if you want Git to never change, you're free to stick with the current version forever. I'm sure that will work well.


I obviously don’t think that is wise, but Git is literally designed with this in mind: https://git-scm.com/docs/repository-version/2.39.0

Just like SQLite has an explicit compatibility guarantee through 2050. You literally do not have to update if you do not want to.


And it’s still a choice you can make regardless of Git moving to Rust or not, so what’s the problem?


This is the repo format version.

It's pretty different from the git version, which receives new releases all the time for things like security patches, improvements, and new features.
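
A quick way to see the two side by side (run inside any repository):

    # on-disk repository format version, frozen unless you opt in to changes:
    git config core.repositoryformatversion   # typically 0 or 1
    # the tool's version, which moves with every release:
    git --version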



Rust is not perfect, but perfect C is nearly impossible.


I honestly can't tell if this is meant as a serious reply to my question (in that case: let's say I agree that Rust is 100% better than C; my question still stands) or as a way to mock Rust people's eagerness to rewrite everything in Rust (in that case: are you sure this is the reason behind this? They are not rewriting Git from scratch...)


As a user, you may not be aware that C makes it relatively easy to create buffer overflows (https://en.m.wikipedia.org/wiki/Buffer_overflow), which are a major source of security vulnerabilities.

This is one of the best reasons to rewrite software in Rust or any other safer-by-default language.
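
For illustration, a minimal C sketch of the bug class (a hypothetical greet() function):

    #include <string.h>

    void greet(const char *name) {
        char buf[16];
        /* strcpy performs no bounds check: a name longer than
           15 characters overruns buf and corrupts adjacent memory */
        strcpy(buf, name);
    }

Whether that crashes, silently corrupts state, or becomes exploitable depends entirely on what happens to sit next to buf.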


Everyone on Hacker News is well aware that C makes it relatively easy to create buffer overflows, and what buffer overflows are. You're still not responding to the GP's question.


I'm not involved in the initiative, so I can't answer the question definitively. I provided one of the major reasons that projects get switched from C. I think it's likely to be a major part of the motivation.


I didn't know that C makes it easy.


Right, I never mentioned that I am a decently experienced C developer, so of course I've had my fair share of buffer overflows and race conditions :)

I have also learned some Rust recently; I find it a nice language and quite pleasant to work with. I understand its benefits.

But still, Git is already a mature tool (one may say "finished"). Lots of bugs have been found and fixed. And if more are found, surely it will be easier to fix them in the C code than to rewrite in Rust? Unless the end goal is to rewrite the whole thing in Rust piece by piece, solving hidden memory bugs along the way.


https://access.redhat.com/articles/2201201 and https://github.com/git/git/security/advisories/GHSA-4v56-3xv... are interesting examples to consider (though I'm curious whether Rust's integer overflow behavior in release builds would have definitely fared better?).

> Unless the end goal is to rewrite the whole thing in Rust piece by piece, solving hidden memory bugs along the way.

I would assume that's the case.


> though I'm curious whether Rust's integer overflow behavior in release builds would have definitely fared better?

Based on the descriptions, it's not the integer overflows themselves that are the issue; it's that the overflows can lead to later buffer overflows. Rust's default release behavior is indeed to wrap on overflow, but bounds checks on buffer accesses remain by default, so barring the use of unsafe I don't think there would have been corresponding vulnerabilities in Rust.
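
A hedged sketch of that chain in C (a hypothetical allocation pattern, not the actual Git code from the advisories):

    #include <stdlib.h>
    #include <string.h>

    void store_items(const char *item, size_t n, size_t item_size) {
        size_t total = n * item_size;  /* can silently wrap around */
        char *buf = malloc(total);     /* allocation far too small */
        if (buf == NULL)
            return;
        for (size_t i = 0; i < n; i++)
            memcpy(buf + i * item_size, item, item_size);  /* overruns buf */
        free(buf);
    }

In safe Rust the multiplication could still wrap in a release build, but the subsequent writes would go through bounds-checked slice accesses and panic instead of overrunning.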


This doesn't matter at all for programs like Git. Any non-freestanding program running on a modern OS on modern hardware that tries to access memory it's not supposed to will be killed by the OS. That seems a more reasonable security boundary than relying on the language implementation to simply not emit code that does illegal things.

Sure, memory safety is nice for debuggability and for being more confident in the program's correctness, but it is no more than that. It is neither security nor proven correctness.


Not quite the best example: since Git usually has unrestricted file access and network access through HTTP/SSH, any kind of RCE would be disastrous, for instance if used for data exfiltration.

If you want a better example, take distributed database software: behind a DMZ, and the interesting code paths require auth.


Git already runs "foreign" code, e.g. in filters. The ability to write code that reacts unexpectedly to crafted user input isn't restricted to languages providing unchecked array/pointer access.
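
For example, with clean/smudge filters (a hypothetical filter name; note the command mapping has to be configured locally, a .gitattributes file alone can't inject it):

    # .gitattributes in the repository requests a filter by name:
    *.secret filter=demo

    # local config maps that name to arbitrary commands run on
    # checkout and checkin of matching files:
    git config filter.demo.smudge 'decrypt-tool'
    git config filter.demo.clean 'encrypt-tool'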


Unintentional bugs that cause data destruction would also be disastrous for a tool like Git.


Which are more likely to be introduced by a full rewrite.


> Any non-free standing program running on a modern OS on modern hardware trying to access memory its not supposed to will be killed by the OS.

This seems like a rather strong statement to me. Do you mind elaborating further?


I think bugs in the MMU hardware, or the kernel accidentally configuring the MMU to allow cross-process access that isn't supposed to happen, are quite rare.


Sure, but I think illegal interprocess memory access is a fairly narrow definition of "access[ing] memory it's not supposed to". There are plenty of undesirable memory accesses that are possible without crossing process boundaries, and I don't think the OS does much to prevent those outside of currently niche hardware.


It might be undesirable to you, but you haven't specified that to the computer. Process boundaries are one way we specify what a program is allowed to touch and what it is not.


OK, sure, but there's no reason you can't extend that argument to in-process improper memory accesses either. free() is you specifying that a particular bit of memory isn't supposed to be touched any more, malloc() is you specifying that some amount of memory is legal to access, etc. Language runtimes, inserted/compile-time checks, etc. would be analogous to the OS/MMU here.
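
And the OS/MMU typically can't see any of those in-process boundaries. A minimal C sketch (undefined behavior, so anything may happen, but in practice it usually doesn't fault):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *p = malloc(16);
        if (p == NULL)
            return 1;
        strcpy(p, "stale");
        free(p);   /* we told the allocator this memory is done */
        /* the page is almost certainly still mapped into the process,
           so the MMU raises no fault and the OS never notices */
        printf("%s\n", p);
        return 0;
    }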


Yes, but this is not across a trust boundary, since these are in the same process/program. Rust "only" applies checks at compile time; it doesn't enforce security.

Not sure if I'm being clear. Rust is like cooperative multitasking: nice, but not guaranteed. My claim is that we actually want preemptive multitasking.


I'm not quite sure I'm understanding your analogy here, but would that effectively mean each allocation lives in its own process?


Maybe, if we tried to backport it to current hardware/software. It would be an improvement to configure the MMU to enforce boundaries below the process level.

However, my point was that not every allocation is a trust boundary. What a program does in its own memory doesn't matter at all; it is gone in an instant. Everything that matters is I/O, and that goes through syscalls, so security can be enforced there.

Why do you care about corrupting process memory? The memory state itself is totally irrelevant. What annoys you is when the program, e.g., deletes a file it is not supposed to. Would you rejoice if the file still got deleted but the process memory was totally fine? Of course not. The only thing that matters is the deletion of the file; you don't actually care about the memory safety. Thus, what you actually want is for the computer to know that the file is not supposed to be deleted; once you have that, the memory can be trashed however the program likes.


> Everything that matters is I/O and this goes through syscalls, so there security can be enforced.

I don't think "good" I/O and "bad" I/O are necessarily distinguishable by the OS ahead of time and/or in general. The OS isn't going to know whether the program wrote out a proper file or complete gibberish, or whether the numbers you're displaying were derived from uninitialized values, or whether what you're sending over the wire is what you intended (e.g., Heartbleed), etc., but those are very much things one should care about!

> Why do you care about corrupting process memory? The memory state itself is totally irrelevant.

Strong disagree here. If memory is corrupted all bets are off, especially if you know your program is actually supposed to perform some I/O.

> The only thing that matters is the deletion of the file, you don't actually care about the memory safety.

You would care if memory safety issues directly led to file deletion!

> Thus, what you actually want is the computer to know that the file is not supposed to be deleted, when you have that, the memory can be trashed like the program likes to.

So what happens if you know a file is supposed to be deleted but memory corruption led to the wrong one being deleted?



