> So since no one has mentioned it yet, you could rewrite this stuff in rust as a poor man's substitute. It will catch some of the same things, but ultimately there is no substitute for test coverage with sanitizers.

That's backwards. You'll catch more cases with a Rust-style type system that naturally checks everything, than with sanitisers that can only check the paths that get executed in tests.
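
To make that concrete, here's a minimal sketch (the function and flag names are invented): the bug sits behind a flag the test suite may never set, so asan only fires if some test happens to take that branch, while rustc rejects the buggy variant at compile time whether or not anything exercises it.

    fn label(values: Vec<String>, special_case: bool) -> String {
        let first = values.first().cloned().unwrap_or_default();

        if special_case {
            // Buggy variant: hand the Vec off, then keep using it.
            //
            //     let consumed = values;  // ownership moves away here
            //     return format!("{first} ({} items)", values.len());
            //     // error[E0382]: borrow of moved value: `values`
            //
            // The C equivalent (free the buffer, then read from it) compiles fine,
            // and asan/valgrind only report it if some test drives this branch.
            return format!("{first} ({} items)", values.len());
        }
        first
    }

    fn main() {
        // The compile-time check already happened, even though no test
        // exercises the special case.
        println!("{}", label(vec!["a".into(), "b".into()], false));
    }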




Rust isn't perfect. UB in safe code is treated as a bug in rust, but such bugs do occasionally happen. Ideally you'd use rust and asan with good unit tests. And yes, if you only picked one, it should be rust, but don't just pick one. And just because you are using rust is no excuse to skip static analysis like prusti, coverage with gcov, llvm, or tarpaulin, and certainly not unit tests.


There is unlikely ever to be a perfect tool, but we can keep getting better by constraining the behavior that breaks things. Throw all of the compile-time, profiling, checked-build, and binary-level tools at projects for defense-in-depth, along with checklists, no matter the platform or the application. Fuzzing, valgrind, gperftools, dtrace, {[amt],ub}san, etc., and formal methods like seL4 if you can afford the investment. :>


Does valgrind add anything to asan given the same unit tests? I haven't used it in years because it is so slow.


Address sanitizer is not precise and can miss certain classes of bugs that Valgrind would catch.


Yep. And although Google killed off afl, fuzzing is still worth doing to exercise code and catch what unit and integration tests didn't.


> And just because you are using rust is no excuse to skip static analysis like prusti, coverage with gcov, llvm, or tarpaulin, and certainly not unit tests.

It absolutely is, and this kind of absolutism is what holds back the adoption of things like rust.

In most real world software development, the target defect rate is not zero. The value proposition of something like rust for most businesses isn't that it lets you lower your defect rate; it's that it lets you maintain your existing (acceptable) defect rate at a much lower cost, by letting you drop your static analysis and most of your tests (reducing maintenance burdens) and still have a better bottom-line defect rate.


So your assumption is that most to all defects are language related and not programmer error? I will bet the farm your defect rate will remain the same without any actual validation of what people write. Plenty of errors are due to language quirks, but lots of times I see missing statements, fixed values which should be variable, input-checking failures due to unknown input, incorrect assumptions, and lots of non-language-related bugs which will not go away.


You're not accounting for the fact that Rust ships with a mandatory static analyzer called rustc. Btw Google recently gave a talk about switching from Java to Kotlin, which significantly reduced bugs, despite less mature tooling. So...


Rust does validate what people write: it won't let you make a dangling pointer, whereas C will happily accept one as long as you hide it from the warning heuristics.
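
A minimal sketch of that difference (the dropped String stands in for the usual free-then-use pattern in C):

    fn main() {
        let s = String::from("hello");
        let r = &s; // the borrow lives until the last use of `r`

        // Freeing the String while `r` is still alive is rejected outright:
        //
        //     drop(s);
        //     // error[E0505]: cannot move out of `s` because it is borrowed
        //
        // The C version (`free(p); puts(p);`) compiles without complaint
        // unless a warning heuristic happens to notice.
        println!("{r}");
    }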


> So your assumption is that most to all defects are language related

What does that even mean?

> I will bet the farm your defect rate will remain the same without any actual validation of what people write.

In what sense is a typechecker (or the rust borrow checker) not "actual validation", but a static analyser is?

> Plenty of errors are due to language quirks, but lots of times I see missing statements, fixed values which should be variable, input-checking failures due to unknown input, incorrect assumptions, and lots of non-language-related bugs

Most of those sound like type errors to me. Errors where the program isn't doing what the programmer thought it should can generally be avoided by using more precise types. (The more insidious case is where the program is doing exactly what the programmer thought it should, but they've misunderstood the specification or not thought through the implications - but no kind of testing can catch that kind of bug).
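
For instance, the "input checking failures due to unknown input" class largely goes away if the only way to obtain a value is through a constructor that does the check; a sketch with an invented OrderId newtype:

    mod order {
        /// Made-up example: an id that must be non-empty ASCII digits.
        /// The field is private, so code outside this module can only get
        /// an `OrderId` through `parse`, which performs the input check.
        pub struct OrderId(String);

        impl OrderId {
            pub fn parse(raw: &str) -> Result<Self, String> {
                if !raw.is_empty() && raw.bytes().all(|b| b.is_ascii_digit()) {
                    Ok(OrderId(raw.to_owned()))
                } else {
                    Err(format!("invalid order id: {raw:?}"))
                }
            }

            pub fn as_str(&self) -> &str {
                &self.0
            }
        }
    }

    use order::OrderId;

    // Downstream code cannot skip the check: passing a raw &str where an
    // OrderId is expected is a type error, not a forgotten statement.
    fn cancel_order(id: &OrderId) {
        println!("cancelling order {}", id.as_str());
    }

    fn main() {
        match OrderId::parse("12345") {
            Ok(id) => cancel_order(&id),
            Err(e) => eprintln!("{e}"),
        }
    }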


I'll give you that the static analyzer is much less important with rust, but that is the easy part anyway. I stand firm that tests are always important, and you don't know how well you tested without coverage. I'm not even sure I'd say you need less coverage with rust; not all errors are resource-safety related.


Coverage numbers are misleading (to the point that I find they do more harm than good). There are definitely cases where you can't (or aren't clever enough to) express what makes your business logic valid in the type system and need to write a test, but IME they're the exception rather than the rule (both because you generally can encode things in types, and because the majority of code ends up being "plumbing" with no real business logic anyway); I like to follow something like https://spin.atomicobject.com/2014/12/09/typed-language-tdd-...
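
As a rough, invented illustration of that trade: a test asserting "status is always one of three known strings" can simply be deleted once the status is an enum, because the invalid states can no longer be constructed and the corresponding codepaths no longer exist to be covered.

    // Stringly-typed version: every consumer needs tests (and coverage) for
    // "shiped", "", and whatever else might come in:
    //
    //     fn advance(status: &str) -> String { ... }
    //
    // Enum version: invalid states are unrepresentable, the exhaustive match
    // is checked by the compiler, and the "rejects unknown status" test has
    // nothing left to test.

    #[derive(Debug, Clone, Copy, PartialEq)]
    enum Status {
        Pending,
        Shipped,
        Delivered,
    }

    fn advance(status: Status) -> Status {
        match status {
            Status::Pending => Status::Shipped,
            Status::Shipped => Status::Delivered,
            Status::Delivered => Status::Delivered,
        }
    }

    fn main() {
        assert_eq!(advance(Status::Pending), Status::Shipped);
        println!("{:?}", advance(Status::Shipped));
    }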


I like that article! It doesn't say to do away with coverage, though; it simply talks about using good typing to make coverage easier to get with fewer tests. Like you said, sometimes you can't get the type system to enforce correctness, and sometimes you may think you have done so when you haven't; you always need tests to see.


I'm not sure what you mean by "coverage" - the article is advocating deleting (or not writing) tests if you can move the corresponding check into the type system, and the end result is that you will naturally end up with codepaths that aren't covered by tests (because those codepaths essentially "can't go wrong"). That makes the things that I've normally heard called "coverage" pretty useless.



