trippy_biscuits's comments

How does it address quines?

http://research.swtch.com/zip


It's just a compressor; it doesn't provide any decompression support.


I love the fact that there is a ZIP quine, too, and I'll admit I immediately thought of that, but it's not clear to me what responsibility the library has, or what expectation the user should have, that it will 'address' quines in any particular way.


Why can no one read a man page anymore? They even have them on the internet these days.

https://developer.apple.com/library/mac/documentation/Darwin...

-A List all entries except for . and ... Always set for the super-user.


That doesn't really explain the history of why. I think that's the most interesting part of the answer.


Do people really believe in more secure languages? Are they the same people who think switches make networks secure? Switches don't, and neither does a given language. I recall a CTO who would not allow C++ development because he thought the language was insecure; Java was the only language allowed. Even college courses are still teaching that security is one of the benefits of the virtual machine. We only have to look at all the patches for Java to see that it hasn't been secure. Then we look at every other piece of software that has been patched to see that nothing is secure.

Please stop perpetuating the myth that security is produced by a programming language. People make security happen just like they make it not happen. Obligatory Schneier: https://www.schneier.com/blog/archives/2008/03/the_security_...


> We only have to look at all the patches for Java to see that it hasn't been secure.

All those big security issues aren't in the Java language, they are in the JVM running untrusted Java byte code. Not to say that situation isn't bad, but you can't compare it to C++ because nobody ever thought running untrusted C++ code without some other sandboxing was a good idea.

That aside, memory safety is great for security. Of course there are 1000 other things that are important, too, and so I'd trust a C program written by a security expert much more than the same program written by someone who thinks his program is secure because he used Java. But I'd feel even better if the security expert used a memory-safe language because I am certain that all C programs above a certain size are vulnerable to memory attacks.


> Not to say that situation isn't bad, but you can't compare it to C++ because nobody ever thought running untrusted C++ code without some other sandboxing was a good idea.

This is actually kind of a point for the other side. You can sandbox code regardless of what language it's written in. Maybe what we need is not better languages but better sandboxes. Even when code is "trusted", if the developer knows it doesn't need to e.g. write to the filesystem or bind any sockets then it should never do those things and if it does the OS should deny access if not kill it immediately.


Isn't this exactly what SELinux does but nobody bothers to configure the rules?


But sandboxing does nothing to protect information if the information resides in the sandbox (sandboxing wouldn't have stopped Heartbleed).

Rust and friends aren't going to make all security issues go away, just as sandboxing would not. There is no one true silver bullet in security, at least not yet.


> This is actually kind of a point for the other side.

I wanted to move the goalpost from "Java is insecure" to "the Java sandbox is insecure". I completely agree with the second statement, so I don't think I made a point for any other side.


You made the point that I was trying to make: implementations are not secure. A programming language can follow a philosophy but implementations never quite line up with the theory. We only use implementations of the theory and experience shows that implementations all have vulnerabilities.


I'm sorry if I misrepresented your post, but I feel you do the same to mine. I didn't say the JVM is insecure - I said the sandboxing part of the JVM is insecure and C++ doesn't have anything comparable.


True, and a malfunctioning sandbox is worse than useless.

People tend to base security on them. Google did in their AppEngine cloud, but they put a lot of engineering resources and defence-in-depth behind it.


> a malfunctioning sandbox is worse than useless

Are there any sandboxes in existence which are definitely not worse than useless?


seccomp is simple and useful, in both incarnations.


I don't think security is produced by picking one language or another, but I do believe that it's harder to write secure code in a language like C than a language like Java or Rust. There are simply way, way more ways to shoot yourself in the foot.


> I don't think security is produced by picking one language or another, but I do believe that it's harder to write secure code in a language like C than a language like Java or Rust. There are simply way, way more ways to shoot yourself in the foot.

The trouble is that everything is a trade off. It's very hard to get a buffer overrun in Java but that doesn't make Java a good language. It tries so hard to keep you from hanging yourself that it won't let you have any rope, so in the instances when you actually need rope you're forced to create your own and hang yourself with that.

For example, you're presented with garbage collection and then encouraged to ignore object lifetime. There are no destructors to clean up when an object goes out of scope. But when it does, you still have to clean up open files, write records to the database, notify network peers, etc., which leaves you managing it manually and out of order, leading to bugs and race conditions.
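
For what it's worth, here's a minimal sketch (in Rust, since memory-safe languages are the topic of this thread) of what scope-based cleanup looks like when a language does have destructors; the Connection type and its cleanup action are purely hypothetical:

    // Hypothetical resource: something that must notify peers / flush
    // state when it is no longer needed.
    struct Connection {
        name: String,
    }

    impl Drop for Connection {
        // Runs automatically when a Connection goes out of scope, even on
        // early returns or panics, so cleanup isn't managed by hand.
        fn drop(&mut self) {
            println!("closing {} and notifying peers", self.name);
        }
    }

    fn main() {
        let conn = Connection { name: "db-1".to_string() };
        println!("using {}", conn.name);
        // No explicit close() call: `conn` is dropped here, at end of scope.
    }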

In other words, C and C++ encourage you to write simple dangerous bugs while Java encourages you to write complicated dangerous bugs.

That isn't to say that some languages don't have advantages over others, but rather that the differences aren't scalar. And code quality is by far more important than the choice of language. BIND would still be less secure than djbdns even if it was written in Java.


Not that this really proves anything one way or the other, but remember that in regard to the Java SSL implementation shipping with the JDK, it was very recently found that:

"...the JSSE implementation of TLS has been providing virtually no security guarantee (no authentication, no integrity, no confidentiality) for the past several years."


I don't understand this argument. For example, if I use a language that doesn't allow buffer overflows to happen, I've eliminated an entire class of security bugs being caused by programmer error. Why would you not want to use such a language? Performance and existing libraries will factor in to this obviously but I don't understand why you wouldn't consider security built into the language as a benefit.

Yes, security issues are found in Java and every other language, but when these are patched all programs that use that language are patched against the issue. The attack surface is much smaller.
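
To make that "entire class" concrete, here's a minimal sketch in Rust (one of the memory-safe languages under discussion); the index value is just an illustrative stand-in for attacker-influenced input:

    fn main() {
        let buf = vec![0u8; 8];
        let index: usize = 32; // stand-in for an attacker-influenced index

        // Plain indexing is bounds-checked: `buf[index]` would panic at
        // runtime instead of reading or writing past the allocation.

        // The checked API turns the failure into an ordinary value the
        // program has to handle.
        match buf.get(index) {
            Some(byte) => println!("byte: {}", byte),
            None => println!("index {} is out of bounds for len {}", index, buf.len()),
        }
    }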


> Yes, security issues are found in Java and every other language, but when these are patched all programs that use that language are patched against the issue. The attack surface is much smaller.

All patches work like that; when there is a bug in libssl and OpenSSL patches it then all the programs using libssl are patched. The difference with Java is that when a C library has a bug only programs using that library are exposed but when Java has a bug all Java programs are exposed. Moreover, Java itself is huge. It's an enormous attack surface. Your argument would hold more weight if the "much smaller" attack surface actually produced a scarcity of vulnerabilities.


> For example, if I use a language that doesn't allow buffer overflows to happen, I've eliminated an entire class of security bugs being caused by programmer error.

There are several assumptions behind "if I use a language that doesn't allow buffer overflows to happen" which you aren't taking into account. For instance, are you entirely sure that the implementation of that language's compiler will not allow buffer overflows to happen? We have a good example of a possible failure of that model in Heartbleed: when it came up, a bunch of people in the OpenBSD community raised their eyebrows, thinking hmm, that shouldn't happen for us, we have mitigation techniques for that. Turns out -- for performance reasons -- OpenSSL was implementing its own wrappers over native malloc() and free(), doing some caching of its own. This, in turn, rendered OpenBSD's own prevention mechanisms (e.g. overwriting malloc()-ed areas before using them) useless. The language specification may not allow such behaviour, but that doesn't mean the implementation won't allow it.

You're also underestimating a programmer's ability to shoot himself in the foot. Since I already mentioned OpenBSD and Heartbleed, here's a good example of a Heartbleed-like bug in Rust: http://www.tedunangst.com/flak/post/heartbleed-in-rust . The sad truth is that most vulnerabilities like this one don't stem from accidental mistakes that languages could have prevented; they stem from a fundamental misunderstanding of the mode of operation of what are otherwise safe constructs in their respective languages.

Granted, this isn't a buffer overflow, which, in a language that doesn't allow arbitrary writes, would be an incorrect construct and would barf at runtime, if not at compile time; but then my remark about bugs above still stands (and I'm not talking out of my ass, I've seen buggy code produced by an Ada compiler allowing this to happen), buffer overflows can be increasingly well mitigated with ASLR, and the increased complexity in the language runtime is, in and of itself, an increased attack surface.

Edit: just to be clear, I do think writing software in a language like Go or Rust would do away with the most trivial security issues (like blatant buffer overflows) -- and that is, in itself, a gain. However, those are also the kind of security issues that are typically resolved within months of the first release. Most of the crap that shows up five, ten, fifteen years after the first release is in perfectly innocent-looking code, which the compiler could infer to be a mistake only if it "knew" what the programmer actually wanted to achieve.


My point is simply that every programmer will make mistakes when coding so I want the most automated assistance possible to point out those mistakes. If a programmer has a pressing need and the persistence to work around those checks, that's fine; at least the surface area for those mistakes is then limited to a smaller amount of code.


From comments I read about that when it was written, it's not clear to me that the author actually demonstrated the same behaviour as Heartbleed. I'm not the person to be the judge of that, but for what it's worth here is the top comment from /r/rust on the topic. Then you can make up your own mind about that.

https://www.reddit.com/r/rust/comments/2uii0u/heartbleed_in_...


That comment sort of illustrates my point:

> You should note that Rust does not allow unintialized value by design and thus it does prevent heartbleed from happening. But indeed no programming language will ever prevent logic bugs from happening.

Under OpenBSD, those values would not have been uninitialized, were it not for OpenSSL's silly malloc wrapper -- a contraption of the sort that, if they really wanted, they could probably implement on top of Rust as well. What is arguably a logic mistake compromised the protection of a runtime that, just like Rust, claimed that it would not allow uninitialized values, "by design".

Of course, idiomatic Rust code would not fall into that trap -- but then arguably neither would idiomatic C code. It's true that Rust also enforces some of the traits of its idioms (unlike C), but as soon as -- like the OpenSSL developers did in C, or like Unangst did in that trivial example -- you start making up your own, there's only that much the compiler can do.

At the end of the day, the only thing that is 100% effective is writing correct code. Better languages help, but it's naive to hope they'll put an end to bugs like these when they haven't put an end to many other trivial bugs that we keep on making since the days of EDSAC and Z3.


> Under OpenBSD, those values would not have been uninitialized, were it not for OpenSSL's silly malloc wrapper -- a contraption of the sort that, if they really wanted, they could probably implement on top of Rust as well. What is arguably a logic mistake compromised the protection of a runtime that, just like Rust, claimed that it would not allow uninitialized values, "by design".

I really disagree. Rust does not allow uninitialized values by design - end of story. If a piece of Rust code lets uninitialized values bleed through, then it is broken. The semantics of Rust demands this.

(OpenSSL on the other hand only broke/overrode OpenBSD's malloc - they didn't break C.)

It is news to no one that you can break - break - Rust's semantics if you use anything that demands `unsafe`. That's why anyone who uses `unsafe` and intends to wrap that `unsafe` in a safe interface has to be very careful.

Complaining about Rust being unsafe - in the specific sense that the Rust devs use - by using the `unsafe` construct, is like complaining that Haskell is impure because you can use `unsafePerformIO` to `launchMissiles` from a non-IO context.

> Of course, idiomatic Rust code would not fall into that trap -- but then arguably neither would idiomatic C code.

It's not even a question of being idiomatic. If someone codes in safe (non-`unsafe`) Rust, then they should not fall into the trap that you describe. If they do, then someone who implemented something in an `unsafe` block messed up and broke Rust's semantics.
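
A tiny, hypothetical sketch of what "someone who implemented something in an `unsafe` block messed up" can look like: a safe-looking function whose broken `unsafe` hands out memory that was never initialized.

    // Unsound "safe" API: the unsafe block inside it lies about
    // initialization, so callers get whatever was left on the heap.
    fn fresh_buffer(len: usize) -> Vec<u8> {
        let mut buf = Vec::with_capacity(len);
        unsafe {
            // BUG: claims the first `len` bytes are initialized; nothing
            // ever wrote to them. This breaks Rust's semantics.
            buf.set_len(len);
        }
        buf
    }

    fn main() {
        let leaked = fresh_buffer(32);
        // Reading "initialized" memory that was never written: undefined
        // behaviour in Rust terms, an information leak in practice.
        println!("{:?}", &leaked[..8]);
    }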

What if that same thing happened in C? Well, then it's just another bug.

---

I'd bet you'd be willing to take it to its next step, even if we assume that a language is 100% safe from X no matter what the programmer does - "what if the compiler implementation is broken?". And down the rabbit hole we go.


> I really disagree. Rust does not allow uninitialized values by design - end of story. If a piece of Rust code lets uninitialized values bleed through, then it is broken. The semantics of Rust demands this. (OpenSSL on the other hand only broke/overrode OpenBSD's malloc - they didn't break C.)

I'm not familiar enough with Rust (mostly on account of being more partial to Go...), so I will gladly stand corrected if I'm missing anything here.

If OpenSSL did the same thing they did in C -- implement their own custom allocator over a pre-allocated memory region -- would anything in Rust prevent them from the same sequence of events? That is:

1. Program receives a packet and wants 100 bytes of memory for it.
2. It asks custom_allocator to give it a 100 byte chunk. custom_allocator gives it a fresh 100 byte chunk, which is correctly initialized because this is Rust.
3. Program is done with that chunk...
4. ...but custom_allocator is not. It marks the 100 byte chunk as free for it to use again, but continues to retain ownership and does not clear its contents.
5. Program receives a packet that claims it has 100 bytes of payload, so it asks custom_allocator to give it a chunk of 100 bytes. custom_allocator gives it the same chunk as before, without asking the Rust runtime for another (initialized!) chunk. Program is free to roam around those 100 bytes, too.

I.e. the semantics of Rust do not allow for data to be uninitialized, but custom_allocator sidesteps that.
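
For illustration, here's a rough sketch of steps 1-5 in plain (safe) Rust, with a hypothetical CustomAllocator standing in for OpenSSL-style buffer caching; no `unsafe` is needed for stale data to survive into the next "allocation":

    // Hypothetical pool: it never hands memory back to the runtime, it
    // just keeps used buffers around for reuse.
    struct CustomAllocator {
        free_chunks: Vec<Vec<u8>>,
    }

    impl CustomAllocator {
        fn new() -> Self {
            CustomAllocator { free_chunks: Vec::new() }
        }

        // Steps 2 and 5: hand out a chunk. A fresh chunk is zeroed, but a
        // reused chunk still holds whatever the previous request wrote.
        fn alloc(&mut self, size: usize) -> Vec<u8> {
            self.free_chunks
                .pop()
                .filter(|chunk| chunk.len() >= size)
                .unwrap_or_else(|| vec![0u8; size])
        }

        // Step 4: mark the chunk "free" without clearing its contents.
        fn free(&mut self, chunk: Vec<u8>) {
            self.free_chunks.push(chunk);
        }
    }

    fn main() {
        let mut pool = CustomAllocator::new();

        // Steps 1-3: the first request writes a secret into its chunk.
        let mut first = pool.alloc(100);
        first[..6].copy_from_slice(b"secret");
        pool.free(first);

        // Step 5: the next request gets the same chunk back and can roam
        // over the old contents -- no memory-safety violation anywhere.
        let mut second = pool.alloc(100);
        second[..3].copy_from_slice(b"GET");
        println!("{:?}", String::from_utf8_lossy(&second[..10]));
    }

So the language's guarantee ("no uninitialized data") holds, but the caching layer quietly turns "uninitialized" into "initialized with someone else's data", which is the Heartbleed failure mode.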


Rust doesn't have custom allocator support yet, so no, it's not currently possible to make this error ;)


It doesn't have custom allocator support as in "you can't have one function allocate memory and pass it to another function to use", or as in "you can't replace the runtime's own malloc"? OpenSSL were doing the former, not the latter.

(Edit: I'm really really curious, not necessarily trying to prove a point. I deal with low-level code in safety-critical (think medical) stuff every day, and only lack of time is what makes me procrastinate that week when I'm finally going to learn Rust)


Well, anything is possible because ... human ingenuity.

However, Rust currently statically links in jemalloc - even when building a dynamic shared library. There is no easy way around it.

(because someone might ask: rustc -C prefer-dynamic dynamically links everything except jemalloc)

Having said that, I hope jemalloc gets linked externally soon so my code doesn't have to pay the memory penalty in each of my memory-constrained threads.


You can't say "this vector uses this allocator and this vector uses another one." If you throw away the standard library, you can implement malloc yourself, but then, you're building all of your own stuff on top of it, so you'd be in control of whatever in that case.

(We eventually plan on supporting this case, just haven't gotten there yet.)


Yes, my point was that OpenSSL did not throw away the standard library! openssl_{malloc|free} were thin wrappers over the native malloc() and free(), except that they tried to be clever and not natively free() memory areas so that they could be reused by openssl_malloc() without calling malloc() again. I.e. sometimes, openssl_free(buf) would not call free(buf), but leave buf untouched and put it in a queue -- and openssl_malloc() would take it out of there and give it to callers.

So basically openssl_malloc() wasn't always malloc()-ing -- it would sometimes return a previously malloc()-ed area that was never (actually) freed.

This rendered OpenBSD's hardened malloc() useless: you can configure OBSD to always initialize malloc-ed areas. If OpenSSL had used malloc() instead of openssl_malloc, even with the wrong (large and unchecked) size, the buffer would not have contained previous values, as it would have already been initialized to 0x0D. Instead, they'd return previously malloc()-ed -- and never actually free()-d -- buffers that contained the old data. Since malloc() was not called for them, there was never a chance to re-initialize their contents.

This can trivially (if just as dumbly!) be implemented in Go. It's equally pointless (the reasons they did that were 100% historical), but possible. And -- assuming they'd have done away with the whole openssl_dothis() and openssl_dothat() crap -- Heartbleed would have been trivially prevented in C by just sanely implementing malloc.

> (We eventually plan on supporting this case, just haven't gotten there yet.)

I'm really (and not maliciously!) curious about how this will be done :). You guys are doing some amazing work with Rust! Good luck!


https://github.com/rust-lang/rfcs/issues/538 is where we're tracking the discussion, basically :) And thanks!


> Please stop perpetuating the myth that security is produced by a programming language.

Security happens by taking care of what you're doing; if a language can eliminate a whole class of bugs then you might as well use it. That's why people keep arguing that some languages can eliminate some kinds of bugs, but that absolutely doesn't make programs implemented in these languages bug-free.

Said differently: more secure (relatively) doesn't mean secure (in absolute).

> We only have to look at all the patches for Java to see that it hasn't been secure.

We only have to look at all the patches for Java to see how much it is analyzed; it doesn't mean Java is relatively more or less secure than any other language.

I've seen no patches for this Nim interpreter for Brainfuck [0]; does that mean it's more secure than Java? Absolutely not.

You can draw a parallel with crypto schemes: anybody can come up with some cipher; nobody will analyze it unless there is something to gain (that includes fun). When you've reached the state where you're under the scrutiny of every crypto analyst and their students, and potential vulnerabilities are found, does that make it a weak scheme? We don't know. Only a real analysis of the vulnerabilities can tell us.

[0] https://github.com/def-/nim-brainfuck


Is 'security' really something that requires faith or belief? It seems that "more secure" (not secure in an absolute sense) can be made tangible, and I think that programming languages can give you "more security", in the sense that they close off certain possibilities or make them much harder to exploit/mess up.

Not that I know much about security, but all you're doing right now is fending off the claim of "more secure" by stating that Java is not secure in an absolute sense - no one has claimed absolute security, only relatively more.


> sometimes I wonder if there is a version of intellectual ADD

The xkcd comic describes it as nerd sniping. http://xkcd.com/356/


Lizard Squad was already being doxed, 27 Dec 2014.

http://i.imgur.com/vQTaCKx.png


I don't have a short attention span and I'm not hyperactive. I spend a lot of time thinking about things. I deduce things noticeably faster than my peers as long as it's not socially related. I have intense focus as long as something remains stimulating. When things lose my interest or are pointless they become tedious and it's not about attention span. I cannot stop worrying about assigned tasks to the point of anxiety. There is a mental barrier and I cannot work on the task despite intentions and efforts to do so.

Medications don't really help me. They do at first, then they become less effective. An interesting side note about meds: The first time on meds I finally could see social cues and body language. It was a whole new world. I still see them off the meds now that I know about them but I may have trouble understanding what they mean due to lack of experience in receiving those signals.

Getting proper sleep, nutrition, and having regular, serious exercise help more than meds. The affliction is very real. Just because you can't see it or refuse to believe in it doesn't change the fact that it impacts the lives of others.


Something to think about: it's possible you have more than one named disorder (though they could be related in chemistry). When I first started ADHD meds I had a similar reaction as you, though I wound up in actual panic attacks from the anxiety. When I treated the anxiety with an ultra-low dose then everything came out perfect.

I've since stopped the anxiety med and feel I've trained myself a little more on how to handle it. I have my moments but have the skills I need now to recognize those moments and bring myself down. Luckily one can do that with mild anxiety, unlike ADHD...

The biggest non-medicinal benefits are absolutely sleep, nutrition, and exercise -- you're right. Without a good foundation, nothing can be built.


> "I don't have a short attention span and I'm not hyperactive"

I thought these were the defining characteristics of ADHD? I'm sorry, please forgive my lack of familiarity.

> "When things lose my interest or are pointless they become tedious"...

I don't understand how this is considered a symptom of anything. To me, this is entirely normal. Things that you aren't interested in should be tedious. The mental barrier you speak of is internal honesty - you don't find importance or interest in X, and consequently, motivation doesn't emerge.


You speak with the world view of a neurotypical human. I'm happy for you.

For the perspective of the broken toys in the box, let me explain. :)

When an NT is asked to do a boring, repetitive task, he'll do it for eight hours and then get drunk afterwards to recover. Good job.

When an ADHD-afflicted individual is asked to do a boring, repetitive task, he'll do it for about five minutes and then spend eight hours trying to find a way to not do it again. Or stare at the wall. Or berate himself for not working. Or rack up a disabling level of anxiety because he's not working.

You present this as something everyone does, and you're right to. The disorder comes in when someone cannot do it. Not that the person will not muster some internal whatever to push on, but that the person's brain is physically incapable of doing it. The same kind of incapable as a major depressive being incapable of talking himself out of an anxiety-induced depression.

When it's a disorder, it's a disorder. The problem is that so many people see the high numbers of people being diagnosed and write it off as a fad. It's not. Maybe the numbers are high and some are being misdiagnosed, or maybe we're learning about all the edge cases. I don't know. I do know it exists and it's an impairment and it goes well beyond basic motivation.

I've had "do it or you're fired" moments where THAT wasn't enough to motivate me, and I had a very real fear of being unemployed.


First of all, thank you for this in-depth response. It beats the hell out of anonymous and explanation-less downvotes.

I had never come across the word neurotypical before your comment and now, after reading the corresponding wikipedia page, I am aware that it does characterize me (i.e. "anyone who does not have autism, dyslexia, developmental coordination disorder, bipolar disorder, ADD/ADHD, or other similar conditions").

For the majority of elementary, middle, and secondary school, I fit your anecdote pretty well, minus the getting drunk part (I was young, sheltered, and without access to or interest in alcohol).

However, after sophomore year or so, I realized how much time I had wasted pushing through boring, repetitive tasks, and I grew incapable of completing assignments. This turning point left me in the position of ADHD-afflicted individuals for the final two years of high school. Call it burnout, early senioritis, or whatever - the symptoms were the same. With fear of college app rejections as my motivation (like your fear of unemployment), I couldn't bring myself to do mandatory, heavily weighted assignments. They were just too boring, meaningless. Somehow I remained motivated up until then. I really don't know how, to be honest.

Out of curiosity, how would you say my realization [and subsequent drop-off in academic performance] relate to ADHD and NT?

On another note, are ADHD-afflicted individuals literally incapable of mustering the "internal whatever" you speak of? Is the ability to conjure motivation entirely absent? It's really hard to compare similarly subjective abilities, like pain thresholds and the like.

Even if this incapability is just that: a true incapability, I'm not certain that portrayal of ADHD as an affliction is a net-benefit. It seems better for people to believe in their own capabilities, even when many are literally incapable, as you say. Similarly, the belief in free will is good for people and society - even if free will is obviously nonexistent. Determinism yields higher rates of depression and discourages self-responsibility.


So does the experience of repeatedly failing at tasks you are expected to master.


The diagnosis of ADD (Attention Deficit Disorder) has been folded into ADHD as a subtype: Attention Deficit Hyperactivity Disorder - Primarily Inattentive.

It is a symptom because normal people, even if they are disinterested in a task, can still summon the level of focus needed to complete it effectively. That is a much harder proposition for someone with an attention deficit condition.


"Unix: just enough potholes and bear traps to keep an entire valley going."

If you don't understand how to use sharp tools, you may hurt yourself and others. Documentation for fork() clearly explains why and when fork() returns -1. Those who find the man page lacking or elusive may get more out of an earnest study of W. Richard Stevens' book, Advanced Programming in the UNIX Environment. In any case, every system programmer should own a copy and understand its contents.


> If you don't understand how to use sharp tools, you may hurt yourself and others.

It's still bad API design when naively handling an error case kills everything. Is there an inherent reason that the error value for a pid has to be the same as the "all pids" value? Unless there's a very compelling reason, it seems like very poor design, well documented or not.


The inherent reason is that -1 is the most common error return code in C based APIs. The problem is not naively handling an error case, it's not handling an error case. Using a different value might avoid calling killall -1, but the program would still be incorrect.

This is the same sort of argument as strlcat vs strncat, and people can't agree on that one.


> every system programmer should own a copy

I'd argue every programmer. It's such a fundamental part of computers & operating systems that key concepts will come up again and again. Just the other day I wanted to learn about Docker/CoreOS/etcd only to realize that I have an embarrassingly lacking understanding of how UNIX works. I immediately went to the library to pick up this book and begin fixing a flaw of mine (even as a web developer).


Meh. Not all programmers are on a UNIX system. Not all programmers are even on UNIX || Window.XX.

But even if there was only UNIX... the entire point of a well designed system is to allow users of the system to reason about it on a high level, not a domain-expert level or even domain-intermediate level. As programmers we reason about code without worrying too much about gate layout on silicon. As non-system programmers we should likewise not need to worry about shoddy OS design.


Until the sequence rolls over. Why design a schema with built-in failure (an integer PK that overflows)?


Sequences won't roll over by default. nextval() fails if you're out of values. You can create a sequence which is allowed to roll over (by specifying cycle), but it's not the default.

Yes. It's not optimal and you can shoot yourself in the foot, but it works in its default configuration. I just gave a workable workaround for a problem; I wasn't saying it's the perfect solution (a working unique constraint on the parent table would be).


The default sequence size is 64-bit. Even if you are getting 100,000 IDs a second from it you won't run out for over 5 million years.


In the 90s I built a FreeBSD firewall using discarded PC parts. It took 10.5 hours to build world and kernel. There were 2 power outages that forced me to start over each time. I bought my first personal UPS to fix that problem. I would learn how to cross compile instead of waiting 12 hours.


When I think of all the money spent in the name of improving and enriching the lives of human beings, it troubles me that we don't have a reliable solution for ending DUI. I know many companies fund research to end cancer or improve the quality of life for those with various diseases. While this research may help save or improve lives, it's motivated in part by a potential return on investment. Why can't we do something to prevent self-inflicted suffering? Those people did not need to die. While I don't consume alcohol, I don't see why a person who has consumed alcohol should be transformed into a homicidal idiot after getting into the driver's seat. Since we can't seem to limit DUI, perhaps we can make a car that won't operate when the driver is incapacitated? However, I would oppose any legislation that forces such technology on everyone. To be sure, this remains a tough problem to solve (1). Rather than working around the problem (removing drivers or reducing the need to drive, limiting/controlling alcohol, etc.), how should we address the issue? If we could stop alcohol-impaired driving, the United States could save US$51 billion per year and prevent over 10,000 deaths annually.

1. http://www.cdc.gov/motorvehiclesafety/impaired_driving/impai...


Well, I'd love to see some tech a la "Ghost in the Shell" that can just break down all the alcohol in the bloodstream on command, or otherwise maintains the euphoria without getting to the point of "I'm going to go drive my car through Red River tonight." When you're ready to go you just activate the tech and poof, you can drive again. And you avoid the hangover too.

But in the meantime, it certainly seems that the best we can do is prevent others from getting behind the wheel at all. Which really only works in an environment of peer pressure, backed up with at least one person that will go beyond words and physically restrain the would-be driver if necessary. Avoiding the bystander effect is difficult enough as it is, but when the person is drunk at home and decides that they want to go get some McDonalds...


The problem is letting people with DUIs back on the roads. The punishment for negligent murder is only a few years; the punishment for just getting caught is often not even jail time, just a short suspension of license. (This guy had 2 DUI arrests already before murdering two people: http://seattletimes.com/html/localnews/2020640576_nseattleac... ) The punishments for violating license suspension are minimal. Want to get serious about DUI? Make this the punishment:

1st time: 10 year suspension of driving privileges, monthly random police tailing to make sure you don't drive that day (funded by the violator); if you drive, the rest of your sentence is spent in jail

2nd time: 10 years in jail, permanent driving privileges revoked, monthly police tailing as above.

3rd time: Life in prison

You ever kill or injure someone while DUI: Life in prison


Life in prison has got to be one of the worst ways to deal with a criminal. If you think that the danger posed by DUI is worth life in prison, then it should apply to texting/phone use too.


These are people who knowingly put themselves in a situation where they kill people. It isn't about dealing with them; it's about protecting the rest of us from them. Typical prison may in fact be wrong -- an isolated island without vehicles where they can work and participate in society remotely would be okay for the ones who haven't murdered anyone yet.


I don't know about the jail punishment, but I see no reason that DUI punishment shouldn't be equal to phone/text punishment (and it should certainly be a license suspension of massive length).


What about driverless cars? That'd pretty much solve that problem completely if I'm not horribly mistaken.


Preventing DUI is a difficult problem. A much easier problem would be to prevent speeding. Put a GPS in every car that prevents the car from driving faster than the speed limit. All the technology exists, and this would doubtlessly save lives, but somehow I doubt anybody would pass the required laws.


I always wonder why we make cars that can go so fast. Even my car with a 1.4L engine making 100 HP can easily do 100mph, even though there's almost no place in the US where that's legal, and no place in my state where it's legal. The fastest speed limit in my state is 70mph. Why not speed-limit cars to 70mph by default, with an option to disable this limiter in a controlled fashion if the person wants to go out on a racetrack where these speeds are legal?

There are obviously arguments in favor of personal liberty that would make some people uncomfortable with this, but they shouldn't be. No one should be. If the speed limit on the road is 70mph, there is no reason for your car to be doing more than 70mph on the road, period. I don't care that you want to pass a vehicle that is only doing 69mph, you'll either lower your speed or pass them at 1mph (which, at least in my state, is also illegal. To pass someone, they must be doing at least 5mph under the speed limit, and you can't break the speed limit in order to pass someone).

Now, it wouldn't help in this situation, but it's something that's always bothered me. As we make better and better performing entry-level cars, we can't change the laws of physics. 90mph isn't unheard of as a common cruising speed on a road where the minimum speed limit is 45mph. That's just stupid and dangerous.


> I always wonder why we make cars that can go so fast.

If you are actually questioning why car engines have enough power to do that in the first place: An engine capable of hauling a heavy load uphill at the speed limit is capable of exceeding the speed limit on a flat surface. Ditto for an engine capable of accelerating quickly to perform a merge in a short space.

If you are asking why cars don't come with interlocks preventing those kind of speeds: Because they're not required, and they're not a marketable feature.


I understand more power. What I was trying to get at was "why is it not required to speed limit cars to 70mph". That's common with semi trucks; the trucks at my company are speed limited to 65mph (the semi speed limit here).


Speeding is not involved in the majority of fatal accidents.


Maybe, maybe not (I don't know), but two things come to mind:

1) Speeding is illegal. This is a law that is broken every day by millions. Obviously the law and the punishments aren't working as a deterrent, and the next steps usually involve control rather than deterrence.

2) Do you know how reaction time and braking time change for every 5mph faster you're going? It might not make you cause an accident, but it sure doesn't help trying to avoid an accident.


That is a pretty bold statement. Pretty much all fatal accidents I've heard about involved speeding in some way. Now, anecdotes aren't reliable, so do you have some numbers to back up your claim?


"Q. Aren't most traffic accidents caused by speeding? A. No, the National Highway Traffic Safety Administration (NHTSA) claims that 30 percent of all fatal accidents are "speed related," but even this is misleading. This means that in less than a third of the cases, one of the drivers involved in the accident was "assumed" to be exceeding the posted limit. It does not mean that speeding caused the accident. Research conducted by the Florida Department of Transportation showed that the percentage of accidents actually caused by speeding is very low, 2.2 percent."

from http://www.motorists.org/speed-limits/faq

Q: Is the National Motorists Association a reliable source for this data?

A: I did the 30 seconds of work to go and Google this, go find your own stats if you don't like mine.


> 30 percent of all fatal accidents are "speed related"

Almost all accidents are "speed related". Very few accidents happen with cars standing still.


I would love to have a device like that, kind of a reverse cruise control. The only times I've gotten speeding tickets were when I was unaware of a drop in the speed limit. Maybe something that combines GPS, plus a dash cam that recognizes road signs (so that it can pick up construction zones, etc.).

Only one major problem with this -- many areas are funded in large part by speeding tickets. If it is impossible for a car to speed, what will these areas do for revenue?


Agreed. My $100 Garmin GPS tells me what the speed limit is, and what speed I'm doing. I've often wondered why this wasn't just built into cars to ensure they throttle-lock at the speed limit.


That would be extremely dangerous.


Why? What's the situation where you would need to go faster than the speed limit?


Passing another vehicle comes to mind.

I'm not sure I would call that "extremely dangerous", though.


Breaking the speed limit to pass another vehicle is against the law. At least in my state, you can't legally pass unless the other car is going 5mph under the speed limit, and then you can only pass at the speed limit, not over.


You mean like http://en.wikipedia.org/wiki/Ignition_interlock_device ? It's operational here in the Netherlands; about 100 people a week are forced to have one installed.


Sure, the device is mentioned in the link I cited. It's a remedy applied after the fact. It limits drunk driving after someone has already demonstrated poor behavior. While I earnestly want a safer world for everyone it must not be like the dystopian Gattaca.

http://en.wikipedia.org/wiki/Gattaca


I wonder how self-driving cars would impact this situation? Also: what would happen to a self-driving car that went through a sobriety checkpoint with the drunk person as a passenger?


DUI is a problem, and any human loss is painful, especially when avoidable. All that said, I feel like DUI is an overblown problem, in a way. 8 people dead per 1 million estimated incidents of DUI with a max. toll of 10k lives is not insignificant, and human loss is always bad. However, I disagree with your thesis. Curing cancer, raising the standard of living for the poor, and the death toll from obesity are all much more impactful things on society, though DUI is more 'senseless' and painful for victims.

I think limiting the need to drive is the only feasible option.

* You won't stop people from drinking in places where they can't sleep. Drinking at a bar costs 5-6x as much as drinking at someone's home, and yet people are constantly going out to be in a public drinking environment.

* Hopefully, we won't see mandated technology on vehicles requiring alcohol inspections for driving. Privacy, constitutional and technological issues all exist there. Yes, I know the devices exist, but they're clumsy, frustrating, and in the US are only used for people convicted of a 1st time DUI.

Solution: Incentivize people to walk or taxi home, or have a designated driver.

* By far the biggest immediate change that could be made: Make taxi services as cheap as possible. Stop blocking competition like Uber and the pedbikes in Austin from competing; while there are public safety concerns regarding drivers, I think the economic and safety benefits outweigh the risks. Cheap, responsive drivers = less drunk driving, period. ESPECIALLY in places like Texas, or most of the US, where public transit isn't ubiquitous.

* Allow mixed development and stop making suburban islands. If the hip bars are a three block walk from the houses and apartment complexes, fewer people will need to drive. Or, if you're making a suburban neighborhood, build the bar/drug store/grocer right into the town!

* The following thing is a dangerous and controversial thing to say, because it might imply that DUI is "okay": If you're going to drive after drinking, be honest with yourself. Don't say "I'm totally fine to drive". When you're exhausted on a trip, you don't just say you're fine and keep driving. You either pull over and sleep (get a cab) or you recognize your state and roll down the windows, turn on the radio, splash some water on your face, get a Red Bull, etc. If you are inebriated, say "I am drunk, don't speed, check my mirrors, watch out at all the intersections." That is a big cause of accidents, I'd be willing to bet. Whenever you see government propaganda about the dangers of DUI, it's the couple stumbling to the car, kissing each other, and not paying attention to driving. It's not good to drive when your body is impaired in ANY way, but the big problem is not recognizing your impaired state and focusing on the road.

---

Postscript: DUI checkpoints are shit, and I don't care what the Supreme Court says. It's an invitation for the police to create probable cause and search your entire self and vehicle, your insurance, and if you piss them off, your cell phone. Even if you're sober, avoid them/fight them whenever possible.


"All that said, I feel like DUI is an overblown problem, in a way."

Drinking and alcohol, and their impacts on our society.. and family.. and work, are not overblown. It's completely underestimated. Those who are dealing with the direct impacts of addiction and abuse in their families feel like they're in a bubble, and yet it's absolutely everywhere.

Everyone knows an alcoholic, and drinking&driving&death is only a very small data point in a massive societal issue.


The leading cause of death for people under 35 is automobile accidents. About 1/3 of automobile accidents involve alcohol. With those two figures, I'd say it's pretty likely that it's not an overblown problem.

