
Do you think we currently understand every mechanism by which software running in adversarial conditions (a web server with anonymous users, an operating system running a freshly downloaded application, that application handling random files off the Internet, &c) can be subverted? I don't; I've learned new bug classes within the last year, and I'm nowhere close to the best in my field.

If you're not confident that you understand all the avenues by which software can be compromised by attackers, how could you possibly be at ease with the idea that the law would presume vendors could secure their code before shipping it?

You may think, "I don't expect vendors to be at the forefront of security research," (which would be good, because even the vendors who want to be there are having a devil of a time trying to fill the headcount to do it) "but let's not excuse vendors who ship flaws readily apparent when their product was being developed". But who's to say what is and isn't readily apparent? How'd you do on the Stripe CTF? Last I checked, only 8 people got the last flag. But nothing in the Stripe CTF would have made the cut for a Black Hat presentation. You think a jury and a court of law would do a better job of judging this?



In the absence of some codified standard, it's not going to be possible to adjudicate a lawsuit. Do we understand every mechanism by which a building can catch on fire? No. But we still have a National Electrical Code that specifies, for example, what size wire to use for a given load. If you install wire that's too small and it overheats and causes an electrical fire, you can be sued.

Likewise, I think that if your app contains a known class of vulnerability, such as failing to sanitize your database inputs, there should be a way to hold you legally accountable for that.
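To make that concrete, here's a minimal sketch of the difference between interpolating user input into a query and using a parameterized query. Python and sqlite3 are just my choice of illustration; the table and the hostile input are made up, not anything from the thread:

    import sqlite3

    # A throwaway in-memory database with one user in it.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    user_supplied = "nobody' OR '1'='1"  # hostile input from a form field

    # Vulnerable: string interpolation lets the input rewrite the query.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = '%s'" % user_supplied
    ).fetchall()
    print(rows)   # every row comes back -- the classic injection

    # Parameterized: the driver treats the input as data, never as SQL.
    rows = conn.execute(
        "SELECT email FROM users WHERE name = ?", (user_supplied,)
    ).fetchall()
    print(rows)   # empty, because no user literally has that name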


Before we pursue this metaphor too far, some observations about the National Electrical Code.

One is that software is many orders of magnitude more complex than an electrical delivery system. People have spent decades trying to figure out how to construct secure software by bolting together "secure components" in a simple way. Let's be charitable and just say: That work continues, and will continue throughout my lifetime. It's a much harder problem. Electricity is easy.

The other is that even the electrical code doesn't provide strong guarantees against malicious attacks by hostile humans. That's not in the spec. There's no armor on the wires that come into my house, no alarms that would go off if a ninja with a saw started cutting down a crucial utility pole in a DoS attack on my power, no formal procedures for screening the wiring in my walls for wiretapping devices. The electrical code doesn't even mandate a backup battery or generator, let alone that said generator should be tamper-proof.

Similarly, the fire code doesn't specify that my smoke detectors should have locks so that those ninjas can't easily remove the batteries before setting off a gasoline bomb in my living room late at night. There isn't even a gasoline-fume detector in my house. The windows aren't armored. This place is not defensible!

Of course, there are places in the world that are built at great expense to withstand attacks by armed bandits or trained spies. But if we wrote our building codes to incorporate such measures, what would happen is just what we see happening with software: People would line up to sign waivers and variances, so that they could just build a simple inexpensive house and get on with their lives.


FWIW, the Life Safety Code (separate from, but related to, the NEC) does mandate emergency power for certain systems (fire pumps, egress lighting) in public buildings (not homes). The NEC doesn't address attacks by hostile agents because that's not its purpose. Its purpose is to prevent people from getting electrocuted and to prevent electrical fires from starting. That's it.

My point is not that a code is going to provide 100% assurance from all possible forms of attack. It's quite the opposite. The code simply spells out how certain known failure modes are avoided. It's not a guarantee that nothing ever will go wrong. It's basically a list of specific things that have gone wrong in the past, and what things should be done to prevent them.

The point is to establish exactly what "reasonable measures" are for the purpose of determining liability, not to spell out a method for a fail-proof system. If you present yourself as a competent developer and then build someone a system that passes user input directly to the database and stores passwords in plaintext, you should be held accountable for damages resulting from a security breach that exploited those holes.
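For the plaintext-passwords half, a minimal sketch of salted password hashing using only the Python standard library (PBKDF2 and the iteration count here are my choice of illustration, not something the parent specified):

    import hashlib
    import secrets

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # Store the salt and digest; never store the password itself.
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return secrets.compare_digest(candidate, digest)

    salt, digest = hash_password("hunter2")
    print(verify_password("hunter2", salt, digest))  # True
    print(verify_password("wrong", salt, digest))    # False

Even if the user table leaks, an attacker gets salted digests rather than credentials they can replay.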


I think that software is far easier to control than hardware. Physical products, after all, are subject to the infinite vagaries of reality. Yet, somehow we have managed to make sure that physical products work reasonably well and are reasonably safe.

Holding developers liable for foreseeable bugs is perfectly reasonable. After all, the flipside to 6-figure salaries is that they should expect to be held responsible for their work product.

"If you're not confident that you understand all the avenues by which software can be compromised by attackers, how could you possibly be at ease with the idea that the law would presume vendors could secure their code before shipping it?" The law would not be that specific; tort laws are usually exceptionally broad and leave the details to the courts to determine on a case-by-case basis (because in torts, every case is different).

"but let's not excuse vendors who ship flaws readily apparent when their product was being developed". But who's to say what is and isn't readily apparent? How'd you do on the Stripe CTF? Last I checked, only 8 people got the last flag.

If only 8 people got that flag, then it's not readily apparent. Juries are not as dumb as the media makes them out to be. McDonald's verdicts aside, juries can and will understand that something that was only picked up by 8 uber-hackers is not a security flaw a normal developer would be expected to catch.


You missed the second part of my assertion about the Stripe CTF, and thus missed my point. The 8 people who got the last Stripe flag are not "uber-hackers". Like I said: you couldn't get a Black Hat talk on the Stripe CTF. There are thousands of people who could get the last Stripe flag. It's a pool of talent that could easily be made to seem large. But ordinary professionals have no access to it.

Similarly, I made a comment downthread about how simple-sounding proscriptions of things like "SQL Injection" break down in the real world; generalist developers feel like they have a sense of which vulnerabilities are "reasonable" and which are "unreasonable", but they don't. Juries aren't dumb†, but they aren't skilled in the art either, and so are simply going to end up hostages to expert witnesses.

Given your background, I'm interested to hear how you'd outline liability rules so that software firms could have some chance of building and selling software, in the sure and certain knowledge that someone somewhere can find a way to grievously damage the security of their offering, with some reasonable assurance that they won't get dragged into mid-six-to-low-seven-figure legal drama when that happens.

† (I agree HN thinks they are, along with lawmakers, but I don't think that, and I'm generally positive about technology regulation)



