Do you think we currently understand every mechanism by which software running in adversarial conditions (a web server with anonymous users, an operating system running a freshly downloaded application, that application handling random files off the Internet, &c) can be subverted? I don't; I've learned new bug classes within the last year, and I'm nowhere close to the best in my field.
If you're not confident that you understand all the avenues by which software can be compromised by attackers, how could you possibly be at ease with the idea that the law would presume vendors could secure their code before shipping it?
You may think, "I don't expect vendors to be at the forefront of security research," (which would be good, because even the vendors who want to be there are having a devil of a time trying to fill the headcount to do it) "but let's not excuse vendors who ship flaws readily apparent when their product was being developed". But who's to say what is and isn't readily apparent? How'd you do on the Stripe CTF? Last I checked, only 8 people got the last flag. But nothing in the Stripe CTF would have made the cut for a Black Hat presentation. You think a jury and a court of law would do a better job of judging this?
In the absence of some codified standard, it's not going to be possible to adjudicate a lawsuit. Do we understand every mechanism by which a building can catch on fire? No. But we still have a National Electrical Code that specifies, for example, what size wire to use for a given load. If you install wire that's too small and it overheats and causes an electrical fire, you can be sued.
Likewise, I think that if your app contains some known class of vulnerability (say, you don't sanitize your database inputs), there should be a way to hold you legally accountable for it.
Before we pursue this metaphor too far, some observations about the National Electrical Code.
One is that software is many orders of magnitude more complex than an electrical delivery system. People have spent decades trying to figure out how to construct secure software by bolting together "secure components" in a simple way. Let's be charitable and just say: That work continues, and will continue throughout my lifetime. It's a much harder problem. Electricity is easy.
The other is that even the electrical code doesn't provide strong guarantees against malicious attacks by hostile humans. That's not in the spec. There's no armor on the wires that come into my house, no alarms that would go off if a ninja with a saw started cutting down a crucial utility pole in a DoS attack on my power, no formal procedures for screening the wiring in my walls for wiretapping devices. The electrical code doesn't even mandate a backup battery or generator, let alone that said generator should be tamper-proof.
Similarly, the fire code doesn't specify that my smoke detectors should have locks so that those ninjas can't easily remove the batteries before setting off a gasoline bomb in my living room late at night. There isn't even a gasoline-fume detector in my house. The windows aren't armored. This place is not defensible!
Of course, there are places in the world that are built at great expense to withstand attacks by armed bandits or trained spies. But if we wrote our building codes to incorporate such measures, what would happen is just what we see happening with software: People would line up to sign waivers and variances, so that they could just build a simple inexpensive house and get on with their lives.
FWIW, the Life Safety Code (separate from, but related to, the NEC) does mandate emergency power for certain systems (fire pumps, egress lighting) in public buildings (not homes). The NEC doesn't address attacks by hostile agents because that's not its purpose. Its purpose is to prevent people from getting electrocuted and to prevent electrical fires from starting. That's it.
My point is not that a code is going to provide 100% assurance from all possible forms of attack. It's quite the opposite. The code simply spells out how certain known failure modes are avoided. It's not a guarantee that nothing ever will go wrong. It's basically a list of specific things that have gone wrong in the past, and what things should be done to prevent them.
The point is to establish exactly what "reasonable measures" are for the purpose of determining liability, not to spell out a method for a fail-proof system. If you present yourself as a competent developer and then build someone a system that passes user input directly to the database and stores passwords in plaintext, you should be held accountable for damages resulting from a security breach that made use of those holes.
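For concreteness, here's a minimal sketch of the two failure modes just described and their textbook fixes, in Python using only the standard-library sqlite3 and hashlib modules (the table layout and function names are invented for illustration, not taken from any real product):

    import hashlib, os, sqlite3

    conn = sqlite3.connect("users.db")

    def create_user_negligent(username, password):
        # Both failure modes at once: user input pasted straight into the
        # SQL text, and the password stored as-is.
        conn.execute("INSERT INTO users (name, pw) VALUES ('%s', '%s')"
                     % (username, password))

    def create_user_baseline(username, password):
        # Parameterized query keeps data out of the SQL text; the password
        # is stored as a salted PBKDF2 hash instead of plaintext.
        salt = os.urandom(16)
        pw_hash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        conn.execute("INSERT INTO users (name, salt, pw_hash) VALUES (?, ?, ?)",
                     (username, salt, pw_hash))

The second version is roughly the "NEC wire gauge" of this argument: a small, well-known precaution against a well-known failure mode, not a guarantee against every attack.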
I think that software is far easier to control than hardware. Physical products, after all, are subject to the infinite vagaries of reality. Yet, somehow we have managed to make sure that physical products work reasonably well and are reasonably safe.
Holding developers liable for foreseeable bugs is perfectly reasonable. After all, the flipside to 6-figure salaries is that they should expect to be held responsible for their work product.
If you're not confident that you understand all the avenues by which software can be compromised by attackers, how could you possibly be at ease with the idea that the law would presume vendors could secure their code before shipping it?
The law would not be that specific; tort laws usually are exceptionally broad and leave the details to courts to determine on a case-by-case basis (because in torts, every case is different).
"but let's not excuse vendors who ship flaws readily apparent when their product was being developed". But who's to say what is and isn't readily apparent? How'd you do on the Stripe CTF? Last I checked, only 8 people got the last flag.
If only 8 people got that flag, then it's not readily apparent. Juries are not as dumb as the media makes them out to be. McDonald's verdicts aside, juries can and will understand that something that was only picked up by 8 uber-hackers is not a security flaw a normal developer would be expected to know.
You missed the second part of my assertion about the Stripe CTF, and thus missed my point. The 8 people who got the last Stripe flag are not "uber-hackers". Like I said: you couldn't get a Black Hat talk on the Stripe CTF. There are thousands of people who could get the last Stripe flag. It's a pool of talent that could easily be made to seem large. But ordinary professionals have no access to it.
Similarly, I made a comment downthread about how simple-sounding proscriptions of things like "SQL Injection" break down in the real world; generalist developers feel like they have a sense of what a "reasonable" vulnerability is versus an "unreasonable" one, but they don't. Juries aren't dumb† but they aren't skilled in the art either, and so are simply going to end up hostages to expert witnesses.
Given your background, I'm interested to hear how you'd outline liability rules so that software firms could have some chance of building and selling software, in the sure and certain knowledge that someone somewhere can find a way to grievously damage the security of their offering, with some reasonable assurance that they won't get dragged into mid-6-to-low-7-figures legal drama when that happens.
† (I agree HN thinks they are, along with lawmakers, but I don't think that, and I'm generally positive about technology regulation)
> If you’re poisoned by a burger you can sue the restaurant that sold it - so why can’t you take a software developer to court if their negligent coding lets hackers empty your bank account?
Whoever came up with this example doesn't know much about law (and IANAL). To prevail in a court of law, the hamburger plaintiff would need to show a violation of prevailing standards of behavior, for example, health codes. But if a threat is unknown at the time of the injury, the defendant is generally held blameless.
An example is Legionnaires' disease, which was ultimately traced to a decision to turn down water heater thermostats to save energy during the 1970s oil crisis. The hotel operators could hardly be held accountable for "negligence" in a case like this, where they had no possible way to anticipate a side effect of a reasonable decision.
It should be the same with software -- if a developer writes code in good faith and meets prevailing quality standards, he shouldn't be held to account for an exploit that arises later.
If the developer worked in collusion with hackers, that would be different, but it's not what's being discussed.
Right; in sales contracts, as I understand it, the obligation is that of a prudent person's ordinary care. Virtually everything we do to help secure applications is extraordinary; it's hard to suggest that a standard of care over security that isn't even consistently practiced by banks and by the military could bind the developers of photo sharing applications.
This is all not to mention the explicit waiver of liability that accompanies virtually every product or software service offered in the US.
Virtually everything we do to help secure applications is extraordinary; it's hard to suggest that a standard of care over security that isn't even consistently practiced by banks and by the military could bind the developers of photo sharing applications.
Providers are held to a higher standard because the means of securing their product/service are within their control.
The standard of care is very context-specific. Photo sharing applications will not be held to the same standards as banks or the military, any more than mom-and-pop delis are held to the same security standards as JFK airport or Dodger Stadium.
This is all not to mention the explicit waiver of liability that accompanies virtually every product or software service offered in the US.
This is why liability waivers are now standard issue, so that liability for simple negligence is no longer an issue. But note that such waivers do not absolve the developer of liability for gross (i.e., intentional or pervasive) negligence.
I understand what you're saying but feel like I must be writing unclearly, because I'm not suggesting that photo sharing apps should be held to the same standard of care as banks, or even that anyone advocating liability believes they will be. What I'm saying is that the standard of care most generalist software developers consider when they think about liability is, contrary to expectations, a standard not consistently applied even in sensitive industries that are intrinsically and demonstrably motivated to defend against security flaws even in the absence of meaningful regulations.
Okay, so that standard is not currently applied as one would expect. That doesn't necessarily mean it couldn't be so applied. I think the question here is, how much more expensive is software development when held to this standard?
I don't propose an answer -- I'm curious what you think.
The key word there is "prevailing quality standards". If a developer writes something, in good faith, that's vulnerable to SQL injection, or stores passwords in plaintext on a server, is that negligence?
No. Developers in highly sensitive industries where we can safely presume the highest prevailing standards of care (banks, for instance), working on projects with documented standards of code safety and review far exceeding those of the industry as a whole, have routinely managed to "ship" code containing SQL injection vulnerabilities.
This is easier to understand when you realize that "SQL Injection" really means "any sequence of conditions allowing an adversary to corrupt an SQL query issued by the application", and that those sequences of conditions can be long, complicated, deeply buried and not at all evident from the overt behavior of the application.
So now you have a problem: SQL Injection that is obvious from the moment you enter a single quote character in a login form is an obvious signal of slipshod development, but there are injection vulnerabilities that no reasonable person could say were obvious. Where are you going to draw the line, and, just as importantly, how are you going to articulate the line so that cases can be heard with predictable decisions? Because unpredictable outcomes are a tax.
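A contrived sketch of the non-obvious end of that spectrum (Python with sqlite3; table and function names are invented for illustration). The write path is properly parameterized, so typing a single quote into the profile form reveals nothing; the injection only exists because a second piece of code, written later, trusts the stored value:

    import sqlite3

    db = sqlite3.connect("app.db")

    def save_display_name(user_id, display_name):
        # Properly parameterized write: a quick probe of this form with
        # quote characters shows nothing suspicious.
        db.execute("UPDATE users SET display_name = ? WHERE id = ?",
                   (display_name, user_id))

    def nightly_report(user_id):
        # Months later, a batch job trusts the stored value ("it came from
        # our own database") and splices it into a fresh query. The
        # injection fires here, nowhere near a login form or input field.
        (name,) = db.execute("SELECT display_name FROM users WHERE id = ?",
                             (user_id,)).fetchone()
        return db.execute(
            "SELECT * FROM orders WHERE customer_name = '%s'" % name).fetchall()

A casual test and a grep for string concatenation both pass the first function; the flaw only appears when someone reasons about data flow across the whole system, which is exactly the kind of thing that won't be "readily apparent" to a jury after the fact.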
Well, I would contend the "line" would be "an accredited penetration tester can't break your software", much like a health inspection is "an accredited health and safety inspector can't find anything wrong with your business".
But that opens up a can of worms: who does the accrediting, who pays for them, are they a governmental agency, and, perhaps most important of all, how many users does your software need before you're required to get your code inspected?
Which is, obviously, one hell of a rabbit hole to go down, and probably a terrible idea. But as seen with a decent number of companies online recently (LinkedIn and Dropbox come to mind), good security just isn't happening.
First: every piece of software shipped has to be pentested? Most software, by a long stretch, isn't. App pentesting is very expensive.
Second: what testing team? You really mean, "a good pentest team". But as we've seen with PCI, regulated testing is a race to the bottom, and your certification has as much to do with which QSA you pick as anything else. There are lots of terrible pentest teams out there. Every IT and network consulting shop has a line item now for "web application security testing".
> an accredited penetration tester can't break your software
This sounds like an employment act for pen-testers.
Which would probably be good for me, since I could pivot to doing that. And I would, because I'd be a fool to keep on writing software under that kind of regime.
I'm even more frightened by the maze that "accredited pen-tester" would entail.
Also, most projects are developed by a team: designers, developers, QA, a project leader, and so on.
The outcome of their work is a collective responsibility.
Maybe, it depends. What does the app do? If it's just a photo-sharing application, or a weekend project, then no. There's no expectation for the former to be secure, or for the latter to be a full-fledged product that meets standards.
On the other hand, if it is a file-sharing saas targeting small businesses, the failure to handle SQL injection or store passwords properly would be negligence.
I think it would be very easy to convince a jury under a straightforward liability framework that "failure to handle SQL injection is negligent". Which is unfortunate, because "failure to handle SQL injection" is by itself a mostly meaningless statement. Most SQL Injection flaws are indeed very dumb, very obvious bugs. But there are bugs that end up vectoring to SQL injection that are not obvious at all.
The question then becomes: who decides which category a given flaw falls under? It's the kind of question that results in laws against braiding hair without a cosmetology license.[1] And coding for public consumption has an even lower barrier to entry, given that all you need is a computer and an internet connection.
> why can’t you take a software developer to court if their negligent coding lets hackers empty your bank account?
It should be noted that the linked article is talking about the UK where things are different. In the US you are allowed to sue, for any reason. Whether you can find a lawyer willing to take your case, and whether a judge subsequently decides your case has any merit, is a separate matter.
This article actually does a pretty good job of focusing on what really drives lawsuits: negligence. It's easy to establish negligence in food safety, because there is consensus on what represents "reasonable food safety" procedures. This will be incredibly difficult for software because of the wide variety in types of software.
If legislation passes, you can bet it will be bad. I hate to be cynical, but I've yet to see a technology law that was well written. Large software vendors will see this as an opportunity to hamper smaller software developers by lobbying for ridiculous requirements that have little to do with actual security and everything to do with codifying expensive processes that big businesses already do.
*there is consensus on what represents "reasonable food safety" procedures*
There really isn't. Food safety laws vary from state to state, and from country to country. The only real common ground is that refrigeration of food is required.
*This will be incredibly difficult for software because of the wide variety in types of software.*
Not really. There are more varieties of food than there are of software. Facebook, Twitter, and Groupon are all really just variations of CRUD applications for which we already have best practices.
*Large software vendors will see this as an opportunity to hamper smaller software developers by lobbying for ridiculous requirements that have little to do with actual security and everything to do with codifying expensive processes that big businesses already do.*
Well, yes. Such is life. Once an industry starts to become mainstream, tort liability starts to rear its ugly head. It is the third immutable law of life (after death and taxes).
> There really isn't. Food safety laws vary from state to state, and from country to country. The only real common ground is that refrigeration of food is required.
There isn't consensus in the language of the law, but there is consensus in the principles. Your example that refrigeration is required is an illustration of that point. Rather than focus on the details of the law, we can recognize that all food safety laws make some requirement related to the maintenance of temperature. Taken further, this rests on the principle, established by the available science, that food pathogens grow more rapidly in certain temperature ranges. There are many principles similar to this, upon which food safety laws are based, and these principles are the consensus.
The principle is what matters to this discussion, because that is what the laws will be based upon. I would expect that "software safety" laws would follow a similar tack. Even if I were to take the most cynical view possible, it doesn't seem plausible that legislators would attempt to write software safety laws for Java, C, C#.NET, Python, Ruby, Clojure, etc., etc. Rather, they would attempt to establish principles, upon which processes would be built.
No problem. I'll sell two versions of the software. The first will be what you get today, with no capability of suing me.
The second will be a lot more expensive, but you can sue. The accompanying documentation will list exactly how you can use the product; anything outside of that voids it. It will list an exact certified environment you can use it in (e.g., only certain Intel processors), specific versions of operating systems, firewall configurations, and the other components that can be installed (anything more voids it). Essentially the product will be useless.
The problem with the "suing the developers" approach is who pays. It will always come at a cost: increased development times, more restrictive usage modes, less functionality, insurance, lawyers, court time, etc. Ultimately the people buying the software will have to pay for those, and it doesn't really benefit them.
At the moment people can put their money wherever they want, and they can choose the level of security, timeliness, access to code etc. I don't see the problem with that.
Perhaps, but if there is liability attached to that code, you can be damn sure I'll be charging lawyers' rates for writing it, and it's going to take at least twice as long to produce.
In the end, most industries that carry liabilities like this are also required to carry errors and omissions insurance or something similar. That cost will end up having to be factored into the cost of the software.
If you expect 100% reliability from a piece of software, that ought to be in your contract with the vendor. You ought to have a provision describing what happens if the vendor fucked up. And the only time it is okay to sue is if the vendor then does not abide by that contract and its described penalty for less-than-100% performance.
The US has a sue-happy culture, which I believe is the cause of much of the ridiculousness of life around here. Somebody finds a way to hurt himself that nobody else has thought of, and sues, which leads to everybody patching that ridiculous hole and making everybody else jump through hoops to avoid future lawsuits. Ad nauseam. (The same mentality that gave us the rule mandating we take off our shoes at airports…)
Are there any other situations where a completely unlicensed trade would be held to any standard other than caveat emptor?
Would regulating and standardizing software to the point where we could apply a standard of competence and sue people over it even improve software security? If it did, would it be worth the potential chilling effect on innovation?
One thing I really can't stand about this is that at least professional associations (which I consider to be cartel-like in many ways) like the AMA or ABA do enable their practitioners to stand up to clients and employers. There are even laws around who is allowed to directly employ many licensed professionals.
Sounds like some lawmakers would like to deny any professional stature to developers other than the right to be sued, as individuals, for their work.
For the record, I'm generally opposed to the creation of a developers' cartel. Anyone can read a book on PHP and hang out a shingle as a developer, which is fine by me. But it would be particularly offensive to be subjected to only the liabilities of professional recognition with none of the perks.
Sounds like some lawmakers would like to deny any professional stature to developers other than the right to be sued, as individuals, for their work.
Actually, one of the major features of being a "profession" is that the practitioner is held liable for the quality of their work (as measured against their adherence to pre-established standards). This is why teachers, businessmen, and programmers are not considered "professions": they do not have clearly defined standards for proper work conduct, or liability for their work product.
"The failure to meet a standard of care or standard of conduct that is recognized by a profession reaches the level of malpractice when a client or patient is injured or damaged because of error."
...
"Negligence is conduct that falls below the legally established standard for the protection of others against unreasonable risk of harm. Under negligence law a person must violate a reasonable standard of care."
So you're right, it's not so much quality of work as failure to meet minimum and pre-established standards of care. I suppose this might work with software: not so much "your code was low quality" as "you clearly ignored one of the OWASP Top 10 security risks." Perhaps completely failing to validate input would be malpractice, whereas writing crappy code to do this wouldn't.
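As a toy illustration of where that line might sit (Python; the username rule is invented for the example, not a recommendation):

    import re

    def set_username_unvalidated(raw):
        # "Completely failing to validate": anything is accepted, including
        # control characters, path separators, or a multi-megabyte string.
        return raw

    def set_username_crude(raw):
        # Crappy-but-present validation: a character whitelist and a length
        # cap. Far from complete, but it evidences a standard of care.
        if not re.fullmatch(r"[A-Za-z0-9_.-]{1,32}", raw):
            raise ValueError("invalid username")
        return raw

The first might plausibly be framed as ignoring an established standard; the second, however imperfect, at least shows the developer attempted to meet one.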
But this is all by the wayside, it's not really related to my main point - which is that I don't think there's any precedent for holding a practitioner liable for professional "malpractice" in the absence of a profession that sets standards (and controls the right to practice).
And, as I said above, I tend to be very suspicious of professional associations. I'm not saying there should be no regulation on who is allowed to be a medical care provider, but I do think the AMA (along with many other professional associations) shows extremely cartel-like behavior that can be very damaging.
Of course developers should be held accountable for security flaws in their code - right after we hold bankers accountable for blowing up the economy and plundering the world.
I do think that, at this point, code being shipped by people who consider themselves professionals should be free of the common classes of problems (SQL injection, buffer overflows, integer overflows, etc.). There is no real excuse for shipping one in this day and age. I am less convinced that legal liability will cause a meaningful decrease in vulnerabilities. I do think that, like healthcare, the liability would cause less to get done by more people. Developers would evolve techniques for passing the buck on down to someone else.
If legislation were passed, the big boys (Microsoft, Google, Oracle, etc.) would, even unintentionally, shape it in a way where startups would carry most of the liability for not having "adequate security procedures" like having your own security team, certain tools, etc. And that would be bad in the long run.
If one would have to be an idiot to write server software (because one would be liable for things beyond her or his control), only idiots would write servers. If this sounds glib, it's only because the suggested regulations here are so utterly insane.
It would be interesting to see all the source code that would be forced out in suits if this happened. Similar to all the cool design insight we have been able to glean from Apple v Samsung.
Also, Wordpress and Wordpress plugin authors would be very broke very quickly. :D
In the case of consumers, certain clauses restricting consumer remedies would in any event be unlawful due to consumer-protection legislation.
This includes clauses removing implied terms, like the implied term to exercise reasonable care and skill in the provision of a service. A consumer could argue that a developer has failed in this duty if they provide software with an avoidable defect. As the article identifies, the problem is that it would be a difficult process to identify what is truly avoidable. Due to the complexity of software and the ingenuity of hackers (for want of a better word), certain flaws will always be present. There is also the issue of contributory factors, like the user's own failure to apply updates.
In reality, I think the main barrier to consumers bringing an action would be the cost and expense of litigation, not necessarily the terms of a EULA.
If:
A) There was a reasonable expectation that the architect was responsible for the security of the home under a design agreement,
and
B) The architect was negligent in their duty to provide said security,
then yes, they could be sued.
You're focusing on the wrong aspect of the argument. You can already be sued for any number of things related to failing to meet an agreement.
The core question here is not whether software developers should be liable for security issues arising from their software, but whether or not software companies should be able to disclaim liability in such broad ways in their EULA language.
If they specify that no locks be used, and that their functionality be replaced with a large sign on each door saying "Ceci n'est pas une porte", then yes, they ought to be.
But this brings us to the "software engineer" title question. Engineering brings these sorts of professional liabilities with it. If you can't, so to speak, affix a stamp to your work, then what you're doing is probably not engineering.
No, but in some countries architects can get sued if buildings they designed endanger lives or something like that. I think that's the case in Canada, isn't it? In some countries engineering is a regulated profession in the same way as medicine or law. Now, whether we should consider software development to be engineering, that's another story.
There is a whole lot about a specific deployment that generally isn't known during development (that isn't contracted for the specific task) that is highly relevant to both liability and threats. If I build a tic-tac-toe app, someone uses it to land 747s, and terrorists turn it into a fireball using an obscure timing attack, that's not my fault. Liability should rest with whoever deployed the system, and if they're not comfortable with that they can pay to have the developers or publishers (or insurance companies) adopt more of it. The real problem is that EULAs are impenetrable walls of legalese, half of which is unenforceable in any given jurisdiction, and so go unread, and so there's no pressure on companies to do anything but waive everything they can get away with waiving.
If the restaurant poisons you, you will sue the restaurant owner, and the restaurant owner can sue the cook, but I'm not sure you can (or would) sue the cook directly; that would presuppose you know who cooked your meal, I guess.
Also, I believe that if a new branch of the software industry developed in which every library, OS, patch, update, firmware, and framework were certified as "developed without any negligence, for any use", only the defence and nuclear industries would be able to afford it. Actually, I guess they already do that for their own use.
So software certified free of negligence (and for whatever use the customer puts it to) is probably too expensive, and customers would just keep buying what they buy today if given the choice.
So to sidestep this issue, a developer could provide a product configured to be completely locked down, with little to no access to the network/filesystem/kernel, along with a blurb claiming it is completely secure only in the "recommended" configuration. Any configuration opening up access would be at the user's own risk.
It's a really bad idea unless they properly grandfather it. People have sold software under the assumption of limited liability, and if they are now suddenly held liable, it could financially ruin some companies, or at least end up being a massive transfer of wealth from software companies to consumers.
But joking aside, developers do need to be held responsible for their work product. Every other industry that makes things is held responsible for the safety/workability of its output, and many of those products (e.g., cars and planes) are far more complicated than software. [There are some narrow exceptions, e.g., vaccines, where the industry is shielded from liability because alternative mechanisms provide for consumer redress.]
This does not mean strict liability (i.e., liability without actual fault), but it does mean that negligence should be on the table.
What would negligent liability require?
Merely that developers follow best practices regarding their product. They already exist for the most common types of projects. Where no best practices exist, following standard conventions is usually sufficient.
Also, users can waive the developer's liability for simple negligence (but generally not for gross negligence), so this is unlikely to impact most developers.
Cars and planes aren't under constant bombardment from invisible human adversaries all over the world. They're required to withstand failure in the face of known natural conditions and flawed workmanship, but not constant attempts by human adversaries to break them.
And an example that proves the rule: we don't sue car manufacturers when their lock system fails to prevent a thief from stealing the radio or the car.
And should we hold military vehicle and fighter aircraft manufacturers legally responsible for damage to their products from enemies shooting at them?
If a delivered software product doesn't meet a contractually agreed-upon spec, then there is legal recourse for that. Maybe software contracts should specify, in very specific rather than general terms, exactly which penetration attempts the software has been designed to withstand.