What software developer would ever sign on to a project where they could be held criminally liable for a single bug?
Do you want software development to turn into healthcare, where every developer needs millions of dollars of malpractice insurance? Because shit like this will turn it into a healthcare-like system real quick.
How else to interpret that? When a single bug can cause loss of life, and given that this is in a thread about Uber, it’s hard to draw other conclusions. By all means though, offer another perspective on why industries with a significant number of lives on the line can’t manage regulation. While you’re doing that, I’d point to the aerospace sector, which seems capable of both innovation and regulation.
There's a difference between holding someone criminally responsible for a bug in code that they wrote, and some sort of regulation. They are not the same.
Although the NTSB investigated the accident, it was unable to conclusively identify the cause of the crash. The rudder PCU from Flight 585 was severely damaged, which prevented operational testing of the PCU.[3]:47 A review of the flight crew's history determined that Flight 585's captain strictly adhered to operating procedures and had a conservative approach to flying.[3]:47 A first officer who had previously flown with Flight 585's captain reported that the captain had indicated to him while landing in turbulent weather that the captain had no problem with declaring a go-around if the landing appeared unsafe.[3]:48 The first officer was considered to be "very competent" by the captain on previous trips they had flown together.[3]:48 The weather data available to the NTSB indicated that Flight 585 might have encountered a horizontal axis wind vortex that could have caused the aircraft to roll over, but this could not be shown conclusively to have happened or to have caused the rollover.[3]:48–49
On December 8, 1992, the NTSB published a report which identified what the NTSB believed at the time to be the two most likely causes of the accident. The first possibility was that the airplane's directional control system had malfunctioned and caused the rudder to move in a manner which caused the accident. The second possibility was a weather disturbance that caused a sudden rudder movement or loss of control. The Board determined that it lacked sufficient evidence to conclude either theory as the probable cause of the accident.[2]:ix[3]:49 This was only the fourth time in the NTSB's history that it had closed an investigation and published a final aircraft accident report where the probable cause was undetermined.[4]
Second:
In 2004, following an independent investigation of the recovered PCU/dual-servo unit, a Los Angeles jury, which was not allowed to hear or consider the NTSB's conclusions about the accident, ruled that the 737's rudder was the cause of the crash, and ordered Parker Hannifin, a rudder component manufacturer, to pay US$44 million to the plaintiff families.[16] Parker Hannifin subsequently appealed the verdict, which resulted in an out-of-court settlement for an undisclosed amount.
You interpret it as written, which is that holding developers routinely criminally liable for bugs is going to have very negative effects. One of them is that the only developers you'll get are precisely those too unwise to realize what an incredibly stupid deal that is, no matter what the pay rate is. I don't think I'd like to see all my critical software written by such "unwise developers".
I have no problem "piercing the veil" for egregious issues. I'd have no problem holding a developer liable who knew a project couldn't be made safe but just continued on rather than quit. But "let's just hold all the engineers criminally liable all the time!" is a bad idea, and there's a reason it isn't already done.
It’s not done because software development is an unregulated shitshow full of wildly unethical companies scrambling for the bottom. It’s not unlike early aerospace, or early medicine, or any frontier which develops rapidly before legal frameworks inevitably close in.
This is not true at all. First of all, there's no such thing as mathematically proving a design is sound in any engineering discipline, software or otherwise. After all, it is infeasible, if not impossible, to capture every detail of the implementation of _any_ system in mathematics or any other system of reasoning (down to every last atom, if you stretch your imagination).
All we have in (non-software) engineering is something like safety factors and confidence, built on (usually) rigorous mathematical models plus loads and loads of testing to fill in the gaps the mathematics leaves (think unknown constants/parameters, assumptions, etc.).
None of this is impossible for software. There are systems that support everything from easy, entry-level verification (something like TLA+) up to much more complicated reasoning (something like Coq). These let system designers gain confidence in whether the system will work and understand under what scenarios it will fail. Contrast that with the existing software landscape, which is mostly, at least from my perspective, "let me write some stuff until things do approximately what I want." Even at the top of the ladder, I feel the tests conducted are "ad hoc" at best, with none of the rigor you associate with traditional engineering fields.
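To make the contrast concrete, here is a toy sketch of what a machine-checked property looks like. It's in Lean 4 rather than the TLA+/Coq mentioned above, and the clamp function and its bound are just an illustration I made up, not anyone's real spec: the property (the result never exceeds the upper bound) is written down explicitly, and the checker rejects the proof if the definition doesn't actually satisfy it.

    -- Toy example: a clamping function plus a machine-checked guarantee
    -- that its result never exceeds the upper bound (given lo ≤ hi).
    def clamp (lo hi x : Nat) : Nat :=
      if x < lo then lo
      else if hi < x then hi
      else x

    -- If clamp were defined incorrectly, this proof would fail to check.
    theorem clamp_le_hi (lo hi x : Nat) (h : lo ≤ hi) : clamp lo hi x ≤ hi := by
      unfold clamp
      split
      · omega      -- branch x < lo: result is lo, and lo ≤ hi by assumption
      · split
        · omega    -- branch hi < x: result is hi, and hi ≤ hi trivially
        · omega    -- otherwise: result is x, and ¬(hi < x) gives x ≤ hi

The toy property isn't the point; the point is that the claim is stated once and checked mechanically, instead of living implicitly in whatever ad hoc tests someone happened to write.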