
The full transcripts were posted on the NASASpaceFlight forum here: http://forum.nasaspaceflight.com/index.php?topic=35974.440. One user provided the following summary:

> There was corporate conversational knowledge that unlocking the feather system during the transonic region would be catastrophic, but this knowledge wasn't formalized into the pilot handbook or in training. There was formal knowledge that unlocking late would lead to a flight abort, and a recent event had occurred where the unlock was late. Add to this copilot workload increases between flights, the fact that training wasn't done in the suits and equipment worn on the real flight or under the g and vibration loads in the flight, and the result was the copilot unlocked the feather early leading to the loss of the vehicle. As usual, not a single failure, but a chain of smaller failures - lack of formalization of knowledge, lack of training in the operational environment, recent events, pressure to avoid an abort, and you get an overcompensation.



They should just do all this in software. Anything that's complex and life or death should be autopilot driven.


99 Percent Invisible just had some interesting episodes on automation and autopilot, especially as it relates to air travel:

    http://99percentinvisible.org/episode/children-of-the-magenta-automation-paradox-pt-1/
and self-driving cars:

    http://99percentinvisible.org/episode/johnnycab-automation-paradox-pt-2/


And then you get an autopilot that under abnormal conditions doesn't realize it should deploy the parachute...


Almost everything an autopilot should do should have a button and a switch for humans. The switch prevents the autopilot from doing it, the button makes it happen.

This isn't to disparage autopilots, but to assure control. The human should always be in control of the system, and capable of deciding when things happen.
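
If it helps, here's a minimal sketch of that rule (hypothetical names, plain Python, not any real avionics interface): the button forces the action, the switch blocks the autopilot, and otherwise the autopilot's decision stands.

    # Hypothetical sketch of the switch/button rule described above.
    # The switch inhibits the autopilot; the button forces the action.
    def should_fire(autopilot_wants: bool, inhibit_switch: bool, button_pressed: bool) -> bool:
        if button_pressed:        # human commands the action directly
            return True
        if inhibit_switch:        # human has locked the autopilot out
            return False
        return autopilot_wants    # otherwise defer to the autopilot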


And in this case, the human would have pushed the button too early, resulting in the exact same outcome.


No, in this case, by far the most likely outcome is the autopilot would have pushed the button at the right time, creating no need for the copilot to override it.


How do you figure? Pilot thinks "I think it's time to push the button." Autopilot doesn't push button. Pilot says "autopilot is broken. I need to override." Pilot pushes button early.

If the pilot thinks the button needs pushing, they're not going to wait til after the crash to see if the autopilot is working.


What happened here wasn't that the copilot sat there waiting for the intended moment, pressed the button exactly then, and simply didn't know what that moment was supposed to be. What happened was that he was frantically trying to concentrate on a zillion things at once and pressed the button a bit early, because he thought he might otherwise end up pressing it too late: he had been warned about the dangers of pressing it too late, but not about the dangers of pressing it too early. If there had been an autopilot, it most likely would have pressed the button at the right time while he was looking at something else.


Lol. Got em


> Anything that's complex and life or death should be autopilot driven.

What's your stance on autopilot driven cars and trucks?

Arguably, roadway accidents far, far outnumber space accidents on a per-capita basis.


> What's your stance on autopilot driven cars and trucks?

The sooner we can switch to them, the better...

> Arguably, roadway accidents far, far outnumber space accidents on a per-capita basis.

... for this reason.



