
> That's a problem. That's a TRAINING problem.

Isn't the root problem still related to the fact that the AoA sensors were returning wrong information and the MCAS was simply reacting to that in a dangerous and non-obvious way?

I guess the pilots should be aware of this type of system reaction, but it still seems like something that should be fixed at a higher level than training (indicators, backup AoA sensors, detecting a dangerous descent after triggering nose-down, etc.).




There's never just one cause of an airline accident. That was one of the causes, but alone it wouldn't have been sufficient. Fixing the problems at all levels is how you achieve high reliability.


Seems like it would have been sufficient to cause the crash on its own. Or are you saying that if the pilots had let MCAS do its thing, the plane would not have crashed?


Crash causes:

1. Faulty MCAS

2. Insufficient pilot training

3. Boeing design changes (MCAS used to be disabled automatically if the pilot applied sufficient counter-input; this feature was removed in the -MAX series, requiring the pilot to explicitly disable the system using a separate button)

Take away any 1 of those causes and the plane doesn't crash. They all had to happen at the same time for these two tragedies to occur.


additionally

4. Insufficient regulatory oversight

5. Failed safety analysis

The FAA pushed much of the safety analysis onto Boeing's own engineers, due both to a lack of funding and to external pressure to help Boeing speed through certification.

The safety analysis was performed on incorrect data. The initial safety report said MCAS could move the stabilizer by 0.6 degrees to push the nose down. After flight testing that value was increased to 2.5 degrees, but the safety analysis was never updated to reflect the new parameter. Additionally, MCAS could reset each time the pilot attempted to override it and apply another 2.5 degrees after each reset, so its authority was effectively unbounded. The numbers in the safety report did not reflect the actual system; if they had been updated, the problem would likely have been caught sooner.
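As a rough back-of-the-envelope illustration of that unboundedness (this is not Boeing's actual control law; the number of override/reset cycles is made up, and only the 0.6 and 2.5 degree figures come from the report described above):

    # Hypothetical illustration: cumulative trim command if MCAS re-activates
    # after each pilot override, applying 2.5 degrees per activation.
    ANALYZED_LIMIT = 0.6   # degrees assumed in the original safety analysis
    PER_ACTIVATION = 2.5   # degrees per activation, per the post-flight-test value

    total_command = 0.0
    for cycle in range(1, 6):          # five override/reset cycles (arbitrary)
        total_command += PER_ACTIVATION
        print(f"cycle {cycle}: cumulative command = {total_command:.1f} deg "
              f"(~{total_command / ANALYZED_LIMIT:.0f}x the analyzed 0.6 deg)")

After only a couple of cycles the cumulative command is already several times larger than anything the original analysis considered.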


The Swiss cheese model of accident causation.

https://en.wikipedia.org/wiki/Swiss_cheese_model
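A toy version of the model, with made-up failure rates for each layer: an accident requires every layer's hole to line up, so the probabilities multiply, and removing any single cause (a layer that always blocks the fault) drives the product to zero, which matches the point above.

    # Toy Swiss-cheese calculation; the per-layer failure rates are invented.
    layer_failure_probs = [0.01, 0.05, 0.02]   # e.g. sensor, training, design

    accident_prob = 1.0
    for p in layer_failure_probs:
        accident_prob *= p    # all layers must fail at the same time

    print(f"probability that every layer fails together: {accident_prob:.6f}")
    # A layer with failure probability 0 makes the product 0: no accident.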


> MCAS used to be disabled automatically if the pilot applied sufficient counter-input; this feature was removed in the -MAX series

What was the reasoning behind this change?


Presumably so that the pilot can be blamed for not knowing the procedure to manually disable the automatic system that is flying the plane into the ground.


Exactly - Defense in depth.


There are problems at multiple distinct levels: regulatory oversight, unsafe design, lack of information/training, and potentially poor maintenance practices.

If you'll excuse the expression, everyone is going to get poop on them after this one is over.


Those fixes are inarguably lower level than training, not higher. That's not just pedantry: the whole reason Boeing is in this mess is that they figured all the lower-level things could sort out the issues without engaging the higher-level fixes.

I'm skeptical of the amount of automation the big manufacturers are pushing. They're creating wildly more complex system interactions in the process.


> skeptical of the amount of automation the big manufacturers are pushing

We need to come to terms with automation as a civilization. Lots of things that have had a human in the loop will lose that person within 0-15 years. We don't realize how important these people are, especially how their inaction and delay damp oscillations. As these automated systems start interacting with each other, they will produce failures like MCAS, but without pilot agency to catch them.

Look at the panic stops (circuit breakers) built into financial markets. What other systems should have something similar but currently don't?
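A minimal sketch of that kind of guard, just to make the idea concrete (generic, not any exchange's actual rules; the 7% threshold and 10-observation window are invented):

    # Generic circuit breaker: halt automated action when a monitored value
    # moves more than a threshold within a short window of observations.
    from collections import deque

    class CircuitBreaker:
        def __init__(self, max_move=0.07, window=10):
            self.max_move = max_move             # e.g. a 7% move triggers a halt
            self.recent = deque(maxlen=window)   # the last `window` observations
            self.halted = False

        def observe(self, value):
            self.recent.append(value)
            if len(self.recent) == self.recent.maxlen:
                move = abs(self.recent[-1] - self.recent[0]) / abs(self.recent[0])
                if move > self.max_move:
                    self.halted = True           # stop automating, hand back to humans
            return not self.halted

    breaker = CircuitBreaker()
    for price in [100, 100, 101, 99, 98, 97, 96, 95, 94, 92]:
        if not breaker.observe(price):
            print("halted at", price)
            break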


Yes, you're right. Lower level is a more accurate description.



