
They mention it once in the article, but this has basically been true since fly-by-wire, which was quite a while ago.

While it may seem timely to blame automation for this one, you could just as easily blame faulty sensors (like the iced-up sensors they mention) or faulty mechanicals: pumps, hydraulics, wiring, electrical shorts, etc.

The sad thing is, as we go faster (both in terms of speed and technology), things will only get more complex. We're also far past the point where human pilots and crew might be able to do something about it.

Let's say it wasn't fly-by-wire, and it was hydraulics tied to the stick. If you lost power, you still probably wouldn't be able to move the stick, or it wouldn't do anything. Either way you are in deep trouble.

The real solution is redundancy. They mention the triple flight computers in the Airbus, but in the end, there will always be possible problems of all types. In computers, there are also hardware errors (not software bugs) that may occur at any time.

One of the things devs love to joke about is Heisenbugs. If you have debugged one of these and only found it on one machine, it's probably your memory doing a bit flip. Solar radiation can do that, especially at higher altitudes. Just because we're behind our magnetic field doesn't make it go away, and at higher altitudes you do see a slightly higher error rate. Now we're back in Byzantine generals territory, where you have to figure out which one is lying. Again, it comes down to the probability that you didn't lose two at the same time.
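
To make the voting idea concrete, here's a minimal sketch in C of 2-of-3 voting across redundant channels (names and tolerance are invented, not from any real flight control system): a single bit-flipped channel gets outvoted by taking the median.

    #include <math.h>
    #include <stdio.h>

    /* Sketch only: 2-of-3 voting across redundant channels. A single
     * "lying" channel (say, one that took a bit flip) is outvoted by
     * taking the median of the three values. */
    static double vote3(double a, double b, double c, double tol, int *trusted)
    {
        double med;
        if ((a <= b && b <= c) || (c <= b && b <= a)) med = b;
        else if ((b <= a && a <= c) || (c <= a && a <= b)) med = a;
        else med = c;

        /* Trust the result only if at least two channels agree. */
        *trusted = fabs(a - b) <= tol || fabs(b - c) <= tol || fabs(a - c) <= tol;
        return med;
    }

    int main(void)
    {
        int trusted;
        /* Channel b has suffered a bit flip; the median outvotes it. */
        double v = vote3(101.2, 5.0e9, 100.9, 2.0, &trusted);
        printf("voted=%.1f trusted=%d\n", v, trusted);
        return 0;
    }

The hard part is what happens when no two channels agree: then you're back to guessing which one is lying.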

While it's easy to say that the system should have just done "the right thing", when you learn more about how the system works and how complicated it is, you realize how hard that is. And problems in handling errors or trouble are some of the hardest to find, and have the most damaging consequences.

Spacecraft (manned or otherwise) have these exact same problems. (Source: I've worked on aerospace systems)

https://en.wikipedia.org/wiki/Fly-by-wire




> We're also far past the point where human pilots and crew might be able to do something about it.

I disagree. In the incident described in the article, the pilot could have prevented all of the damage--if only the airplane had responded to his control inputs. This was not a situation that would have been hard for a human to handle--if the human had had the ability to tell the airplane: stop listening to the computers and just listen to me.

> The real solution is redundancy.

There was plenty of redundancy in this case. It didn't help.

I think the real solution is, as I said above, to give the human pilot a last-ditch way of regaining complete manual control of the plane. After all, that's what the human is there for: to intervene if it is clear that the automation is doing the wrong thing. That was clearly the case here, but the humans had no way to stop it. That is not a good idea.


That assumes the human won't misuse the manual override and cause a crash when the computer would otherwise have been fine on its own.

Computers and humans are both fallible and it's not obvious that the human should be given higher authority than the computer. There isn't a simple answer to this kind of problem. Ultimately someone or something has to make decisions based on sensor data/vision/etc and that something or someone might always get it wrong.


This reminds me of some dev tools (can't remember which ones) which require the user to do something like set an environment variable to "YES_THIS_IS_A_TERRIBLE_IDEA". If the switch to disable aircraft automation involves such cognitive barriers, it's possible that pilots could be relied upon not to use it except when truly necessary. Heck, the barrier could be that pilots aren't told of its existence until contacting maintenance. It sounds silly and useless but it would have offered greater assurance in this situation that landing could be done safely.
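
A toy sketch of that barrier pattern in C (the variable names are invented for illustration):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Illustrative only: a dangerous override that refuses to engage
     * unless the operator has set an alarming environment variable. */
    int main(void)
    {
        const char *ack = getenv("YES_I_WANT_TO_OVERRIDE_THE_FLIGHT_COMPUTER");
        if (ack == NULL || strcmp(ack, "AND_I_ACCEPT_THE_CONSEQUENCES") != 0) {
            fprintf(stderr, "refusing: override not acknowledged\n");
            return 1;
        }
        puts("override enabled");
        return 0;
    }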


I don't know how hard it was to do, but here's a case of some Apollo astronauts who injured and nearly killed themselves by unnecessarily using manual override and doing it wrong. I suspect they knew full well what it meant but still decided that a special situation required it. It's not so much a careless oversight, as it might be in programming.

https://youtu.be/V-jRjyl7Fg4

The story starts at 47:50. I can't quite get the link to jump straight to it.


React is notable for this: inserting HTML strings directly into the DOM requires `dangerouslySetInnerHTML`, and React has also used internal names like `__SECRET_DOM_DO_NOT_USE_OR_YOU_WILL_BE_FIRED`: https://news.ycombinator.com/item?id=11447020


> Computers and humans are both fallible and it's not obvious that the human should be given higher authority than the computer.

I'm not saying the human should always be given higher authority than the computer. I'm saying that there should be some way, in a case where the human can tell that the computer is doing something obviously wrong, for the human to take higher authority in order to prevent the situation from worsening. Even if such an override were only possible in particular cases--for example, if a computer crashes or it is getting obviously wrong sensor data--that would be better than nothing at all.


> In the incident described in the article, the pilot could have prevented all of the damage--if only the airplane had responded to his control inputs.

Except that this is ALSO a failure mode: https://en.wikipedia.org/wiki/Air_France_Flight_447

I suspect that the real problem is that the aerospace programmers still regard the humans as a "last ditch" recovery point when that is now totally infeasible. The programming needs to start assuming that it gets no help but needs to recover anyway.

For a more terrestrial example, both Three Mile Island and Fukushima would have turned out better if the humans had sat on their hands and done NOTHING.


> Except that this is ALSO a failure mode

Yes, it is. But in the Air France incident, there was faulty data from the airspeed indicators; the computers were incapable of flying the aircraft under those conditions, so there was no choice but to let the humans try. They failed, but that's not an argument for not letting them try if the computers would have also failed.

In the Qantas incident, there was no faulty instrument data as far as I can tell; it was just a computer crash. So the human would have done fine if only given the chance.

> The programming needs to start assuming that it gets no help but needs to recover anyway.

How can that happen if the computer crashes? Or if the instruments are giving faulty data? Unless and until we have artificial intelligence equal to that of humans, I don't see how you can avoid giving the human some last-ditch way of overriding the computer, since the human at least has some chance of handling conditions that the computer simply can't.


> so there was no choice but to let the humans try. They failed, but that's not an argument for not letting them try if the computers would have also failed.

The major issue in "letting the humans try" in the AF447 incident is that the pilots did not have the angle-of-attack measurement shown to them. AOA is the most critical flight parameter (it determines whether the wing is lifting or stalled) and while the flight computers have access to it, AOA is not displayed on the flight displays. This is a huge flaw; forcing the pilots to guesstimate AOA from airspeed readings is a horrid idea, especially if the airspeed sensors might be malfunctioning. There are already dedicated AOA sensors on the airframe and only a minor software change is needed to display AOA on the primary flight displays.

This design decision of hiding critical information from the humans (what AOA is, what the other pilot is doing with their stick) is what scares me most about fly-by-wire. I get that displaying all the information that's on all the avionics busses would overwhelm the crew, but displaying AOA is not pointless verbosity, it's the most crucial air data measurement on a fixed-wing aircraft.

Sullenberger agrees, "We have to infer angle of attack indirectly by referencing speed. That makes stall recognition and recovery that much more difficult. For more than half a century, we've had the capability to display Angle of Attack in the cockpits of most jet transports, one of the most critical parameters, yet we choose not to do it."

So does the AF447 report, "It is essential in order to ensure flight safety to reduce the angle of attack when a stall is imminent. Only a direct readout of the angle of attack could enable crews to rapidly identify the aerodynamic situation of the aeroplane and take the actions that may be required."


> The major issue in "letting the humans try" in the AF447 incident is that the pilots did not have the angle-of-attack measurement shown to them.

Yes, I agree, but that's not a reason not to give the humans an override option. It's a reason to provide AOA data directly to the pilots. I am actually flabbergasted that that is not done, since, as you say, AOA is the most critical flight parameter.

> This design decision of hiding critical information from the humans (what AOA is, what the other pilot is doing with their stick) is what scares me most about fly-by-wire.

I agree 100%.


Sorry for my wording issues -- I wasn't making a statement about whether giving such an option is good or bad, I was trying to talk about what went wrong in that case when the humans had to take over.

We both agree that the humans need to be provided critical data (including AOA) so when they need to take over, they have a proper set of information and aren't missing anything and don't need to reverse-engineer the AOA from the stall warning horn and the airspeed when the aircraft measures it in the first damn place!


> so there was no choice but to let the humans try.

Keeping the status quo while notifying the humans, unless there was an explicit human override, would have been FAR preferable.

One common thing in all these failures is that the derivative of these readings/settings has a spike.

These are passenger jets. They have envelopes; they behave according to physics. They don't do anything suddenly.

Sudden anything should get flagged as a mistake and ignored until corroborated.
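
A minimal sketch of that corroboration idea in C (limits and names invented): hold the last good value when a sample implies a physically impossible rate of change, and only accept the new level once a second sample agrees.

    #include <math.h>
    #include <stdio.h>

    typedef struct {
        double last_good;   /* last accepted value                     */
        double pending;     /* suspicious value awaiting corroboration */
        int    has_pending;
    } RateGate;

    /* Reject a sample whose rate of change exceeds what the airframe
     * can physically do, unless a second consecutive sample agrees. */
    static double rate_gate(RateGate *g, double sample, double dt, double max_rate)
    {
        if (fabs(sample - g->last_good) / dt <= max_rate) {
            g->last_good = sample;          /* physically plausible */
            g->has_pending = 0;
        } else if (g->has_pending && fabs(sample - g->pending) / dt <= max_rate) {
            g->last_good = sample;          /* corroborated by a second sample */
            g->has_pending = 0;
        } else {
            g->pending = sample;            /* flag it and wait */
            g->has_pending = 1;
        }
        return g->last_good;
    }

    int main(void)
    {
        RateGate g = { 2.5, 0.0, 0 };           /* AOA starts at 2.5 degrees */
        double samples[] = { 2.6, 50.9, 2.7 };  /* one-sample spike gets dropped */
        for (int i = 0; i < 3; i++)
            printf("%.1f\n", rate_gate(&g, samples[i], 0.05, 30.0));
        return 0;
    }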

> How can that happen if the computer crashes? Or if the instruments are giving faulty data?

You have voting and redundancy for a reason. 3 votes is not enough. The space shuttle has 5 computers of which 4 are active and voting.

Embedded computers can and do reboot within milliseconds if that is part of the spec.
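
The usual mechanism behind that is a hardware watchdog: if the software stops checking in, the hardware forces a reset. A sketch using the Linux /dev/watchdog interface (a real flight computer would use its own board-specific equivalent):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/watchdog", O_WRONLY);
        if (fd < 0) return 1;
        for (;;) {
            /* The main control loop would run here. If it hangs or
             * crashes, the writes stop and the hardware reboots us. */
            write(fd, "\0", 1);   /* pet the watchdog */
            usleep(100000);       /* 100 ms */
        }
    }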

The real problem, I suspect, is lack of testing. Especially of a fully assembled system whose parts have come from disparate suppliers.

Once you take the attitude "The human cannot help you out" your testing regime needs to get a LOT more stringent.


> the computers were incapable of flying the aircraft under those conditions

Perhaps so, but a safe method of flying the aircraft under those conditions was known, and could have been programmed: "When airspeed indication fails, fly at this power setting." The computers handed over to the humans, knowing exactly which indicators were faulty, as programmed, on the assumption that humans are better at dealing with this kind of situation; it turns out that they are not.


> a safe method of flying the aircraft under those conditions was known, and could have been programmed. "When airspeed indication fails, fly at this power setting".

That might be safe as an immediate response, but is it safe for long term control of the plane under changing conditions with no airspeed indication?


When flying straight and level, with the airframe set up for cruise (e.g. no flaps, gear up, etc.): yes. Weather conditions don't matter; the plane's speed through the air will remain the same, and it is the airspeed that matters when it comes to keeping the plane flying. Windshear can temporarily change the airspeed due to inertia, but keep the same power setting and the plane will catch up by itself.

When doing anything else, such as climbing, descending, etc: the required power settings will be different, but can be predetermined.

Turning will reduce airspeed for a given power setting, but if the fault is known, the turn can be done with a small enough angle of bank that it won't matter. This can be done by the pilot, or for the sake of this line of argument, a computer could easily be programmed to limit the bank angle when an airspeed indication is known to be unavailable.
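
In code, that fallback could be as simple as a fixed table per flight phase. A sketch (all numbers are invented placeholders, not real performance data for any aircraft):

    #include <stdio.h>

    typedef enum { PHASE_CLIMB, PHASE_CRUISE, PHASE_DESCENT } Phase;

    typedef struct {
        double pitch_deg;    /* target pitch attitude     */
        double n1_percent;   /* target engine thrust (N1) */
        double max_bank_deg; /* bank limit while degraded */
    } UnreliableAirspeedSetting;

    /* Predetermined pitch/power pairs for flight without airspeed. */
    static const UnreliableAirspeedSetting TABLE[] = {
        [PHASE_CLIMB]   = { 10.0, 85.0, 15.0 },
        [PHASE_CRUISE]  = {  2.5, 78.0, 15.0 },
        [PHASE_DESCENT] = { -1.0, 55.0, 15.0 },
    };

    int main(void)
    {
        UnreliableAirspeedSetting s = TABLE[PHASE_CRUISE];
        printf("pitch %.1f deg, N1 %.0f%%, bank limit %.0f deg\n",
               s.pitch_deg, s.n1_percent, s.max_bank_deg);
        return 0;
    }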


In the 447 incident, the pilots were often giving contradictory inputs, and nothing was communicating either the contradiction or the computer's resolution of the contradiction. Bad design. And the user interface for the computer indicating its own confusion was a bogus overspeed warning when in fact the plane was stalling. There was no sensor failure causing that miscommunication; it's how the system was designed. But it only behaved that way in a particular software abstraction mode (one of the alternate laws), and there's no unambiguous UI to know which law applied. The cockpit was also dark, and there was severe turbulence.

Anyway, there is no such thing as an override when it's fly-by-wire. An "override" is just a different level of software abstraction; it's yet another mode, with yet another set of rules. And that's what I think is bullshit: the layered rules. It's such bullshit it makes me angry. This is how the myth started that Airbuses can't be stalled, that the computer won't let you do that. True in normal mode, and complete bull fucking shit in at least one of its alternate modes.


During Alternate Law, the ECAM displays "ALTN LAW: PROT LOST". I think that is fairly unambiguous.


From the Air France 447 findings:

"Between the autopilot disconnection and the triggering of the STALL 2 warning, numerous messages were displayed on the ECAM. None of these messages helped the crew to identify the problem associated with the anomalous airspeed."

Knowing merely that you're in alternate law is not sufficient to know the entire consequences of the mode you're in.


That's an excellent point. Even if it's not the people, the stick or manual control inputs are also a point of failure. So if suddenly your stick does something strange, and your flight computer allows it (thinking human knows best) you are in big trouble.

The opposite of this is the ground collision avoidance systems, where the humans are one of the weakest parts of the aircraft (due to G-LOC).

https://www.youtube.com/watch?v=WkZGL7RQBVw


> I think the real solution is, as I said above, to give the human pilot a last-ditch way of regaining complete manual control of the plane.

This was my point all along, there is no such thing as "manual control" anymore. It's all a big system working in concert, even moving the control surfaces. And as systems get more complicated, and start to require faster than human response speeds, it will be even less "manual control".


> there is no such thing as "manual control" anymore

It's one thing to not have direct hydraulic or mechanical connections to the control surfaces. You can still have a direct wire from the controls in the cockpit to the actuators that move the control surfaces, with no computer intervention.

It's another thing to have no way to get the faulty computers out of the way and get that direct wire to the actuators.

> as systems get more complicated, and start to require faster than human response speeds

That might be the case for some systems, but it isn't the case for airliners. If it were, humans would not have been able to fly them before fly-by-wire controlled by computers came along.


> That might be the case for some systems, but it isn't the case for airliners.

Some planes like the U-2 are notoriously hard to fly, and spacecraft definitely require very fast reaction speeds.

> You can still have a direct wire from the controls in the cockpit to the actuators that move the control surfaces, with no computer intervention.

Not likely. You'll probably still need digital electronics to check that you aren't violating range of motion stops, or overtaxing the part or pushing it past its limits.

Then to move one wing or do some logical action, you're really twiddling a dozen or more wires at the same time, in just the right way. That requires a different control mechanism that basically mimics the flight computer, and it's not something that I think a human or even a pair of humans can do.

Now let's say you can wire everything up, and actually control it. That's so much wire which is a maintenance nightmare (just imagine if a few of these wires broke, and you didn't figure it out until you tried it). Not to mention the extreme weight of millions of long cable runs to the front of the aircraft.

> If it were, humans would not have been able to fly them before fly-by-wire controlled by computers came along.

I don't think airliners would be even close to where they are today without fly-by-wire. It's a game changing technology.


> Some planes like the U-2 are notoriously hard to fly

Yes, because of the high altitude, which makes the safe flight regime much narrower. An airliner doesn't fly that high.

> spacecraft definitely require very fast reaction speeds.

Yes, but a spacecraft isn't an airliner. I specifically said it isn't the case for airliners.

> You'll probably still need digital electronics to check that you aren't violating range of motion stops, or overtaxing the part or pushing it past its limits.

Possibly. But those electronics should be separate from the automated flight computer.

Btw, by "wire" I just meant "a direct means of sending information". It doesn't have to be a mechanical wire and pulley. I'm just saying that there ought to be a way of cutting out the flight computer's automated decision making and just letting the pilot's control inputs drive the control surfaces.

> to move one wing or do some logical action, you're really twiddling a dozen or more wires at the same time, in just the right way

There's no "twiddling". See above.

> That's so much wire

No, it's the same wire that already goes from the cockpit to the control actuators. See above.

> I don't think airliners would be even close to where they are today without fly-by-wire

In terms of comfort, convenience, and more economical operation when everything is working properly, sure, I absolutely agree. But as far as the basics of flight are concerned, they're still the same. It shouldn't be possible for a plane that was in safe straight and level flight to suddenly go haywire without the pilots being able to stop it, just because some computer crashed.


>Btw, by "wire" I just meant "a direct means of sending information". It doesn't have to be a mechanical wire and pulley. I'm just saying that there ought to be a way of cutting out the flight computer's automated decision making and just letting the pilot's control inputs drive the control surfaces.

This exists already in Airbus aircraft (Direct Law).


> This exists already in Airbus aircraft (Direct Law).

I can't tell from the article whether this mode was activated in the Qantas incident or not. It sounds like it was, but the article doesn't use precise terminology.

The question I would have is, what happens to Direct Law if there is a computer crash? Could what is described in the article--pilot giving control input but plane not responding because computer is overriding it--happen under Direct Law? Could it happen under Direct Law if a computer crashed?


>what happens to Direct Law if there is a computer crash?

I'm not quite sure what you mean by this. Are you talking about the flight control computers? If all of the redundant flight computers crash then the plane won't be controllable, but that is incredibly unlikely. It is like asking what will happen if all of the hydraulic systems fail, or if one of the wings falls off. If one of the flight control computers crashes, then that is one way of triggering a switch from Normal Law to Alternate Law.

>Could what is described in the article--pilot giving control input but plane not responding because computer is overriding it--happen under Direct Law?

No. Under Direct Law the pilot's inputs are directly translated to control surface deflections.


> Are you talking about the flight control computers?

One out of three of them, yes. That's what happened in the Qantas flight. Would Direct Law still work under those conditions?

> If one of the flight control computers crashes, then that is one way of triggering a switch from Normal Law to Alternate Law.

Should that have happened in the Qantas flight described in the article?

> Under Direct Law the pilots inputs are directly translated to control surface deflections.

Hm. If that's true then the obvious thing for the pilots to do would have been to go into Direct Law mode, but I don't see that mentioned in the article.


>One out of three of them, yes. That's what happened in the Qantas flight. Would Direct Law still work under those conditions?

Yes, of course. I'm not sure what you're thinking is the alternative. If the aircraft stopped being controllable even under Direct Law when one of the flight control computers failed, then there wouldn't be any redundancy.

>Should that have happened in the Qantas flight described in the article?

No, because none of the flight control computers crashed. The issue was caused by bad sensor data.

>Hm. If that's true then the obvious thing for the pilots to do would have been to go into Direct Law mode, but I don't see that mentioned in the article.

It is not what they are trained to do, and probably isn't really a sensible response. Flying the aircraft without any protections isn't particularly safe either.


> Flying the aircraft without any protections isn't particularly safe either.

The plane was flying straight and level when the flight control computer crash caused the automatic control to start doing obviously wrong things. Going to Direct Law at that point would have allowed the pilot to reestablish straight and level flight while they figured things out. That seems like it would have been safer than what happened. If they weren't trained to do that, maybe their training needs to be changed.


>Going to Direct Law at that point would have allowed the pilot to reestablish straight and level flight while they figured things out. That seems like it would have been safer than what happened.

It may have been safer in this specific incident. The question is whether it would make things safer to tell pilots to switch to direct law whenever something occurs that they think might be due to a problem with the flight control computer. I suspect you would just see lots of accidents caused by the absence of the protections.

It's worth noting that an Airbus has never had a fatal accident as the result of this kind of incident. So the correct response is probably no response at all. It's easy to get fixated on these kinds of incidents because there's something terrifying about the idea of a crazy computer overruling the pilots, but the actual chance of a fatal accident is way lower than the chance of an accident caused by pilot error owing to an absence of protections.

Thanks for the correction about the flight control computer. I'm not sure how reliable the technical info in the article is, though.


> The question is whether it would make things safer to tell pilots to switch to direct law whenever something occurs that they think might be due to a problem with the flight control computer.

In this case the pilots didn't "think" there was a problem with the flight control computer; they knew it, because they were given an indication that it had faulted. It was that positive indication plus the obviously wrong behavior that made it a sound conclusion that the flight control computer was causing the problem. That's a much narrower rule to follow than just "whenever you think it might be a problem with the flight control computer".

> It's worth noting that an Airbus has never had a fatal accident as the result of this kind of incident.

It depends on how narrowly you define "this kind of incident". If you define it as "flight control computer fault", then you're correct. But if you define it as "airplane not helping pilots to deal with a sudden emergency", then I think Air France 447 falls into the same category. (There has been a fair bit of discussion of that incident elsewhere in this thread.) See further comments below.

> the correct response is probably no response at all

"Correct" by what standard? Yes, nobody died as a result of this Quantas incident, but plenty of people were injured, and it could have been worse.

Also, in more general terms, if the correct response is always no response at all, what is the human pilot there for in the first place? The reason to have humans in the cockpit is that there will be situations where the computers cannot provide the correct response, and doing nothing at all endangers the passengers. Air France 447 is an example of that.

If you're going to have humans in the cockpit to deal with situations of this kind, the airplane needs to be designed with that in mind. I'm not sure that design criterion is being given enough weight.

> the actual chance of a fatal accident is way lower than the chance of an accident caused by pilot error owing to an absence of protections

Averaged over all flights, yes, of course that's true. But we're not talking about all flights. We're talking about a small subset of flights in which humans, if given proper information, can potentially do better than computers at making the correct response. Unless that subset is empty, which I strongly doubt, there is a potential safety gain in identifying such situations and designing for them.


>because they were given an indication that it had faulted

I checked the accident report. The computer didn't fail. It was just getting inaccurate sensor information. This caused the control system to automatically switch to Alternate Law, which already removes a large number of the automatic protections. If the pilots had used circuit breakers to force a switch to Direct Law, this would most likely have decreased the safety of the remainder of the flight. Coffin Corner is a scary thing when you're flying without any automatic protections. Moreover, the injuries were all caused by the FIRST uncommanded pitch down. To avoid those injuries would have required the automatic protections to have been turned off from the beginning!

> then I think Air France 447 falls into the same category.

This was pilot error, as clearly explained in the accident report. All the stuff on reddit about linked control sticks is a red herring. See e.g. https://aviation.stackexchange.com/a/14045

>Also, in more general terms, if the correct response is always no response at all, what is the human pilot there for in the first place?

I meant the correct response to this incident in terms of making changes to aircraft systems and procedures, not that pilots should never do anything.

If we could get pilots to shut off the computer systems only when this would be likely to help, then sure, that would be great. But realistically, we would just have a spate of incidents where pilots shut down the computers for no good reason and then crashed the aircraft.


> The computer didn't fail. It was just getting inaccurate sensor information.

Yes, I see that after looking at the report--the one I'm looking at is here:

https://www.atsb.gov.au/media/3532398/ao2008070.pdf

> This caused the control system to automatically switch to Alternate Law, which already removes a large number of the automatic protections.

But it apparently didn't remove the high AOA protection, which was what caused the uncommanded pitch down events when false AOA information was provided by the failed unit.

> If the pilots had used circuit breakers to force a switch to Direct Law, this would most likely have decreased the safety of the remainder of the flight.

It would have prevented the uncommanded pitch down events, which were what caused injuries to passengers and crew, and which were due to faulty automatic function even in Alternate Law. And it seems to me that the only reason there weren't more uncommanded events due to faulty automatic function was just luck.

Edit: After further reading in the report, it looks like high AOA protection is supposed to be turned off in Alternate Law. But it seems clear that automatic high AOA protection was what caused the uncommanded pitch down events. So it could be that Alternate Law was not actually triggered until after those events happened.

> Coffin Corner is a scary thing when you're flying without any automatic protections.

Yes, and the solution to that is to get out of Coffin Corner as soon as you know you have faulty automatic controls--gradually reduce altitude and airspeed to give more margin of safety while you look for the nearest place to land. I see no reason why that couldn't be made a standard contingency plan for the rare cases like this one where the human pilots can see that the automatic controls are doing obviously wrong things.

> realistically, we would just have a spate of incidents where pilots shut down the computers for no good reason and then crashed the aircraft.

So you don't think it's possible to come up with a good narrow set of rules that pilots can use to determine when the automatic controls are doing obviously wrong things? Or that it's possible to improve the designs of the automatic systems so that they can give pilots better feedback on why they are doing what they are doing? In this case, AOA data was faulty, and accurate AOA data is critical for proper automatic control of the flight. So a big red light saying "Faulty AOA data" would be an obvious trigger to tell the pilots that they need to take action. Instead, the automatic system was apparently designed to go ahead and pitch the aircraft down based on faulty AOA data.


> none of the flight control computers crashed. The issue was caused by bad sensor data.

I think you're confusing the Qantas flight with Air France 447. From the article on the Qantas flight, after the first dive:

"One of the aircraft's three flight control primary computers – which pilots refer to as PRIMs – is faulty. They begin to reset it by flicking the on-off switch."

And later:

"After QF72's second dive, the number three flight control primary computer faults again."


> spacecraft definitely require very fast reaction speeds

Transatmospheric "flight" is practically impossible for a human to directly control. Even in a simulation, the changing atmospheric density's nonlinear effects on lift and drag require real-time calculus in a way we have not evolved to process. Add to that turbulence, et cetera, and the effort quickly becomes futile.


Totally agree. Really I meant reaction / processing times.

Although there was one time that comes to mind (less flight and more falling): https://en.wikipedia.org/wiki/Mercury-Atlas_9



But will the computer let you go to Direct Law? One of the problems with the Air France flight was it switched laws without them realizing it. I'm sure the pilots tried to put it in whatever mode is most like no computer intervention. And from the wikipedia article, it sounds like Direct Law isn't completely direct: the "maximum deflection of the elevators is limited for each configuration as a function of the current aircraft centre of gravity," which sounds like the computer still has some control.


I'd be surprised if the AF447 pilots didn't realise the aeroplane had reverted to a set of laws with fewer protections, as that was why the autopilot disconnected in the first place. If the autopilot spontaneously disconnects your first question is presumably going to be why.

Then again, I'll probably never understand what the pilots of AF447 were thinking, and their actions lacked coherence. But I would expect a perceptive pilot to notice the change of law. Especially given the stall warning - normal law has full stall protection, so AFAIK hearing a stall warning is strongly indicative of non-normal law.

AFAIK pilots can always revert to direct law by disabling the primary computers on the overhead panel. The article is curious though, because it states the pilots tried to reenable the third primary computer after it failed, which resulted in a second dive. They gave up on restarting it after that, but left the other primary computers engaged, but were concerned about whether the plane would behave. My (potentially misguided) instinct there would be to throw out the primary computers altogether as a precaution.

It wouldn't surprise me if Airbus training encouraged pilots to always try to return to the maximum level of automation and envelope protection after a failure. In that context - the context of Airbus's culture - deliberately reverting to direct law could be seen as irresponsible, even if it seems like the opposite could be said to be the case in this incident. If so, it'd just be another way in which Airbus's approach to aviation raises questions...


Prior to these accidents, pilots were trained to avoid stalls and unusual attitudes rather than to recover from them, partly because of the lack of simulator fidelity outside the flight envelope, which was cited as a factor in the AF447 accident. While they did know alternate law applied, they didn't know why, and the flight computer didn't give them sufficient information to figure this out quickly; the indications they were getting (speed, attitude, C-chord warning, stall warning) pretty much confused all of them.


See my response to foldr upthread.


> If you lost power, you still probably wouldn't be able to move the stick, or it wouldn't do anything.

Item of interest: some commercial aircraft have a mechanical flight control mode where the pilot can mechanically control some of the control surfaces. Not sure if this applies to many other aircraft though.


Not available on an Airbus: there is no mechanical linkage from the flight stick to anything other than its own housing.

However, on both modern Airbus and Boeing planes, there's a RAT[1] that will generate hydraulic and electrical power if the plane's main and auxiliary systems fail. The (in)famous Gimli Glider[2] used it to survive total power loss after it ran its fuel tanks dry.

[1]: https://en.wikipedia.org/wiki/Ram_air_turbine
[2]: https://en.wikipedia.org/wiki/Gimli_Glider


On the 757, the pilot's controls move steel cables which open and close the valves on hydraulic rams which move the surfaces. Hydraulic power is necessary, which is why there's the backup RAT to supply enough hydraulic pressure to run the primary flight controls if the engines all fail.

A "feel computer" pushes back on the pilot's controls so moving the controls feels like it would on a machine where the surfaces were connected to the cables.

I'm pretty sure the 757 can be flown with no electric power.


> ...you could just as easily blame faulty sensors...

Well, no - faulty sensors are an inevitability, so the automation needs to be able to continue to function effectively in that eventuality. Which is, of course, a well-understood part of the design space.


Everything is an inevitability, that's the problem with the design space. We haven't yet invented some perfect machine that will handle everything.

But there are functional and realistic limits to how well you can perform under certain failure scenarios. You might have to increase weight to add more sensors and wiring. That logic to reconcile more sensors will be more complex, and possibly prone to error itself. Added complexity against failure doesn't necessarily increase safety, as much as we always wish it would.


> Everything is an inevitability, that's the problem with the design space. We haven't yet invented some perfect machine that will handle everything.

In the case of this flight, and the less fortunate Air France 447, it was the failure of a single sensor that caused the loss of stable flight, yet there were still two other sensors that continued to function correctly.

In both cases, the fix is to turn a switch so that the primary flight display and auto pilot start getting their data from one of the alternate sources. Without having done a complex study, my impression is that there is a lack of pilot training when it comes to properly managing and understanding flight data sources and paths in modern aircraft.


Relying on just pitot tubes, even in multiples, is dumb. Inertial guidance, GPS, GLONASS, etc.: feed it all into a Kalman filter. AF447 would still be around if the autopilot hadn't kicked off.
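
For illustration, a minimal 1-D sketch of that fusion in C (a real air-data filter is multi-dimensional; the noise figures here are invented): propagate airspeed from inertial acceleration, then weight the pitot measurement by how much you trust it.

    #include <stdio.h>

    typedef struct {
        double x;   /* airspeed estimate (m/s) */
        double p;   /* estimate variance       */
    } Kf1;

    static void kf_predict(Kf1 *k, double accel, double dt, double q)
    {
        k->x += accel * dt;   /* propagate with inertial acceleration */
        k->p += q;            /* grow uncertainty (process noise)     */
    }

    static void kf_update(Kf1 *k, double meas, double r)
    {
        double gain = k->p / (k->p + r);   /* how much to trust the measurement */
        k->x += gain * (meas - k->x);
        k->p *= 1.0 - gain;
    }

    int main(void)
    {
        Kf1 k = { 230.0, 4.0 };
        /* An iced-over pitot reads absurdly low; giving it a huge
         * variance lets the inertially propagated estimate dominate. */
        kf_predict(&k, 0.0, 1.0, 0.5);
        kf_update(&k, 60.0, 1000.0);
        printf("fused airspeed: %.1f m/s (var %.2f)\n", k.x, k.p);
        return 0;
    }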


You could say the same about everything. AF447 would still be here if Bonin, the co-pilot, had not pulled the stick back while the plane was clearly stalled, because he wanted the plane to 'go up'.


There were lots of other confounding factors; the other two crew members didn't have good enough instrumentation to get the plane out of the stall either.

The Apollo Guidance Computer really showed me the value of best effort and gradual failure. Too many things hard fail.

HCI kills.


Speaking of HCI killing, https://www.washingtonpost.com/archive/politics/2002/03/24/f...

> Nonetheless, the official said the incident shows that the Air Force and Army have a serious training problem that needs to be corrected. "We need to know how our equipment works; when the battery is changed, it defaults to his own location," the official said. "We've got to make sure our people understand this."

The solution is almost never "be more careful."



