Hacker News
Flight QF72: What happens when automation leaves pilots powerless? (theherald.com.au)
130 points by korethr on May 19, 2017 | 111 comments



This was already submitted and discussed at https://news.ycombinator.com/item?id=14336897 and https://news.ycombinator.com/item?id=14328294.

My summary at https://news.ycombinator.com/item?id=14329082 is still the most deeply researched, as far as I can tell (I actually read a lot of the air safety bureau's report).

Basically, Northrop Grumman's ADIRU units had sporadic software bugs (off-by-one errors, typical of software written in needlessly low-level languages), and the Airbus control system that combined the output of the three redundant ADIRUs didn't properly handle the case of a failure in one of the three (even when the other two agreed with GPS!), didn't provide transparent training and resolution procedures to pilots for this class of situation, and generally trusted the broken unit instead of the other two, or the GPS, which were operating fine. Even calling the major airline's crisis centre on the ground for assistance couldn't resolve the situation. So... problems in both companies, but Airbus is mostly to blame, even if testing could not be expected to tease out all bugs in third-party components: they were not logging and resolving issues from the millions of flight hours these units already had, and they had not tested their fly-by-wire system with worst-case input. What's the point of redundancy if you trust the broken unit?
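To make the "don't trust the broken unit" point concrete, here's a rough sketch (my own illustration in Python, not Airbus's actual voting logic) of the kind of cross-check a triply-redundant setup invites: take the median of the three ADIRU values and sanity-check it against an independent source such as a GPS-derived estimate.

    # Illustrative only -- not the real flight control algorithm.
    def select_air_data(adiru_values, independent_estimate, tolerance):
        """Pick a value from three redundant units without trusting an outlier."""
        low, median, high = sorted(adiru_values)
        # A single faulty unit is simply outvoted by the other two...
        if abs(median - independent_estimate) > tolerance:
            # ...and if even the majority disagrees with the independent source,
            # flag the data as invalid rather than act on it.
            return None
        return median

    # The broken ADIRU's spike is discarded by the vote:
    print(select_air_data([2.1, 2.3, 50.6], independent_estimate=2.2, tolerance=5.0))  # -> 2.3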


After the Intel FDIV bug [1], they started using formal verification to make sure the CPUs did what they said they would. I don't understand why they don't do the same for these avionics systems. Maybe it's just not worth it to the company.

[1] https://en.wikipedia.org/wiki/Pentium_FDIV_bug


Would you say that this has limitations and isn't a silver bullet? E.g. what if your software added numbers, and formal verification proved that your program was indeed correct because it added the numbers, but it missed the requirement that you also had to do subtraction, which for some reason was omitted from the specification?


You would be no worse off in this situation. Writing a formal spec arguably makes it harder to overlook such issues. Formal verification is good at catching things like off-by-one errors, where the problem is not in the spec.
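For what it's worth, here's a minimal sketch of what "mechanically catching an off-by-one" can look like, assuming the Z3 SMT solver's Python bindings (pip install z3-solver); the spec and the buggy access are hypothetical, just to show the shape of the thing:

    from z3 import Int, Solver, And, Not, sat

    n = Int('n')   # buffer length
    i = Int('i')   # index touched on the final loop iteration

    s = Solver()
    s.add(n > 0)
    s.add(i == n)                    # classic off-by-one: the loop runs one step too far
    s.add(Not(And(i >= 0, i < n)))   # negation of the spec "every access is in bounds"

    # sat means the solver found a concrete counterexample, i.e. the bug is real.
    # If the spec itself is wrong or incomplete, of course, no tool can save you.
    if s.check() == sat:
        print("counterexample:", s.model())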


Yes, I agree of course. Garbage-in-garbage-out has many manifestations.


Leaky abstractions... nominally, this appears to be a triply-redundant system, but due to the way the systems interacted, a single faulty unit apparently overrode two units that were working correctly (or were at least responding safely to the situation.) I have not finished reading the report, and I am interested to know if it contained anything to address this apparent systems-engineering level issue, such as how likely it is that there are other potential breakdowns of redundancy.

I am not sure whether there is much that can be done in the way of pilot training and resolution procedures for this class of problem. Trying to figure out if a subsystem is faulty, and if so, which one, when they are interconnected by complex and frequently opaque rules, is not easy. I do not think that more information would help, given that the crew was already getting more diagnostic messages than they could handle in real time.



They mention it once in the article, but this has basically been true since fly-by-wire, which was quite a while ago.

While it may seem timely to blame automation for this one, you could just as easily blame faulty sensors (like the iced up sensors they mention) or faulty mechanicals, such as pumps, hydraulics, wiring, electrical shorts, etc.

The sad thing is, as we go faster (both in terms of speed and technology), things will only get more complex. We're also far past the point where human pilots and crew might be able to do something about it.

Let's say it wasn't fly-by-wire, and it was hydraulics tied to the stick. If you lost power, you still probably wouldn't be able to move the stick, or it wouldn't do anything. Either way you are in deep trouble.

The real solution is redundancy. They mention the triple flight computers in the Airbus, but in the end, there will always be possible problems of all types. In computers, there are also hardware errors (not software bugs) that may occur at any time.

One of the things devs love to joke about is heisenbugs. If you have debugged one of these, and only found it on one machine, it's probably your memory doing a bit-flip. Solar radiation can do that, especially at higher altitudes. Just because we're behind our magnetic field doesn't make it go away, and at higher altitudes you do have a slightly higher error rate. Now we're just back to Byzantine generals territory, where you have to figure out which one is lying. Again, it comes down to the probability that you didn't lose two at the same time.

While it's easy to say that the system should have just done "the right thing" - when you learn more about how the system works, and how complicated it is, you realize how hard that is. And problems in handling errors or trouble are some of the hardest to find, and have the most damaging consequences.

Spacecraft (manned or otherwise) have these exact same problems. (Source: I've worked on aerospace systems)

https://en.wikipedia.org/wiki/Fly-by-wire


> We're also far past the point where human pilots and crew might be able to do something about it.

I disagree. In the incident described in the article, the pilot could have prevented all of the damage--if only the airplane had responded to his control inputs. This was not a situation that would have been hard for a human to handle--if the human had had the ability to tell the airplane: stop listening to the computers and just listen to me.

> The real solution is redundancy.

There was plenty of redundancy in this case. It didn't help.

I think the real solution is, as I said above, to give the human pilot a last-ditch way of regaining complete manual control of the plane. After all, that's what the human is there for: to intervene if it is clear that the automation is doing the wrong thing. That was clearly the case here, but the humans had no way to stop it. That is not a good idea.


That assumes the human won't mistakenly misuse the manual override and cause a crash when the computer would otherwise have been fine on its own.

Computers and humans are both fallible and it's not obvious that the human should be given higher authority than the computer. There isn't a simple answer to this kind of problem. Ultimately someone or something has to make decisions based on sensor data/vision/etc and that something or someone might always get it wrong.


This reminds me of some dev tools (can't remember which ones) which require the user to do something like set an environment variable to "YES_THIS_IS_A_TERRIBLE_IDEA". If the switch to disable aircraft automation involves such cognitive barriers, it's possible that pilots could be relied upon not to use it except when truly necessary. Heck, the barrier could be that pilots aren't told of its existence until contacting maintenance. It sounds silly and useless but it would have offered greater assurance in this situation that landing could be done safely.
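Something along these lines, say (a toy sketch; the variable name is borrowed from the example above and doesn't correspond to any real tool or avionics system):

    import os
    import sys

    def disable_automation():
        # The barrier is deliberately obnoxious: you can't trip it by accident.
        if os.environ.get("YES_THIS_IS_A_TERRIBLE_IDEA") != "1":
            sys.exit("Refusing: set YES_THIS_IS_A_TERRIBLE_IDEA=1 if you really mean it.")
        print("Automation disabled.")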


I don't know how hard it was to do, but here's a case of some Apollo astronauts who injured and nearly killed themselves by unnecessarily using a manual override and doing it wrong. I suspect they knew full well what it meant but still decided that the special situation required it. It's not so much a careless oversight, as it often is in programming.

https://youtu.be/V-jRjyl7Fg4

The story starts at 47:50. I can't quite get the link to jump straight to it.


React is notable for this: inserting HTML strings directly into the DOM uses `dangerouslySetInnerHTML`, and it has also used names like `__SECRET_DOM_DO_NOT_USE_OR_YOU_WILL_BE_FIRED`: https://news.ycombinator.com/item?id=11447020


> Computers and humans are both fallible and it's not obvious that the human should be given higher authority than the computer.

I'm not saying the human should always be given higher authority than the computer. I'm saying that there should be some way, in a case where the human can tell that the computer is doing something obviously wrong, for the human to take higher authority in order to prevent the situation from worsening. Even if such an override were only possible in particular cases--for example, if a computer crashes or it is getting obviously wrong sensor data--that would be better than nothing at all.


> In the incident described in the article, the pilot could have prevented all of the damage--if only the airplane had responded to his control inputs.

Except that this is ALSO a failure mode: https://en.wikipedia.org/wiki/Air_France_Flight_447

I suspect that the real problem is that the aerospace programmers still regard the humans as a "last ditch" recovery point when that is now totally infeasible. The programming needs to start assuming that it gets no help but needs to recover anyway.

For a more terrestrial example, both Three Mile Island and Fukushima would have turned out better if the humans had sat on their hands and done NOTHING.


> Except that this is ALSO a failure mode

Yes, it is. But in the Air France incident, there was faulty data from the airspeed indicators; the computers were incapable of flying the aircraft under those conditions, so there was no choice but to let the humans try. They failed, but that's not an argument for not letting them try if the computers would have also failed.

In the Qantas incident, there was no faulty instrument data as far as I can tell; it was just a computer crash. So the human would have done fine if only given the chance.

> The programming needs to start assuming that it gets no help but needs to recover anyway.

How can that happen if the computer crashes? Or if the instruments are giving faulty data? Unless and until we have artificial intelligence equal to that of humans, I don't see how you can avoid giving the human some last-ditch way of overriding the computer, since the human at least has some chance of handling conditions that the computer simply can't.


> so there was no choice but to let the humans try. They failed, but that's not an argument for not letting them try if the computers would have also failed.

The major issue in "letting the humans try" in the AF447 incident is that the pilots did not have the angle-of-attack measurement shown to them. AOA is the most critical flight parameter (it determines whether the wing is lifting or stalled) and while the flight computers have access to it, AOA is not displayed in the flight displays. This is a huge flaw; forcing the pilots to guesstimate AOA from airspeed readings is a horrid idea, especially if the airspeed sensors might be malfunctioning. There are already dedicated AOA sensors on the airframe and only a minor software change is needed to display AOA on the primary flight displays.

This design decision of hiding critical information from the humans (what AOA is, what the other pilot is doing with their stick) is what scares me most about fly-by-wire. I get that displaying all the information that's on all the avionics busses would overwhelm the crew, but displaying AOA is not pointless verbosity, it's the most crucial air data measurement on a fixed-wing aircraft.

Sullenberger agrees, "We have to infer angle of attack indirectly by referencing speed. That makes stall recognition and recovery that much more difficult. For more than half a century, we've had the capability to display Angle of Attack in the cockpits of most jet transports, one of the most critical parameters, yet we choose not to do it."

So does the AF447 report, "It is essential in order to ensure flight safety to reduce the angle of attack when a stall is imminent. Only a direct readout of the angle of attack could enable crews to rapidly identify the aerodynamic situation of the aeroplane and take the actions that may be required."


> The major issue in "letting the humans try" in the AF447 incident is that the pilots did not have the angle-of-attack measurement shown to them.

Yes, I agree, but that's not a reason not to give the humans an override option. It's a reason to provide AOA data directly to the pilots. I am actually flabbergasted that that is not done, since, as you say, AOA is the most critical flight parameter.

> This design decision of hiding critical information from the humans (what AOA is, what the other pilot is doing with their stick) is what scares me most about fly-by-wire.

I agree 100%.


Sorry for my wording issues -- I wasn't making a statement about whether giving such an option is good or bad, I was trying to talk about what went wrong in that case when the humans had to take over.

We both agree that the humans need to be provided critical data (including AOA) so when they need to take over, they have a proper set of information and aren't missing anything and don't need to reverse-engineer the AOA from the stall warning horn and the airspeed when the aircraft measures it in the first damn place!


> so there was no choice but to let the humans try.

Keeping the status quo without an explicit human override, while notifying the humans, would have been FAR preferable.

One common thing in all these failures is that the derivative of these readings/settings has a spike.

These are passenger jets. They have envelopes; they behave according to physics. They don't do anything suddenly.

Sudden anything should get flagged as a mistake and ignored until corroborated.
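A rough sketch of what "ignored until corroborated" could look like (a toy illustration, not certified avionics logic): a jump bigger than the airframe could physically produce is only accepted once it persists for several consecutive samples.

    MAX_STEP = 5.0        # largest physically plausible change per sample
    CONFIRM_SAMPLES = 3   # consistent samples needed before a big jump is believed

    class CorroboratingFilter:
        def __init__(self, initial):
            self.value = initial      # last accepted reading
            self.pending = None       # suspicious value awaiting confirmation
            self.count = 0

        def update(self, reading):
            if abs(reading - self.value) <= MAX_STEP:
                self.value = reading                      # ordinary change: accept it
                self.pending, self.count = None, 0
            elif self.pending is not None and abs(reading - self.pending) <= MAX_STEP:
                self.count += 1
                if self.count >= CONFIRM_SAMPLES:         # the jump persisted: accept it
                    self.value = reading
                    self.pending, self.count = None, 0
            else:
                self.pending, self.count = reading, 1     # sudden spike: hold and wait
            return self.value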

> How can that happen if the computer crashes? Or if the instruments are giving faulty data?

You have voting and redundancy for a reason. 3 votes is not enough. The space shuttle has 5 computers of which 4 are active and voting.

Embedded computers can and do reboot within milliseconds if that is part of the spec.

The real problem, I suspect, is lack of testing. Especially of a fully assembled system whose parts have come from disparate suppliers.

Once you take the attitude "The human cannot help you out" your testing regime needs to get a LOT more stringent.


> the computers were incapable of flying the aircraft under those conditions

Perhaps so, but a safe method of flying the aircraft under those conditions was known, and could have been programmed: "When airspeed indication fails, fly at this power setting." The computers handed over to the humans, knowing exactly which indicators were faulty, as programmed, the intention being that humans are better at dealing with this kind of situation; it turns out that they are not.


> a safe method of flying the aircraft under those conditions were known, and could have been programmed. "When airspeed indication fails, fly at this power setting".

That might be safe as an immediate response, but is it safe for long term control of the plane under changing conditions with no airspeed indication?


When flying straight and level, with the airframe set up for cruise (eg. no flaps, gear up, etc): yes. Weather conditions don't matter; the plane's speed through the air will remain the same, and it is the airspeed that matters when it comes to keeping the plane flying. Windshear can temporarily change the airspeed due to inertia, but keep the same power setting and the plane will catch up by itself.

When doing anything else, such as climbing, descending, etc: the required power settings will be different, but can be predetermined.

Turning will reduce airspeed for a given power setting, but if the fault is known, the turn can be done with a small enough angle of bank that it won't matter. This can be done by the pilot, or for the sake of this line of argument, a computer could easily be programmed to limit the bank angle when an airspeed indication is known to be unavailable.
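Something as simple as this captures the idea (a toy sketch, not real flight-control code; the limits are made-up numbers):

    NORMAL_BANK_LIMIT = 30.0    # degrees
    DEGRADED_BANK_LIMIT = 10.0  # shallow turns only while airspeed data is unreliable

    def limit_bank(commanded_bank, airspeed_valid):
        limit = NORMAL_BANK_LIMIT if airspeed_valid else DEGRADED_BANK_LIMIT
        return max(-limit, min(limit, commanded_bank))

    print(limit_bank(25.0, airspeed_valid=False))  # -> 10.0: the turn is flattened out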


In the 447 incident, the pilots were often giving conflicting inputs, and nothing was communicating either the conflict or the computer's resolution of it. Bad design. And the user interface for the computer indicating its own confusion was a bogus overspeed warning when in fact the plane was stalling. There was no sensor failure causing that miscommunication; it's how the system was designed, but it only behaved that way in a particular software abstraction mode (one of the alternate laws), and there's no unambiguous UI to know which law applied. The cockpit was also dark, and there was severe turbulence.

Anyway, there is no such thing as an override when it's fly-by-wire. An "override" is just a different level of software abstraction; it's yet another mode, with yet another set of rules. And that's what I think is bullshit: the layered rules. It's such bullshit it makes me angry. This is how the myth started that Airbuses can't be stalled, that the computer won't let you do that. True in normal mode, and complete bull fucking shit in at least one of its alternate modes.


During Alternate Law, the ECAM displays "ALTN LAW: PROT LOST". I think that is fairly unambiguous.


From the Air France 447 findings:

"Between the autopilot disconnection and the triggering of the STALL 2 warning, numerous messages were displayed on the ECAM. None of these messages helped the crew to identify the problem associated with the anomalous airspeed."

Knowing merely that you're in alternate law is not sufficient to know the entire consequences of the mode you're in.


That's an excellent point. Even if it's not the people, the stick or manual control inputs are also a point of failure. So if suddenly your stick does something strange, and your flight computer allows it (thinking human knows best) you are in big trouble.

The opposite of this is ground collision avoidance systems, where the humans are one of the weakest parts of the aircraft (due to G-LOC).

https://www.youtube.com/watch?v=WkZGL7RQBVw


> I think the real solution is, as I said above, to give the human pilot a last-ditch way of regaining complete manual control of the plane.

This was my point all along, there is no such thing as "manual control" anymore. It's all a big system working in concert, even moving the control surfaces. And as systems get more complicated, and start to require faster than human response speeds, it will be even less "manual control".


> there is no such thing as "manual control" anymore

It's one thing to not have direct hydraulic or mechanical connections to the control surfaces. You can still have a direct wire from the controls in the cockpit to the actuators that move the control surfaces, with no computer intervention.

It's another thing to have no way to get the faulty computers out of the way and get that direct wire to the actuators.

> as systems get more complicated, and start to require faster than human response speeds

That might be the case for some systems, but it isn't the case for airliners. If it were, humans would not have been able to fly them before fly-by-wire controlled by computers came along.


> That might be the case for some systems, but it isn't the case for airliners.

Some planes like the U-2 are notoriously hard to fly, and spacecraft definitely require very fast reaction speeds.

> You can still have a direct wire from the controls in the cockpit to the actuators that move the control surfaces, with no computer intervention.

Not likely. You'll probably still need digital electronics to check that you aren't violating range of motion stops, or overtaxing the part or pushing it past its limits.

Then to move one wing or do some logical action, you're really twiddling a dozen or more wires at the same time, in just the right way. That requires a different control mechanism that basically mimics the flight computer, and it's not something that I think a human or even a pair of humans can do.

Now let's say you can wire everything up, and actually control it. That's so much wire which is a maintenance nightmare (just imagine if a few of these wires broke, and you didn't figure it out until you tried it). Not to mention the extreme weight of millions of long cable runs to the front of the aircraft.

> If it were, humans would not have been able to fly them before fly-by-wire controlled by computers came along.

I don't think airliners would be even close to where they are today without fly-by-wire. It's a game changing technology.


> Some planes like the U-2 are notoriously hard to fly

Yes, because of the high altitude, which makes the safe flight regime much narrower. An airliner doesn't fly that high.

> spacecraft definitely require very fast reaction speeds.

Yes, but a spacecraft isn't an airliner. I specifically said it isn't the case for airliners.

> You'll probably still need digital electronics to check that you aren't violating range of motion stops, or overtaxing the part or pushing it past its limits.

Possibly. But those electronics should be separate from the automated flight computer.

Btw, by "wire" I just meant "a direct means of sending information". It doesn't have to be a mechanical wire and pulley. I'm just saying that there ought to be a way of cutting out the flight computer's automated decision making and just letting the pilot's control inputs drive the control surfaces.

> to move one wing or do some logical action, you're really twiddling a dozen or more wires at the same time, in just the right way

There's no "twiddling". See above.

> That's so much wire

No, it's the same wire that already goes from the cockpit to the control actuators. See above.

> I don't think airliners would be even close to where they are today without fly-by-wire

In terms of comfort, convenience, and more economical operation when everything is working properly, sure, I absolutely agree. But as far as the basics of flight are concerned, they're still the same. It shouldn't be possible for a plane that was in safe straight and level flight to suddenly go haywire without the pilots being able to stop it, just because some computer crashed.


>Btw, by "wire" I just meant "a direct means of sending information". It doesn't have to be a mechanical wire and pulley. I'm just saying that there ought to be a way of cutting out the flight computer's automated decision making and just letting the pilot's control inputs drive the control surfaces.

This exists already in Airbus aircraft (Direct Law).


> This exists already in Airbus aircraft (Direct Law).

I can't tell from the article whether this mode was activated in the Qantas incident or not. It sounds like it was, but the article doesn't use precise terminology.

The question I would have is, what happens to Direct Law if there is a computer crash? Could what is described in the article--pilot giving control input but plane not responding because computer is overriding it--happen under Direct Law? Could it happen under Direct Law if a computer crashed?


>what happens to Direct Law if there is a computer crash?

I'm not quite sure what you mean by this. Are you talking about the flight control computers? If all of the redundant flight computers crash then the plane won't be controllable, but that is incredibly unlikely. It is like asking what will happen if all of the hydraulic systems fail, or if one of the wings falls off. If one of the flight control computers crashes, then that is one way of triggering a switch from Normal Law to Alternate Law.

>Could what is described in the article--pilot giving control input but plane not responding because computer is overriding it--happen under Direct Law?

No. Under Direct Law the pilots' inputs are directly translated to control surface deflections.


> Are you talking about the flight control computers?

One out of three of them, yes. That's what happened in the Qantas flight. Would Direct Law still work under those conditions?

> If one of the flight control computers crashes, then that is one way of triggering a switch from Normal Law to Alternate Law.

Should that have happened in the Qantas flight described in the article?

> Under Direct Law the pilots inputs are directly translated to control surface deflections.

Hm. If that's true then the obvious thing for the pilots to do would have been to go into Direct Law mode, but I don't see that mentioned in the article.


>One out of three of them, yes. That's what happened in the Qantas flight. Would Direct Law still work under those conditions?

Yes, of course. I'm not sure what you're thinking is the alternative. If the aircraft stopped being controllable even under Direct Law when one of the flight control computers failed, then there wouldn't be any redundancy.

>Should that have happened in the Qantas flight described in the article?

No, because none of the flight control computers crashed. The issue was caused by bad sensor data.

>Hm. If that's true then the obvious thing for the pilots to do would have been to go into Direct Law mode, but I don't see that mentioned in the article.

It is not what they are trained to do, and probably isn't really a sensible response. Flying the aircraft without any protections isn't particularly safe either.


> Flying the aircraft without any protections isn't particularly safe either.

The plane was flying straight and level when the flight control computer crash caused the automatic control to start doing obviously wrong things. Going to Direct Law at that point would have allowed the pilot to reestablish straight and level flight while they figured things out. That seems like it would have been safer than what happened. If they weren't trained to do that, maybe their training needs to be changed.


>Going to Direct Law at that point would have allowed the pilot to reestablish straight and level flight while they figured things out. That seems like it would have been safer than what happened.

It may have been safer in this specific incident. The question is whether it would make things safer to tell pilots to switch to direct law whenever something occurs that they think might be due to a problem with the flight control computer. I suspect you would just see lots of accidents caused by the absence of the protections.

It's worth noting that an Airbus has never had a fatal accident as the result of this kind of incident. So the correct response is probably no response at all. It's easy to get fixated on these kinds of incidents because there's something terrifying about the idea of a crazy computer overruling the pilots, but the actual chance of a fatal accident is way lower than the chance of an accident caused by pilot error owing to an absence of protections.

Thanks for the correction about the flight control computer. I'm not sure how reliable the technical info in the article is, though.


> The question is whether it would make things safer to tell pilots to switch to direct law whenever something occurs that they think might be due to a problem with the flight control computer.

In this case the pilots didn't "think" there was a problem with the flight control computer; they knew it, because they were given an indication that it had faulted. It was that positive indication plus the obviously wrong behavior that made it a sound conclusion that the flight control computer was causing the problem. That's a much narrower rule to follow than just "whenever you think it might be a problem with the flight control computer".

> It's worth noting that an Airbus has never had a fatal accident as the result of this kind of incident.

It depends on how narrowly you define "this kind of incident". If you define it as "flight control computer fault", then you're correct. But if you define it as "airplane not helping pilots to deal with a sudden emergency", then I think Air France 447 falls into the same category. (There has been a fair bit of discussion of that incident elsewhere in this thread.) See further comments below.

> the correct response is probably no response at all

"Correct" by what standard? Yes, nobody died as a result of this Quantas incident, but plenty of people were injured, and it could have been worse.

Also, in more general terms, if the correct response is always no response at all, what is the human pilot there for in the first place? The reason to have humans in the cockpit is that there will be situations where the computers cannot provide the correct response, and doing nothing at all endangers the passengers. Air France 447 is an example of that.

If you're going to have humans in the cockpit to deal with situations of this kind, the airplane needs to be designed with that in mind. I'm not sure that design criterion is being given enough weight.

> the actual chance of a fatal accident is way lower than the chance of an accident caused by pilot error owing to an absence of protections

Averaged over all flights, yes, of course that's true. But we're not talking about all flights. We're talking about a small subset of flights in which humans, if given proper information, can potentially do better than computers at making the correct response. Unless that subset is empty, which I strongly doubt, there is a potential safety gain in identifying such situations and designing for them.


>because they were given an indication that it had faulted

I checked the accident report. The computer didn't fail. It was just getting inaccurate sensor information. This caused the control system to automatically switch to Alternate Law, which already removes a large number of the automatic protections. If the pilots had used circuit breakers to force a switch to Direct Law, this would most likely have decreased the safety of the remainder of the flight. Coffin Corner is a scary thing when you're flying without any automatic protections. Moreover, the injuries were all caused by the FIRST uncommanded pitch down. To avoid those injuries would have required the automatic protections to have been turned off from the beginning!

> then I think Air France 447 falls into the same category.

This was pilot error, as clearly explained in the accident report. All the stuff on reddit about linked control sticks is a red herring. See e.g. https://aviation.stackexchange.com/a/14045

>Also, in more general terms, if the correct response is always no response at all, what is the human pilot there for in the first place?

I meant the correct response to this incident in terms of making changes to aircraft systems and procedures, not that pilots should never do anything.

If we could get pilots to shut off the computer systems only when this would be likely to help, then sure, that would be great. But realistically, we would just have a spate of incidents where pilots shut down the computers for no good reason and then crashed the aircraft.


> The computer didn't fail. It was just getting inaccurate sensor information.

Yes, I see that after looking at the report--the one I'm looking at is here:

https://www.atsb.gov.au/media/3532398/ao2008070.pdf

> This caused the control system to automatically switch to Alternate Law, which already removes a large number of the automatic protections.

But it apparently didn't remove the high AOA protection, which was what caused the uncommanded pitch down events when false AOA information was provided by the failed unit.

> If the pilots had used circuit breakers to force a switch to Direct Law, this would most likely have decreased the safety of the remainder of the flight.

It would have prevented the uncommanded pitch down events, which were what caused injuries to passengers and crew, and which were due to faulty automatic function even in Alternate Law. And it seems to me that the only reason there weren't more uncommanded events due to faulty automatic function was just luck.

- After further reading in the report, it looks like high AOA protection is supposed to be turned off in Alternate Law. But it seems clear that automatic high AOA protection was what caused the uncommanded pitch down events. So it could be that Alternate Law was not actually triggered until after those events happened.

> Coffin Corner is a scary thing when you're flying without any automatic protections.

Yes, and the solution to that is to get out of Coffin Corner as soon as you know you have faulty automatic controls--gradually reduce altitude and airspeed to give more margin of safety while you look for the nearest place to land. I see no reason why that couldn't be made a standard contingency plan for the rare cases like this one where the human pilots can see that the automatic controls are doing obviously wrong things.

> realistically, we would just have a spate of incidents where pilots shut down the computers for no good reason and then crashed the aircraft.

So you don't think it's possible to come up with a good narrow set of rules that pilots can use to determine when the automatic controls are doing obviously wrong things? Or that it's possible to improve the designs of the automatic systems so that they can give pilots better feedback on why they are doing what they are doing? In this case, AOA data was faulty, and accurate AOA data is critical for proper automatic control of the flight. So a big red light saying "Faulty AOA data" would be an obvious trigger to tell the pilots that they need to take action. Instead, the automatic system was apparently designed to go ahead and pitch the aircraft down based on faulty AOA data.


> none of the flight control computers crashed. The issue was caused by bad sensor data.

I think you're confusing the Qantas flight with Air France 447. From the article on the Qantas flight, after the first dive:

"One of the aircraft's three flight control primary computers – which pilots refer to as PRIMs – is faulty. They begin to reset it by flicking the on-off switch."

And later:

"After QF72's second dive, the number three flight control primary computer faults again."


> spacecraft definitely require very fast reaction speeds

Transatmospheric "flight" is practically impossible for a human to directly control. Even in a simulation, the changing atmospheric density's nonlinear effects on lift and drag require real-time calculus in a way we have not evolved to process. Add to that turbulence, et cetera, and the effort quickly becomes futile.


Totally agree. Really I meant reaction / processing times.

Although there was one time that comes to mind (less flight and more falling): https://en.wikipedia.org/wiki/Mercury-Atlas_9



But will the computer let you go to Direct Law? One of the problems with the Air France flight was it switched laws without them realizing it. I'm sure the pilots tried to put it in whatever mode is most like no computer intervention. And from the wikipedia article, it sounds like Direct Law isn't completely direct: the "maximum deflection of the elevators is limited for each configuration as a function of the current aircraft centre of gravity," which sounds like the computer still has some control.


I'd be surprised if the AF447 pilots didn't realise the aeroplane had reverted to a set of laws with fewer protections, as that was why the autopilot disconnected in the first place. If the autopilot spontaneously disconnects your first question is presumably going to be why.

Then again, I'll probably never understand what the pilots of AF447 were thinking, and their actions lacked coherence. But I would expect a perceptive pilot to notice the change of law. Especially given the stall warning - normal law has full stall protection, so AFAIK hearing a stall warning is strongly indicative of non-normal law.

AFAIK pilots can always revert to direct law by disabling the primary computers on the overhead panel. The article is curious though, because it states the pilots tried to re-enable the third primary computer after it failed, which resulted in a second dive. They gave up on restarting it after that and left the other primary computers engaged, but were concerned about whether the plane would behave. My (potentially misguided) instinct there would be to throw out the primary computers altogether as a precaution.

It wouldn't surprise me if Airbus training encouraged pilots to always try to return to the maximum level of automation and envelope protection after a failure. In that context - the context of Airbus's culture - deliberately reverting to direct law could be seen as irresponsible, even if it seems like the opposite could be said to be the case in this incident. If so, it'd just be another way in which Airbus's approach to aviation raises questions...


Prior to these accidents, pilots were trained to avoid stalls and unusual attitudes rather than to recover from them, partly because of the lack of simulator fidelity outside the flight envelope, which was cited as a factor in the AF447 accident. While they did know alternate law applied, they didn't know why, the flight computer didn't give them sufficient information to figure that out quickly, and the indications they were getting (speed, attitude, C-chord warning, stall warning) pretty much confused all of them.


See my response to foldr upthread.


> If you lost power, you still probably wouldn't be able to move the stick, or it wouldn't do anything.

Item of interest: some commercial aircraft have a mechanical flight control mode where the pilot can mechanically control some of the control surfaces. Not sure if this applies to many other aircraft, though.


Not available on an Airbus: there is no mechanical linkage from the flight stick to anything other than its own housing.

However, on both modern Airbus and Boeing planes, there's a RAT[1] that will generate hydraulic and electrical power if the plane's main and auxiliary systems fail. The (in)famous Gimli Glider[2] used it to survive total power loss after it ran its fuel tanks dry.

[1]:https://en.wikipedia.org/wiki/Ram_air_turbine [2]:https://en.wikipedia.org/wiki/Gimli_Glider


On the 757, the pilot's controls move steel cables which open and close the valves on hydraulic rams which move the surfaces. Hydraulic power is necessary, which is why there's the backup RAT to supply enough hydraulic pressure to run the primary flight controls if the engines all fail.

A "feel computer" pushes back on the pilot's controls so moving the controls feels like it would on a machine where the surfaces were connected to the cables.

I'm pretty sure the 757 can be flown with no electric power.


> you could just as easily blame faulty sensors

Well, no - faulty sensors are an inevitability, so the automation needs to be able to continue to function effectively in that eventuality. Which is, of course, a well-understood part of the design space.


Everything is an inevitability, that's the problem with the design space. We haven't yet invented some perfect machine that will handle everything.

But there are functional and realistic limits to how well you can perform under certain failure scenarios. You might have to increase weight to add more sensors and wiring. That logic to reconcile more sensors will be more complex, and possibly prone to error itself. Added complexity against failure doesn't necessarily increase safety, as much as we always wish it would.


> Everything is an inevitability, that's the problem with the design space. We haven't yet invented some perfect machine that will handle everything.

In the case of this flight, and the less fortunate Air France 447, it was the failure of a single sensor that caused the loss of stable flight, yet there were still two other sensors that continued to function correctly.

In both cases, the fix is to turn a switch so that the primary flight display and auto pilot start getting their data from one of the alternate sources. Without having done a complex study, my impression is that there is a lack of pilot training when it comes to properly managing and understanding flight data sources and paths in modern aircraft.


Relying on just pitot tubes, even in multiples, is dumb. Inertial guidance, GPS, GLONASS, etc. Feed it all into a Kalman filter. AF447 would still be around if the autopilot hadn't kicked off.
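A scalar sketch of the fusion idea (illustrative only; real air-data fusion is a much bigger job): the state is airspeed, the inertial system supplies the predicted change, and the pitot and a GPS-derived speed are fused as two independent noisy measurements, with a crude gate that rejects a pitot reading that disagrees wildly with everything else.

    def kalman_update(x, P, z, R):
        """Fuse one measurement z (variance R) into estimate x (variance P)."""
        K = P / (P + R)          # Kalman gain
        return x + K * (z - x), (1 - K) * P

    x, P = 230.0, 25.0           # initial airspeed estimate (kt) and its variance
    Q = 1.0                      # process noise added at each step

    # (inertial delta-v, pitot reading, GPS-derived speed); the second pitot value is garbage
    for dv, pitot, gps in [(0.0, 231.0, 229.5), (0.0, 80.0, 230.2)]:
        x, P = x + dv, P + Q                           # predict from inertial data
        if abs(pitot - x) < 5 * (P + 16.0) ** 0.5:     # gate out an implausible pitot value
            x, P = kalman_update(x, P, pitot, R=16.0)
        x, P = kalman_update(x, P, gps, R=36.0)        # GPS speed (ignoring wind for the sketch)
        print(round(x, 1))                             # stays near 230 despite the bad pitot data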


You could say the same about everything - AF447 would still be here if Bonin, the co-pilot, hadn't pulled the stick back while the plane was clearly stalled, because he wanted the plane to 'go up'.


There were lots of other confounding factors; the other two crew members didn't have good enough instrumentation to get the plane out of the stall either.

The Apollo Guidance Computer really showed me the value of best effort and gradual failure. Too many things hard fail.

HCI kills.


Speaking of HCI killing, https://www.washingtonpost.com/archive/politics/2002/03/24/f...

    > Nonetheless, the official said the incident
    > shows that the Air Force and Army have a serious
    > training problem that needs to be corrected. "We
    > need to know how our equipment works; when the
    > battery is changed, it defaults to his own
    > location," the official said. "We've got to make
    > sure our people understand this."
The solution is almost never, "be more careful."


> control over the horizontal tail – 3000 pounds per square inch of pressure that can be moved at the speed of light.

Not quite. If I recall correctly the tail will move about 1/2 degree per second. Hardly the speed of light. The elevators are hydraulically operated and can go much faster, but hydraulics don't move at the speed of light, either. The 3000 psi refers to the hydraulic pressure in the system, which has kinda nothing to do with how fast things move. Boeing considered going to a higher pressure to save weight, but rejected the idea because a pinhole leak at such pressures would act like a knife on human skin.

Source: I worked on the horizontal tail design of the 757.


Yeah, that particular sentence really grated on me.

I also didn't like: "Booooom. A crashing sound tears through the cabin. In a split second, the galley floor disappears beneath Maiava's feet, momentarily giving him a sense of floating in space."

I initially thought there was some sort of structural failure of the cabin floor.


Most articles written by journalists about technical issues read like the journalist took the facts and ran them through a Mixmaster before printing them.

One nice anomaly is the "Aviation Disasters" TV series, which gets the tech right as far as I can tell.


The wording is technically imprecise in a number of places. The mechanical components certainly can't move at any significant fraction of c, but they may be referring to the electrical fly-by-wire signals controlling them.

Still not precisely the speed of light, but close enough if that's what they meant.


Similarly, Captain Sullenberger said that in respect of US Airways Flight 1549, computer-imposed limits prevented him from achieving the optimum landing flare for the ditching, which would have softened the impact.


What I found most interesting was how unnerved the pilot (a bona fide Top Gun graduate) became by his newfound sense of lack of control. It ruined flying for him and effectively ended his career.

I would have assumed that coming to terms with a lack of control was one of the first things a fighter pilot would have to learn to deal with. But maybe you can't be a good fighter pilot unless you can fool yourself into thinking you're always in control, and that lack of control is really just a personal failing (didn't train hard enough, not thinking clearly enough, etc).

Or maybe with age you just lose the ability to handle that kind of stress. AFAIU the older you get the more likely you are to become overwhelmed by anxiety. Which seemed totally counterintuitive until I watched others get older--including myself--and saw how that works.

I guess that was sort of the plot to Top Gun, too. ;)


> I would have assumed that coming to terms with a lack of control was one of the first things a fighter pilot would have to learn to deal with.

I think it wasn't just the lack of control; it was the inability to do anything to regain control. My father was a test pilot; there were plenty of times where his airplane did something unexpected and he lost control. But he always had some option for regaining it. (In the last extremity, he could eject--which he actually had to in one test flight. I noticed that the lack of that option in an airliner was mentioned in the article.)

Also, this was a situation where, as the pilot said, he could easily have kept control of the plane, if only the plane had let him. But the plane was listening to the computers instead of him. To me, if you're going to have highly trained humans in the cockpit, it doesn't make sense not to give them a last-ditch way of telling the airplane to stop listening to the computers and start listening to them.


  > there were plenty of times where his airplane did
  > something unexpected and he lost control. But he always
  > had some option for regaining it. (In the last extremity,
  > he could eject [....])
But that's not really a lack of control. In those cases you're merely being challenged to regain control of the aircraft. You just need to fight harder, think smarter, or exit. In any event you still have control (or at least a sense of control) over your life.

I understand why this issue was so unnerving for the pilot. Heck, any passenger could sympathize: it's why flying can be so scary--total and complete lack of control; little to no knowledge about what's happening and why; you're completely at the mercy of somebody else, or nothing at all.

So, yeah, I get it. I was just thinking that of all sorts of pilots, a fighter pilot would be the last to be unnerved by losing control; unnerved to the point of not wanting to fly anymore. a) Fighter jets are super powerful and super sophisticated and when something goes wrong there may not even be any time to regain control no matter your training or experience; you're always flying at that envelope where the unexpected can and often does happen, so I just assumed you invariably accept that you're flipping a coin every time you take-off. b) I presume combat is one of those things where you either resolve yourself to a lack of control or realize you're not able to cope and move on to something else (if you can, and assuming you can move on emotionally). But I guess I shouldn't presume a modern fighter pilot (esp. from the 1980s) to have experienced the same sort of peril as someone on the ground during a war.

That said, I'm not trying to suggest he lost his mind or something. He responded appropriately in the moment. He was able to let his training and experience dictate his responses. He consciously understands that he was unnerved after the fact because of that lack of control, as opposed to not understanding those feelings and fears. I'm just surprised that those feelings (or at least their severity) were novel to him and changed his perception of flying, not that he was unnerved, per se.


> But that's not really a lack of control.

Right, not a lack of control in the sense that the pilot lacked control of the airliner in this incident. That's my point.

> I presume combat is one of those things where you resolve yourself to a lack of control or learn you're not able to cope.

Lack of control to a certain extent, yes. No matter how good a pilot you are, you can still get shot down. (My dad flew in Vietnam before he became a test pilot, and was shot down once. Luckily, he was recovered.) But you can still fly your own airplane--often even if it gets hit. (My dad got part of his right wing shot off during one mission, but he still was able to make it back to the carrier.)

I think the reason this incident was so unnerving to the pilot was the lack of control over the one thing pilots are supposed to always have control over: that when they move the stick or the rudder, something happens.


> Or maybe with age you just lose the ability to handle that kind of stress.

Or as you age death becomes increasingly less abstract, especially for those fortunate enough to have not experienced a situation that clearly illustrates mortality at a young age. A young fighter pilot probably has less to lose than a middle-aged parent, so it's a lot easier to underestimate the risk.


> His reasoning is simple: how can the plane stall and over-speed at the same time?

It can. It's called the "coffin corner".


I remember reading comments by a U-2 pilot that turns had to be very shallow at height due to the possibility of the inside wing stalling and/or the outside wing entering mach buffet.

!


Link[1] for those who want to know more.

[1] https://en.wikipedia.org/wiki/Coffin_corner_(aerodynamics)


Hadn't heard of that, read the Wikipedia article, fascinating.


Also found these, the official investigation reports from the incident.

https://www.atsb.gov.au/publications/investigation_reports/2...


It's disconcerting that they still don't know for sure what caused this issue, or whether it could happen again.


It is disconcerting. I do take some heart in the fact that there are thousands of safe operational hours on the flight computers in question. That said, I'd hope that behind the scenes, there are engineers poring over data and test units to find exactly why it happened and what needs to be fixed to ensure it can never happen again.


They did that for the investigation and couldn't find any probable cause. At some point you have to admit defeat and accept that the cause will not be found.


We don't know for sure what caused the ADIRU to start outputting its current altitude values tagged as angle-of-attack values, but this shouldn't have caused an in-flight upset. A spike in AOA from 2.1 degrees to 50.625 degrees and back in 3 seconds is not physically possible, and indeed, the flight computers are capable of detecting and ignoring such flagrantly erroneous data, even if the ADIRU's built-in test equipment hasn't announced an ADIRU failure. However, the code that detected anomalous changes in ADIRU data output was a simplistic filter that was vulnerable to a very specific timing of erroneous spikes. It could not handle spikes that were 1.2 seconds (an arbitrary parameter of the filter) apart, and, well, this specific ADIRU failure happened to send erroneous data with that exact timing. The flight computer thus thought that 50.625 degrees was a valid AOA measurement, and reacted accordingly -- by commanding a pitch down with the elevators to avoid the "stall" it thought it was experiencing.
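As a toy model of that failure mode (my own reconstruction for illustration, not the actual FCPC algorithm): a filter that, on seeing a spike, holds the last good value for 1.2 seconds and then trusts whatever arrives next can be defeated by a second spike timed to land just as that window expires.

    HOLD_TIME = 1.2         # seconds the filter rides out a presumed transient
    SPIKE_THRESHOLD = 30.0  # per-sample jump considered physically impossible

    class NaiveSpikeFilter:
        def __init__(self, initial):
            self.last_good = initial
            self.hold_until = None

        def update(self, t, aoa):
            if self.hold_until is not None:
                if t < self.hold_until:
                    return self.last_good         # still riding out the transient
                self.hold_until = None
                self.last_good = aoa              # the flaw: accept the next sample blindly
                return aoa
            if abs(aoa - self.last_good) > SPIKE_THRESHOLD:
                self.hold_until = t + HOLD_TIME   # start masking the spike
                return self.last_good
            self.last_good = aoa
            return aoa

    f = NaiveSpikeFilter(2.1)
    for t, aoa in [(0.0, 2.1), (1.0, 50.625), (2.2, 50.625)]:
        print(t, f.update(t, aoa))   # the second spike, 1.2 s after the first, gets through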

The only issue that hasn't been addressed in the report (which is good reading, https://www.atsb.gov.au/media/3532398/ao2008070.pdf) is that the angle-of-attack value is not directly displayed to the pilots on their flight displays. AOA is measured by the 3 ADIRUs and is one of the most crucial measurements used by the flight computers and their control laws -- after all, whether the wing is stalled or not is a direct function of AOA! AOA value is also critical to upset / loss-of-control / stall recovery but without an AoA instrument, the pilot needs to infer it from other instrument values -- which is nontrivial and also depends on instruments that might be malfunctioning.

Hiding the most crucial air data parameter from the pilots (who are expected to take over when the computers or the air data sensors act up) is a bad design decision. Sullenberger agrees, "We have to infer angle of attack indirectly by referencing speed. That makes stall recognition and recovery that much more difficult. For more than half a century, we've had the capability to display Angle of Attack in the cockpits of most jet transports, one of the most critical parameters, yet we choose not to do it.": http://www.safetyinengineering.com/FileUploads/Situation%20a... . This was a recommendation in the AF447 report as well, " It is essential in order to ensure flight safety to reduce the angle of attack when a stall is imminent. Only a direct readout of the angle of attack could enable crews to rapidly identify the aerodynamic situation of the aeroplane and take the actions that may be required."

Displaying AOA in the flight displays might have helped in this incident as well. Had the pilots seen indicated AOA values spiking immediately preceding the uncommanded pitch-downs, diagnosis and corrective action (which might have included disabling the offending ADIRU) would have been much easier.


No system, human or mechanical or computer, is perfect. Human pilots have crashed more airplanes and are more likely to crash airplanes than automated systems. Human pilots have also saved airplanes when the systems fail. For the foreseeable future, it's probably best to have both, but nothing in life can ever be perfectly safe.


Is there a site somewhere that attempts to catalog

1) instances where the flight computer prevented an incident due to pilot error

2) instances where the flight computer caused an incident

I understand it's not always (if ever) so black & white, and I'm sure there's a better way to break things down. But it'd be interesting to read something that attempts to keep some kind of score.


It's a question that can't really be answered. It's sort of like asking about

1) instances where your alarm clock prevented you from being late to work

2) instances where your alarm clock failed to wake you up

How could you possibly determine how often your alarm clock prevented you from being late? You can certainly tell how often it fails to do its job, but you can't determine how often its success is necessary because there aren't any near misses to detect. With no control group there can be no comparison.

You could count incidents which the flight computer is helpful, but you cannot count incidents that never happened in the first place because of the flight computer.


OTOH, Airbus and Boeing use different systems. Some alarm clocks kick you out of bed, shower you, and drive you to work. Other alarm clocks just pull the covers off and pour cold water on you, but still require you to do the rest.

So you could analyze instances where the latter proved insufficient and try to guess whether the former would have been sufficient.

The bigger problem is probably that there just aren't enough useful extreme incidents to draw any conclusions. Or maybe there are. That's why I'm most curious if anybody has even tried. All I ever hear is that air safety has increased along with an increase in computerization. But that tells me very little. Everything about the aircraft and its ground support has been getting better, not to mention the training of the humans.


It's worth pointing out Qantas is the exception to your point. More incidents on Qantas flights are due to causes outside of the crew's control than due to human error. Even then, Qantas hasn't lost an airframe or had any fatalities since the dawn of the jet age.


> Even then, Qantas hasn't lost an airframe or had any fatalities since the dawn of the jet age.

They do have a record as a safe airline (though Southwest and Ryanair are probably 'safer' given the number of short-haul daily flights they operate), but QANTAS also goes to extreme lengths to maintain that reputation.

For example, in 1999 one of their 747s, reg VH-OJH, overran the runway in Bangkok after aquaplaning. The insurers examined it whilst it lay on a golf course and wrote it off as uneconomic to repair. QANTAS, however, decided to proceed with repairs on its own initiative, to maintain its 'no jet losses' reputation, and spent over $100 million doing so. Probably more than, or just about exactly, what the aircraft was worth at that time.

They actually got another 11 years of service out of that one before it was sent to Marana for storage.


I am a pilot, and my complaint is not the lack of a last-ditch emergency manual mode, but that there are numerous modes (or alternate laws) in the software abstraction of a fly-by-wire aircraft, the plane responds differently to inputs in these different modes, the notification of what mode the system is in has to be inferred, and the pilot is expected to know all of these things.

I think it's hubris and incompetence in engineering that a system would be designed this way. And in fact not all of them have these same deficient behaviors, so this is not inherent to fly by wire or automation. It's a choice made by the software abstraction designers.

The idea that there'd be a mode where advancing a throttle literally does nothing (no increase in power, and no notification that the input was sensed but isn't going to do what you think it should, so that you at least have a clue that some other mode applies) is so totally batshit insane to me. It's a huge betrayal of the physics of flight, which is something pilots understand rather well, almost like muscle memory.

In a conventional plane, throttle advance always increases power. If that doesn't happen, it's an emergency. In a fly by wire plane if it doesn't happen, it almost certainly means the pilot is confused. And that to me is crap design.
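A minimal sketch of the objection (hypothetical mode names and functions, nothing to do with real FBW code): the problem isn't that modes exist, it's that an input can be silently discarded with no annunciation at all:

    # Hypothetical sketch, not real avionics code.
    def handle_throttle_silent(mode, lever_position, current_command):
        """The objectionable design: in some modes the lever input just vanishes."""
        if mode == "THRUST_LOCKED":
            return current_command   # lever moves, power doesn't, and nobody is told
        return lever_position

    def handle_throttle_annunciated(mode, lever_position, current_command, annunciate):
        """Same modes, but the crew at least gets a cue that their input is ignored."""
        if mode == "THRUST_LOCKED":
            annunciate("THRUST LOCKED - LEVER INPUT IGNORED")
            return current_command
        return lever_position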


What a poor design. Pilots should have a hard "manual control" switch that hands control completely over to the pilots.

A friend of mine from childhood became a top expert in formal verification, and got a contract many years back to help Airbus perform formal verification on their control software. I don't know many details, but according to my friend's father, he would never fly on an Airbus jet after this experience.

A pilot friend told a story of some pilot friends of his who were piloting an Airbus in Canada, many years back, who couldn't get the jet to give control back and get out of a holding pattern to land. They had to wake up engineers in France in the middle of the night, who told them to take a hammer to certain fuses or breakers, and let them regain control.

From a UI point of view, the control system and displays in the Airbus are a disaster. For instance, the pilots in the Air France plane that stalled over the Atlantic couldn't figure out that the plane was stalled for several minutes, until it was too late to correct, with all of the displays in front of them.


...Actually, while I'm criticising Airbus, it would be remiss not to mention the A400M incident [1] [2]. Unfortunately, as a military accident the full report was never made available to the public, but based on the available information, failure to load calibration data for the engine computers during manufacturing resulted in the engines shutting down... once they got to the altitude where that calibration data was needed, and found to be absent.

The idea that Airbus would design engine computers not to check the validity of vital calibration data until the aeroplane is in the air, and then respond to that contingency when it is identified by shutting down the engines, is so obscene I think I've usually tried to assume that there must be something amiss about the (very limited) reporting as to the details of the issue.

But considering your words, and the general lack of consideration that Airbus seems to give to these things the more and more I look at accidents involving Airbus craft, maybe they just really are truly bad at this stuff... it's quite disturbing.
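On the calibration-data point above, the galling part is how cheap the obvious defence would have been. A minimal sketch of a power-up validity check (the key names are invented, this is obviously nothing like the actual engine-control code):

    # Hypothetical pre-flight check, not real FADEC/ECU code.
    REQUIRED_CALIBRATION_KEYS = ("torque_calibration_table", "sensor_offsets")  # invented names

    def preflight_engine_data_check(calibration_data):
        """Refuse dispatch on the ground rather than discovering missing data in the air."""
        missing = [k for k in REQUIRED_CALIBRATION_KEYS if k not in calibration_data]
        if missing:
            raise RuntimeError("ENG CTL FAULT: missing calibration data %s, NO DISPATCH" % missing)
        return True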

[1] https://en.wikipedia.org/wiki/Airbus_A400M#Accidents [2] https://arstechnica.com/information-technology/2015/06/repor...


> Safety officials are still investigating how safety checks failed to spot that the calibration data had been deleted.

So I guess the checks for checking the checks of the data failed? You can never have too much redundancy!


>A friend of mine from childhood became a top expert in formal verification, and got a contract many years back to help Airbus perform formal verification on their control software. I don't know many details, but according to my friend's father, he would never fly on an Airbus jet after this experience.

There are safety anecdotes about all of the major airplane manufacturers. If Airbuses were actually unsafe, they would crash more often.

>who told them to take a hammer to certain fuses or breakers

There would be an incident report about this if it happened. No need to rely on your friend's story.


AFAIK there is such a switch: there are switches to disconnect the primary computers on the overhead panel. If those are disconnected, presumably the aeroplane reverts to direct law and flight envelope protections become inoperative.

I might be mistaken though; I'm no pilot and you seem more knowledgeable. The article mentions they were able to regain control of the plane by leaving one of the primary computers disabled. (The end of the article itself has a quote from Airbus suggesting there's always supposed to be a way to get full control; presumably this is what they're referring to.)

I was surprised that they left the other computers enabled, especially given that the article suggests they were worried whether it would happen again; to me, the obvious thing to do would be to disable all primary computers and use direct law. Malfunction of the flight envelope protections seemed the obvious culprit from the moment it was mentioned the sticks were ignoring commands, short of a mechanical failure in the control surfaces themselves.

Certainly AF447 has demonstrated that Airbus aircraft are very poorly designed from a user interface perspective.

The anecdotes you have about people working at Airbus sound highly interesting and would be worthy of blog posts in themselves. Would your friend ever consider writing about these concerns (anonymously even)?

Also, was an accident report ever filed for that holding pattern incident? Can I read about it?


Read this last weekend on the beach. Interesting.

That poor bastard who put the life jacket around his neck at 30,000 feet and inflated it...


To be fair, all the safety briefings I've seen tell you not to do that. ("Do not inflate while inside the airplane" is the line, I believe).


Most people assume that's just a precaution to make it easier to move around the cabin and not a choking hazard!


A massive part of the reason is that if the airplane ditches into the water, the life jacket can make it very difficult for you to escape [1].

https://en.wikipedia.org/wiki/Ethiopian_Airlines_Flight_961


I assume that the pressure differential between sea level and 30,000 feet is going to change how inflated it gets as well
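Back-of-the-envelope (assuming a normally pressurised cabin sitting at a typical ~8,000 ft cabin altitude rather than true ambient at 30,000 ft): by Boyle's law the fixed mass of CO2 from the inflation cartridge takes up noticeably more volume at the lower cabin pressure, and far more if the cabin were actually depressurised.

    # Rough Boyle's-law estimate; pressures are standard-atmosphere approximations.
    P_SEA_LEVEL_KPA = 101.3
    P_CABIN_8000FT_KPA = 75.3      # typical pressurised-cabin equivalent altitude
    P_AMBIENT_30000FT_KPA = 30.1   # only relevant if the cabin loses pressure

    def volume_ratio(p_reference, p_actual):
        """V_actual / V_reference for a fixed amount of gas at constant temperature."""
        return p_reference / p_actual

    print(volume_ratio(P_SEA_LEVEL_KPA, P_CABIN_8000FT_KPA))     # ~1.35x the sea-level volume
    print(volume_ratio(P_SEA_LEVEL_KPA, P_AMBIENT_30000FT_KPA))  # ~3.4x if depressurised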


As I understand it, Boeing planes physically move the cockpit controls to reflect computer control inputs. Would that approach have made a difference in this event?


Maybe; surely more so if pilots resisting that commanded control movement were also given priority.

I very much like how the redundant sticks are /forced/ to stay in sync. What's done to one IS done to the other as well. That's a critical safety feature in the human interface.

I think a complete human override of 'higher level' functions (disabling the autopilot) should require 'two keys' like in WarGames (with the nuke silo). The lower level systems still have to exist, and there should be a way of troubleshooting (graphs probably) their inputs and outputs over time.


IIRC, Boeing 777s (which are FBW) will automatically disengage the autopilot if you start forcing the stick to a different position. The flight envelope protections are implemented by adding a physical resistance to the stick, but can be overridden by using force, which seems a highly intuitive and deferential implementation. It certainly seems a lot more responsible than Airbus's design, both for the above reasons and for the mechanical linkage you mention.
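A crude sketch of that 'force wins' philosophy (the threshold and names here are invented; this is nothing like the actual 777 control laws): the protections push back through the feel system, but enough pilot force always takes over, kicks the automation out, and says so:

    # Hypothetical sketch; threshold and signatures are invented.
    BREAKOUT_FORCE_N = 50.0   # pilot force beyond which the autopilot gives up

    def resolve_pitch_command(autopilot_engaged, autopilot_cmd, pilot_cmd, pilot_force_n, disengage):
        if autopilot_engaged and abs(pilot_force_n) > BREAKOUT_FORCE_N:
            disengage("AUTOPILOT DISCONNECT - PILOT OVERRIDE")  # tell the crew what just happened
            return pilot_cmd                                    # the human is now flying
        return autopilot_cmd if autopilot_engaged else pilot_cmd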


Reminds me of SK751 [1] where an automated system prevented the pilots from powering down a broken engine, resulting in both engines failing and the plane crash landing in a field north of Stockholm.

[1] https://en.wikipedia.org/wiki/Scandinavian_Airlines_Flight_7...


This happened to another Qantas flight in the area, QF71, hence the theory that the high-power VLF transmission station nearby was responsible:

https://en.wikipedia.org/wiki/Naval_Communication_Station_Ha...


I think the second event you're thinking of was a Malaysia Airlines flight from Perth to Kuala Lumpur. https://www.atsb.gov.au/publications/investigation_reports/2...


No, there was an incident several months later with QF71 flying the same route in the opposite direction.

https://en.m.wikipedia.org/wiki/Qantas_Flight_72 See the subheading in the Final Report section



This is unquestionably a very scary situation. Kudos to pilots who got the plane down without loss of life.

However painful this state is (broken autopilot, no full human override), I suspect fly-by-wire actually prevents many more accidents than it causes.


This story reminded me of the adage: "They say that to err is human, but to really fuck up, you need a computer..."


Aren't we always going to have two pilots?

A captain in case the computers fail, another in case the captain fails.

Automation (autopilot) has already been around...?


Don't the pilots have the option of engaging an alternate control law in this case? Or is that not how it works?


On the whole I would still prefer well tested software over a human anytime.


Jesus, that article contains an ungodly amount of annoying padding.

TLDR; Plane makes two steep dives within 3 minutes without apparent mechanical failure or pilot intervention, injuring a number of passengers. Plane manages a safe emergency landing.

Dives were initiated by a flight computer, which could override pilot input, reacting to faulty data. No one died, but some passengers suffered lasting injuries.


Yeah, that was annoying to read.

https://en.wikipedia.org/wiki/Qantas_Flight_72

Root cause: a fault with one of the plane's three "Air Data Inertial Reference Units":

https://en.wikipedia.org/wiki/Air_data_inertial_reference_un...

This error condition then exposed a bug in the aircraft's software which caused the dives.
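For the curious, the ATSB report describes the flight-control-computer side of it roughly as a spike-masking filter that could be defeated by a second spike timed just after the masking window. The sketch below is my illustrative paraphrase (the 1.2 s window is the figure I remember from the report; the threshold, names, and structure are all invented, not the actual algorithm):

    # Illustrative sketch of the spike-masking weakness, not the real FCPC algorithm.
    HOLD_WINDOW_S = 1.2     # how long the last good value is held after a spike is detected
    SPIKE_THRESHOLD = 10.0  # AOA jump (degrees) treated as a spike (invented value)

    def filter_aoa(samples):
        """samples: list of (time_s, aoa_deg). Returns the values the control law would use."""
        used, last_good, hold_until = [], None, None
        for t, aoa in samples:
            if hold_until is not None and t < hold_until:
                used.append(last_good)           # inside the window: spike masked
                continue
            if hold_until is not None:
                hold_until = None
                last_good = aoa                  # FLAW: the first value after the window
                used.append(aoa)                 # is accepted unchecked, so a second spike
                continue                         # ~1.2 s after the first slips straight through
            if last_good is not None and abs(aoa - last_good) > SPIKE_THRESHOLD:
                hold_until = t + HOLD_WINDOW_S   # first spike: hold the last good value
                used.append(last_good)
                continue
            last_good = aoa
            used.append(aoa)
        return used

    # Two spikes spaced just over 1.2 s apart: the second one reaches the control law.
    print(filter_aoa([(0.0, 2.0), (0.5, 2.1), (1.0, 50.9), (1.5, 2.0), (2.3, 50.9), (2.5, 50.9)]))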


Note to mods: I modified the title slightly, as the original was too long. I dropped the word 'psycho' from 'psycho automation' as I felt it had the least impact of any word to substitute or drop from the title.



