Failures In The New World

by Harold Green

This discussion is a continuation of previous articles dealing with the changes in training and operations brought about by the transition currently underway in general aviation.

The increased capability of today's avionics and aircraft offers tremendous potential for the future of general aviation. At this time, however, we are still confronted with the need to prepare pilots for safe operations under the increased workload these advances require.

Since most general aviation operations are conducted with only one pilot, the workload is much higher for the average general aviation pilot than for the airline folks who have the benefit of two or more pilots, plus a staff to help with flight planning. Does that mean we should resist these changes? Not at all. It simply means that we need to be alert, well trained and knowledgeable.

Following are one instructor’s thoughts on the subject. This is not intended to be a complete dissertation, but hopefully it may spark discussion or thought on the part of others. There will probably never be a final discussion on this subject.

There have been some changes in flight training guidelines put forth by the Federal Aviation Administration (FAA), FAA Industry Training Standards (FITS) being one. As often happens with new concepts, when FITS was first introduced, the standards were so focused on the new goal of systems and cross-country training that they downplayed maneuver-based training. Consequently, new pilots were not properly taught this aspect of flying. Now that this has been recognized, and maneuver-based training has been reinstated as a portion of FITS, flight students are receiving a much better balance of training. Part 121 operators also recognized this, and they, too, are re-including “fly the airplane” training.

The point is that any new thing is likely to come up short in some area and will require further fiddling with the system. That’s why there are such things as beta tests, etc. Obviously, I believe this applies to human systems as well as hardware.

Let’s look at what type of failures can be expected from advanced aircraft avionics.

As we do this, it is wise to remember that in systems as complex as today’s avionics, it is virtually impossible to predict all failures and combinations of failures that could occur. With diligent effort the engineers can predict and correct for the vast majority of them, but there is absolutely no guarantee that all have been found. Only time and experience will find the remainder.

For the really advanced system, several pieces of equipment are mounted remotely from the cockpit. This means there is a need for all of these units to communicate with each other. There is an Aeronautical Radio Incorporated (ARINC) standard for accomplishing this. This standard defines the protocol and signal levels. This means more equipment, more software and more connectors. All of these things contribute to an overall potential failure rate, even though system reliability is high.

For a sense of perspective, consider that even 20 years ago the state of electronic development would not support this level of complexity at an acceptable failure rate. The failures will fall into a few limited categories. Total failures of system elements will generally be identified and dealt with automatically. The “dealing with” portion may consist of shutting down the affected elements and informing the pilot. This is what happens when the red Xs show up on the display, or the screen goes blank (sometimes referred to as the blue screen of death). Failures at this level will be quite reliably detected and displayed. However, bear in mind two things: First, there may still be situations which are not detected by the system and could lead to problems. Second, a failure can occur at any time, and that includes on final to minimums while carrying ice. The pilot had better be ready to react rapidly and accurately.

Some failures create significant deviation from proper indications, but will not be detectable by the system.

Consider the Air France disaster over the Atlantic a few years ago. The situation was created by iced-over pitot tubes, which the crew apparently failed to correlate with aircraft performance. The result was disastrous. A different, but similar, situation could occur in any technically advanced aircraft.

Warning the pilot of inconsistent data is possible, but the difficulty in defining inconsistent is extremely high. This is because there are so many possible interactions in the system, and conceiving all possible erroneous situations becomes a virtually impossible task. Once an unpredicted failure occurs and is reported, a correction to the system can eliminate it. The solution here is for the pilot to be aware of possible inconsistencies between the data presented and aircraft performance, and be prepared with the appropriate actions.

The third general type of failure may be considered a perceived failure, one which, in reality, does not exist at all. Typically this occurs when the pilot starts comparing the data presented by different technologies.

A classic case is executing a VOR approach while monitoring progress on the GPS. The issue here is that the GPS is computing position from satellite signals, while the VOR is measuring bearing from a ground-based radio frequency signal. Often these two do not match.

There is one VOR approach to my home base of Middleton (Wis.) Municipal Airport – Morey Field (C29), which presents an arcing VOR path to the runway. Students comparing GPS with the VOR often want to fly the GPS, even though that is not legal since it is a VOR approach. The VOR approach, like most such approaches, is checked periodically by the FAA and found to meet their standards, so unless there is some extraneous interference, there is no reason to substitute the GPS for the VOR.

In short, the process here is to compare two sources of information and select the appropriate one. Since, in this example, the legal source is the VOR, it should be followed unless there is some reason to believe the VOR is in error. In that case the appropriate action is to execute a missed approach, then attempt a different approach not dependent on the VOR.

In order to be prepared to cope with the situations described above, we now need to add to our training tool kit the subtle, and sometimes not-so-subtle, failures which can occur in today's systems. Further, these failures must include failure on the part of the pilot to recognize that there is a problem.

An example of the latter is when the autopilot disconnects and the pilot is unaware of it. But, in all of this it behooves us to make sure that the pilot can still fly the airplane even under stressful conditions.

Now, given all of the crepe-hanging in the foregoing, what is the answer? In the opinion of this instructor, it means that we need to place even more emphasis on system operations, not only during instrument training, but also in primary flight training. From the get-go, the student needs to be made aware that the system can lie, or be misunderstood, and that being Pilot In Command also includes the responsibility to interpret and use all systems on board the aircraft.

In addition to the classic “OOPS! You just lost your engine,” and “Where ya goin’?” the student should be confronted with the need to identify and react to system failures. This can be a difficult scenario for the instructor to implement, but it can be done with care and knowledge of the system. One way to do this is to place the student in an unfamiliar situation which requires paying attention to a multitude of items.

A possible VFR scenario is for the instructor to alert the student to traffic, while requiring a change in aircraft heading and/or altitude. For IFR, a change in approach type while talking to a controller works very well. The purpose of this exercise is not to embarrass the student, but rather to aid the student in learning how to prioritize while maintaining system awareness and flying the airplane.

As instructors, we need to find ways to cause simulated failures of individual system elements. Sometimes we can do this simply by stating that a specific system element or function has failed. Of course, when applicable, the tried-and-true instrument covers still work well. Some aircraft allow access to circuit breakers, which the instructor can pull. It is more important than ever before that the student be completely familiar with operation of the equipment on board the aircraft. This should include the use of the autopilot to reduce workload when things get rough, and a willingness to simply fly the airplane the old-fashioned way, manually, when necessary.

Willingness to request, and receive, assistance from controllers when appropriate will greatly relieve stress, particularly in Instrument Meteorological Conditions (IMC). All too often pilots seem reluctant to ask for help.

Perhaps indicative of this fact is that the Cirrus parachute system has proven to be a tremendous lifesaver, virtually eliminating fatalities when deployed. Yet, in the majority of Cirrus accidents, pilots are reluctant to deploy the chute. This is a failure of the pilots. Cirrus training emphasizes the use of the chute. Insurance companies even say they would rather buy the insured a new airplane than face their survivors in court after the accident. A pilot’s reluctance to use the chute may be an indication of the difficulties we face as instructors as our world advances around us.

Finally, most have heard the old saw that the three most useless things in aviation are the runway behind you, the altitude above you, and the fuel in the truck. I would add a fourth to this: The equipment in your airplane that you don’t know how to use.

As new students come on board, there is a tendency, particularly among the younger ones, to avoid the VOR or anything that requires interpretation. I have had beginning instrument students who have very little capability with the VOR because of their reliance on the GPS. Before those of us with more experience and longer tenure judge too harshly, we need to ask ourselves how we feel about NDB approaches or tracking to an NDB. We need to overcome this tendency in order to maintain the highest level of safety in our flying. In short, if it’s in the plane, we should know how to use it. Similarly, if it’s in the plane, we should know how to survive without it.

EDITOR’S NOTE: Harold Green is a Certified Instrument Flight Instructor (CFII) at Morey Airplane Company in Middleton, Wisconsin (C29).

Email questions or comments to: or call 608-836-1711.

This entry was posted in Columns, June/July 2015, Pilot Proficiency.