Smooth seas do not make skillful automatons

We usually associate technological mishaps with extenuating circumstances: bad weather, mechanical or electronic failure, poor decision making by software or humans. We tend to seek a single, overriding root cause, thinking that if it were isolated and dealt with, system failure would be avoided.

It has been demonstrated time and time again that major accidents are typically the end result of a sequence of smaller incidents. Individually, these incidents are often handled without consequence, but when strung together in rapid-fire fashion they accumulate and amplify into catastrophic trouble. The difficulty is this: humans generally trust the machine until the unthinkable worst-case scenario is joined, already in progress.

If a process is well understood and follows a fixed decision tree, it can be described by mathematics and thereby controlled. In the domain of traditional industrial automation and process control, automatons – simple control mechanisms, intelligent machines, robots, and the like – excel because they are good at:

  • Repeating a programmed sequence;
  • Ignoring or compensating for variations in inputs;
  • Maintaining a steady-state process;
  • Deciding quickly and following a defined course of action.
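As a sketch of the kind of fixed decision tree an automaton handles well, consider a minimal thermostat-style control step. All names and values here are hypothetical, purely for illustration:

```python
def control_step(setpoint, reading, deadband=0.5):
    """Repeat a programmed decision: compare the reading to the
    setpoint, ignore small variations, and hold a steady state."""
    error = setpoint - reading
    if abs(error) <= deadband:  # variations inside the deadband are ignored
        return "hold"
    # decide quickly and follow a defined course of action
    return "heat" if error > 0 else "cool"
```

Run in a loop against a sensor, this is the whole job: as long as inputs stay within the model, the automaton never tires and never errs.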

Smooth seas present little challenge, but automatons don’t do as well when they encounter dark seas, rough roads, or violent skies. As science fiction teaches, there are several methods to increase the stress level and defeat most automatons:

  • Introduce exceptions simultaneously;
  • Remove external references of measurement;
  • Deprive the machine of communication or power;
  • Change the rules of the game entirely.

When too many things go wrong, automatons are lost. In these conditions, humans excel because they can recall prior experience, adapt, infer, extrapolate, and operate on “best guess” information to fill gaps. This is why automating fluid situations like healthcare, combat, emergency response, and others generally falls short.

Transportation is a gray area for automation, rife with outside perturbations and variable sequences that can push automatons over the edge, leaving a mess for the humans who intervene to correct it. Systems with a notion of “traffic control” – external resources available to aid both the machines and humans – have achieved remarkable rates of safe operation overall, spectacular failures notwithstanding.

Short-haul rail systems have seen success in automation by eliminating unpredictability as much as possible. A good example is elevating a train and allowing only pre-programmed stops, such as the PHX Sky Train at Phoenix Sky Harbor Airport. This works well for short routes with specific tasks on a clock: move from station A to station B, open the doors for N seconds, proceed to the next stop. With minimal risk of things like people, cars, or opposing train traffic to present a hazard, the automaton can do what it does best: keep the schedule.
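That clockwork routine can be sketched as a trivial schedule loop; the station names, dwell times, and injectable `sleep` hook below are made up for illustration:

```python
def run_route(stations, dwell_seconds, trips=1, sleep=lambda s: None):
    """Drive a fixed loop: move to each station, open the doors for
    N seconds, proceed to the next stop. The sleep function is
    injectable so the sketch can run instantly in a test."""
    log = []
    for _ in range(trips):
        for station in stations:
            log.append(f"arrive {station}")
            log.append(f"doors open {dwell_seconds}s")
            sleep(dwell_seconds)  # hold the doors, then continue
            log.append(f"depart {station}")
    return log
```

With no people, cars, or opposing traffic in the model, keeping the schedule is just replaying this fixed sequence.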

PHX Sky Train | Phoenix Sky Harbor Airport

Longer rail routes have a rule-breaker: high-speed trains do not stop easily. Even if brakes are applied and work properly, trains often continue rolling for up to a mile. Some say we have the technology for automated trains, but recognizing problems and reacting quickly enough can be challenging even for human operators. With hundreds or thousands of miles of track, often in remote areas, monitoring is difficult and expensive. Trains are also prone to run away due to failure or improper procedure, which this weekend’s rail disaster in Lac-Megantic, Quebec illustrates.

Air travel has become highly automated, with an extensive system of traffic control, monitoring, and communication. We also have the technology for remotely piloted aircraft, appropriate for situations where putting humans in harm’s way is risky and unnecessary. But the sky is an unforgiving place, and the dynamics of larger aircraft mean they don’t always do what they are asked to do immediately by pilots or automated control systems. The theory of “big sky, little plane” usually holds up in level flight out in airspace, but eventually air traffic comes together at an airport where problems occur quickly and tend to accelerate. A great post describing airliner instrument approaches illustrates both the complexity of the tasks a pilot faces and the incredible range of things that must be accounted for in automation and traffic control.

UHF glide slope beacon for ILS

Glide slope antenna array, Runway 09R, Hannover, GER – courtesy Wikimedia Commons

In another tragic example from this weekend, we have the “hard landing” of Asiana Airlines flight 214. One fact that has emerged is that the instrument landing system (ILS) glide slope transmitter serving runway 28L at SFO had been out of service for several weeks during construction. This suggests the pilot was flying a manual approach without an external reference to provide warnings, which should not have been a problem in pristine weather. Early evidence suggests the pilot recognized several problems in progress – low airspeed, insufficient throttle, a bad angle of attack, an excessive descent rate – too late to save the aircraft, but his last-second reaction, barely clearing the seawall, may have averted a larger disaster. One wonders whether an automated approach would have had different results.

The quest for the autonomous car is gathering steam, with tech companies like Google in the mix – but the reality differs from the headlines. Contests like the DARPA Grand Challenge have shown that the basic elements of technology are possible, if not yet repeatable or affordable. Conceivably, with technology like navigation, collision avoidance and intelligent spacing, freeway traffic may benefit from automation, even if the ride is unnerving.

Google Self-Driving Car

Relatively wide-open interstate traffic is very different from rush hour, and completely different from congested neighborhood traffic. Distractions ranging from smartphones to pedestrians, dogs, and cats – not to mention other drivers – abound, and predictability is near zero. This is likely to be an important battleground for technology in the coming years. As Google points out, the navigation problem is solvable with the same technology behind Google Maps, one reason they are investing so heavily. Navigation is only a small portion of the technology challenge, however. We are already seeing a significant increase in warning and avoidance systems, powered by advances in embedded vision that reduce size and cost.

In stark contrast to the other modes of transportation, the highways and byways lack any notion of “traffic control” beyond simplistic stop lights, passive video, and police patrols. There is no overseeing agency that controls and routes capacity, as the air and rail traffic control systems do – the roads are pretty much a free-for-all, with a few toll-collecting exceptions. Services like Inrix are making headway in traffic measurement, but major advances in infrastructure will be needed before cars and roadways can communicate directly, pervasively, and instantaneously. Speed, acceleration, direction, road conditions, visibility, and many other variables will factor into even a straightforward scenario. This is a huge leap from the infotainment and navigation systems currently available. In fact, those toll roads may be the first places we see autonomous infrastructure introduced, since cars can already be identified and tracked using technology in place.

All in all, we have amazingly safe transportation systems today given the volume of traffic that moves daily. We have the technology to do some incredible things in controlled conditions, but the leap in autonomous transportation and other areas will take a much deeper understanding of how to recognize and react quickly and safely to hazards and sudden changes, not leaving out-of-control situations for humans to save themselves.

