Look ahead: The paradox of automation
Britain’s pound sterling plunged from $1.26 against the dollar to a little over $1.18 in two minutes, an extraordinary movement by any normal market standard. No one has been able to explain the plunge satisfactorily. Theories ranged from ‘fat-finger syndrome’, well known by now after the mayhem it is said to have caused in the 2010 ‘flash crash’ on American bourses, to the plain bloody-mindedness of computer systems, to a glitch in the algorithms employed to automate trades as billions of dollars’ worth of currency, stocks and shares change hands in a trice across a connected world.
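How a single errant order might snowball is easy to sketch. Assume, purely for illustration, a ladder of automated sell rules (‘stop-losses’), each of which fires when the price falls below its trigger; the sale itself then pushes the price lower and trips the next rule. The function name, trigger prices and the per-sale impact below are all invented for the sketch, not drawn from any real trading system or market data.

```python
# Illustrative only: a chain of automated stop-loss orders, each selling
# when the price falls below its trigger. Every sale knocks the price
# down a little further, which can trip the next order in the chain.

def cascade(price, stop_orders, impact_per_sale=0.01):
    """Simulate stop orders firing as the price falls.

    stop_orders: list of trigger prices, e.g. [1.25, 1.24, ...].
    Each fired order knocks the price down by impact_per_sale
    (a crude, made-up stand-in for market impact).
    Returns the final price after the cascade settles.
    """
    remaining = sorted(stop_orders, reverse=True)  # highest trigger first
    while remaining and price < remaining[0]:
        remaining.pop(0)          # this stop order fires and sells
        price -= impact_per_sale  # the sale itself moves the market down
    return round(price, 4)

# A 'fat finger' sale nudges the pound from 1.26 to just under 1.25 --
# and the whole ladder of stops then fires, one after another.
print(cascade(1.249, [1.25, 1.24, 1.23, 1.22, 1.21, 1.20, 1.19]))
```

The point of the sketch is only that no single order need be large: the damage comes from many small automated rules reacting to one another faster than any human can intervene.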
The pound has been vulnerable ever since the British Prime Minister, Theresa May, addressed the Conservative party conference a couple of weeks ago. There is plenty of opinion among economists and bankers that the pound deserved to fall, having been “driven to artificial highs by global property speculators and the banking elites acting in destructive synergy, causing serious damage to Britain’s manufacturing base and long-term competitiveness,” as one analyst put it. But that is beside the argument here, which is about whether a system failure or a rogue algorithm destroyed what little credibility the pound had left after coming through the big Brexit test.
The pound’s plunge at the hands of computer systems handling an erroneous human input is part of what is known as the paradox of automation. The paradox applies to a wide variety of tasks that man has left to machines and systems in order to make his life more comfortable. We have deputed even the most complex tasks, from the running of nuclear power stations to the flying of aircraft by fly-by-wire systems, so much so that human beings are out of practice at them. Even in an emergency, when the human mind and hand are supposed to take over for safety’s sake, we may find we are so out of practice that taking the controls is no longer the best solution, as was found in the crash of the Airbus flying from Rio to Paris.
When a relatively inexperienced pilot had to take the controls in an emergency, he is thought to have made the asinine move of pulling into a steep climb beyond 37,000 feet, an altitude at which the wings stop operating as wings, and the aircraft dropped several thousand feet with its nose up in the sky. Fly-by-wire systems are so good these days that the autopilot lets pilots relax right through the trip until it comes to the landing. But when a crisis comes, like ice forming on the exterior of the plane or a run of very rough weather, a human brain and body may have to come into play to work around the problem. It is here that pilots are tested most.
Driverless cars are on the roads today, having logged millions of miles in vast experiments aimed at automating tasks that average humans are not great at. Companies like Tesla and Google foresee a future in which automobiles really live up to their ‘auto’ name. Here again, the problem arises when the unexpected crops up and the car has to pass the controls back to a human. The question to be asked, as much technical as philosophical, is how an artificial intelligence will know when a crisis is at hand.
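One naive way a machine might answer that question can be sketched: the system tracks a confidence score for each of its sensing subsystems and asks the human to take over when the weakest of them dips below a threshold. The subsystem names, scores and thresholds below are illustrative assumptions, not any manufacturer’s actual logic.

```python
# A minimal sketch, assuming hypothetical confidence scores: the car hands
# control to the human when its least certain subsystem falls below a
# threshold. All names and numbers here are invented for illustration.

CONFIDENCE_THRESHOLD = 0.7   # below this, the machine admits defeat
HANDOVER_WARNING_SECS = 10   # lead time the human driver is given

def needs_handover(sensor_confidences):
    """Return True if the car should hand control to the human.

    sensor_confidences: mapping of subsystem name -> confidence in [0, 1].
    The weakest subsystem decides, since the car is only as sure as
    its least certain sensor.
    """
    weakest = min(sensor_confidences.values())
    return weakest < CONFIDENCE_THRESHOLD

# Example: heavy rain degrades the camera while radar and lidar cope.
readings = {"camera": 0.4, "radar": 0.9, "lidar": 0.8}
if needs_handover(readings):
    print(f"Requesting human takeover in {HANDOVER_WARNING_SECS} seconds")
```

Even this toy version exposes the paradox: the threshold has to be chosen so the machine asks for help early enough for an out-of-practice human to be of any use.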
Is a machine capable of determining when it can no longer handle an issue? Did Isaac Asimov leave behind a law of robotics to cover such eventualities? It is posited that the better the automatic systems, the more out of practice human operators will be, and the more extreme the situations they will have to face. The paradox is there for all to see. The better an automatic system gets, the greater the problem, because it breeds incompetence in man. But when an atypical situation arises, the system needs human help, and that is when man may fail, his skills at manual control having gone unused for too long. Welcome to the future!