My mother (Amma, as I call her) has a washing machine that has worked for well over ten years. Last week, it started showing real signs of aging: it became unresponsive and behaved in ways it should not. Although we managed to press some buttons and make it function properly, we knew it was dying. What intrigued me most was that Amma started questioning the quality of its output. Were the clothes actually being washed clean? Was the machine taking in enough water to wash the full load? Were the clothes dry enough after the spin? Who knows, it might have been that way all along; the machine just never showed any signs that would make Amma question its performance. As history has taught us, signs have the power to change belief systems.
A few months ago, BlackRock, Inc., a US-based asset management company (the world’s largest, in fact), announced its focus on the use of data science and its tools (Press Releases, 2017). The WSJ also reported that BlackRock has resorted to using “robots” for stock picking (Krouse, 2017). The “robot” here is Aladdin®, a set of tools backed by data science that helps the company make better decisions on whether to buy or sell stocks – something BlackRock believes will reshape the traditional methods of equity investing (Aladdin, 2017). Mark Wiseman, Global Head of Active Equities at BlackRock, justified the move by arguing that asset managers traditionally use the same techniques and tools over and over again, limiting their ability to generate alpha (excess returns). The new method helps distil unstructured information into more innovative, investable insights.
Now, the problem here is not really that machines are replacing human beings and their jobs. That debate was settled long ago, when the industrial revolution dawned in Europe around the 1800s. It was feared that unemployment would rise with mechanization and automation. In reality, however, it created new jobs, reduced the prices of goods, improved productivity and gradually contributed to global population growth. With further advances in technology, automation slowly stepped into areas that needed critical control, such as nuclear power plants, flight control, high-speed trading and intensive care units.
According to James R. Bright (Bright, 2009), automation changes the skill set required of employees and helps create new opportunities for them. The sophisticated dashboards and controls of automatic machinery call for better training and a better understanding of the process if they are to be operated efficiently. Such a mechanism requires advanced skills for design, build, installation and maintenance. Employees experienced with the manual process can help improve the design by giving feedback, and can make more informed decisions using the automated dashboards.
The same applies to BlackRock. A few portfolio managers will lose their jobs, and the public will hardly spare them any sympathy, as they are well off even without a job. As they go, other people will find jobs supporting the automated process, while BlackRock deploys more trading robots across the world.
Human Being Human
Back in 2012, a similar automated algorithmic trading mechanism cost Knight Capital $465 million in trading losses (Francis, 2017). The loss was the result of a routine error by the IT staff, who deployed new code to all the servers, assuming that every server running the algorithm was identical in configuration.
In reality, the servers were not identical. Neither the IT support staff, who maintained the infrastructure during business hours, nor the operations staff, who monitored the trading, were able to spot the error and stop the incorrect trades. Trading continued unchecked for approximately 45 minutes. By then the firm had lost millions and, most importantly, client confidence and reputation. There must have been signs that were ignored, just as with Amma and our washing machine.
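The kind of configuration drift that undid Knight Capital can often be caught by a simple pre-deployment check. The following is a minimal, hypothetical sketch (the server names and configuration fields are invented for illustration): fingerprint each server's configuration and flag any machine that differs from the fleet majority.

```python
import hashlib
import json
from collections import Counter

def config_fingerprint(config: dict) -> str:
    """Hash a server's configuration so drift is easy to spot."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def find_drifted_servers(configs: dict) -> list:
    """Return the servers whose fingerprint differs from the majority."""
    fingerprints = {name: config_fingerprint(c) for name, c in configs.items()}
    majority, _ = Counter(fingerprints.values()).most_common(1)[0]
    return [name for name, fp in fingerprints.items() if fp != majority]

# Hypothetical fleet: one server still carries an old flag.
fleet = {
    "server-1": {"version": "2.0", "legacy_flag": False},
    "server-2": {"version": "2.0", "legacy_flag": False},
    "server-3": {"version": "2.0", "legacy_flag": True},
}
print(find_drifted_servers(fleet))  # ['server-3']
```

Had such a check run before the rollout, the odd server out would have been flagged for human review instead of silently trading against the firm.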
History teaches us that one’s greatest strength can also be one’s greatest weakness. Along with stability, ease, and safety, automation brought some unprecedented consequences. The better automation becomes at reducing human interference, the less the operator of the machine has to do (Bright, 2009). This, however, reduces the attention, concentration and mental effort demanded of the operator, until something goes wrong. In a study conducted by Linda J. Skitka (University of Illinois at Chicago), participants with a computer aid made more errors in a simulated flight task than participants without computer support (Linda J. Skitka, 1999). So although automation reduces errors from manual intervention, it may create a new class of errors.
Skitka further points out that prolonged use of automatic mechanisms makes operators feel less responsible and less compelled to make an individual effort to understand. They treat the machine as a team member they blindly trust. There are also cases where the operator treats the machine as a decision-making authority and conforms to its demands, even when those demands do not seem right. If someone points out an error, he or she is ridiculed or isolated by human colleagues. How can the machine be wrong? It is interesting that, in all these cases, the machine is treated as a group member, and an alpha at that. The human members, accordingly, exhibit groupthink behaviors.
In one example Skitka cites, 21 of 22 nurses (more than 95%) administered a dangerously high dose of a drug to patients when ordered to do so over the phone by an unfamiliar physician! Similar was the case of the Air France Flight 447 accident, where a fly-by-wire system that had no record of failure in 15 years corrupted the judgment of an experienced pilot.
“Manual control is a highly skilled activity, and skills need to be practiced continuously to maintain them. An automatic control system that fails only rarely denies operators the opportunity for practicing these basic control skills. When a manual takeover is necessary, something has usually gone wrong. This means that operators need to be more rather than less skilled to cope with these atypical conditions.”- James Reason, psychologist, author of Human Error (Harford, 2016)
The paradox of automation
The problem described above has a name: the paradox of automation, or automation bias. It has three strands (Harford, 2016). The Air France Flight 447 incident (Air_France_Flight_447, n.d.) helps us understand them more clearly.
- Because automated systems make operation easier, an operator’s incompetence will not be evident until a situation arises that requires expertise beyond the automatic controls. In this case, the inexperienced co-pilot’s incompetence became apparent only during the crisis.
- As practice diminishes, experts’ skills erode once they start operating automatic systems. In this case, the captain was too tired to respond in time; he had over-relied on the automated system rather than trusting his own judgment.
- The more capable and reliable the automatic system, the worse the situation becomes, because whatever it cannot handle will be unusual and will require an unusually skillful response. The fly-by-wire system had kept the Airbus A330 free of accidents for 15 years and was hence considered very reliable.
What was created to reduce simple errors could create more complicated errors!
“When the algorithms are making the decisions, people often stop working to get better. The algorithms can make it hard to diagnose reasons for failures. As people become more dependent on algorithms, their judgment may erode, make them depend even more on the algorithms. That process sets up a vicious cycle. People get passive and less vigilant when algorithms make the decisions.” – Gary Klein, a psychologist who specializes in the study of expert and intuitive decision-making (Harford, 2016)
Automation errors take the form of omission or commission (Automation bias, n.d.). Omission errors occur when the automated process fails to recognize a problem, as when auto-correct fails to detect a misspelled word. Commission errors, on the other hand, occur when the operator blindly follows the indication from the automated process, ignoring signs or evidence to the contrary, as when the captain of Air France Flight 447 blindly followed the directives of the fly-by-wire system.
Automation bias can be mitigated (Automation bias, n.d.) through simpler, less intimidating dashboards that provide suggestions rather than directives, and by training operators on an automated system that deliberately introduces errors. However, too many checks can increase the operator’s response time and thereby reduce the benefit of automation.
There is a general tendency to push all the work to an automated process, because the automatic process can then be blamed when an error occurs. However, what many users, and the executives who commission the automation, fail to understand is the biggest problem with this approach: the automated process can handle only a limited number of scenarios, because it was designed by IT professionals who are also human. Even machine learning has its limits. Like people, machine learning acquires wisdom only with time and experience.
In the case of automated trading, the trading bots run algorithms designed around certain assumptions and expectations about the market. However, a market is not something that can be fully modeled, even though it has some predictable elements. A market is more like a living organism that at times behaves in ways we can only explain with hindsight. Most automation in trading focuses on menial tasks, such as booking low-risk trades, serving trustworthy clients or, in the case of BlackRock, passive investments that deliver returns in almost any circumstance. Even these tasks, however, have been designed only for certain market conditions. When an exception occurs, such automated processes give unexpected results. Because the market has a mind of its own, exceptions occur more often than one expects, and fixing one exception tends to open another. The whole approach to automation programming in dynamic environments needs to change.
One might wonder as to how to make the best out of automation in such environments.
In such environments, the automated process can be restricted to generating recommendations and then waiting for the operator. The operator should have the option of seeing an explanation of why a recommendation was made. If the recommendation is relevant in the current scenario, the operator authorizes all such trades for that day. Although this involves some manual intervention and initial delay, it reduces the firm’s risk.
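This recommend-then-approve pattern can be sketched in a few lines. Everything here is illustrative and assumed (the signal logic, names and thresholds are invented, not BlackRock's actual system): the automation only proposes a trade with its rationale, and a human gate decides whether it executes.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str      # e.g. "BUY" or "SELL"
    symbol: str
    rationale: str   # explanation shown to the operator

def generate_recommendation(signal: float, symbol: str) -> Recommendation:
    """Toy model: a positive signal suggests BUY, a negative one SELL."""
    action = "BUY" if signal > 0 else "SELL"
    rationale = f"signal={signal:+.2f} crossed the {action} threshold"
    return Recommendation(action, symbol, rationale)

def execute_if_approved(rec: Recommendation, operator_approves) -> str:
    """The system only recommends; a human authorizes each trade."""
    if operator_approves(rec):
        return f"EXECUTED {rec.action} {rec.symbol}"
    return f"HELD {rec.symbol} pending review"

rec = generate_recommendation(0.8, "XYZ")
print(rec.rationale)  # the operator sees why before authorizing
print(execute_if_approved(rec, lambda r: r.action == "BUY"))
```

The key design choice is that `operator_approves` is a human decision injected from outside the automation, so the rationale must be legible enough for that human to question it.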
The firm can also build a red team of experts in the field who were not directly involved in building or operating the automated process. They can review and validate the trades that were handled by automation.
Rebecca Pliske, a psychologist, found that veteran meteorologists make weather forecasts by first looking at the data themselves and only then comparing their forecast with the computerized one to see if they had missed anything (Thomas H. Davenport, 2015). Similarly, operators should have some experience of, and training in, performing the automated activity manually. This improves their understanding of the automated process and helps them question it without bias.
Not two, but one!
According to Thomas H. Davenport (Thomas H. Davenport, 2015), traditional automation promises cost savings but limits thinking to the parameters of the work accomplished today. He argues that partnering, or augmenting, skilled knowledge workers with smart machines or automated processes could expand thinking beyond those parameters. A skilled knowledge worker can see the bigger picture at a higher level of abstraction than the automated process, and can use creativity to turn the findings into innovative solutions. The accomplishments of data science at CERN show many instances of such augmentation.
What is required is a change of mindset from both employers and employees. Businesses should think beyond cost savings, and staff, in turn, should start treating automated processes as partners. The main blind spot is that employees think of automation as a job killer. To save their jobs, employees should either step up beyond the prowess of the automated process, complement it with the intelligence automation lacks, or take part in creating more automatic processes that augment human strength.
A Pinch of Optimism
People never quit driving even though millions die in traffic-related accidents every year. Similarly, the few unfortunate incidents of the past do not scare our kind away from further advancements in automation. This is evident from the fact that the aviation and safety industries are increasingly pushing to automate more of their flight systems and processes (Linda J. Skitka, 1999). The fly-by-wire system has kept our flights safe for a very long time. Moreover, automation bias is a problem of human interaction with automated machines, rather than of the machines themselves. The advantages considerably outweigh the disadvantages.
We have recently seen an upsurge in customer service and self-service mechanisms being replaced by chatbots, which have become increasingly human-like. Such bots allow 24×7 customer service and can schedule menial tasks such as setting up a meeting, booking a ticket or providing weather or news updates (Latest Thinking, 2017). Yet even with their almost-human personas, they lack a certain empathy and warmth. They fail to accomplish what successful customer service executives have done in the past: form a more human customer relationship and explore new business opportunities. A customer service executive needs to be more of a psychiatrist than a bot with a fixed script.
Movements such as the Technocracy movement, the Venus Project and the Zeitgeist Movement claim that, ultimately, mechanization, automation and artificial intelligence will shower social justice, order and peace on our kind. Much skepticism exists in society on this matter.
AI Demigods – The human race has a general tendency to display groupthink behavior and bow down to a superior entity. This superior entity can be a figure of authority, such as a king, priest or dictator, or even an idea such as religion, politics or nationalism. Now the Internet, mobile devices and social media have enslaved us with their ease and their ability to perform tasks.
As mentioned earlier, one of the ultimate goals of creating an AI is to create a just system that can tell us what to do, lead our way, and give us fairness and justice. We would trust the AI the same way we trust banks with our money and PayPal with our credit cards. Such an AI will replace our belief systems, and we will find ourselves following it blindly. Depending on how the AI was programmed and the data it used as reference, it will significantly influence how the human race progresses.
Unemployment – The world population is growing at a rate of about 80 million per year. Traditionally, industries with good earnings employed many people. As technology develops exponentially, the number of employees a high-earning company requires has fallen considerably (compare GE and Google, for example). Moreover, too few new jobs have been created to replace the ones lost, and the advent of AIs is only likely to increase this kind of unemployment. Industry experts have suggested several mitigations: a universal basic income (Galeon, 2017), or taxing automated industries and using the proceeds for the welfare of the unemployed. The question is to what extent any government can support this sort of welfare as the numbers keep increasing. In any case, with a growing population, unemployment can only lead to more resentment and unrest.
However, all this fuss about a Terminator-style authoritarian AI, and the dream of a luxurious socialist utopia where everyone has everything and no one has to work, is only ever going to exist in fiction. Wondering why?
Because I believe in the limitations of our imagination. Human imagination is limited to what humans already know. We are so self-centered that when we wrote the story of how God created us, we wrote: “God created man in his image.” Science fiction imagines humans living across galaxies, and aliens with cockroach heads. The same applies to what people can program. Even when they design AIs, they design them to do something humans can already do, only faster. There is no imagination beyond the limits set by nature.
To create an amazing God-like AI that can imagine beyond humans and the nature around it, humans would have to think like a God. Alternatively, as in “The Hitchhiker’s Guide to the Galaxy,” create an AI that can design better AIs.
Back in the present, as we advance further towards automated systems, the extent of automation bias is bound to increase. For example, self-driving cars eliminate the need to know how to drive. Imagine sitting in a self-driving car as it tries to take you through a wall. Unless that wall is at Platform 9¾ at King’s Cross station in London, you had better take control of the car!
Aladdin. (2017). Retrieved from https://www.blackrock.com/aladdin/offerings/aladdin-overview
Automation bias. (n.d.). Retrieved from https://en.wikipedia.org/wiki/Automation_bias#Automation-induced_complacency
Bright, J. R. (2009). Does Automation Raise Skill Requirements? Harvard Business Review, 85-98.
Francis, R. (2017, April 14). The 7 worst automation failures. Retrieved from http://www.csoonline.com/article/3188426/security/the-7-worst-automation-failures.html
Galeon, D. (2017, February 14). Elon Musk: Automation Will Force Governments to Introduce Universal Basic Income. Retrieved from https://futurism.com/elon-musk-automation-will-force-governments-to-introduce-universal-basic-income/
Harford, T. (2016, October 11). Crash: how computers are setting us up for disaster. Retrieved from https://www.theguardian.com/technology/2016/oct/11/crash-how-computers-are-setting-us-up-disaster
Hoffman, B. (2017, April 02). What BlackRock’s Robots Don’t Know Can Hurt Them. Retrieved from https://www-forbes-com.cdn.ampproject.org/c/s/www.forbes.com/sites/brycehoffman/2017/04/02/what-blackrocks-robots-dont-know-can-hurt-them/amp/
Krouse, S. (2017, March 28). BlackRock Bets on Robots to Improve Its Stock Picking. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/blackrock-bets-on-robots-to-improve-its-stock-picking-1490736002#livefyre-toggle-SB12711645461998504051204583050971352980636
Latest Thinking. (2017, March 09). Retrieved from https://www.cognizant.com/perspectives/the-real-value-of-chatbots-part1?utm_source=Social&utm_medium=Organic&utm_content=&utm_campaign=ThoughtLeadership
Skitka, L. J., & Mosier, K. L. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 991-1006.
Press Releases. (2017, March 28). BlackRock Positions Equity Investment Platform for Future of Active Management. Retrieved from https://www.blackrock.com/corporate/en-us/newsroom/press-releases/article/corporate-one/press-releases/blk-active-equity_US
Davenport, T. H., & Kirby, J. (2015). Beyond Automation. Harvard Business Review, 58-65.