One could debate endlessly the dire warnings on artificial intelligence from Tesla and SpaceX CEO Elon Musk and physicist Stephen Hawking, the former contending that AI could instigate World War III, the latter that it could eventually lead to mankind's demise. But if history has taught us anything, it's this: Despite the social, political and governmental costs, technological innovation cannot be stopped and is occurring at an ever-faster pace.
Consider the democratization of knowledge with the printing press, the industrialization of the textile business with mechanized looms, the upending of the transportation industry with the automobile and, more recently, the ravaging of the retail industry by e-commerce, all prime examples of unstoppable innovation whose collective impact has been profound and lasting.
AI's emergence as a disruptive force will be no different. It will inevitably transform society, but whether to good or ill ends is up to humans.
To argue otherwise is a fool's errand. All progress comes at a price—what Austrian economist Joseph Schumpeter termed "creative destruction"—and, like the business innovations that preceded it, AI will undoubtedly prompt fear and loathing in some corners while inspiring hope, ambition, and market opportunism in others. In the global capital markets, regulators may fear and attempt to govern AI, but they won't be able to thwart its growing adoption.
The financial industry is only beginning its AI journey, with "narrow" AI implementations that can replace human processing on specific, well-defined tasks, as opposed to artificial general intelligence that could do everything. Nonetheless, the rollout of even early-stage solutions in the capital markets raises serious concerns among some market participants and regulators about responsibilities and liabilities should market conditions head south and AI systems lack the judgment to respond suitably.
The current approach to pre-empting potential issues is to rely on human monitoring and oversight of the AI's output. The analogy market participants often use is the aerospace industry's development of the auto-pilot: the system operates automatically but, if needed, the pilot can override the machine and take full control of the plane. This assumes that, in an emergency, human judgment will be better than the machine's. However, as the Air France 447 crash proved, human intervention can make things worse. In that case, the auto-pilot disengaged because the aircraft's airspeed sensors failed, and when the pilots took control, they did so with conflicting information about the airplane's speed; their actions ultimately led to the crash.
If we extrapolate this example to the financial industry, AI solution providers seem to believe that, by handing monitoring over to humans, ultimate responsibility for a failure rests with the financial institution employing the technology. But the picture could become much more complex, and various parties could be held accountable if an AI system gets wonky and any of the following are at play:
- The provider designed the AI system poorly;
- The company responsible for training the AI system did an inadequate job;
- Data providers fed the AI system "fake" news or data that compelled it to react inappropriately (given the vast amounts of data AI systems consume, spotting bad inputs in real time will be nearly impossible);
- The individual charged with monitoring the AI system failed to ensure that it operated according to its specified intent;
- The financial institution green-lighted the implementation of the AI system in the first place.
The difficulty of defining the parameters of responsibility when an AI system goes awry could create a major issue of trust between AI and humans. Ideally, you need AI that can explain to humans how it arrived at its conclusion or recommendation, thereby providing some assurance of the validity of its output. However, that cuts against a key benefit of AI implementations, notably the ability to process vast amounts of data to create models that are beyond the reach of human minds, with results that can be surprising or counterintuitive.
In addition, complex and adaptive AI is not transparent by nature, often having so many layers of analysis within the neural network that it would be impossible for a human to validate the approach. One way to deal with this is to build some level of reporting or an audit trail into the AI, so the system can explain, for example, how it weighted and prioritized the various data provided to it within its model.
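To make the idea concrete, here is a minimal sketch in Python of what such an audit trail could look like: a wrapper that records each prediction alongside its inputs and the weights the model currently assigns to them. The feature names, log file, and toy model are hypothetical illustrations, not a reference to any production system.

```python
import json
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class AuditedModel:
    """Wraps a model so every prediction leaves an audit-trail record:
    timestamp, inputs, output, and the model's current feature weightings."""

    def __init__(self, model, feature_names, log_path="audit_log.jsonl"):
        self.model = model
        self.feature_names = feature_names
        self.log_path = log_path

    def predict(self, features):
        x = np.asarray(features).reshape(1, -1)
        output = self.model.predict(x)[0]
        # Record which inputs the model weights most heavily, so a
        # reviewer can later reconstruct why a decision was made.
        record = {
            "timestamp": time.time(),
            "inputs": dict(zip(self.feature_names, map(float, features))),
            "output": int(output),
            "feature_weights": dict(
                zip(self.feature_names,
                    map(float, self.model.feature_importances_))
            ),
        }
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return output

# Hypothetical usage: a toy signal model over three market features.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 3)), rng.integers(0, 2, 500)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
audited = AuditedModel(model, ["spread", "volatility", "order_imbalance"])
audited.predict([0.1, -0.4, 1.2])
```

A compliance reviewer could then replay the log to reconstruct why a given decision was made, without having to understand the model's internals in real time.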
It stands to reason, though, that putting too much trust in AI could backfire. If a human becomes too confident that the system will automatically and appropriately deal with any new situation, that could create the perfect environment for disaster. Human monitoring of AI operations remains essential: the operator needs deep knowledge of the complex system under his or her control, and of the larger ecosystem in which it operates, to be able to intervene when appropriate or necessary.
To ensure that AI systems implemented in the capital markets do not pose a serious threat to operations or, in the event of a market crisis, create a global domino effect, regulators can take an approach similar to the EU's with high-frequency trading (HFT), which requires clear documentation of the algorithms HFT firms implement. In the case of AI, regulatory guidelines could include elements such as:
- Full documentation of the machine-learning program employed, including the data fed in, outputs, iterations, and outcomes;
- An exhaustive list of the sources of data being fed to the system;
- A clear set of "golden" rules embedded in the system, e.g., that the trading strategy should not exceed a certain market-risk ratio (see the first sketch after this list). The challenge is that the industry would need to determine and abide by a global set of AI golden rules; otherwise, some firms would be at a significant disadvantage when going up against other firms' AI systems that do not adhere to the same constraints;
- Detailed documentation of the circuit breakers and recovery plan;
- A framework for ongoing evaluation. A key concern is AI's potential to self-improve: how does the industry ensure that an AI system is not deviating from its initial intent? One way is to feed it a reference scenario and monitor over time whether the output from that scenario differs from the expected output (see the second sketch after this list). But that only works if humans understand the expected output and if the feedback is clear;
- Certification of operators, to ensure that they have the relevant training and expertise to monitor AI activity.
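On the "golden" rules point above, a hedged sketch of what a hard-coded constraint might look like in Python follows; the 10% market-risk cap, the order structure, and the function names are hypothetical, for illustration only.

```python
# A minimal sketch of an embedded "golden rule": a hard, pre-trade risk
# limit that the AI strategy cannot override. The 10% cap is a
# hypothetical illustration, not an industry standard.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    notional: float  # signed exposure the order would add

MAX_RISK_RATIO = 0.10  # hypothetical cap: exposure / portfolio value

def enforce_golden_rule(order: Order, current_exposure: float,
                        portfolio_value: float) -> bool:
    """Reject any order that would push market risk past the cap,
    regardless of what the model recommends."""
    projected = abs(current_exposure + order.notional) / portfolio_value
    if projected > MAX_RISK_RATIO:
        # The rejection itself should feed the audit trail.
        print(f"REJECTED {order.symbol}: risk ratio {projected:.2%} "
              f"exceeds cap {MAX_RISK_RATIO:.0%}")
        return False
    return True

# Usage: the model proposes an order; the rule layer has the final say.
enforce_golden_rule(Order("XYZ", 2_000_000), current_exposure=9_500_000,
                    portfolio_value=100_000_000)
```

The design point is that the constraint sits outside the model: however the AI evolves, the rule layer, not the model, decides whether an order reaches the market.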
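And on ongoing evaluation, a similarly hypothetical sketch of reference-scenario monitoring: a frozen scenario and its expected output, recorded at sign-off, are replayed periodically against the live model, and deviation beyond a tolerance flags potential drift. All values here are illustrative.

```python
# A minimal sketch of reference-scenario drift monitoring. The scenario,
# expected output, and tolerance are hypothetical placeholders.
import numpy as np

REFERENCE_SCENARIO = np.array([0.1, -0.4, 1.2])  # frozen test inputs
EXPECTED_OUTPUT = 0.35   # score recorded when the system was approved
TOLERANCE = 0.05         # allowed deviation before flagging

def check_for_drift(model_score) -> bool:
    """Re-run the frozen scenario through the live (self-improving) model
    and compare against the output approved at deployment."""
    current = model_score(REFERENCE_SCENARIO)
    drifted = abs(current - EXPECTED_OUTPUT) > TOLERANCE
    if drifted:
        print(f"DRIFT: scenario now scores {current:.3f}, "
              f"expected {EXPECTED_OUTPUT:.3f} +/- {TOLERANCE}")
    return drifted

# Usage with a stand-in scoring function:
check_for_drift(lambda x: float(np.tanh(x).mean()))
```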
While such steps could create a significant additional workload for regulators, AI can also help the industry cope with the current regulatory framework and the vast amount of reporting it requires. From a regulatory standpoint, one benefit of AI is that machines, unlike humans, are unlikely to break the rules embedded within the system. Best-execution requirements could be enforced at the AI level, thereby reducing the reporting required of market participants. In terms of conduct, regulators could also compel the industry to ensure that AI systems always act in the client's best interest first, rather than the institution's. (Remember, robots have no career plans or end-of-year bonuses to aim for!)
Even with best practices in place, AI implementations across capital markets operations will certainly give rise to new and unknown sources of risk. This is new territory, and with a limited pool of technology providers, the interactions of complementary and opposing AI systems could create unpredictable feedback loops.
Given AI's early days in the industry, regulators will have to learn and adapt any new rules as progress is made, much as they did with the adoption of electronic trading four decades ago. But unlike in the past, the goal with AI "intelligence" is not to create an advanced auto-pilot, but rather for AI to become the new co-pilot of the capital markets.