Posted on December 15, 2018 at 12:00 PM
Countries with developed AI and cyber capabilities have a clear head start in establishing the control mechanisms needed to provide security for their citizens.
Unfortunately, maintaining that comparative advantage requires a significant, ongoing commitment of resources.
The first of the three specific risks is hidden bias, introduced not necessarily by the designer but by the data used to train the system. For example, if a system learns which applicants to select for a job from a data set of decisions made by human recruiters in the past, it may unknowingly learn to perpetuate racial, gender, ethnic or other biases. These biases may not appear as an explicit rule but may be embedded in the interactions among the thousands of factors the system considers.
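This mechanism can be made concrete with a minimal sketch. The data, the postcode feature and the hire rates below are all hypothetical; the point is only that a model trained on past decisions can reproduce a bias through a correlated proxy, without any explicit rule about the protected attribute.

```python
# Minimal sketch (hypothetical data, plain Python): how a model can absorb
# bias from historical hiring decisions without any explicit biased rule.
from collections import defaultdict

# Hypothetical past decisions: (postcode, years_experience, hired).
# Postcode "A" happens to correlate with a group that past recruiters
# favoured, even at equal experience levels.
history = [
    ("A", 2, 1), ("A", 2, 1), ("A", 1, 1), ("A", 1, 0),
    ("B", 2, 0), ("B", 2, 0), ("B", 2, 1), ("B", 1, 0),
]

# "Train": estimate the hire rate per postcode. The model never sees the
# protected attribute itself, only the correlated proxy feature.
rates = defaultdict(lambda: [0, 0])          # postcode -> [hired, total]
for postcode, _, hired in history:
    rates[postcode][0] += hired
    rates[postcode][1] += 1

def score(postcode):
    hired, total = rates[postcode]
    return hired / total

# Two equally experienced applicants are scored very differently, purely
# because the biased history was encoded in the proxy feature.
print(score("A"))   # 0.75
print(score("B"))   # 0.25
```

A real system would consider thousands of such features, which is precisely why the bias rarely surfaces as anything a reviewer could point to directly.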
The second risk is that, unlike traditional systems, neural networks deal in statistical truths rather than literal truths. This makes it difficult to prove with complete certainty that a system will work in all cases, particularly in situations that are not represented in the training data. This limit on verifiability is a concern in mission-critical applications.
The third risk is that, when learning systems make errors, diagnosing and correcting the precise nature of the problem can be difficult. The path that led to a given solution may be complex, and the solution may be far from optimal. If the conditions under which the system was trained change, its performance can degrade, so the appropriate benchmark is not the pursuit of perfection but the best available alternative.
Risk managers are increasingly inclined to integrate unknown unknowns into their risk calculations, but doing so presumes a firm grounding in the subject matter.
As cyber risk evolved, for instance, many risk managers had the opportunity to become familiar with what the risk actually is, and insurers have had time to develop new insurance products to address it.
The same cannot be said for AI: most risk managers and decision makers have relatively little knowledge of what AI and machine learning are, how they function, how the sector is advancing, or what impact all of this is likely to have on their ability to protect their organisations against the threats that emanate from AI and machine learning.
Risk managers clearly need to become more knowledgeable about the threats AI continues to produce. Some organisations devote resources to developing these systems internally, but few recognise the need to anticipate the associated threats and allocate resources specifically to address them.
Risk managers have a vital role to play in ensuring that management is well aware of the potential threats and in proposing solutions to neutralise them.
The AI world now being created will be kinder to organisations that excel at embracing the technology and anticipating its impacts. Organisations that attempt to maintain a wall between human and machine will be at an ever-greater competitive disadvantage compared with rivals that break down that barrier and use artificial intelligence in every possible way to integrate its capabilities with those of humans. Organisations that can quickly sense and respond to opportunities will capture them in the AI landscape. In the near term, AI will not replace risk managers, but risk managers who use AI will replace those who do not.