"If this bias is injected into codes and algorithms it could have a major impact on decisions – from what constitutes fraud to who is granted credit."
From retail banking to capital markets, companies are rushing to take advantage of artificial intelligence.
In the past couple of months alone we’ve seen Metro Bank launch a new AI platform to analyse consumer spending patterns, while JPMorgan has been awarding multiple research grants to dig deeper into the potential of this advancing technology.
Financial services firms have clearly recognised that to succeed in today’s competitive environment they need to explore future uses of AI, whether to help them work more efficiently or to deliver more personalised interactions.
However, there are two points of failure that could put these programmes at risk: a lack of developer diversity and the long-term impact of a workforce reliant on machines.
A dearth of developer diversity
In the AI-centric world of banking, you might expect bias to be a thing of the past. Surely code and algorithms have fewer prejudices than a human being?
The reality, however, is that through AI the industry is inadvertently allowing bias to thrive underground. Recent research suggests that 88 percent of leading machine learning researchers are male, and many of those at the top of their game have been developing for 15-20 years.
The result is that too many developers fall into the same stale demographics. This creates a set of decision-makers who may be unconsciously biased, yet who hold huge power. If this bias is injected into code and algorithms it could have a major impact on decisions – from what constitutes fraud to who is granted credit.
Of course, diversity is not just a social concern. The UK regulator, the Financial Conduct Authority, has made it clear that this is a key supervisory issue and a core part of how it assesses a firm’s culture. The regulator has confirmed that its focus on diversity is driven by the finding that firms with monocultures are significantly more likely to have governance-related issues.
The dangers of a poorly informed response system
Another threat posed by AI is an emerging pattern in banking where fewer individuals have experience of handling a crisis. Imagine a world where traders have been replaced by algorithms and market turbulence turns into a crisis: we would lack the critical skills to know how to respond.
This is because most risk management systems are based on the risks we know today, and AI will not know how to respond to ‘the unknown unknowns’. There is a real danger that this becomes the point of failure from which a bank cannot come back.
Financial services organisations must consider how an AI-driven workforce will affect the size, scope and root causes of regulatory risk. Risk management programmes should focus on developer diversity, as well as on nurturing individuals who can respond to a crisis, to reduce governance-related issues.
Smart companies are banking on AI to build a successful business. However, they need to make sure it builds the future rather than becoming the source of a future downfall.