Insights from the Bank of England’s and Financial Conduct Authority’s joint report on machine learning
Machine learning (ML) is a sub-category of artificial intelligence (AI) in which computer programmes develop predictive models or recognise patterns from data with limited or no human intervention. In October 2019, the Bank of England (BoE) and Financial Conduct Authority (FCA) published a report outlining the findings of a joint survey conducted earlier in the year, the aim of which was to better understand the current use of ML in UK financial services.
The report acknowledges that the UK economy is increasingly powered by big data and that innovation is dramatically changing the markets the BoE and FCA regulate. It recognises the need for regulators to support the safe and robust use of ML, given that there is currently no regulation specific to ML, or indeed to AI generally. A number of firms consider that regulators setting expectations around best practice for ML use could promote greater deployment.
Benefits of machine learning in business
Two thirds of respondents to the survey are already using ML in their business to support client interaction, business decisions or transactions. ML is most frequently used in back-office operations such as risk management and compliance (e.g. anti-money laundering (AML) and fraud detection), but is increasingly being used in front-office functions for customer engagement (by way of chatbots), credit checks, securities sales and trading, and in general insurance. Firms report already seeing benefits from ML, in improved AML and fraud detection and in overall efficiency gains, and expect significant further benefits to be realised in the coming years.
Risks of machine learning applications
ML can increase a firm's risk profile because ML applications are complex, draw on a broad range of data and operate at scale. Such risks vary depending on the size of the firm and the nature and extent of its activities. Potential risks identified in the survey include:
- The explainability (or lack thereof) of ML models. If the inner workings of a model cannot be understood, then it will be more difficult to validate its design and performance. Increasing complexity of ML methods could make this more of a challenge in the future.
- The difficulties ML models may face when presented with situations they have not encountered before or where some form of human judgment, knowledge or experience is required.
- Potential model drift, which is where model outcomes change over time due to new or different data.
- Biased and/or incorrect data.
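Model drift of the kind described above is commonly monitored by comparing the distribution of a model's recent inputs or outputs against a reference sample from the time the model was built. As a minimal illustration (not drawn from the report), the sketch below implements the population stability index, one widely used drift statistic; the function name, bin count and the 0.2 rule-of-thumb threshold are conventions assumed for the example, not anything the survey prescribes:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index (PSI): a simple measure of how far a
    recent sample's distribution has shifted from a reference sample.
    As a common rule of thumb, values above ~0.2 are treated as a sign
    of material drift worth investigating."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(values):
        counts = [0] * bins
        for x in values:
            # Clamp out-of-range values into the first/last bucket
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Floor each bucket at a tiny proportion to avoid log(0)
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job might compute this statistic on each new batch of scoring data and raise an alert when it breaches the chosen threshold.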
Firms stated, however, that ML does not necessarily present new risks; rather, it potentially amplifies existing ones. 57% of respondents indicated that their ML applications are managed through their existing risk management framework, which may need to be updated to reflect the increasing use and complexity of models.
How to manage machine learning risks
Steps that could be undertaken to manage ML risks include:
- Model validation to ensure the models work as intended.
- Data quality validation to ensure potential issues with data are understood and taken into consideration.
- Alert systems that are triggered in response to unusual or unexpected actions.
- Systems where ML recommendations or decisions must be reviewed by a human before being executed, i.e. “humans in the loop” safeguards.
- Ensuring that individuals at different levels (including senior management) have an appropriate level of knowledge of ML and its potential implications.
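The “humans in the loop” safeguard in the list above can be sketched in a few lines: model recommendations below a confidence threshold are held in a review queue rather than executed automatically. This is an illustrative toy under assumed names and thresholds, not a design taken from the report:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float
    approved: bool = False

class HumanInTheLoopGate:
    """Toy safeguard: only high-confidence model recommendations are
    executed automatically; the rest wait for a human reviewer."""

    def __init__(self, threshold: float = 0.95):
        self.threshold = threshold
        self.review_queue = []

    def submit(self, rec: Recommendation) -> str:
        if rec.confidence >= self.threshold:
            rec.approved = True            # auto-execute: above threshold
            return "executed"
        self.review_queue.append(rec)      # hold for human review
        return "pending review"

    def approve(self, rec: Recommendation) -> None:
        """A human reviewer signs off on a queued recommendation."""
        self.review_queue.remove(rec)
        rec.approved = True
```

In practice the threshold, queue and audit trail would sit inside the firm's existing risk management framework rather than a standalone class.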
The significant benefits of ML should not be understated. This survey was just the first step towards the BoE/FCA better understanding the impact of ML in UK financial services; they will now establish a public-private working group on AI to further the discussion on ML innovation and explore potential policy changes.
It is clear that the financial regulators are closely watching the use of new technologies to ensure that markets continue to work well and deliver good outcomes for consumers. The use of ML will only increase, and the report concludes that firms are best placed to lead the development of these technologies and their integration into business. However, as the use of ML grows and becomes more complex, firms should continue to assess and update their risk management frameworks in response to emerging risks. Firms should also continue to educate appropriate individuals at all levels so that they have the right skill sets to identify and deal with the challenges presented by ML.
Related item: Augmented analytics: the human-AI hybrid