Technology could guarantee history repeats itself… unless we remove the bias

As the Black Lives Matter movement continues to dominate global headlines, a more subtle form of discrimination is also under scrutiny. According to analyst firm Gartner, the adoption of artificial intelligence (AI) technologies by businesses increased 270% between 2015 and 2019. And there is a clear reason why: if the process of analysing information and arriving at a decision can be sped up, there are efficiency gains to be made, competitive advantage to be won and profit to be earned. However, those that choose to utilise such technologies must be aware that, without the right safeguards in place, their output can be discriminatory, or can lead to discrimination.

If prejudices can be found within the substantial datasets upon which such technology is trained, or within the people who train it, then unless checks and balances are put in place to spot and neutralise those prejudices at the outset of a technology's development, the product's output will reflect them.

Prejudice in, prejudice out

Predictive crime mapping provides a clear example of the issue (see our previous discussions regarding the company Clearview). Since as early as 2012, police forces across the UK have been using AI to help predict where and when specific types of crime will happen, in order to inform decisions on officer resourcing and deployment. The technology uses data from historical police records to create hotspots showing where officers should patrol and when. The critical question is: exactly what historical data is being used to inform this technology and its ultimate decisions?

According to official Gov.UK figures, between April 2018 and March 2019, there were “four stop and searches for every 1,000 white people, compared with 38 for every 1,000 black people”. If the data upon which predictive crime mapping technology has been trained includes historical reports of suspected crimes, or records of where stop and search has been carried out, the output (where to send police officers) will inevitably recommend areas with higher minority-ethnic populations. The technology will reinforce and entrench the racial bias. Prejudice in, prejudice out.
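The feedback loop can be illustrated with a deliberately simplified sketch. The area names and figures below are entirely synthetic assumptions, not real police data or any force's actual model: a naive "hotspot" approach that ranks areas purely by historically recorded incidents will keep directing patrols to the most-policed area, and those extra patrols generate more records there, widening the original disparity even though the underlying offence rates are identical.

```python
# Purely illustrative: synthetic numbers, hypothetical areas, not a real policing model.
true_offence_rate = {"area_a": 0.05, "area_b": 0.05}  # identical underlying rates
recorded_incidents = {"area_a": 38.0, "area_b": 4.0}  # skewed historical records
patrols_per_round = 100

for round_number in range(5):
    total_recorded = sum(recorded_incidents.values())
    for area, count in list(recorded_incidents.items()):
        # "Model": allocate patrols in proportion to what has been recorded so far.
        patrols = patrols_per_round * count / total_recorded
        # More patrols in an area means more offences detected and recorded there,
        # even though both areas offend at exactly the same underlying rate.
        recorded_incidents[area] += patrols * true_offence_rate[area]
    print(round_number, {a: round(c, 1) for a, c in recorded_incidents.items()})
```

Each pass widens the gap between the two areas' records, which is exactly the self-reinforcing effect described above.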

Commercial insurer impact

The negative impact of biased AI decision-making could be significant for companies when the technology is applied in a commercial context, and the insurance sector is not exempt. As the adoption of AI solutions to assist with different parts of the insurance cycle becomes increasingly prevalent, could, for example, the application of AI to assist with applicant profiling and the underwriting of risk reinforce an inherent prejudice against a particular minority? Consider technology trained on data showing that transgender individuals are at increased risk of suicide, which then discriminates against those individuals by automatically applying an increased health or life insurance premium.

The human influence

At the heart of the issue is that a) the datasets used for training AI technologies have been created and influenced by humans; and b) the training of the technology is conducted by humans. And humans are inherently biased. Our thinking is shaped by our environment, authoritative influences and past experiences, which leads to hard-wired and often unconscious cognitive biases. Compounding this issue, diversity amongst those who create AI technologies is drastically lacking and minorities are hugely underrepresented: just 18% of computer science degrees go to women and only 11.9% of data science industry workers are black.

If left unchecked, the biases inherent in company data and in the data scientists themselves will find their way into the output of AI technologies, and that could have a significant impact upon already disadvantaged consumers, perpetuating inequality.

So what can be done? How can we begin to minimise and mitigate human prejudicial bias within the AI model? How do we stop technology from exacerbating the prejudice and bias of days gone by?

Just as has happened within the sphere of HR and employment, the duty now falls on governments to establish legal frameworks that oblige companies to take steps to eradicate bias when processing data and developing AI technologies. For both Europe and the UK, that now appears to be the direction of travel.

Europe: existing guidelines and regulations

The European General Data Protection Regulation (GDPR) and its accompanying guidance go some way towards obliging data scientists to build AI models that seek to prevent discrimination. Specifically, Recital 71 of the GDPR calls for the prevention of discrimination when engaging in statistical data processing and profiling techniques, and provides that data controllers should take measures to prevent ‘discriminatory effects on natural persons’.

Supplementing this, the European Union High-Level Expert Group on Artificial Intelligence has published guidelines for AI model building which set out seven key requirements to help ensure that AI models are lawful, ethical and robust from both a technical and social perspective. They require:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination, and fairness
  6. Societal and environmental well-being
  7. Accountability.

More is to come from the EU following the appointment of Ursula von der Leyen as President of the European Commission in December 2019. She stated an intention to put forward legislation for a coordinated European approach to the human and ethical implications of artificial intelligence within her first 100 days in office. The first step towards that was the publication of the “White Paper on Artificial Intelligence: a European approach to excellence and trust” in February 2020.

UK: an anti-discriminatory framework in development

In the UK, anti-discrimination legislation, notably the Equality Act 2010, offers individuals protection from discrimination, whether it is generated by a human or by an automated decision-making system.

However, the ICO has expressly recognised how flaws in training data can result in algorithms that perpetuate or magnify unfair biases, and is now taking steps further via the development of an ‘AI Auditing Framework’. ICO investigations have so far focussed on three broad technical approaches to mitigating discrimination risk in AI models (a minimal illustration follows the list):

  1. ‘Anti-classification’ – excluding protected characteristics from consideration in data models (e.g. sex, religion, race).
  2. ‘Outcome and error parity’ – inspecting the model to understand how it treats different groups, and ensuring it produces equal rates of positive or negative outcomes, and equal error rates, across those groups.
  3. ‘Equal calibration’ – calibrating AI models to ensure their estimation of the likelihood of something happening matches the actual frequency of the event happening.
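As a rough illustration of the second and third approaches, the sketch below checks approval rates, error rates and calibration per group on a handful of hypothetical scored decisions. The column names and figures are assumptions made up for the example, not the ICO's methodology or any real dataset; anti-classification, the first approach, would simply mean excluding the protected characteristic (and close proxies for it) from the model's inputs.

```python
import pandas as pd

# Hypothetical scored decisions (illustrative values only):
# 'group'    - a protected characteristic
# 'score'    - the model's estimated probability of a good outcome
# 'approved' - the model's binary decision
# 'outcome'  - what actually happened
df = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "score":    [0.9, 0.7, 0.4, 0.2, 0.8, 0.6, 0.3, 0.1],
    "approved": [1,   1,   0,   0,   1,   1,   0,   0],
    "outcome":  [1,   1,   0,   1,   1,   0,   0,   0],
})

# Outcome parity: are approval rates similar across groups?
print(df.groupby("group")["approved"].mean())

# Error parity: are mistakes (decision != actual outcome) spread evenly across groups?
print(df.assign(error=df["approved"] != df["outcome"]).groupby("group")["error"].mean())

# Equal calibration: within each group, does the average predicted score
# track the actual frequency of the outcome?
print(df.groupby("group")[["score", "outcome"]].mean())
```

Real audits involve far larger samples and statistical testing, but the same three questions sit underneath them.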

How can we start to make a difference?

All eyes should be on the regulatory developments coming out of the ICO and the EDPB in the coming months. In the meantime, there are some effective practical steps organisations can take in order to mitigate AI bias:

  • Ensure all technical staff are educated on cognitive bias and how best to combat it
  • Seek to ensure their cultural values are inherent to any AI projects
  • Ensure as diverse a tech team of data engineers and data scientists as possible
  • Utilise data from a wide variety of sources
  • Ensure data collection is not too selective (a simple representativeness check is sketched after this list)
  • Ensure the data input is strong enough to minimise subjectivity
  • Detect and mitigate unwanted bias in AI models
  • Train AI models to ensure bias is identified and removed from output.
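One simple starting point for several of the bullets above (broad sourcing, non-selective collection, bias detection) is to compare the make-up of the training data against a reference population before any model is trained. The sketch below does this with hypothetical figures; the group labels, counts and the 20% tolerance threshold are assumptions for illustration, not a prescribed standard.

```python
# Hypothetical training-set composition vs. a reference population (made-up figures).
training_counts = {"group_a": 8200, "group_b": 1400, "group_c": 400}
population_share = {"group_a": 0.70, "group_b": 0.20, "group_c": 0.10}

total = sum(training_counts.values())
for group, count in training_counts.items():
    observed = count / total
    expected = population_share[group]
    ratio = observed / expected
    # Flag groups whose share of the training data is more than 20% away
    # from their share of the population the model will be applied to.
    flag = "UNDER-REPRESENTED" if ratio < 0.8 else "OVER-REPRESENTED" if ratio > 1.2 else "ok"
    print(f"{group}: {observed:.1%} of training data vs {expected:.1%} of population -> {flag}")
```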


The Kennedys approach

As we have written about previously, at Kennedys we are actively involved in the creation and testing of solutions that apply predictive analytics, natural language processing and machine learning to allow our clients to analyse insurance policy wording, predict legal case outcomes, make quicker and better decisions on liability and settlement offers and, as a result, use lawyers less.

Key to this is our diverse and representative team of data scientists, experts in their field from world-renowned universities, who are also alive to the issue of bias and take a proactive approach to identifying and combating it. They ensure the data we use is pre-processed, truly representative and balanced as we pursue our ambitious plans for technological innovation. Our code of ethics helps ensure bias is eliminated to the greatest extent possible, so that we can continue to make a difference for our clients.

Comment

As long as AI models are created by humans and trained on data collected by humans, they will inherit human prejudices. Even before legal frameworks come into play, there are strategies that companies utilising AI technology can adopt to mitigate such prejudices and unwanted bias in their AI models, not only to protect their profits but also to protect their people.
