Machine Learning (ML) and, more broadly, Artificial Intelligence (AI) have quickly become everyday vocabulary for those of us who work in finance. AI and its subsets have the power to revolutionise how financial institutions work and have created opportunities to improve many aspects of the financial services value chain. They can make investment predictions smarter through natural language processing (NLP), improve financial monitoring and credit decisioning, help automate key business processes and even transform how talent is hired.
Less discussed, however, is the darker side: the inherent risks and vulnerabilities artificial intelligence can introduce into the system in the pursuit of speed, experience and cost reduction.
Automated facial recognition using ML is increasingly deployed by financial institutions in onboarding journeys for banking customers, providing a smoother and quicker customer experience. Less well known, however, is adversarial machine learning, which allows imperceptible changes to a photo ID to fool an identification system. As the collaborative research team from Delta Capita, the CTO of dragonfly and Raphael Clifford note in their recent paper ‘ML risks in financial services’, “an attacker can maliciously perturb their face image so that the AI system matches it to a target individual. Yet to the human observer, the adversarial face image appears as a legitimate face photo of the attacker”. How do banks protect themselves against something they can’t see?
The same risk extends to the automated processing of documents. Optical character recognition (OCR) is now the go-to technology for processing customer documentation automatically. Yet an adversarial attack can cause the ID number on a legitimate identity document to be read by an AI-based OCR system as a completely different number.
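Neither attack requires exotic tooling. The sketch below shows the fast gradient sign method (FGSM), one well-known way of crafting such perturbations; it is an illustration rather than the method from the paper, it assumes white-box access to a differentiable PyTorch model, and `model`, `image` and `target_class` are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_targeted(model, image, target_class, epsilon=0.01):
    """Targeted FGSM sketch: nudge `image` (a batch of normalised image
    tensors) so `model` leans towards the attacker-chosen `target_class`
    (a LongTensor of labels), capping each pixel change at +/- epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target_class)
    loss.backward()
    # Step *against* the gradient to make the target class more likely,
    # then clamp back into the valid pixel range.
    adversarial = (image - epsilon * image.grad.sign()).clamp(0, 1)
    return adversarial.detach()
```

Because epsilon caps every pixel change, the perturbed image looks unchanged to a human reviewer even as the model’s answer flips.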
Regulatory bodies are quickly waking up to this issue, but the risk is a moving target, with attackers rapidly moving on to the next weakness.
And those are just the deliberate attacks. Next, you must consider the unsightly reality uncovered in many models: unquestionable bias and discrimination in machine learning. This is not a deliberate attack but stems from inherent issues in the data sets used to train the models, and the outcomes can be dire. There have been multiple stories in the news over the past two years, whether US parole decisions using AI to give judges data-driven recommendations that perpetuated embedded biases, or Facebook showing ads for better-paid jobs to white men. We also saw a more recent example in the UK with the ‘mutant algorithm’, as Boris Johnson described it, which disproportionately benefited students from private schools when they were unable to take their exams.
Earlier this year, Algorithm Watch discovered that Google’s computer-vision AI, which automates image labelling, was labelling temperature-check devices as ‘guns’ when held by dark-skinned people but as ‘electronic device’ when held by light-skinned people. Google swiftly resolved this, but how many other biases lie in the underlying data sets that we are not aware of?
Machines alone have no capacity to be biased, but artificial intelligence requires human intelligence, or rather human data, to learn from, and this is where the problem arises. Artificial intelligence is becoming ever more embedded in our everyday lives, but is there enough scrutiny of the outputs it produces? This is where explainability and transparency of the model become increasingly important, and with regulation set to continue tightening in this space, it is critical for organisations to act.
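One accessible starting point for that scrutiny is permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops. A large drop means the model leans heavily on that feature, which is the prompt to ask whether that reliance is justified. The sketch below uses scikit-learn on a synthetic data set; the model and data are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decisioning data set.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and record the average accuracy drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```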
At Delta Capita we have created a solution for this called DC Mint, which helps you gain trust in your AI model. It can uncover underlying bias in your model, help ensure compliance with regulations such as GDPR and SR 11-7, and instil model confidence through an understanding of how the model behaves and works.
Machine learning has a huge capacity to streamline processes, improve human decisions and balance out the unconscious biases we all have. Human decisions are significantly more difficult to investigate and challenge, but a machine’s decision can be reviewed, and the algorithm or training data updated. Awareness and explainability are key: where did the data come from? Has there been sufficient critique of the data sets on which the models have been trained? What are the outputs, and can they be validated and deemed correct, legal and fair?
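As one concrete way to validate outputs, the sketch below runs a simple disparate-impact check, comparing decision rates across a protected attribute against the widely used ‘four-fifths’ rule of thumb. The data is synthetic and the threshold illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a protected attribute and a model's yes/no decisions.
group = rng.choice(["A", "B"], size=10_000)
approved = rng.random(10_000) < np.where(group == "A", 0.60, 0.45)

rates = {g: approved[group == g].mean() for g in ("A", "B")}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common regulatory rule of thumb, not a legal test
    print("Potential adverse impact: review training data and features.")
```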
If we recognise the dark side, explore the potential attack avenues and understand the potential biases, then we can address them. If we do this, machine learning offers a vast opportunity for good.
If you would like to know more about how we can help you with your AI model, please email us at marketing@deltacapita.com.