With seizure rates still alarmingly low, and with false positives still afflicting banks’ detection processes, there’s naturally a huge amount of interest in how artificial intelligence (AI) might impact the fight against money laundering and financial crime. So it was great to sit down at CogX 2018 with Marc Fungard, Global Head of Intelligence and Analytics for Financial Crime at HSBC, and hear his thoughts on how new intelligent techniques are changing the game.
Marc began with a really interesting point: “A lot of what you’re looking for in financial crime is essentially pattern recognition”. That makes it potentially well suited to machine learning and natural language processing. It means banks can get much better at picking out what financial crime actually looks like – rather than spending all their time documenting why huge numbers of transactions turned out to be false positives.
That’s because traditional techniques are actually quite ineffective. In transaction monitoring, for instance, Marc noted that despite a huge amount of work, banks convert only about 5 to 10 per cent of transactions into suspicious activity reports. Then only about 10 per cent of those are investigated by the authorities. And with seizure rates running at about 1 or 2 per cent, the number of criminal transactions that end up with law enforcement action is actually tiny.
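As a back-of-the-envelope illustration, if those quoted rates compounded multiplicatively (a simplifying assumption on my part – the figures above don’t say the 1 to 2 per cent seizure rate applies on top of the other two stages), the end-to-end fraction would be vanishingly small:

```python
# Illustrative sketch of the transaction-monitoring "funnel" described above.
# Figures are the low end of the quoted ranges; treating the stages as
# multiplicative is an assumption, not something the article states.

sar_rate = 0.05           # ~5-10% of monitored cases become suspicious activity reports
investigated_rate = 0.10  # ~10% of those reports are investigated by the authorities
seizure_rate = 0.01       # ~1-2% seizure rate at the end of the chain

end_to_end = sar_rate * investigated_rate * seizure_rate
print(f"End-to-end rate: {end_to_end:.5%}")  # -> End-to-end rate: 0.00500%
```

Even at the top of each range the compound figure stays well below a tenth of a per cent, which is the point Marc is making about how little of the original volume results in action.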
Machine learning changes that because “it lets banks take far more advanced cases to law enforcement who can then take more direct action, instead of simply saying ‘here’s a transaction from a couple of months ago which we thought might be suspicious’”. And natural language processing (NLP) helps banks process their vast amounts of data – especially documents like trade reports – more quickly.
Just as in other artificial intelligence fields, the ethical dimension is front of mind in financial crime mitigation. Marc made a point that really stood out, saying “there’s no such thing as unbiased” when it comes to using artificial intelligence. As banks start to look for certain typologies of activity, they can unwittingly introduce biases that take them places they don’t want to go. So banks should be asking not just “can we do this?” but “should we do this – and how?”
Another key point is explainability. Regulators rightly expect banks to explain how they made the decisions they did. But with artificial intelligence, where the underlying processes are often opaque by their very nature, that’s much harder.
Marc’s view on this question was really interesting. He pointed out that any process that relies on large numbers of individuals making decisions has an error rate baked into it because “people will come in and they’ll be bored, or in a bad mood, or their mind will be off-topic”. In contrast, the conversations around artificial intelligence sometimes seem to presume it has to be error free – but “of course it doesn’t, it just has to be better than what you’re doing now”. That’s a really important point to bear in mind.
The good news is that regulators and businesses are creating a space for more open discussions about what techniques should be used, and what role the regulators should have in the process. In an evolving field, that’s a really valuable development.
In the end, it’s always worth recalling that money laundering isn’t a victimless crime. It underpins some of the most serious criminal activity there is, from terrorism to drug trafficking to modern slavery. I thought Marc summed it up really well when he said, “one of the really compelling things about financial crime artificial intelligence use cases is the potential human impact they can have”.
Watch the video of Adam Markson and Marc Fungard at the CogX 2018 session: