This is the second of four in the series “Bridging digital capability gaps: an AI perspective”.

New cyberthreats with data proliferation

If you have worked in, or are familiar with, the banking industry, you will know that onboarding courses and training in AML (Anti-Money Laundering), KYC (Know-Your-Customer) and Compliance & Risk mitigation are integral to the work bankers carry out, from tellers to executives. Bank employees and their customers are kept aware of cyberattacks through regular communication about the potential dangers. Nevertheless, the threat to financial institutions is ongoing.

Cybersecurity is a hot topic for risk managers and company leaders. Reports from the Global Risk Institute and the CSFI identify financial crime and cyberattacks/data breaches as the most important risks and challenges financial institutions and risk managers currently face.

“Cyber risk remains a top concern as a result of the pandemic with more staff working from home, more devices accessing digital banking environment and more people shopping online”, says Ms Sonia Baxendale.

Also, in the Banking Banana Skins 2021 report from the CSFI, Andrew Warren notes that “banks are having to implement new tactics and to develop defensible strategies that leverage big data & analytics to help tackle this rapid evolution of financial fraud”.

An article from the IET [1] shows that money laundering is estimated to amount to around 2% to 5% of global GDP, equating to roughly $800 billion to $2 trillion. And IBM found that automation and security AI, when fully deployed, provided the biggest cost mitigation: up to USD 3.81 million less than at organisations without them [2].

Indeed, the proliferation of data has brought opportunities for AI innovation as well as challenges. Recall that IBM reported a few years ago that 90% of all the data in the world had been created in the two years just prior to 2017. Yet it is estimated that around 90% of digital data is never used, and that 90% of data is never accessed again just three months after it is first stored. This is a major problem insofar as poor data quality leads to less-than-optimal decision-making.

Professor David Hand at Imperial College London coined the term “dark data”, observing that missing or overlooked data can be more important than the data we draw on. This lack of awareness, he explains, can unfortunately lead to inappropriate or dangerous actions.

That is to say, as much as companies are eager to deepen client intimacy (understanding customers' needs, customising products and services, mining data to exceed expectations) through the various channels made available, they face real challenges in extracting, depending on whether one stands on the side of science, art or the humanities, the signal from the noise, the needle from the haystack, or the wheat from the chaff.

Machine Learning and processing power

Banks have ramped up their AI capabilities and use AI for trading and investment strategies, risk management, stress testing, capital optimisation, etc.

Insurers are taking advantage of AI tools in various ways: for data-mining purposes, and to streamline processes in areas such as policy administration, data extraction and claims processing. In Life & Health in particular, AI is used to price insurance policies as well as to up-sell or cross-sell investment/wealth management products.

In other industries, such as Manufacturing or Oil & Gas, precision control and monitoring are crucial to ensure the safety and availability of systems. AI is used to monitor and regulate fluctuations in temperature, humidity and other sensor readings in order to prevent human or environmental disasters.

To be sure, the key AI innovations have to do with statistically sound decision-making. Machine Learning (ML), a key subset of AI, teaches a machine how to react to a given type of data. The process is fairly simple, yet it grows complex as more data is added. For a company of a given size, if, for example,

   – you are concerned that your organisation is too far from becoming AI- or data-driven

   – you suffer too many human errors because large volumes of data have made predictions, optimisation and insights difficult

   – you have trouble developing and deploying models into production and adding value to your business

then you may need some form of ML.

ML can automatically detect unusual or irregular activities in systems and applications, identify email spam or malware (from a virus definition), and spot fraudulent credit-card transactions (from past occurrences). It does so by learning from labelled data sets: examples annotated with the correct classification. This particular technique is called supervised learning, and it relies upon large amounts of training data to deliver on its promises.
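As a minimal sketch of the idea, assuming scikit-learn and entirely synthetic data (the transaction features and labels below are hypothetical, not drawn from any real system), a supervised classifier learns from labelled examples and is then scored on held-out ones:

```python
# Supervised learning sketch: labelled (synthetic) transactions in,
# a fraud/not-fraud classifier out. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is a transaction: [amount, hour_of_day, distance_from_home_km]
X = rng.normal(loc=[50, 14, 5], scale=[40, 5, 10], size=(5000, 3))

# Synthetic labels: larger, later, more distant transactions are more
# likely to be labelled as fraud (label 1).
risk = 0.01 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * X[:, 2]
y = (risk + rng.normal(scale=1.0, size=5000) > 3.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The same pattern underpins spam filters and malware detectors; only the features and the labels change.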

At the other end of the spectrum there is unsupervised learning, used to produce descriptive models, which operates without any labelling of the data. Here, the algorithm groups data into “clusters” that exhibit similar behaviours.
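By contrast, a clustering sketch needs no labels at all. Assuming the same scikit-learn setup and made-up numbers as above, k-means simply groups transactions that behave alike:

```python
# Unsupervised learning sketch: no labels, just grouping by similarity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Two synthetic behaviour patterns: everyday purchases vs. large transfers.
everyday = rng.normal(loc=[30, 2], scale=[10, 1], size=(400, 2))
large = rng.normal(loc=[900, 15], scale=[200, 5], size=(100, 2))
X = np.vstack([everyday, large])  # columns: [amount, distance_km]

# KMeans assigns each row to one of two clusters; no ground truth is used.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X)
print(np.bincount(kmeans.labels_))  # sizes of the discovered clusters
```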

There are a number of ML techniques. One should bear in mind, however, that ML intrinsically carries a certain rate of error. If the training data is biased, the result will be an AI system that makes unfair decisions.

A Survey on Bias and Fairness in Machine Learning (Mehrabi et al., 17 September 2019) identifies up to 20 possible statistical failure modes. Accuracy measures how large a system's percentage of correct answers is, while transparency aims to enhance accuracy, reliability and overall performance.
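To make the accuracy/fairness tension concrete, here is a small synthetic illustration (the “protected group” and the bias injected into the predictions are entirely hypothetical): a model can post a high overall accuracy while approving one group far more often than another, a gap that a simple demographic-parity check exposes:

```python
# Sketch: accuracy alone can hide unfairness. Compare overall accuracy
# with the approval rate per (hypothetical) group.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1000)   # hypothetical protected attribute
y_true = rng.integers(0, 2, size=1000)  # 1 = should be approved

# A biased model: it overrides the truth with "approve" (1) far more
# often for group 1 than for group 0.
y_pred = np.where(rng.random(1000) < 0.1 + 0.3 * group, 1, y_true)

accuracy = np.mean(y_pred == y_true)
rate_g0 = y_pred[group == 0].mean()
rate_g1 = y_pred[group == 1].mean()
print(f"accuracy={accuracy:.2f}, approval g0={rate_g0:.2f}, g1={rate_g1:.2f}")
```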

This can give rise to issues such as adverse or discriminatory impact in the decision-making process. To help address this challenge, Deep Learning (DL), a type of ML, creates models that simulate the way neurons interact in the brain. It departs from traditional ML approaches in that it does not require as much manual input (such as hand-crafted features) from humans. More importantly, it is meant to produce more accurate results.
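As a toy illustration of the neural approach (a production DL system would use a framework such as PyTorch or TensorFlow, far more data and much deeper architectures; this sketch leans on scikit-learn's small multilayer perceptron purely for brevity):

```python
# Deep-learning-in-miniature: a small multilayer perceptron, a simple
# stand-in for "neurons interacting". Synthetic data, illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of 32 units each; the network learns its own
# internal features rather than relying on hand-engineered ones.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                    random_state=0)
net.fit(X_train, y_train)
print(f"test accuracy: {net.score(X_test, y_test):.3f}")
```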

From the seminal work on ML of the French mathematician Adrien-Marie Legendre in 1805 to Geoff Hinton's team at the University of Toronto achieving record accuracy in the ImageNet contest in 2012, it is this breakthrough in DL, plus the proliferation of data, coupled with improvements in hardware (computing power and storage), that has finally turned AI into the disruptive technology it is today.

Indeed, AI innovations have only been made possible by exponential increases in processing power. The big shift occurred when the capabilities of Graphics Processing Units (GPUs), the high-performance chips running in video-game consoles, were harnessed to serve computation-hungry DL algorithms. Scaling up AI by training these massive models then requires state-of-the-art hardware and advanced physical infrastructure.

Although these hyperscale computing capabilities could only be built by deep pockets in the tech industry, dominant players have made them available to the wider research community. The major actors are also willing to make their complex models accessible to SMEs, civil society and government, all in the name of more innovation, collaboration and transparency, notwithstanding issues such as market competition and algorithm misuse. On the subject of trustworthy deployment, Dr Adrian Weller, Programme Director for AI at the Alan Turing Institute, identifies three requirements as a framework that aims to:

  • Identify requirements: engage with users, practitioners and stakeholders. What do people need? What are their concerns?
  • Build requirements: theoretical and technical foundations for building requirements into AI systems.
  • Check requirements: enforceable governance that is legally sound and technically feasible.

Thus, there are various potential challenges organisations should consider when formulating their AI strategy. With every rising tide there are also troublesome undercurrents, so the saying goes. From the algorithmic trade-off between accuracy and fairness, to the affordability of computing power, to the quality of data, AI innovations are no exception.

In closing

Cybersecurity is one of the biggest threats organisations have to tackle in the new world of remote and online work. These threats are increasing in number and sophistication, with no signs of abating. Yet intelligent automation provides a tool for mitigation.

This piece has provided an overview of the challenges and issues that arise from the implementation of AI. However, neither ignoring the risks nor putting off advanced technologies is a sensible option in this new competitive environment, where business ecosystems are forming, with partnerships and collaboration growing in popularity, and where urgent calls to foster responsible AI are becoming ever more pressing economic and societal needs.