
Effective discharge of risk professionals’ responsibilities using AI

As the financial services sector deploys artificial intelligence (AI) more widely across its functions, risk and compliance professionals will face increasing challenges in discharging their responsibilities unless they understand the technology, an industry expert said.

A recent paper, titled ‘Artificial intelligence in banking and risk management: keeping pace and reaping benefits in a new age of analytics’, jointly published by analytics provider SAS and the Global Association of Risk Professionals (GARP), showed that AI is maturing as a tool and that 81 percent of risk professionals have seen benefits from using AI technology.

The paper, based on a survey conducted earlier, outlined the benefits of using AI based on responses from risk professionals as well as the challenges that come with the deployment of this new form of technology. These benefits include improvements in process automation, credit scoring and data preparation. Respondents also reported benefits seen in model validation, calibration and selection.

The paper largely focused on how the risk management function can use AI to improve the outcome and quality of risk analytics as well as to offer efficiency and productivity gains. What is also striking about the survey results is that they highlighted the need for risk professionals to understand AI technology and to have the necessary skillsets to discharge their risk management responsibilities more effectively.

Risk & compliance pros need to understand the technology

If the business is using AI, for example, to identify and target certain segments of the business or develop tailored products, risk and compliance professionals need to understand the technology, how these tools are used and the risks associated with them, said Gary Mellody, financial services risk leader, ASEAN at EY in Singapore.

“Unless you understand the new technology, it is going to be increasingly challenging for you to discharge your responsibilities as a risk or compliance professional”, Mellody said.

Mellody also noted that “over time, the risk function will become smaller and the skills required of risk professionals will have a far greater technology component than in the past.

“The demand will be for people who can bring domain knowledge, experience in understanding risk management and the business as well as an understanding of the technology that is necessary to meet or solve business problems”, Mellody said.

Upskilling and retraining required

The increasing use of AI in the financial services sector has revealed a dearth of skilled practitioners who are able to implement and maintain the use of AI, according to the SAS/GARP paper. This will require financial institutions to look into retraining and upskilling their existing workforce to ensure there is sufficient talent to enable the technology, according to Mellody.

Regulators also need to be able to keep up with the new AI technology which financial institutions are adopting, and so an open dialogue across all industries—not just within the financial services sector—will benefit all parties, said Terisa Roberts, Global Solutions Lead, Credit Risk Modelling at SAS in Singapore.

“We have seen regulators asking for new technology to be tested in the sandbox environment. There is also a need to put in place the necessary training given the lack of talent and skills in the field of AI”, Roberts said.

Choosing the right AI technology

Artificial intelligence is a catch-all term encompassing a broad range of technology including machine learning, robotic process automation and natural language processing, among others. This breadth can make choosing the right technology a challenge.

“Because the [AI] technology is so varied, it is important for risk functions to know clearly what problems they are trying to solve and then choose the right technology to solve those problems as opposed to choosing the technology and trying to find applications for it”, Mellody said.

Roberts also emphasised the importance of identifying well-defined business problems and of making sure the right data is available to make AI a useful technology.

Mellody pointed out that AI technology is not new but has evolved, particularly as computing power has grown. Using AI to best advantage requires vast amounts of data, large-scale data processing and storage capability, and strong algorithms to ensure accuracy in analytics and prediction.

How AI provides value to risk professionals

“Risk professionals can benefit from AI by combining their expertise with data screening to determine the areas where AI can provide genuine value”, Roberts said.

“Implementing AI effectively involves more than algorithms. Risk professionals will do well if they consider data management, the governance processes around it and the deployment of models”, Roberts said.

The broad range of AI capabilities, and the access to far more data points for analytics that AI enables, will lead to better customer experiences and new financial products thanks to better insights into what customers want, according to Roberts.

“We will see more data-driven decisions made using AI in risk management,” Roberts said.

“For risk professionals, one of the key benefits of using AI is the ability to derive insights about risks which were previously unattainable, or unavailable in a timely or real-time fashion, allowing them to make proactive decisions”, Mellody said.

“AI can assist risk professionals to gain insights which they previously were unable to, for example, around credit model performance and cyber risks,” Mellody noted.

Biggest impediments: availability of data and data quality

The biggest impediment to reaping the benefits of AI technology lies in the availability of data and data quality, according to Mellody. “Traditional market participants such as the large global banking groups face significant challenges because of their greater dependence on legacy infrastructure, which tended to be product-specific or siloed in nature,” Mellody said. “Whilst older infrastructure is in the process of being upgraded, the ability to rapidly adopt AI is still impaired,” he added.

“Traditional market participants have massive challenges around data quality and data acquisition,” Mellody said.

New participants such as those offering digital banking face fewer data quality challenges because they can build data quality controls into their business models from the outset, without the legacy systems problems faced by traditional financial institutions, Mellody said. Even smaller, regional institutions can move in a more agile way because they do not have the same legacy challenges, he said.

Unintended consequences

There are unintended consequences of using AI, however. For example, the opaqueness of AI algorithms is often seen as an obstacle to adoption in the financial services sector because of the difficulty in interpreting them, according to Roberts.

“The algorithms can lack transparency and interpretability for compliance purposes. It is difficult for regulators to understand the algorithms, so it is difficult for them to regulate financial institutions that use AI. Transparency and interpretability are some of the biggest challenges for financial institutions. In other industries we see a faster adoption of AI but in the financial services sector because the model is not so transparent, adoption of AI is slower”, Roberts said.

Another unintended consequence is that the use of AI may see the emergence of new risks.

“For example, as the value of data increases, new risks may materialise. Other new risks include potential data breaches and cyber threats. All of these are risks which the risk management function has to be aware of. It is important for risk professionals to be aware of the tangible benefits [of using AI] and the challenges associated with the risks, so that they can then look at AI as an option to address specific business problems”, Roberts said.

This article was authored by Patricia Lee, Chief Correspondent in Banking and Securities Regulation in Asia for Thomson Reuters Regulatory Intelligence
