The Credit Risk Systems of Most Banks Are Not Fit for Purpose
As COVID-19 locked down many economies and financial markets erupted in turmoil, bank credit risk departments were faced with a deluge of urgent questions, some of them existential. How large is our direct and indirect exposure to oil and gas companies? For how long can airlines service outstanding credit lines? Will our commercial property portfolio hold up? Under what conditions will we consume our capital buffer?
Confronted with a raft of time-critical analyses, many credit risk departments (re)discovered an unsettling truth: their credit IT and data infrastructure are not up to the job. Credit systems at many banks are monolithic and inflexible, with complex and poorly constructed links to source systems. The data they contain is often inaccurate, incomplete and out of date.
As a result, completing even basic sensitivity analyses at a client or portfolio level can be a major headache, requiring extensive manual work. These delays can undermine decision-making and prevent decisive action.
The problem, however, is not new. Most banks experienced similar problems in the wake of the global financial crisis. So, why haven’t they managed to fix their credit risk infrastructure?
Too Hard To Fix?
Many banks have relegated their legacy credit risk infrastructure to the “too-hard-to-fix” bucket. Credit risk systems tend to be old and have evolved to comply with successive waves of regulatory requirements over a long period. The resulting legacy environment has become a Gordian knot that is simply too costly and too risky to untie. Credit risk infrastructure can contain thousands of daily data feeds, dozens of different processing environments and millions of lines of code.
However, banks have become adept at workarounds to compensate for poor credit infrastructure. The low cost of labor has allowed banks to deploy large numbers of offshore staff to perform manual analyses and data remediation. In the short run, such an approach was easier and cheaper than replacing existing infrastructure. In the wake of the most recent crisis, however, this approach seems like a false economy.
The upside of sorting out the credit infrastructure mess looks more compelling than ever. The quality and efficiency of banks’ credit risk decision-making will improve across the board, allowing them to manage their capital more effectively. They will be in a better position to proactively manage clients, support the growth strategies of the front-line businesses and realize cost savings by reducing the operational burden of manual work and data remediation.
To re-platform credit risk IT effectively, banks should learn the lessons from peers that have successfully transformed from old to new technology. We have condensed these lessons into the following five areas.
Know Where You’re Going
Banks must have a clear and detailed picture of the future credit risk IT architecture and how it supports the future state vision of the credit risk function. Without this north star, banks have no basis to assess whether proposed technology investments are moving toward a strategic solution or not. Agreeing on the architecture also forces banks to debate and resolve the key business trade-offs involved with more difficult architectural decisions. Unresolved core design questions can become highly politicized and block meaningful action to remediate deep-seated technology issues.
Deliver, Deliver, Deliver
Successful credit risk re-platforming is a complex and time-consuming exercise. For the largest multi-national banks, fully decommissioning critical credit infrastructure and switching over to a new environment can take years. Waiting until the end of the journey to deliver all benefits in a “big bang” always ends in disappointment.
Leading banks are following migration paths that deliver benefits from very early in the program, producing tangible improvements at regular intervals. In this way, they build momentum to ensure that all stakeholders continue to engage with and support the program.
Embrace Modern Technology
Pioneer banks are looking to incorporate modern technology concepts — microservices, cloud storage, stateless calculators and data rules libraries — into their solutions.
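To make the “stateless calculators” concept concrete, here is a minimal sketch. The expected-loss formula (PD × LGD × EAD) is a standard textbook simplification, and all class and function names here are hypothetical; the point is that the calculation depends only on its inputs, so it can be deployed as a horizontally scalable microservice and re-run freely under stress scenarios.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Exposure:
    """Hypothetical input record; field names are illustrative only."""
    pd: float   # probability of default (0-1)
    lgd: float  # loss given default (0-1)
    ead: float  # exposure at default, in currency units

def expected_loss(exposure: Exposure) -> float:
    """Stateless calculation: the output depends only on the inputs,
    with no stored state, so results are reproducible and scalable."""
    return exposure.pd * exposure.lgd * exposure.ead

def stressed_loss(exposure: Exposure, pd_shock: float) -> float:
    """A stress scenario is just the same pure function applied to
    shocked inputs, e.g. a multiplicative shock to default probability."""
    shocked = Exposure(pd=min(1.0, exposure.pd * pd_shock),
                       lgd=exposure.lgd,
                       ead=exposure.ead)
    return expected_loss(shocked)
```

Because the calculator holds no state, running a portfolio-wide sensitivity analysis reduces to mapping the same function over every exposure under each scenario, which is exactly the kind of time-critical query that legacy monoliths struggle to answer.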
Risk teams tend to be conservative by nature, especially when it comes to adopting new technology. Many banks replace old systems with new systems that look very similar. Against the backdrop of rapid improvements in data management, storage and calculation technology, such an approach is not only expensive but also will put banks at a competitive disadvantage.
One successful bank is seeding experienced engineers from non-risk backgrounds as technology disruptors into the risk architecture teams to ensure that the new technologies flow into emerging designs.
One Dream Team
Traditional ways of working between risk and IT persist at many large banks — despite extensive efforts to adopt agile methodologies. Risk practitioners give their requirements to change teams that act as go-betweens with IT. IT teams take these requirements and build or procure a system that is tested by the risk practitioners (or change teams). Defects are logged and passed back to the IT team for remediation. And the cycle continues through multiple iterations. This approach is not only lengthy but also often results in systems that are not fit for purpose.
Leading banks are employing a fully joined-up approach through the entire development lifecycle. Critically, credit officers, credit control staff, quantitative data analysts, IT architects, and other data specialists commit significant time through the entire design, build and rollout process — and are evaluated on the team’s success. Not only does this approach speed up decision-making, but it also ensures that the new systems incorporate features that address the most important practical needs of the users.
Clean In, Clean Out
Innovative banks are addressing the perennial issue of poor-quality data as an integral part of their re-platforming efforts, putting in place data management mechanisms to use “golden sources” of credit risk data and to ensure that the data in these golden sources is correct. Inaccurate, incomplete and out-of-date data has long been the Achilles’ heel of credit risk. New calculation engines, data repositories and reporting engines count for little if the data they consume is of poor quality. It’s no coincidence that credit data has become a major focus for regulators.
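Ensuring that golden-source data is correct typically means running automated validation rules at the point of ingestion. The sketch below illustrates the idea; the field names, rating scale and staleness threshold are assumptions for illustration, not any bank’s actual schema.

```python
from datetime import date, timedelta

def validate_credit_record(record: dict, as_of: date) -> list[str]:
    """Return a list of data-quality errors for one credit record.
    Rules shown (completeness, valid range, freshness) are illustrative;
    real golden-source controls would cover far more fields."""
    errors = []
    # Completeness: every record must identify its counterparty.
    if not record.get("counterparty_id"):
        errors.append("missing counterparty_id")
    # Validity: hypothetical 1-22 internal rating master scale.
    rating = record.get("internal_rating")
    if rating is not None and rating not in range(1, 23):
        errors.append(f"rating {rating} outside 1-22 master scale")
    # Freshness: assume reviews older than 12 months are stale.
    last_review = record.get("last_review_date")
    if last_review is None or as_of - last_review > timedelta(days=365):
        errors.append("review stale or missing (> 12 months)")
    return errors
```

Running such checks continuously against the golden source — rather than remediating data manually at report time — is what turns “clean in, clean out” from a slogan into an operating control.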
Credit officers and banks with sub-standard credit risk infrastructure are vulnerable in today’s highly uncertain and volatile world. Banks that move to a more modern architecture will facilitate informed credit decision-making and enjoy a distinct competitive advantage.