The use of artificial intelligence (AI) in the financial market is on the rise, presenting both opportunities and risks. In its new Guidance 8/2024, FINMA emphasizes the need to properly identify, assess and manage these risks. While Switzerland has no AI-specific legislation, existing regulatory requirements concerning governance and risk management also apply to AI, in line with Switzerland's technology-neutral and principle-based financial market regulation.
To adapt their compliance, risk management and internal control system (ICS) to AI, FINMA-supervised institutions using AI must develop AI risk awareness: they must specifically address AI in their processes and identify, weight, assess and manage the specific AI risks they face. This requires a clear understanding of the AI solutions used, whether developed in-house or outsourced.
FINMA has shared its findings on the risks typically arising from AI. Conceptually, these are operational risks arising along the whole value chain of AI products and services. The starting point is data quality: the data used must not be incomplete, inaccurate, unrepresentative or outdated, and certain data, such as unstructured data, can distort the quality assessment. Furthermore, the decentralisation of processes makes it challenging to assign responsibility for the autonomous actions of these systems. As many AI solutions are outsourced, the applicable regulatory outsourcing requirements must also be complied with.
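By way of illustration only (FINMA's Guidance is principle-based and prescribes no particular tooling), the sketch below shows what a minimal automated data quality gate along those dimensions could look like in Python. All column names, thresholds and the reporting logic are hypothetical assumptions, not regulatory requirements.

```python
# Illustrative only: a minimal data quality check for an AI input dataset.
# All thresholds and column names are hypothetical assumptions.
import pandas as pd

MAX_MISSING_RATIO = 0.05   # hypothetical completeness threshold
MAX_AGE_DAYS = 365         # hypothetical freshness threshold
MIN_CLASS_SHARE = 0.10     # hypothetical representativeness threshold

def data_quality_report(df: pd.DataFrame, label_col: str, date_col: str) -> dict:
    """Check completeness, freshness and representativeness of a dataset."""
    report = {}
    # Completeness: share of missing values per column.
    missing = df.isna().mean()
    report["incomplete_columns"] = missing[missing > MAX_MISSING_RATIO].to_dict()
    # Freshness: count records older than the cut-off.
    age_days = (pd.Timestamp.now() - pd.to_datetime(df[date_col])).dt.days
    report["stale_rows"] = int((age_days > MAX_AGE_DAYS).sum())
    # Representativeness: flag under-represented classes in the label column.
    shares = df[label_col].value_counts(normalize=True)
    report["underrepresented_classes"] = shares[shares < MIN_CLASS_SHARE].to_dict()
    return report

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_type": ["retail"] * 9 + ["institutional"],
        "updated_at": ["2024-01-15"] * 8 + ["2021-06-01"] * 2,
        "income": [50_000, None] + [60_000] * 8,
    })
    print(data_quality_report(sample, label_col="customer_type", date_col="updated_at"))
```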
Generally, not only must the output be checked, but financial institutions must also understand how the applications function (explainability), so that the relevant risks can be identified and assessed. Finally, potential failures, vulnerabilities to cyber and IT risks, and business continuity risks need to be anticipated. Data protection risks are also relevant, although, according to FINMA, supervised institutions already take these into account more consistently than other model-related risks.
Once the risks are identified, they must be weighted to determine their materiality. FINMA names, on a non-exhaustive basis, factors that increase the materiality of risks or the likelihood of their materialisation. Among them, FINMA refers to the potential impact on compliance and on the balance sheet; the legal and reputational impact; the number of customers affected and their profile (retail or institutional); the importance of the products affected; the expected consequences of potential failures; the complexity, predictability and explainability of processes; and the possibility of monitoring them. The type of data used (unstructured data, data integrity, personal data, etc.) must also be weighted.
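Purely for illustration, and without suggesting that materiality can be reduced to a formula, one way institutions could operationalise such a weighting internally is a simple scoring grid. The factor names, weights, ratings and threshold below are hypothetical assumptions; in practice these ratings reflect expert judgement.

```python
# Illustrative only: a simple weighted scoring of materiality factors.
# Factor names, weights and the 1-5 scale are hypothetical assumptions.
WEIGHTS = {
    "compliance_impact": 0.20,
    "balance_sheet_impact": 0.15,
    "legal_reputational_impact": 0.20,
    "customers_affected": 0.15,
    "product_importance": 0.10,
    "complexity_explainability": 0.10,
    "data_sensitivity": 0.10,
}

def materiality_score(ratings: dict) -> float:
    """Weighted average of factor ratings on a 1 (low) to 5 (high) scale."""
    return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

ratings = {
    "compliance_impact": 4,
    "balance_sheet_impact": 2,
    "legal_reputational_impact": 5,
    "customers_affected": 3,         # e.g. large retail customer base
    "product_importance": 4,
    "complexity_explainability": 5,  # opaque model, hard to explain
    "data_sensitivity": 4,           # personal / unstructured data
}

score = materiality_score(ratings)
tier = "high" if score >= 3.5 else "standard"   # hypothetical cut-off
print(f"Materiality score: {score:.2f} / 5 -> {tier} risk tier")
```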
In a third step, appropriate mechanisms must be defined to identify and assess the specific risks on an ongoing basis. For this purpose, performance indicators, data quality tests and the stability and robustness of systems are to be reviewed, and fallback mechanisms, adversarial tests, stress tests and backtests are to be implemented on an ongoing basis.
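As a sketch of what such ongoing monitoring could look like in practice, the example below compares a model's live performance against a backtest baseline and triggers a fallback when performance degrades. The baseline, tolerance and fallback action are assumptions for illustration, not FINMA specifications.

```python
# Illustrative only: ongoing monitoring against a hypothetical backtest
# baseline, with a fallback trigger. Thresholds are assumptions.
from statistics import mean

BASELINE_ACCURACY = 0.92   # hypothetical accuracy from backtesting
MAX_DEGRADATION = 0.05     # tolerated drop before triggering fallback

def monitor(window_outcomes: list) -> str:
    """Return 'ok' or 'fallback' based on a rolling accuracy window."""
    accuracy = mean(window_outcomes)
    if accuracy < BASELINE_ACCURACY - MAX_DEGRADATION:
        # In production this would route decisions to a rule-based
        # fallback process and alert the risk function.
        return "fallback"
    return "ok"

# Example: 100 recent predictions, 84 correct -> accuracy 0.84 < 0.87.
recent = [True] * 84 + [False] * 16
print(monitor(recent))  # -> "fallback"
```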
From a governance perspective, the following measures should be implemented to identify, mitigate and control the risks around AI: central management and accountability, independent review by skilled personnel, third-party contractual and liability management, training of employees, definition of models for testing, and establishment of policy and documentation standards.
As the understanding of AI-related risks is still evolving, FINMA will continue to refine its expectations on governance and risk management. In conclusion, the FINMA Guidance underscores the importance of diligent risk management related to AI, stressing the need for financial institutions to implement strong governance practices, accurately classify risks, ensure data quality, perform adequate testing and monitoring, document processes, explain AI application results, and conduct independent reviews.
Disclaimer: This Newsletter is only a descriptive overview and is not intended to be used as legal advice. Accordingly, MLL Legal does not assume any liability in connection with the contents of this Newsletter. Please feel free to contact us if you would like to seek legal advice.