Classification¹
¹ This is a highly simplified presentation intended to enable a quick initial categorisation of the topic. Each institution should assess the relevance and the specific need for action individually.
A few days ago, based on the overview of AI regulation prepared by DETEC and the FDFA, the Federal Council defined the key parameters for regulating AI in Switzerland: Switzerland is to ratify the Council of Europe's AI Convention and implement it both by adapting sector-specific legislation and through legally non-binding measures. In the financial market, however, it is above all FINMA that is concretising AI regulation with its Guidance 08/2024. Having first formulated its expectations for four particularly challenging areas in its Risk Monitor at the end of 2023, the supervisory authority uses the guidance to again draw attention to the risks associated with the use of artificial intelligence in the financial market and to communicate more specific expectations for appropriate governance and risk management. These developments mean that Swiss financial service providers wishing to use AI applications must also engage with AI regulation.
FINMA expects active engagement with the impact of AI applications on risk profile
FINMA expects supervised institutions that use AI to actively consider the impact on their risk profile and to adapt their governance and internal control system (ICS) accordingly. In particular, the materiality of the AI applications used and the probability that the resulting risks materialise must be taken into account, with FINMA citing the following possible factors:
Materiality: significance for compliance with financial market legislation, financial impact, legal and reputational risks, relevance of the product for the company, number of clients affected, types of clients, importance of the product for clients and consequences of errors or failure.
Probability of materialisation: complexity (e.g. explainability and predictability), type and amount of data used (e.g. unstructured data, integrity, appropriateness, personal data), unsuitable development or monitoring processes, degree of autonomy and process integration, dynamics (e.g. short calibration cycles), linkage of several models and the potential for attacks or failures (e.g. increased due to outsourcing).
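One way to operationalise this two-dimensional assessment is a simple risk matrix per application. The following sketch is purely illustrative: the three-point scale, the scoring rule and the class thresholds are our assumptions, not FINMA prescriptions, and each institution would calibrate its own methodology.

```python
from dataclasses import dataclass

# Illustrative three-point scale; FINMA does not prescribe a specific metric.
LOW, MEDIUM, HIGH = 1, 2, 3

@dataclass
class AIRiskAssessment:
    """Rating of one AI application along FINMA's two dimensions (sketch)."""
    name: str
    materiality: int   # e.g. client impact, legal and reputational exposure
    probability: int   # e.g. model complexity, data quality, autonomy

    @property
    def score(self) -> int:
        # Simple product of the two ratings; institutions may weight
        # individual factors differently.
        return self.materiality * self.probability

    @property
    def risk_class(self) -> str:
        # Hypothetical thresholds for a quick initial categorisation.
        if self.score >= 6:
            return "high"
        if self.score >= 3:
            return "medium"
        return "low"

chatbot = AIRiskAssessment("client advisory chatbot",
                           materiality=HIGH, probability=MEDIUM)
print(chatbot.risk_class)  # high
```

The product rule keeps the matrix symmetric; an institution that considers client impact decisive could instead weight materiality more heavily.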
When using AI, FINMA focuses in particular on operational risks, i.e. the risk of losses resulting from the inappropriateness or failure of internal processes, people or systems or as a result of external events. These primarily include model risks (e.g. lack of robustness, correctness, bias and explainability) as well as IT and cyber risks. In addition, from FINMA's perspective, relevant risks exist in particular in the dependence on third parties, in legal and reputational risks and in the allocation of responsibilities.
Measures to address AI risks
To support supervised institutions in identifying, assessing, managing and monitoring risks, FINMA then lists various measures to address them, divided into seven areas. Financial intermediaries can use these example measures as a guide when defining their own measures in the area of AI compliance:
Governance
- Inclusion of the use of AI in the internal control system (i.e. identifying risks, managing and classifying them in a central inventory, deriving measures, defining responsibilities and accountabilities for development, implementation, monitoring and use, as well as defining specifications for model testing and supporting system controls, documentation standards and broad training measures)
- In the case of outsourcing relationships, implementing additional tests, controls and contractual clauses that regulate responsibilities and liability issues, as well as ensuring that the delegatee has sufficient skills and experience
Inventory and risk classification
- Definition of AI (FINMA refers to the OECD definition approach)
- Maintaining AI inventories, including ensuring their completeness and risk classification of AI applications
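A central AI inventory with a built-in completeness check could be structured along the following lines. The field names and the two sample entries are hypothetical; the point is that every registered application must eventually carry a risk classification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InventoryEntry:
    """One application in a central AI inventory (illustrative fields)."""
    name: str
    owner: str                       # accountable business owner
    purpose: str
    outsourced: bool = False         # triggers additional controls if True
    risk_class: Optional[str] = None # set once the risk assessment is done

class AIInventory:
    def __init__(self) -> None:
        self._entries: dict[str, InventoryEntry] = {}

    def register(self, entry: InventoryEntry) -> None:
        self._entries[entry.name] = entry

    def unclassified(self) -> list[str]:
        # Completeness check: every entry needs a risk classification.
        return [n for n, e in self._entries.items() if e.risk_class is None]

inv = AIInventory()
inv.register(InventoryEntry("fraud scoring", "Risk Dept",
                            "transaction monitoring", risk_class="high"))
inv.register(InventoryEntry("document OCR", "Operations", "form digitisation"))
print(inv.unclassified())  # ['document OCR']
```

The `unclassified()` report is the kind of control that supports the completeness expectation: any application in productive use but missing from the classified inventory surfaces immediately.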
Data quality
- Creation of internal directives and guidelines with specifications for ensuring the completeness, correctness, integrity and availability of data as well as access to the data
- Implementation of appropriate controls
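Such controls on completeness and correctness can be automated at record level. The following sketch uses invented field names and plausibility bounds; a real implementation would derive them from the institution's internal directives.

```python
# Illustrative data-quality control: flag records that are incomplete or
# implausible before they reach an AI application. Field names and the
# plausibility rule are assumptions for this example.

REQUIRED = ("client_id", "amount", "currency")

def check_record(record: dict) -> list[str]:
    """Return a list of data-quality findings for one record (empty = OK)."""
    findings = []
    for field in REQUIRED:
        if record.get(field) in (None, ""):              # completeness
            findings.append(f"missing {field}")
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:  # plausibility
        findings.append("negative amount")
    return findings

print(check_record({"client_id": "C1", "amount": -5, "currency": ""}))
# ['missing currency', 'negative amount']
```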
Tests and ongoing monitoring
- Schedule tests to assess the performance and results of an AI application. These include, among others:
- "Backtesting" or "out-of-sample testing": tests in which the users know the correct result and check whether the application delivers it
- Sensitivity analyses or "stress testing": Constructed tests to understand how the application behaves in certain borderline cases
- "Adversarial testing": Tests with incorrect input data
- Setting predefined performance indicators to assess how well an AI application is achieving the defined goals
- Consideration of fallback mechanisms in the event that the AI develops in an undesirable direction and no longer fulfils the defined objectives
- Definition of threshold values or other validation methods to ensure the correctness and continuous quality of the outputs of an AI application (e.g. using random samples, backtesting, predefined test cases or benchmarking)
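The backtesting, threshold and fallback bullets above can be combined into one monitoring loop. The sketch below is a minimal illustration under our own assumptions: the 95% accuracy threshold, the toy model and the fallback hand-over are invented, not taken from the guidance.

```python
# Illustrative sketch: backtesting on labelled samples plus a predefined
# threshold that triggers a fallback mechanism when quality degrades.

def backtest(model, samples):
    """Share of labelled samples where the model reproduces the known result."""
    correct = sum(1 for inputs, expected in samples if model(inputs) == expected)
    return correct / len(samples)

def monitor(model, samples, threshold=0.95, fallback=None):
    """Hand over to the fallback when accuracy drops below the threshold."""
    accuracy = backtest(model, samples)
    if accuracy < threshold and fallback is not None:
        return fallback, accuracy   # e.g. a manual or rule-based process
    return model, accuracy

# Toy example: a "model" that rounds a score, checked against known labels.
samples = [(0.9, 1), (0.2, 0), (0.7, 1), (0.1, 0)]
model = lambda x: round(x)
active, acc = monitor(model, samples, threshold=0.95)
print(acc)  # 1.0
```

Run periodically (e.g. per calibration cycle), this gives the "out-of-sample testing" and threshold validation a concrete, auditable form.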
Documentation
- Documentation of the purpose of the application, data selection and preparation, model selection, performance measures, assumptions, limitations, testing and controls as well as fallback solutions
- Documentation of data sources and data quality checks including integrity, correctness, appropriateness, relevance, bias and stability
- Ensuring the robustness, reliability and traceability of applications
- Carrying out an appropriate risk classification including justification and review
Explainability
- Ensure that the results of AI applications can be understood, explained and reproduced. To this end, it should be understood what the drivers of the applications are and how the application behaves under different conditions in order to be able to assess the plausibility and robustness of the results.
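Understanding the drivers of an application and its behaviour under different conditions can be probed with a simple one-at-a-time sensitivity analysis. The model and feature names below are hypothetical; real applications would need more sophisticated explainability techniques.

```python
# Illustrative one-at-a-time sensitivity analysis: nudge each input feature
# and measure how strongly the model output reacts, to identify drivers.

def sensitivity(model, baseline: dict, delta: float = 0.1) -> dict:
    """Output change per feature when perturbing that feature by `delta`."""
    base_out = model(baseline)
    result = {}
    for key, value in baseline.items():
        perturbed = dict(baseline)
        perturbed[key] = value + delta
        result[key] = model(perturbed) - base_out
    return result

# Toy scoring model (hypothetical): income dominates, age barely matters.
model = lambda f: 0.8 * f["income"] + 0.05 * f["age"]
drivers = sensitivity(model, {"income": 1.0, "age": 1.0})
print(drivers)
```

Here the analysis would reveal `income` as the dominant driver, which can then be checked for plausibility against the intended purpose of the application.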
Independent review
- Ensure that, for material AI applications, the independent review includes an objective, experienced and unbiased opinion on the appropriateness and reliability of a process for a particular use case and that the findings of the review are taken into account in the development of the application.
Conclusion
As a result of the transparency FINMA has created in the area of AI regulation with Guidance 08/2024, financial intermediaries that use or intend to use AI should sharpen their understanding of the risks in this area and align their governance and risk management accordingly. AI risks should therefore be included in the risk analysis alongside the other operational risks, followed by the definition of risk-mitigating measures and controls to ensure their effectiveness. FINMA will continue to develop its expectations based on its supervisory experience and in line with international regulatory approaches, and will create further transparency in the market as required. By the end of 2026, the FDJP, together with DETEC and the FDFA, will also draft a consultation bill setting out the legal measures for implementing the Council of Europe's AI Convention. By then, a plan for further, legally non-binding measures will also be drawn up, providing further clarity on Switzerland's approach to the regulation of artificial intelligence.
Do you have questions about AI regulation? Our specialists from the Regulatory & Compliance FS team will be happy to support you. We look forward to hearing from you.