AI Governance in Regulated Swiss Companies

Artificial intelligence (AI) is increasingly being used in regulated industries such as finance and healthcare. To ensure responsible use of these technologies, companies need clear rules, structures, and controls. This article highlights key requirements and implementation options for effective AI governance in regulated Swiss companies.

1. Legal and Regulatory Requirements

1.1. General Legal Framework

Companies must ensure that their AI applications comply with applicable legal requirements. This includes data protection regulations such as the European General Data Protection Regulation (GDPR) and the Swiss Federal Act on Data Protection (FADP), especially if AI systems process personal data. The EU Artificial Intelligence Act (AI Act) applies to Swiss companies that operate in the EU or whose AI products are used within the EU.

1.2. Specific Requirements for Regulated Entities
Finance Sector:

  • The Swiss Financial Market Supervisory Authority (FINMA) sets clear requirements regarding risk management, transparency, and customer protection in the use of AI and algorithms. Supervised financial institutions must actively assess risks and align their control systems accordingly when using AI. This includes considering the complexity of the risk profile, the materiality of the AI application, and the probability of associated risks.

  • Credit scoring algorithms must be demonstrably fair and transparent to prevent discrimination.

  • MiFID II imposes strict rules on the use of AI, especially where AI is employed to meet regulatory requirements. Where AI is used in algorithmic and high-frequency trading (HFT) to optimize trading strategies, firms must ensure it cannot facilitate market manipulation. AI-powered robo-advisors must analyze client profiles accurately and comply with suitability assessment rules.

Healthcare Sector:

  • The use of AI in medical applications in Switzerland is subject to the requirements of the Federal Office of Public Health (FOPH) and must meet both ethical and data protection standards. This includes ensuring data security and confidentiality, as well as the quality assurance of medical algorithms. Patients must be informed about the use of AI and understand how it may impact their treatment.

2. Key Principles

The following principles must be observed when using AI systems in (regulated) companies:

  • Fairness: AI systems must be designed to avoid discriminatory effects on customers, employees, or patients.

  • Explainability/Traceability: AI decisions must be understandable and documented for all stakeholders. This is especially critical in regulated environments, where system behavior under various conditions must be well understood.

  • Transparency/Consent: Individuals must be informed about when and how AI is used. Whenever possible, they should be able to give their informed consent.

  • Data Quality: The data used must be accurate, complete, and up to date. Access to relevant data must be ensured at all times.

  • Data Security: Handling large volumes of data requires high security standards, including access controls, encryption, and anonymization of training data (a minimal pseudonymization sketch follows this list).
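
One common building block for anonymizing training data is pseudonymization of direct identifiers before records enter a training pipeline. The following is a minimal sketch, assuming salted hashing; the salt handling and field names are illustrative assumptions, not a complete anonymization concept:

```python
import hashlib

# Assumption: in practice the salt would be a managed secret, not a constant.
SALT = b"example-secret-salt"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted hash before training use."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]

# Hypothetical client record: the raw identifier never reaches the training set.
record = {"client_id": "CH-889-231", "age_band": "40-49", "income_band": "B"}
record["client_id"] = pseudonymize(record["client_id"])
print(record)
```

Note that salted hashing alone is pseudonymization rather than full anonymization; whether it suffices depends on the data involved and the applicable FADP/GDPR analysis.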

3. Risk Management and Control

The introduction of AI systems must be accompanied by a risk management process to identify and mitigate potential legal, operational, or security risks. This includes:

  • Evaluating potential harm from faulty AI decisions or security gaps

  • Implementing contingency plans

  • Introducing risk classification systems and appropriate risk mitigation measures

  • Maintaining proper documentation, such as an AI inventory that provides a detailed overview of all AI systems used (an illustrative inventory entry is sketched after this list)

  • Establishing control mechanisms to ensure continuous monitoring and adjustment of AI systems. This includes regular reviews of the entire model development process by independent, qualified internal or external experts. Such measures help identify model risks and avoid undesirable effects.
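
To illustrate the AI inventory mentioned above, here is a minimal sketch of what a single inventory entry could look like, assuming a simple Python dataclass; the field names (system_name, risk_class, and so on) are illustrative assumptions rather than a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIInventoryEntry:
    """One record in a company-wide AI inventory (illustrative fields only)."""
    system_name: str               # internal name of the AI system
    business_owner: str            # accountable person or unit
    purpose: str                   # what the system is used for
    risk_class: str                # e.g. "low", "medium", "high" per internal classification
    processes_personal_data: bool  # triggers FADP/GDPR considerations
    third_party_provider: str | None = None  # set when the system is outsourced
    last_validation: date | None = None      # most recent independent review

# Hypothetical entry for a credit-scoring model
entry = AIInventoryEntry(
    system_name="credit-scoring-v2",
    business_owner="Retail Lending",
    purpose="Pre-screening of consumer credit applications",
    risk_class="high",
    processes_personal_data=True,
    last_validation=date(2024, 11, 30),
)
print(entry)
```

Even such a simple structure makes it possible to filter for high-risk or outsourced systems when planning reviews.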

4. Clear Governance Structures

  • Defining Responsibilities: Clear accountability for AI governance is essential. Companies should define specific roles, such as a Chief AI Officer or an AI Governance Committee responsible for overseeing compliance with governance policies.

  • Interdisciplinary Collaboration: Effective AI governance requires collaboration across departments such as Legal, Compliance, IT, Data Management, and Ethics. This ensures all perspectives are considered.

5. Ongoing Training and Education

  • Employee Training: Companies must ensure that employees are adequately trained to understand and responsibly use AI technologies. This includes both technical training and education on ethical and legal aspects.

  • Risk Awareness: Employees should be aware of potential risks and challenges associated with AI use, especially ethical dilemmas and regulatory obligations.

6. Technological and Organizational Adaptation

AI models must be robust, accurate, and stable. Continuous maintenance, updates, and development are required to ensure alignment with the latest technologies and legal requirements. AI systems must be able to adapt quickly to such changes.
FINMA assesses whether supervised companies conduct tests to verify data quality and AI functionality. Companies should test their AI systems for accuracy, robustness, and stability (and bias, where applicable). Thresholds and validation mechanisms should be defined to ensure output accuracy and quality.
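
As a minimal sketch of such a threshold-based validation gate (the metric names and threshold values below are assumptions for illustration, not regulatory figures):

```python
# Minimal sketch of a threshold-based validation gate for model outputs.
THRESHOLDS = {
    "accuracy": 0.90,        # minimum share of correct predictions on the test set
    "max_error_rate": 0.05,  # maximum tolerated rate of critical misclassifications
}

def validate_model(metrics: dict[str, float]) -> list[str]:
    """Return a list of violated checks; an empty list means the model passes."""
    violations = []
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        violations.append(f"accuracy {metrics['accuracy']:.2f} below {THRESHOLDS['accuracy']}")
    if metrics["critical_error_rate"] > THRESHOLDS["max_error_rate"]:
        violations.append(f"critical error rate {metrics['critical_error_rate']:.2f} above {THRESHOLDS['max_error_rate']}")
    return violations

# Example: a model that fails the accuracy check and would be flagged for review
print(validate_model({"accuracy": 0.87, "critical_error_rate": 0.03}))
```

In practice, a failed check would block deployment or trigger escalation to the model owner.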

7. Outsourcing

If AI systems are provided by third parties, additional tests, controls, and contractual clauses covering liability and accountability must be in place. External providers must have the required skills and experience.

8. Two Practical Examples

8.1. AI in Wealth Management

A Swiss wealth manager uses an AI-powered robo-advisor for digital portfolio management. Based on a structured questionnaire, the tool collects data about clients’ financial situations, investment goals, and risk tolerance. Automated investment proposals are generated and regularly adjusted to market developments.
In accordance with FINMA requirements—particularly those concerning suitability and appropriateness—companies must ensure that the recommended investment strategy matches the client’s risk profile. Control mechanisms trigger manual review by qualified professionals whenever a proposal deviates from that profile, as sketched below.
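
A minimal sketch of such a deviation check, assuming a simple 1-to-5 risk scale and a tolerance of one step (both values are illustrative assumptions):

```python
# Sketch of a suitability control: flag proposals whose risk deviates from
# the client's profile for manual review. Scale and tolerance are assumptions.
RISK_TOLERANCE = 1  # maximum allowed gap between profile and proposal (1-5 scale)

def needs_manual_review(client_risk_profile: int, proposal_risk: int) -> bool:
    """Return True when the proposal deviates too far from the client profile."""
    return abs(client_risk_profile - proposal_risk) > RISK_TOLERANCE

print(needs_manual_review(client_risk_profile=2, proposal_risk=4))  # True -> escalate
```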
To ensure explainability, AI-generated recommendations are systematically documented. A modular system highlights key influencing factors (e.g., risk category, investment horizon, liquidity preferences), enabling clients and internal reviewers to understand the results.
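
How this documentation step might look as a minimal sketch; the record fields and identifiers are hypothetical:

```python
import json
from datetime import datetime, timezone

def document_recommendation(client_id: str, strategy: str, factors: dict) -> str:
    """Serialize a recommendation and its key influencing factors for the audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        "recommended_strategy": strategy,
        "key_factors": factors,  # e.g. risk category, horizon, liquidity preferences
    }
    return json.dumps(record, indent=2)

# Example: log a recommendation so clients and reviewers can trace its drivers
print(document_recommendation(
    client_id="C-1042",
    strategy="balanced",
    factors={"risk_category": 3, "investment_horizon_years": 10, "liquidity_preference": "medium"},
))
```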
The model undergoes continuous monitoring and is reviewed quarterly by an internal validation team, focusing on data quality, performance, and potential bias. Regular training ensures the digital advisory service complies with supervisory due diligence requirements.
FINMA’s outsourcing requirements are also met: Contracts with external tech partners include provisions on data protection, audit rights, and liability—thus supporting both regulatory compliance and trust in the AI-driven advisory process.

8.2. AI in Healthcare

A leading Swiss hospital uses AI to assist in radiology diagnostics. The application automatically analyzes X-ray and MRI images and flags potential abnormalities for medical review—aiming to speed up diagnostics and enhance quality.
Before deployment, an interdisciplinary team drawn from radiology, data protection, IT security, and ethics reviewed the AI. The training data were anonymized and screened for bias.
Physicians retain final decision-making authority. AI serves purely as a support tool. Patients are transparently informed about the use of AI and can understand how the assessments are derived.
The AI system’s performance is continuously monitored. The hospital defines specific thresholds for sensitivity and specificity, which are checked as part of ongoing quality monitoring. In case of anomalies, a revalidation is conducted by the internal AI competence team to ensure robustness, reliability, and ethical acceptability in daily clinical practice.
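
A minimal sketch of how sensitivity and specificity could be computed from confusion-matrix counts and compared against such thresholds; the threshold values are illustrative assumptions, not clinical recommendations:

```python
# Compare sensitivity and specificity against monitoring thresholds.
SENSITIVITY_THRESHOLD = 0.95  # share of true abnormalities the system must flag
SPECIFICITY_THRESHOLD = 0.90  # share of normal images it must leave unflagged

def check_performance(tp: int, fn: int, tn: int, fp: int) -> bool:
    """Return True if both metrics meet their thresholds; False triggers revalidation."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
    return sensitivity >= SENSITIVITY_THRESHOLD and specificity >= SPECIFICITY_THRESHOLD

# Example monitoring run: 190 abnormalities flagged, 10 missed, 880 normal images
# correctly passed, 120 false alarms -> specificity fails, revalidation is due.
if not check_performance(tp=190, fn=10, tn=880, fp=120):
    print("Threshold violated: trigger revalidation by the AI competence team")
```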

9. Conclusion

For regulated companies, a well-structured AI governance framework is essential to align AI use with legal, ethical, and business requirements. The foundation is a comprehensive current-state analysis: This includes assessing the status of AI systems in the company, reviewing relevant regulations (e.g., FADP, EU AI Act, GDPR), evaluating existing governance structures, and identifying gaps and risks. Targeted risk mitigation measures can be derived from this analysis.
Effective implementation requires close collaboration across all relevant functions—from Legal and Compliance to IT—and ongoing monitoring of risks and regulatory developments.

10. Support by LezziLegal

We support you in developing and implementing legally compliant and practical AI governance—targeted, efficient, and based on many years of experience:

  • Current-State Analysis: We assess the current state of your AI applications and identify regulatory action points.

  • Project Support: From concept to implementation, we support you as external legal advisors—with a focus on data protection, governance, and compliance.

  • Training: We raise your employees’ awareness of legal, ethical, and technical AI requirements—clearly and practically.

  • Regulatory Communication: We assist you in interactions with supervisory and expert authorities.

  • Audit Preparation: Whether for internal review or external audit—we help you prepare thoroughly and accompany you throughout the process if needed.

Interested in learning more? Explore related articles on our blog:

Decoding the AI Act: What Is It and How Does It Affect You?
