
Unlocking the AI Act: What is it and how does it affect you?

1. Introduction

The European Union has agreed on a set of rules for the use of AI. This legal framework aims to strengthen public trust in AI while protecting safety and fundamental rights. On 8 December 2023, the European Commission, the Council of the European Union and the European Parliament reached a political compromise on the AI Act. The AI Act applies to AI systems that are placed on the market or used in the EU, as well as to those that have an impact on people in the EU. It is expected to come into force in the first quarter of 2024, followed by a two-year implementation period. Although it will be some time before the regulation becomes relevant for Swiss companies, it is advisable to familiarise yourself with the current draft now. Given its extraterritorial effect, the AI Act could become significant for Swiss companies, particularly if they offer an AI system on the EU market or if the output generated by their AI system is used in the EU.

 

This briefing provides an overview of the AI Act, followed by the impact it could have on Swiss companies. Note, however, that the final version may still contain certain changes.

 

2. Classification of AI systems

a) AI systems as foundation models

New rules have been introduced for AI systems that are used for multiple purposes (general-purpose AI, GPAI) and later integrated into high-risk systems. The provisional agreement also takes into account specific cases of GPAI systems. It is crucial to note that so-called "high-impact" foundation models are a subset of foundation models, but are regulated differently. Clear rules have been established for foundation models, i.e. large systems[1]: according to the provisional agreement, they must fulfil certain transparency requirements before they can be placed on the market. Stricter regulation has been introduced for "high-impact" foundation models. These models are trained with extensive data sets and are characterised by above-average complexity, capabilities and performance. They harbour the potential to spread systemic risks along the entire value chain.

 

b) AI systems according to the risk-based approach

The regulation of AI systems is based on their potential risks to society and fundamental rights. AI systems are categorised into four risk levels: i) unacceptable risk, ii) high risk, iii) limited risk and iv) minimal or no risk.

i) Unacceptable risk

This risk level includes AI systems that pose a serious threat to society and the fundamental rights of individuals. AI systems are considered unacceptable if they violate the values of the European Union, such as fundamental rights. The use of these systems is strictly prohibited and must be discontinued within six months of the AI Act coming into force. Examples include:

  • Social scoring systems;
  • Predictive policing;
  • Biometric identification systems in public spaces (except for specific law enforcement purposes);
  • Untargeted scraping of facial images from the internet or video surveillance footage to create facial recognition databases;
  • Emotion recognition in the workplace and in educational institutions;
  • Children’s toys with voice assistants that could lead to dangerous behaviour by children;
  • AI systems that exploit a weakness or vulnerability of a particular group of people due to their age or physical or mental disability in order to significantly influence the behaviour of a person belonging to that group in a way that causes or is likely to cause physical or psychological harm to that person or another person.

 

 

ii) High-risk AI systems

High-risk AI systems are those that pose a high risk to the health and safety or fundamental rights of natural persons. In line with the risk-based approach, such high-risk AI systems are permitted on the European market provided they meet certain mandatory requirements and undergo a prior conformity assessment. Examples of such AI systems include:

  • AI systems for screening applicants during the recruitment process;
  • AI in critical infrastructure (e.g. transport, energy, gas);
  • AI for credit scoring or assessing the creditworthiness of individuals;
  • AI in safety components of products (e.g. the use of AI in robot-assisted surgery);
  • Biometric identification, categorisation and emotion recognition systems (unless they are completely banned) or AI systems for influencing elections.

 

iii) Systems with limited risk

This category covers AI systems that entail only limited risks, in particular the risk of manipulation. Such AI systems should fulfil special transparency requirements so that users can make informed decisions; above all, users should be aware that they are interacting with an AI system. Examples include:

  • “Chatbots” or “deep fakes”;
  • AI-supported video games;
  • AI systems that are used in virtual assistants;
  • AI systems that use synthetic audio, video, text or image data.

 

The systems mentioned above must be designed in such a way that their output is recognisable to users as artificially created or artificially manipulated.

 

iv) Systems with minimal or no risk

Systems with minimal or no risk can be developed and used in compliance with generally applicable law; no additional legal obligations are introduced for them. Providers of such systems can voluntarily commit to codes of conduct. This category includes, for example, AI-supported video games, spam filters or AI used to sort documents in offices.

 

3. Sanctions

The AI Act will provide for substantial fines. The following fines, among others, are to be expected[2]:

A fine of up to EUR 35 million or 7% of annual global turnover is envisaged for violations of the prohibited applications or non-compliance with data and data governance requirements.

Furthermore, a fine of up to EUR 15 million or 3% of annual global turnover will be imposed for violations of other requirements or obligations under the Regulation, including violations of the provisions on GPAI models.

Finally, companies will be fined up to EUR 7.5 million or 1.5% of annual global turnover for providing false, incomplete or misleading information to notified bodies and competent national authorities.
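
To make the cap mechanism concrete, here is a minimal sketch of the "higher value" rule described in footnote [2]; the tier figures come from the paragraphs above, while the function name, variable names and the example turnover are purely illustrative assumptions:

```python
def fine_cap(fixed_eur: int, turnover_share: float, annual_global_turnover_eur: int) -> float:
    """Return the applicable fine cap: the higher of the fixed amount and
    the given share of annual global turnover (the rule in footnote [2])."""
    return max(fixed_eur, turnover_share * annual_global_turnover_eur)

# Hypothetical company with EUR 600 million annual global turnover.
turnover = 600_000_000
print(fine_cap(35_000_000, 0.07, turnover))   # prohibited practices tier -> 42,000,000
print(fine_cap(15_000_000, 0.03, turnover))   # other obligations / GPAI  -> 18,000,000
print(fine_cap(7_500_000, 0.015, turnover))   # misleading information    ->  9,000,000
```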

4. What do Swiss companies need to do?

Swiss companies that develop or use an AI system are advised to carefully monitor when the AI Act comes into force and to prepare accordingly.

As a first step, companies should check whether the AI Act applies to them. This is the case if they offer their AI system on the EU market, if their AI system is deployed, imported or used in the EU, or if the output of their AI system is used in the EU. If so, the second step is to check which risk category the AI system falls into.

 

If a Swiss company uses AI systems that pose unacceptable risks, it must discontinue them within six months of the AI Act coming into force. For AI systems with limited risk, companies must ensure that they comply with the transparency obligations and other obligations by the time the AI Act comes into force.
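
As a rough illustration of this two-step test, the sketch below encodes the applicability criteria and risk categories described in this briefing; all function and variable names are hypothetical, and the sketch is a thinking aid, not a substitute for a legal assessment:

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited; discontinue within 6 months
    HIGH = "high"                  # mandatory requirements + conformity assessment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

def ai_act_applies(offered_in_eu: bool,
                   deployed_imported_or_used_in_eu: bool,
                   output_used_in_eu: bool) -> bool:
    """Step 1: the AI Act applies if any of the three criteria is met."""
    return offered_in_eu or deployed_imported_or_used_in_eu or output_used_in_eu

# Step 2: if applicable, determine the risk category and its obligations.
if ai_act_applies(offered_in_eu=True,
                  deployed_imported_or_used_in_eu=False,
                  output_used_in_eu=False):
    category = RiskCategory.LIMITED  # e.g. a customer-facing chatbot
    print(f"AI Act applies; risk category: {category.value}")
```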

When dealing with high-risk AI systems, Swiss companies should in particular work through the following checklist:

 

Checklist for high-risk AI systems (answer Yes/No for each item):

  • Is AI governance in place?
  • Are information notices in place to ensure compliance with transparency obligations?
  • Is there a procedure in place to ensure data quality and governance in the training of AI systems?
  • Is a risk management system in place, or does an existing one need to be expanded?
  • Are cyber security measures in place? If so, they should be reviewed and updated if necessary.
  • Are conformity assessment procedures in place? If so, existing sector-specific conformity procedures should be expanded if necessary.
  • Has a procedure for carrying out a fundamental rights impact assessment been introduced? It is possible to build on previous experience with data protection impact assessments.
  • Has a procedure been put in place to ensure human oversight?
  • Is the handling of high-risk AI systems technically documented?
  • Has the AI system been registered with the relevant authorities?
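
Teams that track compliance work in code could represent the checklist above as a simple data structure; this is merely an organisational aid under assumed, hypothetical names, not an interpretation of the Act:

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    satisfied: bool = False

# The high-risk checklist from this briefing, condensed into trackable items.
HIGH_RISK_CHECKLIST = [
    ChecklistItem("AI governance in place"),
    ChecklistItem("Transparency information notices in place"),
    ChecklistItem("Data quality and governance procedure for training"),
    ChecklistItem("Risk management system in place or expanded"),
    ChecklistItem("Cyber security measures reviewed and updated"),
    ChecklistItem("Conformity assessment procedure in place"),
    ChecklistItem("Fundamental rights impact assessment procedure"),
    ChecklistItem("Human oversight procedure"),
    ChecklistItem("Technical documentation of high-risk systems"),
    ChecklistItem("Registration with the relevant authorities"),
]

open_items = [item.question for item in HIGH_RISK_CHECKLIST if not item.satisfied]
print(f"{len(open_items)} open items:", open_items)
```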

 

5. Developments in Switzerland

On 22 November 2023, the Federal Council decided to evaluate the regulation of AI in Switzerland. The aim is to ensure that the potential of AI can be utilised while reducing risks such as discrimination or misinformation. Possible approaches to regulating AI in Switzerland are to be identified by the end of 2024. Swiss companies are therefore advised to follow the current development of the AI Act and make appropriate adjustments where necessary, particularly since Swiss policymakers are increasingly engaging with the regulation of AI and can be expected to follow the requirements of the AI Act.

6. Conclusion

What happens next? The AI Act still needs to be formally adopted in order to become law. It is expected to come into force in the first quarter of 2024 and will become applicable two years after it comes into force. Note, however, that the ban on AI systems with unacceptable risks will take effect six months after entry into force, and the provisions on GPAI after twelve months.
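
These staggered deadlines can be made concrete with a small date calculation; the entry-into-force date below is an assumption chosen purely for illustration, since the exact date was not yet fixed at the time of writing:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for safety)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, min(d.day, 28))

entry_into_force = date(2024, 3, 1)  # assumed Q1 2024 date, illustration only
print("Bans on unacceptable-risk systems:", add_months(entry_into_force, 6))   # +6 months
print("GPAI provisions apply:", add_months(entry_into_force, 12))              # +12 months
print("AI Act fully applicable:", add_months(entry_into_force, 24))            # +24 months
```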

[1] Large systems are able to fulfil a wide range of different tasks, such as video, text and image generation, conversation in natural language, computation or the generation of computer code.

[2] For each tier, the higher of the two values applies. This means that if 7% of annual global turnover is higher than EUR 35 million, the turnover-based amount is used as the fine.
