What is the Artificial Intelligence Act?

The Artificial Intelligence (AI) Act is a proposal by the European Commission to regulate AI that is developed or used in Europe. The Act aims to ensure that AI respects human rights, including human safety, privacy, non-discrimination, transparency, human oversight, and social and environmental well-being.

The AI Act also focuses on consumer protection and “regulation of the market”. When the European Commission proposed the AI Act, it emphasised that clear and predictable rules would encourage innovation. The Commission felt that if the EU can provide legal certainty, companies will have more confidence to invest time and money in developing products, because they would no longer have to worry about whether their products might run into trouble with the law in the future.

This legislation has been under discussion for some time. The European Parliament adopted its negotiating position in June 2023. Its position proposes better protection of human rights, such as mandatory accessibility for high-risk AI systems, a ban on the use of real-time facial recognition in public places, and a ban on the use of emotion recognition in highly sensitive situations (e.g. when used by employers, schools, police or migration authorities).

The European Union’s two legislative bodies (the European Parliament and the Council of the European Union) are now negotiating the final text of the law. It is important to keep this in mind, as some of the points described in the rest of this article may change.

But why is this law important? In this article, we will explore its possible impact on disability rights.


Impact on disability rights: accessibility

Keyboard with the word accessibility and a wheelchair sign overlaid

One of the main demands of the European Disability Forum was to ensure that tools and systems that use AI – and tools to develop AI systems – are accessible. This would allow users with disabilities to access these tools, make it easier for developers with disabilities to contribute, and enable activists to assess the risks.

Following our suggestion, the European Parliament introduced a new provision in its position requiring high-risk AI systems to be subject to mandatory accessibility requirements.

We are now advocating for medium- and low-risk AI systems to be subject to these mandatory requirements as well.

Impact on disability rights: risks

The Act classifies AI systems into four categories based on the severity of the risks they pose to fundamental human rights. Let us examine the different categories and the restrictions associated with them under the proposed AI Act.

Unacceptable Risk

Some uses of AI are considered so dangerous that they must be prohibited outright. In these cases, it is not enough to introduce strict rules limiting the use of the technology. These uses include social scoring, which would violate basic human rights and human dignity.

For example, an AI social scoring system could try to determine whether you are a “good citizen”. It might analyse the kind of food you eat or whether you buy a lot of alcohol at the supermarket to calculate your “trustworthiness” or “health”. Based on your lifestyle, such a system could recommend that you pay a higher insurance premium, or advise a potential employer on whether you should be hired.

The European Commission suggested that the law should also restrict the use of real-time facial recognition in public spaces by law enforcement unless it is “necessary”. This is one of the most contentious points.

High Risk

Portrait of a young black man being analysed by a face recognition AI system

One feature of the AI Act is that it defines certain systems as high-risk. These include systems used to decide who gets called for a job interview, who is accepted to certain university programmes, whether applications for government allowances are approved or, for example, whether a bank loan should be granted.

It is important to remember that the classification of AI systems into different risk categories is a political decision, and politicians have expressed different ideological views on this classification. 

The Act proposes strict regulation for the use of high-risk systems.  

In addition, providers of high-risk AI systems must provide technical documentation before the system is placed on the market. Ideally, most of these uses will be forbidden – we especially called for bans on facial recognition used by the police or during job interviews. But if they are allowed, it is crucial that the manual of any facial recognition software used by the police clearly informs the user of the limitations of the technology.

Facial recognition software has a higher error rate for certain marginalised groups, such as persons with disabilities that affect their physical appearance and people with darker skin colour. To reduce the risk of the police arresting the wrong person, it is very important that the manual warns the police that the technology is unreliable for marginalised groups, so that they are extra careful, carry out additional manual checks, and do not simply rely on a facial recognition system that identifies a person with a disability as a possible match.

High-risk AI solutions should also be designed to be monitored by humans so that problems within the systems can be detected. This is to ensure that the systems are accurate, robust and secure.

Medium Risk

A blue angry smiley face in the middle of a group of white happy cubes

Medium-risk systems are systems that politicians consider not very dangerous. These include, for example, systems that try to analyse emotions or determine which demographic group you belong to.

The proposed regulation imposes only minimal obligations on medium-risk AI systems.

Medium-risk AI systems can also include emotion recognition or biometric categorisation systems that do not identify individuals (they can categorise you according to your gender identity, eye colour, age, ethnicity or other characteristics), as well as “deep fake” AI systems that create or manipulate content. Providers must inform users when these systems are used to respond to them or to provide a service, unless the systems are used by law enforcement.

Low Risk

Low-risk AI systems are systems that politicians believe do not pose a threat to human rights. These systems still have to comply with other laws, such as the General Data Protection Regulation, but the AI Act will not introduce additional rules for them.

The proposed AI Act imposes no obligations on low-risk AI systems, such as spam filters. However, the Act proposes creating codes of conduct under which providers of medium- and low-risk AI systems can voluntarily follow rules similar to the mandatory rules for high-risk AI systems.

“Foundational models” – a new category?

a row of robots with headphones on their heads

The European Parliament introduced a new risk category that was not included in the European Commission’s original proposal – foundational models. 

Foundational models are AI solutions that anyone can take and integrate into their own app. Let’s say a maths teacher wants to create an app to help students with homework. The teacher can teach the app about maths, but the app still needs to process human language to chat with the students. That’s where foundational models like GPT-4 or LLaMA come in – they already process human language, so they can be used as the base for the chat feature in the app. This way, the teacher can focus on developing the maths part, which is their area of expertise.
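To make this division of labour more concrete, below is a minimal sketch in Python of how such an app might hand the conversation over to a foundational model. Everything in it, including the function ask_language_model, is a hypothetical placeholder rather than a real model API; in practice the teacher would call whichever foundational model service they had chosen.

```python
# A minimal sketch (not a real product): a maths homework app that
# delegates the "chat" part to a foundational model, while the app
# itself supplies the maths knowledge.
# `ask_language_model` is a hypothetical placeholder, not a real API.

def ask_language_model(prompt: str) -> str:
    # In a real app this would call a foundational model service
    # (for example a hosted GPT-4 or LLaMA endpoint). Here we just
    # return a canned reply so the sketch runs on its own.
    return f"[friendly explanation based on: {prompt}]"

def solve_maths(question: str) -> str:
    # The teacher's own expertise: a tiny, app-specific maths routine.
    if "7 x 8" in question:
        return "7 x 8 = 56"
    return "I do not have a worked solution for that question yet."

def homework_chat(student_message: str) -> str:
    # The app works out the maths itself, then asks the foundational
    # model to turn the answer into a conversational explanation.
    worked_answer = solve_maths(student_message)
    prompt = (
        "You are a patient maths tutor. Explain this answer to a "
        f"student in simple language: {worked_answer}"
    )
    return ask_language_model(prompt)

if __name__ == "__main__":
    print(homework_chat("What is 7 x 8?"))
```

The point of the sketch is simply that the app supplies the maths, while the foundational model supplies the conversation.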

Under the Commission’s original proposal, chatbots like ChatGPT would fall under medium risk. For instance, if you are booking a flight ticket and chatting with customer care, you would have the right to know whether you are talking to a chatbot instead of a human attendant.

The release of ChatGPT led the European Parliament to re-evaluate this approach, resulting in stricter control measures.

When ChatGPT “took the world by storm”, members of the European Parliament realised both its potential and its dangers and wanted to classify these technologies as high risk. However, after negotiations – and intense lobbying from the companies supplying these models – it now seems likely that foundational models will be categorised as medium risk, but with extra obligations.

Human rights and privacy organisations are still advocating for foundational models to be classified as high risk, but given this lobbying, it is unlikely to happen.

Why should foundational models be on the high-risk list?

Artificial intelligence can be trained on a very wide variety of data, resulting in what is called a foundational (or base) model that can be adapted to solve a range of tasks. For example, a travel company might use a foundational model to deploy a chatbot that is fine-tuned with company-specific documents to understand and respond to customer queries.

However, foundational models, like other AI, can have weaknesses such as bias and a tendency to make up information. These flaws can be repeated in any product or service built on the foundational model, potentially exposing the travel company to legal liability. For example, if a bias in the foundational model causes a customer support chatbot to treat customers in a discriminatory way, the travel agency could face legal consequences.

Danger still ahead

Someone pointing at a paper with the design of a mobile app

In September 2023, more than 100 civil society organisations called on the negotiators of the AI Act to close major loopholes that could be used to “misclassify” AI systems. The negotiators are considering exemptions that would introduce a dangerous loophole into the rules for high-risk AI. For example, a company producing AI to be used in a situation normally considered high-risk, such as recruitment or policing, could exempt itself from the rules.

In practice, this would mean that the company developing an AI system would be given the power to carry out a self-assessment instead of having to seek approval from a supervisory authority. The company would decide whether its system posed a risk to human rights. The organisations feared that this would significantly weaken the ambitious rules designed to protect people from harm caused by high-risk AI. Therefore, they called on negotiators to revert to the European Commission’s original proposal.

AI Act Timeline

  • April 2021 – Commission’s proposal
  • December 2022 – Council position
  • June 2023 – Parliament position; negotiations start

The final agreement is planned for before the end of 2023.
