The EU's Artificial Intelligence Act

We follow the work on artificial intelligence done by the European Union. In 2024, the EU adopted its first major legislation on AI. The law was adopted on 13 June 2024 and entered into force on 1 August 2024, starting a countdown: different parts of the AI Act will apply from specific dates.

The AI Act is an EU regulation that aims to establish rules and standards for artificial intelligence systems in the EU. Its main objectives are to protect human rights, ensure public safety and promote trust and innovation in AI technologies.

A lot will happen in the next two years. Different parts of the AI Act will come into force at different times. The EU Commission will also adopt so-called delegated acts, which are binding rules, as well as non-binding guidelines that help interpret the AI Act. Politicians in EU countries must also take many decisions to prepare their countries for the AI Act.

The EDF will support our members by developing tools to advocate for strong human rights protection in their EU countries and to ensure that guidelines at EU level are inclusive, especially for persons with disabilities. We will also monitor other EU initiatives on artificial intelligence and see how they may impact persons with disabilities.

Timeline

  • 1 August 2024: AI Act enters into force.
  • October 2024: EDF publishes its toolkit on the AI Act.
  • 15 October 2024: EDF workshop on the AI Act.
  • 2 February 2025: Prohibitions on banned AI practices enter into force.
  • February 2025: Member states can decide on national laws that permit the use of live facial recognition in public spaces.
  • August 2025: Rules on “general-purpose AI” enter into force.
  • August 2026: Most rules in the AI Act come into force.
  • August 2027: Remaining rules on high-risk AI come into force.

The timeline is based on documents published by Kai Zenner on Member state responsibilities and implementation outlines under the AI Act.

Content of the Act

Risk levels

The AI Act divides AI systems into four risk categories.

  • Unacceptable risk
  • High risk
  • General purpose AI (Medium or Limited risk)
  • Low or no risk

Thanks to our advocacy efforts during the negotiations, there is now a provision in the AI Act on accessibility.

Mandatory accessibility for AI that is high-risk

This requirement is now written into Article 16. It ensures that no high-risk AI system can be deployed in the EU unless it meets accessibility requirements, so that persons with disabilities are not excluded or discriminated against. The AI Act refers specifically to the European Accessibility Act and the Web Accessibility Directive.

Forbidden to take advantage of someone’s disability

Article 5(1)(b) prohibits the use of artificial intelligence to exploit people's vulnerabilities, for example because they have a disability or live in extreme poverty.

This does not refer to a specific type of AI, but to undesirable behaviour by different types of AI. It covers systems designed to manipulate, but the rule also applies when an AI behaves manipulatively without its creator intending it.

One example is a personal health assistant designed to support individuals managing a chronic condition. If the assistant exaggerates the risks associated with the condition and pushes the user into buying unnecessary medical treatments or expensive monitoring devices, it could be exploiting the user's situation.

Another example: you have limited mobility and use a virtual personal assistant like Alexa to control your automated home (for example to close the door, switch on the lights, or order products to be delivered to your home). If you use this device daily, you could be taken advantage of if it recommends premium services or home-automation products that you don't actually need.

There is a risk that you will be manipulated to buy things you don’t need, or that you will buy a version that is more expensive than you need. If you interact a lot with an AI, it gets a lot of data about you. It can make assumptions about your health, your goals and your fears.

The purpose of banning this type of commercial practice is to prevent AI from using your fears to persuade you to do something you don't want or that is bad for you. This rule helps people with disabilities maintain autonomy and control over our lives, even as we increasingly rely on AI and smart technology.

Transparency for chatbots and AI-generated content

The AI Act requires companies developing chatbots for direct interaction with individuals (such as when contacting an airline or asking a question to a government authority) to make it clear to users that they are communicating with a machine. Additionally, companies using AI to create or edit text, video, images, audio, or deep fakes must inform users that the content was produced by AI. This notification must also comply with accessibility standards (Article 50.5).

EU-database of high-risk AI systems

The AI Act requires high-risk AI systems to be registered in a public database maintained by the European Commission and EU member states for transparency purposes. This database is valuable for journalists and human rights organisations investigating the use of high-risk AI. The database itself must be accessible to persons with disabilities (Article 71.6).

However, the database will include a restricted section for AI systems used for law enforcement and migration authorities. These systems will be harder to scrutinise since access will be exclusive to authorities in EU Member States and the European Commission.

Codes of conduct

The AI Act encourages companies that produce and use general purpose or low-risk AI systems to voluntarily follow codes of conduct. These codes draw inspiration from the strict rules for high-risk AI, and companies can use them as guidelines. The Act states that the EU Commission’s AI Office and member states should encourage creating such codes to assess and prevent negative impacts on people with disabilities and other groups at risk of marginalisation (Article 95.2.e).