We launched our guide “A disability-inclusive Artificial Intelligence Act: A guide to monitor implementation in your country” during a webinar on 15 October.
The webinar brought together digital rights campaigners and disability advocates to discuss how to ensure disability-inclusive Artificial Intelligence (AI).
Panellists agreed that:
- The EU’s AI Act is an opportunity for strong laws, but national organisations and activists must be vigilant to ensure it’s well implemented by national authorities.
- Civil society actors and activists need to form strong alliances in all countries to pressure authorities to create strong AI laws that complement the EU AI Act.
- Advocates must be wary of certain provisions, like exemptions for law enforcement and for dangerous systems, and of collaborating with certain groups that divert attention from pressing human rights concerns.
The meeting was opened by Maureen Piggot, our Executive Committee member, who presented the positive and negative aspects of current AI technology.
She shared examples of how AI can empower assistive technology, such as personal voice assistants, and how it can support people with disabilities by making information accessible or providing real-time translation. She then outlined the risks, including how AI can amplify discrimination (for example, through systems that analyse facial expressions), how AI systems can be inaccessible, and how AI can be misused to make people more vulnerable to fraud and disinformation.
She explained that the EU’s Artificial Intelligence Act is a good start towards limiting the risks of AI systems and the exclusion of persons with disabilities, but that it needs to be well applied by the Member States. Our guide will support advocates and activists in putting pressure on authorities to create strong laws.
Our guide to implement disability-inclusive AI
Kave Noori, our Artificial Intelligence Policy Officer, stated that the purpose of this guide is to explain the EU AI Act to disability advocates. The guide also aims to explain how they can urge national authorities to ensure that the rights of people with disabilities are upheld.
He explained that the AI Act classifies systems into different risk levels, and that high-risk systems must be accessible and used transparently, although there are exceptions.
Kave highlighted how advocacy by EDF and digital rights organisations like European Digital Rights and the European Center for Not-for-Profit Law helped include accessibility and transparency requirements in the AI Act. He emphasised the importance of extending these mandatory requirements to “general-purpose” and low-risk AI systems.
Explore the full “A disability-inclusive Artificial Intelligence Act: A guide to monitor implementation in your country”.
National advocacy on AI Act
Karolina Iwanska, a Digital Civic Space Advisor at the European Center for Not-for-Profit Law, explained the EU law on Artificial Intelligence and the timeline to implement it.
She mentioned that the majority of the provisions in the Act will come into force in August 2026, leaving at most two years to influence national efforts. However, some countries have already prepared national legislation.
She recommended contacting national human rights institutions and equality bodies to advocate for their countries to adopt human rights-centred legislation and human rights provisions. She also recommended that different civil society actors organise and advocate for a national advisory body to implement and support the enforcement of AI laws.
She warned against dangerous systems like remote biometric identification such as facial recognition, the broad exemption of the Act for “national security uses”, and the exceptions for law enforcement.
Role of national agencies protecting fundamental rights
Milla Vidina, senior policy advisor at the European Network of Equality Bodies (Equinet), explained that the AI Act is a consumer protection regulation whose obligations are designed to prevent companies from violating human rights.
While individuals have the right to submit a complaint under the AI Act, this right is not very strong. It is, for example, not possible for an organisation to submit a complaint on behalf of an individual or support a person to complain.
Concerningly, the complaint must be made by an identifiable “affected party”; however, that clashes with the very nature of algorithms, where the harm cannot be easily identified.
However, equality bodies have a special role under the AI Act. They have a special mandate to investigate AI systems: they have the right to ask companies that use AI to provide documentation on how an algorithm works. However, this mandate does not extend to border agencies or law enforcement, for example.
There is also a possibility, if national authorities implement it into national laws, for consumer organisations to launch collective complaints.
Organisations of persons with disabilities should explore articles 77 to 79 to understand who can help them enforce the law.
Building alliances at national level
Ella Jakubowska, Head of Policy at European Digital Rights, shared the example of the “AI core group”, a coalition of European organisations protecting digital rights and human rights that worked together to influence the AI Act.
She explained that human rights organisations already have expertise on issues that the AI Act affects, and that the concern of lacking technical capability can often be overcome by partnering with digital rights organisations and academics. She underlined that these coalitions can take the shape of a long-lasting partnership like the “AI core group”, or they can be short-term collaborations on specific topics.
She warned against allying with groups that don’t have rights or justice as their focus. For example, some groups are very focused on cosmetic fixes to AI issues without looking at the underlying power structures, in ways that can overshadow the real issues. Others have claimed that we should only focus on the existential threat that AI poses to humanity, which can distract from the very real problems that people face today.
National implementation of AI law
Mia Ahlgren, from our member the Swedish Disability Rights Federation, shared her experiences with national advocacy and monitoring of AI regulation. She explained that many actors will want to invite civil society organisations and organisations of persons with disabilities to participate; organisations simply need to be careful not to overstretch themselves.
She called for more financing and funding for civil society to get involved in AI issues.
Wouter Bolier, from our Dutch member Ieder(in), shared their experience with a public consultation regarding algorithmic decision-making. He explained how Ieder(in) reached out to the European Disability Forum and organised meetings with the two digital rights organisations based in the Netherlands. The three organisations then submitted a joint response to the consultation.
The experience underlines how national disability organisations, with the EDF guide and the support of digital rights organisations, can bring the perspective of persons with disabilities into broader human and digital rights work on AI.
He explained that this led to authorities being alerted to disability rights issues in AI and considering action on them.