Civil society and EDF react to European Parliament's Artificial Intelligence Act draft Report



Today, 04 May 2022, a number of civil society organisations, including EDF, Access Now, Algorithm Watch, Bits of Freedom, European Digital Rights (EDRi), European Not for Profit Law Center, Fair Trials, Panoptykon Foundation, and PICUM, published a joint statement on the European Parliament’s AI report.

This joint statement evaluates how far the IMCO-LIBE draft Report on the EU's Artificial Intelligence (AI) Act, released on 20 April 2022, addresses the recommendations from the civil society statement 'An EU Artificial Intelligence Act for Fundamental Rights' (published in November 2021).

 

Read the statement

 

Through this statement, we call on Members of the European Parliament to support those amendments which centre people affected by AI systems, prevent harm in the use of AI systems, and offer comprehensive protection for fundamental rights in the AI Act.

More specifically, we call for further amendments to be considered:

  • A cohesive, flexible, and future-proof approach to the ‘risk’ of AI systems

Expand the scope of delegated acts to allow both the list of prohibited practices and the list of high-risk AI systems to be updated. Add to Annex III new biometric systems that pose a high risk to fundamental rights, and expand the number of AI systems classified as high risk in migration management, such as predictive analytics, systems used for monitoring and surveillance in border control, and biometric identification systems.

  • Prohibitions on AI systems posing an unacceptable risk to fundamental rights

Introduce further bans on emotion recognition, on biometric categorisation systems used to track, categorise, and/or judge people in publicly accessible spaces, and on systems which amount to AI physiognomy by using data about our bodies to make problematic inferences about personality, character, or political and religious beliefs.

Add specific prohibitions to Article 5, such as on the use of AI-based individual risk assessment and profiling systems, on AI polygraphs, and on predictive analytic systems used to interdict, curtail, and prevent migration, in order to ensure the protection of migrants and people on the move.

  • Obligations on users (deployers) of high-risk AI systems to facilitate accountability to those impacted by AI systems

Ensure the accountability of users deploying high-risk AI systems by requiring a fundamental rights impact assessment before deployment. Requiring users to conduct and publish an assessment of the likely impact of an AI system on fundamental rights, the environment, and the broader public interest is necessary to provide contextual information about how such systems will affect people and society, including full transparency as to how users intend to mitigate those harms.

  • Consistent and meaningful public transparency

Add an obligation for private entities to register their use of high-risk AI systems, to close the gap in public transparency around how the private sector uses such systems, particularly as they can have significant impacts on people's lives, rights, and well-being, for example in the labour market. Extend public authorities' registration obligation beyond the high-risk category: information on all uses of AI systems by public authorities, regardless of the systems' risk level, should be made public in the EU database.

  • Meaningful rights and redress for people impacted by AI systems

People affected by an AI system should not only be notified that an AI system is in use, but should also receive detailed additional information about the purpose of the system, the rights they have, and where they can find more information about the functioning and logic of the system. The Parliament should also consider extending these basic transparency rights to other systems which affect people but have not necessarily been classified as high-risk.

The co-rapporteurs must include the right to an explanation of individual decision-making, so that people can understand why a system produced a certain prediction or assessment in their individual case and can challenge discriminatory or otherwise harmful outcomes.

The draft report should include a mechanism for participation of public interest organisations in the investigation and enforcement process. The AI Act should ensure a full range of rights and redress mechanisms for people affected by AI systems.

  • Accessibility throughout the AI life-cycle

To truly ensure that AI systems are 'trustworthy' and work for all people, and that accessibility is guaranteed for all AI systems throughout their deployment, the AI Act must include accessibility requirements in compliance with the European Accessibility Act (Directive 2019/882). Accessibility should not be a voluntary measure but should be mandated for the development and deployment of all AI systems.

Accessibility should be mainstreamed throughout the AI Act, including with reference to the information and transparency clauses in the Regulation, the EU database, the rights of natural persons to be notified and seek an explanation, and within any future obligation on fundamental rights impact assessments.

Accessibility should be ensured in all consultation and involvement of rights-holders and civil society organisations when implementing this Regulation.

  • Sustainability and environmental protections when developing and using AI systems

Introduce an obligation for providers and/or users to provide information on the environmental impact of developing or deploying AI systems. Introduce public-facing transparency requirements on the resource consumption and greenhouse gas emissions of AI systems.

  • Improved and future-proof standards for AI systems

Address the risk that some fundamental rights and legal issues may be determined by the standardisation process, meaning substantive decisions could be made in an undemocratic way. The Act needs to set clear and comprehensive political rules on which aspects of the Act will be subject to standardisation, in order to limit the scope of the harmonised standards established in Article 40 to what actually falls within the competence of standardisation organisations.

  • A truly comprehensive AIA that works for everyone

Remove the exemption of AI systems that are part of large-scale EU IT databases from the scope of the AI Act. As outlined by civil society, this exclusion allows such systems to avoid necessary oversight. This is damaging for the fundamental rights of people on the move, as the large-scale IT systems at stake affect almost all third-country nationals, and all of these systems involve, or intend to involve, AI-based systems that would otherwise be classified as 'high-risk' under the Regulation.

It is vital that legislators foreground the concerns of people and issues of fundamental rights in onward negotiations on the AI Act. Moving forward, we, the authors of this joint statement, urge MEPs to be bold in amending the AI Act to safeguard the rights of people and to ensure that AI development and deployment fully respects fundamental rights and democracy.

Read the joint statement on the European Parliament’s AI report (odt)

Read the joint statement on the European Parliament’s AI report (pdf)