Future of technology: ethics guidelines for trustworthy AI

26 March 2019

EDF submitted written feedback on the Artificial Intelligence (AI) High Level Group's Draft Ethics Guidelines for Trustworthy AI. Some of the main points we raise are to truly embrace human diversity when developing AI solutions, always incorporate accessibility, and involve persons with disabilities meaningfully and from the outset. We also point out that procurement of AI must be tackled in this document.

The importance of Artificial Intelligence

Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. AI makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks. There is no doubt that artificial intelligence has changed the way we think about and use technology. AI is used on a daily basis for everything from the simplest everyday tasks to the most complicated scientific needs of humanity. It is AI's promising future, and its capability to generate tremendous benefits for individuals and society, that creates the need for guidelines. We need to ensure that AI does not repeat the mistakes of the past and that it does not increase exclusion. By its nature, AI relies on big data and machine learning for myriad applications. Access to large amounts of data is fundamental for technologies using AI, as machine learning improves as more data is entered.

Why should organisations of persons with disabilities be involved in AI discussions?

The pace of development in artificial intelligence is so fast that we must keep up with it: being aware of these developments is essential to our ability to engage in the discussions and ensure the developments are inclusive and accessible. Artificial intelligence should be developed in a way that does not discriminate against a person on the grounds of disability, but supports the independent living and participation of all persons with disabilities. EDF's feedback provides valuable suggestions, making sure that persons with disabilities and their needs are not overlooked. Lastly, EDF provides some suggested questions on how to verify that an AI system is designed for all.

Why is the European Commission interested in the ethics of AI?

It was the misuse of data in the recent Facebook and Cambridge Analytica scandal that alerted policy makers and highlighted the importance of ethically regulating the use of data with the consent of the user. Now apply this predicament to AI technologies, which require an even greater amount of data processing in order to work at their optimal capacity. Companies are facing increasing public scrutiny, and the European Union has made this a priority, as evidenced by initiatives such as the GDPR. The ethical standards for assessing AI and its associated technologies are still in their infancy. The Artificial Intelligence (AI) High Level Group's Draft Ethics Guidelines for Trustworthy AI is the European Commission's attempt to find answers to questions such as: How do we ensure the ethical and responsible use of AI? How do we raise awareness about such responsibility in the absence of a global standard on AI? And more.

Who are the Guidelines intended for?

The draft ethics guidelines for trustworthy AI is a document that concerns developers, deployers and users of AI technologies, as it sets the basis for fundamental rights and regulations as far as AI technologies are concerned. The guidelines do not aim to act as a substitute for any form of policy-making or regulation, but rather to inform all parties involved that there are fundamental rights and regulations that should not be overlooked.

EDF calls on the Commission experts on Artificial Intelligence to strengthen their Ethics Guidelines

EDF welcomes the development of these Ethics Guidelines, but we underline that we nevertheless need strong legal safeguards to protect the rights of all citizens, including citizens with disabilities, from AI-powered technology that could cause them harm or discriminate against them. This is why we suggest making a clearer distinction between ethics and law, since self-regulation and voluntary compliance with ethics guidelines are not enough to offer reassurance to consumers, including consumers with disabilities.

Read EDF's feedback on the draft Ethics Guidelines for Trustworthy AI.
