Training an Artificial Intelligence



In our last article, we explored some of the technologies behind what we call Artificial Intelligence (AI). In this article, we will look at how their creators and developers train these technologies, teaching them to do what they do.

How do we teach the AI?

Despite all the fancy explanations, computers, at bottom, just do maths. It is simply that today's computers can do a great deal of maths very fast.

Since AI is all about mathematics, training an AI is the process of translating the human world into numbers and equations. Without this translation, the AI cannot understand how the human world works or what humans expect from it.
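To make this concrete, here is a toy sketch in Python. The applicants, their features, and the `encode` function are all hypothetical illustrations, not part of any real AI system: the point is simply that before a computer can "reason" about people, facts about them must first become numbers.

```python
# Hypothetical example: before any training can happen, human facts
# must be encoded as numbers. Here two imaginary job applicants are
# turned into lists of numbers (feature vectors).

applicants = {
    "Alice": {"years_experience": 5, "speaks_french": True},
    "Bob":   {"years_experience": 2, "speaks_french": False},
}

def encode(person):
    """Translate human facts into a list of numbers the computer can use."""
    return [person["years_experience"], 1.0 if person["speaks_french"] else 0.0]

vectors = {name: encode(p) for name, p in applicants.items()}
print(vectors)  # every applicant is now just numbers
```

Everything the AI will ever "know" about Alice and Bob is contained in those few numbers; whatever the encoding leaves out, the AI cannot take into account.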

Criticism and Responsible Use

Criticism of how AI is used in certain contexts is often mistaken for opposition to technology itself. That is usually not the case. In most cases, it reflects concerns about shortcomings in human judgment and leadership, and a demand that AI be used in a way that respects us and improves our quality of life.

The problem is that the designers of an AI sometimes lack sufficient knowledge about the needs of minorities, such as persons with disabilities, and therefore cannot ensure that this information is properly translated into maths. At other times, humans are simply too complex and diverse to be accurately represented in an equation.

Challenges and Representation

This means that when an AI is considered unsafe, or is used in an unsafe way, humans are most likely at fault, not the technology. It is humans who decide what data to use to train the AI, how much to test it before release, when to consider it ready for the market, and whether it is appropriate to use the technology in a specific situation.

Humans train Artificial Intelligence by giving it large sets of data that contain examples of what humans want and how our world works. If the people who develop the AI do not carefully select their data to ensure it reflects human diversity, the numbers in the AI's memory will not accurately describe people and our society. That leads the AI to reach bad solutions.
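A toy sketch can show how this goes wrong. Everything below is hypothetical: an imaginary "model" learns the average typing speed of the people in its training data and flags anyone far from that average as suspicious. Because the training data includes no one who uses assistive technology, a slower typist is wrongly flagged.

```python
# Hypothetical, unrepresentative training data: typing speeds (words
# per minute) collected only from non-disabled typists.
training_speeds = [60, 65, 70, 62, 68]
learned_average = sum(training_speeds) / len(training_speeds)  # 65 wpm

def is_flagged(speed, tolerance=15):
    """Flag a user whose typing speed is far from what the model has seen."""
    return abs(speed - learned_average) > tolerance

print(is_flagged(64))  # a typist like those in the training data -> False
print(is_flagged(25))  # a switch-access user at 25 wpm -> True (wrongly flagged)
```

The maths works exactly as designed; the harm comes from the human choice of training data, which taught the model a narrow idea of what "normal" looks like.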

Importance of Inclusion

The more we rely on AI to solve social problems, the more important it becomes for teams that develop AI to include people with social and psychological expertise. It is equally vital to meaningfully include people with lived experience of disability and other marginalised groups who will be affected by the AI system, and who are often underrepresented in data: Roma people, racialised people, queer people and, especially, people who live at the intersection of multiple forms of marginalisation.

AI Education and Future Focus

If you would like to know more, we recommend a series of educational content on AI called Crash Course. It was produced by a public service TV organisation in the United States (US) called the Public Broadcasting Service (PBS).

Our next series of articles will start focusing on legislation regarding Artificial Intelligence. Meanwhile, sign up to receive updates to our “AI & Disability” Series. 

About the AI and Disability Series

This article has been written by Kave Noori (EDF Artificial Intelligence Policy Officer), as part of the “AI and Disability Series”.

The “AI and Disability Series” is supported by the European Fund for Artificial Intelligence and Society, funder of the Disability Inclusive AI project. The series aims to take you through the progress of AI, the EU’s policy perspectives and how we ensure that AI empowers people with disabilities and protects us from harm or discrimination.