Understanding Artificial Intelligence – and how it affects the disability community.

Have you ever wondered how artificial intelligence (AI) could impact the lives of people with disabilities? In this series, we will unravel the world of Artificial Intelligence and how it impacts you – as an individual and as a person with disabilities.

Join us in our journey to ensure that the development and implementation of AI take into account the interests of everyone – including us, persons with disabilities.

The AI journey with EDF

The most recent “star of the show” is ChatGPT – a chatbot that can write essays, speeches, jokes, and even computer code. It is the latest example of how AI creates buzz and excitement – but also concerns.

Let’s take the example of ChatGPT. A chatbot like this has two main parts:

  • A computer program that can “understand” human language.
  • An easy-to-use interface that people can use to interact with the computer program.

Contrary to what some might think, tools like this don’t actually understand human language. They work based on a special kind of computer program called a “large language model.” This program turns words and sentences into numbers and patterns that computers can understand.

For example, if you give the language program 10,000 books about being a good kindergarten teacher, it will learn that words like “kindergarten,” “children,” “parents,” “field trip,” “fun,” and “development” are often used together. The program doesn’t really know what these words mean, but it can see patterns in the way they are used.
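
To make this idea of “patterns” a bit more concrete, here is a tiny, purely illustrative Python sketch. This is not how ChatGPT actually works; it only counts which words appear together in a few invented sentences, which is the same basic idea on a much smaller scale.

    # Toy illustration (not ChatGPT's real method): counting which words
    # appear together in a handful of example sentences.
    from collections import Counter
    from itertools import combinations

    corpus = [
        "the kindergarten children enjoyed the field trip",
        "parents said the field trip was fun for the children",
        "play and fun support the development of young children",
    ]

    pair_counts = Counter()
    for sentence in corpus:
        words = sorted(set(sentence.split()))   # unique words in this sentence
        for pair in combinations(words, 2):
            pair_counts[pair] += 1               # words seen together are counted

    # Word pairs with high counts are the ones "often used together"
    for pair, count in pair_counts.most_common(5):
        print(pair, count)

The program never learns what “children” or “field trip” mean; it only records how often the words occur near each other.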

Although human language is expressive and dynamic, it still follows grammatical rules. When an AI is trained on millions of texts written by humans, it recognises patterns in the data on its own. So there is no need for a human to intervene and tell the AI where the word “but” can and cannot appear in an English sentence. The AI will pick this up from its training data as a statistical relationship between “but” and other words in the English language. This statistical relationship will probably show that “but” is almost never at the end of a sentence but very often in the middle of one.
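
As a small, hypothetical illustration of such a statistical relationship, the Python sketch below counts where the word “but” appears in a few invented example sentences: at the start, in the middle, or at the end.

    # Toy sketch: where does the word "but" tend to appear in a sentence?
    sentences = [
        "I wanted to come, but I was too tired.",
        "The plan was good, but the budget was small.",
        "But the results surprised everyone.",
    ]

    positions = {"start": 0, "middle": 0, "end": 0}
    for sentence in sentences:
        words = [w.strip(".,").lower() for w in sentence.split()]
        if "but" not in words:
            continue
        index = words.index("but")
        if index == 0:
            positions["start"] += 1
        elif index == len(words) - 1:
            positions["end"] += 1
        else:
            positions["middle"] += 1

    print(positions)  # for these sentences: {'start': 1, 'middle': 2, 'end': 0}

With millions of real sentences instead of three invented ones, counts like these become the statistical patterns the model relies on.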

Illustration of a brain with several wires of different colours attached, symbolising the information that large language models turn into patterns. On top it says LLM, large language model.
Large language models detect patterns in large streams of data – and replicate them.

These patterns are the foundation of the language model. When you talk to a chatbot like this, it will try to answer your questions using what it has learned. Sometimes it will be right, and sometimes it will be wrong.

Researchers gave the chatbot a lot of information – taken from the internet – to teach it: millions of webpages and research reports. This helps the chatbot learn patterns across many different topics – that’s why it’s called a “large language model.”

The chatbot can seem intelligent because it can answer so many questions. But in reality, it only uses patterns that it has learned from content created by humans. It does not really “think” for itself.

However, teaching the chatbot took a lot of data and computer power – only really powerful computers, often owned by governments or big companies, can run these chatbots.

AI is not magic

Computer programmes have become so advanced that it is tempting to think artificial intelligence is some kind of magic. However, there can be very unpleasant consequences if we start treating computers as if they have some kind of human-like intelligence, or as if they are too smart for us to understand.

For example, members of marginalised groups – such as persons with disabilities – may be afraid to campaign for inclusive AI, because they feel that they must have a computer science degree in order to speak out.

If we adopt this mindset, we are doing ourselves and the disability community a big disservice.

Any person who wants to design an AI system should consult persons with disabilities and ensure that their datasets are disability-inclusive and sufficiently varied – which can actually be a great challenge.

That is why, especially now that society is deciding on the rules for the future use of AI, it is really important for disability organisations to advocate for better rules for people with disabilities.

Unsupervised learning

Another thing that many people do not realise is that there is a lot of manual work behind successful AI systems. When you train an AI, you give it a lot of data without telling the computer what that data means. This is called unsupervised learning and helps the AI recognise connections between words and draw conclusions on its own, without human intervention. However, in order for the AI model to become more accurate and produce relevant results, it is necessary for humans to check and correct the model.
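
As a rough, hypothetical sketch of this difference, the Python example below uses the scikit-learn library to group toy data into two clusters without being told what the groups are. Real language models are trained very differently, but the “no labels during training” idea is similar – and, as described above, humans still have to check and correct the results afterwards.

    # Hypothetical sketch of unsupervised learning: the algorithm groups data
    # on its own, without any human-provided labels.
    import numpy as np
    from sklearn.cluster import KMeans

    # Toy data: each row describes an item with two numbers
    data = np.array([
        [1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one natural group
        [8.0, 8.2], [7.9, 8.1], [8.3, 7.8],   # another natural group
    ])

    model = KMeans(n_clusters=2, n_init=10, random_state=0)
    labels = model.fit_predict(data)           # no labels were given to the model

    print(labels)  # the algorithm found the two groups by itself
    # In practice, humans then review and correct the model's output,
    # which is the manual (and often low-paid) work described above.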

Illustration of a smart robot vacuum cleaner in a modern living room.
Robot vacuum cleaners are one of the many appliances that can use AI. Problems such as invasion of privacy can arise from this.

In the case of ChatGPT, its dataset was cleaned for several months by remote workers in Kenya (who reportedly worked for very low wages). The MIT Technology Review also recently published an article about training robot vacuum cleaners that highlights the amount of manual work required to make artificial intelligence systems function correctly. The article tells the story of a robot vacuum cleaner manufacturer that hired an outside company to classify pictures taken by the robot in a tester’s home. However, it was discovered that the external workers – including low-paid workers in Venezuela – were also viewing and labelling pictures of sensitive moments, including a woman sitting on the toilet. Even more disturbingly, it was found that screenshots of these private moments were being shared in closed Facebook groups. This incident underscores the problems created by the lack of oversight or regulation.

Why disability organisations should care about AI

This first step on our journey shows several reasons for organisations of (and for) persons with disabilities to care about the development of AI systems. These systems and models can become disability-inclusive with time, but we need strong laws to protect us against discrimination by them. This discrimination can come from how AI systems are trained, or from how they are used.

A diverse group of people speaking with each other in an office
Persons with disabilities and from other marginalised groups must be involved in the development of AI systems.

For example, an AI tool used by child welfare agencies is under scrutiny for possibly discriminating against parents with disabilities. As the algorithm and dataset are not publicly available, activists cannot know for sure – and this is another problem in itself.

In our next blog posts, we will talk about the EU AI Act – the legislation that may protect us. And, because knowledge is power, we will also outline the different types of AI – and how their models and algorithms are created.

Conclusions

  • Artificial intelligence systems are not “intelligent” – they are trained with human supervision and data.
  • Persons with disabilities must be involved in discussions to ensure the training does not lead to discrimination.
  • We don’t need to be experts in computer science to get involved and raise our concerns.

About the AI and Disability Series

This article has been written by Kave Noori (EDF Artificial Intelligence Policy Officer), as part of the “AI and Disability Series”.

The “AI and Disability Series” is supported by the European Fund for Artificial Intelligence and Society, funder of the Disability Inclusive AI project. The series aims to take you through the progress of AI, the EU’s policy perspectives, and how we ensure that AI empowers people with disabilities and protects us from harm or discrimination.