In February 2026, we took part in the Code of Equality Conference in Tallinn, organised by the Gender Equality and Equal Treatment Commissioner of Estonia. The event gathered equality bodies, legal experts, technologists, researchers, and civil society representatives from across Europe to explore how to ensure that AI systems uphold human rights and equal treatment. Over two days, participants examined how fairness can be translated from principles into concrete action in both public and private sector AI systems.
A Rights‑Based Vision from Estonia
Our ambition is to become the world’s leading AI country among free societies.
— Liisa‑Ly Pakosta, Minister of Justice and Digital Affairs of Estonia
In her opening address, Liisa‑Ly Pakosta, Minister of Justice and Digital Affairs of Estonia, outlined why Estonia deliberately links digitalisation and justice under the same ministry. She emphasised that technological leadership must go hand in hand with strong protections for rights, freedoms, and public trust.
Her speech highlighted real‑world examples where Estonia has reviewed and corrected discriminatory patterns in automated systems, showing how legal safeguards and technical innovation can support each other. She also underlined that Estonia draws clear boundaries around how AI may be used: systems with surveillance potential will not be used to punish people for minor behaviour, as trust is essential for a functioning digital society.
Keynote Reflections: From Ethical Principles to Practical Fairness
Prof. Brent Mittelstadt, Professor of Data Ethics and Policy at the University of Oxford, delivered a keynote examining how fairness, transparency, and accountability can be put into practice. He explained that fairness has no single definition; it is a concept shaped by social values, legal traditions, and democratic choices.
His keynote also highlighted the risks of relying solely on mathematical fairness metrics without considering context. He encouraged policymakers and developers to align technical tools with Europe’s broader non‑discrimination framework, ensuring that fairness is meaningful in real‑world scenarios.
Panel Discussion: Is Algorithmic Fairness Possible?
The second day’s first panel brought together government, industry, and research experts to analyse whether AI systems can ever truly be fair, and what can realistically be done to improve them. The session was moderated by Hendrik Roonemaa, a public‑sector communication strategist and technology commentator.
Panel Speakers
- Marju Purin, Senior Machine Learning Scientist, Microsoft
- Dr. Liina Kamm, Senior Researcher, Cybernetica
- Ott Velsberg, Government Chief Data Officer, Ministry of Justice and Digital Affairs
The discussion explored how bias enters AI systems through data, design choices, and societal inequalities. Panelists described emerging tools for identifying and mitigating bias, the importance of transparency around automated decisions, and the need for continuous monitoring. They also emphasised that fairness is not solely a technical issue: it requires organisational responsibility, clear legal frameworks, and sustained public engagement.
Workshops: Policy, Practice, and Participation
The breakout sessions provided space for more focused discussions on topics including fairness metrics, legal accountability, and inclusive AI design.
EDF participated in Session C: “Designing with People – Civil Society and Inclusive AI”, hosted by the Estonian Equality Commissioner’s Office and moderated by Allar Laaneleht, AI Project Manager at the Ministry of Justice and Digital Affairs.
Session C Speakers
- Kathinka Theodore Aakenes Vik, Senior Advisor, Equality and Anti‑Discrimination Ombud (Norway)
- Kave Noori, AI Policy Officer, European Disability Forum (EDF)
- Katrin Nyman‑Metcalf, Adjunct Professor, Tallinn University of Technology and Chair of the Board of the Estonian Human Rights Centre
In this session, participants explored how inclusive design and meaningful public participation can strengthen fairness in AI. EDF contributed insights on the importance of involving people with lived experience when assessing how AI systems affect different communities. We also highlighted the need for sustainable financing so that organisations of persons with disabilities can engage consistently with policymakers and developers.
EDF shared practical tools, including our infographics on inclusive and accessible digital systems, which support policymakers and developers in identifying exclusion risks early in the design process.
The round table discussions emphasised that AI should support — not replace — human responsibility and judgement. Participants also noted the risk of placing too much of the burden for explaining fairness needs on marginalised groups, particularly when they are already under-resourced.
One idea that gained support was the need to build stronger infrastructure to educate and empower representatives from these groups, enabling them to contribute both as end users and as co-designers. Ensuring proper recognition and fair compensation for their time and expertise was seen as essential for meaningful participation.
Together, these measures can help create AI systems that are more equitable, inclusive, and accountable.