The European Digital Education Hub hosted a workshop on explainable artificial intelligence (AI) in education on 17-18 October 2024 in Brussels.
The event brought together 31 AI experts from diverse backgrounds in education and policymaking to explore the challenges and opportunities of explainable AI systems in classrooms and to develop actionable recommendations.
What is explainable artificial intelligence in education?
Explainable AI (also referred to as XAI) in education refers to AI systems that make clear how their decisions and recommendations are reached in educational contexts. Unlike traditional AI, which can be a ‘black box’ with hidden processes, XAI exposes its decision-making steps.
This transparency helps students, educators and administrators to better understand the AI’s reasoning. By making AI decisions more transparent, XAI can support fairer, more effective and more inclusive educational experiences.
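To illustrate what this transparency can look like in practice, below is a minimal, hypothetical sketch (not from the workshop): a simple decision-tree model whose learned rules can be printed and read by an educator, in contrast to a black-box model whose reasoning stays hidden. The data and feature names are invented for illustration.

```python
# Minimal sketch of explainability in practice (hypothetical example,
# not from the workshop). A decision tree is inherently transparent:
# its learned rules can be printed and inspected by an educator.
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented toy data: [weekly study hours, attendance rate];
# label 1 = student flagged as needing extra support
X = [[2, 0.6], [10, 0.9], [4, 0.7], [12, 0.95], [1, 0.5], [8, 0.85]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Human-readable rules: anyone can see exactly why a given
# student would be flagged, rather than trusting a black box.
print(export_text(model, feature_names=["study_hours", "attendance"]))
```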
Key insights from the workshop
During the workshop, the participants explored the implications of explainable AI in education. They used real-life cases to test explainability strategies and refine the integration of AI systems in education settings.
The discussions covered:
- the regulatory framework of the AI Act
- the importance of ethics
- the need for AI literacy
They were followed by proposals such as:
- developing an AI literacy framework
- adapting the EU’s SELFIE tool to measure AI transparency
Policy recommendations for integrating explainable AI in education
The key result of the workshop was a set of policy recommendations:
- Clarifying accountability in the use of AI
There is a need for clearer accountability guidelines for AI systems in education. Simplifying complex legal terms and explicitly outlining responsibilities will help educators and students use AI tools.
- Setting standards and explainability scores
Creating an “explainability score” for AI tools could help educators choose systems that are transparent and understandable (a minimal illustration follows this list). Setting such standards would ensure that AI systems are aligned with educational goals and tested for transparency.
- Promoting AI literacy
There is a need to build AI literacy into the education system, equipping both educators and students with the skills to critically engage with AI tools. Policymakers should provide resources and funding for professional training to increase AI literacy.
- Designing ethical, human-centred AI
AI systems should be built on the principles of fairness, transparency and inclusivity. This will ensure that AI tools are designed to support the diverse needs of educators and students.
- Encouraging multi-stakeholder collaboration and co-creation
Partnerships between educators, technology developers and policymakers are essential to co-create AI tools tailored to educational needs.
- Allocating funding for research and development
Governments should increase investment in research on AI applications to ensure continuous innovation and adaptation of AI systems in education.
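To make the idea of an “explainability score” more concrete, here is a minimal, hypothetical sketch of how such a rubric might be computed. The criteria, weights and ratings below are illustrative assumptions only, not a standard proposed at the workshop.

```python
# Hypothetical explainability-score rubric; the criteria and weights
# are illustrative assumptions, not a proposed standard.
CRITERIA_WEIGHTS = {
    "per_decision_explanations": 0.35,  # does the tool explain each output?
    "documented_training_data": 0.25,   # is data provenance described?
    "plain_language_model_card": 0.25,  # is there a non-technical summary?
    "appeal_mechanism": 0.15,           # can users contest a decision?
}

def explainability_score(ratings: dict) -> float:
    """Weighted average of 0-1 ratings, one per criterion."""
    return sum(CRITERIA_WEIGHTS[name] * ratings[name] for name in CRITERIA_WEIGHTS)

# Example: scoring a hypothetical tutoring tool
print(explainability_score({
    "per_decision_explanations": 0.8,
    "documented_training_data": 0.5,
    "plain_language_model_card": 1.0,
    "appeal_mechanism": 0.0,
}))  # -> 0.655
```

A tool scoring low on a single criterion, such as the appeal mechanism here, would be easy to spot, which is why publishing the per-criterion breakdown matters as much as the aggregate score.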
Looking ahead: dedicated working group
Some of the workshop participants are members of the working group on explainable AI in education, also run by the European Digital Education Hub.
The working group will build on the workshop’s policy recommendations, aiming to:
- build trust and transparency around AI systems
- enable educators to understand the rationale behind AI-driven decisions, allowing them to make informed judgements about integrating AI tools into their teaching strategies
Digital education’s collaborative community of practice
The Hub’s community workshops are great places to bring together like-minded practitioners to discuss challenges and innovative solutions in digital education.
Find out more about the European Digital Education Hub
Join the community and become a Hub member
Source: European Commission (European Education Area) | News (https://shorturl.at/hXSiW)