EduTrust AI

Artificial Intelligence (AI) in Education: Layers of Trust

Trustworthy AI in education is an under-researched area. Academic work on the use of AI in society focuses on responsibility, accountability, and trust (e.g., in AI systems and ethics), while the political focus is on social acceptance; trust itself is a complex social phenomenon. Research on the trustworthy use of AI in the educational sector therefore needs to take both perspectives. It is not enough for AI systems to be transparent, interpretable, and FAIR; they also need to be trusted by stakeholders and accepted by society at large.

The trustworthiness of AI in education involves a multifaceted interplay between social, cultural, and technical aspects of AI, such as reliability, transparency, explainability, fairness, and accountability, and the intricate socio-technical dynamics among diverse stakeholder groups. Trust thus lies within the complex web of interactions between human and machine actors, various entities, and the regulatory system that comprises the ecosystem of the educational sector.

To deepen this understanding and meet rapid technological development and its related challenges, such as competence, reliability, and privacy in the use of AI in education, the primary objective is:

To develop research excellence in the area of Trustworthy AI in Education, and provide a framework, multi-disciplinary insights, materials, and tools for building trust in the use of AI in the educational sector.

Secondary objectives:  

1) Map stakeholder motives, interests, and accountability relationships in the educational ecosystem, and develop a conceptual framework of the layers of trust for the responsible and trustworthy use of AI in education.

2) Explore and analyze relevant EU/EEA (GDPR/AI Act) and national legal frameworks (e.g., opplæringsloven and universitets- og høyskoleloven) that regulate the processing of personal data and AI, with a view to verifying whether these frameworks safeguard trustworthy AI in education, and, if necessary, propose amendments at both the EU/EEA and national levels, as called for by Personvernkommisjonen (the Norwegian Privacy Commission).

3) Analyze a variety of AI systems in education against the ethical guidelines for trustworthy AI (lawful, ethical, robust) to identify the key requirements and competencies that should be addressed in building trust between the stakeholders identified in the conceptual framework.

4) Develop a repository of communication processes, guidelines, materials, and tools (e.g., games) to address trust in AI in education for multiple stakeholders (parents, students, teachers, privacy officers, EdTech companies, etc.), thus increasing their knowledge of responsible use of AI in education. 

5) Contribute to national and European work [1] on legal guidelines for AI and education. 

6) Identify competence needs and new educational and training offerings for a variety of stakeholders on responsible and trustworthy use of AI in Education.  

[1] The Norwegian education and higher education laws, and the Council of Europe's ongoing work on binding legal guidelines for AI and education.

Through an interdisciplinary collaboration between the Centre for the Science of Learning & Technology (SLATE), the Faculty of Psychology, and the Faculty of Law at the University of Bergen, EduTrust AI contributes scientific value by creating new knowledge, methods, guidelines (educational, technological, and regulatory), and tools, and by providing input to a practicable framework for the challenging questions around the use of student data and AI systems in education. This is relevant for the fields of law, information and computer science, learning sciences, and the social sciences.

Project period:

1 November 2023 – 31 October 2027

Project leader: 

Professor Barbara Wasson

Project Members:

SLATE, University of Bergen: Barbara Wasson (PI), Anja Salzmann, Mohammad Khalil, Fride Klykken, Cathrine Tømte (Professor II), Ingunn Ness, Qinyi Liu.

Faculty of Law, University of Bergen: Malgorzata Cyndecka (PI)

University College London: Wayne Holmes

Project Partners:


Professor Barbara Wasson

Barbara.Wasson@uib.no

Leader of TAIS and project leader of EduTrust AI
