Human-Centred Artificial Intelligence Research Group

The Research Group “Human-Centred Artificial Intelligence” is part of the joint professorship for “Research Infrastructures for the Digital Humanities”, headed by Prof. Dr.-Ing. Ernesto William De Luca, who also leads the “Human-Centred Technologies for Educational Media” department of the Leibniz Institute for Educational Media | Georg Eckert Institute (GEI). The joint professorship also comprises the establishment and use of a hybrid Usability Lab, which is shared by the two institutions.

The research group works on different research areas related to Human-Centred AI (HCAI) and Human-Centred Design (HCD), with a particular focus on Responsible AI, Ethical AI, Machine Learning, Natural Language Processing, Human-Computer Interaction, User-Adaptive Systems and Usability.

Human-Centred AI

Human-Centred Artificial Intelligence (HCAI) is an emerging discipline intent on creating AI systems that amplify and augment (rather than displace) human abilities. HCAI seeks to preserve human control in a way that ensures artificial intelligence meets our needs while also operating transparently, delivering fair and equitable outcomes, and respecting privacy.

Main research topics covered:

  • Ethical AI and Trustworthiness: in 2019, the High-Level Expert Group on AI (AI HLEG), appointed by the European Commission, published the document entitled “Ethics Guidelines for Trustworthy AI”. The aim of the guidelines is to promote Trustworthy AI, which has three components that should be met throughout the system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, from both a technical and a social perspective, since even well-intentioned AI systems can cause unintentional harm. In particular, every AI system should be developed, deployed and used in a way that adheres to the ethical principles of (I) respect for human autonomy, (II) prevention of harm, (III) fairness and (IV) explainability (or explicability).
  • Responsible AI: AI concerns and impacts all of us, not only individually but also collectively. We therefore need to go beyond analysing benefits and impacts for individual users and consider AI systems as part of an increasingly complex socio-technical reality. Responsible AI is thus about being responsible for the power that AI brings: if we are developing artefacts to act with some autonomy, then “we had better be quite sure that the purpose put into the machine is the purpose which we really desire”. The main challenge is to determine what responsibility means, who is responsible, for what, and who decides that. Given that AI systems are artefacts, i.e. tools built for a given purpose, responsibility can never lie with the AI system itself: as an artefact, it cannot be seen as a responsible actor, even though some, notably the European Parliament, have argued for some type of legal personhood for AI systems. Even if a system’s behaviour cannot always be anticipated by its designers or deployers, chains of responsibility are needed that link the system’s behaviour to the responsible actors. Responsible AI also requires participation, that is, the commitment of all stakeholders and the active inclusion of all of society. This means that everybody should be able to obtain proper information about what AI is and what it can mean for them, and have access to education about AI and related technologies. It also means that AI researchers and developers must be aware of the societal and individual implications of their work and understand how different people use and live with AI technologies across cultures.
  • Natural Language Processing (NLP) refers to the branch of AI concerned with giving computers the ability to understand text and spoken words in much the same way human beings can. NLP combines computational linguistics with statistical, machine learning and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to comprehend its full meaning, complete with the speaker’s or writer’s intent and sentiment (a minimal code sketch follows this list).
  • User Profiling aims to infer an individual’s interests, personality traits or behaviours from generated data in order to create an efficient user representation, i.e. a user model, which is exploited by adaptive and personalised systems. Modern systems focus on profiling users implicitly, based on individuals’ actions and interactions; this approach is also referred to as behavioural user profiling (see the profiling sketch after this list).
  • Algorithmic Fairness is the field of research at the intersection of machine learning and ethics. Specifically, it investigates the causes of bias in data and algorithms, defines and applies measurements of fairness, and develops data collection and modelling methodologies for creating fair algorithms and fair AI systems, for example by advising governments and corporations on how to regulate machine learning. It is also important to understand that approaches to fairness are not only quantitative, because the reasons for unfairness go beyond data and algorithms; research in this area therefore also involves understanding and addressing the root causes of unfairness (a fairness-metric sketch follows this list).
  • Explainability, also referred to as “interpretability”, is the concept that a machine learning model and its output can be explained in a way that is understandable to a human being at an acceptable level. Unlike with traditional software, it may not be possible to explain the outcome of a machine learning or deep learning model to an end user. This lack of transparency can lead to significant losses and can also result in user distrust and refusal to use AI applications. Explainability helps developers ensure that the system is working as expected (an explainability sketch follows this list).
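
The following minimal sketch illustrates the NLP item above: sentiment analysis with a pretrained model. It assumes the Hugging Face transformers library; the pipeline and default model are illustrative choices, not the group’s actual tooling.

```python
# Minimal NLP sketch: sentiment analysis with a pretrained model.
# Assumption: the Hugging Face "transformers" library is installed;
# the default model loaded by the pipeline is illustrative only.
from transformers import pipeline

# Load a general-purpose sentiment-analysis pipeline (downloads a
# pretrained model on first use).
classifier = pipeline("sentiment-analysis")

texts = [
    "The new interface is a pleasure to use.",
    "I could not figure out how to complete the task.",
]

# Each result holds a predicted label and a confidence score.
for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```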
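
The next sketch shows one simple form of behavioural user profiling: aggregating implicit interaction events into a normalised interest vector, i.e. a basic user model. The event types and weights are invented for illustration.

```python
# Minimal behavioural user-profiling sketch: derive an interest
# vector (a simple user model) from implicit interaction events.
# Event names and weights are illustrative assumptions.
from collections import Counter

# Implicit feedback: (user_id, item_topic, interaction_type)
events = [
    ("u1", "history", "view"),
    ("u1", "history", "bookmark"),
    ("u1", "geography", "view"),
    ("u2", "geography", "view"),
]

# Heavier interactions signal stronger interest.
WEIGHTS = {"view": 1.0, "bookmark": 3.0}

def build_profile(user_id: str) -> dict[str, float]:
    """Aggregate weighted interactions into a normalised topic profile."""
    scores = Counter()
    for uid, topic, kind in events:
        if uid == user_id:
            scores[topic] += WEIGHTS[kind]
    total = sum(scores.values()) or 1.0
    return {topic: s / total for topic, s in scores.items()}

print(build_profile("u1"))  # {'history': 0.8, 'geography': 0.2}
```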
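
For Algorithmic Fairness, the sketch below computes the demographic parity difference, one common quantitative fairness measure (the group’s own metrics are not specified here); the decisions and group labels are synthetic.

```python
# Minimal algorithmic-fairness sketch: demographic parity difference,
# the gap in positive-prediction rates between demographic groups.
# Data is synthetic and purely illustrative.

def demographic_parity_difference(predictions, groups):
    """Difference in positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Binary decisions (1 = favourable outcome) and group membership.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

# 0.0 means equal rates; larger values indicate greater disparity.
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```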
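
Finally, the explainability sketch below uses permutation feature importance, one model-agnostic technique: it shuffles one input feature at a time and measures how much the model’s accuracy drops. It assumes scikit-learn; the data and model are synthetic stand-ins.

```python
# Minimal explainability sketch: permutation feature importance, a
# model-agnostic way to see which inputs drive a model's predictions.
# Assumes scikit-learn; data and model choice are illustrative.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
# The label depends only on feature 0, so it should rank most important.
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Shuffle one feature at a time and measure the drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```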

Human-Centred Design

Human-Centred Design (HCD) is an approach to interactive systems development that focuses on the use of the interactive system and applies Usability and User Experience (UX) knowledge and methods. It is based upon an explicit understanding of users, goals, tasks, resources and environments. Users are involved throughout the design, which is driven by user requirements and refined by usability evaluation. A human-centred design process is iterative, that is, refinement continues until the user requirements are met. HCD addresses the whole UX.

Usability is the extent to which an interactive system is effective, efficient and satisfying to use in a specified context of use. An interactive system is effective if it supports what users need to do to reach their goals, and if users can figure out how to do it. It is efficient if it supports users in completing their tasks quickly and without having to think too much. It is satisfying if it meets users’ expectations and is pleasant to use.
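As a small illustration of how these three aspects can be quantified, the sketch below computes a task completion rate (effectiveness), mean time on task (efficiency) and a System Usability Scale (SUS) score (satisfaction). The session data is invented, and SUS is one widely used questionnaire, not necessarily the instrument used in the Usability Lab.

```python
# Minimal usability-metrics sketch: one common way to quantify
# effectiveness, efficiency and satisfaction from a usability test.
# Session data is invented; SUS is one widely used questionnaire,
# not necessarily the instrument used in the lab.

# (task_completed, seconds_on_task) per test session
sessions = [(True, 42.0), (True, 55.5), (False, 90.0), (True, 38.2)]

# Effectiveness: share of sessions in which the task was completed.
completion_rate = sum(done for done, _ in sessions) / len(sessions)

# Efficiency: mean time on task for successful sessions.
times = [t for done, t in sessions if done]
mean_time = sum(times) / len(times)

# Satisfaction: System Usability Scale (10 items, each rated 1-5).
def sus_score(answers: list[int]) -> float:
    """Standard SUS scoring: odd items add (score - 1), even items add
    (5 - score); the sum is scaled by 2.5 to a 0-100 range."""
    contributions = [
        (a - 1) if i % 2 == 0 else (5 - a)  # 0-based index: even index = odd item
        for i, a in enumerate(answers)
    ]
    return sum(contributions) * 2.5

print(f"effectiveness: {completion_rate:.0%}")
print(f"efficiency:    {mean_time:.1f} s mean time on task")
print(f"satisfaction:  SUS = {sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1])}")
```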

User Experience considers users’ anticipated use, their satisfaction during use and the fulfilment of their expectations after use (whereas usability considers satisfaction only during use).
