Human-Centred Artificial Intelligence Research Group

The Research Group “Human-Centered Artificial Intelligence” is part of the joint professorship for “Research Infrastructures for the Digital Humanities”, headed by Prof. Dr.-Ing. Ernesto William De Luca, which also includes the leadership of the “Human-Centered Technologies for Educational Media” department of the Leibniz Institute for Educational Media | Georg Eckert Institute (GEI). The joint professorship also includes the establishment and use of a hybrid usability lab, which is shared by the two institutions.

The research group works on different research areas related to Human-Centered AI (HCAI) and Human-centered design (HCD), with a particular focus on Responsible AI, Ethical AI, Machine Learning, Natural Language Processing, Human-Computer Interaction, User-Adaptive Systems and Usability.

Human-Centered AI

Human-Centered Artificial Intelligence (HCAI) is an emerging discipline intent on creating AI systems that amplify and augment (rather than displace) human abilities. HCAI seeks to preserve human control in a way that ensures artificial intelligence meets our needs while also operating transparently, delivering fair and equitable outcomes, and respecting privacy.

Main research topics covered:

  • Ethical AI and Trustworthiness: in 2019, the High-Level Expert Group on AI (AI HLEG), appointed by the European Commission, published the document entitled “Ethics Guidelines for Trustworthy AI”. The aim of the guidelines is to promote Trustworthy AI, which has three components that should be met throughout the system’s entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm. In particular, every AI system should be developed, deployed and used in a way that adheres to the ethical principles of (I) respect for human autonomy, (II) prevention of harm, (III) fairness and (IV) explicability.
  • Responsible AI: AI concerns all of us, and impacts all of us, not only individually but also collectively. We therefore need to go beyond analyzing benefits and impacts for individual users and consider AI systems as part of an increasingly complex socio-technical reality. Responsible AI is thus about being responsible for the power that AI brings. If we are developing artefacts that act with some autonomy, then “we had better be quite sure that the purpose put into the machine is the purpose which we really desire”. The main challenge is to determine what responsibility means, who is responsible, for what, and who decides that. Given that AI systems are artefacts, tools built for a given purpose, responsibility can never lie with the AI system itself: as an artefact, it cannot be seen as a responsible actor. Even if a system’s behavior cannot always be anticipated by its designers or deployers, chains of responsibility are needed that link the system’s behavior to the responsible actors (although some, notably the European Parliament, have argued for some type of legal personhood for AI systems). Responsible AI also requires participation, that is, the commitment of all stakeholders and the active inclusion of all of society. This means that everybody should be able to obtain proper information about what AI is and what it can mean for them, and to have access to education about AI and related technologies. It also means that AI researchers and developers must be aware of the societal and individual implications of their work and understand how different people use and live with AI technologies across cultures.
  • Natural Language Processing (NLP) refers to the branch of AI concerned with giving computers the ability to understand text and spoken words in much the same way human beings can. NLP combines computational linguistics with statistical, machine learning, and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to comprehend its full meaning, complete with the speaker or writer’s intent and sentiment.
  • User profiling aims to infer an individual’s interests, personality traits or behaviors from generated data in order to create an efficient user representation, i.e. a user model, which is exploited by adaptive and personalized systems. Modern systems profile users implicitly, based on individuals’ actions and interactions; this approach is also referred to as behavioral user profiling (a minimal code sketch follows this list).
  • Algorithmic fairness is the field of research at the intersection of machine learning and ethics. It investigates the causes of bias in data and algorithms, defines and applies measurements of fairness, and develops data-collection and modeling methodologies for creating fair algorithms and fair AI systems, for example by advising governments and corporations on how to regulate machine learning. It is also important to understand that approaches to fairness are not only quantitative, because the reasons for unfairness go beyond data and algorithms; the research therefore involves understanding and addressing the root causes of unfairness (a sketch of one quantitative fairness measure follows this list).
  • Explainability, also referred to as “interpretability”, is the concept that a machine learning model and its output can be explained in a way that is understandable to a human being at an acceptable level. Unlike traditional software, it may not be possible to explain the outcome of a machine learning or deep learning model to an end user. This lack of transparency can lead to significant losses and can also result in user distrust and refusal to use AI applications. Explainability helps developers ensure that the system is working as expected (a sketch of one model-agnostic explainability technique follows this list).
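
As an illustration of behavioral user profiling, the following is a minimal sketch that aggregates implicit interaction events into a normalized interest vector, i.e. a simple user model. The event types and weights are illustrative assumptions, not a description of any particular system.

```python
from collections import Counter

# Illustrative weights for implicit feedback signals (assumed values).
EVENT_WEIGHTS = {"view": 1.0, "like": 3.0, "share": 5.0}

def build_user_profile(events):
    """Aggregate implicit interaction events into a normalized
    interest vector, i.e. a simple behavioral user model.

    events: iterable of (event_type, topic) pairs, e.g. ("view", "history").
    """
    scores = Counter()
    for event_type, topic in events:
        scores[topic] += EVENT_WEIGHTS.get(event_type, 0.0)
    total = sum(scores.values()) or 1.0
    # Normalize so profiles are comparable across users.
    return {topic: score / total for topic, score in scores.items()}

events = [("view", "history"), ("view", "geography"),
          ("like", "history"), ("share", "history")]
print(build_user_profile(events))  # {'history': 0.9, 'geography': 0.1}
```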
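
One widely used quantitative fairness measure compares positive-outcome rates across groups (demographic parity). Below is a minimal sketch of this measure; the predictions and group labels are made up for illustration.

```python
def demographic_parity_difference(y_pred, groups):
    """Difference in positive-prediction rates across groups.

    y_pred: 0/1 predictions; groups: parallel list of group labels.
    A value of 0 means all groups receive positive outcomes at the
    same rate (demographic parity holds).
    """
    rates = []
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5
```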
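
For explainability, one simple model-agnostic technique is permutation importance: shuffle one feature at a time and measure how much the model’s score drops. The sketch below assumes scikit-learn is available and uses synthetic data; it is not a description of the group’s own tooling.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real task.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does the test score drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```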

Human-centered design

Human-centered design (HCD) is an approach to interactive systems development that focuses on the use of the interactive system and applies usability and user experience (UX) knowledge and methods. It is based upon an explicit understanding of users, goals, tasks, resources and environments. Users are involved throughout the design, which is driven by user requirements and refined by usability evaluation. A human-centered design process is iterative, that is, refinement continues until the user requirements are met. HCD addresses the whole UX.

Usability is the extent to which an interactive system is effective, efficient and satisfying to use in a specified context of use. An interactive system is effective if it supports what users need to do to reach their goals, and if users can figure out how to do it. It is efficient if it supports users in completing their tasks quickly and without having to think too much. It is satisfying if it meets users’ expectations and is pleasant to use. Effectiveness and efficiency can be quantified in usability testing, as in the sketch below.
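
A hypothetical worked example: given test sessions that record task completion and time on task, effectiveness can be measured as the completion rate and efficiency as completed tasks per unit of user time. Both the data and the choice of measures are illustrative assumptions, not the lab’s prescribed protocol.

```python
# Hypothetical usability-test sessions: (task_completed, seconds_on_task).
sessions = [(True, 42.0), (True, 55.5), (False, 90.0), (True, 38.2)]

# Effectiveness: share of users who completed the task.
completion_rate = sum(done for done, _ in sessions) / len(sessions)

# Efficiency: completed tasks per minute of total user time
# (one common time-based efficiency measure).
total_minutes = sum(seconds for _, seconds in sessions) / 60.0
tasks_per_minute = sum(done for done, _ in sessions) / total_minutes

print(f"Effectiveness (completion rate): {completion_rate:.0%}")     # 75%
print(f"Efficiency: {tasks_per_minute:.2f} completed tasks/minute")  # 0.80
```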

User experience considers users’ anticipated use, their satisfaction during use and the fulfillment of their expectations after use (whereas usability considers satisfaction only during use).

Human-Centered Natural Language Processing

Human-Centered Natural Language Processing (HC-NLP) is a course designed to provide an advanced understanding of how AI systems can process and generate human language. By focusing on the interplay between machine learning, linguistics, and human communication, the course explores how NLP technologies are developed to work alongside humans, amplifying and augmenting their abilities rather than replacing them.

This course is tailored for students with a basic background in machine learning and Python programming, and aims to equip participants with the knowledge and tools to build AI systems that understand and generate language in ways that are useful, ethical, and aligned with human needs.

Key research and application areas covered include:

  • Natural Language Processing (NLP):

      • NLP is the field of AI dedicated to teaching machines to understand, interpret, and generate human language. It leverages deep learning techniques, linguistic models, and computational frameworks to enable machines to analyze text or speech, capturing nuances like sentiment, intent, and context (see the sentiment-analysis sketch after this list).

  • Ethical AI in NLP:

      • The course emphasizes the ethical dimensions of AI in language technologies. Trustworthy AI in NLP ensures that systems are lawful, ethical, and robust.

  • Generative AI & Large Language Models:

      • Generative AI plays a pivotal role in modern NLP. This section of the course covers how large language models (e.g., GPT) are trained to generate coherent and contextually relevant text. Topics include pre-training and fine-tuning, prompting, and RLHF (see the text-generation sketch after this list).

  • Human-AI collaboration:

      • The course takes a human-centered approach, focusing on how NLP systems can collaborate with humans in real-world applications. 

  • Explainability & Transparency in NLP:

      • Understanding explainability in NLP models is crucial for creating AI systems that users can trust.
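
To make the NLP topic above concrete, here is a minimal sentiment-analysis sketch. It assumes the Hugging Face transformers library; the library choice and the default model checkpoint are illustrative, not the course’s prescribed tooling.

```python
from transformers import pipeline

# Ready-made sentiment-analysis pipeline; the default checkpoint
# is downloaded on first use.
classifier = pipeline("sentiment-analysis")

texts = [
    "The interface is intuitive and a pleasure to use.",
    "The system kept crashing and lost my work.",
]
for text, result in zip(texts, classifier(texts)):
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```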
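
Similarly, for the generative-AI topic, a minimal text-generation sketch; the small gpt2 checkpoint stands in purely for illustration for the larger models discussed in the course.

```python
from transformers import pipeline

# Small open checkpoint used purely for illustration.
generator = pipeline("text-generation", model="gpt2")

prompt = "Human-centered AI systems should"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(outputs[0]["generated_text"])
```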

 
