This workshop focuses on Human-Centered AI and addresses the question of how AI-supported applications can be designed to consistently align with human needs, capabilities, limitations, and values. Rather than emphasizing the technological performance of AI itself, the workshop places its focus on the meaningful, responsible, and user-centered integration of AI into interactive systems, products, and services. The workshop is deliberately designed to be methodologically open and outcome-oriented. Contributions are invited that engage with the design, evaluation, or critical reflection of AI-supported applications regardless of whether they are conceptual, empirical, experimental, or practice-oriented in nature. The aim is to bring together diverse perspectives and jointly discuss which design approaches, methods, and conceptual frameworks have proven effective or are currently emerging in the context of Human-Centered AI.
Additional Keywords and Phrases: Human-Centered AI, Evaluation, Human-Needs, Design Guidelines
MOTIVATION
Artificial Intelligence is evolving at an unprecedented pace, with capabilities increasingly integrated into traditional usage scenarios and enabling entirely new tasks and job roles. In many cases, this significantly changes the role of human users (Shneiderman, 2022). Users who previously exercised full control over complex systems are now transitioning into collaborative roles, working alongside AI systems as co-actors.
As users delegate complex tasks to AI agents or co-pilots, they must continuously assess whether to trust and follow AI-generated suggestions or to override them. This raises critical questions regarding transparency (McKay, 2022), explainability (Holzinger et al., 2019), and trust (Johnson et al., 2025), as well as the necessity of maintaining meaningful human control (Sheridan, 2011; Körber, 2018). Furthermore, AI systems increasingly support or pre-structure high-stakes decisions, such as hiring processes, credit approvals, or medical treatments. Consequently, ethical considerations, fairness, and perceptions of bias become central to the design and evaluation of such systems.
Against this background, a key challenge is not only understanding Human-Centered AI conceptually, but also translating it into concrete design practices and actionable methods. While related workshops may address similar topics, this workshop distinguishes itself through its strong focus on integrating design practice, methodological approaches, and real-world application contexts. Rather than remaining at a purely conceptual or discursive level, the workshop aims to generate actionable design knowledge, including concrete design principles, patterns, and methodological recommendations for Human-Centered AI.
By combining interdisciplinary perspectives with practice-oriented contributions, the workshop seeks to bridge the gap between theoretical discourse and applied design. The workshop provides space for discussion, exchange, and collective reflection, with the goal of advancing the practical implementation of Human-Centered AI.
WORKSHOP MODE
The workshop will be held exclusively on-site as an in-person event; online participation is not possible.
WORKSHOP ACTIVITIES
The workshop will be structured into two sections, separated by a 20-minute break.
In the first section, authors will present research papers related to the topic of the workshop. Each presentation is planned for 15 minutes, followed by 5 minutes of discussion. Contributions should primarily relate to one or more of the thematic focal points listed below, while related contributions beyond this scope are also welcome. Presentations are expected to provide new research results or insights concerning the impact of AI on human-centered design.
In the second section, participants will split into smaller groups to discuss relevant open questions concerning the future role and impact of AI in the field of Human-Computer Interaction. Based on the submitted contributions, the organizers will prepare a set of discussion topics. Participants will vote on these topics, and the most relevant ones (typically 2–4, depending on group size) will be selected. Each group will be supported by a moderator who documents the discussion.
To ensure sustainable and impactful outcomes, the results of the group discussions will be systematically documented and consolidated after the workshop. This includes a structured summary of key insights, design recommendations, and identified research gaps. The organizers aim to make these results publicly accessible (e.g., via a workshop report, institutional website, or potential joint publication), thereby contributing to the broader Human-Centered AI community.
CALL FOR PARTICIPATION
Artificial Intelligence is rapidly transforming how people interact with digital systems. Users increasingly move from direct control of applications toward collaborative roles in which they interact with AI systems as co-actors. This shift raises fundamental questions about how AI-supported systems can be designed to align with human needs, capabilities, limitations, and values. This workshop focuses on Human-Centered AI and explores design-oriented, methodological, and practical approaches for developing responsible and user-centered AI applications. Contributions are invited that address the design, evaluation, or critical reflection of AI-supported systems across conceptual, empirical, experimental, or practice-based work. We particularly welcome contributions that provide actionable insights, design implications, or methodological advances.
Possible thematic focal points include, but are not limited to:
- Design principles, guidelines, and patterns for Human-Centered AI
- UX, UI, and interaction design of AI-based systems
- Transparency, explainability, human control, and trust in AI applications
- Human–AI interaction, collaboration, and decision-making processes
- Ethical, social, and cultural dimensions of AI design
- Use cases, project reports, and best-practice examples from research, education, or industry
- Methodological approaches to the conception, evaluation, and reflection of AI-supported applications
The workshop combines paper presentations and interactive discussions with the goal of deriving shared insights and concrete design implications for Human-Centered AI systems.
If the number of submissions exceeds the available slots, contributions will be selected based on the following criteria:
- Relevance to the workshop topic
- Conceptual or methodological contribution
- Originality and clarity
- Potential to stimulate discussion
We explicitly encourage submissions from diverse disciplinary perspectives to foster a rich and interdisciplinary exchange. The workshop aims to derive a consolidated set of design-oriented insights that can serve as a foundation for future research, teaching, and practical implementation of Human-Centered AI.
SUBMISSION AND PARTICIPATION
The workshop is planned as a half-day, in-person event combining short paper presentations with interactive discussions. We invite participants to submit a short paper (https://muc2026.mensch-und-computer.de/submission/hci-scientific-track/call-for-short-papers/) and present their work during the workshop. Submissions should clearly relate to the workshop topic and are expected to provide conceptual contributions, empirical insights, methodological approaches, or practical case studies.
Important dates:
- Submission deadline: July 6, 2026
- Notification of acceptance: July 13, 2026
- Final submission: July 22, 2026
Further details on the submission format and process will be provided via the conference website (conference submission system).
In addition, we explicitly encourage interested participants to contact the organizers in advance via email to discuss potential contributions:
- Zeynep Tuncer (DHBW Mannheim): zeynep.tuncer(at)dhbw.de
- Martin Schrepp (SAP SE): martin.schrepp(at)sap.com
This provides an opportunity to clarify the fit of a submission, receive early feedback, and lower the barrier for participation. Early contact with the organizers also helps to ensure a well-balanced and focused workshop program, particularly in case of high demand and limited presentation slots.
Conference website: https://muc2026.mensch-und-computer.de/
ORGANIZERS
Zeynep Tuncer: Prof. Dr. Zeynep Tuncer is Professor of Digital Media and Head of the Media Lab at Baden-Wuerttemberg Cooperative State University. Her work focuses on Human-Computer Interaction, UX/UI, Human-Centered Design, and responsible digital innovation. She previously held professorships in Media Informatics and HCI and has led major research initiatives, including a DFG-funded collaborative research center at Technische Universität Darmstadt. She combines academic leadership with industry experience at companies such as Bosch/Siemens/Home Appliances GmbH, Daimler AG, Ford-Werke GmbH, and John Deere GmbH & Co. KG (see publications on Google Scholar).
Martin Schrepp: Dr. Martin Schrepp studied Mathematics and Psychology at the University of Heidelberg. He received a diploma in Mathematics in 1990 and a PhD in Psychology in 1993. Since 1994 he has worked in various roles at SAP SE. His work experience includes writing technical documentation, software development, and user interface design. His main research interests are the application of insights from cognitive science to the design of interactive products, accessibility, and the development of methods for evaluation and data analysis. He has published numerous research papers in these areas (see publications on Google Scholar).
REFERENCES
[1] Shneiderman, B. (2022). Human-Centered AI. Oxford University Press. https://doi.org/10.1093/oso/9780192845290.001.0001
[2] Johnson, B., Bird, C., Ford, D., Al Haque, E., Forsgren, N., & Zimmermann, T. (2025). Facilitating Trust in AI-assisted Software Tools. ACM Transactions on Software Engineering and Methodology.
[3] Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(4), e1312.
[4] McKay, M. H. (2022). AI Transparency in a Real-World Context: What we can learn from past examples of algorithmic and statistical decision-making. Proceedings of the Canadian Conference on Artificial Intelligence. https://doi.org/10.21428/594757db.59056afd
[5] Sheridan, T. B. (2011). Adaptive automation, level of automation, allocation authority, supervisory control, and adaptive control: Distinctions and modes of adaptation. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 41(4), 662-667.
[6] Körber, M. (2018). Theoretical considerations and development of a questionnaire to measure trust in automation. In Congress of the International Ergonomics Association (pp. 13-30). Cham: Springer International Publishing.