Symposium

Deutsches Hygiene-Museum, Dresden, Germany
25-26 March 2026


The emergence of generative AI (GenAI) marks a pivotal transformation in the human–technology nexus — a shift as profound as the printing press at the dawn of modernity. We may be witnessing not the end of Humanism, but the beginning of a new era in which GenAI expands access to knowledge, enables translingual dialogue, amplifies reasoning, and fosters new forms of collaboration. But as with all transformative technologies, such potential must be intentionally shaped. Just as the printing press became a vehicle for rational discourse only through active engagement by cultural, intellectual, and political actors, so too must GenAI be guided toward humanistic, ethical, and democratic ends.

The interdisciplinary project The Answering Machine, funded by the Volkswagen Foundation, invites you to its concluding symposium, which explores how generative AI might enrich rather than erode the cognitive, emotional, creative, and social capacities that define human flourishing. Rather than capitulating to narratives of disruption or decline, this gathering offers a space for interdisciplinary reflection and constructive imagination. True to this vision, The Answering Machine seeks to chart a course in which GenAI is not merely a tool, but a partner in the ever-changing project of Humanism — attuned to the complexities of the human condition and committed to the pursuit of shared progress.

At the heart of our conversation lie the following questions. These discussions will be enriched by a series of performances in which actors interact with GenAI on stage.

  • In what ways can generative AI, appearing in the form of social agents, help to shape a shared future that strengthens rather than diminishes human capabilities?
  • What does co-creating with generative AI look like in practice, and how can this collaboration strengthen human agency and creativity?
  • How can generative AI support the co-construction of knowledge and competencies, and how might this transform learning processes and critical-thinking skills?
  • Which risks and challenges arise when integrating generative AI in education, healthcare, performing arts and other societal domains?


COMPUTATIONAL LINGUISTICS
Max van Duijn, Leiden University

Why (Some) Large Language Models Are Such Good Mindreaders

Well, are they? If by “mindreading” we mean performance on standardised Theory of Mind tests, then the answer is “yes” for several state-of-the-art LLMs. If a more encompassing definition of mindreading is adopted, one that also involves pragmatic skills, empathy, recursive and metarepresentational capacities, and more, and if a broader suite of LLMs is considered, then the answer becomes more nuanced. In this talk I will discuss the empirical evidence for different aspects of mindreading in LLMs as presented in various key studies, including some from my own SIM lab. I will connect these to findings from the cognitive-developmental and evolutionary sciences in order to deepen our theoretical understanding of social cognition in both humans and machines, as a basis for complementary interactions between them in the future.

THEATER STUDIES
Ulf Otto, LMU Munich

Machine Phenomenology: Theatre Photography, Computer Vision, and the Epistemic Potentials of Statistical Models

Events lie at the heart of theatre. Media are therefore of interest to research not only as sources, but also as mediators. Especially in the mass media, theatre in its breadth has been recorded since the nineteenth century. The digital availability of these records makes it possible to investigate theatre cultures on a new scale — analogous to distant reading — that is, statistically rather than through exemplary case studies. What was shown, and how was it seen? Photographic reporting in particular, which became established after 1945, contains a contemporary history of theatre as an image history. Yet what appears in images cannot be counted in the way words can — at least not without a gaze that already separates and names what can be found in the image. Machines must learn, through the abstraction of large quantities of data, to “see something into” images. The machine’s gaze is therefore opaque (black-boxed) and biased — as is the human gaze. Just as performance analysis always implicates one’s own perception and its social positioning, the machine-based preprocessing of theatre photographs also becomes a kind of Rorschach test for AI. Building on a research project on the visual history of theatre photography, the talk addresses the epistemic consequences of computational methods and machine learning, and argues for comprehensive data literacy in the humanities.

MEDIA STUDIES
Minha Lee, TU/e Eindhoven

Mind over matter? Conversational futures with speculative minds 

We ascribe minds to other beings when we perceive them to have cognitive and affective capacities, to varying degrees. A chatbot can be perceived to have some level of cognition but no emotions, for instance. Yet this bias can be overturned through the design of the agent and of the environment it operates in, which can be explored speculatively. The potential to shape our human perception of non-human “minds” raises various ethical considerations that will be discussed. The talk will be followed by a collaborative activity.

PSYCHOLOGY
Ute Schmid, University of Bamberg

Requirements for Human–AI Alignment in Joint Decision Making and Problem Solving

With the advance of highly performant AI systems — deep-learning-based classifiers and transformer-based generative approaches — there is hope that human–AI collaboration will help humans master complex tasks more efficiently and with higher quality. This is especially relevant for critical tasks such as medical diagnostics or the generation of program code for scientific applications. However, a growing number of empirical studies shows that the better of human or AI performance alone often outperforms human–AI teams. Possible reasons are, on the one hand, over-reliance on the output of AI systems and, on the other hand, a mismatch between human cognitive processes and AI systems. In the talk, I will argue that we need more human-aligned methods of explainable AI as well as novel methods to support human agency and oversight.

TRANSDISCIPLINARY
Eva Wolfangel, Science Journalist 

The World According to Words

If all you had were words, could you truly grasp the world behind them? Large language models attempt exactly that. In some situations they seem highly capable; under systematic testing, their limits become obvious. Whatever this is, it does not fully match what we usually mean by understanding. But the human case is not straightforward either. What actually counts as understanding, and how do we obtain it? Decades of research across disciplines offer partial answers, yet many assumptions remain unsettled. Some findings suggest that language models merely imitate world knowledge; others hint at parallels to how humans rely on heuristics and incomplete representations. This talk explores what LLMs reveal about the boundary between imitation and understanding, where language-only learning seems to reach its limits, and which alternative approaches researchers are exploring to give machines richer access to the world. Rather than delivering clear-cut answers (they do not exist yet), the talk brings together surprising results that challenge common intuitions about both machines and minds: an invitation to rethink what it means to “know” the world at all.


We offer a diverse program featuring interdisciplinary keynotes and discussions, theatre performances with AI, poster sessions, and a Human–AI Playground that explores the questions introduced in the overview section. Furthermore, you are invited to visit the exhibition Mental Health at the Deutsches Hygiene-Museum Dresden free of charge. Below you will find an overview.

Further information can be found in the full program. The symposium will be held in English. The artistic performances on Tuesday, Thursday, and Friday evening will be in German, as they are open to the general public. On Wednesday, however, there will be an artistic performance in English, exclusively for symposium participants.


Poster

Varvara Gumirova – Behavioral and Linguistic Cues for Evaluating Simulated Psychotherapy Patients
Maximilian Jun Zhang – Voice, Presence, and Alliance: Towards a Theoretical Framework for Voice-based Human-AI Communication in Mental Health Support
Alfio Ventura – Synthetic Relationships and Social Health: A Framework and Empirical Data on User-Centred AI Companion Design
Livia Kuklick, Elisabeth Mayweg – Same Words, Less Trust: Perceptual Biases Toward AI-Based Compared to Human Dialogue in Higher-Education Contexts
Anna Lena Menne – Disconcerting Answers: Affective-Epistemic Friction in Human-AI Interaction and Alternative Futures
Patrick Weis – “I totally could have solved that complex problem on my own”: Some Notes on AI-induced Illusions of Competence
Zhifan Sun – ALICE: Towards Multi-dimensional Short Answer Scoring
Büsra Sarigül – Influence of Perceived Role of AI on Self-Efficacy: A Longitudinal Perspective
David Filgertshofer – The Individual Perception of Large Language Models
Polina Vedernikova – Which Risks and Challenges Arise in the View of Users When Integrating Generative AI in Healthcare? Findings from Different Generational Groups
Magdalena Taube, Andrea Kloß – Knowledge Cultures in Transition: Participatory Media, Public-Interest AI, and the Socio-Political Dimensions of Generative AI
Nick Naujoks-Schober – Studium Generare: Development of a Scenario-Based Instrument for Assessing Knowledge of Self-Regulated Learning with AI
Astrid Carolus – Reframing Generative AI from Tool to Social Counterpart: A Psychological Perspective on Chatbot Usage
Olga Vogel, Sophie Berretta – (Para)Social by Design? A Systematic Review of Parasocial Relationship Forms with Generative AI
Sebastian Musslick – Automated Discovery of Mind and Brain
Alica Müller – When Tools Take the Lead: Experiments with Assistive AI in Creative Processes
Anna Köhler, Carolin Volz – (Re-)Creating a Comedy Podcast Using Generative AI
Teresa Luther – From Tools to Companions? Longitudinal Dynamics of How Users Relate to AI Agents
Anna-Marie Rönsch – The First Medical Chatbot? A Media Archaeological Study of the Expert System MYCIN
Ingo Siegert – Beyond Hype and Fear: The Felicia Festival as a Stage for Democratic AI Competence
Björn Rudzewitz – An AI-Supported Reading Comprehension System with a Teacher-in-the-Loop Approach
Lydia Bärnreuther – Research Trend Analysis: Artificial Intelligence in Higher Education
Alessandra Brondetta – Automated Prototyping of Behavioral Experiments with Large Language Models
Lena Nischwitz – Cute or Not? Perceiving Cuteness in AI-Generated Images
Hannah Seidler – Who Builds Personal Relationships with AI, and Why? A Systematic Review
Lou Therese Brandner – AI in Education: Ethical Challenges
Aline Mangold – Asking Better Questions: A Socratic Chatbot for Research Question Refinement
Sabijn Perdijk – Does Collaborative Storytelling Shape the Linguistic Abilities of Both Children and Language Models?
Sabrina Namazova – Evaluating LLMs as Participant Simulators for Behavioral Science
Thorsten Zylowski, Wladimir Hettmann – UniC: Integrating Embodied Conversational Agents into Teaching and Learning
Rucha Khot – Designing for My Future Self with AI: Is Simulating Dementia Enough?

Playground

Philipp Graffe – PatientBot
Sonja Niemann – Cognitive Involvement instead of Overdelegation: Understanding Recursion with LLM Dialogs
Lars Engeln – LLM-Driven Natural Language Control for Theatrical Lighting
Sebastian Musslick – AI Scientist for Discovery of Mind and Behavior
Kaan Sahin, Linda Nguyen – Developing and Optimizing Individually Adapted Reading-Learning Stories Using AI-Based Tools
Jana Riedel – Can AI Replace Human Professional Skills?
Aline Mangold – Socratic Quest: A Chatbot for Crafting Your Research Questions
Mona Hedayati – The Body & the Archive: Towards a Sonic Speculation
Sabijn Perdijk – A Storytelling Cube for Co-Creative Storytelling
Claudia Loitsch – Metis: AI Companion for Students


Coming soon