sHAI @ CONVERSATIONS 2025
- Zhen Cong, Ezgi Dede, and Valentina Bartali
On the 12th and 13th of November, the 9th edition of the CONVERSATIONS conference took place in Lübeck, Germany. A few members of sHAI (Zhen Cong, Ezgi, and Valentina; the three people in the photo on the right) were present and had a great time sharing their work, asking questions, and meeting new people. After a welcome speech by Prof. Dr.-Ing. Jochen Abke, Asbjørn Følstad formally kicked off the conference programme by inviting the audience to “delve” into topics concerning chatbots and human-centered AI.

Our Highlights
The presenters offered diverse and thought-provoking perspectives on research into artificial agents and their potential effects on human connection and capability.
Dr. Marisa Tschopp investigated the increasingly relevant topic of relationships with “syntees” (synthetic entities for companionship). Her work revealed that while these AI relationships are not easily replicated in the lab, a closed-system conversational agent can yield surprisingly strong emotional responses. Perhaps, when it comes to synthetic relationships, less equals more?
In an educational setting, Mattias Arvola showcased the potential of a social robot for increasing reading motivation among schoolchildren. Most interestingly, even after the children were informed that the robot was human-operated, they were immersed enough to keep interacting with the robot as the story character, demonstrating the power of suspension of disbelief when using social robots in learning environments.
Addressing inclusivity in AI design, Madeleine Rischer and Effie L.-C. Law's research highlighted that responsible AI and chatbot development must take into account our most vulnerable populations, including the elderly.
Finally, Prof. Dr. Julian Kunkel's keynote contrasted the comparatively slower progress in academic research and safety frameworks with the rapid pace and ambition of current AI development. It was a timely reminder of the relevance of the presented research, and of the importance of funding and supporting it.
(Poster) presentations
Valentina Bartali presented the study “Student Counsellors’ Perspectives on a Student Mental Well-being Chatbot: Exploring Needs, Preferences, and Design Requirements”, written with her supervisors Emmelyn A. J. Croes, Tibor Bosse, Renate H. M. de Groot, and Marjolijn L. Antheunis. In her presentation, she described how student counsellors view the use of a student mental well-being (StMWB) chatbot at university. The five takeaways were:
Customization and personalization are fundamental, as every student's needs and preferences are different.
What students want and what they need are not always the same.
Having an escalation point is important in case the student needs help that the chatbot cannot provide.
More research should be done on the (therapeutic) approach that the chatbot should take.
Student counsellors have different expectations of the chatbot, from using it for practical matters to integrating it into their sessions (as blended therapy).
Ezgi Dede presented a poster on her PhD project, titled “Relationship Formation with Artificial Companions” and supervised by Hande Sungur, Jeroen S. Lemmens, and Jochen Peter. Her poster included both her finalized first study, which investigated how AI companions afford relationship formation, and the preliminary findings of her second study, which investigates the predictors of intimacy in synthetic relationships. Overall, Ezgi’s work (currently) concludes that:
Friendbots (such as Replika, Character AI, Talkie AI, etc.) have more features than assistantbots (such as ChatGPT, Gemini, Siri, etc.) to support all stages of relationship formation, while assistantbots lack exploration-stage features. That is why, even though assistantbots might perform better at following a conversation, friendbots have a greater capacity to stimulate and support relationship formation.
Preliminary findings show that intimacy in synthetic relationships seems to be unrelated to the traits and motivations of the users, and instead linked to how users perceive the communication with the AI companion.
Workgroup reflections
Ezgi, Zhen Cong, and Valentina, together with Marisa Tschopp, Asbjørn Følstad, and Teresa Luther, worked on the question: “How do Large Language Models impact scientific research?”
We have been using more and more AI tools in our daily lives for a few years now, and this also impacts our work as researchers. For this group work, we had to reflect on how we use these tools, the challenges we face, and how we want to tackle them. In our research practices, we might use AI tools/LLMs to brainstorm about a study or to improve the grammar of a text we write.
In our group, we immediately started discussing challenges:
Losing identity: every time we ask AI tools/LLMs to do something for us, from helping with brainstorming (even if we only use them to elaborate our thoughts) to writing text, we feel that our work becomes less and less ours. “AI might kill the joy of doing research.”
Missing guidelines: it is not clear what we may use AI for, in which way, and when. We lack guidelines, and expectations can differ by culture, discipline, and expertise. While social scientists may experience guilt, shame, and ethical dilemmas when using AI in research, computer scientists may feel they are falling behind their peers, or incompetent, when they do not use it. Similarly, while European researchers focus on the safe use of LLMs, Asian researchers may focus on their productive use.
Isolation (guilt): as there are no guidelines, we find ourselves arguing about whether or not we should use AI. The assumption that future guidelines might declare AI use unethical hurts transparency in research practices and makes researchers feel isolated in their own bubble. We feel guilty both for using AI and for not keeping up with technological advancements. This also affects productivity, as we end up not using these tools in ways that could actually help our work. Yet we also feel a certain aversion towards academia’s focus on productivity and its “publish-or-perish” culture. We believe the days of peer reviewing and journal publishing are coming to an end: the system was already failing, and AI tools accelerate this erosion. Additionally, we miss out on collaborating with other researchers, disciplines, and cultures because of miscommunication about AI use and guidelines.
As for possible solutions, we thought of:
Negotiation: we need to make sure we talk to colleagues about the use of AI tools/LLMs in scientific practices; this should not become a taboo. Before starting to work with someone, we should negotiate and agree on guidelines for how we will use AI tools/LLMs in that collaboration. It is also important not to point fingers and make people feel they are doing something wrong. Relatedly, dependence on AI tools might not be about the people, but about the context: using AI for certain tasks (e.g., writing emails) might be more acceptable than for others (e.g., statistical analysis and the interpretation of results).
Enjoying doing research: instead of focusing on quantity and productivity, we should focus on quality and on producing work that makes us happy as researchers, so that we can preserve our identity, our uniqueness, and the joy we feel in doing research.
Dinner reflections
Some of us had a lovely dinner alongside fellow conference attendees at one of Lübeck’s historical breweries, where, through gestures and collective guesswork, and without AI, we figured out how to translate the German dish names and match them with the orders described in English!
For Zhen Cong, this was his first international conference, and the meal became an unexpected metaphor for the experience. Assuming the environment would be formal and uptight, he chose the schnitzel, rather than the more succulent Crispy German Pork Knuckle (Schweinshaxe), out of fear of appearing informal or making a mess. As the evening unfolded, it became clear that the conference environment was relaxed and open, regardless of academic seniority. In hindsight, the Schweinshaxe would have been just fine, and alas, he could only gaze longingly at Ezgi’s dish. Oh well, he’ll learn!
What Zhen Cong got (left) vs. what Ezgi got and Zhen Cong should have gotten (right).
We all enjoyed this conference, and we greatly benefited from connecting with colleagues. We thank the organizers, in particular Asbjørn Følstad, Sebastian Hobert, Effie L.-C. Law, and TH Lübeck – University of Applied Sciences, for hosting us.