Finally, one of the most gezellig (Dutch for "cozy") workshops is back! It was last held in person three years ago at the University of Amsterdam, and last year it was completely online, which was okay, but the power and attraction of the CONVERSATIONS workshop really lie in the welcoming atmosphere, safe space, and interesting people who attend in person. Not to mention all the snacks.

We started off with an interesting keynote by Catherine Pelachaud, Director of Research at the ISIR laboratory at Sorbonne University. Her work focuses on how humans interact with virtual agents, drawing on known human-human interaction techniques (hence very close to my heart). To make these virtual agents (VAs) appear warmer and more competent, her team built models from human-human gestural data, trained the VAs on those models, and then asked participants to rate the VAs on warmth and competence – expecting that the VAs would be perceived positively in these areas. However, it turns out that what works between humans doesn't necessarily work when VAs do it. In fact, looking at the participants' evaluations, the VA with the most positive ratings was the one that didn't gesture at all; it just smiled. Once again, more research suggesting that we do not interact with computers the way we do with humans. Sorry, Nass.

This year's CONVERSATIONS also had a new format: panel discussions. As the organizer Asbjørn said when he introduced it, a lot of conferences have this format, so it was time to try it out for ourselves. The discussion was about how to evaluate chatbots, with representatives from both industry and academia. It was a great addition to the workshop – it felt like it was over way too quickly. The panel focused on the discrepancy between end-users' actual experience with chatbots and how that experience is evaluated, and between what businesses want and what they think their end-users want. And, of course, the age-old divergence between what businesses think is worth spending research money on and what the literature, researchers, and the field suggest should be done. The end result is a lot of "home-grown" surveys, because no one has the time or budget to create a universal one that accurately captures all the different aspects at play when interacting with chatbots. As you can imagine, we could have spent the entire two days on this topic alone.

On the second day, besides the interesting presentations, we engaged in two group work sessions. I was in Group 1, and our conclusion was that when you design chatbots (or any conversational agent), you need to be aware of the complexity of how humans interact and converse. Again, something close to my heart: don't reinvent the wheel! Interdisciplinary work is important in general, but especially in human-computer interaction, because we are re-creating human situations with a computer – and a lot of the time we lose sight of that.

I've noticed that I haven't mentioned anything about the presentations of the submitted papers. That is not because nothing interesting was presented, but rather that many of the presentations covered projects that were just about to start, or where the authors hadn't yet had a chance to properly analyze their data. So, more inspiring talks to come next year 😊
