Predatory pixels in cuddly clothing: Navya Sharan reflects on the hidden dangers of AI companions
- Rebecca Wald
- Jan 13
- 1 min read
Updated: Mar 16
As generative AI becomes interwoven into the fabric of everyday life, one of the most pressing questions for researchers in human-AI interaction is not just how we interact with these systems, but what those interactions are doing to us. Navya Sharan, a postdoctoral researcher in our group, reflects on this in the December edition of her brand-new newsletter, Research & Realities.

She explores the growing phenomenon of parasocial relationships with AI companions. The piece takes its cue from the Cambridge Dictionary's Word of the Year 2025, parasocial, and makes the case for why this concept has never felt more urgent.
The article tackles what makes AI companions uniquely dangerous compared to earlier one-way parasocial relationships: unlike a favourite TV character, today’s AI agents talk back. They respond, adapt, and simulate care. Yet they cannot meaningfully reciprocate the emotional investment users place in them. This gap is especially alarming when the users in question are children, who are increasingly turning to chatbots for emotional support, friendship, and even mental healthcare.
For a community dedicated to understanding our social interactions with digital agents, this piece lands close to home. Human-machine communication has long grappled with questions like: Do people treat machines like social beings? Do they form bonds with them? These questions are now playing out at scale, in homes and classrooms, with real and sometimes devastating consequences.
Read her full article here.