Date of Award
Winter 2026
Document Type
Thesis (Master's)
Department or Program
Computer Science
First Advisor
Nikhil Singh
Second Advisor
John Bell
Third Advisor
Lorie Loeb
Abstract
Avatars are typically deployed as user-controlled proxies or conversational agents, yet their potential as autonomous companions that enrich media consumption remains underexplored. This work reconceptualizes the avatar’s role as an AI co-viewer companion, designed to transform solitary viewing into a co-experienced emotional journey. A novel multimodal pipeline is presented that enables the avatar to autonomously perceive, interpret, and generate emotionally resonant reactions to video content. The pipeline uses large multimodal models for video understanding and affective inference to generate first-person commentary that reflects the perceived emotional and narrative context. These outputs are synthesized into synchronized speech, facial expressions, and body gestures, and rendered through an avatar integrated in Unity. A user study demonstrates that the avatar significantly enhances viewers’ self-reported emotional connection to the content. This work contributes a complete architecture for a reactive avatar co-viewer and provides empirical evidence supporting its role in mediating emotional engagement with media and informing future avatar development.
Recommended Citation
Dong, Xinyi, "Emotionally Reactive AI Avatar Companion for Video Watching Experiences" (2026). Dartmouth College Master’s Theses. 265.
https://digitalcommons.dartmouth.edu/masters_theses/265