Author ORCID Identifier

https://orcid.org/0009-0001-4975-0262

Date of Award

Spring 6-11-2023

Document Type

Thesis (Undergraduate)

Department

Cognitive Science

First Advisor

Mark Thornton

Second Advisor

Landry Bulls

Abstract

Social communication is rich in redundancy: several modes of expression, such as facial expressions, body language, and speech, work together to convey meaning. While redundancy is typically relegated to the role of signal preservation, this study investigates how cross-modal redundancies establish performance context, focusing on unaided, solo performances. Drawing on information theory, I operationalize redundancy as predictability and use an array of machine learning models to featurize speakers' facial expressions, body poses, movement speeds, acoustic features, and spoken language from 24 TED Talks and 16 episodes of Comedy Central Stand-Up Presents. The analysis demonstrates that these performance types can be distinguished by cross-modal prediction, and it highlights the substantial amount of prediction supported by the signals' synchrony across modalities. Further research is needed to unravel the complexities of redundancy's place in social communication, paving the way for more effective and engaging communication strategies.
