Author ORCID Identifier
https://orcid.org/0000-0003-4538-0986
Date of Award
2024
Document Type
Thesis (Ph.D.)
Department or Program
Computer Science
First Advisor
Soroush Vosoughi
Abstract
The 21st century has seen dramatic shifts in how humans interact with information, with one another, and with their environment, driven primarily by the proliferation of online platforms and social media. These advances offer greater access to information and global connectivity, but they also present challenges such as information overload, misinformation, online harms, and biased reporting, all of which can negatively affect user well-being. This thesis examines the role of Large Language Models (LLMs), advanced forms of artificial intelligence that understand and generate human-like text, in enhancing well-being in the digital age. The study begins by exploring the potential of LLMs to detect early signs of mental health issues by analyzing emotional patterns in online posts; this approach serves as a scalable, content-agnostic tool for early intervention, promoting timely help-seeking behaviors. Additionally, the thesis investigates the use of LLMs in assessing threats to informational integrity, which is critical for trust, psychological safety, informed decision-making, and social cohesion. This includes identifying state-sponsored propaganda on social media and bias in news outlets. The research also examines the role of LLMs in improving online interactions, for example by identifying intellectual humility from social media data, a first step toward cultivating constructive communication and reducing toxic online behavior. Moreover, the thesis discusses the utility of LLMs in managing large-scale information consumption, facilitating the generation of community-level insights, and alleviating information overload for individual users.
Furthermore, the thesis addresses biases in task-specific evaluation metrics and inherent biases in LLMs themselves, such as a preference for longer or shorter summaries and the serial position effect, which favors information presented at the beginning or end of a sequence. Understanding and mitigating these biases is essential for ensuring fair and accurate LLM applications in decision-making contexts.
Finally, this thesis examines how to select LLMs for application across various tasks, providing a comprehensive overview to guide future work. This discussion is particularly valuable for researchers in other domains who are interested in leveraging LLMs in their own work.
Overall, the thesis highlights the significant potential of LLMs to foster a safer, more informed, and more positive online environment, thereby enhancing individual and societal well-being. It presents novel methodologies, datasets, and models that pave the way for effective LLM applications in the realm of digital well-being.
Recommended Citation
Guo, Xiaobo, "Leveraging Large Language Models for Enhancing Well-being in the Digital Age" (2024). Dartmouth College Ph.D Dissertations. 297.
https://digitalcommons.dartmouth.edu/dissertations/297