Author ORCID Identifier

https://orcid.org/0009-0008-7409-2303

Date of Award

Spring 5-28-2024

Document Type

Thesis (Undergraduate)

Department

Computer Science

First Advisor

Jonathan Phillips

Second Advisor

Soroush Vosoughi

Abstract

The replication of human concept representation is a critical task in the pursuit of artificial general intelligence. With the recent influx of large language models (LLMs) that demonstrate text-generation capabilities nearly on par with humans, the question remains whether these models can capture concepts within language. We examine this question by exploring differences between human and LLM concept representation across similarity spaces. We find that, while concept representation within LLMs partially mimics human concept representation, LLMs are greatly limited by their dependence on semantic information and therefore cannot develop an understanding of human social code or morality. Our results suggest that limitations imposed by the model design of LLMs will prevent full replication of human concept representation.
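
Comparisons across similarity spaces of the kind the abstract describes are commonly operationalized as a representational similarity analysis: build a pairwise similarity matrix from model embeddings, build another from human judgments, and correlate the two. The sketch below illustrates that general idea only; the concept list, the random stand-in embeddings and ratings, and the Spearman-based comparison are hypothetical assumptions, not the thesis's actual data or method.

# Hypothetical sketch: correlating an LLM's similarity space with
# human similarity judgments (RSA-style). All data here are stand-ins.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Stand-in LLM embeddings for 6 concepts (rows) in a 16-d space.
concepts = ["dog", "cat", "wolf", "fairness", "honesty", "betrayal"]
llm_embeddings = rng.normal(size=(len(concepts), 16))

# Stand-in human pairwise similarity ratings on a 0-1 scale,
# expanded into a symmetric matrix with ones on the diagonal.
n_pairs = len(concepts) * (len(concepts) - 1) // 2
human_sim = squareform(rng.uniform(0, 1, size=n_pairs))
np.fill_diagonal(human_sim, 1.0)

# LLM similarity space: cosine similarity between embedding pairs
# (pdist returns cosine *distance*, so subtract from 1).
llm_sim = 1.0 - squareform(pdist(llm_embeddings, metric="cosine"))

# Compare the two spaces over unique pairs (upper triangle only).
iu = np.triu_indices(len(concepts), k=1)
rho, p = spearmanr(llm_sim[iu], human_sim[iu])
print(f"Spearman rho between LLM and human similarity spaces: {rho:.3f} (p={p:.3f})")

With real data, a high rank correlation would indicate that the model's similarity space mirrors human judgments for those concepts; the abstract's claim is that this alignment holds only partially, breaking down for socially and morally loaded concepts.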
