Date of Award

Spring 5-29-2024

Document Type

Thesis (Undergraduate)

Department

Computer Science

First Advisor

Soroush Vosoughi

Abstract

ChatGPT and other Large Language Models (LLMs) currently do a good job of generating novel text across many domains, but mathematics remains a consistent weakness in the accuracy of their answers. My research into various ways of manipulating these models has led me to conclude that a general closed-form solution to help LLMs with math is both unrealistic and likely impossible. LLMs can be trained more successfully as the problem space narrows, but the human user must recognize when an LLM is detrimental to a solution and a traditional programming approach should be taken instead.
