Author ORCID Identifier
https://orcid.org/0000-0002-6625-2863
Date of Award
Summer 8-14-2023
Document Type
Thesis (Ph.D.)
Department or Program
Computer Science
First Advisor
Michael Casey
Abstract
Transfer learning is a machine learning technique founded on the idea that knowledge acquired by a model during “pretraining” on a source task can be transferred to the learning of a target task. Successful transfer learning can result in improved performance, faster convergence, and reduced demand for data. This technique is particularly desirable for the task of brain decoding in the domain of functional magnetic resonance imaging (fMRI), wherein even the most modern machine learning methods can struggle to decode labelled features of brain images. This challenge is due to the highly complex underlying signal, physical and neurological differences between brains, low data collection throughput, and other factors. Transfer learning is exciting in its potential to mitigate these challenges, but with this application still in its infancy, we must begin on the ground floor. The goals of this thesis were to design, implement, and evaluate a framework for pretraining and transfer learning on arbitrary fMRI datasets, demonstrate its performance with respect to the literature, and achieve substantive progress toward generalized pretrained models of the brain. The primary contribution is our novel framework, BEAT (Bi-directional Encoders for Auditory Tasks), which achieves these goals. The design and implementation of BEAT include adapting state-of-the-art deep learning architectures to sequences of fMRI data, as well as a novel self-supervised pretraining task called Next Thought Prediction and several novel supervised brain decoding tasks. To evaluate BEAT, we pretrained on Next Thought Prediction and performed transfer learning to the brain decoding tasks, each of which is specific to one of three fMRI datasets. Demonstrating significant benefits of transfer learning, BEAT decoded instrumental timbre from one of the fMRI datasets, which standard methods failed to decode, in addition to achieving improved downstream performance. Toward generalized pretrained models of the brain, BEAT learned Next Thought Prediction on one fMRI dataset and then successfully transferred that learning to a supervised brain decoding task on an entirely distinct dataset, with different participants and stimuli. To our knowledge, this is the first instance of transfer learning across participants and stimuli, a necessity for whole-brain pretrained models.
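The abstract names but does not define the Next Thought Prediction task. By analogy with next-sentence prediction in BERT-style bi-directional encoders, a minimal sketch of such a self-supervised objective might look like the following; every name and dimension here (NextThoughtPredictor, n_voxels, the window lengths) is an illustrative assumption, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn

class NextThoughtPredictor(nn.Module):
    """Hypothetical BERT-style encoder for pairs of fMRI windows.

    Sketch only: assumes a Transformer encoder over two windows of
    fMRI volumes and a binary head predicting whether window B
    directly follows window A in the same scan.
    """

    def __init__(self, n_voxels: int, d_model: int = 256, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Linear(n_voxels, d_model)  # project each timestep's voxels
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # CLS-style token
        self.head = nn.Linear(d_model, 2)  # follows / does not follow

    def forward(self, window_a: torch.Tensor, window_b: torch.Tensor) -> torch.Tensor:
        # window_a, window_b: (batch, timesteps, n_voxels) fMRI volumes
        x = self.embed(torch.cat([window_a, window_b], dim=1))
        cls = self.cls.expand(x.size(0), -1, -1)
        h = self.encoder(torch.cat([cls, x], dim=1))
        return self.head(h[:, 0])  # classify from the CLS position


# Self-supervised labels come free from temporal order: positive pairs are
# adjacent windows from one scan, negatives are windows drawn from elsewhere.
model = NextThoughtPredictor(n_voxels=1024)
a, b = torch.randn(8, 10, 1024), torch.randn(8, 10, 1024)
loss = nn.functional.cross_entropy(model(a, b), torch.randint(0, 2, (8,)))
loss.backward()
```

Under this reading, transfer learning would reuse the pretrained encoder weights with a fresh classification head for a supervised decoding task such as instrumental timbre; the self-supervised labels cost nothing beyond the scans themselves, which matters given the low data collection throughput the abstract describes.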
Recommended Citation
Paulsen, Sean, "Self-Supervised Pretraining and Transfer Learning on fMRI Data with Transformers" (2023). Dartmouth College Ph.D. Dissertations. 173.
https://digitalcommons.dartmouth.edu/dissertations/173
Included in
Artificial Intelligence and Robotics Commons, Computational Neuroscience Commons, Other Computer Sciences Commons