Author ORCID Identifier

https://orcid.org/0009-0004-4130-6599

Date of Award

12-2025

Document Type

Thesis (Master's)

Department or Program

Computer Science

First Advisor

Lorie Loeb

Second Advisor

Nikhil Singh

Third Advisor

John P Bell

Abstract

Smart-home control on mixed-reality headsets such as Apple Vision Pro often relies on disruptive, application-based paradigms: a smartphone app or a windowed virtual interface. These methods create a “mode switch” that imposes cognitive load and pulls users away from their primary tasks. We present VisionGlow, a minimal-disruption spatial interaction technique for Vision Pro. VisionGlow represents devices as spatially anchored “orbs.” To control a device, the user looks at its orb and performs a pinch gesture, which invokes a compact, contextual control panel. We conducted a within-subjects study (N=18) comparing VisionGlow against two baselines: the standard Apple Home app on a smartphone and the windowed Home app on Vision Pro. Participants performed control tasks while engaged in a primary video-watching task presented within the headset. Our results show that VisionGlow was significantly faster, reduced subjective workload (NASA-TLX) by nearly half, and was rated as significantly more usable (SEQ, UMUX). A majority of participants (67%) ranked VisionGlow as their most preferred interface. This work demonstrates that in-situ, gaze-and-pinch interfaces can significantly reduce disruption in smart-home control, offering a viable, low-friction alternative to current windowed and mobile-first paradigms.
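
The gaze-and-pinch mechanism described above maps onto visionOS's native input model, where looking at an input-targeted RealityKit entity and pinching is delivered by the system as a spatial tap. The sketch below is a minimal illustration of how such an orb could be wired up; it is not the thesis's actual implementation, and the `OrbView` name, orb placement, and placeholder panel are assumptions for demonstration only.

```swift
import SwiftUI
import RealityKit

// Minimal sketch (not the thesis's implementation) of a gaze-and-pinch
// "orb" on visionOS: gazing at an input-targeted entity and pinching
// arrives as a spatial tap gesture.
struct OrbView: View {
    @State private var showPanel = false   // placeholder panel state

    var body: some View {
        RealityView { content in
            // A small sphere standing in for a device orb, anchored in space.
            let orb = ModelEntity(
                mesh: .generateSphere(radius: 0.03),
                materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
            )
            orb.position = [0, 1.2, -1]                    // ~eye level, 1 m away
            orb.components.set(InputTargetComponent())     // accept pinch input
            orb.components.set(CollisionComponent(
                shapes: [.generateSphere(radius: 0.03)]))  // hit-test volume
            orb.components.set(HoverEffectComponent())     // highlight on gaze
            content.add(orb)
        }
        // Gaze at the orb and pinch: the system routes the tap here.
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { _ in
            showPanel.toggle()
        })
        // Stand-in for a compact, contextual control panel.
        .overlay(alignment: .bottom) {
            if showPanel {
                Text("Device controls (placeholder)")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```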
