Date of Award
Spring 4-28-2023
Document Type
Thesis (Master's)
Department or Program
Engineering Sciences
First Advisor
Laura Ray
Second Advisor
Devin J. Balkcom
Third Advisor
Minh Q. Phan
Abstract
This thesis addresses hexapod robot motion control. Insect morphology and locomotion patterns inform the design of a robotic model, and motion control is achieved via trajectory planning and bio-inspired principles. Additionally, deep learning and multi-agent reinforcement learning are employed to train the robot's motion control strategy, with leg coordination achieved through a multi-agent deep reinforcement learning framework. The thesis makes the following contributions:
First, research on legged robots is synthesized, with a focus on hexapod robot motion control. Insect anatomy analysis informs the hexagonal robot body and three-joint robotic leg design, which are assembled using SolidWorks. Different gaits are studied and compared, and robot leg kinematics are derived and experimentally verified, culminating in a tripod (three-legged) gait for motion control.
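As an illustration of the leg kinematics referred to above, the following is a minimal sketch (not the thesis code) of forward kinematics for a three-joint leg; the coxa/femur/tibia joint naming and the link lengths are illustrative assumptions.

```python
# Minimal sketch: forward kinematics of a three-joint hexapod leg.
# Link lengths L1 (coxa), L2 (femur), L3 (tibia) are placeholder values.
import numpy as np

def leg_forward_kinematics(theta1, theta2, theta3, L1=0.05, L2=0.08, L3=0.12):
    """Return the foot-end position (x, y, z) in the leg's base frame.

    theta1: coxa yaw about the vertical axis
    theta2: femur pitch, theta3: tibia pitch relative to the femur (radians)
    """
    # Horizontal reach of the leg projected onto its vertical plane
    r = L1 + L2 * np.cos(theta2) + L3 * np.cos(theta2 + theta3)
    x = r * np.cos(theta1)
    y = r * np.sin(theta1)
    z = L2 * np.sin(theta2) + L3 * np.sin(theta2 + theta3)
    return np.array([x, y, z])

if __name__ == "__main__":
    print(leg_forward_kinematics(0.0, np.deg2rad(30), np.deg2rad(-60)))
```

Inverting this mapping (numerically or in closed form) yields the joint angles for a planned foot-end trajectory, which is the basis of the trajectory-planning control discussed later in the abstract.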
Second, an animal-inspired approach employs a central pattern generator (CPG) control unit based on the Hopf oscillator, enabling stable walking and climbing in complex environments. The robot's motion process is quantitatively evaluated in terms of displacement change and body pitch angle.
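The following is a minimal sketch of a single Hopf oscillator, the building block of the CPG control unit described above; the gain, amplitude, and frequency parameters are illustrative assumptions, not the thesis values.

```python
# Minimal sketch: one Hopf oscillator integrated with forward Euler.
# Its stable limit cycle of radius sqrt(mu) supplies the rhythmic signal
# a CPG maps to joint set-points.
import numpy as np

def hopf_step(x, y, dt=0.01, alpha=10.0, mu=1.0, omega=2.0 * np.pi):
    """One Euler step of the Hopf oscillator dynamics."""
    r2 = x * x + y * y
    dx = alpha * (mu - r2) * x - omega * y
    dy = alpha * (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

x, y = 0.1, 0.0                 # small perturbation away from the origin
trajectory = []
for _ in range(2000):           # 20 s of simulated time at dt = 0.01 s
    x, y = hopf_step(x, y)
    trajectory.append((x, y))
# x(t) converges to a sinusoid of amplitude sqrt(mu) and frequency omega;
# phase offsets between coupled oscillators encode the gait (e.g., tripod).
```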
Third, a value function decomposition algorithm, QPLEX, is applied to hexapod robot motion control. The QPLEX architecture treats each leg as a separate agent with local control modules that are trained using reinforcement learning. QPLEX outperforms decentralized approaches, achieving coordinated rhythmic gaits and increased robustness on uneven terrain. The significance of terrain curriculum learning is assessed, with QPLEX demonstrating superior stability and faster convergence.
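The sketch below illustrates the per-leg agent structure described above, with each leg's utility network operating on local observations. For brevity it uses a simple additive (VDN-style) mixer as a stand-in for QPLEX's duplex-dueling mixing network, and all shapes, names, and dimensions are hypothetical.

```python
# Minimal sketch: six per-leg utility networks plus a simple mixer.
# The additive mixer here is a stand-in, not the QPLEX mixing network.
import torch
import torch.nn as nn

class LegAgent(nn.Module):
    """Per-leg utility network: local observation -> Q-values over leg actions."""
    def __init__(self, obs_dim=12, n_actions=9, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

class HexapodTeam(nn.Module):
    """Six leg agents whose utilities are mixed into a joint action-value."""
    def __init__(self, n_legs=6, obs_dim=12, n_actions=9):
        super().__init__()
        self.agents = nn.ModuleList(
            [LegAgent(obs_dim, n_actions) for _ in range(n_legs)]
        )

    def forward(self, obs_per_leg):
        # obs_per_leg: (batch, n_legs, obs_dim)
        q_all = torch.stack(
            [agent(obs_per_leg[:, i]) for i, agent in enumerate(self.agents)],
            dim=1,
        )                                    # (batch, n_legs, n_actions)
        q_chosen = q_all.max(dim=-1).values  # greedy per-leg utilities
        q_joint = q_chosen.sum(dim=-1)       # additive mixer (QPLEX stand-in)
        return q_all, q_joint

team = HexapodTeam()
obs = torch.randn(4, 6, 12)                  # a batch of 4 joint observations
q_all, q_joint = team(obs)
```

Decomposing the joint action-value this way lets each leg act on local information at execution time while training still optimizes a single team objective, which is what enables the coordinated rhythmic gaits reported above.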
The foot-end trajectory planning method enables robot motion control through inverse kinematic solutions but has limited generalization capabilities for diverse terrains. The animal-inspired CPG-based method offers a versatile control strategy but is constrained to core aspects of gait generation. In contrast, the multi-agent deep reinforcement learning-based approach affords adaptable adjustment of the motion strategy, making it the superior control policy. These methods can be combined to develop a customized robot motion control policy for specific scenarios.
Recommended Citation
Zhan, Weishu, "MOTION CONTROL SIMULATION OF A HEXAPOD ROBOT" (2023). Dartmouth College Master’s Theses. 113.
https://digitalcommons.dartmouth.edu/masters_theses/113