Author ORCID Identifier
Date of Award
2025
Document Type
Thesis (Ph.D.)
Department or Program
Computer Science
First Advisor
Adam Breuer
Abstract
Machine learning (ML) models and artificial intelligence (AI) systems are vulnerable to a wide range of adversarial and privacy attack vectors. Adversaries with varying capabilities target AI/ML systems either to degrade their functionality (i.e., adversarial attacks) or to leak sensitive information (i.e., privacy attacks). Building trustworthy AI/ML systems therefore requires characterizing these vulnerabilities and developing defense techniques. This dissertation comprises five published papers and one draft paper that analyze vulnerabilities of AI/ML systems and introduce novel defense techniques. The first dissertation work (IEEE CSF) systematizes model inversion (MI) privacy attacks against ML models, covering their attack taxonomy, foundational aspects, open challenges, and future directions. The second dissertation work (Springer SaSeIoT) explores the use of AI for robust user authentication (e.g., validating users under attack) on IoT devices, leveraging a continuous biometric: breathing patterns. The third dissertation work (IEEE SaTML) investigates the vulnerability of ML models (e.g., models trained on tabular data) to privacy attacks under a realistic setup with limited adversarial capabilities, showing that ML models can leak sensitive information even in these restricted scenarios.
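For intuition, the following is a minimal, illustrative sketch of a white-box model inversion attack of the general kind the dissertation systematizes: given a trained classifier, the adversary optimizes a candidate input to maximize the model's confidence in a target class, reconstructing a representative (and potentially sensitive) input. It assumes a PyTorch image classifier with inputs in [0, 1]; the function name, shapes, and hyperparameters are hypothetical and do not reproduce the dissertation's specific attacks.

import torch
import torch.nn.functional as F

def invert_class(model, target_class, input_shape=(1, 1, 28, 28),
                 steps=500, lr=0.1):
    # Start from a blank candidate input and let gradients shape it.
    model.eval()
    x = torch.zeros(input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Minimizing cross-entropy toward the target class maximizes the
        # model's confidence that x belongs to that class.
        loss = F.cross_entropy(logits, torch.tensor([target_class]))
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)  # keep the reconstruction in a valid pixel range
    return x.detach()

A stronger adversary would add priors or a generative model to regularize the reconstruction; this bare-bones variant only illustrates why confidence scores alone can leak information about training data.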
These new ML attack vectors then inspired a suite of novel adversarial and privacy defenses. Specifically, the fourth dissertation work (IEEE ICASSPW) mitigates state-of-the-art (SOTA) audio adversarial attacks through multi-layer neural networks with lateral competition. The fifth dissertation work (ECCV) introduces novel defenses against MI attacks based on the sparse coding architecture (SCA), which provides 1.1-18.3 times greater robustness to MI attacks without significantly compromising accuracy. This work in turn motivates the final piece of the dissertation, which further improves MI defense by designing privacy-preserving modeling techniques that systematically eliminate highly sensitive features during training to achieve even better robustness.
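For intuition on these sparse-coding-based defenses, below is a minimal sketch of the Locally Competitive Algorithm (LCA), the classic dynamics underlying sparse coding layers with lateral competition: each neuron is driven by how well its dictionary feature matches the input and laterally inhibits neurons with overlapping features, yielding a sparse code that discards much of the fine-grained input detail MI attacks try to recover. The random dictionary, step size, and threshold here are illustrative assumptions, not the dissertation's trained models.

import numpy as np

def lca_sparse_code(x, D, steps=100, lr=0.1, threshold=0.1):
    # Compute a sparse code a such that D @ a approximates x.
    # x: (d,) input vector; D: (d, n) dictionary with unit-norm columns.
    gram = D.T @ D - np.eye(D.shape[1])  # lateral competition weights
    drive = D.T @ x                      # feedforward drive to each neuron
    u = np.zeros(D.shape[1])             # membrane potentials
    for _ in range(steps):
        a = np.where(np.abs(u) > threshold, u, 0.0)  # thresholded activations
        u += lr * (drive - u - gram @ a)             # charging + inhibition
    return np.where(np.abs(u) > threshold, u, 0.0)

# Example: code a random input against a random unit-norm dictionary.
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0, keepdims=True)
a = lca_sparse_code(rng.normal(size=64), D)
print(f"{np.count_nonzero(a)} of {a.size} neurons active")

Because only a few neurons survive the competition, the layer's output is a lossy, sparse summary of its input, which is the intuition behind both the audio-attack robustness and the MI robustness results above.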
Original Citation
Chapters 1 and 2:
Dibbo, S.V., 2023, July. SoK: Model inversion attack landscape: Taxonomy, challenges, and future roadmap. In 2023 IEEE 36th Computer Security Foundations Symposium (CSF) (pp. 439-456). IEEE. doi:10.1109/CSF57540.2023.00027.
Chapter 3:
Dibbo, S.V., Cheung, W. and Vhaduri, S., 2022, January. On-phone CNN model-based implicit authentication to secure IoT wearables. In The Fifth International Conference on Safety and Security with IoT: SaSeIoT 2021 (pp. 19-34). Cham: Springer International Publishing. doi:10.1007/978-3-030-94285-4_2.
Chapter 4:
Dibbo, S.V., Chung, D.L. and Mehnaz, S., 2023, February. Model inversion attack with least information and an in-depth analysis of its disparate vulnerability. In 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML) (pp. 119-135). IEEE. doi:10.1109/SaTML54575.2023.00017.
Chapter 5:
Dibbo, S.V., Moore, J.S., Kenyon, G.T. and Teti, M.A., 2024, April. LCANets++: Robust audio classification using multi-layer neural networks with lateral competition. In 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW) (pp. 129-133). IEEE. doi:10.1109/ICASSPW62465.2024.10627668.
Chapter 6:
Dibbo, S.V., Breuer, A., Moore, J. and Teti, M., 2024, September. Improving robustness to model inversion attacks via sparse coding architectures. In European Conference on Computer Vision (pp. 117-136). Cham: Springer Nature Switzerland. doi:10.1007/978-3-031-72989-8_7.
Recommended Citation
DIBBO, SAYANTON, "SECURE AND TRUSTWORTHY AI/ML" (2025). Dartmouth College Ph.D. Dissertations. 397.
https://digitalcommons.dartmouth.edu/dissertations/397
Included in
Artificial Intelligence and Robotics Commons, Cybersecurity Commons, Information Security Commons
