About the project
This project focuses on advancing the trustworthiness and usability of multi-robot systems, particularly in the context of swarm robotics.
With the rapid growth of autonomous systems, it is essential to ensure that swarms of agents can operate safely, effectively, and transparently in complex environments. Our research addresses these challenges through multi-agent machine learning and the design of novel interfaces and interaction models that facilitate seamless communication and collaboration between human operators and swarms. This includes developing user-centric approaches to improve situational awareness, control, and decision-making in complex tasks.
You will research human-swarm interaction, concentrating on trust modeling and system transparency. You will design and conduct experiments to gather data on human behavior and swarm performance using physical ground or aerial robots. You will develop adaptive interaction models that respond to real-time human trust and cognitive load levels.
You will join a cutting-edge, multidisciplinary research team of engineers, cognitive scientists, and robotics experts, and present your findings in high-impact journals and at leading conferences.
You will have access to multiple robotic platforms (aerial, quadruped, and wheeled) and can choose the one most suitable for your research. You will also have the opportunity to discuss real-world applications with industry partners and develop collaborations with international partners, such as the University of Texas at Austin, and travel to conferences and partner institutions.
Applicants who want to apply machine learning or human factors to the multi-robot domain are encouraged to apply. You should have excellent grades and strong programming skills, for example in Python, along with experience in (or willingness to learn) ROS.