Autonomous Unmanned Aerial Vehicles (UAVs)
Our main goal is to increase the autonomy of UAVs by exploring
state-of-the-art vision and control techniques. Vision tasks
include object detection and tracking, whereas control tasks include
motion planning, fault tolerance, and obstacle avoidance. Approaches
include learning-based methods (e.g., reinforcement learning),
optimal control, and fault-tolerant control, as in the sketch below.
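As a concrete illustration of the optimal-control ingredient, here is a minimal LQR sketch that regulates a single quadrotor axis modeled as a double integrator. The dynamics, weights, and time step are illustrative assumptions, not a controller from any specific project of ours.

```python
# Minimal LQR sketch for one quadrotor axis modeled as a double integrator
# (position, velocity). Model matrices and weights are illustrative
# assumptions, not a controller from our projects.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # state: [position, velocity]
B = np.array([[0.0],
              [1.0]])           # input: acceleration command
Q = np.diag([10.0, 1.0])        # penalize position error more than velocity
R = np.array([[0.1]])           # control-effort weight (assumed)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P  # optimal state-feedback gain, u = -K x

# Simulate regulation from an initial offset with simple Euler integration.
x, dt = np.array([1.0, 0.0]), 0.01
for _ in range(500):
    u = -K @ x
    x = x + dt * (A @ x + B @ u)
print("final state:", x)        # should approach the origin
```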
Smart/Intelligent Robots
Our main goal is to develop smart robots that help human beings
live better lives. We can use reinforcement learning techniques to
guide robots in learning aggressive maneuvers, and we can use LLMs
to help them better understand and interact with human beings
(a minimal reinforcement-learning sketch follows the topic list below).
Research topics include:
Deep reinforcement learning-based control
Deep reinforcement learning-based navigation
Vision-based navigation and control
Learning to achieve dynamic skills
Visual detection and tracking
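To illustrate the learning loop underlying the deep reinforcement learning topics above, here is a minimal tabular Q-learning sketch on a toy corridor environment. The environment, reward, and hyperparameters are illustrative assumptions, not part of any listed project.

```python
# Tabular Q-learning on a toy 1-D corridor: a minimal sketch of the
# learning loop behind the deep RL methods listed above. Environment,
# reward, and hyperparameters are illustrative assumptions.
import numpy as np

n_states, n_actions = 10, 2          # corridor cells; actions: 0=left, 1=right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0                            # start at the left end
    for _ in range(200):             # cap the episode length
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else -0.01  # small step penalty
        # standard Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:        # goal reached
            break

print(Q.argmax(axis=1))              # learned policy: should mostly move right
```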
Visual Localization, Mapping and Navigation
The main objective is to enable robots to localize themselves in any
environment, known or unknown, and to map the environment for
navigation purposes (a minimal visual-odometry sketch follows the
topic list below).
Research topics include:
Vision- or Event-based VO/SLAM
Radar/Lidar-based VO/SLAM
Dynamic and large-scale SLAM
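As a minimal illustration of the front end of a feature-based VO/SLAM pipeline, the sketch below estimates the relative pose between two frames using ORB features and the essential matrix. The camera intrinsics, image file names, and RANSAC threshold are illustrative assumptions.

```python
# Minimal two-frame visual-odometry sketch: ORB features plus the essential
# matrix, as in the front end of many feature-based VO/SLAM systems.
# Intrinsics K and the image paths are illustrative assumptions.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],   # assumed pinhole intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching for binary ORB descriptors.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then relative rotation R and
# unit-scale translation t between the two camera poses.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("relative rotation:\n", R, "\ntranslation direction:\n", t.ravel())
```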
Learning and Control
The main goal is to use deep learning and deep reinforcement learning
techniques to enhance the autonomy and intelligence of robots
(a minimal tracking-filter sketch follows the topic list below).
Research topics include:
Deep learning-based object detection and tracking
Scene understanding
Reinforcement learning-based navigation
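As a minimal illustration of the estimation core behind detection-based tracking (as in trackers such as SORT), the sketch below runs a constant-velocity Kalman filter over noisy 2-D detections. The motion model, noise levels, and detection values are illustrative assumptions.

```python
# Minimal constant-velocity Kalman filter for tracking an object's 2-D
# position from noisy detections. Noise levels are illustrative assumptions.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],    # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],     # we observe position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)            # process noise (assumed)
R = 1.0 * np.eye(2)             # measurement noise (assumed)

x = np.zeros(4)                 # initial state
P = 10.0 * np.eye(4)            # initial covariance

def kf_step(x, P, z):
    # Predict with the motion model, then correct with the detection z.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + Kg @ (z - H @ x)
    P = (np.eye(4) - Kg @ H) @ P
    return x, P

for z in [np.array([1.0, 1.0]), np.array([2.1, 1.9]), np.array([3.0, 3.2])]:
    x, P = kf_step(x, P, z)
print("estimated position and velocity:", x)
```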
Continuum/Soft Robotics
The main goal is to develop navigation and control algorithms
for soft/continuum robots. We also aim to enhance the grasping
skills of soft/continuum robots, especially in complex scenarios
(a minimal kinematics sketch follows the topic list below).
Research topics include:
Fast control of continuum robots
Reinforcement learning for continuum robots
Aerial continuum arms
Learning to grasp in complex environments
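As a minimal illustration of continuum-robot modeling, the sketch below computes the forward kinematics of a single segment under the widely used piecewise-constant-curvature (PCC) assumption. The curvature, bending-plane angle, and segment length are illustrative values.

```python
# Minimal forward-kinematics sketch for one continuum-robot segment under
# the common piecewise-constant-curvature (PCC) assumption: the segment
# bends with curvature kappa in a plane at angle phi. Parameter values
# are illustrative assumptions.
import numpy as np

def pcc_transform(kappa, phi, length):
    """Homogeneous transform from a segment's base frame to its tip."""
    c, s = np.cos(phi), np.sin(phi)
    if abs(kappa) < 1e-9:                     # straight-segment limit
        T = np.eye(4)
        T[2, 3] = length
        return T
    theta = kappa * length                    # total bending angle
    ct, st = np.cos(theta), np.sin(theta)
    # Rotate into the bending plane, apply the in-plane circular arc,
    # and rotate back (standard PCC closed form).
    return np.array([
        [c*c*(ct - 1) + 1, s*c*(ct - 1),      c*st, c*(1 - ct)/kappa],
        [s*c*(ct - 1),     c*c*(1 - ct) + ct, s*st, s*(1 - ct)/kappa],
        [-c*st,            -s*st,             ct,   st/kappa],
        [0, 0, 0, 1],
    ])

tip = pcc_transform(kappa=2.0, phi=np.pi/4, length=0.1)
print("tip position:", tip[:3, 3])
```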