Resilient Multi-Modal Robot Localization in Perception Degraded Environments

Authors

Khattak, Shehryar M. K.

Issue Date

2019

Type

Dissertation

Keywords

aerial robotics, extreme environments, localization, robot perception, SLAM, subterranean robotics

Abstract

Resilient odometry estimation for autonomous systems, and especially small aerial robots, is one of the core components required to enable these agile and versatile robots to undertake increasingly demanding roles previously reserved for humans. An ever-increasing set of applications requires aerial robots to navigate through GPS-denied environments while relying on their on-board sensing for localization. Visible-light camera sensors have been a popular choice, as their affordable cost, low power consumption, light weight, and typically high information content make them particularly suitable for small flying systems. Similarly, miniaturized LiDAR systems are actively and increasingly utilized within the aerial robotics domain. However, such sensing modalities cannot provide informative data in all conditions. In particular, visible-light cameras cannot provide accurate data in poorly illuminated or dark environments, while LiDAR systems are sensitive to geometric self-similarity in the environment. Furthermore, both modalities suffer in the presence of airborne visual obscurants such as dust, smoke, and fog. The degradation of data quality in such conditions can adversely affect the reliability of robot pose estimation processes relying on these sensing modalities.

In contrast, Long Wave Infrared (LWIR) thermal vision systems are unaffected by darkness and can penetrate many types of obscurants. They can therefore serve either as a standalone solution or as complementary sensing incorporated into a multi-modal fusion approach that exploits the available sensing diversity to provide resilient robot localization. In response, this dissertation presents a compendium of approaches that in combination aim to comprehensively address the challenge of resilient robot localization in perception-degraded environments.

First, as an initial investigation, this work proposes augmenting visual camera data with short-range dense depth data to extract multi-modal features that improve robot localization capabilities. For this purpose, an odometry estimation framework is presented that fuses multi-modal features, generated from visual and depth data, with inertial data in an extended Kalman filter for robot localization and mapping in low-illumination environments (a simplified fusion sketch follows below). Second, this work investigates thermal cameras as a complementary sensing modality to visual cameras and proposes a visual-thermal landmark and inertial fusion method. This approach makes selective use of the most informative areas in visual and thermal images to make the odometry estimation process robust. Focusing specifically on the problem of enabling robust odometry in completely dark and obscurant-filled settings, this dissertation further proposes a novel keyframe-based thermal-inertial odometry estimation framework tailored to the exact data and concepts of operation of thermal cameras, demonstrating the viability of thermal vision as a standalone localization solution for micro aerial robots in challenging environments. Beyond the role of thermal vision systems in degraded environments and the associated need for novel odometry algorithms to exploit their data, this work further investigates the potential of multi-modal fusion, specifically the integration of thermal and visual cameras and range sensors alongside inertial cues.
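The following is a minimal, self-contained sketch of the kind of extended-Kalman-filter fusion described above, not the dissertation's actual implementation: the 2D state, motion model, measurement model, and noise parameters are all illustrative assumptions chosen for brevity.

```python
# Minimal EKF sketch: inertial-style prediction plus correction from a
# pose measurement derived from matched multi-modal (visual/depth or
# thermal) features. State is simplified to [x, y, yaw].
import numpy as np

class SimpleEKF:
    def __init__(self):
        self.x = np.zeros(3)        # state: [x, y, yaw]
        self.P = np.eye(3) * 0.1    # state covariance

    def predict(self, v, w, dt, q=0.01):
        """Propagate with body-frame speed v and yaw rate w
        (stand-ins for integrated inertial data)."""
        x, y, yaw = self.x
        self.x = np.array([x + v * np.cos(yaw) * dt,
                           y + v * np.sin(yaw) * dt,
                           yaw + w * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                      [0.0, 1.0,  v * np.cos(yaw) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + np.eye(3) * q

    def update(self, z, r=0.05):
        """Correct with a pose measurement z = [x, y, yaw] obtained
        from feature-based visual/thermal odometry."""
        H = np.eye(3)                   # direct pose observation
        innov = z - self.x
        innov[2] = (innov[2] + np.pi) % (2 * np.pi) - np.pi  # wrap yaw
        S = H @ self.P @ H.T + np.eye(3) * r
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ innov
        self.P = (np.eye(3) - K @ H) @ self.P

# Usage: predict at inertial rate, update when a feature-based pose
# estimate becomes available.
ekf = SimpleEKF()
ekf.predict(v=0.5, w=0.1, dt=0.01)
ekf.update(np.array([0.006, 0.0, 0.001]))
```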
To enable long-term and large-scale operations within environments where LiDAR odometry estimation can become degenerate, a loosely-coupled odometry estimation approach is proposed. The proposed approach utilizes visual-or-thermal and inertial odometry estimates to provide point-cloud alignment priors, as well as to propagate LiDAR estimates when the underlying LiDAR odometry and mapping process is detected to be ill-conditioned (a sketch of such a fallback appears below). Furthermore, an initial investigation of tight fusion between the same modalities is presented, emphasizing the relevant new optimization architecture. Finally, this work simultaneously emphasizes extensive verification and field testing. The localization solutions presented have been at the core of a collection of research activities in perception-degraded environments such as underground mines. By utilizing and generating ground-truth data, the proposed methods are thoroughly evaluated with respect to their resilience and performance in a multitude of different situations.
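Below is a hypothetical sketch of how ill-conditioning in scan registration can be detected via eigenvalue analysis of the registration problem's normal matrix, with a visual-or-thermal-inertial prior used as the fallback. The function names, threshold value, and fallback logic are illustrative assumptions, not the dissertation's implementation.

```python
# Sketch: detect a degenerate LiDAR registration problem and fall back
# on a visual/thermal-inertial pose prior, in the spirit of the
# loosely-coupled scheme described above.
import numpy as np

def is_degenerate(J, eig_threshold=100.0):
    """J: stacked Jacobian of registration residuals w.r.t. the 6-DoF
    pose. Small eigenvalues of J^T J indicate directions the scan
    geometry does not constrain (e.g. along a featureless tunnel
    axis); the threshold here is an arbitrary placeholder."""
    eigvals = np.linalg.eigvalsh(J.T @ J)
    return eigvals.min() < eig_threshold

def fused_estimate(lidar_pose, prior_pose, J):
    """Prefer the LiDAR solution when well-conditioned; otherwise
    propagate the visual-or-thermal-inertial prior instead."""
    return prior_pose if is_degenerate(J) else lidar_pose
```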
