Attention-Enabled Reinforcement Learning for Control of Scalable Multi-Agent Systems

Authors

Dailey, Joseph A.

Issue Date

2024

Type

Thesis

Keywords

Attention, Autonomous Systems, Intelligent Control, Multi-agent Systems, Reinforcement Learning

Abstract

Multi-agent reinforcement learning (MARL) has been the subject of considerable interest and effort for its potential as a means of specifying behavior policies for multi-agent systems (MAS). Specifically, on-policy algorithms based on gradient estimation have achieved state-of-the-art performance on end-to-end control problems once thought beyond the scope of machine learning methods. In seeking to apply the benefits of MARL to practical control of physical autonomous systems, we must begin to account for three factors: (1) the presence of other autonomous elements in the environment configuration space, which may or may not be amenable to coordination; (2) non-idealities in sensing the configuration of the environment (e.g., locality and limited observability); and (3) variability in the number of sensed dynamical elements. The attention head, a relational machine-learning structure originally designed for the extraction of abstract natural-language features, is structurally well suited to addressing these challenges. This work presents a systematic argument and framework for the use of attention as an input layer, enabling neural policy models to be learned in changing multi-agent environments that are not well suited to other representations. In benchmark physical simulations, it is shown that such models achieve competitive performance on cooperative and mixed cooperative/competitive MAS control tasks as the agent cohort is arbitrarily changed. Prospective advantages of attention-based architectures for physical autonomous systems in select applications are discussed, as well as drawbacks related to explainability and the potential for emergent behavior.
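
To illustrate the architectural idea summarized above (attention as an input layer that tolerates a variable number of sensed entities), the following is a minimal sketch, not the thesis's actual model. It assumes PyTorch, and all class, variable, and parameter names (e.g., EntityAttentionEncoder, obs_dim) are hypothetical placeholders chosen for this example.

```python
# Minimal sketch: an attention input layer that pools a variable number of
# per-entity observations into a fixed-size embedding for a policy network.
# All names and dimensions are illustrative assumptions, not from the thesis.
from typing import Optional

import torch
import torch.nn as nn


class EntityAttentionEncoder(nn.Module):
    def __init__(self, obs_dim: int, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.entity_proj = nn.Linear(obs_dim, embed_dim)          # per-entity feature projection
        self.query = nn.Parameter(torch.randn(1, 1, embed_dim))   # learned pooling query
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, entity_obs: torch.Tensor,
                pad_mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        # entity_obs: (batch, n_entities, obs_dim); n_entities may differ between rollouts.
        # pad_mask:   (batch, n_entities), True where an entity slot is padding.
        keys = self.entity_proj(entity_obs)
        query = self.query.expand(entity_obs.size(0), -1, -1)
        pooled, _ = self.attn(query, keys, keys, key_padding_mask=pad_mask)
        return pooled.squeeze(1)  # (batch, embed_dim), independent of n_entities


# The same encoder accepts 3 or 7 sensed entities without changing its parameters.
enc = EntityAttentionEncoder(obs_dim=6)
print(enc(torch.randn(2, 3, 6)).shape)  # torch.Size([2, 64])
print(enc(torch.randn(2, 7, 6)).shape)  # torch.Size([2, 64])
```

Because the attention weights are computed over however many entity rows are present, the output embedding has a fixed size regardless of the agent cohort, which is the property the abstract highlights for changing multi-agent environments.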
