Optimal Strategy Imitation Learning from Differential Games

Authors

Silveria, Niki T.

Issue Date

2017

Type

Thesis

Keywords

autonomous vehicles, differential games, imitation learning

Abstract

The ability of a vehicle to navigate safely through any environment relies on its driver having an accurate sense of the future positions and goals of other vehicles on the road. A driver does not navigate around where an agent is, but where it is going to be. To avoid collisions, autonomous vehicles should be equipped with the ability to derive appropriate controls using future estimates for other vehicles, pedestrians, or other intentionally moving agents, in a manner similar to or better than human drivers. Differential game theory provides one approach to generating a control strategy by modeling two players with opposing goals. Environments faced by autonomous vehicles, such as merging onto a freeway, are complex, but they can be modeled and solved as differential games using discrete approximations; these games yield an optimal control policy for both players and can be used to model adversarial driving scenarios rather than average ones, so that autonomous vehicles will be safer on the road in more situations. Further, discrete approximations of solutions to complex games that are computationally tractable and provably asymptotically optimal have been developed, but they may not produce usable results in an online fashion. To retrieve an efficient, continuous control policy, we use deep imitation learning to model the discrete approximation of a differential game solution. We successfully learn the policies generated for two games of differing complexity, a fence escape game and a merging game, and show that the imitated policy generates control inputs faster than the differential-game-generated policy.
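As a rough illustration of the imitation-learning step described above, the sketch below (hypothetical code, not taken from the thesis) performs behavioral cloning: a small neural network policy is regressed onto the optimal controls produced offline by a differential game solver, so that at run time controls come from a fast forward pass rather than re-solving the game. The state and control dimensions, network sizes, and placeholder data are assumptions made for illustration only.

    # Minimal behavioral-cloning sketch (hypothetical; dimensions and data are stand-ins).
    import torch
    import torch.nn as nn

    # Placeholder dataset: in practice, states and controls would come from the
    # discrete approximation of the differential game solution (e.g. the fence
    # escape or merging game); here they are random stand-ins.
    states = torch.randn(1024, 4)            # e.g. positions and velocities of both players
    optimal_controls = torch.randn(1024, 2)  # e.g. acceleration and steering commands

    # Small MLP mapping game state to a continuous control estimate.
    policy = nn.Sequential(
        nn.Linear(4, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 2),
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Supervised regression onto the solver-generated ("expert") controls.
    for epoch in range(200):
        optimizer.zero_grad()
        loss = loss_fn(policy(states), optimal_controls)
        loss.backward()
        optimizer.step()

    # After training, policy(state) returns a control estimate far faster
    # than solving the differential game online.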

License

Creative Commons Attribution-NonCommercial 4.0 United States
