Neurovisual Control in the Quake II Environment

Authors

Parker, Matt

Issue Date

2009

Type

Thesis

Keywords

backpropagation, genetic algorithm, neuroevolution

Abstract

An enormous variety of tasks and functions can be performed by humans using only a two-dimensional visual array as input. If an artificial intelligence (AI) controller could adequately harness the wealth of information that humans readily extract from visual input, then a large number of robotics and AI problems could be solved using a single camera as input. First-person shooter computer games with realistic graphics can be used to test visual AI controllers that may transfer to real-life robotics. In this research, the computer game Quake II is used to test and improve visual neural network controllers. Neural networks are promising for visual control because they can parse raw visual data and learn to recognize patterns. They can also be much faster to compute than complex mathematical vision algorithms, which is essential for real-time applications. In the first experiment, two different retinal layouts are connected to the same type of neural network: one retina imitates a human's clear-center/blurred-periphery acuity, and the other uses uniform acuity. In the second experiment, a Lamarckian learning scheme is devised that uses a hand-coded non-visual controller to help train agents through a mixture of backpropagation and neuroevolution. Lastly, the human element is removed entirely from the Lamarckian scheme by replacing the hand-coded non-visual controller with an evolved non-visual neural network. The learning techniques in this research are all successful advances in the field of visual control and can be applied beyond Quake II.
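The two retinal layouts compared in the first experiment can be illustrated with a minimal NumPy sketch. The abstract does not give the actual retina resolutions or frame sizes, so the block sizes, frame dimensions, and function names below are illustrative assumptions: a uniform retina averages the frame into equal blocks, while a foveated retina combines fine blocks over a central window with coarse blocks over the whole frame.

```python
import numpy as np

def uniform_retina(frame, blocks=8):
    """Uniform acuity: average the frame over a blocks x blocks grid."""
    h, w = frame.shape
    bh, bw = h // blocks, w // blocks
    trimmed = frame[:bh * blocks, :bw * blocks]   # drop remainder pixels
    return trimmed.reshape(blocks, bh, blocks, bw).mean(axis=(1, 3)).ravel()

def foveated_retina(frame, fovea=8, periphery=4):
    """Clear-center/blurred-periphery: fine blocks over a central window,
    coarse blocks over the whole frame for the surround.
    (Resolutions are illustrative, not the thesis's actual values.)"""
    h, w = frame.shape
    ch, cw = h // 4, w // 4
    center = frame[ch:3 * ch, cw:3 * cw]              # central window
    fine = uniform_retina(center, blocks=fovea)       # high-acuity fovea
    coarse = uniform_retina(frame, blocks=periphery)  # low-acuity surround
    return np.concatenate([fine, coarse])

frame = np.random.rand(120, 160)                      # stand-in game frame
print(foveated_retina(frame).shape)  # (80,) = 8*8 fovea + 4*4 periphery
```

Both layouts produce a flat activation vector that can feed the same neural network, which is what makes the acuity comparison controlled.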
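The Lamarckian scheme of the second experiment can be sketched as follows, under stated assumptions: the network here is a single tanh layer, the `teacher` function is a hypothetical stand-in for the hand-coded non-visual controller, and selection is by imitation error rather than by in-game performance. None of these specifics come from the thesis; the sketch only shows the structure of mixing supervised backpropagation (learned weights written back into the genome, hence Lamarckian) with evolutionary selection and mutation.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, x):
    # single-layer tanh controller: sensor inputs -> action outputs
    return np.tanh(x @ w)

def backprop_step(w, x, target, lr=0.5):
    """One gradient step on mean squared error toward the teacher's
    actions; the updated weights replace the genome (Lamarckian)."""
    y = np.tanh(x @ w)
    grad = x.T @ ((y - target) * (1 - y ** 2)) / len(x)
    return w - lr * grad

def teacher(x):
    # hypothetical hand-coded non-visual controller (illustrative only)
    return np.sign(x[:, :2])

pop = [rng.normal(size=(4, 2)) for _ in range(10)]   # weight genomes
for gen in range(20):
    x = rng.normal(size=(32, 4))                     # batch of sensor states
    t = teacher(x)
    # Lamarckian phase: backprop toward the teacher, keep learned weights
    pop = [backprop_step(w, x, t) for w in pop]
    # neuroevolution phase: select the best imitators, mutate to refill
    errs = [np.mean((forward(w, x) - t) ** 2) for w in pop]
    elite = [pop[i] for i in np.argsort(errs)[:5]]
    pop = elite + [w + 0.05 * rng.normal(size=w.shape) for w in elite]

best = min(pop, key=lambda w: np.mean((forward(w, x) - t) ** 2))
```

Replacing `teacher` with an evolved non-visual network, as in the final experiment, changes only where the training targets come from; the backprop-plus-evolution loop is unchanged.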

License

In Copyright (All Rights Reserved)