
Introduction

The control of agents in an environment is a difficult problem that has been approached in a variety of ways, from discretising the world into cells to giving obstacles repellent forces that push agents away from them, among many other methods. But in some situations, such as a robot in the real world, the agent has no internal representation of the world and can rely only on its visual sensors. In these cases, traditional navigation and path finding techniques cannot be applied.
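As a concrete illustration of the second of these approaches, the sketch below shows how an obstacle might exert a repellent force on an agent. It is only a minimal potential-field example; the function name, influence radius and gain are assumptions made for illustration and are not part of this project.

```python
import math

def repulsive_force(agent_pos, obstacle_pos, influence_radius=5.0, gain=1.0):
    """Force pushing the agent directly away from a single obstacle.

    The force is zero beyond influence_radius and grows steeply as the
    agent approaches the obstacle (a classic potential-field formulation).
    """
    dx = agent_pos[0] - obstacle_pos[0]
    dy = agent_pos[1] - obstacle_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist >= influence_radius:
        return (0.0, 0.0)
    magnitude = gain * (1.0 / dist - 1.0 / influence_radius) / dist ** 2
    return (magnitude * dx / dist, magnitude * dy / dist)

# Summing these forces over nearby obstacles, plus an attractive force
# towards the goal, yields a steering direction for the agent.
```

Note that this kind of method requires the positions of the obstacles to be known, which is exactly the information that is unavailable when the agent can only see the world through its sensors.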

The goal of this project is to investigate the use of neural networks to analyse the visual input of an agent in order to control its navigation. The use of neural networks was inspired by the difficulty of hand-coding algorithms for obstacle avoidance in an arbitrary environment, and by the success of the ALVINN system (described in [1]), which controlled the steering of a vehicle on real roads by analysing its visual input.
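To make the idea concrete, the sketch below shows the kind of mapping such a network performs: a low-resolution image is fed forward through a small network whose output units each stand for a steering direction, and the most active output unit is taken as the steering decision. The layer sizes, the random weights and the use of NumPy are assumptions made for illustration; this is not a description of ALVINN itself nor of the network used in this project.

```python
import numpy as np

class SteeringNet:
    """A small ALVINN-style feed-forward network: a low-resolution image
    in, one activation per steering direction out. The layer sizes are
    illustrative assumptions, not the exact ALVINN architecture."""

    def __init__(self, n_inputs=30 * 32, n_hidden=5, n_outputs=30):
        rng = np.random.default_rng(0)
        self.w1 = rng.normal(0.0, 0.1, (n_inputs, n_hidden))
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_outputs))

    def forward(self, image):
        # image: flattened greyscale pixel intensities in [0, 1]
        hidden = np.tanh(image @ self.w1)
        return np.tanh(hidden @ self.w2)

net = SteeringNet()
activations = net.forward(np.random.rand(30 * 32))
print(np.argmax(activations))  # index of the preferred steering unit
```

In practice such a network is trained on examples of images paired with the correct steering response, so that it learns the mapping from visual input to action rather than having it hand-coded.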

This would be achieved by creating a virtual city for agents to navigate, with the important restriction that the agents are given no information about the city other than what they see. The specific goals of the project are to train an agent to follow the road in much the same way as the ALVINN system, to train an agent to avoid obstacles, to train an agent to obey traffic lights, and finally to use the neural network for path finding.

This report does not explain the theory behind neural networks and it is assumed that the reader has a basic understanding of them.

It is hoped that this project will give an idea of which functions a neural network is well suited to when controlling an agent based on its visual perception, and which functions it is not.