This project has moved.
Project Description
Mona is a goal-seeking artificial neural network that learns a variety of tasks, including how to solve mazes and forage for food and water.

A simulated environment included in the package is a block world containing mushrooms and pools of water. Robot-like creatures called muzzes forage in this world guided by neural network brains that are capable of learning from experience and reward.

Mona has a simple functional interface with the environment: all knowledge of the state of the environment is absorbed through senses. Responses are expressed to the environment with the goal of eliciting sensory inputs which are internally associated with the reduction of needs. Events can be drawn from sensors, responses, or the firing states of component neurons, calling for three types of neurons: those attuned to sensors are receptors, those associated with responses are motors, and those mediating other neurons are mediators. A mediator presides over a cause-and-effect pair of neuron firing events, retaining state information and driving need-based motive through the network to activate motor responses that will move the system toward goals that reduce needs. Mediators can be structured in hierarchies representing environmental contexts.
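For a concrete picture, here is a minimal C++ sketch of that structure (the core code is C++). All class and member names below are illustrative assumptions, not the actual Mona classes:

// Illustrative sketch only: these are not the actual Mona classes.
#include <string>
#include <vector>

struct Neuron {
    std::string id;
    bool firing = false;          // current firing state (an "event")
    virtual ~Neuron() = default;
};

// Receptor: fires when its associated sensory pattern is present.
struct Receptor : Neuron {
    std::vector<double> sensorPattern;
};

// Motor: firing expresses a response to the environment.
struct Motor : Neuron {
    int response = 0;
};

// Mediator: presides over a cause->effect pair of neuron firing events,
// retaining state and channeling need-based motive toward motor responses.
struct Mediator : Neuron {
    Neuron* cause = nullptr;      // event that enables the mediator
    Neuron* effect = nullptr;     // event the mediator drives toward
    bool enabled = false;         // set when the cause event fires
    double motive = 0.0;          // need-derived drive propagated to motors
};

int main() {
    Receptor seeWater;  seeWater.id = "see-water";
    Motor    drink;     drink.id = "drink";
    Mediator quench;    quench.id = "quench-thirst";
    quench.cause  = &seeWater;    // cause: water is sensed
    quench.effect = &drink;       // effect: the drink response fires
    quench.motive = 1.0;          // thirst need drives this mediator
    return 0;
}

Hierarchies arise when a mediator's cause or effect is itself another mediator, which is how environmental contexts are represented.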

For further details see the white paper at tom.portegys.com/research/MonaWhitepaper.pdf. The core code is C++. Java, C# and OpenGL are used for various applications. See Readme.txt for instructions on how to build and run on UNIX and Windows.

News:

April 2009:
Version 4.0, available in the source code section, features a neural-network-controlled simulated robot that runs with Microsoft Robotics Studio. See demo at tom.portegys.com/research/atani.pdf.

July 2012:
Version 5.1 features:
1. Goal learning based on changes in need values, an alternative to explicitly specifying goals (a rough sketch follows this list).
2. Synergistic instinct/experiential learning demonstrated in the Minc (Mona instinctive creatures) T-maze world.
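A rough sketch of the need-delta idea in item 1, assuming a single scalar need value; the types and the maybeLearnGoal function are hypothetical and do not appear in the Version 5.1 code:

// Hypothetical sketch of goal learning from need-value changes;
// not the actual Version 5.1 code.
#include <cstdio>

struct Event { int id; };

// If an event is followed by a drop in a need value, remember that event
// as a goal for the need, rather than wiring the goal in explicitly.
void maybeLearnGoal(const Event& e, double needBefore, double needAfter) {
    double delta = needBefore - needAfter;   // positive = need was reduced
    if (delta > 0.0) {
        std::printf("event %d becomes a goal worth %.2f for this need\n",
                    e.id, delta);
    }
}

int main() {
    Event drinkOutcome{42};
    maybeLearnGoal(drinkOutcome, /*needBefore=*/1.0, /*needAfter=*/0.2);
    return 0;
}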

September 2013:
Added white paper to documentation section.

April 2015:
Version 5.2 features:
The Pong game world: an environment that combines unpredictable events with sequential actions, modeled as a nondeterministic finite automaton.
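To illustrate what a nondeterministic finite automaton means here, the toy C++ sketch below maps a (state, event) pair to a set of possible next states; the states and events are invented for illustration and are not taken from the Pong world definition:

// Toy nondeterministic finite automaton; states and events are illustrative only.
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <utility>

using State = std::string;
using Event = std::string;

int main() {
    // A (state, event) pair may lead to more than one next state:
    // the unpredictable bounce combined with the required action sequence.
    std::map<std::pair<State, Event>, std::set<State>> delta = {
        {{"ball-approaching", "move-paddle"}, {"ball-returned", "ball-missed"}},
        {{"ball-returned",    "wait"},        {"ball-approaching"}},
    };
    for (const auto& [key, nextStates] : delta) {
        std::cout << key.first << " --" << key.second << "--> ";
        for (const auto& s : nextStates) std::cout << s << " ";
        std::cout << "\n";
    }
    return 0;
}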

References:

T. E. Portegys, "Goal-Seeking Behavior in a Connectionist Model", Artificial Intelligence Review, 16(3):225-253, November 2001.
T. E. Portegys, "Learning Environmental Contexts in a Goal-Seeking Neural Network", Journal of Intelligent Systems, 16(2), 2007.
T. E. Portegys, "Instinct and Learning Synergy in Simulated Foraging Using a Neural Network", The 2007 International Conference on Artificial Intelligence and Pattern Recognition (AIPR-07).
T. E. Portegys, "A Maze Learning Comparison of Elman, Long Short-Term Memory, and Mona Neural Networks", Neural Networks, dx.doi.org/10.1016/j.neunet.2009.11.002.
T. E. Portegys, "Discrimination Learning Guided By Instinct", International Journal of Hybrid Intelligent Systems, 10:129-136, 2013.
