
                                   README file

A .perl file (AgentProbaCameraS0.perl) has been set up as an easy demonstration of implicit cooperation (through emergence) in a system where agents have vision-based perception, may exhibit antagonistic behaviors, and whose global behavior is to gather objects (see our papers for details: http://www-iiuf.unifr.ch/pai/axe/Contributions.html).

The vision-based perception is provided by the vision library. The main advantage of vision is that it significantly reduces the number of iterations needed to complete a task and brings the simulation closer to real robotics applications. We consider only probabilistic agents, that is, agents whose capability to collect an object is governed by a probability law and whose probability of dropping an object onto a non-empty cell is always 1. Vision guides the agent towards the object that is closest to its optical axis (the easiest choice to implement).
As the grid is not the best environment for simulating vision, because the smallest rotation step is PI/4, we had to enhance the perceptive capabilities of our agents by giving them the ability to perceive the 3 adjacent cells in front of them (according to their direction). This fixes the problem in most cases. A continuous world would therefore be better suited to our purposes.
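The target-selection rule above can be sketched as follows. This is a minimal illustration, not the library's code: the function name and the (x, y) object list are assumptions; the agent's direction is an angle in radians, a multiple of PI/4 as in the grid world.

```python
import math

# Hypothetical sketch: among the visible objects, pick the one whose
# bearing deviates least from the agent's optical axis.
def closest_to_axis(agent_x, agent_y, direction, objects):
    """`objects` is a list of (x, y) grid positions; `direction` is the
    agent's heading in radians (a multiple of PI/4 on the grid)."""
    def deviation(obj):
        ox, oy = obj
        bearing = math.atan2(oy - agent_y, ox - agent_x)
        # Smallest absolute angular difference, wrapped into [-PI, PI).
        diff = (bearing - direction + math.pi) % (2 * math.pi) - math.pi
        return abs(diff)

    return min(objects, key=deviation)
```

An agent heading along the positive x axis (direction 0) and seeing objects at (5, 0), (0, 5) and (3, 3) would thus be guided towards (5, 0), which lies exactly on its optical axis.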

The .perl file is set up to run a number of experiments, scaling with the number of agents:
- 10 for instance (here you can append other values)
- if you want to test different settings for the location of objects in the environment, just modify "seed_loc_obj"
The environment is exactly the same at the outset of each experiment: same number of objects, same location of the objects in the environment, and same location of the agents. Moreover, we make the following assumptions: at the outset there is always at most one object per cell, and an agent cannot be located on a cell containing an object. During an experiment, two agents cannot occupy the same cell.
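The role of "seed_loc_obj" can be sketched as seeded sampling without replacement, which guarantees both an identical layout at the outset of every experiment and at most one object per cell. This is an assumed reconstruction; the function and parameter names are illustrative, not the .perl file's actual code.

```python
import random

# Hypothetical sketch: a fixed seed reproduces the same object layout
# in every experiment (one object per cell at most).
def place_objects(seed_loc_obj, n_objects, world_size_x, world_size_y):
    rng = random.Random(seed_loc_obj)
    cells = [(x, y) for x in range(world_size_x) for y in range(world_size_y)]
    # Sampling without replacement ensures distinct cells.
    return rng.sample(cells, n_objects)

# The same seed always yields the same layout:
layout_a = place_objects(42, 40, 40, 40)
layout_b = place_objects(42, 40, 40, 40)
assert layout_a == layout_b
```

Changing the seed value produces a different, but equally reproducible, initial placement of the objects.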

In the simulation, objects are shown in red, free agents (not carrying an object) in green, and agents carrying an object in blue. An agent cannot carry more than one object.

At the end of each experiment, the number of iterations needed by the set of agents to complete the task is reported. The notion of completion is defined by the observer and, for all experiments, consists in obtaining a single stack in the environment, even if some agents are still carrying an object. You can modify this criterion by changing the flag "CHOITOSTOP" (in the .perl file) from 1 to 0: the experiment will then stop only once all the objects are stacked up.
There is a limit (referred to as "THRESHOLD" in the .perl file) on the number of iterations an experiment may take, beyond which we consider that the experiment will never succeed. This provides an automatic stop. It is of course the drawback of our approach: some configurations can never be completed.
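The combined stopping logic can be sketched as below. This is an assumed reconstruction: the function and its defaults are illustrative, while CHOITOSTOP and THRESHOLD mirror the flags named in the .perl file.

```python
# Hypothetical sketch of the stopping rule: stop when the completion
# criterion is met, or give up once THRESHOLD iterations are exceeded.
def experiment_done(stacks, carried, iteration, choitostop=1, threshold=100000):
    """`stacks` is the number of stacks on the grid, `carried` the number
    of objects still held by agents."""
    if iteration > threshold:
        return True  # configuration deemed impossible to complete
    if choitostop == 1:
        return stacks == 1  # a single stack; agents may still carry objects
    return stacks == 1 and carried == 0  # every object stacked up
```

With CHOITOSTOP set to 1 a run ends as soon as a single stack exists, even if agents still hold objects; with 0 it continues until those objects are stacked as well.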

When running an experiment, three windows will appear on the screen:
- one displaying the evolution of the number of stacks in the environment (at the outset, the number of stacks is equal to the number of objects).
- one displaying the evolution of the size of the biggest stack (useful to know how the biggest stack is built).
- one displaying a histogram of the stacks once the number of stacks in the environment drops to ten or fewer; such information is uninteresting at the beginning of the experiment. The stacks in the histogram appear according to their position (x,y) on the 2D grid converted into an offset (y times worldSizeX + x): the first stack in the histogram is the one whose location on the grid is closest to the upper left corner.
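The histogram ordering described above follows directly from the offset formula. A minimal sketch (function name and sample positions are illustrative):

```python
# A stack at grid position (x, y) is ordered by its linear offset
# y * worldSizeX + x, so the stack nearest the upper left corner
# of the grid appears first in the histogram.
def stack_offset(x, y, world_size_x=40):
    return y * world_size_x + x

stacks = [(12, 3), (0, 0), (5, 20)]
ordered = sorted(stacks, key=lambda p: stack_offset(p[0], p[1]))
# (0, 0) sorts first: it is the upper left corner of the grid.
```

For the default 40-by-40 world, a stack at (12, 3) gets offset 3 * 40 + 12 = 132.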

Once the application is built (after running 'make'), just type the name of the .perl file and the demo will run with the appropriate settings: the size of the world is 40 by 40 (a bounded grid) and the number of objects is 40.

