1. Declare the state variables of patches, turtles, and the environment:
patches-own [ ... ]
turtles-own [ ... ]
globals [ ... ]
Of course the state of an agent can represent anything from
physical to social attributes. Often the state of an agent is associated with
visible attributes such as color, label, and position.
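For example, in a hypothetical heat-diffusion model the declarations might look like this (the variable names are illustrative, not part of any particular model):

```netlogo
patches-own [ temperature ]   ; each patch stores a local temperature
turtles-own [ energy ]        ; each turtle stores its remaining energy
globals [ ambient-temp ]      ; a value shared by all agents
```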
2. Declare a procedure to initialize the states of all agents. This procedure
will be associated with a user interface button:
to init-agents
  ca           ; clear everything from the previous run
  init-patches
  init-turtles
end
3. Declare a procedure that updates the state of each agent. This procedure will be associated with a "forever" button on the user interface; the button calls the procedure repeatedly until it is pressed a second time:
to update-agents
  ask turtles [update-turtle]
  ask patches [update-patch]
end
As the agents are updated repeatedly, the user can see the visible attributes of the agents changing in the graphics window. More information can be depicted on monitors. Sliders, switches, and choice boxes allow the user to modify global variables while the program runs.
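The bodies of update-turtle and update-patch are model-specific. As a sketch, assuming a heat-diffusion model with a hypothetical patches-own variable temperature and a global ambient-temp, they might be:

```netlogo
to update-turtle
  right (random 90) - 45   ; turn by a random angle in [-45, 44]
  forward 1                ; take one step
end

to update-patch
  ; nudge this patch's temperature toward the ambient temperature
  set temperature temperature + 0.1 * (ambient-temp - temperature)
end
```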
1. An MAS consists of an environment (platform, context) and a set of components called agents.
MAS = {env, agent-1, ..., agent-n}
2. Each agent has a unique id (AID), a state (fields), and several behaviors (roles, procedures, methods, etc.).
agent = {id, state, behavior-1, ..., behavior-n}
Agent states may include color and position.
3. There may be different types (breeds, classes) of agents. For example, some types of agents may be mobile, while others are stationary. Some types of agents can play system-wide roles (observers, AMS, DF, UI, etc.).
4. An agent can ask (send a message to) another agent to perform some of its behaviors:
ask (agent 9) [behavior-1 behavior-2 behavior-3]
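Concretely, if wander is a (hypothetical) turtle behavior, the observer or another agent can ask one agent, or all agents, to perform it:

```netlogo
to wander            ; a hypothetical behavior
  right random 360   ; face a random direction
  forward 1          ; take one step
end

; one specific agent performs the behavior (assuming turtle 9 exists):
ask turtle 9 [wander]
; every agent performs it:
ask turtles [wander]
```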
5. Performing (running, executing, invoking) a behavior may cause an agent to update its state, report a value, perform sub-behaviors, or ask other agents to perform behaviors.
to-report behavior-j
  update-state
  ask agent-3 [behavior-1]
  report result
end
6. An agent's "main" behavior iteratively or recursively tries to reach a goal state:
to behavior-main
  if (state = goal-state) [stop]
  update-state
  behavior-main ; do it again
end
In a single-threaded system, the MAS can recursively interleave execution of agent update methods:
to update-agents
  ask agents [
    if (state != goal-state) [update-state]
  ]
  set cycle (cycle + 1)
  if any? agents with [state != goal-state]
    [update-agents] ; recurse until every agent reaches its goal
end
Each call to the main behavior begins a new cycle.
7. The state of an MAS is simply the environment state and the set of states of all agents in the system.
mas.state = {env.state, agent-1.state, ..., agent-n.state}
MAS state and agent states change with each cycle.
8. State changes may instantiate behavior patterns.
9. Patterns can be classified hierarchically, according to their complexity. At the lowest level are simple fixed or repeating patterns; at the highest level are turbulent, near-random or chaotic patterns.
10. The behavior pattern of an MAS can be more complex than the behavior patterns of its agents. In a sense, these systems are non-reductive: the whole is greater than the sum of the parts. More comes out than went in. We say that they exhibit emergent behavior.
11. Hypothesis: The MAS model is computationally complete. In other words, the components and processes of any computationally complete system can be simulated by agent behaviors. (It should be possible to define a C to NetLogo compiler.)
12. Hypothesis: Intelligent symbol-processing systems can be simulated by multi-agent systems. (If 11 is true, then this is just the Physical Symbol System Hypothesis.)
13. Wolfram's Principle of Computational Equivalence: All systems in nature can be simulated by multi-agent systems.
NetLogo can be viewed as a laboratory for experimenting with emergence. Emergence happens when a system composed of agents following simple rules can exhibit complex behavior patterns. Here are a few examples:
Games: The moves of each chess piece are quite simple, and yet the game itself takes years to master.
Physics: Particles move and collide according to simple laws, and yet give rise to the cosmos.
Biology: The behavior of individual neurons in the brain is simple, and yet gives rise to thought.
Sociology: National trends such as voting patterns, segregation, and fashion can emerge from individuals influenced only by their neighbors.
Economics: Price fluctuations, supply, and demand in markets follow complex laws quite different from the simple selfish behavior of the individual buyers and sellers.
NetLogo: Although the update procedures for agents can be quite simple, the global patterns that we see in the graphics window can be complex and surprising.
John Holland, Emergence: From Chaos to Order, Perseus Books, 1998.
Stephen Wolfram, A New Kind of Science, Wolfram Media, 2002.
breeds [agents]
globals [cycle env-state max-state]
agents-own [agent-state goal-state]
to init-agents
  ca
  set cycle 0
  set env-state 0
  set max-state 10
  create-agents 5
  ask agents [init-agent]
end
to init-agent
  set agent-state (random max-state)
  set goal-state (random max-state)
end
to update-agents
  ask agents [
    ifelse (agent-state != goal-state)
      [update-agent] [die]
  ]
  set cycle cycle + 1
  display-states
  ifelse (count agents > 0)
    [update-agents] [print "done"]
end
to update-agent
  if (agent-state = goal-state)
    [print "done!" stop]
  set agent-state (agent-state + 1) mod max-state
  set env-state random 100
end
to display-states
  type "[env-state = "
  type env-state
  type ", cycle = "
  type cycle
  type ", number of agents = "
  type count agents
  print "]"
  ask agents [display-agent]
end
to display-agent
  type "{aid = "
  type self
  type ", goal-state = "
  type goal-state
  type ", agent-state = "
  type agent-state
  print "}"
end
O> init-agents
O> update-agents
[env-state = 61, cycle = 1, number of agents = 4]
{aid = (turtle 1), goal-state = 7, agent-state = 3}
{aid = (turtle 2), goal-state = 3, agent-state = 5}
{aid = (turtle 3), goal-state = 4, agent-state = 3}
{aid = (turtle 4), goal-state = 6, agent-state = 6}
[env-state = 55, cycle = 2, number of agents = 3]
{aid = (turtle 1), goal-state = 7, agent-state = 4}
{aid = (turtle 2), goal-state = 3, agent-state = 6}
{aid = (turtle 3), goal-state = 4, agent-state = 4}
[env-state = 52, cycle = 3, number of agents = 2]
{aid = (turtle 1), goal-state = 7, agent-state = 5}
{aid = (turtle 2), goal-state = 3, agent-state = 7}
[env-state = 91, cycle = 4, number of agents = 2]
{aid = (turtle 1), goal-state = 7, agent-state = 6}
{aid = (turtle 2), goal-state = 3, agent-state = 8}
[env-state = 76, cycle = 5, number of agents = 2]
{aid = (turtle 1), goal-state = 7, agent-state = 7}
{aid = (turtle 2), goal-state = 3, agent-state = 9}
[env-state = 65, cycle = 6, number of agents = 1]
{aid = (turtle 2), goal-state = 3, agent-state = 0}
[env-state = 34, cycle = 7, number of agents = 1]
{aid = (turtle 2), goal-state = 3, agent-state = 1}
[env-state = 5, cycle = 8, number of agents = 1]
{aid = (turtle 2), goal-state = 3, agent-state = 2}
[env-state = 17, cycle = 9, number of agents = 1]
{aid = (turtle 2), goal-state = 3, agent-state = 3}
[env-state = 17, cycle = 10, number of agents = 0]
done