Elements of Agent-based Models

Having an idea of what agents and agent-based models are, we will now turn to some more detailed definitions and examples. In particular, we will look at three components that define "agenthood": individual agents, agent societies, and the situated environment. As a rule, all three components are important for agent-based models.

Agenthood

This section answers the question, "What makes an agent an agent?" or "What characteristics must an agent have?" To help answer it, Figure below shows agents in their situated environment.2

Agents in their situated environment.

Figure above shows what needs to be taken into consideration when building agent-based models and what contributes to defining agents:

  • Individual agents: each agent is an intelligent software agent. They are able to perceive their environment, including other agents. This perception leads to an internal representation that is used to make decisions. It is even possible to run simulations on this perception to support better decisions. They can act on objects and communicate with other agents (or via interfaces also with humans). They have a set of rules that represent their goals, their beliefs, and their desires, all of which drive their decisions.
  • Agent society: as a rule, an agent-based model will use more than just one agent. While each agent exposes the characteristics of an individual agent, the agents within a model may differ considerably. The resulting agent societies vary regarding: 1) the number of agents (from large scales with thousands of agents down to just a few); 2) the degree of cooperation (anything between cooperative and non-cooperative can make sense, depending on the problem to be solved); 3) the types of agents employed (homogeneous vs. heterogeneous agent types); and 4) whether anybody can contribute new agents to the society (open society) or strict and predefined rules regulate its composition (closed society).
  • Situated environment: the environment provides the context for the individual agents and the agent society. Besides other agents, the environment may contain additional active and passive objects. The environment can represent geological and geographical information, weather, and all sorts of other things of interest in the context of the problem. What’s most important here is that the agents are situated within the environment, which means they both perceive and act in this environment.

Passive objects can be best understood as just being things. They normally do not change their attributes unless acted upon by others. The streets and office buildings used in the traffic examples are fairly static and are normally not important for the computation of traffic problems unless they are acted upon, resulting in a change to their attributes. An active object, on the other hand, may change its status on its own without being acted upon and without having to expose all the characteristics of an intelligent software agent. Traffic lights are a good example of an active object in the situated environment of the traffic simulation.

The question often arises, "What is the difference between objects and agents?" While both are software programs that communicate via interfaces and act towards a given goal, intelligent software agents have internal decision rules that govern whether and how to respond to a subroutine call (a request to run a program within a program), while objects simply execute the subroutine that is called. Objects have no control over their actions; they have to act. Agents have full control and use their rules to decide how to answer the inquiry in a way that maximizes their goals and desires. This difference is summarized in Wooldridge3 and Jennings et al.4 by the now well-known slogan: Objects do it for free, agents do it for money.
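To make the distinction between passive objects, active objects, and agents concrete, here is a minimal Python sketch. All class names, attributes, and thresholds are invented for illustration; they do not come from any particular agent framework.

```python
class OfficeBuilding:
    """Passive object: its attributes change only when something acts on it."""
    def __init__(self):
        self.damaged = False

    def hit(self):                      # somebody else has to act on it
        self.damaged = True


class TrafficLight:
    """Active object: changes state on its own, but follows a fixed cycle."""
    CYCLE = ["green", "yellow", "red"]

    def __init__(self):
        self.state = "green"

    def tick(self):                     # advances by itself every time step
        i = self.CYCLE.index(self.state)
        self.state = self.CYCLE[(i + 1) % len(self.CYCLE)]


class DriverAgent:
    """Agent: consults its own rules before deciding whether to comply."""
    def __init__(self, patience):
        self.patience = patience        # part of the agent's value system

    def asked_to_wait(self, expected_delay):
        # objects "do it for free", agents "do it for money":
        # the agent only complies if the request fits its goals
        return "waits" if expected_delay <= self.patience else "reroutes"


light = TrafficLight()
light.tick()
print(light.state)                                               # "yellow" -- changed on its own schedule
print(DriverAgent(patience=2).asked_to_wait(expected_delay=5))   # "reroutes" -- the agent decides for itself
```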

Individual Agents

Many textbooks start by describing the agent. The approach selected in this section has the advantage that you already know why some of the characteristics are so important, because you already know the context. Knowing the context makes defining the characteristic properties of an agent easier. The following list is compiled from a variety of papers that have tried to define individual agents better. To date, there is no generally accepted definition, but the research community agrees that the following characteristics all contribute to the understanding of what an agent really is.

  • Agents are situated! They are in an environment that they can sense and perceive. They can use their sensors to find out more. They can move around in this environment and act on objects or other agents. Therefore, agents need sensors to sense and perceive as well as effectors to interact. The ability to observe and act accordingly is often referred to as reactivity, in particular when observation and (re)action are directly related by rules.
  • Agents are social! They can exchange information with other agents. This information can be task related. It can also comprise orders or tasks within an agent hierarchy working together to solve a problem, or can be coordination between agents. Agents can collaborate or they can compete for tasks. In any case, several languages have been developed to support this communication.
  • Agents are flexible! Many other words with a similar meaning have been used to describe the same idea: agents can intelligently adapt to new situations. They not only act according to their rules, they also observe whether their actions contribute to reaching their goals. They can learn, as they can change their rules if observation shows that some actions are not helping to reach the goal or may even be counterproductive. This makes them agile and sustainable.
  • Agents are autonomous! They can operate without the direct intervention of humans or other authorities. Autonomy requires control over their state and behavior. It also requires guidance by some kind of value system, which can be purely utilitarian, or it can be based on complex psychological models of beliefs, desires, morals, and ethics.
  • Agents are pro-active! They do not simply act in response to their environment, they take the initiative. This requires that they build a plan and conduct the resulting operation. Communication with other agents supports both aspects: building the plan and executing the plan in an orchestrated manner.

To support this set of characteristics, the following architectural frame addressing the main agent characteristics has been proposed.5 It comprises a set of functions for the external domain (to deal with interactions with the environment, which includes other agents and humans as well) and a set of functions for the internal domain, which the agent needs to act and adapt as an autonomous entity. The external functions are perception, communications, and action. The internal functions are sense making, decision-making, memory, and adaptation.

Architecture Framework for Agents

Let’s define the functionality and related challenges in a little bit more detail. We use the order in which the functions are likely used when an agent acts in the situated environment.

  • Perception: Using its sensors, the agent receives signals from its environment and prepares this information for the internal sense making function. The general assumption is that the more the agent observes – and exchanges results with other agents that do the same – the closer its perception gets to the real situation.
  • Sense Making: In order for observations to make sense, they need to be mapped to an internal representation. The internal representation is the picture the agent has of itself within the situated environment. Looking at the earlier figure showing the agent in its situated environment, the perception is depicted in the cloud. The internal representation does not have to be complete (for example, the triangle and one of the big blocks are missing) and can even be wrong. It is possible that the agent only uses a limited set of attributes to capture its observations, such as "is moving" or "is not moving" as a Boolean parameter that cannot capture velocity and direction. The general rule is: "If you need a detail for your decision process, you need to be able to observe it and it needs to be captured in your internal representation!" Another problem to tackle is data fusion. If we make two similar observations, we have to decide whether these observations concern the same thing or whether we have two things to take care of.
  • Memory: The rules and algorithms that define how sense making and later decision-making are done are stored in the memory. It is possible to distinguish between short-term and long-term memories. The memory domain stores all information the agent needs to perform its tasks. There are many approaches to model and implement memory and to represent the knowledge, information, or data required for the sense-making and decision-making functions to work.
  • Communications: As agents are social, they share results with others and ask questions. If two agents use different kinds of sensors, they can share the results of their observations and both can improve their internal representations. If one agent has already observed the effects of a certain action, it can communicate the results so that other agents can use the experience. It is possible to use different kinds of languages; blackboard approaches can be implemented to support collaboration.
  • Decision Making: Agents can support reactive as well as proactive methods. That means we can use simple if-then rules to react to events, or we can use complex decision algorithms based on plans, goals, and value systems. We can ask for more information, or we can trigger some action.
  • Action: Whenever the decision-making function triggers an action, the effectors of the agent are used to act in the situated environment. This includes moving the agent itself as well as acting on active and passive objects. It is also possible to act on other agents, beyond simply communicating with them. In a hostile environment, agents may attack each other. In a cooperative environment, one agent may transport another agent to a location that was otherwise out of reach. All possible actions – and their constraints – should be known to the decision-making component and stored in the memory.
  • Adaptation: Agents learn and adapt; they are flexible. If a certain action does not lead to the desired effect, this action will no longer be supported. If the environment changes, the internal representation needs to be updated and the rules and algorithms need to be adjusted. Depending on the sophistication of the approach, the agent may discover new heuristics or algorithms. It is possible to simply apply operations research and classic optimization, or it is possible to use complex artificial intelligence approaches.

As this enumeration shows, all seven components are in perpetual interplay with each other. The decision-making function may need more observations from perception; the memory is adapted based on new information received via communication; and so forth. Many agent-based modeling frameworks support (pseudo-)parallel execution of these components within an agent, and certainly of the agents within a society.
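To see how these seven functions can interlock in code, consider the following toy sketch in Python. It is only an illustration of the architecture described above: the one-dimensional road, the dictionary-based memory, and all method names are assumptions made for this sketch, not a standard agent API.

```python
class SimpleAgent:
    """Toy agent exercising the seven functions on a one-dimensional road."""

    def __init__(self, position, goal):
        self.position = position
        self.goal = goal
        # memory: internal representation of the road plus one adjustable rule parameter
        self.memory = {"map": {}, "step_size": 1}

    # external: perception -- the sensors only cover nearby cells
    def perceive(self, world, sensor_range=2):
        return {cell: world.get(cell, "free")
                for cell in range(self.position - sensor_range,
                                  self.position + sensor_range + 1)}

    # internal: sense making -- fold the observations into the internal map
    def make_sense(self, observations):
        self.memory["map"].update(observations)

    # external: communication -- share the internal map with another agent
    def communicate(self, other):
        other.memory["map"].update(self.memory["map"])

    # internal: decision making -- a simple rule applied to the internal map
    def decide(self):
        if self.position == self.goal:
            return self.position
        direction = 1 if self.goal > self.position else -1
        target = self.position + direction * self.memory["step_size"]
        if self.memory["map"].get(target, "free") == "free":
            return target
        return self.position                     # blocked: stay put for now

    # external: action -- the effector changes the agent's place in the environment
    def act(self, target):
        self.position = target

    # internal: adaptation -- being blocked changes the movement rule itself
    def adapt(self, moved):
        if not moved and self.position != self.goal:
            self.memory["step_size"] += 1        # crude "swerve": try a wider step


world = {3: "obstacle"}                          # situated environment: cell 3 is blocked
agent = SimpleAgent(position=0, goal=6)
scout = SimpleAgent(position=5, goal=5)
scout.make_sense(scout.perceive(world))          # the scout already sees the obstacle...
scout.communicate(agent)                         # ...and shares its internal map

for _ in range(10):
    before = agent.position
    agent.make_sense(agent.perceive(world))      # perception + sense making
    agent.act(agent.decide())                    # decision making + action
    agent.adapt(moved=(agent.position != before))  # adaptation
print(agent.position)                            # 6 -- the goal was reached
```

In this run, the scout's shared map and the widened step size together let the agent get past the obstacle – a miniature version of the interplay described in the enumeration.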

Applying These Ideas in Agent-based Models

When designing an agent-based model, one eventually brings together all the various components discussed so far, even if only starting with a very simple setup at first. To begin with, one should have at least one agent situated in its environment along with at least one other agent or an object. As soon as this is the case, the agent needs to be able to "see" the other partner (or the object) in order to interact with it. This can be done by explicitly modeling sensors or by simply assuming that everything in the environment can be "seen" by the agent, using the information provided by the other agent or object (i.e., its location, direction, and velocity). The next step is to create an internal map and copy all relevant information into it, filtered by what the sensors can actually observe. Finally, an agent can be programmed to apply complex algorithms to add new observations to what it "sees."
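A small Python sketch of this first step might look as follows; the entities, their attributes, and the sensor limitation to positions are all invented for illustration. Note how the internal map only captures a coarse Boolean "is moving" flag rather than the exact velocity, in line with the earlier remark on internal representations.

```python
# The environment provides full information about each entity...
environment = [
    {"id": "car-7",  "position": (12.0, 3.0), "direction": 90, "velocity": 38.5},
    {"id": "cone-1", "position": (14.0, 3.0), "direction": 0,  "velocity": 0.0},
]

SENSED_ATTRIBUTES = {"position"}      # ...but this agent's sensors only report locations

def build_internal_map(environment):
    internal_map = {}
    for entity in environment:
        # copy only what the sensors can actually observe
        observation = {k: v for k, v in entity.items() if k in SENSED_ATTRIBUTES}
        # the sensor can tell whether something moves, but not how fast
        observation["is_moving"] = entity["velocity"] > 0
        internal_map[entity["id"]] = observation
    return internal_map

print(build_internal_map(environment))
# {'car-7': {'position': (12.0, 3.0), 'is_moving': True},
#  'cone-1': {'position': (14.0, 3.0), 'is_moving': False}}
```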

This process of programming an agent to add new conditions/observations eventually leads to the processes of sense making and decision-making. Sense making correlates the observations with the knowledge stored in the program’s memory, and then tries to identify patterns in what is "seen." Decision-making then connects such patterns with actions. These two processes can be controlled through the application of simple IF-THEN rules. If a condition is discovered in the environment (the sense making process), then the related action is executed (decision making).
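Such IF-THEN coupling of sense making and decision making can be written down directly as an ordered rule list. The following Python sketch uses invented condition and action names from the traffic setting; it is an illustration, not a prescribed rule format.

```python
RULES = [
    # (condition on the internal map,                     action to trigger)
    (lambda m: m.get("light") == "red",                   "brake"),
    (lambda m: m.get("light") == "green" and m["clear"],  "accelerate"),
    (lambda m: not m.get("clear", True),                  "keep_distance"),
]

def decide(internal_map):
    for condition, action in RULES:
        if condition(internal_map):        # IF the pattern is recognized (sense making)
            return action                  # THEN the related action is executed (decision making)
    return "coast"                         # default when no rule fires

print(decide({"light": "red", "clear": True}))      # brake
print(decide({"light": "green", "clear": True}))    # accelerate
print(decide({"light": "green", "clear": False}))   # keep_distance
```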

While simple IF-THEN rules can be applied to many basic scenarios, more complex interactions require the use of additional, stored information from memory for comparison. This holds true even if one were to use only a single constant or a list of constants from a database; these have to be stored and therefore belong to memory. If, for example, our agent has only a limited set of actions stored in its memory from which it can choose, then the agent must have the corresponding constants (conditions) in its memory as well. What’s more, not only is it necessary to use additional, stored memory for more complex interactions, it is also possible to categorize these memories into the equivalent of short-term and long-term memories. Doing so requires that the agent follow two different sets of rules: the short-term set comprises those rules that are used often or have been used recently, while the long-term set comprises rules that are not applied as often.
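One simple way to model this split – a judgment call, not the only possible design – is to keep every rule in a long-term store and maintain a small, recency-ordered short-term list of rule names, as in this Python sketch:

```python
class RuleMemory:
    def __init__(self, rules, short_term_size=3):
        self.long_term = dict(rules)          # everything the agent knows
        self.short_term = []                  # names of recently used rules
        self.short_term_size = short_term_size

    def lookup(self, name):
        rule = self.long_term[name]
        # promote the rule: recently used rules are checked first next time
        if name in self.short_term:
            self.short_term.remove(name)
        self.short_term.insert(0, name)
        self.short_term = self.short_term[: self.short_term_size]
        return rule

memory = RuleMemory({"stop_at_red": "brake", "yield_right": "slow", "park": "stop"})
memory.lookup("stop_at_red")
memory.lookup("yield_right")
print(memory.short_term)      # ['yield_right', 'stop_at_red'] -- most recent first
```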

The actions of the agent, which result from the application of short-term and/or long-term stored information, can be constrained by predetermined values. If an action exceeds these values, it is not considered. Using the traffic example, the speed limit may be a hard legal constraint for some drivers, while others may allow themselves a 10% tolerance and drive close to 40 mph in a 35 mph zone. Still others may have no problem driving as fast as possible as long as they don’t see a police car. Such values can be generalized to reflect a wide variety of scenarios.
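A sketch of such value-based constraints, using the speed-limit example (the driver types and tolerance figures are illustrative choices, not calibrated data):

```python
def chosen_speed(driver, speed_limit, max_speed, police_visible):
    if driver == "legalist":
        return speed_limit                          # hard constraint: never exceed the limit
    if driver == "tolerant":
        return round(speed_limit * 1.10, 1)         # accepts roughly a 10% tolerance
    if driver == "speeder":
        return speed_limit if police_visible else max_speed
    raise ValueError(driver)

print(chosen_speed("legalist", 35, 90, police_visible=False))   # 35
print(chosen_speed("tolerant", 35, 90, police_visible=False))   # 38.5 -- close to 40 mph
print(chosen_speed("speeder", 35, 90, police_visible=False))    # 90
print(chosen_speed("speeder", 35, 90, police_visible=True))     # 35 -- only while being watched
```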

The process of using observed conditions to guide actions is not terribly difficult until one considers how beliefs play a role in real-world situations and how these beliefs can be modeled. Beliefs often guide the selection process because they reflect higher individual values: when two choices look equally good in a given context, the belief system of the agent guides its choice. If, for example, an agent has the same rate of success when either fighting for something or simply giving up and looking for an alternative, a programmed belief that peaceful operations are a better solution should guide its selection of that option.
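In code, such a belief system can act as a tie-breaker among equally promising options. The option names, success rates, and preference order below are invented for illustration:

```python
options = {"fight_for_slot": 0.6, "look_for_alternative": 0.6}   # equal success rates
belief_preference = ["look_for_alternative", "fight_for_slot"]   # "peaceful is better"

def choose(options, belief_preference):
    best = max(options.values())
    candidates = [name for name, score in options.items() if score == best]
    # the belief system decides among equally good candidates
    return min(candidates, key=belief_preference.index)

print(choose(options, belief_preference))    # look_for_alternative
```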

So how does one implement adaptation in a model? Adaptation is programmed into a model whenever something is implemented that changes how rules are applied during sense making and decision-making, including the application of values and beliefs. If, in the traffic scenario, more police cars punish speeding, even speed lovers will adapt their driving patterns.
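A minimal Python sketch of this kind of adaptation (the ticket threshold and the tolerance step are arbitrary choices):

```python
class SpeedLover:
    def __init__(self):
        self.tolerance = 0.30          # initially willing to exceed the limit by 30%
        self.tickets = 0

    def ticketed(self):
        self.tickets += 1
        if self.tickets >= 2:          # the rule itself changes: this is adaptation
            self.tolerance = round(max(0.0, self.tolerance - 0.10), 2)

driver = SpeedLover()
for _ in range(3):                     # more police cars, more tickets
    driver.ticketed()
print(driver.tolerance)                # 0.1 -- the driving pattern has adapted
```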

Once a model has been developed that closely resembles the real-world situation it was designed for, one would likely seek out answers to questions. Specifically, one would be looking for patterns of behavior to emerge as a result of manipulating some condition within the environment or some action of the agent. The wave patterns in the city traffic model are one example. Schelling’s segregation model is another.6 Here Schelling defined a chessboard structure with two types of agents on it, and both types of agents were programmed to follow the same simple rule: In order to be happy in a place, a certain percentage of neighbors should be of the same type! If this is not the case, the agent moves to a free space, if possible. This leads to patterns that clearly separate the types. The figures below show the NetLogo implementation of Schelling’s segregation model with initial and resulting distributions for 30% similarity (i.e., an agent feels happy when at least 30% of its neighbors are of its same type) and 70% similarity (i.e., an agent feels happy when at least 70% of its neighbors are of its same type).

Segregation Model with 30% (top) and 70% (bottom) happiness rate.
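For readers who want to experiment without NetLogo, the following Python sketch implements the same happiness rule on a small grid. The grid size, the fraction of empty cells, and the wrap-around Moore neighborhood are simplifying assumptions of this sketch, not properties of Schelling’s original model or of the NetLogo implementation.

```python
import random

SIZE, EMPTY_FRACTION, SIMILARITY = 20, 0.2, 0.30   # 0.30 -> the "30% similarity" setting

def make_grid():
    cells = [None if random.random() < EMPTY_FRACTION else random.choice(["X", "O"])
             for _ in range(SIZE * SIZE)]
    return [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def neighbors(grid, r, c):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                yield grid[(r + dr) % SIZE][(c + dc) % SIZE]   # wrap-around edges

def unhappy(grid, r, c):
    me = grid[r][c]
    others = [n for n in neighbors(grid, r, c) if n is not None]
    if me is None or not others:
        return False
    return sum(n == me for n in others) / len(others) < SIMILARITY

def step(grid):
    # unhappy agents (evaluated at the start of the step) move to a random empty cell
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(grid, r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(movers)
    for r, c in movers:
        if not empties:
            break
        nr, nc = empties.pop(random.randrange(len(empties)))
        grid[nr][nc], grid[r][c] = grid[r][c], None
        empties.append((r, c))
    return len(movers)

grid = make_grid()
for tick in range(50):
    if step(grid) == 0:        # stop once every agent is happy
        break
print("\n".join("".join(cell or "." for cell in row) for row in grid))
```

Running the sketch with SIMILARITY set to 0.30 and then to 0.70 lets you compare the two settings shown in the figures above.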

In both cases, nobody told the agents to segregate; it just happened: a pattern emerged. Many complex system structures can be explained by simple rules. New approaches are looking at how emergent behaviors can actually be used within engineering to make systems more stable, sustainable, and agile.

It should be pointed out that the patterns of emergent behavior do not happen magically. There is no emerging intelligence, no deus ex machina. The results of emergent behavior can always be explained after the fact, based on the programmed agent specifications and/or their location within the situated environment. A modeling and simulation system in general – and an agent-based simulation system in particular – is a production system that follows well-defined algorithms and rules (i.e., applied mathematics based on computable functions). No one expected such clear segregation to occur based only on very simple rules within a simple geometry, but it did, and it can be explained after observation. Sometimes the results that emerge are as predicted. Sometimes the unintended is observed, but these results are often the most powerful in their ability to incite further discussion about how the modeled system actually works.

References

2 Andreas Tolk and Adelinde M. Uhrmacher: “Agents: Agenthood, Agent Architectures, and Agent Taxonomies,” in: Levent Yilmaz and Tuncer Ören (Eds.): Agent-Directed Simulation and Systems Engineering. Wiley, Berlin, pp. 75-109, 2009

3 Michael Wooldridge: “An Introduction to Multiagent Systems,” Wiley, Hoboken, NJ, 2002

4 Nicholas R. Jennings, Katia Sycara, and Michael Wooldridge: “A Roadmap of Agent Research and Development,” Autonomous Agents and Multi-Agent Systems 1:275-306, 1998

5 Lisa J. Moya and Andreas Tolk: “Towards a Taxonomy of Agents and Multiagent Systems,” Proceedings of the 2007 Spring Simulation Multiconference, 1:11-18, SCS, San Diego, CA, 2007

6 Thomas C. Schelling: “Dynamic Models of Segregation,” Journal of Mathematical Sociology 1:143-186, 1971
