Adaptive Constraint-Based Agents in Artificial Environments

[AGENTS]   [Reactive Agents]   [Triggering Agents]   [Deliberative Agents]   [Hybrid Agents]   [Anytime Agents]

[ Please note: The project has been discontinued as of May 31, 2005 and is superseded by the projects of the ii Labs. There won't be further updates to these pages. ]

Reactive Agents

(Related publications: [PUBLink] [PUBLink])

Reactive agents work in a hard-wired stimulus-response manner. Systems like Joseph Weizenbaum's Eliza [PUBLink] and Agre and Chapman's Pengi [PUBLink] are examples of this kind of approach. For certain sensor information, a specific action is executed. This can be implemented by simple if-then rules.

The agent's goals are only implicitly represented by the rules, and it is hard to guarantee the desired behavior: every possible situation must be anticipated in advance. For example, a situation in which a helicopter is to follow another helicopter can be realized by corresponding rules. One of these rules might look like this:

IF (leading_helicopter == left) THEN turn_left

But if the programmer fails to foresee all possible events, they may, for example, omit a rule that stops the pursuit when the leading helicopter crashes. Reactive systems in more complex environments often contain hundreds of rules, which makes it very costly to encode these systems and to keep track of their behavior.
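The stimulus-response scheme described above can be sketched as a plain rule table that maps the current percept directly to an action. This is only an illustrative sketch; all percept and action names (e.g. `leading_helicopter`, `abort_pursuit`) are hypothetical, not taken from an actual system.

```python
def reactive_action(percept: dict) -> str:
    """Map the current percept directly to an action; no internal state,
    no lookahead -- each rule is a hard-wired stimulus-response pair."""
    if percept.get("leader_crashed"):                 # the rule that is easy to forget
        return "abort_pursuit"
    if percept.get("leading_helicopter") == "left":   # the example rule from the text
        return "turn_left"
    if percept.get("leading_helicopter") == "right":
        return "turn_right"
    if percept.get("leading_helicopter") == "ahead":
        return "fly_straight"
    return "hover"                                    # fallback for unforeseen situations

# The agent reacts only to the current stimulus:
print(reactive_action({"leading_helicopter": "left"}))   # turn_left
print(reactive_action({"leader_crashed": True}))         # abort_pursuit
```

Note that the agent's goal (keep following the leader) appears nowhere explicitly; it is smeared across the individual rules, which is exactly why omitting one rule silently changes the behavior.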

The nice thing about reactive agents is their ability to react very fast. But their reactive nature deprives them of any longer-term reasoning: if a desired effect can only be achieved by a sequence of actions, and one of these actions differs from the reaction that would normally be executed in the corresponding situation, the agent is doomed.



Last update:
May 19, 2001 by Alexander Nareyek