EXCALIBUR Adaptive Constraint-Based Agents in Artificial Environments
[AGENTS] | [Reactive Agents] [Triggering Agents] [Deliberative Agents] [Hybrid Agents] [Anytime Agents] |
[ Please note: The project has been discontinued as of May 31, 2005 and is superseded by the projects of the ii Labs. There will be no further updates to these pages. ]
(Related publications: [PUBLink] [PUBLink])
What we need is a continuous transition from reaction to planning. No matter how much the agent has already computed, there must always be a plan available. This can be achieved by improving the plan iteratively. When an agent is called to execute its next action, it improves its current plan until its computation time limit is reached and then executes the action:
WHILE (computation_time_available) DO
    improve_current_plan
ENDWHILE
execute_plan's_next_action
For short computation horizons, only very primitive plans (reactions) are available; longer computation times are used to improve and optimize the agent's plan. The more time the agent has for its computations, the more intelligent its behavior becomes. Furthermore, the iterative improvement enables the planning process to easily adapt the plan to changed or unexpected situations. This class of agents is very important for computer-game applications and constitutes the basic technology for EXCALIBUR's agents.
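The loop above can be sketched as follows. This is a minimal, hypothetical illustration, not the project's actual implementation: the plan representation (a list of action names), the toy scoring function, and the random-mutation improvement step are all assumptions chosen to keep the example self-contained. The essential anytime property is that a valid plan exists at every point in time, and each improvement step never makes it worse.

```python
import random
import time

# Hypothetical action vocabulary for the illustration.
ACTIONS = ["wait", "move", "attack", "defend"]

def score(plan):
    # Toy objective standing in for a real plan evaluation:
    # here, plans with more distinct actions score higher.
    return len(set(plan))

def improve(plan):
    # One iterative-improvement step: mutate a random position and
    # keep the candidate only if it scores at least as well, so the
    # current plan never degrades.
    candidate = list(plan)
    candidate[random.randrange(len(candidate))] = random.choice(ACTIONS)
    return candidate if score(candidate) >= score(plan) else plan

def next_action(plan, time_budget):
    # Anytime loop: improve the current plan until the computation
    # time limit is reached, then commit to the plan's first action.
    deadline = time.monotonic() + time_budget
    while time.monotonic() < deadline:
        plan = improve(plan)
    return plan[0], plan

# With a tiny budget the agent still acts (a primitive "reaction");
# with a larger budget the same loop yields a more optimized plan.
action, plan = next_action(["wait"] * 4, time_budget=0.01)
```

Because a complete plan is maintained at all times, interrupting the loop at any point still yields an executable next action, which is exactly the continuous transition from reaction to planning described above.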
For questions, comments or suggestions, please contact us.
Last update: May 19, 2001 by Alexander Nareyek