EXCALIBUR Adaptive Constraint-Based Agents in Artificial Environments
[LOGISTICS] | [Realization] [Results] [Tabu Lists]
[ Please note: The project has been discontinued as of May 31, 2005 and is superseded by the projects of the ii Labs. There won't be further updates to these pages. ]
(Related publication: [PUBLink])
The problem does not correspond to a typical instance of an agent's configuration (hands, feet, ...), but it is nevertheless easy to model. Every package, truck and airplane has an action resource, which prevents an object from executing multiple operations at the same time (e.g., a truck being driven while simultaneously being loaded with a package). Every package, truck and airplane also has a state resource, which represents its location. The initial locations are stored in the corresponding Current State variables, the Current Time variable is set to 0, and the packages' destinations are specified by the goal's Task Constraint with corresponding Precondition Tasks. The Task Constraint is given all possible action configurations.
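To make the mapping more concrete, the following is a minimal sketch of this set-up, assuming hypothetical class and attribute names (LogisticsObject, ActionResource, StateResource, build_problem, the object and location names); it does not reflect EXCALIBUR's actual data structures or API.

```python
# Minimal sketch of the LOGISTICS set-up described above. All names are
# illustrative assumptions, not EXCALIBUR's actual classes.

class ActionResource:
    """Allows at most one operation per object at any point in time."""
    def __init__(self):
        self.scheduled_operations = []  # (start, end, operation) triples

class StateResource:
    """Tracks an object's location over time."""
    def __init__(self, initial_location):
        self.current_state = initial_location  # Current State variable

class LogisticsObject:
    """A package, truck or airplane with one action and one state resource."""
    def __init__(self, name, initial_location):
        self.name = name
        self.action = ActionResource()
        self.state = StateResource(initial_location)

def build_problem():
    objects = [
        LogisticsObject("package1", "post-office-A"),
        LogisticsObject("truck1", "post-office-A"),
        LogisticsObject("plane1", "airport-A"),
    ]
    current_time = 0  # Current Time variable
    # Each package destination becomes a Precondition Task attached to
    # the Task Constraint.
    goals = [("package1", "post-office-B")]
    # The Task Constraint is given every possible action configuration.
    actions = ["load-truck", "unload-truck", "drive",
               "load-airplane", "unload-airplane", "fly"]
    return objects, current_time, goals, actions
```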
The ARC's choice between its improvement heuristics is made with a probabilistic distribution of 90% for ARC-H2, 9% for ARC-H1 and 1% for ARC-H3. This distribution is based on the experiments described in Section [More Knowledge] and empirically proved superior to other ratios in this context as well. The SRC's choice of heuristic is self-adapting and does not require any parameters.
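The 90/9/1 selection can be pictured as a simple weighted random draw; the small snippet below is only an illustration of that distribution (function name and heuristic labels are assumptions), not EXCALIBUR's actual code.

```python
import random

# Illustrative weighted choice over the ARC's improvement heuristics,
# using the 90% / 9% / 1% distribution described above.
def choose_arc_heuristic(rng=random):
    return rng.choices(["ARC-H2", "ARC-H1", "ARC-H3"],
                       weights=[0.90, 0.09, 0.01])[0]
```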
The Task Constraint's choice of heuristic is modified so that only the more aggressive TC-H2 heuristic is applied. This appears to be more suitable for the planning context because temporal order is much more important here than in scheduling.