This dissertation investigates high-level decision making for agents that are both goal- and utility-driven. We develop a partially observable Markov decision process (POMDP) planner that extends an agent programming language called DTGolog, itself an extension of the Golog language. Golog is based on a logic for reasoning about action, the situation calculus. A POMDP planner on its own cannot cope well with dynamically changing environments and complicated goals. This is exactly a strength of the belief-desire-intention (BDI) model: BDI theory has been developed to design agents that can select goals intelligently, dynamically abandon goals and adopt new ones, and yet commit to intentions for achieving goals. The contribution of this research is twofold: (1) developing a relational POMDP planner for cognitive robotics, and (2) specifying a preliminary BDI architecture that can deal with stochasticity in action and perception by employing the planner.