One of the main goals of Artificial Intelligence (AI) is to build rational agents capable of making rational decisions autonomously. For this, it is essential to devise mechanisms to properly represent an agent’s knowledge about the world and to reason with it. However, an agent’s knowledge is not static: it gets updated as the agent acquires new information. One of the big challenges in knowledge representation is how an agent ought to change its knowledge and beliefs in response to new information it acquires. This, in short, is the problem of belief change.
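To make the idea concrete, here is a toy sketch of belief change, assuming beliefs are represented as a set of propositional literals (e.g. "flies" or "~flies"). The `negate` and `revise` helpers are hypothetical names introduced for illustration; real belief revision theories (such as the AGM framework) are far more subtle than this minimal consistency-preserving update.

```python
# Toy sketch: beliefs as a set of propositional literals.
# Revising with new information first retracts the directly
# contradicting literal, so the belief set stays consistent.

def negate(literal: str) -> str:
    """Return the complementary literal: 'rain' <-> '~rain'."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def revise(beliefs: set, new_info: str) -> set:
    """Add new_info, retracting its negation if currently believed."""
    return (beliefs - {negate(new_info)}) | {new_info}

beliefs = {"bird", "flies"}
beliefs = revise(beliefs, "~flies")   # learn the bird does not fly
print(sorted(beliefs))                # ['bird', '~flies']
```

Even this tiny example hints at the real difficulty: when beliefs interact through inference rather than direct contradiction, deciding *which* old beliefs to give up is exactly the problem belief change studies.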