In this paper, we develop a video retrieval method based on the Query-By-Example (QBE) approach, in which example shots are provided to represent a query and are used to construct a retrieval model. One drawback of QBE is that a user can only provide a small number of example shots, while each shot is generally represented by a high-dimensional feature. As a result, the retrieval model tends to overfit to feature dimensions that are specific to the example shots but ineffective for retrieving relevant shots, so many clearly irrelevant shots are retrieved. To overcome this, we construct a {\it video ontology} as a knowledge base for QBE. Our video ontology is used to select concepts related to a query; irrelevant shots are then filtered out by referring to the recognition results of objects corresponding to the selected concepts. In addition, counter-example shots are not provided in QBE, although they are useful for constructing an accurate retrieval model. We therefore introduce a method that selects counter-example shots without user supervision. In this method, our video ontology is used to exclude shots relevant to the query from the counter-example candidates. Specifically, we filter out shots whose object recognition results for concepts related to the query are similar to those of the example shots. The effectiveness of our video ontology is tested on TRECVID 2009 video data.
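The counter-example selection step above can be sketched roughly as follows, assuming each shot is represented by a vector of concept-detector scores. The function name, the cosine-similarity criterion, and the threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def select_counter_examples(example_scores, candidate_scores,
                            related_concepts, threshold=0.8, n_select=10):
    """Illustrative sketch of unsupervised counter-example selection.

    example_scores:   (n_examples, n_concepts) detector scores for example shots
    candidate_scores: (n_candidates, n_concepts) scores for candidate shots
    related_concepts: indices of concepts the ontology relates to the query

    A candidate is excluded as a possible counter-example if its score
    vector on the query-related concepts is too similar (cosine
    similarity >= threshold) to any example shot; the least similar
    candidates are returned first.
    """
    ex = example_scores[:, related_concepts]
    cand = candidate_scores[:, related_concepts]
    # Row-normalize so a dot product equals cosine similarity.
    ex_n = ex / (np.linalg.norm(ex, axis=1, keepdims=True) + 1e-12)
    cand_n = cand / (np.linalg.norm(cand, axis=1, keepdims=True) + 1e-12)
    sim = cand_n @ ex_n.T                    # (n_candidates, n_examples)
    max_sim = sim.max(axis=1)                # closest example per candidate
    keep = np.where(max_sim < threshold)[0]  # dissimilar -> likely irrelevant
    order = keep[np.argsort(max_sim[keep])]  # least similar first
    return order[:n_select]
```

In this sketch, restricting the similarity test to the ontology-selected concepts is what keeps query-relevant shots out of the counter-example pool, since relevance is judged only on the dimensions the ontology deems related to the query.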