In this paper, we compare the performance of three probabilistic pseudo context-sensitive models on parsing isolating languages. All three models are based on the conventional probabilistic context-free grammar (PCFG). The first is well known from statistical parsing of English, while the other two are novel models that condition on the siblings of the expanding nonterminal. We evaluate these models on Classical Chinese, a typical isolating language, and find, quite surprisingly, that with only a little additional conditioning the new models significantly outperform the first. Our work thus demonstrates the impact of typological distinctions on parsing and provides two simple yet effective conditioning models for isolating languages.
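As a rough illustration of what sibling conditioning means here, the sketch below estimates a rule probability conditioned on the expanding nonterminal and its left sibling by relative frequency. All symbols, counts, and the `NONE` marker for leftmost children are invented for the example; the paper's actual models and estimation details are not reproduced.

```python
from collections import defaultdict

# Hypothetical toy treebank: each rule application is recorded as
# (expanding nonterminal, its left sibling, right-hand side).
# "NONE" marks a leftmost child with no sibling.
observations = [
    ("S",  "NONE", ("NP", "VP")),
    ("S",  "NONE", ("NP", "VP")),
    ("S",  "NONE", ("VP",)),
    ("VP", "NP",   ("V", "NP")),
    ("VP", "NP",   ("V",)),
]

rule_counts = defaultdict(int)      # counts of (lhs, sibling, rhs)
context_counts = defaultdict(int)   # counts of (lhs, sibling)

for lhs, sibling, rhs in observations:
    rule_counts[(lhs, sibling, rhs)] += 1
    context_counts[(lhs, sibling)] += 1

def prob(lhs, sibling, rhs):
    """Relative-frequency estimate of P(rhs | lhs, left sibling)."""
    c = context_counts.get((lhs, sibling), 0)
    return rule_counts.get((lhs, sibling, rhs), 0) / c if c else 0.0

print(prob("S", "NONE", ("NP", "VP")))  # 2/3
print(prob("VP", "NP", ("V", "NP")))    # 1/2
```

A plain PCFG would collapse the sibling field, estimating P(rhs | lhs) alone; keeping the sibling in the conditioning context is the extra information the abstract refers to.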