In this paper, we investigate the sentence summarization task, which produces a summary from a source sentence. Neural sequence-to-sequence models have achieved considerable success on this task, but most existing approaches focus only on improving word overlap between the generated summary and the reference, ignoring correctness, i.e., the requirement that the summary contain no erroneous information with respect to the source sentence. We argue that correctness is an essential requirement for summarization systems. Since a correct summary is semantically entailed by the source sentence, we incorporate entailment knowledge into abstractive summarization models. We propose an entailment-aware encoder under a multi-task framework (i.e., summary generation and entailment recognition) and an entailment-aware decoder trained by entailment Reward Augmented Maximum Likelihood (RAML). Experimental results demonstrate that our models significantly outperform baselines in terms of both informativeness and correctness.
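As a minimal sketch of the decoder-side training, the standard RAML objective of Norouzi et al. (2016) weights each candidate summary by an exponentiated payoff distribution; here the reward $r(y, y^{*})$ would be entailment-based, with $\tau$ a temperature hyperparameter (both are generic placeholders, not the exact definitions used by the model):
\[
\mathcal{L}_{\mathrm{RAML}}(\theta) = -\sum_{(x,\, y^{*})} \sum_{y \in \mathcal{Y}} q(y \mid y^{*}; \tau)\, \log p_{\theta}(y \mid x),
\qquad
q(y \mid y^{*}; \tau) = \frac{\exp\{ r(y, y^{*}) / \tau \}}{\sum_{y' \in \mathcal{Y}} \exp\{ r(y', y^{*}) / \tau \}},
\]
so that maximum-likelihood training is recovered as $\tau \to 0$, while larger $\tau$ shifts training weight toward summaries that score higher under the entailment reward.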