Chris Pollett > Students > Charles


Improving Text Summarization using Automatic Sentence Compression


The goal of this deliverable is to find a simple automatic sentence compression framework to improve text summarization, integrate it into the Yioop search engine, compare the ROUGE results with automatic sentence compression turned on and off, and report the findings.


Research on automatic summarization has shown that "in many cases, sentences in summaries contain unnecessary information as well as useful facts" [NENKOVA2011]. It has also been noted that the longer the sentence, the greater the chance it contains unnecessary information. For example, in the sentence:

As a tenured professor, Dr. Pollett has mentored many graduate students, who sometimes work on his search engine, which has improved the breadth of research on many topics.

three topics are presented: first, that Dr. Pollett mentors students; second, that students work on his search engine; and third, that the breadth of research has improved. Depending on what is in the rest of the article being summarized, one or more of those topics may be irrelevant.

Having noticed this long-sentence phenomenon, researchers proposed not just summarizing the text as a whole but also summarizing each sentence of the summary, the goal being a more concise summary like one a human would write. The idea is known today as automatic sentence compression. "Automatic sentence compression can be broadly described as the task of creating a grammatical summary of a single sentence with minimal information loss" [COHN2008].

Research on automatic sentence compression has produced many approaches to the problem. The paper Automatic Summarization splits the approaches into two categories: rule-based and statistical. For example, Sentence Reduction for Automatic Text Summarization and Back to Basics: CLASSY 2006 approach the problem with rules, while Supervised and Unsupervised Learning for Sentence Compression and Modelling Compression with Discourse Constraints take a statistical approach.

The rule-based approaches use knowledge about how each term relates to the rest of the summary and/or the syntactic structure of the sentence. For example, Back to Basics: CLASSY 2006 uses syntactic structure to trim sentences: "Commas, periods, and sentence start are used in identifying most of those items to remove" [CONROY2006]. The term knowledge in Sentence Reduction for Automatic Text Summarization comes in the form of a corpus that consists of "original sentences and their corresponding reduced forms written by humans for training and testing purpose" [JING2000]. The corpus is then used to match phrases from the human summary to phrases in the document being summarized.

The statistical approaches also use rules, but the rules are learned rather than hand-coded into the summarizer. For example, in Supervised and Unsupervised Learning for Sentence Compression the rules are created from the first section of the Penn Treebank (a corpus of syntactically annotated sentences) by counting up the context-free grammar expansions. In Modelling Compression with Discourse Constraints, rules are created using discourse-informed models built on the framework of Integer Linear Programming.
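To give a flavor of the statistical side, the expansion-counting idea might look like the following miniature sketch in Python. The toy trees and the tuple encoding are my own illustration for demonstration purposes, not the paper's actual data or code:

```python
from collections import Counter

# Toy parse trees in (label, child, child, ...) form. A real implementation
# would read bracketed Penn Treebank files; these two trees are made up.
trees = [
    ("S", ("NP", "she"), ("VP", ("V", "ran"))),
    ("S", ("NP", "he"), ("VP", ("V", "ran"), ("ADVP", "fast"))),
]

def count_expansions(tree, counts):
    """Tally label -> child-labels expansions for every node in the tree."""
    label, *children = tree
    kids = tuple(c[0] if isinstance(c, tuple) else c for c in children)
    counts[(label, kids)] += 1
    for c in children:
        if isinstance(c, tuple):
            count_expansions(c, counts)
    return counts

counts = Counter()
for t in trees:
    count_expansions(t, counts)
# The expansion S -> NP VP appears in both toy trees, so its count is 2.
```

The resulting counts estimate which grammar expansions are common, which is the raw material such systems use when deciding which constituents can be safely dropped.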

Work Performed

The first thing I did was research what had already been done on the topic of automatic summarization. As stated in the overview, there are rule-based and statistical approaches. Most of them require a trained corpus and carry many dependencies, so in the interest of time I decided to implement part of one approach: the sentence trimming method from Back to Basics: CLASSY 2006. The sentence trimming algorithm relies "on lists of "function" words, i.e., those words that play critical roles such as prepositions, conjunctions, determiners, etc., on lists of words that play a major role in a specification, i.e., adverbs, gerunds, and on punctuation" [CONROY2006]. In other words, it looks for specific words, phrases, or clauses and removes them. The algorithm has seven categories, of which I implemented four [CONROY2006]:

  • We remove many adverbs and all conjunctions, including phrases such as "As a matter of fact," and "At this point," that occur at the start of a sentence.
  • We remove a small selection of words that occur in the middle of a sentence, such as ", however," and ", also," (not always requiring the commas).
  • For DUC 2006, we added the removal of ages such as ", 51," or ", aged 24,".
  • We remove relative clause attributives (clauses beginning with "who(m)", "which", "when", and "where") wherever possible.
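As a rough illustration, each of the four categories above can be approximated with a regular expression replacement. The patterns and word lists below are hypothetical simplifications for demonstration, not Yioop's actual rules:

```python
import re

# Hypothetical, simplified stand-ins for the four CLASSY 2006 trimming rules.
LEAD_PHRASES = r"^(?:As a matter of fact|At this point|However|Moreover|And|But|Or),?\s+"
MID_WORDS = r",\s*(?:however|also|moreover)\s*,"
AGES = r",\s*(?:aged\s+)?\d{1,3}\s*,"
REL_CLAUSES = r",\s*(?:who(?:m)?|which|when|where)\b[^,]*,"

def trim_sentence(sentence):
    """Apply the four trimming rules in order, then tidy the whitespace."""
    s = re.sub(LEAD_PHRASES, "", sentence, flags=re.IGNORECASE)  # sentence-initial phrases
    s = re.sub(MID_WORDS, "", s)     # mid-sentence ", however," and friends
    s = re.sub(AGES, "", s)          # ages such as ", 51," or ", aged 24,"
    s = re.sub(REL_CLAUSES, "", s)   # relative clause attributives
    return re.sub(r"\s{2,}", " ", s).strip()

print(trim_sentence("At this point, the senator, aged 73, said he would retire."))
# -> the senator said he would retire.
```

For example, `trim_sentence("Dr. Pollett, who teaches at SJSU, mentors students.")` drops the relative clause and returns "Dr. Pollett mentors students."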

Next I wrote the code to integrate into the Yioop search engine. Since all summarizers and languages would eventually support this feature, I made sure to place the code where they could easily leverage it: each summarizer calls a common method, which pulls the sentence compression algorithm from the current locale's tokenizer (if implemented). I implemented it only for the English language, in hopes that others will continue my work and create the sentence compression method for other locales. Expanding the work to more locales should not be difficult because, once the rules have been established, the code itself is a series of regular expression replacements. Lastly, once the code was complete, I submitted an issue and a patch to the MantisBT repository manager (Dr. Pollett) for review. Hopefully it will make it into the Yioop search engine at some point.
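The integration pattern described above, where each summarizer asks the current locale's tokenizer for a compression routine and passes sentences through unchanged when none exists, might be sketched like this. The class and method names are illustrative, not Yioop's actual identifiers (Yioop itself is written in PHP):

```python
class EnTokenizer:
    """Hypothetical English tokenizer exposing a sentence compression hook."""
    @staticmethod
    def compress_sentence(sentence):
        # Stand-in for the full set of regex trimming rules.
        return sentence.replace(", however,", ",")

class NlTokenizer:
    """Hypothetical Dutch tokenizer with no compression support yet."""
    pass

def compress_sentences(sentences, tokenizer):
    """Called from each summarizer: use the locale's compression if present."""
    hook = getattr(tokenizer, "compress_sentence", None)
    if hook is None:
        return sentences          # locale not implemented: pass through
    return [hook(s) for s in sentences]

print(compress_sentences(["It was, however, too long."], EnTokenizer))
print(compress_sentences(["Het was te lang."], NlTokenizer))
```

The look-up-and-fall-back design lets every summarizer call the same entry point regardless of locale, so adding a new language only requires implementing the hook on that locale's tokenizer.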


In conclusion, the sentence compression implementation did not increase the ROUGE results enough to justify including it in Yioop's text summarization process; it will exist in the code base but remain disabled. That is not to say a more in-depth automatic sentence compression framework would not generate better results, only that the work I performed did not bear much fruit. As previously stated, more work can definitely be done on this topic in the future.

Now for the raw results. The DUC data set has 120 documents to summarize and seven ROUGE tests to perform, for a total of 875 tests per summarizer. Of those, the scores differed with compression on versus off in 236 tests for the Basic Summarizer (BS), 350 for the Centroid-based Summarizer (CBS), 197 for the Centroid-based Weighted Summarizer (CBWS), and 266 for the Graph-based Summarizer (GBS). Among the differing tests, automatic sentence compression lost 134 to 82 for the BS, 215 to 135 for the CBS, 143 to 54 for the CBWS, and 175 to 91 for the GBS.

In addition to the overall results, I analyzed them by each of the seven tests. While the results per test were close in most cases, automatic sentence compression did do better on a few. The BS came in a close second on the ROUGE-W-1.2 tests and performed better on the ROUGE-SU4 tests. The CBS also came in a close second on the ROUGE-W-1.2 tests but performed better on the ROUGE-L tests. The CBWS came in a close second on both the ROUGE-W-1.2 and ROUGE-SU4 tests. Finally, the GBS performed better only on the ROUGE-4 tests and was not close on the rest.


[COHN2008] Sentence Compression Beyond Word Deletion. Trevor Cohn, Mirella Lapata. Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008). 2008.

[NENKOVA2011] Automatic Summarization. Ani Nenkova, Kathleen McKeown. Foundations and Trends in Information Retrieval. 2011.

[JING2000] Sentence Reduction for Automatic Text Summarization. Hongyan Jing. Department of Computer Science, Columbia University. 2000.

[CONROY2006] Back to Basics: CLASSY 2006. John M. Conroy, Judith D. Schlesinger, Dianne P. O'Leary, Jade Goldstein. University of Maryland. 2006.

[TURNER2005] Supervised and Unsupervised Learning for Sentence Compression. Jenine Turner, Eugene Charniak. Proceedings of the 43rd Annual Meeting of the ACL, pages 290–297. Association for Computational Linguistics. 2005.

[CLARKE2007] Modelling Compression with Discourse Constraints. James Clarke, Mirella Lapata. Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 1–11. Association for Computational Linguistics. 2007.