Compare Basic Summarizer to Centroid-Based Summarizer using ROUGE

Aim

The goal of this deliverable is to compare the results of the Yioop search engine's basic summarizer (BS) and centroid-based summarizer (CBS) against human-created summaries on a sample of web pages, and to report the results.

Overview

The Yioop search engine has two summarizers: the BS and the CBS. The BS extracts different parts of the HTML document in a fixed order until it reaches the limit on the number of characters allowed in the summary. The CBS hinges on a centroid (a set of words that are statistically important to the document) to capture the document's main idea; it then computes term frequencies and cosine similarity to build the summary. The results are compared using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) software package, which is currently the standard tool for computing summarization metrics.
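To give a feel for the kind of scoring the CBS performs, here is a minimal sketch of centroid-based sentence selection: rank sentences by cosine similarity between their term-frequency vector and the document's overall term-frequency vector. This is my own simplified illustration, not Yioop's actual PHP implementation; the tokenizer and the choice of the whole-document vector as the "centroid" are assumptions.

```python
import math
from collections import Counter

def tokenize(text):
    # Naive lowercase/whitespace tokenizer; Yioop's real tokenizer is more involved.
    return text.lower().split()

def cosine(a, b):
    # Cosine similarity between two term-frequency vectors (Counter objects).
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def centroid_summary(sentences, top_k=2):
    # Treat the term-frequency vector of the whole document as the centroid.
    doc_tf = Counter(t for s in sentences for t in tokenize(s))
    ranked = sorted(sentences,
                    key=lambda s: cosine(Counter(tokenize(s)), doc_tf),
                    reverse=True)
    # Keep the top-scoring sentences, preserving original document order.
    chosen = set(ranked[:top_k])
    return [s for s in sentences if s in chosen]
```

Sentences that share many statistically important words with the document as a whole score highest, while off-topic sentences are dropped.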

ROUGE uses several methods to calculate its metrics, reporting Recall, Precision, and F measures for each. Precision is the fraction of the retrieved items that are relevant, recall is the fraction of the relevant items that are retrieved, and the F measure combines the two. Each method is listed below with a brief explanation:

  • ROUGE-L: measures sentence-to-sentence similarity based on the longest common subsequence (LCS) statistics between a candidate translation and a set of reference translations (LinOch2004)
  • ROUGE-S: computes skip-bigram co-occurrence statistics (LinOch2004)
  • ROUGE-W: is an extended version of ROUGE-L. The only difference is that ROUGE-W weights the LCS statistics to favor contiguous matches
  • ROUGE-SU: is an extended version of ROUGE-S. ROUGE-SU considers both skip-bigrams and unigrams, hence the addition of the U in the name
  • ROUGE-N: is an N-gram recall between a candidate summary and the reference summaries. N is the length of the n-gram (Lin2004)
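The metrics above can be made concrete with a small sketch. The following is a simplified illustration of ROUGE-N (recall, precision, F) and an LCS-based ROUGE-L recall, not the official Perl implementation, which additionally handles stemming, stopword removal, and multiple references:

```python
from collections import Counter

def rouge_n(candidate, reference, n=1):
    # ROUGE-N: clipped n-gram overlap between candidate and reference.
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate.split()), ngrams(reference.split())
    overlap = sum((cand & ref).values())  # Counter & takes the minimum count
    recall = overlap / sum(ref.values()) if ref else 0.0
    precision = overlap / sum(cand.values()) if cand else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f

def lcs_len(a, b):
    # Classic dynamic-programming longest common subsequence length.
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            table[i][j] = table[i-1][j-1] + 1 if x == y \
                          else max(table[i-1][j], table[i][j-1])
    return table[len(a)][len(b)]

def rouge_l_recall(candidate, reference):
    # ROUGE-L recall: LCS length divided by the reference length.
    ref = reference.split()
    return lcs_len(candidate.split(), ref) / len(ref) if ref else 0.0
```

For example, the candidate "the cat sat" against the reference "the cat sat on the mat" scores ROUGE-1 recall 0.5 (3 of 6 reference unigrams matched) and precision 1.0 (all 3 candidate unigrams matched).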

Work Performed

In order to generate the BS and CBS summaries, I first needed to install and perform an initial configuration of the Yioop search engine. Details on how to set up the Yioop search engine are covered here. After installing it, I let it crawl the internet to generate an index I could search. Creating the index is not strictly part of this deliverable; I mention it only because it demonstrated that I had a working search engine. With the search engine functional, I selected ten sample web pages for the experiment: blog entries I had written for my CS 200W class. I expected them to be easy to summarize, since I wrote them myself and they are not very long. After a few hours of carefully selecting the most important sentences from each blog entry, I used the Yioop search engine to generate both its BS and CBS summaries.

Next I compared the BS and CBS summaries to my human summaries using the ROUGE software package. I was initially skeptical that I would be able to use it, because I had to contact the developer to obtain a copy; luckily he responded, and I was able to start setting it up. To put it nicely, the setup was not straightforward. In short, ROUGE needs an input configuration file that tells the system which system-generated and human-generated files to use. Both kinds of files must be in a specific HTML format with each sentence wrapped in an <a> tag; I will show an example in the results section. Once all of the pieces are in place, you run ROUGE with various switches and review the output.
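Producing the sentence-per-<a>-tag input files by hand is tedious, so they can be generated mechanically. Here is a minimal helper I could have used (my own sketch, not part of ROUGE or Yioop) that wraps a list of sentences in the SEE-style markup shown in the results section:

```python
import html

def to_see_html(title, sentences):
    # Wrap each sentence in the <a name>/<a href> pair that ROUGE's
    # SEE input format expects, one sentence per line.
    lines = ["<html>",
             "<head>",
             f"<title>{html.escape(title)}</title>",
             "</head>",
             '<body bgcolor="white">']
    for i, sentence in enumerate(sentences, 1):
        lines.append(f'<a name="{i}">[{i}]</a> '
                     f'<a href="#{i}" id={i}>{html.escape(sentence)}</a>')
    lines += ["</body>", "</html>"]
    return "\n".join(lines)
```

Running this once per summary (system and human) yields files ready to be referenced from the PEERS and MODELS sections of the configuration file.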

Results

I ran ROUGE using the recommended arguments for each test. Here is an example:
perl ..\ROUGE-1.5.5.pl -e ..\data -c 95 -2 -1 -U -r 1000 -n 4 -w 1.2 -b 75 -m -s -a Yioop-testCentroid.xml

Roughly, these switches select the ROUGE data directory (-e), 95% confidence intervals (-c 95), skip-bigrams with unlimited gap plus unigrams (-2 -1 -U), 1000 bootstrap resamples (-r 1000), n-grams up to length four (-n 4), a ROUGE-W weight of 1.2 (-w 1.2), a 75-byte length limit (-b 75), stemming (-m), stopword removal (-s), and evaluation of all systems (-a).

Below are the results. In general, the BS has slightly better statistics than the CBS, although the two are very close.

BS ROUGE Results
---------------------------------------------
11 ROUGE-1 Average_R: 0.80587 (95%-conf.int. 0.68571 - 0.92333)
11 ROUGE-1 Average_P: 0.70494 (95%-conf.int. 0.55714 - 0.85833)
11 ROUGE-1 Average_F: 0.74742 (95%-conf.int. 0.61115 - 0.88264)
---------------------------------------------
11 ROUGE-2 Average_R: 0.70543 (95%-conf.int. 0.54000 - 0.86333)
11 ROUGE-2 Average_P: 0.61554 (95%-conf.int. 0.43571 - 0.80857)
11 ROUGE-2 Average_F: 0.65227 (95%-conf.int. 0.47596 - 0.83047)
---------------------------------------------
11 ROUGE-3 Average_R: 0.60484 (95%-conf.int. 0.39167 - 0.82833)
11 ROUGE-3 Average_P: 0.52748 (95%-conf.int. 0.31095 - 0.76429)
11 ROUGE-3 Average_F: 0.55747 (95%-conf.int. 0.34470 - 0.78985)
---------------------------------------------
11 ROUGE-4 Average_R: 0.46413 (95%-conf.int. 0.20000 - 0.76667)
11 ROUGE-4 Average_P: 0.42445 (95%-conf.int. 0.16000 - 0.71333)
11 ROUGE-4 Average_F: 0.43933 (95%-conf.int. 0.17500 - 0.73333)
---------------------------------------------
11 ROUGE-L Average_R: 0.56185 (95%-conf.int. 0.42162 - 0.71773)
11 ROUGE-L Average_P: 0.70494 (95%-conf.int. 0.55714 - 0.85833)
11 ROUGE-L Average_F: 0.60646 (95%-conf.int. 0.47795 - 0.75049)
---------------------------------------------
11 ROUGE-W-1.2 Average_R: 0.37926 (95%-conf.int. 0.27541 - 0.49500)
11 ROUGE-W-1.2 Average_P: 0.65106 (95%-conf.int. 0.49107 - 0.82321)
11 ROUGE-W-1.2 Average_F: 0.46368 (95%-conf.int. 0.34814 - 0.59179)
---------------------------------------------
11 ROUGE-S* Average_R: 0.66482 (95%-conf.int. 0.47238 - 0.85524)
11 ROUGE-S* Average_P: 0.53184 (95%-conf.int. 0.32500 - 0.75000)
11 ROUGE-S* Average_F: 0.57648 (95%-conf.int. 0.37678 - 0.78848)
---------------------------------------------
11 ROUGE-SU* Average_R: 0.71369 (95%-conf.int. 0.53984 - 0.88286)
11 ROUGE-SU* Average_P: 0.57277 (95%-conf.int. 0.38142 - 0.78056)
11 ROUGE-SU* Average_F: 0.62202 (95%-conf.int. 0.43843 - 0.81393)
---------------------------------------------

CBS ROUGE Results
---------------------------------------------
11 ROUGE-1 Average_R: 0.76663 (95%-conf.int. 0.63762 - 0.89333)
11 ROUGE-1 Average_P: 0.67857 (95%-conf.int. 0.53214 - 0.84286)
11 ROUGE-1 Average_F: 0.71577 (95%-conf.int. 0.57970 - 0.86190)
---------------------------------------------
11 ROUGE-2 Average_R: 0.70201 (95%-conf.int. 0.54667 - 0.86333)
11 ROUGE-2 Average_P: 0.61664 (95%-conf.int. 0.43809 - 0.80762)
11 ROUGE-2 Average_F: 0.65163 (95%-conf.int. 0.48667 - 0.83636)
---------------------------------------------
11 ROUGE-3 Average_R: 0.60484 (95%-conf.int. 0.39167 - 0.82833)
11 ROUGE-3 Average_P: 0.53236 (95%-conf.int. 0.31917 - 0.76250)
11 ROUGE-3 Average_F: 0.56079 (95%-conf.int. 0.35098 - 0.79039)
---------------------------------------------
11 ROUGE-4 Average_R: 0.46413 (95%-conf.int. 0.20000 - 0.76667)
11 ROUGE-4 Average_P: 0.42445 (95%-conf.int. 0.16000 - 0.71333)
11 ROUGE-4 Average_F: 0.43933 (95%-conf.int. 0.17500 - 0.73333)
---------------------------------------------
11 ROUGE-L Average_R: 0.54120 (95%-conf.int. 0.39743 - 0.70309)
11 ROUGE-L Average_P: 0.67857 (95%-conf.int. 0.53214 - 0.84286)
11 ROUGE-L Average_F: 0.58289 (95%-conf.int. 0.44879 - 0.72838)
---------------------------------------------
11 ROUGE-W-1.2 Average_R: 0.39129 (95%-conf.int. 0.28736 - 0.50728)
11 ROUGE-W-1.2 Average_P: 0.67325 (95%-conf.int. 0.52159 - 0.83758)
11 ROUGE-W-1.2 Average_F: 0.47857 (95%-conf.int. 0.36753 - 0.60219)
---------------------------------------------
11 ROUGE-S* Average_R: 0.60686 (95%-conf.int. 0.40381 - 0.81667)
11 ROUGE-S* Average_P: 0.49785 (95%-conf.int. 0.28373 - 0.74167)
11 ROUGE-S* Average_F: 0.53466 (95%-conf.int. 0.32789 - 0.76667)
---------------------------------------------
11 ROUGE-SU* Average_R: 0.66706 (95%-conf.int. 0.48421 - 0.84841)
11 ROUGE-SU* Average_P: 0.54722 (95%-conf.int. 0.34603 - 0.76429)
11 ROUGE-SU* Average_F: 0.58952 (95%-conf.int. 0.40240 - 0.79922)


BS ROUGE Configuration File
<ROUGE-EVAL version="1.0">
<EVAL ID="1">
<PEER-ROOT>
./Yioop-testBasic/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testBasic/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
AgiletaskslistswhatdoesdonemeaninAgileCS200WBlog.html</P>
</PEERS>
<MODELS>
<M ID="A">
AgiletaskslistswhatdoesdonemeaninAgilCS200WBlog.html</M>
</MODELS>
</EVAL>
<EVAL ID="2">
<PEER-ROOT>
./Yioop-testBasic/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testBasic/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
DeliveringaprojectandpresentingtoamultilevelaudienceCS200WBlog.html</P>
</PEERS>
<MODELS>
<M ID="A">
DeliveringaprojectandpresentingtoamultilevelaudienceCS200WBlog.html</M>
</MODELS>
</EVAL>
<EVAL ID="3">
<PEER-ROOT>
./Yioop-testBasic/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testBasic/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
HandingoffaprojecttoaclientwhataretherisksandchallengesCS20.html</P>
</PEERS>
<MODELS>
<M ID="A">
HandingoffaprojecttoaclientwhataretherisksandchallengesCS20.html</M>
</MODELS>
</EVAL>
<EVAL ID="4">
<PEER-ROOT>
./Yioop-testBasic/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testBasic/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
LinkedInprofileshowtousethemhowtomarketyourselfhowtonetwork.html</P>
</PEERS>
<MODELS>
<M ID="A">
LinkedInprofileshowtousethemhowtomarketyourselfhowtonetwork.html</M>
</MODELS>
</EVAL>
<EVAL ID="5">
<PEER-ROOT>
./Yioop-testBasic/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testBasic/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
SocialMediaandBrandingCS200WBlog.html</P>
</PEERS>
<MODELS>
<M ID="A">
SocialMediaandBrandingCS200WBlog.html</M>
</MODELS>
</EVAL>
<EVAL ID="6">
<PEER-ROOT>
./Yioop-testBasic/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testBasic/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
TheAgileTeamandwhatisaBacklogWhataretheyforandwhyaretheyimp.html</P>
</PEERS>
<MODELS>
<M ID="A">
TheAgileTeamandwhatisaBacklogWhataretheyforandwhyaretheyimp.html</M>
</MODELS>
</EVAL>
<EVAL ID="7">
<PEER-ROOT>
./Yioop-testBasic/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testBasic/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
WhatfivetechnicalskillsareemployersseekingWhatfivesoftskillsput.html</P>
</PEERS>
<MODELS>
<M ID="A">
WhatfivetechnicalskillsareemployersseekingWhatfivesoftskillsput.html</M>
</MODELS>
</EVAL>
<EVAL ID="8">
<PEER-ROOT>
./Yioop-testBasic/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testBasic/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
WhatisAgileandwhatareuserstoriesCS200WBlog.html</P>
</PEERS>
<MODELS>
<M ID="A">
WhatisAgileandwhatareuserstoriesCS200WBlog.html</M>
</MODELS>
</EVAL>
<EVAL ID="9">
<PEER-ROOT>
./Yioop-testBasic/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testBasic/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
WhatisanAgileSprintRetrospectiveAbusylifeofagirlgamer.html</P>
</PEERS>
<MODELS>
<M ID="A">
WhatisanAgileSprintRetrospectiveAbusylifeofagirlgamer.html</M>
</MODELS>
</EVAL>
<EVAL ID="10">
<PEER-ROOT>
./Yioop-testBasic/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testBasic/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
WhatisanAgileSprintRetrospectiveCS200WBlog.html</P>
</PEERS>
<MODELS>
<M ID="A">
WhatisanAgileSprintRetrospectiveCS200WBlog.html</M>
</MODELS>
</EVAL>
</ROUGE-EVAL>

CBS ROUGE Configuration File
<ROUGE-EVAL version="1.0">
<EVAL ID="1">
<PEER-ROOT>
./Yioop-testCentroid/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testCentroid/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
AgiletaskslistswhatdoesdonemeaninAgileCS200WBlog.html</P>
</PEERS>
<MODELS>
<M ID="A">
AgiletaskslistswhatdoesdonemeaninAgilCS200WBlog.html</M>
</MODELS>
</EVAL>
<EVAL ID="2">
<PEER-ROOT>
./Yioop-testCentroid/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testCentroid/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
DeliveringaprojectandpresentingtoamultilevelaudienceCS200WBlog.html</P>
</PEERS>
<MODELS>
<M ID="A">
DeliveringaprojectandpresentingtoamultilevelaudienceCS200WBlog.html</M>
</MODELS>
</EVAL>
<EVAL ID="3">
<PEER-ROOT>
./Yioop-testCentroid/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testCentroid/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
HandingoffaprojecttoaclientwhataretherisksandchallengesCS20.html</P>
</PEERS>
<MODELS>
<M ID="A">
HandingoffaprojecttoaclientwhataretherisksandchallengesCS20.html</M>
</MODELS>
</EVAL>
<EVAL ID="4">
<PEER-ROOT>
./Yioop-testCentroid/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testCentroid/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
LinkedInprofileshowtousethemhowtomarketyourselfhowtonetwork.html</P>
</PEERS>
<MODELS>
<M ID="A">
LinkedInprofileshowtousethemhowtomarketyourselfhowtonetwork.html</M>
</MODELS>
</EVAL>
<EVAL ID="5">
<PEER-ROOT>
./Yioop-testCentroid/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testCentroid/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
SocialMediaandBrandingCS200WBlog.html</P>
</PEERS>
<MODELS>
<M ID="A">
SocialMediaandBrandingCS200WBlog.html</M>
</MODELS>
</EVAL>
<EVAL ID="6">
<PEER-ROOT>
./Yioop-testCentroid/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testCentroid/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
TheAgileTeamandwhatisaBacklogWhataretheyforandwhyaretheyimp.html</P>
</PEERS>
<MODELS>
<M ID="A">
TheAgileTeamandwhatisaBacklogWhataretheyforandwhyaretheyimp.html</M>
</MODELS>
</EVAL>
<EVAL ID="7">
<PEER-ROOT>
./Yioop-testCentroid/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testCentroid/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
WhatfivetechnicalskillsareemployersseekingWhatfivesoftskillsput.html</P>
</PEERS>
<MODELS>
<M ID="A">
WhatfivetechnicalskillsareemployersseekingWhatfivesoftskillsput.html</M>
</MODELS>
</EVAL>
<EVAL ID="8">
<PEER-ROOT>
./Yioop-testCentroid/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testCentroid/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
WhatisAgileandwhatareuserstoriesCS200WBlog.html</P>
</PEERS>
<MODELS>
<M ID="A">
WhatisAgileandwhatareuserstoriesCS200WBlog.html</M>
</MODELS>
</EVAL>
<EVAL ID="9">
<PEER-ROOT>
./Yioop-testCentroid/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testCentroid/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
WhatisanAgileSprintRetrospectiveAbusylifeofagirlgamer.html</P>
</PEERS>
<MODELS>
<M ID="A">
WhatisanAgileSprintRetrospectiveAbusylifeofagirlgamer.html</M>
</MODELS>
</EVAL>
<EVAL ID="10">
<PEER-ROOT>
./Yioop-testCentroid/systemsAreGenerated </PEER-ROOT>
<MODEL-ROOT>
./Yioop-testCentroid/modelsAreHuman </MODEL-ROOT>
<INPUT-FORMAT TYPE="SEE">
</INPUT-FORMAT>
<PEERS>
<P ID="11">
WhatisanAgileSprintRetrospectiveCS200WBlog.html</P>
</PEERS>
<MODELS>
<M ID="A">
WhatisanAgileSprintRetrospectiveCS200WBlog.html</M>
</MODELS>
</EVAL>
</ROUGE-EVAL>


Human Generated Input
<html>
<head>
<title>AgiletaskslistswhatdoesdonemeaninAgilCS200WBlog</title>
</head>
<body bgcolor="white">
<a name="1">[1]</a> <a href="#1" id=1>Agile tasks lists, what does done </a>
<a name="2">[2]</a> <a href="#2" id=2>In life just as at work, you may have had someone ask you the dreaded question Are you done yet?</a>
<a name="3">[3]</a> <a href="#3" id=3>That is why to ensure transparency and improve quality in an agile environment, the definition of done (DoD) must be clearly defined and have a consensus among the team.</a>
<a name="4">[4]</a> <a href="#4" id=4>We will walk through what the DoD is, an example of how to create a DoD and what value it brings to the sprint cycle.</a>
<a name="5">[5]</a> <a href="#5" id=5>According to the Agile Alliance and Institute (2014) the DoD is a list of criteria which must be met before a product increment often a user story is considered done. </a>
<a name="6">[6]</a> <a href="#6" id=6>The most important feature of the DoD is it keeps hidden work or scope creep from happening.</a>
<a name="7">[7]</a> <a href="#7" id=7>The DoD gets iteratively worked just like the user stories within each sprint. According to Scrum.org (2013), the Definition of Done is not changed during a Sprint, but should change periodically between Sprints to reflect improvements the Development Team has made in its processes and capabilities to deliver software.</a>
<a name="8">[8]</a> <a href="#8" id=8>Moreover, you will find the risk is reduced, teams are more focused, and communication between the client is better.</a>
<a name="9">[9]</a> <a href="#9" id=9>By making the DoD a way of life and committing to exceptional work, the client will be able to visualize what complete really is. </a>
</body>
</html>

System Generated Input
<html>
<head>
<title>AgiletaskslistswhatdoesdonemeaninAgileCS200WBlog</title>
</head>
<body bgcolor="white">
<a name="1">[1]</a> <a href="#1" id=1>Agile tasks lists, what does done mean in Agile? | CS200W Blog</a>
<a name="2">[2]</a> <a href="#2" id=2>In life just as at work, you may have had someone ask you the dreaded question Are you done yet?</a>
<a name="3">[3]</a> <a href="#3" id=3>In life outside of work, we can consult our own minds to make the determination if something is done or not.</a>
<a name="4">[4]</a> <a href="#4" id=4>In an agile work environment you are most likely not the only one involved in making that decision.</a>
<a name="5">[5]</a> <a href="#5" id=5>Everyones opinion on what done means may vary.</a>
<a name="6">[6]</a> <a href="#6" id=6>That is why to ensure transparency and improve quality in an agile environment, the definition of done (DoD) must be clearly defined and have a consensus among the team.</a>
<a name="7">[7]</a> <a href="#7" id=7>We will walk through what the DoD is, an example of how to create a DoD and what value it brings to the sprint cycle.</a>
<a name="8">[8]</a> <a href="#8" id=8>.. CS200W Blog</a>
<a name="9">[9]</a> <a href="#9" id=9>powered by Charles Bocage</a>
<a name="10">[10]</a> <a href="#10" id=10>CS200W Blog </a>
<a name="11">[11]</a> <a href="#11" id=11>Facebook</a>
<a name="12">[12]</a> <a href="#12" id=12>CS200W Blog </a>
<a name="13">[13]</a> <a href="#13" id=13>Twitter</a>
<a name="14">[14]</a> <a href="#14" id=14>CS200W Blog</a>
<a name="15">[15]</a> <a href="#15" id=15>YouTube</a>
<a name="16">[16]</a> <a href="#16" id=16>Search</a>
<a name="17">[17]</a> <a href="#17" id=17>Home Agile</a>
<a name="18">[18]</a> <a href="#18" id=18>Project Management</a>
<a name="19">[19]</a> <a href="#19" id=19>Social Media</a>
<a name="20">[20]</a> <a href="#20" id=20>Skills</a>
<a name="21">[21]</a> <a href="#21" id=21>About Me</a>
<a name="22">[22]</a> <a href="#22" id=22>Contact</a>
<a name="23">[23]</a> <a href="#23" id=23>First lets get the definition of done out of the way.</a>
<a name="24">[24]</a> <a href="#24" id=24>According to the Agile Alliance and Institute (2014) the DoD is a list of criteria which must be met before a product increment often a user story is considered done.</a>
<a name="25">[25]</a> <a href="#25" id=25>In other words, it is the acceptance criteria the work must pass to be evaluated as complete.</a>
<a name="26">[26]</a> <a href="#26" id=26>It can be in the form of a Done List or a Done Checklist.</a>
<a name="27">[27]</a> <a href="#27" id=27>There is no preference on what it is called because they both produce the same results.</a>
<a name="28">[28]</a> <a href="#28" id=28>.. CS200W Blog</a>
<a name="29">[29]</a> <a href="#29" id=29>powered by Charles Bocage</a>
<a name="30">[30]</a> <a href="#30" id=30>CS200W Blog</a>
<a name="31">[31]</a> <a href="#31" id=31>Facebook</a>
<a name="32">[32]</a> <a href="#32" id=32>CS200W Blog</a>
<a name="33">[33]</a> <a href="#33" id=33>Twitter</a>
<a name="34">[34]</a> <a href="#34" id=34>CS200W Blog</a>
<a name="35">[35]</a> <a href="#35" id=35>YouTube</a>
<a name="36">[36]</a> <a href="#36" id=36>Search</a>
<a name="37">[37]</a> <a href="#37" id=37>Home Agile</a>
<a name="38">[38]</a> <a href="#38" id=38>Project Management</a>
<a name="39">[39]</a> <a href="#39" id=39>Social Media</a>
<a name="40">[40]</a> <a href="#40" id=40>Skills</a>
<a name="41">[41]</a> <a href="#41" id=41>About Me</a>
<a name="42">[42]</a> <a href="#42" id=42>Contact</a>
</body>
</html>


References

[LinOch2004] Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics. Chin-Yew Lin and Franz Josef Och. Association for Computational Linguistics. 2004.
[Lin2004] Looking for a Few Good Metrics: Automatic Summarization Evaluation - How Many Samples Are Enough?. Chin-Yew Lin. NTCIR. 2004.