Tags: python, nlp, summarization, rouge

ROUGE scores are different when using the "datasets" and "rouge_score" packages


I use two packages, "datasets" and "rouge_score", to compute ROUGE-1 scores. However, the precision and recall they report are different. Which package produces the correct scores?

from rouge_score import rouge_scorer
import datasets

hyp = ['I have no car.']
ref = ['I want to buy a car.']

scorer1 = datasets.load_metric('rouge')
scorer2 = rouge_scorer.RougeScorer(['rouge1'])

results = {'precision_rouge_score': [], 'recall_rouge_score': [], 'fmeasure_rouge_score': [],
           'precision_datasets': [], 'recall_datasets': [], 'fmeasure_datasets': []}

for (h, r) in zip(hyp, ref):

    # rouge_score: score() returns a dict mapping each ROUGE type to a Score
    # namedtuple, which unpacks into precision, recall, fmeasure
    precision, recall, fmeasure = scorer2.score(h, r)['rouge1']
    results['precision_rouge_score'].append(precision)
    results['recall_rouge_score'].append(recall)
    results['fmeasure_rouge_score'].append(fmeasure)

    # datasets: compute() returns AggregateScore objects; .mid holds the mid bound
    output = scorer1.compute(predictions=[h], references=[r])
    results['precision_datasets'].append(output['rouge1'].mid.precision)
    results['recall_datasets'].append(output['rouge1'].mid.recall)
    results['fmeasure_datasets'].append(output['rouge1'].mid.fmeasure)

print('results: ', results)

The results are:

{'precision_rouge_score': [0.3333333333333333], 'recall_rouge_score': [0.5], 
'fmeasure_rouge_score': [0.4],
'precision_datasets': [0.5], 'recall_datasets': [0.3333333333333333],
'fmeasure_datasets': [0.4]}

Solution

  • According to the original ROUGE paper (https://aclanthology.org/W04-1013.pdf), ROUGE-N is defined as:

    \mathrm{ROUGE}\text{-}N = \frac{\sum_{S \in \{\text{Reference Summaries}\}} \sum_{\text{gram}_n \in S} \mathrm{Count}_{\text{match}}(\text{gram}_n)}{\sum_{S \in \{\text{Reference Summaries}\}} \sum_{\text{gram}_n \in S} \mathrm{Count}(\text{gram}_n)}

    So for the two sentences above (hyp: "I have no car." vs. ref: "I want to buy a car."), ROUGE-1 recall = 2 matching unigrams (I, car) / 6 reference unigrams (I, want, to, buy, a, car) = 0.333333, and ROUGE-1 precision = 2 matching unigrams / 4 hypothesis unigrams (I, have, no, car) = 0.5. These are the numbers the "datasets" package reports, so it seems the "datasets" package is correct.

  • Note that rouge_score's RougeScorer.score(target, prediction) expects the reference as its first argument, while the code above calls scorer2.score(h, r) with the hypothesis first. Swapping the arguments exchanges precision and recall, which is exactly the difference observed between the two packages. The sketch below illustrates both points.
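
To double-check the arithmetic, here is a minimal sketch that computes ROUGE-1 precision and recall by hand and then calls rouge_score with the reference first. The rouge1_by_hand helper and its naive tokenizer are illustrative assumptions, not the packages' actual tokenization (though they agree on this example):

from collections import Counter
from rouge_score import rouge_scorer

def rouge1_by_hand(hypothesis, reference):
    # naive tokenization: lowercase, split on whitespace, strip periods
    # (an assumption; rouge_score applies its own tokenizer, which agrees here)
    hyp_tokens = [t.strip('.').lower() for t in hypothesis.split()]
    ref_tokens = [t.strip('.').lower() for t in reference.split()]
    # count unigrams appearing in both, respecting multiplicity
    overlap = sum((Counter(hyp_tokens) & Counter(ref_tokens)).values())
    precision = overlap / len(hyp_tokens)  # matches / hypothesis unigrams
    recall = overlap / len(ref_tokens)     # matches / reference unigrams
    return precision, recall

print(rouge1_by_hand('I have no car.', 'I want to buy a car.'))
# (0.5, 0.3333...) -- the same numbers "datasets" reports

scorer = rouge_scorer.RougeScorer(['rouge1'])
print(scorer.score('I want to buy a car.', 'I have no car.')['rouge1'])
# Score(precision=0.5, recall=0.3333..., fmeasure=0.4) once the reference comes first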