MeteorScoreEvaluator Class

Evaluator that computes the METEOR score between two strings.

The METEOR (Metric for Evaluation of Translation with Explicit Ordering) score evaluates generated text by comparing it to reference texts, measuring precision, recall, and content alignment. It addresses limitations of metrics such as BLEU by matching on synonyms and word stems in addition to exact words, which captures meaning and language variation more accurately. Beyond machine translation and text summarization, it is particularly well suited to paraphrase detection.
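
The underlying computation can be reproduced with NLTK's reference implementation of the metric. The following is a minimal sketch assuming `nltk` is installed; the evaluator itself may differ in preprocessing details:

   import nltk
   from nltk.translate.meteor_score import meteor_score

   # WordNet provides the synonym matching METEOR relies on (one-time download).
   nltk.download("wordnet", quiet=True)
   nltk.download("omw-1.4", quiet=True)

   # Recent NLTK versions expect pre-tokenized input.
   reference = "The capital of Japan is Tokyo.".split()
   hypothesis = "Tokyo is the capital of Japan.".split()

   print(meteor_score([reference], hypothesis, alpha=0.9, beta=3.0, gamma=0.5))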

Usage


   # Assumes the Azure AI Evaluation SDK; adjust the import to your package.
   from azure.ai.evaluation import MeteorScoreEvaluator

   eval_fn = MeteorScoreEvaluator(
       alpha=0.9,
       beta=3.0,
       gamma=0.5,
   )
   result = eval_fn(
       response="Tokyo is the capital of Japan.",
       ground_truth="The capital of Japan is Tokyo.",
   )
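
To score a whole dataset rather than a single pair, the evaluator can be passed to the SDK's `evaluate` entry point. A minimal sketch, assuming the `azure.ai.evaluation` package and a hypothetical `data.jsonl` file whose rows contain `response` and `ground_truth` fields:

   from azure.ai.evaluation import evaluate, MeteorScoreEvaluator

   # "data.jsonl" is hypothetical; each line must provide the fields the
   # evaluator expects, here "response" and "ground_truth".
   result = evaluate(
       data="data.jsonl",
       evaluators={"meteor": MeteorScoreEvaluator()},
   )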

Output format

The score is a float between 0.0 and 1.0; higher values indicate closer alignment with the ground truth.

   {
       "meteor_score": 0.62
   }

Constructor

MeteorScoreEvaluator(alpha: float = 0.9, beta: float = 3.0, gamma: float = 0.5)

Parameters

Name Description
alpha

The METEOR score alpha parameter, which balances precision against recall in the harmonic mean; higher values weight recall more heavily.

Default value: 0.9
beta

The METEOR score beta parameter, which controls the shape of the fragmentation penalty; it is the exponent applied to the chunk fragmentation ratio.

Default value: 3.0
gamma

The METEOR score gamma parameter, the relative weight assigned to the fragmentation penalty.

Default value: 0.5
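
For intuition, the three parameters combine in the standard METEOR formula roughly as follows. This is a simplified sketch with hypothetical values for the alignment statistics (P, R, chunks, matches), not the evaluator's actual code:

   # Standard METEOR combination of the three parameters (simplified sketch).
   alpha, beta, gamma = 0.9, 3.0, 0.5

   P, R = 0.8, 0.9          # hypothetical unigram precision and recall
   chunks, matches = 2, 6   # hypothetical contiguous chunks / matched unigrams

   f_mean = (P * R) / (alpha * P + (1 - alpha) * R)  # recall-weighted harmonic mean
   penalty = gamma * (chunks / matches) ** beta      # fragmentation penalty
   print(f_mean * (1 - penalty))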