GleuScoreEvaluator Class

Evaluator that computes the GLEU (Google-BLEU) score between two strings.

The GLEU (Google-BLEU) score evaluator measures the similarity between generated and reference texts by evaluating n-gram overlap, considering both precision and recall. This balanced evaluation, designed for sentence-level assessment, makes it ideal for detailed analysis of translation quality. GLEU is well-suited for use cases such as machine translation, text summarization, and text generation.
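The exact implementation inside the evaluator is not shown here, but the underlying metric is standard. As a rough illustration only, the sketch below computes a GLEU score for a tokenized response/reference pair with NLTK's gleu_score module (an assumed external dependency, not part of this SDK):

   # Illustrative sketch using NLTK's GLEU implementation, not the
   # evaluator's internal code. Requires: pip install nltk
   from nltk.translate.gleu_score import sentence_gleu

   response = "Tokyo is the capital of Japan.".split()
   ground_truth = "The capital of Japan is Tokyo.".split()

   # sentence_gleu takes a list of tokenized references and one tokenized
   # hypothesis; by default it considers n-grams of length 1 through 4.
   score = sentence_gleu([ground_truth], response)
   print(round(score, 2))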

Usage


   from azure.ai.evaluation import GleuScoreEvaluator

   eval_fn = GleuScoreEvaluator()
   result = eval_fn(
       response="Tokyo is the capital of Japan.",
       ground_truth="The capital of Japan is Tokyo.")

Output format


   {
       "gleu_score": 0.41
   }

Constructor

GleuScoreEvaluator()