Evaluating AI Assistants: Using LLMs as Judges

As consumer AI built on Large Language Models (LLMs) becomes increasingly capable, evaluating it is crucial yet challenging: how can we effectively benchmark an AI assistant's performance, especially in the open-ended, free-form conversations users prefer? Researchers from UC Berkeley, Stanford, and other institutions explore using strong LLMs as judges to evaluate chatbots in a new paper titled "Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena." The core premise is that well-trained LLMs already exhibit alignment with human preferences and can therefore act as surrogates for expensive, time-consuming human ratings.

This LLM-as-a-judge approach offers immense promise in accelerating benchmark development. Let's break down the critical details from the paper.

The Challenge of Evaluating Chatbots

While benchmarks abound for assessing LLMs' core capabilities like knowledge and logic, they focus primarily on closed-ended questions with short, verifiable responses. Yet modern chatbots handle free-form conversations across diverse topics. Evaluating their helpfulness and alignment with user expectations is vital but profoundly challenging.

Human evaluation is reliable but laborious and costly; crowdsourcing ratings from everyday users for every new model revision is simply not practical at scale. At the same time, existing standardized benchmarks often fail to differentiate between base LLMs and the aligned chatbots users actually prefer.

For instance, the researchers demonstrate that human users strongly favor Vicuna, a chatbot fine-tuned to mimic ChatGPT conversations, over the base LLaMA model it's built on. Yet differences in benchmark scores on datasets like HellaSwag remain negligible. This discrepancy highlights the need for better benchmarking paradigms tailored to human preferences.

Introducing MT-Bench and Chatbot Arena

To address this evaluation gap, the researchers construct two new benchmarks with human ratings as key evaluation metrics:

  • MT-Bench: A set of 80 open-ended, multi-turn questions testing critical user-facing abilities like following instructions across a conversation (a sketch of the format appears after this list). Questions fall into diverse domains like writing, reasoning, math, and coding.
  • Chatbot Arena: A live platform where anonymous users chat simultaneously with two models, then vote on preferred responses without knowing model identities. This allows gathering unconstrained votes based on personal interests.
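To make the multi-turn format concrete, here is a minimal sketch of how one MT-Bench-style question might be represented. The example question is paraphrased from the paper's illustrative travel-blog prompt, but the field names and structure are my own assumptions, not the released dataset format.

```python
# Illustrative representation of one MT-Bench-style multi-turn question
# (field names and structure are assumptions, not the official benchmark format).

mt_bench_style_question = {
    "category": "writing",
    "turns": [
        "Compose an engaging travel blog post about a recent trip to Hawaii, "
        "highlighting cultural experiences and must-see attractions.",
        # The second turn deliberately builds on the first, testing whether the
        # model can follow instructions across the conversation.
        "Rewrite your previous response, starting every sentence with the letter A.",
    ],
}

for turn_number, user_message in enumerate(mt_bench_style_question["turns"], start=1):
    print(f"Turn {turn_number}: {user_message}")
```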

These human-centered benchmarks offer more realistic assessments grounded in subjective user preferences rather than technical accuracy alone. As an example, I ran the same prompt against two versions of Claude in Chatbot Arena and found one answer (B) more interesting than the other (A).

You can try this at: https://chat.lmsys.org

LLMs as Surrogate Judges 

The paper investigates using strong LLMs like Claude and GPT-4 as surrogate judges to approximate human ratings. The fundamental hypothesis is that because these models are already trained to match human preferences (e.g., through reinforcement learning from human feedback), their judgments should closely correlate with subjective user assessments. Advantages of this LLM-as-a-judge approach include:

  • Scalability: Automated LLM judgments require minimal human involvement, accelerating benchmark iteration.
  • Explainability: LLM judges provide explanations alongside their scores, which makes their verdicts more interpretable, as illustrated in later examples.
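To make the judging setup concrete, below is a minimal sketch of a pairwise LLM-as-a-judge call. The prompt wording is loosely modeled on the paper's pairwise comparison format, but the exact text, the `query_llm` helper, and the verdict-parsing logic are illustrative assumptions rather than the paper's implementation.

```python
# Minimal pairwise LLM-as-a-judge sketch (illustrative, not the paper's exact prompt).
# `query_llm` is a placeholder for whatever API call reaches a strong judge model
# such as GPT-4 or Claude.

JUDGE_PROMPT = """You are an impartial judge. Compare the two AI assistant
responses to the user question below. Judge helpfulness, relevance, accuracy,
and level of detail, and do not let response length or order sway you.

[Question]
{question}

[Assistant A]
{answer_a}

[Assistant B]
{answer_b}

After a short explanation, output your verdict on the last line as
"[[A]]", "[[B]]", or "[[C]]" for a tie."""


def judge_pair(question: str, answer_a: str, answer_b: str, query_llm) -> str:
    """Return 'A', 'B', or 'tie' according to the judge model's verdict."""
    prompt = JUDGE_PROMPT.format(question=question, answer_a=answer_a, answer_b=answer_b)
    reply = query_llm(prompt)  # e.g. one chat-completion call to the judge model
    last_line = reply.strip().splitlines()[-1]
    if "[[A]]" in last_line:
        return "A"
    if "[[B]]" in last_line:
        return "B"
    return "tie"
```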

The paper systematically analyzes this method by measuring LLM judge agreement with thousands of controlled expert votes and unconstrained crowd votes from the two new benchmarks. But first, let's examine some challenges.

Position Bias and Other Limitations

LLM judges exhibit certain biases that can skew evaluations:

  • Position bias: The tendency to favor responses based simply on the order in which they are presented rather than on their quality. All of the LLM judges examined exhibit significant position bias.
  • Verbosity bias: Longer responses tend to be rated higher regardless of clarity or accuracy. When the researchers artificially lengthened model responses through repetition without adding new information, every judge except GPT-4 failed to detect the distortion.
  • Self-enhancement bias: There are hints that judges prefer responses stylistically similar to their own, but the evidence is too limited for clear conclusions.
  • Reasoning limitations: Because LLMs' math and logic capabilities are still limited, their competence at grading such questions is unsurprisingly limited as well. Even on problems they can solve independently, an incorrect candidate answer can mislead the judge.

Despite these biases, agreement between LLM and human judgments ultimately proves impressive, as discussed next. The researchers also propose techniques to help address limitations like position bias, which we'll revisit later.

Key Finding: LLM Judges Match Human Preferences  

Across both controlled and uncontrolled experiments, GPT-4 achieves over 80% judgment agreement with human assessors, on par with the roughly 81% agreement between two random human raters. This suggests LLMs can serve as cheap and scalable substitutes for costly human evaluations. Here is one highlight:

MT-Bench: On 1138 pairwise comparisons from multi-turn dialogues, GPT-4 attained 66% raw agreement and 85% non-tie agreement with experts. The latter excludes tied comparisons where neither response was favored.
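To clarify the difference between those two numbers, here is a small sketch of how raw and non-tie agreement can be computed from paired verdicts. The list-of-tuples data format is an assumption for illustration, and the exclusion rule (drop a comparison if either side voted tie) is one reasonable reading of the paper's setup.

```python
# Sketch: raw vs. non-tie agreement between a judge and human experts.
# Each record pairs the judge's verdict with a human verdict ("A", "B", or "tie").

def agreement_rates(pairs):
    raw_matches = sum(1 for judge, human in pairs if judge == human)
    non_tie = [(j, h) for j, h in pairs if j != "tie" and h != "tie"]
    non_tie_matches = sum(1 for j, h in non_tie if j == h)
    return {
        "raw": raw_matches / len(pairs),
        "non_tie": non_tie_matches / len(non_tie) if non_tie else float("nan"),
    }

# Toy usage with five comparisons:
votes = [("A", "A"), ("B", "tie"), ("tie", "tie"), ("B", "B"), ("A", "B")]
print(agreement_rates(votes))  # raw: 0.6, non-tie: ~0.67
```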

Remarkably, when human experts disagreed with GPT-4 judgments, they still deemed its explanations reasonable 75% of the time. And 34% directly changed their original choice to align with the LLM assessment after reviewing its analysis. This further validates the reliability of LLM surrogate judging.

LLM agreement rates grow even higher on model pairs exhibiting sharper performance differences. When responses differ significantly in quality, GPT-4 matches experts almost 100% of the time. This suggests alignment improves for more extreme cases that should be easier for both humans and LLMs to judge consistently.

Mitigating LLM Judge Biases 

While the paper demonstrates LLM judge performance largely on par with average human consistency, biases such as position bias remain important targets for improvement. The researchers propose a few mitigation techniques with preliminary success:

  • Swapping positions: Running each judgment twice with the response order flipped and keeping only consistent verdicts helps control position bias (see the sketch after this list).
  • Few-shot examples: Priming LLM judges with a handful of illustrative examples boosts consistency on position-bias tests from 65% to 77% for GPT-4.
  • Reference guidance: For mathematical problems, providing LLM judges with an independently generated reference solution drastically cuts failure rates in assessing candidate answers from 70% down to just 15%. This aids competency on questions requiring precise analysis.
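As a sketch of the first technique, the consistency check can be wrapped around any pairwise judge, such as the `judge_pair` helper sketched earlier. Treating inconsistent verdicts as ties is one reasonable policy; it is not necessarily the paper's exact rule.

```python
# Position-swap consistency check (sketch): judge each pair twice with the
# response order flipped, and only trust verdicts that survive the swap.

def judge_with_swap(question, answer_a, answer_b, query_llm):
    first = judge_pair(question, answer_a, answer_b, query_llm)
    second = judge_pair(question, answer_b, answer_a, query_llm)  # order flipped

    # Map the second verdict back to the original A/B labels.
    flipped = {"A": "B", "B": "A", "tie": "tie"}[second]

    if first == flipped:
        return first  # the verdict is stable under reordering
    return "tie"      # inconsistent verdicts are treated as ties (one policy choice)
```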

So, while biases exist, simple strategies can help minimize their impacts. And overall agreement rates already match or even exceed typical human consistency.

Complementing Standardized Benchmarks   

Human preference benchmarks like MT-Bench and Chatbot Arena assess different dimensions than existing standardized tests of knowledge, reasoning, logic, etc. Using both together paints a fuller picture of model strengths.

For example, the researchers evaluated multiple variants of the base LLaMA model fine-tuned on varying amounts of additional conversation data. Accuracy on standardized benchmarks like HellaSwag improved steadily with more fine-tuning data. However, small, high-quality datasets produced models strongly favored by GPT-4 judgments despite minimal gains on standardized scores.

This shows both benchmark types offer complementary insights. Continued progress will also require pushing beyond narrowly defined technical metrics to capture more subjective human preferences.

Democratizing LLM Evaluation 

Evaluating sophisticated models like ChatGPT still requires expertise today. But platforms like CPROMPT.AI open LLM capabilities to everyone by converting text prompts into accessible web apps. With intuitive visual interfaces, anyone can tap into advanced LLMs to create AI-powered tools for education, creativity, productivity, and more. No coding is needed, and the apps can be shared publicly or sold without any infrastructure or scaling worries.

By combining such no-code platforms with the automated LLM judge approaches above, benchmarking model quality could also become democratized. Non-experts can build custom benchmark apps to evaluate evolving chatbots against subjective user criteria.  

More comprehensive access can help address benchmark limitations like overfitting on standardized tests by supporting more dynamic, personalized assessments. This is aligned with emerging paradigms like Dynabench that emphasize continuous, human-grounded model evaluations based on actual use cases versus narrow accuracy metrics alone.

Lowering barriers facilitates richer, real-world measurements of AI progress beyond expert evaluations.

Key Takeaways

Let's recap the critical lessons around using LLMs as judges to evaluate chatbots:

  • Aligning AI with subjective user preferences is crucial yet enormously challenging to measure effectively.
  • New human preference benchmarks like MT-Bench reveal alignment gaps that solid performance on standardized tests can mask.
  • Employing LLMs as surrogate judges provides a scalable and automated way to approximate human assessments.
  • LLMs like GPT-4 can match expert consistency levels above 80%, confirming efficacy.
  • Certain biases affect LLM judges, but mitigation strategies like swapping response positions and few-shot examples help address those.
  • Maximizing progress requires hybrid evaluation frameworks combining standardized benchmarks and human preference tests.

As chatbot quality continues to improve rapidly, maintaining alignment with user expectations is imperative. Testing paradigms grounded in human judgments enable safe, trustworthy AI development. Utilizing LLMs as judges offers a tractable path to keep pace with accelerating progress in this domain.

Glossary

  • MT-Bench: Suite of open-ended, multi-turn benchmark questions with human rating comparisons  
  • Chatbot Arena: Platform that gathers unconstrained conversations and votes by pitting anonymous models against each other
  • Human preference benchmark: Tests targeting subjective user alignments beyond just technical accuracy
  • LLM-as-a-judge: Approach using large language models to substitute for human evaluation and preferences
  • Position bias: Tendency for language models to favor candidate responses based simply on the order presented rather than quality