Judge implementation that evaluates AI responses and manages the associated evaluation conversation.

According to the AIEval spec, judges are AI Configs with mode: "judge" that evaluate other AI Configs using structured output.
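As a rough illustration, a judge configuration might look like the sketch below. Only mode: "judge" is confirmed by the text above; the other field names are assumptions modeled on this class's properties, and the authoritative schema is the AIEval spec.

```typescript
// Hypothetical judge AI Config shape, for illustration only.
// Only `mode: "judge"` is stated on this page; the other fields are
// assumptions mirroring this class's properties (see Properties below).
const judgeConfig = {
  mode: 'judge',
  // Structured output the judge is expected to produce when scoring a
  // response (loosely mirrors the _evaluationResponseStructure property).
  evaluationResponseStructure: {
    relevance: 'number', // hypothetical metric fields
    accuracy: 'number',
  },
};
```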

Hierarchy

  • Judge

Constructors

Properties

_aiConfig: LDAIJudgeConfig
_aiConfigTracker: LDAIConfigTracker
_aiProvider: AIProvider
_evaluationResponseStructure: Record<string, unknown>
_logger?: LDLogger

Methods

  • Interpolates message content with variables using Mustache templating.

    Parameters

    • content: string
    • variables: Record<string, string>

    Returns string
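    For illustration, the sketch below shows how Mustache-style interpolation behaves, using the mustache npm package. This is a behavioral example, not this class's internal implementation.

    ```typescript
    import Mustache from 'mustache';

    // "Evaluate the answer about {{topic}}." + { topic: 'geography' }
    // renders to "Evaluate the answer about geography."
    const content = 'Evaluate the answer about {{topic}}.';
    const variables: Record<string, string> = { topic: 'geography' };
    const rendered: string = Mustache.render(content, variables);
    ```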

  • Evaluates an AI response using the judge's configuration.

    Parameters

    • input: string

      The input prompt or question that was provided to the AI

    • output: string

      The AI-generated response to be evaluated

    • samplingRate: number = 1

      Sampling rate (0-1) to determine if evaluation should be processed (defaults to 1)

    Returns Promise<undefined | JudgeResponse>

    Promise that resolves to evaluation results or undefined if not sampled
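    A hedged usage sketch follows; this page does not show the method's name, so evaluate is an assumption.

    ```typescript
    // Hypothetical call; `judge` is an already-constructed Judge instance
    // and `evaluate` is an assumed method name.
    declare const judge: Judge;

    const result = await judge.evaluate(
      'What is the capital of France?',  // input given to the AI
      'Paris is the capital of France.', // output to be evaluated
      0.5,                               // sample roughly half of all calls
    );

    if (result === undefined) {
      // This call fell outside the sampling rate; no evaluation was run.
    } else {
      // `result` is a JudgeResponse produced via the judge's structured output.
    }
    ```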

  • Evaluates an AI response given the conversation history and the response itself.

    Parameters

    • messages: LDMessage[]

      Array of messages representing the conversation history

    • response: ChatResponse

      The AI response to be evaluated

    • samplingRatio: number = 1

      Sampling ratio (0-1) to determine if evaluation should be processed (defaults to 1)

    Returns Promise<undefined | JudgeResponse>

    Promise that resolves to evaluation results or undefined if not sampled
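    A similar sketch for the message-based variant; evaluateMessages is an assumed method name, and the message and response values are simplified placeholders.

    ```typescript
    // Hypothetical call; `evaluateMessages` is an assumed method name and
    // `chatResponse` stands in for a real ChatResponse from a chat session.
    declare const judge: Judge;
    declare const chatResponse: ChatResponse;

    const history: LDMessage[] = [
      { role: 'user', content: 'Summarize the attached report.' },
    ];

    // With the default samplingRatio of 1, every call is evaluated.
    const evaluation = await judge.evaluateMessages(history, chatResponse);
    if (evaluation !== undefined) {
      // Inspect the structured JudgeResponse here.
    }
    ```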
