Interface for performing AI operations using LaunchDarkly.

Hierarchy

  • LDAIClient

Methods

  • Retrieves and processes a single AI Config agent based on the provided key, LaunchDarkly context, and variables. This includes the model configuration and the customized instructions.

    Parameters

    • key: string

      The key of the AI Config agent.

    • context: LDContext

      The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.

    • defaultValue: LDAIAgentConfigDefault

      A fallback value containing model configuration and instructions.

    • Optional variables: Record<string, unknown>

      A map of key-value pairs representing dynamic variables to be injected into the instructions. The keys correspond to placeholders within the template, and the values are the corresponding replacements.

    Returns Promise<LDAIAgentConfig>

    An AI agent with customized instructions and a tracker. If the configuration cannot be accessed from LaunchDarkly, then the return value will include information from the defaultValue. The returned tracker can be used to track AI operation metrics (latency, token usage, etc.).

    Example

    const key = "research_agent";
    const context = {...};
    const variables = { topic: 'climate change' };
    const agentConfig = await client.agentConfig(key, context, {
      enabled: true,
      model: { name: 'gpt-4' },
      provider: { name: 'openai' },
      instructions: 'You are a research assistant.',
    }, variables);

    const researchResult = agentConfig.instructions; // Interpolated instructions
    agentConfig.tracker.trackSuccess();
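
    If the instructions in LaunchDarkly (or in the defaultValue) contain placeholders, they are filled from the variables before the agent is returned. Below is a minimal sketch of passing the interpolated instructions to a provider call; the {{topic}} placeholder syntax, the openai npm package, and the shape of agent.model are assumptions for illustration, not guarantees of this API.

    import OpenAI from 'openai';

    const openai = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

    const agent = await client.agentConfig(
      'research_agent',
      context,
      {
        enabled: true,
        model: { name: 'gpt-4' },
        provider: { name: 'openai' },
        // Hypothetical placeholder, assumed to be resolved from `variables`.
        instructions: 'You are a research assistant focused on {{topic}}.',
      },
      { topic: 'climate change' },
    );

    // Use the interpolated instructions as the system prompt.
    const completion = await openai.chat.completions.create({
      model: agent.model?.name ?? 'gpt-4', // assumes the returned config mirrors the default's shape
      messages: [
        { role: 'system', content: agent.instructions ?? '' },
        { role: 'user', content: 'Summarize the latest findings on this topic.' },
      ],
    });

    console.log(completion.choices[0]?.message?.content);
    agent.tracker.trackSuccess();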
  • Retrieves and processes multiple AI Config agents based on the provided agent configurations and LaunchDarkly context. This includes the model configuration and the customized instructions.

    Type Parameters

    • T

    Parameters

    • agentConfigs: T

      An array of agent configurations, each containing the agent key, default configuration, and variables for instruction interpolation.

    • context: LDContext

      The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.

    Returns Promise<Record<T[number]["key"], LDAIAgentConfig>>

    A map of agent keys to their respective AI agents with customized instructions and tracker. If a configuration cannot be accessed from LaunchDarkly, then the return value will include information from the respective defaultValue. The returned tracker can be used to track AI operation metrics (latency, token usage, etc.).

    Example

    const agentConfigsList = [
      {
        key: 'research_agent',
        defaultValue: {
          enabled: true,
          model: { name: 'gpt-4' },
          provider: { name: 'openai' },
          instructions: 'You are a research assistant.'
        },
        variables: { topic: 'climate change' }
      },
      {
        key: 'writing_agent',
        defaultValue: {
          enabled: true,
          model: { name: 'gpt-4' },
          provider: { name: 'openai' },
          instructions: 'You are a writing assistant.'
        },
        variables: { style: 'academic' }
      }
    ] as const;
    const context = {...};

    const configs = await client.agentConfigs(agentConfigsList, context);
    const researchResult = configs["research_agent"].instructions; // Interpolated instructions
    configs["research_agent"].tracker.trackSuccess();
  • Type Parameters

    • T

    Parameters

    • agentConfigs: T
    • context: LDContext

    Returns Promise<Record<T[number]["key"], LDAIAgentConfig>>

    Deprecated

    Use agentConfigs instead. This method will be removed in a future version.

  • Retrieves and processes a completion AI Config based on the provided key, LaunchDarkly context, and variables. This includes the model configuration and the customized messages.

    Parameters

    • key: string

      The key of the AI Config.

    • context: LDContext

      The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.

    • defaultValue: LDAICompletionConfigDefault

      A fallback value containing model configuration and messages. This will be used if the configuration is not available from LaunchDarkly.

    • Optional variables: Record<string, unknown>

      A map of key-value pairs representing dynamic variables to be injected into the message content. The keys correspond to placeholders within the template, and the values are the corresponding replacements.

    Returns Promise<LDAICompletionConfig>

    The AI config, customized messages, and a tracker. If the configuration cannot be accessed from LaunchDarkly, then the return value will include information from the defaultValue. The returned tracker can be used to track AI operation metrics (latency, token usage, etc.).

    Example

    const key = "welcome_prompt";
    const context = {...};
    const variables = {username: 'john'};
    const defaultValue = {
      enabled: false,
      model: { name: 'gpt-4' },
      provider: { name: 'openai' },
    };

    const result = await client.completionConfig(key, context, defaultValue, variables);
    // Output:
    // {
    //   enabled: true,
    //   config: {
    //     modelId: "gpt-4o",
    //     temperature: 0.2,
    //     maxTokens: 4096,
    //     userDefinedKey: "myValue",
    //   },
    //   messages: [
    //     {
    //       role: "system",
    //       content: "You are an amazing GPT."
    //     },
    //     {
    //       role: "user",
    //       content: "Explain how you're an amazing GPT."
    //     }
    //   ],
    //   tracker: ...
    // }
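
    The returned messages are meant to be handed to your model provider. Continuing the example above, here is a minimal sketch that assumes the openai npm package; the model name and the cast on messages are assumptions, and only trackSuccess (shown elsewhere on this page) is called on the tracker.

    import OpenAI from 'openai';

    const openai = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

    if (result.enabled) {
      const response = await openai.chat.completions.create({
        model: 'gpt-4o', // assumption: use the model your completion config targets
        messages: (result.messages ?? []) as OpenAI.Chat.Completions.ChatCompletionMessageParam[],
      });
      console.log(response.choices[0]?.message?.content);
      result.tracker.trackSuccess();
    }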
  • Returns a TrackedChat instance for chat interactions. This method serves as the primary entry point for creating TrackedChat instances from configuration.

    Parameters

    • key: string

      The key identifying the AI chat configuration to use.

    • context: LDContext

      The standard LDContext used when evaluating flags.

    • defaultValue: LDAICompletionConfigDefault

      A default value representing a standard AI chat config result.

    • Optional variables: Record<string, unknown>

      A dictionary of values for instruction interpolation. The variables are also used for judge evaluation; for the judge only, the keys message_history and response_to_evaluate are reserved and will be ignored.

    • Optional defaultAiProvider: "openai" | "langchain" | "vercel"

      Optional default AI provider to use.

    Returns Promise<undefined | TrackedChat>

    A promise that resolves to the TrackedChat instance, or undefined if the configuration is disabled.

    Example

    const key = "customer_support_chat";
    const context = {...};
    const defaultValue = {
      enabled: false,
      model: { name: "gpt-4" },
      provider: { name: "openai" },
      messages: [
        { role: "system", content: "You are a helpful customer support agent." }
      ]
    };
    const variables = { customerName: 'John' };

    const chat = await client.createChat(key, context, defaultValue, variables);
    if (chat) {
      const response = await chat.invoke("I need help with my order");
      console.log(response.message.content);
    }
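
    Assuming the TrackedChat keeps the conversation history between turns (not stated explicitly above), follow-up prompts can reuse the same chat object. A minimal sketch continuing the example above; the follow-up prompt is illustrative.

    if (chat) {
      // Assumption: each invoke() call appends to the chat's conversation history,
      // so later turns can refer back to earlier ones.
      const followUp = await chat.invoke('The order number is 12345 and it still has not arrived.');
      console.log(followUp.message.content);
    }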
  • Creates and returns a new Judge instance for AI evaluation.

    Parameters

    • key: string

      The key identifying the AI judge configuration to use.

    • context: LDContext

      The standard LDContext used when evaluating flags.

    • defaultValue: LDAIJudgeConfigDefault

      A default value representing a standard AI config result.

    • Optional variables: Record<string, unknown>

      Dictionary of values for instruction interpolation. The variables message_history and response_to_evaluate are reserved for the judge and will be ignored.

    • Optional defaultAiProvider: "openai" | "langchain" | "vercel"

      Optional default AI provider to use.

    Returns Promise<undefined | Judge>

    A promise that resolves to a Judge instance, or undefined if the configuration is disabled or unsupported.

    Example

    const judge = await client.createJudge(
      "relevance-judge",
      context,
      {
        enabled: true,
        model: { name: "gpt-4" },
        provider: { name: "openai" },
        evaluationMetricKeys: ['$ld:ai:judge:relevance'],
        messages: [{ role: 'system', content: 'You are a relevance judge.' }]
      },
      { metric: "relevance" }
    );

    if (judge) {
      const result = await judge.evaluate("User question", "AI response");
      console.log('Relevance score:', result.evals.relevance?.score);
    }
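
    createChat and createJudge compose naturally: use the judge to score the chat's reply. The sketch below uses only calls shown in the examples on this page (createChat, invoke, createJudge, evaluate); the prompts and default values are illustrative.

    const supportChat = await client.createChat('customer_support_chat', context, {
      enabled: true,
      model: { name: 'gpt-4' },
      provider: { name: 'openai' },
      messages: [{ role: 'system', content: 'You are a helpful customer support agent.' }],
    });

    const relevanceJudge = await client.createJudge('relevance-judge', context, {
      enabled: true,
      model: { name: 'gpt-4' },
      provider: { name: 'openai' },
      evaluationMetricKeys: ['$ld:ai:judge:relevance'],
      messages: [{ role: 'system', content: 'You are a relevance judge.' }],
    });

    if (supportChat && relevanceJudge) {
      const question = 'How do I reset my password?';
      const reply = await supportChat.invoke(question);

      // Judge the chat's answer against the original question.
      const evaluation = await relevanceJudge.evaluate(question, reply.message.content);
      console.log('Relevance score:', evaluation.evals.relevance?.score);
    }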
  • Parameters

    • key: string
    • context: LDContext
    • defaultValue: LDAICompletionConfigDefault
    • Optional variables: Record<string, unknown>
    • Optional defaultAiProvider: "openai" | "langchain" | "vercel"

    Returns Promise<undefined | TrackedChat>

    Deprecated

    Use createChat instead. This method will be removed in a future version.

  • Retrieves and processes a Judge AI Config based on the provided key, LaunchDarkly context, and variables. This includes the model configuration and the customized messages for evaluation.

    Parameters

    • key: string

      The key of the Judge AI Config.

    • context: LDContext

      The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.

    • defaultValue: LDAIJudgeConfigDefault

      A fallback value containing model configuration and messages. This will be used if the configuration is not available from LaunchDarkly.

    • Optional variables: Record<string, unknown>

      Optional variables for template interpolation in messages and instructions.

    Returns Promise<LDAIJudgeConfig>

    A promise that resolves to a tracked judge configuration.

    Example

    const judgeConf = await client.judgeConfig(key, context, {
      enabled: true,
      model: { name: 'gpt-4' },
      provider: { name: 'openai' },
      evaluationMetricKeys: ['$ld:ai:judge:relevance'],
      messages: [{ role: 'system', content: 'You are a relevance judge.' }]
    }, variables);

    const config = judgeConf.config; // Interpolated configuration
    judgeConf.tracker.trackSuccess();

Generated using TypeDoc