Interface for performing AI operations using LaunchDarkly.

Hierarchy

  • LDAIClient

Methods

  • Parameters

    • key: string
    • context: LDContext
    • Optional defaultValue: LDAIAgentConfigDefault
    • Optional variables: Record<string, unknown>

    Returns Promise<LDAIAgentConfig>

    Deprecated

    Use agentConfig instead. This method will be removed in a future version.

  • Retrieves and processes a single AI Config agent based on the provided key, LaunchDarkly context, and variables. This includes the model configuration and the customized instructions.

    Parameters

    • key: string

      The key of the AI Config agent.

    • context: LDContext

      The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.

    • Optional defaultValue: LDAIAgentConfigDefault

      Optional fallback when the configuration is not available from LaunchDarkly. When omitted or null, a disabled default is used.

    • Optional variables: Record<string, unknown>

      A map of key-value pairs representing dynamic variables to be injected into the instructions. The keys correspond to placeholders within the template, and the values are the corresponding replacements.


    Returns Promise<LDAIAgentConfig>

    An AI agent with customized instructions and a tracker. If the configuration cannot be accessed from LaunchDarkly, then the return value will include information from the defaultValue. The returned tracker can be used to track AI operation metrics (latency, token usage, etc.).

    Example

    const key = "research_agent";
    const context = {...};
    const variables = { topic: 'climate change' };
    const agentConfig = await client.agentConfig(key, context, {
      enabled: true,
      model: { name: 'gpt-4' },
      provider: { name: 'openai' },
      instructions: 'You are a research assistant.',
    }, variables);

    const researchResult = agentConfig.instructions; // Interpolated instructions
    agentConfig.tracker.trackSuccess();
  • Retrieves and processes multiple AI Config agents based on the provided agent configurations and LaunchDarkly context. This includes the model configuration and the customized instructions.

    Type Parameters

    Parameters

    • agentConfigs: T

      An array of agent configurations, each containing the agent key, optional default configuration (when omitted or null, a disabled default is used), and variables for instructions interpolation.

    • context: LDContext

      The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.

    Returns Promise<Record<T[number]["key"], LDAIAgentConfig>>

    A map of agent keys to their respective AI agents with customized instructions and tracker. If a configuration cannot be accessed from LaunchDarkly, then the return value will include information from the respective defaultValue. The returned tracker can be used to track AI operation metrics (latency, token usage, etc.).

    Example

    const agentConfigsList = [
      {
        key: 'research_agent',
        defaultValue: {
          enabled: true,
          model: { name: 'gpt-4' },
          provider: { name: 'openai' },
          instructions: 'You are a research assistant.'
        },
        variables: { topic: 'climate change' }
      },
      {
        key: 'writing_agent',
        defaultValue: {
          enabled: true,
          model: { name: 'gpt-4' },
          provider: { name: 'openai' },
          instructions: 'You are a writing assistant.'
        },
        variables: { style: 'academic' }
      }
    ] as const;
    const context = {...};

    const configs = await client.agentConfigs(agentConfigsList, context);
    const researchResult = configs["research_agent"].instructions; // Interpolated instructions
    configs["research_agent"].tracker.trackSuccess();
  • Fetches an agent graph configuration from LaunchDarkly and returns an AgentGraphDefinition.

    When the graph is enabled the method validates that:

    • The graph flag can be evaluated.
    • A single root node is present.
    • All nodes in the graph are reachable from the root (no disconnected nodes).
    • Every referenced agent config can be fetched and is enabled.

    If any validation check fails, the returned definition has enabled set to false with an empty node collection. When the logger level is DEBUG, a message describing the failure is emitted.
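
    The reachability rule above can be sketched as a breadth-first search from the root node. This is an illustrative check mirroring the documented behavior, not the SDK's actual implementation; the `Node` shape here is hypothetical.

```typescript
// Hedged sketch of the "all nodes reachable from the root" validation rule.
type Node = { key: string; children: string[] };

function allReachable(root: string, nodes: Map<string, Node>): boolean {
  const seen = new Set<string>([root]);
  const queue = [root];
  while (queue.length > 0) {
    const current = nodes.get(queue.shift()!);
    if (!current) return false; // dangling reference to a missing node
    for (const child of current.children) {
      if (!seen.has(child)) {
        seen.add(child);
        queue.push(child);
      }
    }
  }
  // Every node in the graph must have been visited.
  return seen.size === nodes.size;
}

const nodes = new Map<string, Node>([
  ['root', { key: 'root', children: ['research'] }],
  ['research', { key: 'research', children: [] }],
  ['orphan', { key: 'orphan', children: [] }], // disconnected node
]);

// With the orphan present, validation fails; without it, it passes.
const withOrphan = allReachable('root', nodes);    // false
nodes.delete('orphan');
const withoutOrphan = allReachable('root', nodes); // true
```

    In the SDK, a failure of any such check yields a definition with enabled set to false rather than a partial graph.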

    Parameters

    • graphKey: string

      The LaunchDarkly flag key for the agent graph configuration.

    • context: LDContext

      The LaunchDarkly context used for flag evaluation and tracking.

    • Optional variables: Record<string, unknown>

      Optional key-value pairs used for Mustache template interpolation in each node's agent config instructions. Applied uniformly to all nodes.


    Returns Promise<AgentGraphDefinition>

    A promise that resolves to an AgentGraphDefinition. Check enabled before traversing.

    Example

    const graph = await aiClient.agentGraph('my-agent-graph', context, { userName: 'Sandy' });
    if (graph.enabled) {
      graph.traverse((node, ctx) => {
        // build your provider-specific node here
      });
    }
  • Type Parameters

    Parameters

    • agentConfigs: T
    • context: LDContext

    Returns Promise<Record<T[number]["key"], LDAIAgentConfig>>

    Deprecated

    Use agentConfigs instead. This method will be removed in a future version.

  • Retrieves and processes a completion AI Config based on the provided key, LaunchDarkly context, and variables. This includes the model configuration and the customized messages.

    Parameters

    • key: string

      The key of the AI Config.

    • context: LDContext

      The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.

    • Optional defaultValue: LDAICompletionConfigDefault

      Optional fallback when the configuration is not available from LaunchDarkly. When omitted or null, a disabled default is used.

    • Optional variables: Record<string, unknown>

      A map of key-value pairs representing dynamic variables to be injected into the message content. The keys correspond to placeholders within the template, and the values are the corresponding replacements.


    Returns Promise<LDAICompletionConfig>

    The AI config, customized messages, and a tracker. If the configuration cannot be accessed from LaunchDarkly, then the return value will include information from the defaultValue. The returned tracker can be used to track AI operation metrics (latency, token usage, etc.).

    Example

    const key = "welcome_prompt";
    const context = {...};
    const variables = { username: 'john' };
    const defaultValue = {
      enabled: true,
      model: { name: 'gpt-4' },
      provider: { name: 'openai' },
    };

    const result = await client.completionConfig(key, context, defaultValue, variables);
    // Example output:
    {
      enabled: true,
      config: {
        modelId: "gpt-4o",
        temperature: 0.2,
        maxTokens: 4096,
        userDefinedKey: "myValue",
      },
      messages: [
        {
          role: "system",
          content: "You are an amazing GPT."
        },
        {
          role: "user",
          content: "Explain how you're an amazing GPT."
        }
      ],
      tracker: ...
    }
  • Returns a TrackedChat instance for chat interactions. This method serves as the primary entry point for creating TrackedChat instances from configuration.

    Parameters

    • key: string

      The key identifying the AI chat configuration to use.

    • context: LDContext

      The standard LDContext used when evaluating flags.

    • Optional defaultValue: LDAICompletionConfigDefault

      Optional fallback when the configuration is not available from LaunchDarkly. When omitted or null, a disabled default is used.

    • Optional variables: Record<string, unknown>

      Dictionary of values for instruction interpolation. The variables will also be used for judge evaluation. For the judge only, the variables message_history and response_to_evaluate are reserved and will be ignored.

    • Optional defaultAiProvider: "openai" | "langchain" | "vercel"

      Optional default AI provider to use.


    Returns Promise<undefined | TrackedChat>

    A promise that resolves to the TrackedChat instance, or undefined if the configuration is disabled.

    Example

    const key = "customer_support_chat";
    const context = {...};
    const defaultValue = {
      enabled: true,
      model: { name: "gpt-4" },
      provider: { name: "openai" },
      messages: [
        { role: "system", content: "You are a helpful customer support agent." }
      ]
    };
    const variables = { customerName: 'John' };

    const chat = await client.createChat(key, context, defaultValue, variables);
    if (chat) {
      const response = await chat.invoke("I need help with my order");
      console.log(response.message.content);
    }
  • Reconstructs an LDGraphTracker from a resumption token, preserving the original runId so events from a resumed session are correlated correctly.

    Security note: The token encodes the flag variation key and version. Keep it server-side; do not expose it to untrusted clients.

    Parameters

    • token: string

      URL-safe Base64-encoded token from LDGraphTracker.resumptionToken.

    • context: LDContext

      LDContext to associate with the reconstructed tracker.

    Returns LDGraphTracker

  • Creates and returns a new Judge instance for AI evaluation.

    Parameters

    • key: string

      The key identifying the AI judge configuration to use.

    • context: LDContext

      The standard LDContext used when evaluating flags.

    • Optional defaultValue: LDAIJudgeConfigDefault

      Optional fallback when the configuration is not available from LaunchDarkly. When omitted or null, a disabled default is used.

    • Optional variables: Record<string, unknown>

      Dictionary of values for instruction interpolation. The variables message_history and response_to_evaluate are reserved for the judge and will be ignored.

    • Optional defaultAiProvider: "openai" | "langchain" | "vercel"

      Optional default AI provider to use.


    Returns Promise<undefined | Judge>

    A promise that resolves to a Judge instance, or undefined if the configuration is disabled or unsupported.

    Example

    const judge = await client.createJudge(
      "relevance-judge",
      context,
      {
        enabled: true,
        model: { name: "gpt-4" },
        provider: { name: "openai" },
        evaluationMetricKey: '$ld:ai:judge:relevance',
        messages: [{ role: 'system', content: 'You are a relevance judge.' }]
      },
      { metric: "relevance" }
    );

    if (judge) {
      const result = await judge.evaluate("User question", "AI response");
      console.log('Relevance score:', result.evals.relevance?.score);
    }
  • Reconstructs an AIConfigTracker from a resumption token string previously obtained from a tracker's resumptionToken property. Use this to associate deferred events (such as user feedback) with the original invocation's runId.

    Parameters

    • token: string

      A URL-safe Base64-encoded resumption token string.

    • context: LDContext

      The evaluation context to use for subsequent track calls.

    Returns LDAIConfigTracker

    A reconstructed AIConfigTracker with the original runId preserved.
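
    Example

    A self-contained sketch of the resumption-token round trip. The stub below stands in for the real SDK (the actual token format and tracker internals are not shown on this page); it models only the documented behavior: the token is URL-safe Base64 and the original runId survives reconstruction.

```typescript
// Hypothetical stub of the deferred-feedback pattern; not the SDK itself.
type LDContext = { kind: string; key: string };

interface StubTracker {
  runId: string;
  resumptionToken: string;
}

// Invocation time: the tracker exposes a URL-safe Base64 resumption token.
function stubTracker(runId: string): StubTracker {
  const token = Buffer.from(JSON.stringify({ runId })).toString('base64url');
  return { runId, resumptionToken: token };
}

// Later (e.g. when user feedback arrives): rebuild a tracker from the token
// so deferred track calls are correlated with the original runId.
function stubTrackerFromToken(token: string, _context: LDContext): StubTracker {
  const { runId } = JSON.parse(Buffer.from(token, 'base64url').toString('utf8'));
  return { runId, resumptionToken: token };
}

const original = stubTracker('run-123');
const saved = original.resumptionToken; // persist alongside the AI response

const resumed = stubTrackerFromToken(saved, { kind: 'user', key: 'user-1' });
// resumed.runId is 'run-123', so deferred events attach to the original run.
```

    The same round trip applies to LDGraphTracker resumption tokens; as noted above, keep tokens server-side.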

  • Parameters

    • key: string
    • context: LDContext
    • Optional defaultValue: LDAICompletionConfigDefault
    • Optional variables: Record<string, unknown>
    • Optional defaultAiProvider: "openai" | "langchain" | "vercel"

    Returns Promise<undefined | TrackedChat>

    Deprecated

    Use createChat instead. This method will be removed in a future version.

  • Retrieves and processes a Judge AI Config based on the provided key, LaunchDarkly context, and variables. This includes the model configuration and the customized messages for evaluation.

    Parameters

    • key: string

      The key of the Judge AI Config.

    • context: LDContext

      The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.

    • Optional defaultValue: LDAIJudgeConfigDefault

      Optional fallback when the configuration is not available from LaunchDarkly. When omitted or null, a disabled default is used.

    • Optional variables: Record<string, unknown>

      Optional variables for template interpolation in messages and instructions.


    Returns Promise<LDAIJudgeConfig>

    A promise that resolves to a tracked judge configuration.

    Example

    const judgeConf = await client.judgeConfig(key, context, {
      enabled: true,
      model: { name: 'gpt-4' },
      provider: { name: 'openai' },
      evaluationMetricKey: '$ld:ai:judge:relevance',
      messages: [{ role: 'system', content: 'You are a relevance judge.' }]
    }, variables);

    const config = judgeConf.config; // Interpolated configuration
    judgeConf.tracker.trackSuccess();

Generated using TypeDoc