Optional defaultValue: LDAIAgentConfigDefault
Optional variables: Record<string, unknown>
Use agentConfig instead. This method will be removed in a future version.
Retrieves and processes a single AI Config agent based on the provided key, LaunchDarkly context, and variables. This includes the model configuration and the customized instructions.
The key of the AI Config agent.
The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.
Optional defaultValue: LDAIAgentConfigDefault
Optional fallback when the configuration is not available from LaunchDarkly. When omitted or null, a disabled default is used.
Optional variables: Record<string, unknown>
A map of key-value pairs representing dynamic variables to be injected into the instructions. The keys correspond to placeholders within the template, and the values are the corresponding replacements.
An AI agent with customized instructions and a tracker. If the configuration
cannot be accessed from LaunchDarkly, then the return value will include information from the
defaultValue. The returned tracker can be used to track AI operation metrics (latency, token usage, etc.).
const key = "research_agent";
const context = {...};
const variables = { topic: 'climate change' };
const agentConfig = await client.agentConfig(key, context, {
  enabled: true,
  model: { name: 'gpt-4' },
  provider: { name: 'openai' },
  instructions: 'You are a research assistant.',
}, variables);
const researchResult = agentConfig.instructions; // Interpolated instructions
agentConfig.tracker.trackSuccess();
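The interpolation step that produces agentConfig.instructions can be pictured with a minimal sketch. This is a simplified regex stand-in for the SDK's Mustache templating, and interpolate is a hypothetical helper, not part of the SDK:

```typescript
// Simplified stand-in for Mustache-style interpolation of instructions:
// {{topic}}-style placeholders are replaced with values from `variables`.
function interpolate(template: string, vars: Record<string, unknown>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_match, key) =>
    key in vars ? String(vars[key]) : '',
  );
}

const instructions = interpolate(
  'You are a research assistant. Focus on {{topic}}.',
  { topic: 'climate change' },
);
console.log(instructions);
// → 'You are a research assistant. Focus on climate change.'
```

Placeholders with no matching variable are replaced with an empty string here; the real templating engine has richer behavior (sections, escaping) that this sketch does not cover.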
Retrieves and processes multiple AI Config agents based on the provided agent configurations and LaunchDarkly context. This includes the model configuration and the customized instructions.
An array of agent configurations, each containing the agent key, an optional default configuration (when omitted or null, a disabled default is used), and variables for instruction interpolation.
The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.
A map of agent keys to their respective AI agents with customized instructions and trackers.
If a configuration cannot be accessed from LaunchDarkly, then the return value will include information
from the respective defaultValue. The returned tracker can be used to track AI operation metrics
(latency, token usage, etc.).
const agentConfigsList = [
  {
    key: 'research_agent',
    defaultValue: {
      enabled: true,
      model: { name: 'gpt-4' },
      provider: { name: 'openai' },
      instructions: 'You are a research assistant.'
    },
    variables: { topic: 'climate change' }
  },
  {
    key: 'writing_agent',
    defaultValue: {
      enabled: true,
      model: { name: 'gpt-4' },
      provider: { name: 'openai' },
      instructions: 'You are a writing assistant.'
    },
    variables: { style: 'academic' }
  }
] as const;
const context = {...};
const configs = await client.agentConfigs(agentConfigsList, context);
const researchResult = configs["research_agent"].instructions; // Interpolated instructions
configs["research_agent"].tracker.trackSuccess();
Fetches an agent graph configuration from LaunchDarkly and returns an AgentGraphDefinition.
When the graph is enabled, the method validates the configuration. If any validation check fails, the returned definition has enabled set to false and an empty node collection. When the logger level is DEBUG, a message describing the failure is emitted.
The LaunchDarkly flag key for the agent graph configuration.
The LaunchDarkly context used for flag evaluation and tracking.
Optional variables: Record<string, unknown>
Optional key-value pairs used for Mustache template interpolation in each node's agent config instructions. Applied uniformly to all nodes.
A promise that resolves to an AgentGraphDefinition. Check enabled before traversing.
const graph = await aiClient.agentGraph('my-agent-graph', context, { userName: 'Sandy' });
if (graph.enabled) {
  graph.traverse((node, ctx) => {
    // build your provider-specific node here
  });
}
Use agentConfigs instead. This method will be removed in a future version.
Retrieves and processes a completion AI Config based on the provided key, LaunchDarkly context, and variables. This includes the model configuration and the customized messages.
The key of the AI Config.
The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.
Optional defaultValue: LDAICompletionConfigDefault
Optional fallback when the configuration is not available from LaunchDarkly. When omitted or null, a disabled default is used.
Optional variables: Record<string, unknown>
A map of key-value pairs representing dynamic variables to be injected into the message content. The keys correspond to placeholders within the template, and the values are the corresponding replacements.
The AI config, customized messages, and a tracker. If the configuration cannot be accessed from
LaunchDarkly, then the return value will include information from the defaultValue. The returned tracker can
be used to track AI operation metrics (latency, token usage, etc.).
const key = "welcome_prompt";
const context = {...};
const variables = {username: 'john'};
const defaultValue = {
  enabled: true,
  model: { name: 'gpt-4' },
  provider: { name: 'openai' },
};
const result = await client.completionConfig(key, context, defaultValue, variables);
// Output:
{
  enabled: true,
  config: {
    modelId: "gpt-4o",
    temperature: 0.2,
    maxTokens: 4096,
    userDefinedKey: "myValue",
  },
  messages: [
    {
      role: "system",
      content: "You are an amazing GPT."
    },
    {
      role: "user",
      content: "Explain how you're an amazing GPT."
    }
  ],
  tracker: ...
}
Optional defaultValue: LDAICompletionConfigDefault
Optional variables: Record<string, unknown>
Use completionConfig instead. This method will be removed in a future version.
Returns a TrackedChat instance for chat interactions. This method serves as the primary entry point for creating TrackedChat instances from configuration.
The key identifying the AI chat configuration to use.
The standard LDContext used when evaluating flags.
Optional defaultValue: LDAICompletionConfigDefault
Optional fallback when the configuration is not available from LaunchDarkly. When omitted or null, a disabled default is used.
Optional variables: Record<string, unknown>
Dictionary of values for instruction interpolation. The variables will also be used for judge evaluation. For the judge only, the variables message_history and response_to_evaluate are reserved and will be ignored.
Optional defaultAiProvider: "openai" | "langchain" | "vercel"
Optional default AI provider to use.
A promise that resolves to the TrackedChat instance, or null if the configuration is disabled.
const key = "customer_support_chat";
const context = {...};
const defaultValue = {
  enabled: true,
  model: { name: "gpt-4" },
  provider: { name: "openai" },
  messages: [
    { role: "system", content: "You are a helpful customer support agent." }
  ]
};
const variables = { customerName: 'John' };
const chat = await client.createChat(key, context, defaultValue, variables);
if (chat) {
  const response = await chat.invoke("I need help with my order");
  console.log(response.message.content);
}
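Because message_history and response_to_evaluate are reserved for the judge, a caller can strip them before passing variables to make that rule explicit. A minimal sketch; stripReservedVariables is a hypothetical helper, not an SDK function:

```typescript
// Names the docs list as reserved for judge evaluation; any values supplied
// under them are ignored, so removing them client-side makes intent clear.
const RESERVED = new Set(['message_history', 'response_to_evaluate']);

function stripReservedVariables(
  vars: Record<string, unknown>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(vars).filter(([key]) => !RESERVED.has(key)),
  );
}

const safe = stripReservedVariables({
  customerName: 'John',
  response_to_evaluate: 'ignored by the judge anyway',
});
console.log(Object.keys(safe));
// → ['customerName']
```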
Reconstructs an LDGraphTracker from a resumption token, preserving
the original runId so events from a resumed session are correlated correctly.
Security note: The token encodes the flag variation key and version. Keep it server-side; do not expose it to untrusted clients.
URL-safe Base64-encoded token from LDGraphTracker.resumptionToken.
LDContext to associate with the reconstructed tracker.
Creates and returns a new Judge instance for AI evaluation.
The key identifying the AI judge configuration to use.
The standard LDContext used when evaluating flags.
Optional defaultValue: LDAIJudgeConfigDefault
Optional fallback when the configuration is not available from LaunchDarkly. When omitted or null, a disabled default is used.
Optional variables: Record<string, unknown>
Dictionary of values for instruction interpolation. The variables message_history and response_to_evaluate are reserved for the judge and will be ignored.
Optional defaultAiProvider: "openai" | "langchain" | "vercel"
Optional default AI provider to use.
A promise that resolves to a Judge instance, or undefined if the configuration is disabled or unsupported.
const judge = await client.createJudge(
  "relevance-judge",
  context,
  {
    enabled: true,
    model: { name: "gpt-4" },
    provider: { name: "openai" },
    evaluationMetricKey: '$ld:ai:judge:relevance',
    messages: [{ role: 'system', content: 'You are a relevance judge.' }]
  },
  { metric: "relevance" }
);
if (judge) {
  const result = await judge.evaluate("User question", "AI response");
  console.log('Relevance score:', result.evals.relevance?.score);
}
Reconstructs an AIConfigTracker from a resumption token string previously
obtained from a tracker's resumptionToken property. Use this to associate
deferred events (such as user feedback) with the original invocation's runId.
A URL-safe Base64-encoded resumption token string.
The evaluation context to use for subsequent track calls.
A reconstructed AIConfigTracker with the original runId preserved.
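The idea behind resumption tokens (a runId surviving an encode/decode round trip so deferred events correlate with the original invocation) can be sketched as follows. The payload shape and JSON encoding here are assumptions for illustration only; the SDK's real token format is opaque and should not be parsed by callers:

```typescript
// Illustrative round trip: a runId packed into a URL-safe Base64 token and
// recovered later. The TokenPayload shape is hypothetical, not the SDK's.
type TokenPayload = { runId: string; variationKey: string; version: number };

function encodeToken(payload: TokenPayload): string {
  // URL-safe Base64, as the docs describe for resumption tokens
  return Buffer.from(JSON.stringify(payload)).toString('base64url');
}

function decodeToken(token: string): TokenPayload {
  return JSON.parse(Buffer.from(token, 'base64url').toString('utf8'));
}

const token = encodeToken({ runId: 'run-123', variationKey: 'on', version: 7 });
const restored = decodeToken(token);
console.log(restored.runId);
// → 'run-123' (deferred events resume under the original runId)
```

This also illustrates the security note above: anyone holding the token can decode the flag variation key and version, so keep tokens server-side.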
Optional defaultValue: LDAICompletionConfigDefault
Optional variables: Record<string, unknown>
Optional defaultAiProvider: "openai" | "langchain" | "vercel"
Use createChat instead. This method will be removed in a future version.
Retrieves and processes a Judge AI Config based on the provided key, LaunchDarkly context, and variables. This includes the model configuration and the customized messages for evaluation.
The key of the Judge AI Config.
The LaunchDarkly context object that contains relevant information about the current environment, user, or session. This context may influence how the configuration is processed or personalized.
Optional defaultValue: LDAIJudgeConfigDefault
Optional fallback when the configuration is not available from LaunchDarkly. When omitted or null, a disabled default is used.
Optional variables: Record<string, unknown>
Optional variables for template interpolation in messages and instructions.
A promise that resolves to a tracked judge configuration.
const judgeConf = await client.judgeConfig(key, context, {
  enabled: true,
  model: { name: 'gpt-4' },
  provider: { name: 'openai' },
  evaluationMetricKey: '$ld:ai:judge:relevance',
  messages: [{ role: 'system', content: 'You are a relevance judge.' }]
}, variables);
const config = judgeConf.config; // Interpolated configuration
judgeConf.tracker.trackSuccess();
Generated using TypeDoc
Interface for performing AI operations using LaunchDarkly.