Example


// Import paths below follow langchain v0.0.x entrypoints and are an assumption;
// newer releases may expose these exports from different packages.
import { ChatOpenAI } from "langchain/chat_models/openai";
import { ChatPromptTemplate, MessagesPlaceholder } from "langchain/prompts";
import { RunnableSequence } from "langchain/schema/runnable";
import { formatToOpenAIToolMessages } from "langchain/agents/format_scratchpad/openai_tools";
import { formatToOpenAITool } from "langchain/tools/convert_to_openai";
import {
  OpenAIToolsAgentOutputParser,
  type ToolsAgentStep,
} from "langchain/agents/openai/output_parser";

// `tools` is assumed to be an array of structured tools defined elsewhere.
const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const runnableAgent = RunnableSequence.from([
  {
    input: (i: { input: string; steps: ToolsAgentStep[] }) => i.input,
    agent_scratchpad: (i: { input: string; steps: ToolsAgentStep[] }) =>
      formatToOpenAIToolMessages(i.steps),
  },
  prompt,
  new ChatOpenAI({
    modelName: "gpt-3.5-turbo-1106",
    temperature: 0,
  }).bind({ tools: tools.map(formatToOpenAITool) }),
  new OpenAIToolsAgentOutputParser(),
]).withConfig({ runName: "OpenAIToolsAgent" });

const result = await runnableAgent.invoke({
  input:
    "What is the sum of the current temperature in San Francisco, New York, and Tokyo?",
  steps: [], // no intermediate steps on the first call
});
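
In practice the runnable agent above is usually wrapped in an AgentExecutor so that the tool calls it selects are actually executed in a loop. A minimal, hedged sketch (assuming the same tools array and a langchain version whose AgentExecutor accepts a runnable agent):

import { AgentExecutor } from "langchain/agents";

const executor = AgentExecutor.fromAgentAndTools({
  agent: runnableAgent,
  tools,
});

const answer = await executor.invoke({
  input: "What is the current temperature in San Francisco?",
});
console.log(answer.output);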

Hierarchy

  • AgentMultiActionOutputParser
    • OpenAIToolsAgentOutputParser

Constructors

  • new OpenAIToolsAgentOutputParser(kwargs?, ..._args)

    Parameters

    • Optional kwargs: SerializedFields
    • Rest ..._args: never[]

    Returns OpenAIToolsAgentOutputParser

Methods

  • batch: Default implementation of batch, which calls invoke N times. Subclasses should override this method if they can batch more efficiently.

    Parameters

    Returns Promise<(AgentFinish | AgentAction[])[]>

    An array of RunOutputs, or mixed RunOutputs and errors if batchOptions.returnExceptions is set

  • batch (overload)

    Parameters

    Returns Promise<(Error | AgentFinish | AgentAction[])[]>

  • batch (overload)

    Parameters

    Returns Promise<(Error | AgentFinish | AgentAction[])[]>
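
    Example for batch (a hedged sketch, not taken from the library docs): parsing several model messages in one call, where messageA and messageB are hypothetical AI messages produced by a tools-capable chat model.

    const parser = new OpenAIToolsAgentOutputParser();
    const parsed = await parser.batch([messageA, messageB]);
    // With returnExceptions set, failures come back as Error values instead of throwing:
    const parsedOrErrors = await parser.batch([messageA, messageB], undefined, {
      returnExceptions: true,
    });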

  • getFormatInstructions

    Returns string

  • invoke: Calls the parser with a given input and optional configuration options. If the input is a string, it creates a generation with the input as text and calls parseResult. If the input is a BaseMessage, it creates a generation with the input as a message and the content of the input as text, and then calls parseResult.

    Parameters

    • input: string | BaseMessage

      The input to the parser, which can be a string or a BaseMessage.

    • Optional options: BaseCallbackConfig

      Optional configuration options.

    Returns Promise<AgentFinish | AgentAction[]>

    A promise of the parsed output.
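
    Example for invoke (a hedged sketch, not taken from the library docs; the AIMessage import path is an assumption, and the message shape follows the OpenAI tools format):

    import { AIMessage } from "langchain/schema";

    const parser = new OpenAIToolsAgentOutputParser();
    const actions = await parser.invoke(
      new AIMessage({
        content: "",
        additional_kwargs: {
          tool_calls: [
            {
              id: "call_abc123",
              type: "function",
              function: {
                name: "get_current_weather",
                arguments: '{"location":"San Francisco"}',
              },
            },
          ],
        },
      })
    );
    // `actions` is an AgentAction[] describing the requested tool calls,
    // or an AgentFinish when the message contains no tool calls.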

  • parse

    Parameters

    • text: string

    Returns Promise<AgentFinish | AgentAction[]>

  • parseAIMessage: Parses the output message into a ToolsAgentAction[] or AgentFinish object.

    Parameters

    Returns AgentFinish | ToolsAgentAction[]

    A ToolsAgentAction[] or AgentFinish object.
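
    Example for parseAIMessage (a hedged sketch, reusing the AIMessage import from the invoke sketch above): a message with no tool calls is parsed into an AgentFinish whose output is the message content.

    const parser = new OpenAIToolsAgentOutputParser();
    const finish = parser.parseAIMessage(
      new AIMessage({ content: "The sum is 61 degrees Fahrenheit." })
    );
    // `finish` is roughly { returnValues: { output: "The sum is ..." }, log: "..." }.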

  • parseResultWithPrompt: Parses the result of an LLM call with a given prompt. By default, it simply calls parseResult.

    Parameters

    Returns Promise<AgentFinish | AgentAction[]>

    A promise of the parsed output.

  • pipe: Creates a new runnable sequence that runs each individual runnable in series, piping the output of one runnable into another runnable or runnable-like.

    Type Parameters

    • NewRunOutput

    Parameters

    Returns RunnableSequence<string | BaseMessage, Exclude<NewRunOutput, Error>>

    A new runnable sequence.
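
    Example for pipe (a hedged sketch): piping the parser into a plain function that counts how many tool calls the model requested; pipe also accepts runnable-like values such as functions.

    const parserThenCount = new OpenAIToolsAgentOutputParser().pipe((parsed) =>
      Array.isArray(parsed) ? parsed.length : 0
    );
    // `parserThenCount.invoke(message)` now resolves to a number.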

  • stream: Streams output in chunks.

    Parameters

    Returns Promise<IterableReadableStream<AgentFinish | AgentAction[]>>

    A readable stream that is also an iterable.
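
    Example for stream (a hedged sketch, for a hypothetical aiMessage input):

    const stream = await new OpenAIToolsAgentOutputParser().stream(aiMessage);
    for await (const chunk of stream) {
      // Each chunk is an AgentFinish or an AgentAction[].
      console.log(chunk);
    }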

  • streamLog: Streams all output from a runnable, as reported to the callback system. This includes all inner runs of LLMs, Retrievers, Tools, etc. Output is streamed as Log objects, which include a list of jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run. The jsonpatch ops can be applied in order to construct state.

    Parameters

    • input: string | BaseMessage
    • Optional options: Partial<BaseCallbackConfig>
    • Optional streamOptions: Omit<LogStreamCallbackHandlerInput, "autoClose">

    Returns AsyncGenerator<RunLogPatch, any, unknown>
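
    Example for streamLog (a hedged sketch, for a hypothetical aiMessage input): collecting the jsonpatch ops emitted while the parser runs.

    const logStream = new OpenAIToolsAgentOutputParser().streamLog(aiMessage);
    for await (const patch of logStream) {
      // Each patch is a RunLogPatch; its ops describe how the run state changed.
      console.log(patch.ops);
    }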

  • toJSON

    Returns Serialized

  • transform: Default implementation of transform, which buffers input and then calls stream. Subclasses should override this method if they can start producing output while input is still being generated.

    Parameters

    Returns AsyncGenerator<AgentFinish | AgentAction[], any, unknown>

  • isRunnable (static)

    Parameters

    • thing: any

    Returns thing is Runnable<any, any, BaseCallbackConfig>

Generated using TypeDoc