Constructor for InvocationContext
Most constructor parameters are optional: the artifact and memory services, the invocation id, branch?: string, the user content, the end-invocation flag, the live request queue, the active streaming tools, the transcription cache, the run config, and the transfer context. The constructed context exposes the properties listed below.
artifactService (readonly), sessionService (readonly), memoryService (optional, readonly): The artifact, session, and memory services for this invocation.
pluginManager (readonly): The plugin manager for this invocation context.
invocationId (readonly): The id of this invocation context.
branch (optional, readonly): The branch of the invocation context.
The format is like agent_1.agent_2.agent_3, where agent_1 is the parent of agent_2, and agent_2 is the parent of agent_3.
Branch is used when multiple sub-agents shouldn't see their peer agents' conversation history.
agent (readonly): The current agent of this invocation context.
userContent (optional, readonly): The user content that started this invocation.
session (readonly): The current session of this invocation context.
endInvocation: Whether to end this invocation. Set it to true in a callback or tool to terminate the invocation (see the sketch after this list).
liveRequestQueue (optional): The queue to receive live requests.
activeStreamingTools (optional): The running streaming tools of this invocation.
transcriptionCache (optional): Caches necessary data, audio, or contents that are needed by transcription.
runConfig (optional): Configurations for live agents under this invocation.
transfer (optional): Transfer context for multi-agent workflows; tracks the agent transfer chain for telemetry.
appName: App name from the session.
userId: User ID from the session.
Creates a child invocation context for a sub-agent.
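To make the end-invocation flag concrete, here is a minimal TypeScript sketch of a tool that terminates the current invocation by setting it on the context. The reduced InvocationContextLike interface and the (context, args) tool signature are assumptions for illustration only, not the library's actual API; only the invocation id, branch, and end-invocation flag described above are modeled.

```typescript
// Reduced stand-in for the real context, limited to the fields discussed above.
interface InvocationContextLike {
  readonly invocationId: string;
  readonly branch?: string; // e.g. "agent_1.agent_2.agent_3"
  endInvocation: boolean;   // set to true to terminate the invocation
}

// A tool that ends the whole invocation once a terminal condition is met.
// The (context, args) signature is assumed for illustration.
function exitTool(context: InvocationContextLike, args: { reason: string }): string {
  console.log(`[${context.invocationId}] ending invocation: ${args.reason}`);
  context.endInvocation = true; // callbacks and tools flip this flag to stop the run
  return `Invocation ended: ${args.reason}`;
}

// Usage sketch: at runtime the framework would pass the real context.
const ctx: InvocationContextLike = {
  invocationId: "inv-123",
  branch: "root_agent.billing_agent",
  endInvocation: false,
};
exitTool(ctx, { reason: "user asked to stop" });
console.log(ctx.endInvocation); // true
```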
An invocation context represents the data of a single invocation of an agent.
An invocation:
An invocation runs an agent until that agent no longer requests to transfer to another agent. Each run of an agent within the invocation is an agent call, so an invocation can contain one or multiple agent calls.
An agent call:
An agent call is the run of a single agent. An LLM agent call is an agent call driven by a BaseLLMFlow; it can contain one or multiple steps.
An LLM agent runs steps in a loop until it produces a final response, it transfers to another agent, or end_invocation is set to true.
A step:
A step calls the LLM once and then calls any tools the LLM requested. The summarization of the function response is considered another step, since it is another LLM call. A step ends when it is done calling the LLM and tools, or if end_invocation is set to true at any time.
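The control flow above can be restated as code. The sketch below is not the library's implementation; it only illustrates the loop under assumed helper names (callLlm, callTool) and response shapes, with a toy model that answers after one round of tool calls.

```typescript
// Assumed response shape: either a final answer, a transfer request, or tool calls.
interface LlmResponse {
  finalText?: string;
  toolCalls?: { name: string; args: unknown }[];
  transferTo?: string; // set when the agent asks to hand off to another agent
}

interface StepContext {
  endInvocation: boolean; // mirrors the end-invocation flag described above
}

// Toy stand-ins for the real LLM and tool plumbing (assumptions for illustration):
// the fake LLM requests one tool call, then produces a final answer.
async function callLlm(history: unknown[]): Promise<LlmResponse> {
  return history.length === 0
    ? { toolCalls: [{ name: "lookup", args: { q: "example" } }] }
    : { finalText: "done" };
}
async function callTool(name: string, args: unknown): Promise<unknown> {
  return { name, args, ok: true };
}

// One LLM agent call: run steps in a loop until a final response is produced,
// the agent transfers to another agent, or end_invocation is set to true.
async function runLlmAgentCall(
  ctx: StepContext,
  history: unknown[],
): Promise<string | undefined> {
  while (!ctx.endInvocation) {
    // A step: one LLM call, plus any tool calls it requests.
    const response = await callLlm(history);
    history.push(response);

    if (response.transferTo !== undefined) {
      return undefined; // the invocation continues with another agent call
    }
    if (!response.toolCalls || response.toolCalls.length === 0) {
      return response.finalText; // final response ends this agent call
    }
    for (const call of response.toolCalls) {
      history.push({ tool: call.name, result: await callTool(call.name, call.args) });
      if (ctx.endInvocation) break; // a tool may terminate the invocation at any time
    }
    // Summarizing the tool responses happens on the next loop iteration,
    // which is another LLM call and therefore another step.
  }
  return undefined;
}

runLlmAgentCall({ endInvocation: false }, []).then(console.log); // prints "done"
```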