Protected logger

The name of the LLM, e.g. gemini-2.5-flash or gemini-2.5-flash-001.
Static supported

Protected generate

Implementation method to be overridden by subclasses. This replaces the abstract generateContentAsync method.
Generates one content from the given contents and tools.
LlmRequest, the request to send to the LLM.
Optional stream: boolean = false, whether to make a streaming call.
Returns a generator of LlmResponse.
For a non-streaming call, it yields exactly one LlmResponse.
For a streaming call, it may yield more than one response; all yielded responses should be treated as a single response by merging their parts lists.
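The merge rule above can be sketched as follows. This is a hypothetical, self-contained example: the `Part`/`LlmResponse` shapes, the `fakeStream` generator, and the `collect` helper are stand-ins modeled on the doc, not the real ADK types.

```typescript
// Hypothetical minimal shapes modeled on this doc; the real ADK types differ.
interface Part { text: string }
interface LlmResponse { content: { parts: Part[] } }

// Stand-in for a streaming generateContentAsync call, which may yield
// more than one partial response.
async function* fakeStream(): AsyncGenerator<LlmResponse> {
  yield { content: { parts: [{ text: "Hello, " }] } };
  yield { content: { parts: [{ text: "world!" }] } };
}

// Treat all yielded responses as one response by merging their parts lists.
async function collect(gen: AsyncGenerator<LlmResponse>): Promise<LlmResponse> {
  const parts: Part[] = [];
  for await (const res of gen) {
    parts.push(...res.content.parts);
  }
  return { content: { parts } };
}

collect(fakeStream()).then((merged) =>
  console.log(merged.content.parts.map((p) => p.text).join("")),
);
```

A non-streaming call would simply yield once, so the same consumer code works for both modes.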
Protected maybe

Appends a user content, so that the model can continue to output.
LlmRequest, the request to send to the LLM.
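The idea behind conditionally appending a user content can be sketched like this. Everything here is an assumption for illustration: the `Content`/`LlmRequest` shapes, the function name `maybeAppendUserContent`, and the placeholder message text are hypothetical, not the real ADK implementation.

```typescript
// Hypothetical shapes for illustration only; the real ADK types differ.
interface Content { role: "user" | "model"; parts: { text: string }[] }
interface LlmRequest { contents: Content[] }

function maybeAppendUserContent(req: LlmRequest): void {
  const last = req.contents[req.contents.length - 1];
  // If the history is empty or ends with a model turn, append a user turn
  // so the model has something to respond to and can continue its output.
  if (!last || last.role !== "user") {
    req.contents.push({
      role: "user",
      parts: [{ text: "Continue processing." }], // assumed placeholder text
    });
  }
}
```

When the request already ends with a user content, nothing is appended, which is why the operation is a "maybe".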
Creates a live connection to the LLM.
BaseLLMConnection, the connection to the LLM.
AI SDK integration that accepts a pre-configured LanguageModel. Enables ADK to work with any provider supported by Vercel's AI SDK.
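The dependency-injection shape this describes, accepting a pre-configured model object and delegating to it, can be sketched as below. This is a minimal stand-in under stated assumptions: the class name `AiSdkLlm`, the `LanguageModelLike` interface, and the `doGenerate` signature are simplified placeholders, not the real ADK or Vercel AI SDK APIs.

```typescript
// Hypothetical stand-in for the AI SDK's LanguageModel interface.
interface LanguageModelLike {
  doGenerate(prompt: string): Promise<{ text: string }>;
}

// Hypothetical wrapper: ADK-side class that accepts any pre-configured model.
class AiSdkLlm {
  constructor(private model: LanguageModelLike) {}

  // Delegate generation to whichever provider backs the injected model.
  async generate(prompt: string): Promise<string> {
    const { text } = await this.model.doGenerate(prompt);
    return text;
  }
}

// Any provider works by satisfying the interface; here, a trivial echo model.
const echoModel: LanguageModelLike = {
  doGenerate: async (prompt) => ({ text: `echo: ${prompt}` }),
};
const llm = new AiSdkLlm(echoModel);
llm.generate("hi").then(console.log); // prints "echo: hi"
```

Because the wrapper only depends on the model interface, swapping providers (OpenAI, Anthropic, etc.) requires no changes on the ADK side.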