The name of the LLM, e.g. gemini-2.5-flash or gemini-2.5-flash-001.
Protected logger
Provides the API client.
Gets the API backend type.
Gets the tracking headers.
Gets the live API version.
Gets the live API client.
Generates one content from the given contents and tools.
LlmRequest, the request to send to the LLM.
Optional stream: boolean = false, whether to make a streaming call.
A generator of LlmResponse.
For a non-streaming call, it yields exactly one LlmResponse.
For a streaming call, it may yield more than one response, but all yielded responses should be treated as one response by merging their parts lists.
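The merging rule above can be sketched as follows; Part and LlmResponse are minimal stand-in types for illustration, not the library's actual definitions:

```typescript
// Minimal stand-in types; the real LlmResponse shape from the library
// is richer than this.
interface Part {
  text: string;
}
interface LlmResponse {
  parts: Part[];
}

// Treat all responses yielded by one streaming call as a single
// response by concatenating their parts lists in yield order.
function mergeStreamedResponses(responses: LlmResponse[]): LlmResponse {
  return { parts: responses.flatMap((r) => r.parts) };
}
```

For example, merging two streamed chunks whose parts spell "Hel" and "lo" yields one response whose parts read "Hello" in order.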
Protected maybeAppendUserContent
Appends a user content, so that the model can continue to output.
LlmRequest, the request to send to the LLM.
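The "append a user content" step might look like this sketch; the Content shape and the filler text are assumptions for illustration, not the library's actual values:

```typescript
interface Content {
  role: "user" | "model";
  parts: { text: string }[];
}
interface LlmRequest {
  contents: Content[];
}

// Hypothetical sketch: if the conversation does not already end with a
// user turn, append one so the model has a turn to respond to and can
// continue producing output. The filler text below is an assumption.
function maybeAppendUserContent(llmRequest: LlmRequest): void {
  const last = llmRequest.contents[llmRequest.contents.length - 1];
  if (!last || last.role !== "user") {
    llmRequest.contents.push({
      role: "user",
      parts: [{ text: "Continue processing previous requests as instructed." }],
    });
  }
}
```

Note the guard: when the request already ends with a user turn, the function leaves it untouched, so calling it is safe on any request.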
Static supported…
Protected generate…
Main content generation method; handles both streaming and non-streaming calls.
Connects to the Gemini model and returns an LLM connection.
Integration for Gemini models.
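The streaming versus non-streaming contract of the generation method can be illustrated with a mock async generator; this is a sketch only, the real method calls the Gemini API:

```typescript
interface LlmResponse {
  parts: { text: string }[];
}

// Mock of the generation contract: a non-streaming call yields exactly
// one response, while a streaming call yields several partial responses
// that the caller merges into one.
async function* generateContentMock(
  stream: boolean = false,
): AsyncGenerator<LlmResponse> {
  if (!stream) {
    yield { parts: [{ text: "full answer" }] };
    return;
  }
  yield { parts: [{ text: "full " }] };
  yield { parts: [{ text: "answer" }] };
}

// Drain the generator, as a caller of the real method would.
async function collect(stream: boolean): Promise<LlmResponse[]> {
  const out: LlmResponse[] = [];
  for await (const response of generateContentMock(stream)) {
    out.push(response);
  }
  return out;
}
```

Either way, the caller consumes the generator with for await...of and treats whatever it yields as one logical response.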