model: The name of the LLM, e.g. gemini-2.5-flash or gemini-2.5-flash-001.
logger (protected)

generateContentAsync(llmRequest, stream?): Generates one content from the given contents and tools.
Parameters:
- llmRequest: LlmRequest, the request to send to the LLM.
- stream (optional): boolean = false, whether to make a streaming call.
Returns: a generator of LlmResponse.
For a non-streaming call, it yields exactly one LlmResponse.
For a streaming call, it may yield more than one response; all yielded responses should be treated as a single response by merging their parts lists.
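A minimal consumption sketch follows. The method name generateContentAsync, the import path, and the assumption that each LlmResponse exposes a content.parts list (as in the Google GenAI types) are illustrative, not confirmed API.

```typescript
// Sketch only: OpenAiLlm, LlmRequest, and the import path are assumptions.
import { OpenAiLlm, LlmRequest } from "adk-typescript";

// Non-streaming: the generator yields exactly one LlmResponse.
async function runOnce(llm: OpenAiLlm, request: LlmRequest): Promise<void> {
  for await (const response of llm.generateContentAsync(request, false)) {
    console.log(JSON.stringify(response));
  }
}

// Streaming: merge the parts of every yielded response into one list,
// treating the whole stream as a single logical response.
async function runStreaming(llm: OpenAiLlm, request: LlmRequest): Promise<void> {
  const mergedParts: unknown[] = [];
  for await (const response of llm.generateContentAsync(request, true)) {
    mergedParts.push(...(response.content?.parts ?? []));
  }
  console.log(`${mergedParts.length} parts merged`);
}
```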
maybeAppendUserContent (protected): Appends a user content, so that the model can continue to output.
- llmRequest: LlmRequest, the request to send to the LLM.
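A sketch of the typical logic behind such a helper, modeled on the Python ADK's _maybe_append_user_content; the role/parts field names and the message text are assumptions.

```typescript
// Hypothetical shape of a request's contents list.
interface Content {
  role: string;
  parts: { text?: string }[];
}

function maybeAppendUserContent(request: { contents: Content[] }): void {
  const last = request.contents[request.contents.length - 1];
  // If the conversation does not end with a user turn, the model has nothing
  // to respond to, so append a neutral user message prompting it to continue.
  if (!last || last.role !== "user") {
    request.contents.push({
      role: "user",
      parts: [{ text: "Continue processing previous requests as instructed." }],
    });
  }
}
```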
supportedModels (static)

generate (protected): Main content generation method; handles both streaming and non-streaming calls.
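For illustration, this is how such a method typically branches on the stream flag using the official openai Node SDK. The wrapping of results into LlmResponse objects is elided, and the function shape here is an assumption, not the class's actual signature.

```typescript
import OpenAI from "openai";
import type { ChatCompletionMessageParam } from "openai/resources/chat/completions";

// Sketch: yields plain text; a real implementation would yield LlmResponse.
async function* generate(
  client: OpenAI,
  model: string,
  messages: ChatCompletionMessageParam[],
  stream: boolean,
): AsyncGenerator<string> {
  if (stream) {
    // Streaming: yield each text delta as it arrives.
    const events = await client.chat.completions.create({ model, messages, stream: true });
    for await (const chunk of events) {
      const delta = chunk.choices[0]?.delta?.content;
      if (delta) yield delta;
    }
  } else {
    // Non-streaming: yield the single complete message.
    const completion = await client.chat.completions.create({ model, messages });
    yield completion.choices[0]?.message?.content ?? "";
  }
}
```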
connect: Live connection is not supported for OpenAI models.
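A sketch of how an unsupported capability is typically surfaced; the method name connect and the error wording are assumptions.

```typescript
class OpenAiLlmSketch {
  // Live (bidirectional) connections are a Gemini feature; an OpenAI-backed
  // implementation would reject the call outright.
  connect(): never {
    throw new Error("Live connection is not supported for OpenAI models.");
  }
}
```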
Class summary: OpenAI LLM implementation using GPT models, enhanced with comprehensive debug logging similar to the Google LLM implementation.