From e7d182c1d4f42d4a35be8b87d83de3b04237813a Mon Sep 17 00:00:00 2001
From: Isaac Eldridge
Date: Mon, 5 Jan 2026 10:17:22 -0800
Subject: [PATCH] Add support for additional large language models in agent networks documentation

Updated af-agent-networks.adoc to document support for Azure OpenAI, the
OpenAI API, and the Gemini API, and added a table detailing the requirements
and suggested models for each provider. Revised af-project-files.adoc to
reference the new LLM section for clarity on model usage and configuration
settings.
---
 modules/ROOT/pages/af-agent-networks.adoc | 30 +++++++++++++++++++++++
 modules/ROOT/pages/af-project-files.adoc  | 21 +++++++++-------
 2 files changed, 42 insertions(+), 9 deletions(-)

diff --git a/modules/ROOT/pages/af-agent-networks.adoc b/modules/ROOT/pages/af-agent-networks.adoc
index 1a2c12506..7fb11575c 100644
--- a/modules/ROOT/pages/af-agent-networks.adoc
+++ b/modules/ROOT/pages/af-agent-networks.adoc
@@ -60,6 +60,36 @@ These servers will be published to Exchange.
 The asset might also be published to Exchange.
 . An asset that has been added to this agent network project from Exchange.
 
+== Large Language Models
+
+Agent network brokers support these LLM providers:
+
+* Azure OpenAI
+* OpenAI API
+* Gemini API
+
+The following table details the requirements and suggested models for each provider. Evaluate the different models to find the right balance of latency and reasoning for your needs.
+
+|===
+|*Model Provider* |*Required Endpoint* |*Required Capabilities* |*Suggested Models*
+
+|OpenAI |`/responses` a|
+* Reasoning
+* Native structured output
+* Function and custom tool calling
+a|
+* For lower latency: GPT-5-mini
+* For complex reasoning: GPT-5.1
+
+|Gemini |`/generateContent` (Native API) a|
+* Native thinking (via `thinkingConfig`)
+* Native structured output (`responseSchema`)
+* Function and custom tool calling
+a|
+* For lower latency: Gemini 2.5 Flash, Gemini 2.5 Flash-Lite
+* For complex reasoning: Gemini 3 Pro (Deep Think capabilities)
+|===
+
 == Considerations
 
 Agent networks have these considerations.
diff --git a/modules/ROOT/pages/af-project-files.adoc b/modules/ROOT/pages/af-project-files.adoc
index c1b97a017..6027d70d2 100644
--- a/modules/ROOT/pages/af-project-files.adoc
+++ b/modules/ROOT/pages/af-project-files.adoc
@@ -231,14 +231,7 @@ The `spec` element has these properties.
 
 The value of this section is a reference to one of the LLMs defined in Anypoint Exchange or in the `llmProviders` section of `agent-network.yaml`. Because it's a reference, you can choose to share the same LLM across all the brokers in your agent network. Or, you can have different brokers use different LLMs to better suit their tasks.
 
-Agent network brokers support OpenAI models. The model you use must support:
-
-* The `/responses` endpoint
-* Reasoning
-* Native structured output
-* Function and Custom tool calling
-
-GPT-5-mini and GPT-5.1 meet these requirements. GPT-5-mini and GPT-5.1 meet these requirements. Evaluate the different models to find the right balance for your needs. For lower latency, consider smaller models like GPT-5-mini.
+For more information on supported LLMs, see xref:af-agent-networks.adoc#large-language-models[].
 
 The `llm` element has these properties.
 
@@ -264,7 +257,17 @@ a|
 |`configuration.openAI.topP` |Nucleus sampling parameter. Requires GPT-5.1 and `reasoningEffort` set to `NONE`. |Number |Any number |No
 |`configuration.openAI.topLogprobs` |Number of most likely tokens to return at each position. Requires GPT-5.1 and `reasoningEffort` set to `NONE`. |Integer |Any integer |No
 |`configuration.openAI.maxOutputTokens` |Maximum number of tokens to generate. |Integer |Any integer |No
-|===
+|`configuration.gemini` |Gemini specific configuration settings. |Object |Object with Gemini settings |No
+|`configuration.gemini.thinkingConfig.thinkingBudget` |Sets a token budget for the reasoning phase. Applies only to the Gemini 2.5 series. |Integer a|
+* `0` (disabled)
+* `1024` to `32768` |No
+|`configuration.gemini.thinkingConfig.thinkingLevel` |Controls the depth of the reasoning process. Applies only to the Gemini 3 series. |String a|
+* `High` (default)
+* `Low` |No
+|`configuration.gemini.temperature` |Controls randomness. For Gemini 2.5 and Gemini 3, Google recommends keeping this at the default of `1.0` to avoid breaking the reasoning chain. |Number |Any number |No
+|`configuration.gemini.topP` |Nucleus sampling parameter. |Number |Any number |No
+|`configuration.gemini.responseLogprobs` |Whether to return log probabilities. |Boolean |`true` or `false` |No
+|===
 
 [[instructions-section]]
 ==== Instructions Section
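
As a point of reference for the table added above, here is a minimal sketch of how the new Gemini settings might appear in a broker's `llm` section of `agent-network.yaml`. It assumes the dotted property names in the table map directly onto nested YAML keys and that the LLM reference key is named `llmRef` with a placeholder value; those names are illustrative only and are not defined by this patch.

[source,yaml]
----
# Illustrative sketch only: `llmRef` and the value `gemini-llm` are assumed
# names, not confirmed by this patch.
llm:
  llmRef: gemini-llm            # reference to an LLM defined in Exchange or in `llmProviders`
  configuration:
    gemini:
      thinkingConfig:
        thinkingLevel: High     # Gemini 3 series only; `High` (default) or `Low`
      temperature: 1.0          # Google recommends the default of 1.0 for Gemini 2.5 and 3
      topP: 0.95                # nucleus sampling parameter
      responseLogprobs: false   # whether to return log probabilities
----

For Gemini 2.5 models, the table indicates that `thinkingConfig.thinkingBudget` (an integer, `0` or `1024` to `32768`) would be used in place of `thinkingLevel`.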