30 changes: 30 additions & 0 deletions modules/ROOT/pages/af-agent-networks.adoc
@@ -60,6 +60,36 @@ These servers will be published to Exchange.
The asset might also be published to Exchange.
. An asset that has been added to this agent network project from Exchange.

== Large Language Models

Agent network brokers support LLMs from these providers.

* Azure OpenAI
* OpenAI API
* Gemini API

The following table details the required endpoint, required capabilities, and suggested models for each provider.

|===
|*Model Provider* |*Required Endpoint* |*Required Capabilities* |*Suggested Models*

|OpenAI |`/responses` a|
* Reasoning
* Native structured output
* Function and custom tool calling
a|
* For lower latency: GPT-5-mini
* For complex reasoning: GPT-5.1; evaluate models to find the right balance for your needs

|Gemini |`/generateContent` (Native API) a|
* Native Thinking (via thinkingConfig)
* Native structured output (responseSchema)
* Function and custom tool calling
a|
* For lower latency: Gemini 2.5 Flash, Gemini 2.5 Flash-Lite
* For complex reasoning: Gemini 3 Pro (Deep Think capabilities)
|===
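An LLM from this table might be declared in the `llmProviders` section of `agent-network.yaml` along these lines. This is an illustrative sketch only: the `name`, `provider`, and `model` keys are hypothetical placeholders, not a confirmed schema.

[source,yaml]
----
# Hypothetical sketch: key names are illustrative, not a confirmed schema.
llmProviders:
  - name: fast-llm          # for brokers that need lower latency
    provider: openAI
    model: gpt-5-mini
  - name: reasoning-llm     # for brokers that perform complex reasoning
    provider: gemini
    model: gemini-3-pro
----

Brokers can then reference these entries by name, sharing one LLM across the network or assigning different LLMs per broker.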

== Considerations

Agent networks have these considerations.
21 changes: 12 additions & 9 deletions modules/ROOT/pages/af-project-files.adoc
@@ -231,14 +231,7 @@ The `spec` element has these properties.

The value of this section is a reference to one of the LLMs defined in Anypoint Exchange or in the `llmProviders` section of `agent-network.yaml`. Because it's a reference, you can choose to share the same LLM across all the brokers in your agent network. Or, you can have different brokers use different LLMs to better suit their tasks.

Agent network brokers support OpenAI models. The model you use must support:

* The `/responses` endpoint
* Reasoning
* Native structured output
* Function and Custom tool calling

GPT-5-mini and GPT-5.1 meet these requirements. Evaluate the different models to find the right balance for your needs. For lower latency, consider smaller models like GPT-5-mini.
For more information on supported LLMs, see xref:af-agent-networks.adoc#large-language-models[].

The `llm` element has these properties.

@@ -264,7 +257,17 @@ a|
|`configuration.openAI.topP` |Nucleus sampling parameter. Requires GPT-5.1 and `reasoningEffort` set to `NONE`. |Number |Any number |No
|`configuration.openAI.topLogprobs` |Number of most likely tokens to return at each position. Requires GPT-5.1 and `reasoningEffort` set to `NONE`. |Integer |Any integer |No
|`configuration.openAI.maxOutputTokens` |Maximum number of tokens to generate. |Integer |Any integer |No
|`configuration.gemini` |Gemini-specific configuration settings. |Object |Object with Gemini settings |No
|`configuration.gemini.thinkingConfig.thinkingBudget` |Sets a token budget for the reasoning phase. (Applies only to Gemini 2.5 series.) |Integer a|
* `0` (Disabled)
* `1024` to `32768` |No
|`configuration.gemini.thinkingConfig.thinkingLevel` |Controls the depth of the reasoning process. (Applies only to Gemini 3 series.) |String a|
* `High` (Default)
* `Low` |No
|`configuration.gemini.temperature` |Controls randomness. |Number |Any number. For Gemini 2.5 and Gemini 3, Google recommends keeping the default of `1.0` to avoid breaking the reasoning chain. |No
|`configuration.gemini.topP` |Nucleus sampling parameter. |Number |Any number |No
|`configuration.gemini.responseLogprobs` |Whether to return log probabilities. |Boolean |`true` or `false` |No
|===
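Putting these properties together, a broker's Gemini configuration might look like the following sketch. Only the `configuration.gemini.*` property paths come from the table above; the `ref` key and its value are hypothetical placeholders for the LLM reference described earlier.

[source,yaml]
----
# Hypothetical sketch: configuration.gemini.* paths are documented above;
# the "ref" key and its value are illustrative placeholders.
llm:
  ref: reasoning-llm              # reference to an LLM from llmProviders or Exchange
  configuration:
    gemini:
      thinkingConfig:
        thinkingLevel: High       # Gemini 3 series: High (default) or Low
      temperature: 1.0            # recommended default; avoids breaking the reasoning chain
      topP: 0.95                  # nucleus sampling parameter
----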

[[instructions-section]]
==== Instructions Section