2 changes: 1 addition & 1 deletion sdk/ai/azure-ai-agents/CHANGELOG.md
@@ -217,7 +217,7 @@ and `sample_agents_browser_automation_async.py`.

### Breaking Changes
- enable_auto_function_calls supports positional arguments instead of keyword arguments.
- - Please see the [agents migration guide](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-projects/AGENTS_MIGRATION_GUIDE.md) on how to use `azure-ai-projects` with `azure-ai-agents` package.
+ - Please see the [agents migration guide](https://github.com/Azure/azure-sdk-for-python/blob/release/azure-ai-projects/1.0.0/sdk/ai/azure-ai-projects/AGENTS_MIGRATION_GUIDE.md) on how to use `azure-ai-projects` with `azure-ai-agents` package.

### Features Added
- Initial version - splits off Azure AI Agents functionality from the Azure AI Projects SDK.
12 changes: 6 additions & 6 deletions sdk/ai/azure-ai-projects/azure/ai/projects/models/_enums.py
@@ -656,14 +656,14 @@ class ServiceTier(str, Enum, metaclass=CaseInsensitiveEnumMeta):
"""Specifies the processing type used for serving the request.

* If set to 'auto', then the request will be processed with the service tier
  configured in the Project settings. Unless otherwise configured, the Project will use
  'default'.
* If set to 'default', then the request will be processed with the standard
  pricing and performance for the selected model.
* If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)'
  or 'priority', then the request will be processed with the corresponding service
  tier. [Contact sales](https://openai.com/contact-sales) to learn more about Priority
  processing.
* When not set, the default behavior is 'auto'.

When the ``service_tier`` parameter is set, the response body will include the ``service_tier``
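As context for the `ServiceTier` hunk above: it is a case-insensitive string enum. A minimal, self-contained sketch of how such an enum behaves — the metaclass here is a simplified stand-in for `azure.core.CaseInsensitiveEnumMeta` (the real one lives in `azure.core` and does more), and the member list is an assumption based on the docstring values:

```python
from enum import Enum, EnumMeta


class CaseInsensitiveEnumMeta(EnumMeta):
    # Simplified illustration of case-insensitive lookup, both by
    # member name (ServiceTier["flex"]) and by value (ServiceTier("AUTO")).
    def __getitem__(cls, name):
        return super().__getitem__(name.upper())

    def __call__(cls, value, *args, **kwargs):
        try:
            return super().__call__(value, *args, **kwargs)
        except ValueError:
            for member in cls:
                if member.value.lower() == str(value).lower():
                    return member
            raise


class ServiceTier(str, Enum, metaclass=CaseInsensitiveEnumMeta):
    AUTO = "auto"
    DEFAULT = "default"
    FLEX = "flex"
    PRIORITY = "priority"


print(ServiceTier("AUTO") is ServiceTier.AUTO)  # True
```

Because the enum mixes in `str`, members also compare equal to their lowercase string values (`ServiceTier.DEFAULT == "default"`).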
102 changes: 37 additions & 65 deletions sdk/ai/azure-ai-projects/azure/ai/projects/models/_models.py
@@ -10068,7 +10068,6 @@ class Response(_Model):
:ivar metadata: Set of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters. Required.
:vartype metadata: dict[str, str]
@@ -10081,7 +10080,6 @@
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or ``temperature`` but not both. Required.
:vartype top_p: float
:ivar user: A unique identifier representing your end-user, which can help OpenAI to monitor
@@ -10119,38 +10117,29 @@
and `Structured Outputs <https://platform.openai.com/docs/guides/structured-outputs>`_.
:vartype text: ~azure.ai.projects.models.ResponseText
:ivar tools: An array of tools the model may call while generating a response. You
- can specify which tool to use by setting the ``tool_choice`` parameter.
+ can specify which tool to use by setting the _tool_choice_ parameter.

> Copilot AI commented on Nov 12, 2025: The parameter reference _tool_choice_ should use backticks for consistency with the rest of the codebase. It should be `tool_choice` instead of _tool_choice_.
>
> Suggested change:
> - can specify which tool to use by setting the _tool_choice_ parameter.
> + can specify which tool to use by setting the `tool_choice` parameter.

The two categories of tools you can provide the model are:

- * **Built-in tools**: Tools that are provided by OpenAI that extend the
-   model's capabilities, like [web search](https://platform.openai.com/docs/guides/tools-web-search)
-   or [file search](https://platform.openai.com/docs/guides/tools-file-search). Learn more about
-   [built-in tools](https://platform.openai.com/docs/guides/tools).
- * **Function calls (custom tools)**: Functions that are defined by you,
-   enabling the model to call your own code. Learn more about
-   [function calling](https://platform.openai.com/docs/guides/function-calling).
+ * Built-in tools: Tools that are provided by OpenAI that extend the
+   model's capabilities, like web search or file search.
+ * Function calls (custom tools): Functions that are defined by you,
+   enabling the model to call your own code.
:vartype tools: list[~azure.ai.projects.models.Tool]
:ivar tool_choice: How the model should select which tool (or tools) to use when generating
- a response. See the ``tools`` parameter to see how to specify which tools
+ a response. See the tools parameter to see how to specify which tools

> Copilot AI commented on Nov 12, 2025: The parameter reference should use backticks for consistency with the rest of the codebase: `tools` instead of tools.
>
> Suggested change:
> - a response. See the tools parameter to see how to specify which tools
> + a response. See the `tools` parameter to see how to specify which tools
the model can call. Is either a Union[str, "_models.ToolChoiceOptions"] type or a
ToolChoiceObject type.
:vartype tool_choice: str or ~azure.ai.projects.models.ToolChoiceOptions or
~azure.ai.projects.models.ToolChoiceObject
:ivar prompt:
:vartype prompt: ~azure.ai.projects.models.Prompt
:ivar truncation: The truncation strategy to use for the model response.

* `auto`: If the context of this response and previous ones exceeds
  the model's context window size, the model will truncate the
  response to fit the context window by dropping input items in the
  middle of the conversation.
* `disabled` (default): If a model response will exceed the context window
  size for a model, the request will fail with a 400 error. Is either a Literal["auto"] type or a Literal["disabled"] type.
:vartype truncation: str or str
:ivar id: Unique identifier for this Response. Required.
:vartype id: str
@@ -10169,18 +10158,13 @@
:ivar incomplete_details: Details about why the response is incomplete. Required.
:vartype incomplete_details: ~azure.ai.projects.models.ResponseIncompleteDetails1
:ivar output: An array of content items generated by the model.

* The length and order of items in the `output` array is dependent on the model's response.
* Rather than accessing the first item in the `output` array and
  assuming it's an `assistant` message with the content generated by
  the model, you might consider using the `output_text` property where
  supported in SDKs. Required.
:vartype output: list[~azure.ai.projects.models.ItemResource]
:ivar instructions: A system (or developer) message inserted into the model's context.

When using along with ``previous_response_id``, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses. Required. Is either a str type or
@@ -10207,7 +10191,6 @@
"""Set of 16 key-value pairs that can be attached to an object. This can be
useful for storing additional information about the object in a structured
format, and querying for objects via API or the dashboard.

Keys are strings with a maximum length of 64 characters. Values are strings
with a maximum length of 512 characters. Required."""
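The metadata limits documented above (at most 16 pairs, 64-character keys, 512-character values) are easy to check client-side before sending a request. A hypothetical helper, not part of the SDK, that mirrors those documented limits:

```python
def validate_metadata(metadata: dict) -> None:
    # Hypothetical client-side check mirroring the documented limits:
    # up to 16 key-value pairs, keys <= 64 chars, values <= 512 chars.
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"metadata key {key!r} exceeds 64 characters")
        if len(value) > 512:
            raise ValueError(f"metadata value for key {key!r} exceeds 512 characters")


validate_metadata({"run": "nightly", "owner": "data-team"})  # passes silently
```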
temperature: float = rest_field(visibility=["read", "create", "update", "delete", "query"])
@@ -10219,7 +10202,6 @@
where the model considers the results of the tokens with top_p probability
mass. So 0.1 means only the tokens comprising the top 10% probability mass
are considered.

We generally recommend altering this or ``temperature`` but not both. Required."""
user: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
"""A unique identifier representing your end-user, which can help OpenAI to monitor and detect
@@ -10257,39 +10239,33 @@
and `Structured Outputs <https://platform.openai.com/docs/guides/structured-outputs>`_."""
tools: Optional[list["_models.Tool"]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
"""An array of tools the model may call while generating a response. You
- can specify which tool to use by setting the ``tool_choice`` parameter.
+ can specify which tool to use by setting the tool_choice parameter.

> Copilot AI commented on Nov 12, 2025: The parameter name should use backticks for consistency: `tool_choice` instead of tool_choice.
>
> Suggested change:
> - can specify which tool to use by setting the tool_choice parameter.
> + can specify which tool to use by setting the `tool_choice` parameter.

The two categories of tools you can provide the model are:

- * **Built-in tools**: Tools that are provided by OpenAI that extend the
-   model's capabilities, like [web search](https://platform.openai.com/docs/guides/tools-web-search)
-   or [file search](https://platform.openai.com/docs/guides/tools-file-search). Learn more about
-   [built-in tools](https://platform.openai.com/docs/guides/tools).
- * **Function calls (custom tools)**: Functions that are defined by you,
-   enabling the model to call your own code. Learn more about
-   [function calling](https://platform.openai.com/docs/guides/function-calling).
+ * Built-in tools: Tools that are provided by OpenAI that extend the
+   model's capabilities, like web search or file search. Learn more about
+   built-in tools at https://platform.openai.com/docs/guides/tools.
+ * Function calls (custom tools): Functions that are defined by you,
+   enabling the model to call your own code. Learn more about
+   function calling at https://platform.openai.com/docs/guides/function-calling."""
tool_choice: Optional[Union[str, "_models.ToolChoiceOptions", "_models.ToolChoiceObject"]] = rest_field(
visibility=["read", "create", "update", "delete", "query"]
)
"""How the model should select which tool (or tools) to use when generating
- a response. See the ``tools`` parameter to see how to specify which tools
+ a response. See the tools parameter to see how to specify which tools

> Copilot AI commented on Nov 12, 2025: The parameter reference should use backticks for consistency with the rest of the codebase: `tools` instead of tools.
>
> Suggested change:
> - a response. See the tools parameter to see how to specify which tools
> + a response. See the `tools` parameter to see how to specify which tools
the model can call. Is either a Union[str, \"_models.ToolChoiceOptions\"] type or a
ToolChoiceObject type."""
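The `tool_choice` union above accepts either a plain option string or an object naming a specific tool. A rough illustration of dispatching on that union, using a hypothetical stand-in dataclass (the option strings and the `type`/`name` fields here are assumptions for the sketch, not confirmed by this diff):

```python
from dataclasses import dataclass
from typing import Union


@dataclass
class ToolChoiceObject:
    # Hypothetical stand-in for the SDK model of the same name.
    type: str
    name: str


def describe_tool_choice(choice: Union[str, ToolChoiceObject]) -> str:
    # Dispatch on the documented union: option string vs. explicit tool object.
    if isinstance(choice, str):
        return f"option: {choice}"
    return f"forced tool: {choice.name} (type={choice.type})"


print(describe_tool_choice("auto"))
print(describe_tool_choice(ToolChoiceObject("function", "lookup_weather")))
```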
prompt: Optional["_models.Prompt"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
truncation: Optional[Literal["auto", "disabled"]] = rest_field(
visibility=["read", "create", "update", "delete", "query"]
)
"""The truncation strategy to use for the model response.

* `auto`: If the context of this response and previous ones exceeds
  the model's context window size, the model will truncate the
  response to fit the context window by dropping input items in the
  middle of the conversation.
* `disabled` (default): If a model response will exceed the context window
  size for a model, the request will fail with a 400 error. Is either a Literal[\"auto\"] type or
a Literal[\"disabled\"] type."""
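The truncation contract above is a two-value literal with `disabled` as the default when unset. A hypothetical client-side guard (not SDK code) that mirrors it:

```python
from typing import Literal, Optional, get_args

Truncation = Literal["auto", "disabled"]


def check_truncation(value: Optional[str]) -> Optional[str]:
    # None means "not set"; per the docstring the service then behaves
    # as 'disabled'. Any other string must be one of the two literals.
    if value is not None and value not in get_args(Truncation):
        raise ValueError(
            f"truncation must be one of {get_args(Truncation)}, got {value!r}"
        )
    return value
```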
id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
"""Unique identifier for this Response. Required."""
@@ -10315,20 +10291,16 @@
"""Details about why the response is incomplete. Required."""
output: list["_models.ItemResource"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
"""An array of content items generated by the model.

* The length and order of items in the `output` array is dependent
  on the model's response.
* Rather than accessing the first item in the `output` array and
  assuming it's an `assistant` message with the content generated by
  the model, you might consider using the `output_text` property where
  supported in SDKs. Required."""
instructions: Union[str, list["_models.ItemParam"]] = rest_field(
visibility=["read", "create", "update", "delete", "query"]
)
"""A system (or developer) message inserted into the model's context.

When using along with ``previous_response_id``, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses. Required. Is either a str type or
1 change: 0 additions & 1 deletion sdk/ai/azure-ai-projects/pyproject.toml
@@ -63,7 +63,6 @@ pytyped = ["py.typed"]

[tool.azure-sdk-build]
verifytypes = false
- sphinx = false

[tool.mypy]
exclude = [