diff --git a/sdk/ai/azure-ai-agents/CHANGELOG.md b/sdk/ai/azure-ai-agents/CHANGELOG.md
index 372da3655cde..5ead429e5eb4 100644
--- a/sdk/ai/azure-ai-agents/CHANGELOG.md
+++ b/sdk/ai/azure-ai-agents/CHANGELOG.md
@@ -217,7 +217,7 @@ and `sample_agents_browser_automation_async.py`.
 ### Breaking Changes
 
 - enable_auto_function_calls supports positional arguments instead of keyword arguments.
-- Please see the [agents migration guide](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/ai/azure-ai-projects/AGENTS_MIGRATION_GUIDE.md) on how to use `azure-ai-projects` with `azure-ai-agents` package.
+- Please see the [agents migration guide](https://github.com/Azure/azure-sdk-for-python/blob/release/azure-ai-projects/1.0.0/sdk/ai/azure-ai-projects/AGENTS_MIGRATION_GUIDE.md) on how to use `azure-ai-projects` with the `azure-ai-agents` package.
 
 ### Features Added
 - Initial version - splits off Azure AI Agents functionality from the Azure AI Projects SDK.
diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/models/_enums.py b/sdk/ai/azure-ai-projects/azure/ai/projects/models/_enums.py
index 4fb25d632829..499771123b1b 100644
--- a/sdk/ai/azure-ai-projects/azure/ai/projects/models/_enums.py
+++ b/sdk/ai/azure-ai-projects/azure/ai/projects/models/_enums.py
@@ -656,14 +656,14 @@ class ServiceTier(str, Enum, metaclass=CaseInsensitiveEnumMeta):
     """Specifies the processing type used for serving the request.
 
     * If set to 'auto', then the request will be processed with the service tier
-      configured in the Project settings. Unless otherwise configured, the Project will use
-      'default'.
+    configured in the Project settings. Unless otherwise configured, the Project will use
+    'default'.
     * If set to 'default', then the request will be processed with the standard
-      pricing and performance for the selected model.
+    pricing and performance for the selected model.
     * If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)'
-      or 'priority', then the request will be processed with the corresponding service
-      tier. [Contact sales](https://openai.com/contact-sales) to learn more about Priority
-      processing.
+    or 'priority', then the request will be processed with the corresponding service
+    tier. [Contact sales](https://openai.com/contact-sales) to learn more about Priority
+    processing.
     * When not set, the default behavior is 'auto'.
 
     When the ``service_tier`` parameter is set, the response body will include the ``service_tier``
diff --git a/sdk/ai/azure-ai-projects/azure/ai/projects/models/_models.py b/sdk/ai/azure-ai-projects/azure/ai/projects/models/_models.py
index 4d33d9353187..55a0998e78c7 100644
--- a/sdk/ai/azure-ai-projects/azure/ai/projects/models/_models.py
+++ b/sdk/ai/azure-ai-projects/azure/ai/projects/models/_models.py
@@ -10068,7 +10068,6 @@ class Response(_Model):
     :ivar metadata: Set of 16 key-value pairs that can be attached to an object. This can be
      useful for storing additional information about the object in a structured format, and
      querying for objects via API or the dashboard.
-
      Keys are strings with a maximum length of 64 characters. Values are strings with a maximum
      length of 512 characters. Required.
     :vartype metadata: dict[str, str]
@@ -10081,7 +10080,6 @@ class Response(_Model):
     where the model considers the results of the tokens with
     top_p probability mass. So 0.1 means
     only the tokens comprising the top 10% probability mass are considered.
-
     We generally recommend altering this or ``temperature`` but not both. Required.
    :vartype top_p: float
    :ivar user: A unique identifier representing your end-user, which can help OpenAI to monitor
@@ -10119,23 +10117,16 @@ class Response(_Model):
     and `Structured Outputs <https://platform.openai.com/docs/guides/structured-outputs>`_.
    :vartype text: ~azure.ai.projects.models.ResponseText
    :ivar tools: An array of tools the model may call while generating a response. You
-     can specify which tool to use by setting the ``tool_choice`` parameter.
-
-     The two categories of tools you can provide the model are:
-
+    can specify which tool to use by setting the tool_choice parameter.
+    The two categories of tools you can provide the model are:
 
-
-     * **Built-in tools**: Tools that are provided by OpenAI that extend the
-     model's capabilities, like [web
-     search](https://platform.openai.com/docs/guides/tools-web-search)
-     or [file search](https://platform.openai.com/docs/guides/tools-file-search). Learn more about
-     [built-in tools](https://platform.openai.com/docs/guides/tools).
-     * **Function calls (custom tools)**: Functions that are defined by you,
-     enabling the model to call your own code. Learn more about
-     [function calling](https://platform.openai.com/docs/guides/function-calling).
+    * Built-in tools: Tools that are provided by OpenAI that extend the
+    model's capabilities, like web search or file search.
+    * Function calls (custom tools): Functions that are defined by you,
+    enabling the model to call your own code.
    :vartype tools: list[~azure.ai.projects.models.Tool]
    :ivar tool_choice: How the model should select which tool (or tools) to use when generating
-     a response. See the ``tools`` parameter to see how to specify which tools
+    a response. See the tools parameter to see how to specify which tools
     the model can call. Is either a Union[str, "_models.ToolChoiceOptions"] type or a
     ToolChoiceObject type.
    :vartype tool_choice: str or ~azure.ai.projects.models.ToolChoiceOptions or
@@ -10143,14 +10134,12 @@ class Response(_Model):
    :ivar prompt:
    :vartype prompt: ~azure.ai.projects.models.Prompt
    :ivar truncation: The truncation strategy to use for the model response.
-
     * `auto`: If the context of this response and previous ones exceeds
-     the model's context window size, the model will truncate the
-     response to fit the context window by dropping input items in the
-     middle of the conversation.
+    the model's context window size, the model will truncate the
+    response to fit the context window by dropping input items in the
+    middle of the conversation.
     * `disabled` (default): If a model response will exceed the context window
-     size for a model, the request will fail with a 400 error. Is either a Literal["auto"] type or a
-     Literal["disabled"] type.
+    size for a model, the request will fail with a 400 error. Is either a Literal["auto"] type or a Literal["disabled"] type.
    :vartype truncation: str or str
    :ivar id: Unique identifier for this Response. Required.
    :vartype id: str
@@ -10169,18 +10158,13 @@ class Response(_Model):
    :ivar incomplete_details: Details about why the response is incomplete. Required.
    :vartype incomplete_details: ~azure.ai.projects.models.ResponseIncompleteDetails1
    :ivar output: An array of content items generated by the model.
-
-
-
-     * The length and order of items in the `output` array is dependent
-     on the model's response.
+    * The length and order of items in the `output` array is dependent on the model's response.
     * Rather than accessing the first item in the `output` array and
-     assuming it's an `assistant` message with the content generated by
-     the model, you might consider using the `output_text` property where
-     supported in SDKs. Required.
+    assuming it's an `assistant` message with the content generated by
+    the model, you might consider using the `output_text` property where
+    supported in SDKs. Required.
    :vartype output: list[~azure.ai.projects.models.ItemResource]
    :ivar instructions: A system (or developer) message inserted into the model's context.
-
     When using along with ``previous_response_id``, the instructions from a previous
     response will not be carried over to the next response. This makes it simple to swap out
     system (or developer) messages in new responses. Required. Is either a str type or
@@ -10207,7 +10191,6 @@ class Response(_Model):
    """Set of 16 key-value pairs that can be attached to an object. This can be
    useful for storing additional information about the object in a structured format, and
    querying for objects via API or the dashboard.
-
    Keys are strings with a maximum length of 64 characters. Values are strings with a maximum
    length of 512 characters. Required."""
    temperature: float = rest_field(visibility=["read", "create", "update", "delete", "query"])
@@ -10219,7 +10202,6 @@ class Response(_Model):
    where the model considers the results of the tokens with
    top_p probability mass. So 0.1 means
    only the tokens comprising the top 10% probability mass are considered.
-
    We generally recommend altering this or ``temperature`` but not both. Required."""
    user: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
    """A unique identifier representing your end-user, which can help OpenAI to monitor and detect
@@ -10257,25 +10239,20 @@ class Response(_Model):
    and `Structured Outputs <https://platform.openai.com/docs/guides/structured-outputs>`_."""
    tools: Optional[list["_models.Tool"]] = rest_field(visibility=["read", "create", "update", "delete", "query"])
    """An array of tools the model may call while generating a response. You
-    can specify which tool to use by setting the ``tool_choice`` parameter.
-
-    The two categories of tools you can provide the model are:
-
-
-
-    * **Built-in tools**: Tools that are provided by OpenAI that extend the
-    model's capabilities, like [web
-    search](https://platform.openai.com/docs/guides/tools-web-search)
-    or [file search](https://platform.openai.com/docs/guides/tools-file-search). Learn more about
-    [built-in tools](https://platform.openai.com/docs/guides/tools).
-    * **Function calls (custom tools)**: Functions that are defined by you,
-    enabling the model to call your own code. Learn more about
-    [function calling](https://platform.openai.com/docs/guides/function-calling)."""
+    can specify which tool to use by setting the tool_choice parameter.
+    The two categories of tools you can provide the model are:
+
+    * Built-in tools: Tools that are provided by OpenAI that extend the
+    model's capabilities, like web search or file search. Learn more about
+    built-in tools at https://platform.openai.com/docs/guides/tools.
+    * Function calls (custom tools): Functions that are defined by you,
+    enabling the model to call your own code. Learn more about
+    function calling at https://platform.openai.com/docs/guides/function-calling."""
    tool_choice: Optional[Union[str, "_models.ToolChoiceOptions", "_models.ToolChoiceObject"]] = rest_field(
        visibility=["read", "create", "update", "delete", "query"]
    )
    """How the model should select which tool (or tools) to use when generating
-    a response. See the ``tools`` parameter to see how to specify which tools
+    a response. See the tools parameter to see how to specify which tools
    the model can call. Is either a Union[str, \"_models.ToolChoiceOptions\"] type or a ToolChoiceObject type."""
    prompt: Optional["_models.Prompt"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
    truncation: Optional[Literal["auto", "disabled"]] = rest_field(
@@ -10283,13 +10260,12 @@ class Response(_Model):
        visibility=["read", "create", "update", "delete", "query"]
    )
    """The truncation strategy to use for the model response.
-
-    * `auto`: If the context of this response and previous ones exceeds
-    the model's context window size, the model will truncate the
-    response to fit the context window by dropping input items in the
-    middle of the conversation.
-    * `disabled` (default): If a model response will exceed the context window
-    size for a model, the request will fail with a 400 error. Is either a Literal[\"auto\"] type or
+    * `auto`: If the context of this response and previous ones exceeds
+    the model's context window size, the model will truncate the
+    response to fit the context window by dropping input items in the
+    middle of the conversation.
+    * `disabled` (default): If a model response will exceed the context window
+    size for a model, the request will fail with a 400 error. Is either a Literal[\"auto\"] type or
    a Literal[\"disabled\"] type."""
    id: str = rest_field(visibility=["read", "create", "update", "delete", "query"])
    """Unique identifier for this Response. Required."""
@@ -10315,20 +10291,16 @@ class Response(_Model):
    """Details about why the response is incomplete. Required."""
    output: list["_models.ItemResource"] = rest_field(visibility=["read", "create", "update", "delete", "query"])
    """An array of content items generated by the model.
-
-
-
-    * The length and order of items in the `output` array is dependent
-    on the model's response.
-    * Rather than accessing the first item in the `output` array and
-    assuming it's an `assistant` message with the content generated by
-    the model, you might consider using the `output_text` property where
-    supported in SDKs. Required."""
+    * The length and order of items in the `output` array is dependent
+    on the model's response.
+    * Rather than accessing the first item in the `output` array and
+    assuming it's an `assistant` message with the content generated by
+    the model, you might consider using the `output_text` property where
+    supported in SDKs. Required."""
    instructions: Union[str, list["_models.ItemParam"]] = rest_field(
        visibility=["read", "create", "update", "delete", "query"]
    )
    """A system (or developer) message inserted into the model's context.
-
    When using along with ``previous_response_id``, the instructions from a previous
    response will not be carried over to the next response. This makes it simple to swap out
    system (or developer) messages in new responses. Required. Is either a str type or
diff --git a/sdk/ai/azure-ai-projects/pyproject.toml b/sdk/ai/azure-ai-projects/pyproject.toml
index 30b9d2683cd8..dedc2490f12c 100644
--- a/sdk/ai/azure-ai-projects/pyproject.toml
+++ b/sdk/ai/azure-ai-projects/pyproject.toml
@@ -63,7 +63,6 @@ pytyped = ["py.typed"]
 
 [tool.azure-sdk-build]
 verifytypes = false
-sphinx = false
 
 [tool.mypy]
 exclude = [
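
The `Response` docstrings edited above mirror OpenAI's Responses API, which `azure-ai-projects` re-exposes. A minimal sketch of the request-side knobs they document (`service_tier`, `top_p`, `metadata`), written against the `openai` Python package rather than `azure-ai-projects` itself; the client wiring and model name are illustrative assumptions, not something this diff defines:

```python
# Sketch: exercising the parameters documented in the docstrings above.
# Client construction and model name are assumptions; adapt to your setup.
from openai import OpenAI

client = OpenAI()  # assumes an API key (or Azure-backed base_url) in the environment

response = client.responses.create(
    model="gpt-4o-mini",  # assumption: any Responses-capable model
    input="Summarize the main risks of context-window truncation.",
    service_tier="auto",  # 'auto' | 'default' | 'flex' | 'priority'; unset behaves like 'auto'
    top_p=0.1,            # only the top 10% probability mass is considered;
                          # alter this or temperature, but generally not both
    metadata={"team": "docs", "run": "example-1"},  # up to 16 pairs; keys <= 64 chars, values <= 512
)

# When service_tier is set on the request, the response body echoes the
# tier actually used to process it.
print(response.service_tier)
print(response.output_text)
```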
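The `tools`, `tool_choice`, and `truncation` docstrings describe behavior that is easiest to see in a request. A sketch under the same assumptions; the built-in tool's `type` string varies by platform version, and `get_weather` is a hypothetical function tool, not an API the diff defines:

```python
# Sketch: the two tool categories described above, plus tool_choice and truncation.
from openai import OpenAI

client = OpenAI()

tools = [
    # Built-in tool provided by the platform (type name is an assumption;
    # some deployments expose it as "web_search_preview").
    {"type": "web_search"},
    # Function call (custom tool): the model may emit a call for your code to run.
    {
        "type": "function",
        "name": "get_weather",  # hypothetical function, defined by you
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
]

response = client.responses.create(
    model="gpt-4o-mini",
    input="What's the weather in Oslo?",
    tools=tools,
    # Force the custom tool; "auto" would let the model pick from `tools`.
    tool_choice={"type": "function", "name": "get_weather"},
    # "auto" drops middle-of-conversation items instead of failing with a 400.
    truncation="auto",
)

for item in response.output:
    if item.type == "function_call":
        # Arguments arrive as a JSON string for your code to parse and execute.
        print(item.name, item.arguments)
```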
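Finally, the `output` docstring warns against assuming `output[0]` is the assistant message, and the `instructions` docstring notes that instructions are not inherited across `previous_response_id`. A short sketch of both points, same assumptions as above:

```python
# Sketch: prefer output_text over indexing output[0], and re-supply
# instructions when chaining responses with previous_response_id.
from openai import OpenAI

client = OpenAI()

first = client.responses.create(
    model="gpt-4o-mini",
    instructions="Answer in one sentence.",
    input="What is a context window?",
)

# output_text gathers the generated text wherever it sits in the output
# array; the first output item is not guaranteed to be the assistant text.
print(first.output_text)

second = client.responses.create(
    model="gpt-4o-mini",
    previous_response_id=first.id,
    # The instructions from `first` are NOT carried over, which makes it
    # simple to swap in a new system/developer message here.
    instructions="Now answer as a haiku.",
    input="Same question.",
)
print(second.output_text)
```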