
Conversation

@daniel-sanche
Contributor

PR created by the Librarian CLI to generate Cloud Client Libraries code from protos.

BEGIN_COMMIT

BEGIN_NESTED_COMMIT
feat: publish the pipelines API to the stable branch

PiperOrigin-RevId: 840829013
Library-IDs: google-cloud-firestore
Source-link: googleapis/googleapis@534adc56
END_NESTED_COMMIT

BEGIN_NESTED_COMMIT
docs: minor api documentation changes

PiperOrigin-RevId: 840398028
Library-IDs: google-cloud-firestore
Source-link: googleapis/googleapis@bfdeefc2
END_NESTED_COMMIT

END_COMMIT

This pull request is generated with proto changes between
googleapis/googleapis@659ea6e9
(exclusive) and
googleapis/googleapis@534adc56
(inclusive).

Librarian Version: v0.7.0
Language Image: us-central1-docker.pkg.dev/cloud-sdk-librarian-prod/images-prod/python-librarian-generator@sha256:ce48ed695c727f7e13efd1fd68f466a55a0d772c87b69158720cec39965bc8b2

@daniel-sanche daniel-sanche requested review from a team as code owners December 15, 2025 23:23
@product-auto-label product-auto-label bot added the size: xl (Pull request size is extra large.) and api: firestore (Issues related to the googleapis/python-firestore API.) labels on Dec 15, 2025
@gemini-code-assist

Summary of Changes

Hello @daniel-sanche, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates the new Firestore Pipelines API into the google-cloud-firestore Python client library. It introduces the execute_pipeline method, allowing developers to construct and run complex, multi-stage queries directly within the client. This enhancement provides a more structured approach to query execution, supported by new data types for defining pipelines and capturing execution statistics, and is fully implemented across all supported communication protocols.

Highlights

  • New Firestore Pipelines API: Introduces the ExecutePipeline method, allowing for the execution of structured pipeline queries in Firestore.
  • Client Method Additions: Both synchronous (FirestoreClient) and asynchronous (FirestoreAsyncClient) Python clients now include the execute_pipeline method (see the sketch after this list).
  • New Data Types: Adds StructuredPipeline, Function, Pipeline, and ExplainStats types to support the new API functionality.
  • Transport Layer Integration: Implements the ExecutePipeline method across gRPC, gRPC AsyncIO, and REST transports, ensuring broad compatibility.
  • Comprehensive Testing: Includes new unit tests for the execute_pipeline method, covering various scenarios for both gRPC and REST clients.
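
As a rough illustration of the surface described above, here is a minimal sketch of calling the new method. The import paths, message names, and streaming behavior (ExecutePipelineRequest, StructuredPipeline, iterating a server-streaming response) are assumptions based on this summary and standard GAPIC conventions, not a verified excerpt of the generated library.

from google.cloud.firestore_v1.services.firestore import FirestoreClient
from google.cloud.firestore_v1.types import firestore

# Hypothetical sketch: build a pipeline request and iterate the results.
client = FirestoreClient()

request = firestore.ExecutePipelineRequest(
    database="projects/my-project/databases/(default)",
    structured_pipeline=firestore.StructuredPipeline(),  # pipeline stages would be configured here
)

# ExecutePipeline is assumed to be server-streaming, so the call is iterated
# to consume each batch of results as it arrives.
for response in client.execute_pipeline(request=request):
    print(response)
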
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request is an auto-generated update from protos, primarily introducing the execute_pipeline API. The changes add the new RPC and its related types, transport implementations, and tests. My review focuses on a performance improvement in the new client methods: pre-compiling regular expressions at the module level so they are not recompiled on every call.

Comment on lines +1324 to +1334
routing_param_regex = re.compile("^projects/(?P<project_id>[^/]+)(?:/.*)?$")
regex_match = routing_param_regex.match(request.database)
if regex_match and regex_match.group("project_id"):
    header_params["project_id"] = regex_match.group("project_id")

routing_param_regex = re.compile(
    "^projects/[^/]+/databases/(?P<database_id>[^/]+)(?:/.*)?$"
)
regex_match = routing_param_regex.match(request.database)
if regex_match and regex_match.group("database_id"):
    header_params["database_id"] = regex_match.group("database_id")


Severity: medium

For performance, it's best to compile regular expressions only once at the module level, rather than inside a function that might be called frequently. This avoids the overhead of recompiling the regex on every call.

Please define _PROJECT_ID_REGEX and _DATABASE_ID_REGEX as module-level constants. I've also used more descriptive variable names for the match objects to improve clarity and avoid reusing the regex_match variable.

Suggested change
-routing_param_regex = re.compile("^projects/(?P<project_id>[^/]+)(?:/.*)?$")
-regex_match = routing_param_regex.match(request.database)
-if regex_match and regex_match.group("project_id"):
-    header_params["project_id"] = regex_match.group("project_id")
-
-routing_param_regex = re.compile(
-    "^projects/[^/]+/databases/(?P<database_id>[^/]+)(?:/.*)?$"
-)
-regex_match = routing_param_regex.match(request.database)
-if regex_match and regex_match.group("database_id"):
-    header_params["database_id"] = regex_match.group("database_id")
+# For performance, compile regexes at module level and reuse them.
+# Using _PROJECT_ID_REGEX defined at module level.
+project_id_match = _PROJECT_ID_REGEX.match(request.database)
+if project_id_match and project_id_match.group("project_id"):
+    header_params["project_id"] = project_id_match.group("project_id")
+
+# Using _DATABASE_ID_REGEX defined at module level.
+database_id_match = _DATABASE_ID_REGEX.match(request.database)
+if database_id_match and database_id_match.group("database_id"):
+    header_params["database_id"] = database_id_match.group("database_id")
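
For completeness, the module-level constants the suggestion relies on (but does not show) could look like the following. This is a minimal sketch: the constant names come from the suggestion itself and the regex patterns from the code above; their exact placement in the module is an assumption.

import re

# Compiled once at import time and reused by every call that extracts
# routing header parameters from request.database.
_PROJECT_ID_REGEX = re.compile("^projects/(?P<project_id>[^/]+)(?:/.*)?$")
_DATABASE_ID_REGEX = re.compile(
    "^projects/[^/]+/databases/(?P<database_id>[^/]+)(?:/.*)?$"
)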

Comment on lines +1705 to +1715
routing_param_regex = re.compile("^projects/(?P<project_id>[^/]+)(?:/.*)?$")
regex_match = routing_param_regex.match(request.database)
if regex_match and regex_match.group("project_id"):
    header_params["project_id"] = regex_match.group("project_id")

routing_param_regex = re.compile(
    "^projects/[^/]+/databases/(?P<database_id>[^/]+)(?:/.*)?$"
)
regex_match = routing_param_regex.match(request.database)
if regex_match and regex_match.group("database_id"):
    header_params["database_id"] = regex_match.group("database_id")


Severity: medium

For performance, it's best to compile regular expressions only once at the module level, rather than inside a function that might be called frequently. This avoids the overhead of recompiling the regex on every call.

Please define _PROJECT_ID_REGEX and _DATABASE_ID_REGEX as module-level constants. I've also used more descriptive variable names for the match objects to improve clarity and avoid reusing the regex_match variable.

Suggested change
-routing_param_regex = re.compile("^projects/(?P<project_id>[^/]+)(?:/.*)?$")
-regex_match = routing_param_regex.match(request.database)
-if regex_match and regex_match.group("project_id"):
-    header_params["project_id"] = regex_match.group("project_id")
-
-routing_param_regex = re.compile(
-    "^projects/[^/]+/databases/(?P<database_id>[^/]+)(?:/.*)?$"
-)
-regex_match = routing_param_regex.match(request.database)
-if regex_match and regex_match.group("database_id"):
-    header_params["database_id"] = regex_match.group("database_id")
+# For performance, compile regexes at module level and reuse them.
+# Using _PROJECT_ID_REGEX defined at module level.
+project_id_match = _PROJECT_ID_REGEX.match(request.database)
+if project_id_match and project_id_match.group("project_id"):
+    header_params["project_id"] = project_id_match.group("project_id")
+
+# Using _DATABASE_ID_REGEX defined at module level.
+database_id_match = _DATABASE_ID_REGEX.match(request.database)
+if database_id_match and database_id_match.group("database_id"):
+    header_params["database_id"] = database_id_match.group("database_id")
