
Conversation

@AlexZMLyu

Bases batching on size rather than row count to avoid exceeding an internal 10MB limit. Also removes an obsolete assertion in the test.

Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:

  • Make sure to open an issue as a bug/issue before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea
  • Ensure the tests and linter pass
  • Code coverage does not decrease (if any source code was changed)
  • Appropriate docs were updated (if necessary)

Fixes #<issue_number_goes_here> 🦕

@AlexZMLyu requested a review from a team as a code owner on December 11, 2025, 05:43
@gemini-code-assist

Summary of Changes

Hello @AlexZMLyu, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the data batching mechanism within the BigQuery Storage Arrow sample. The primary goal is to prevent AppendRowsRequest payloads from exceeding BigQuery's internal 10MB size limit by implementing a size-aware batching strategy. This change improves the robustness and reliability of data ingestion when using PyArrow with BigQuery Storage.

Highlights

  • Batching Logic Update: The generate_write_requests function in the BigQuery Storage Arrow sample has been updated to base batching on the serialized size of PyArrow record batches rather than row count. This ensures that individual AppendRowsRequest payloads do not exceed the 10MB internal limit, using a conservative 7MB soft limit.
  • Improved Error Handling: A new check has been added to raise a ValueError if a single PyArrow batch containing just one row is larger than the maximum allowed request size (7MB), preventing potential issues with oversized individual records. A sketch of this size-aware batching follows below.
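
The bullets above describe the new size-aware batching at a high level. As a rough illustration, here is a minimal sketch of that idea; names such as `MAX_REQUEST_BYTES` and the row-level slicing are assumptions for illustration rather than the sample's exact code, and building the actual `AppendRowsRequest` objects is omitted:

```python
import pyarrow as pa

# Conservative soft limit; AppendRowsRequest payloads must stay under the ~10MB API cap.
MAX_REQUEST_BYTES = 7 * 1024 * 1024


def generate_write_requests(table: pa.Table):
    """Yield groups of record batches whose combined size stays under the soft limit."""
    current_batches = []
    current_size = 0
    for batch in table.to_batches(max_chunksize=1):  # one row per batch
        # get_total_buffer_size() is used here as an approximation of the serialized size.
        batch_size = batch.get_total_buffer_size()
        if batch_size > MAX_REQUEST_BYTES:
            raise ValueError(
                f"A single row is {batch_size} bytes, which exceeds the "
                f"{MAX_REQUEST_BYTES}-byte request limit."
            )
        if current_batches and current_size + batch_size > MAX_REQUEST_BYTES:
            yield current_batches
            current_batches, current_size = [], 0
        current_batches.append(batch)
        current_size += batch_size
    if current_batches:
        yield current_batches
```
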
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist (bot) left a comment


Code Review

This pull request updates the batching logic in the BigQuery Storage Arrow sample to be based on byte size instead of row count. This is a great improvement as it prevents requests from exceeding the 10MB size limit. The new implementation is more robust. I've suggested a small refactoring to reduce code duplication, which will improve maintainability.

- Changes query to SELECT DISTINCT int64_col to count unique rows.
- Asserts the count is exactly TABLE_LENGTH, removing the allowance for extra rows from potential retries (see the sketch below).
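
As an illustration of the adjusted check, here is a hedged sketch; `client`, `table_id`, and `TABLE_LENGTH` are assumed to come from the sample's existing test setup:

```python
# Hypothetical verification mirroring the commit message above; client, table_id,
# and TABLE_LENGTH are assumed to exist in the test module.
query = f"SELECT DISTINCT int64_col FROM `{table_id}`"
rows = list(client.query(query).result())
# Exactly TABLE_LENGTH distinct values should be present; the old allowance for
# extra rows from retried appends is gone.
assert len(rows) == TABLE_LENGTH
```
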
@parthea changed the title from "Fix: Update BigQuery Storage Arrow samples batching logic" to "docs(samples): Update BigQuery Storage Arrow samples batching logic" on Dec 17, 2025
@parthea requested a review from GaoleMeng on December 17, 2025, 16:16
…ample

- Updates batching logic to use serialized size to avoid exceeding API limits.
- Ensures all rows in the PyArrow table are serialized for the request.
- Includes enhancements for measuring serialized row sizes.
- Changed `generate_write_requests` to be a generator, yielding requests
  instead of returning a list.
- Made `stream.send()` calls blocking by calling `future.result()` immediately,
  ensuring requests are sent sequentially (sketched below).
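
As a rough sketch of the sending pattern described above, assuming `stream` is the sample's AppendRowsStream and `generate_write_requests` yields one fully built request at a time:

```python
# Hedged sketch: send each request and block on its future so requests go out
# one at a time, in order.
for request in generate_write_requests(table):
    future = stream.send(request)
    future.result()  # wait for this append to be acknowledged before sending the next
```
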