[Local Catalog] Integrate catalog API behind a feature flag #16301
Conversation
joshheald
left a comment
Works really well, thanks for integrating it! A few small points inline.
func pollForCatalogCompletion(siteID: Int64, syncRemote: POSCatalogSyncRemoteProtocol) async throws -> String {
    // Each attempt is made 1 second after the last one completes.
Perhaps we should have some backoff here – e.g. after a minute of checking every second, maybe every 5 seconds is fine even if it means someone's waiting 4 seconds longer than they would with 1s polling.
I also wonder whether we might have any rate limiting issues doing this with some hosts, WDYT?
> Perhaps we should have some backoff here
Great idea: some progressive backoff (1s → 5s after 1 minute, as you suggested) would be good here. Polling every second for potentially long-running catalog generations consumes server resources unnecessarily. I also plan to add a basic retry mechanism for the polling, so that one failing request doesn't end the full sync. I will implement both in a follow-up PR.
> I also wonder whether we might have any rate limiting issues doing this with some hosts, WDYT?
I did some research on rate limiting in WP/WC, and there doesn't seem to be a built-in rate limiting system for plugins or the WC REST API. The WC Store API does have rate limiting support, with a default maximum of 25 requests per 10 seconds. Some plugins like WordFence offer rate limiting. Other findings:
- Major hosting providers (WP Engine, Kinsta, SiteGround, Bluehost) don't seem to implement explicit REST API rate limiting; they use resource-based limits (CPU, bandwidth, visits) instead.
- WordPress VIP has a global rate limit of 10 requests per second (REST APIs aren't explicitly mentioned the way XML-RPC and login endpoints are, but I assume the rate limiting applies to API requests too).
Right now, the polling frequency is a bit under 1 request per second because of request overhead (each request is made after the previous one finishes), which is likely fine for the common default settings. The backoff implementation will further reduce the risk of hitting server resource limits.
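For reference, here's a minimal sketch of the kind of progressive backoff I have in mind. It's illustrative only (the `isCatalogReady` closure and `CatalogPollingTimeout` error are placeholders, not this PR's API); the real change will come in the follow-up PR.

```swift
import Foundation

/// Placeholder error for illustration; the real code would surface a proper timeout.
struct CatalogPollingTimeout: Error {}

/// Illustrative sketch: poll every second for the first minute, then back off to
/// every 5 seconds, up to a maximum number of attempts.
func pollUntilReady(maxAttempts: Int, isCatalogReady: () async throws -> Bool) async throws {
    let start = Date()
    for _ in 0..<maxAttempts {
        if try await isCatalogReady() {
            return
        }
        // Back off after the first minute of polling.
        let interval: TimeInterval = Date().timeIntervalSince(start) < 60 ? 1 : 5
        try await Task.sleep(nanoseconds: UInt64(interval * 1_000_000_000))
    }
    throw CatalogPollingTimeout()
}
```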
Thanks for the research!
let variations = catalog.variations.filter { variation in
    let parentExists = productIDs.contains { $0 == variation.productID }
    if !parentExists {
        DDLogWarn("Variation \(variation.productVariationID) references missing product \(variation.productID)")
Good thinking logging this.
Suggested change:
- DDLogWarn("Variation \(variation.productVariationID) references missing product \(variation.productID)")
+ DDLogWarn("Variation \(variation.productVariationID) references missing product \(variation.productID) - it will not be available in POS.")
Updated in 786a89a.
    }
    return downloadURL
case .pending, .processing:
    DDLogInfo("🟣 Catalog request \(response.status)... (attempt \(attempts + 1)/\(maxAttempts))")
This will flood the logs for large catalogs...
Currently we log with .verbose even in prod. Perhaps we should change that to .info in AppDelegate, and then update these to be .debug logs?
Good catch! As an alternative to making global changes for the whole app, I reduced the logging noise by only logging every 10th polling attempt in e1990ed. This keeps the logs useful for debugging (a log line roughly every 10 seconds with 1s polling, or every 50 seconds with the later 5s backoff polling) without flooding them for long-running catalog generations. WDYT?
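Roughly, the idea is just to gate the log line on the attempt count (illustrative sketch with placeholder names; the actual change is in e1990ed):

```swift
import CocoaLumberjackSwift

/// Logs polling progress only on every 10th attempt so long-running catalog
/// generations don't flood the logs. Illustrative only; see e1990ed for the real change.
func logPollingProgressIfNeeded(attempt: Int, maxAttempts: Int, status: String) {
    guard attempt % 10 == 0 else { return }
    DDLogInfo("🟣 Catalog request \(status)... (attempt \(attempt + 1)/\(maxAttempts))")
}
```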
That's fine, yeah... but I do think we should change this separately anyway.
IMO, the right way to do this would be to make this a .debug or even .verbose level log. Then we'd keep debug builds' log output set to verbose and set production logging to info. We'd have to check there aren't other places where we're using debug log levels that we want for debugging customer issues, so it makes sense not to do that in this PR.
Merging now, will work on the follow-up tasks in #16301 (comment). Feel free to keep commenting on this PR.

Part of WOOMOB-1609
Just one review is required.
Description
This PR integrates the catalog API endpoints into the POS catalog sync flow behind a feature flag. The implementation adds support for forcing catalog regeneration and handles edge cases where the API returns variations without their parent products.
Key changes:
- Added a `pointOfSaleCatalogAPI` feature flag to control whether to use the new catalog API or the existing paginated endpoints (see the sketch after this list). It defaults to false, as the catalog API is still not fully ready in core.
- Added a `regenerateCatalog` parameter to force catalog generation from merchant-initiated actions (e.g., manual refresh in catalog settings).
- The catalog is limited to `simple`, `variable`, and `variation` product types with `publish` status. (As a future improvement, we can update the product DB arguments later to include all status cases to avoid this workaround.)
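In broad strokes, the gating looks like this (a simplified sketch where `useCatalogAPI` is resolved from the `pointOfSaleCatalogAPI` flag and the two sync closures are placeholders, not the actual types in this PR):

```swift
/// Simplified sketch of the feature-flag gating between the two sync paths.
func syncCatalog(siteID: Int64,
                 useCatalogAPI: Bool, // resolved from the pointOfSaleCatalogAPI feature flag
                 syncUsingCatalogAPI: (Int64) async throws -> Void,
                 syncUsingPaginatedEndpoints: (Int64) async throws -> Void) async throws {
    if useCatalogAPI {
        // New flow: request catalog generation, poll for completion, then download the full catalog.
        try await syncUsingCatalogAPI(siteID)
    } else {
        // Existing flow: page through the products and variations endpoints.
        try await syncUsingPaginatedEndpoints(siteID)
    }
}
```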
Test Steps
Prerequisites:
- The `self-managed-fun-shop` (small store, < 30s), `largefuntesting` (~5 minutes), and `largerfuntesting` (~9 minutes) Pressable test sites already have this set up; please DM me if you'd like access.
- Change `POSLocalCatalogEligibilityService`'s `defaultCatalogSizeLimit` from 1000 to a large number like 10000000.
- Enable the `pointOfSaleCatalogAPI` feature flag in `DefaultFeatureFlagService`.
- Update RELEASE-NOTES.txt if necessary.
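For the feature flag prerequisite, the local override is just a hard-coded return while testing. The exact switch in `DefaultFeatureFlagService.swift` may be shaped differently; this is only a sketch:

```swift
// Illustrative only; the real switch in DefaultFeatureFlagService.swift may differ.
func isFeatureFlagEnabled(_ featureFlag: FeatureFlag) -> Bool {
    switch featureFlag {
    case .pointOfSaleCatalogAPI:
        return true // temporarily enabled locally for testing; ships disabled by default
    default:
        return false
    }
}
```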