feat: introduce redis for caching configurations #797
base: main
Conversation
Important: Review skipped — auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
Actionable comments posted: 15
🤖 Fix all issues with AI Agents
In @crates/context_aware_config/src/api/config/handlers.rs:
- Around line 773-801: The cache keys for both fetch_from_redis_else_writeback
calls ignore the requested version, causing wrong cached data to be returned for
versioned requests; update both keys to include the version parameter (use the
incoming query_filters.version for the version lookup key and the resolved
config_version value for the config payload key) so keys become unique per
version (e.g., append the version string/number to the format! key); modify the
first fetch (CONFIG_VERSION_KEY_SUFFIX) and the second fetch (CONFIG_KEY_SUFFIX)
to incorporate the appropriate version identifier and ensure you propagate the
same version identifier into the second key after resolving config_version.
- Around line 678-706: The cache keys built for fetch_from_redis_else_writeback
(using schema_name and CONFIG_VERSION_KEY_SUFFIX / CONFIG_KEY_SUFFIX) do not
incorporate the requested version from query_filters.version, causing
stale/wrong versions to be served; fix by including the version identifier
(e.g., format!("{}:{}{}", schema_name.0,
query_filters.version.unwrap_or("latest"), CONFIG_VERSION_KEY_SUFFIX) and
similarly for CONFIG_KEY_SUFFIX) or, alternatively, bypass cache when
query_filters.version is a specific version and only use caching for the
"latest" path; update the key construction where fetch_from_redis_else_writeback
is called (references: fetch_from_redis_else_writeback,
CONFIG_VERSION_KEY_SUFFIX, CONFIG_KEY_SUFFIX, query_filters.version,
get_config_version, generate_config_from_version, schema_name,
workspace_context) accordingly.
In @crates/experimentation_platform/src/api/experiment_groups/handlers.rs:
- Around line 149-154: The handlers currently propagate Redis failures because
they call put_experiment_groups_in_redis(...).await? after successful DB
mutations; change these calls in all mutation handlers (create, update,
add-members, remove-members, delete, backfill) to await the future without the
`?` and handle errors by logging them (e.g., using the existing logger/tracing)
instead of returning an error—i.e., run let res =
put_experiment_groups_in_redis(...).await and if res.is_err() log the error with
context including the workspace/schema and operation name so the DB mutation
still returns success even if the cache update fails.
- Around line 431-456: The DB transaction result is being ignored and Redis
upsert is called unconditionally, which can return an error to the client even
when the DB delete succeeded; change the flow so you first capture the
transaction result from db_conn.transaction into result (the existing variable),
check if result is Ok(marked_group) before calling
put_experiment_groups_in_redis, and if the Redis call fails do not convert that
into an overall error—either log the Redis error and return the successful
transaction Json(marked_group) to the caller or enqueue an async retry, ensuring
you never call put_experiment_groups_in_redis when the transaction returned Err
and never overwrite a successful DB result with a Redis error; use the symbols
result and put_experiment_groups_in_redis to locate and update the control flow.
In @crates/service_utils/src/middlewares/auth_z.rs:
- Around line 136-138: The function get_auth_z_provider currently silently falls
back to "DISABLED" when AUTH_Z_PROVIDER is missing; remove the permissive
fallback so missing config panics instead. Change get_auth_z_provider to call
get_from_env_unsafe("AUTH_Z_PROVIDER") and propagate the Result/Option as before
(e.g., use unwrap() or otherwise panic) rather than using
unwrap_or("DISABLED".to_string()), so absence of AUTH_Z_PROVIDER fails fast and
does not silently disable authorization.
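The fail-fast behavior this prompt asks for can be sketched in isolation. The helper below is a hypothetical stand-in for the real `get_auth_z_provider`, and the panic message is an assumption, not the PR's code:

```rust
// Hypothetical stand-in for the review's suggestion: absence of
// AUTH_Z_PROVIDER should abort startup rather than silently fall back to
// "DISABLED" and turn authorization off.
fn auth_z_provider_from(value: Option<String>) -> String {
    value.expect("AUTH_Z_PROVIDER must be set; refusing to start with authorization disabled")
}

// At startup this would be called roughly as:
//     let provider = auth_z_provider_from(std::env::var("AUTH_Z_PROVIDER").ok());
```

With the variable present, the function returns its value; with it absent, the process panics during initialization, which is the intentional behavior recorded in the learning for `AuthZHandler::init` below.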
In @crates/service_utils/src/redis.rs:
- Around line 34-36: The ok_or call creating `pool` is unreachable because
`redis_pool` is already handled by an early return when None; remove the `let
pool = redis_pool.ok_or(...)?;` expression and instead use the existing
`redis_pool` value (or unwrap it safely) where `pool` is needed, referencing the
`redis_pool` and `pool` identifiers in the `crates/service_utils/src/redis.rs`
diff so no dead fallback error branch remains.
- Around line 55-64: The Redis write-back in client.set inside
crates/service_utils/src/redis.rs currently maps and propagates errors with ?
which causes the whole operation to fail on transient Redis issues; change this
to mirror put_config_in_redis's non-failing behavior by swallowing the Redis
error: call client.set(...).await and if it returns Err(e) log an error via
log::error!(...) but do not convert it into an early return—i.e., remove the
map_err(...)? chain and instead use an if let Err(e) = ... { log::error!(...); }
so DB-successful operations aren’t aborted by Redis write-back failures.
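The non-failing write-back this prompt describes can be sketched generically. Here `cache_set` stands in for the `client.set(...)` call, the error type is simplified to `String`, and `eprintln!` substitutes for `log::error!`:

```rust
// Sketch of a write-back that never fails the caller: the cache write is
// attempted, and an Err is logged instead of being propagated with `?`.
fn write_back<F>(cache_set: F) -> Result<(), String>
where
    F: Fn() -> Result<(), String>,
{
    if let Err(e) = cache_set() {
        // In the real code this would be log::error!(...)
        eprintln!("redis write-back failed (ignored): {e}");
    }
    Ok(()) // DB-successful operations are never aborted by the cache
}
```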
In @crates/service_utils/src/service.rs:
- Around line 20-21: Replace the low-severity info log used when obtaining a DB
connection fails — change the log::info!("Unable to get db connection from pool,
error: {e}") call to a higher-severity level such as log::warn! (or log::error!
if you want strict visibility) so connection pool or DB availability issues are
surfaced appropriately; update the same log invocation in service.rs (the
log::info! with the "Unable to get db connection" message) to log::warn! and
keep the existing error interpolation `{e}` and the following
Err(unexpected_error!(...)) unchanged.
- Around line 14-16: Add an inline comment immediately above the call to
conn.set_prepared_statement_cache_size(diesel::connection::CacheSize::Disabled)
explaining that the prepared statement cache is explicitly disabled to preserve
schema-swapping compatibility (referencing the CHANGELOG rationale) and noting
the trade-off (reduced performance) so future maintainers understand the reason
for this configuration choice.
In @crates/superposition_core/include/superposition_core.h:
- Around line 6-20: Expand the Safety block for core_get_resolved_config (and
the other functions in the same range) to document every pointer parameter and
the return: state which const char* parameters may be NULL (or must be
non-NULL), require input strings to be null-terminated UTF-8, require ebuf to be
non-NULL and sufficiently large to receive error text, specify that a non-NULL
return is heap-allocated and must be freed by the caller via core_free_string,
and clarify that a NULL return signals an error with details written into ebuf
(or if ebuf is NULL, behavior is undefined). Include these same explicit
ownership/validity rules for the other functions mentioned in lines 22-36.
- Around line 40-49: Update the safety doc for core_free_string to match the
implementation: remove the assertion that "s must not be null" and add a bullet
stating that a null pointer is accepted and treated as a no-op (the function
returns immediately), while keeping the other bullets about validity, non-usage
after free, and double-free UB; reference the function name core_free_string to
locate and update its comment.
- Line 5: Remove the stray character causing the syntax error by deleting the
lone 'k' on line 5 of the superposition_core.h header; ensure the header
contains only valid C/C++ declarations and whitespace so it compiles cleanly.
- Around line 1-63: This header is missing include guards; wrap the entire
contents (all declarations such as core_get_resolved_config,
core_get_resolved_config_with_reasoning, core_test_connection, core_free_string,
core_get_applicable_variants) in a conventional #ifndef/#define/#endif guard (or
add a single #pragma once at the top) so the file cannot be included multiple
times and avoid redefinition errors; ensure the chosen guard macro name is
unique and placed before the first include and the #endif after the last
declaration.
- Around line 1-63: Wrap all function declarations in an extern "C" block so C++
compilers don't mangle the Rust C ABI symbols: after the includes open `extern
"C" {` (guarded by #ifdef __cplusplus) and close it at the end of the header
before any trailing content; ensure the block encloses declarations such as
core_get_resolved_config, core_get_resolved_config_with_reasoning,
core_test_connection, core_free_string, and core_get_applicable_variants.
🧹 Nitpick comments (5)
docker-compose.yaml (1)
26-33: Consider adding Redis data persistence for development.
The Redis service configuration is correct for basic usage. However, there's no volume mounted for data persistence, meaning all cached data will be lost when the container restarts.
💡 Optional: Add Redis persistence
If you want to persist Redis data across container restarts during development:
```diff
   redis:
     image: redis:7
     container_name: superposition_redis
     restart: on-failure
     ports:
       - 6379:6379
     command: redis-server
     network_mode: bridge
+    volumes:
+      - ./docker-compose/redis/data:/data
```

crates/context_aware_config/src/helpers.rs (1)
293-312: Consider logging Redis write failures instead of silently discarding errors.
The `let _ =` pattern discards Redis write errors. While cache writes are not critical for correctness (the DB is the source of truth), logging failures would improve observability and help diagnose Redis connectivity issues.

🔎 Proposed improvement for error visibility

```diff
-    let _ = redis_pool
+    if let Err(e) = redis_pool
         .set::<(), String, String>(config_key, parsed_config, None, None, false)
-        .await;
+        .await
+    {
+        log::warn!("Failed to write config to Redis: {}", e);
+    }

-    let _ = redis_pool
+    if let Err(e) = redis_pool
         .set::<(), String, String>(last_modified_at_key, last_modified, None, None, false)
-        .await;
+        .await
+    {
+        log::warn!("Failed to write last_modified to Redis: {}", e);
+    }
```

crates/context_aware_config/src/api/config/handlers.rs (3)
157-167: Unnecessary clone of `i64` (a `Copy` type).

🔎 Suggested fix

```diff
 fn add_config_version_to_header(
     config_version: &Option<i64>,
     resp_builder: &mut HttpResponseBuilder,
 ) {
     if let Some(val) = config_version {
         resp_builder.insert_header((
             AppHeader::XConfigVersion.to_string(),
-            val.clone().to_string(),
+            val.to_string(),
         ));
     }
 }
```
180-195: Consider reducing log level from `info` to `debug`.
Line 192 logs the `last_modified` header value at `info` level. This will be logged for every request and may create noise. Consider using `debug` or `trace` level for request-specific diagnostic information.

```diff
-    log::info!("last modified {last_modified:?}");
+    log::debug!("last modified {last_modified:?}");
```
648-734: Consider batching Redis lookups for better performance.
The handler makes 4 sequential Redis calls (max_created_at, config_version, config, audit_id). For high-traffic endpoints, consider:
- Using Redis MGET/pipelining to batch reads
- Consolidating cached data into a single key with a composite struct
This would reduce network round-trips from 4 to 1.
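The consolidation option can be illustrated with a plain in-memory stand-in. `CachedConfig`, its field names, and the key are assumptions for illustration, not types from the PR:

```rust
use std::collections::HashMap;

// One composite value under a single key replaces four separate lookups
// (max_created_at, config_version, config, audit_id). Struct and field
// names here are illustrative only.
#[derive(Clone, Debug, PartialEq)]
struct CachedConfig {
    max_created_at: i64,
    config_version: i64,
    config_json: String,
    audit_id: String,
}

// Stand-in for the Redis pool: a single get returns the whole bundle.
struct FakeCache(HashMap<String, CachedConfig>);

impl FakeCache {
    fn get(&self, key: &str) -> Option<CachedConfig> {
        self.0.get(key).cloned() // one round-trip instead of four
    }
}
```

The same effect could be had with MGET or pipelining over the existing four keys; the composite-struct variant trades key granularity for a single network hop.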
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (22)
- .env.example
- crates/context_aware_config/Cargo.toml
- crates/context_aware_config/src/api/config/handlers.rs
- crates/context_aware_config/src/api/context/handlers.rs
- crates/context_aware_config/src/api/default_config/handlers.rs
- crates/context_aware_config/src/api/dimension/handlers.rs
- crates/context_aware_config/src/helpers.rs
- crates/experimentation_platform/Cargo.toml
- crates/experimentation_platform/src/api/experiment_groups/handlers.rs
- crates/experimentation_platform/src/api/experiment_groups/helpers.rs
- crates/experimentation_platform/src/api/experiments/handlers.rs
- crates/experimentation_platform/src/api/experiments/helpers.rs
- crates/service_utils/Cargo.toml
- crates/service_utils/src/lib.rs
- crates/service_utils/src/middlewares/auth_z.rs
- crates/service_utils/src/redis.rs
- crates/service_utils/src/service.rs
- crates/service_utils/src/service/types.rs
- crates/superposition/Cargo.toml
- crates/superposition/src/app_state.rs
- crates/superposition_core/include/superposition_core.h
- docker-compose.yaml
💤 Files with no reviewable changes (1)
- crates/context_aware_config/src/api/dimension/handlers.rs
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-02T20:59:01.233Z
Learnt from: ayushjain17
Repo: juspay/superposition PR: 543
File: crates/service_utils/src/middlewares/auth_z.rs:141-152
Timestamp: 2026-01-02T20:59:01.233Z
Learning: In crates/service_utils/src/middlewares/auth_z.rs, the AuthZHandler::init function is intentionally designed to panic on startup if AUTH_Z_PROVIDER environment variable is missing or set to an unknown value. This fail-fast behavior is expected and preferred for this critical authorization configuration.
Applied to files:
crates/service_utils/src/middlewares/auth_z.rs
📚 Learning: 2026-01-03T13:25:40.584Z
Learnt from: ayushjain17
Repo: juspay/superposition PR: 816
File: crates/frontend/src/pages/webhook.rs:136-137
Timestamp: 2026-01-03T13:25:40.584Z
Learning: In the superposition codebase (Rust frontend), the `Workspace` and `OrganisationId` newtype wrappers implement `Deref`, which allows `&Workspace` and `&OrganisationId` to be automatically coerced to `&str` when passed to functions expecting `&str` parameters. Manual `.0` dereferencing is not needed.
Applied to files:
crates/service_utils/Cargo.toml
🧬 Code graph analysis (8)
crates/service_utils/src/service/types.rs (1)
crates/service_utils/src/service.rs (1)
get_db_connection(9-24)
crates/superposition/src/app_state.rs (1)
crates/service_utils/src/helpers.rs (1)
get_from_env_or_default(57-71)
crates/experimentation_platform/src/api/experiment_groups/helpers.rs (3)
crates/service_utils/src/redis.rs (1)
serde_json (105-105)
crates/context_aware_config/src/helpers.rs (4)
redis_pool (293-294), redis_pool (296-297), redis_pool (306-307), redis_pool (310-311)
crates/experimentation_platform/src/api/experiments/helpers.rs (1)
pool(954-955)
crates/context_aware_config/src/api/default_config/handlers.rs (1)
crates/context_aware_config/src/helpers.rs (1)
put_config_in_redis(268-314)
crates/context_aware_config/src/api/context/handlers.rs (1)
crates/context_aware_config/src/helpers.rs (2)
put_config_in_redis (268-314), validate_change_reason (504-530)
crates/superposition_core/include/superposition_core.h (1)
crates/superposition_core/src/ffi_legacy.rs (5)
core_get_resolved_config (48-190), core_get_resolved_config_with_reasoning (197-340), core_test_connection (344-346), core_free_string (356-360), core_get_applicable_variants (367-454)
crates/context_aware_config/src/api/config/handlers.rs (3)
crates/service_utils/src/redis.rs (2)
serde_json (105-105), fetch_from_redis_else_writeback (20-69)
crates/service_utils/src/service.rs (1)
get_db_connection (9-24)
crates/service_utils/src/middlewares/workspace_context.rs (1)
new (28-33)
crates/experimentation_platform/src/api/experiment_groups/handlers.rs (3)
crates/service_utils/src/redis.rs (1)
fetch_from_redis_else_writeback (20-69)
crates/service_utils/src/service.rs (1)
get_db_connection (9-24)
crates/experimentation_platform/src/api/experiment_groups/helpers.rs (6)
fetch_and_validate_members (34-74), fetch_experiment_group (444-455), put_experiment_groups_in_redis (457-494), diesel (394-398), experiment_groups (449-453), experiment_groups (471-474)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Testing
- GitHub Check: checks (aarch64-darwin)
🔇 Additional comments (26)
crates/superposition/Cargo.toml (1)
20-20: Implementation is sound. The change to make `fred` a required dependency is properly implemented with comprehensive runtime configuration handling. The code correctly initializes Redis only when `REDIS_URL` is set, gracefully falls back to database operations when Redis is unavailable, and all references to the removed `high-performance-mode` feature have been cleaned up. The `AppState::redis` field is correctly typed as `Option<RedisPool>` to handle both configured and unconfigured scenarios.
.env.example (1)
42-45: LGTM! The Redis configuration examples are well-documented and follow the existing pattern for environment variable placeholders.
crates/service_utils/src/lib.rs (1)
7-7: LGTM! The new `redis` module is correctly declared and follows the existing module structure.
crates/experimentation_platform/Cargo.toml (1)
19-19: LGTM! Adding `fred` with the `metrics` feature enables Redis support for the experimentation platform, consistent with the PR objectives.
crates/service_utils/Cargo.toml (1)
18-18: LGTM! The dependency updates support the new Redis utilities:
- `fred` is now required (no longer optional) with `metrics` enabled
- `superposition_macros` provides the `unexpected_error!` macro used in the new `redis.rs` module

Also applies to: 35-35
crates/context_aware_config/src/api/context/handlers.rs (1)
53-56: LGTM! Redis caching integration looks solid. The refactoring successfully removes conditional compilation of Redis operations. All mutation handlers now unconditionally call `put_config_in_redis` after their database transactions commit. Based on the `put_config_in_redis` implementation (from `helpers.rs`), the function gracefully handles the case where Redis is not configured, returning early without error.
The integration follows a consistent pattern:
- Database transaction completes
- Version ID is captured
- Redis cache is updated with the new version
- Response includes the config version header
Also applies to: 129-130, 181-182, 244-245, 449-450, 649-649, 736-736
crates/context_aware_config/Cargo.toml (1)
20-20: No action required regarding fred dependency security. Fred 9.2.1 (workspace version) has no known security advisories. The transitive rustls dependency (0.23.29) is already patched against RUSTSEC-2024-0399, which affected only rustls 0.23.13–0.23.17. The project also uses rustls 0.21.12, which predates the vulnerability.
crates/service_utils/src/service/types.rs (2)
52-52: LGTM: Redis field is now optional for runtime configuration. Making `redis` an `Option<RedisPool>` aligns with the PR objective to enable Redis caching conditionally based on environment configuration rather than compile-time feature flags.
214-219: LGTM: Improved DB connection error handling. The refactoring to use `get_db_connection` centralizes the prepared statement cache configuration and provides consistent error handling with better logging.
crates/context_aware_config/src/api/default_config/handlers.rs (4)
45-45: LGTM: Unconditional import for runtime-configured Redis. The feature flag is now removed since `put_config_in_redis` internally handles the case when Redis is not configured by returning early.
152-152: LGTM: Redis cache update after create operation. Correctly updates the Redis cache after a successful config creation.
274-274: LGTM: Redis cache update after update operation. Correctly updates the Redis cache after a successful config update.
494-496: LGTM: Conditional Redis cache update after delete. The Redis cache is only updated when the delete operation succeeds (`resp.is_ok()`), preventing cache updates on failed operations.
crates/context_aware_config/src/helpers.rs (3)
16-19: LGTM: Consistent key suffix constants. Using centralized constants for Redis key suffixes improves maintainability and ensures consistency across the codebase.
274-281: LGTM: Proper Redis availability guard. The early return when Redis is not configured prevents unnecessary work and aligns with the runtime-configurable Redis approach.
302-302: LGTM: Schema name scoping for multi-tenant support. Adding `schema_name` to the event_log query ensures correct tenant isolation.
crates/experimentation_platform/src/api/experiments/helpers.rs (2)
12-12: LGTM: Required imports for Redis integration. The imports align with the new Redis caching functionality being added to the experiments module.
Also applies to: 16-19, 25-25, 44-46
920-964: LGTM with one observation: Consistent Redis caching pattern for experiments. The function correctly:
- Guards against missing Redis configuration with early return
- Filters only active experiments for caching
- Uses proper error handling for both DB and Redis operations
- Logs success for observability
One difference from `put_config_in_redis`: this function uses `pool.next_connected()` whereas the config helper uses the pool directly. Both approaches work, but `next_connected()` explicitly selects a connected client, which is slightly more explicit.
crates/experimentation_platform/src/api/experiments/handlers.rs (5)
29-39: LGTM: Redis integration imports. Required imports for Redis-backed caching functionality.
390-396: LGTM: Redis cache update after experiment creation. Correctly updates the experiments cache in Redis after a successful creation.
465-471: LGTM: Consistent Redis cache updates after state changes. All experiment lifecycle handlers (conclude, discard, update, pause, resume) now correctly refresh the Redis cache after successful mutations.
Also applies to: 758-764, 1781-1787, 1854-1860, 1963-1969
1027-1098: Solid Redis-backed list implementation with appropriate fallback. The refactored `list_handler`:
- Uses `fetch_from_redis_else_writeback` for the last-modified timestamp
- Conditionally reads from Redis only when fetching all active experiments (lines 1063-1067)
- Falls back to DB for other query patterns
The logic for `read_from_redis` correctly limits Redis reads to the specific case where the cached data would match the query (all active experiments).
1101-1237: LGTM: Well-structured DB query helper. The `list_experiments_db` function cleanly encapsulates the database query logic, supporting:
- Filtering by status, name, creator, IDs, dates
- Sorting options
- Pagination
- In-memory dimension filtering when needed
crates/experimentation_platform/src/api/experiment_groups/helpers.rs (2)
7-7: LGTM: Required imports for Redis integration. The imports align with the Redis caching pattern used in the experiments module.
Also applies to: 11-11, 29-29
457-494: LGTM: Consistent Redis caching for experiment groups. The function follows the established pattern:
- Early return when Redis is not configured
- Loads all experiment groups ordered by last_modified
- Wraps in `PaginatedResponse::all()` for API consistency
- Proper error handling with logging
This mirrors the `put_experiments_in_redis` implementation, maintaining code consistency across the caching layer.
crates/superposition/src/app_state.rs (1)
37-76: Redis initialization is correct, but verify intended startup behavior. The conditional Redis pool creation based on the `REDIS_URL` environment variable is well-implemented. However, note that:
- Lines 44-46: Panics on invalid Redis URL format (appropriate for misconfiguration)
- Lines 48-62: Panics on pool creation failure
- Line 68: Panics if Redis connection fails after `wait_for_connect()`

This fail-fast behavior means the application will not start if Redis is configured (REDIS_URL set) but unreachable. Verify this is the intended behavior, as it differs from some cache-optional patterns where the app continues without cache on connection failure.
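The conditional initialization described here can be sketched minimally. `PoolHandle` is a placeholder for fred's `RedisPool`; the real code additionally parses the URL and panics (fail fast) on misconfiguration:

```rust
// Placeholder for fred's RedisPool; it holds only the URL here.
struct PoolHandle(String);

// The pool is built only when REDIS_URL is present, yielding the
// Option<RedisPool>-shaped state described above. Real code would call
// this with std::env::var("REDIS_URL").ok() and fail fast on a bad URL.
fn init_redis(redis_url: Option<String>) -> Option<PoolHandle> {
    redis_url.map(PoolHandle)
}
```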
```rust
let mut version = fetch_from_redis_else_writeback::<Option<i64>>(
    format!("{}{CONFIG_VERSION_KEY_SUFFIX}", schema_name.0),
    &schema_name,
    state.redis.clone(),
    state.db_pool.clone(),
    |db_pool| {
        let DbConnection(mut conn) = get_db_connection(db_pool)?;
        get_config_version(&query_filters.version, &workspace_context, &mut conn)
    },
)
.await
.map_err(|e| unexpected_error!("Config version not found due to: {}", e))?;

let mut config = fetch_from_redis_else_writeback::<Config>(
    format!("{}{CONFIG_KEY_SUFFIX}", schema_name.0),
    &schema_name,
    state.redis.clone(),
    state.db_pool.clone(),
    |db_pool| {
        let DbConnection(mut conn) = get_db_connection(db_pool)?;
        generate_config_from_version(
            &mut version,
            &mut conn,
            &workspace_context.schema_name,
        )
    },
)
.await
.map_err(|e| unexpected_error!("failed to generate config: {}", e))?;
```
Critical: Cache keys do not include version parameter - wrong version may be served from cache.
The cache keys for config_version (line 679) and config (line 692) are schema-scoped but do not include the requested version from query_filters.version. This means:
- If a request for version "5" is cached, subsequent requests for version "10" (or "latest") will incorrectly receive version 5's data.
- The `get_config_version` logic at line 685 respects the version parameter, but this is only executed on cache miss.
🔎 Suggested approach
Include the version in the cache key, or only cache "latest" version and bypass cache for specific version requests:
let query_filters = query_filters.into_inner();
+ // Only use cache for "latest" or unspecified version; bypass for specific versions
+ let use_cache = query_filters.version.as_ref().map_or(true, |v| v == "latest");
+ let redis_pool = if use_cache { state.redis.clone() } else { None };
+
let mut version = fetch_from_redis_else_writeback::<Option<i64>>(
format!("{}{CONFIG_VERSION_KEY_SUFFIX}", schema_name.0),
&schema_name,
- state.redis.clone(),
+ redis_pool.clone(),
state.db_pool.clone(),
|db_pool| {
let DbConnection(mut conn) = get_db_connection(db_pool)?;
get_config_version(&query_filters.version, &workspace_context, &mut conn)
},
)
.await
.map_err(|e| unexpected_error!("Config version not found due to: {}", e))?;
let mut config = fetch_from_redis_else_writeback::<Config>(
format!("{}{CONFIG_KEY_SUFFIX}", schema_name.0),
&schema_name,
- state.redis.clone(),
+ redis_pool,
state.db_pool.clone(),
// ...
)

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @crates/context_aware_config/src/api/config/handlers.rs around lines 678 -
706, The cache keys built for fetch_from_redis_else_writeback (using schema_name
and CONFIG_VERSION_KEY_SUFFIX / CONFIG_KEY_SUFFIX) do not incorporate the
requested version from query_filters.version, causing stale/wrong versions to be
served; fix by including the version identifier (e.g., format!("{}:{}{}",
schema_name.0, query_filters.version.unwrap_or("latest"),
CONFIG_VERSION_KEY_SUFFIX) and similarly for CONFIG_KEY_SUFFIX) or,
alternatively, bypass cache when query_filters.version is a specific version and
only use caching for the "latest" path; update the key construction where
fetch_from_redis_else_writeback is called (references:
fetch_from_redis_else_writeback, CONFIG_VERSION_KEY_SUFFIX, CONFIG_KEY_SUFFIX,
query_filters.version, get_config_version, generate_config_from_version,
schema_name, workspace_context) accordingly.
```rust
let mut config_version = fetch_from_redis_else_writeback::<Option<i64>>(
    format!("{}{CONFIG_VERSION_KEY_SUFFIX}", schema_name.0),
    &schema_name,
    state.redis.clone(),
    state.db_pool.clone(),
    |db_pool| {
        let DbConnection(mut conn) = get_db_connection(db_pool)?;
        get_config_version(&query_filters.version, &workspace_context, &mut conn)
    },
)
.await
.map_err(|e| unexpected_error!("Config version not found due to: {}", e))?;

let mut config = fetch_from_redis_else_writeback::<Config>(
    format!("{}{CONFIG_KEY_SUFFIX}", schema_name.0),
    &schema_name,
    state.redis.clone(),
    state.db_pool.clone(),
    |db_pool| {
        let DbConnection(mut conn) = get_db_connection(db_pool)?;
        generate_config_from_version(
            &mut config_version,
            &mut conn,
            &workspace_context.schema_name,
        )
    },
)
.await
.map_err(|e| unexpected_error!("failed to generate config: {}", e))?;
```
Same cache key issue as get_handler - version parameter not respected from cache.
This handler has the identical problem: cache keys don't include the version, so specific version requests may receive wrong cached data.
Apply the same fix as suggested for get_handler.
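A minimal sketch of the version-scoped key construction both comments ask for; the `:` separator and the `"latest"` fallback token are illustrative assumptions, not the PR's code:

```rust
// Builds a cache key unique per (schema, version, suffix), so a cached
// version "5" can never be served for a request targeting "10" or
// "latest". Separator and fallback token are illustrative choices.
fn config_cache_key(schema: &str, version: Option<&str>, suffix: &str) -> String {
    format!("{}:{}{}", schema, version.unwrap_or("latest"), suffix)
}
```

The alternative in the review — bypassing the cache entirely for specific-version requests — avoids key proliferation at the cost of always hitting the DB for pinned versions.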
🤖 Prompt for AI Agents
In @crates/context_aware_config/src/api/config/handlers.rs around lines 773 -
801, The cache keys for both fetch_from_redis_else_writeback calls ignore the
requested version, causing wrong cached data to be returned for versioned
requests; update both keys to include the version parameter (use the incoming
query_filters.version for the version lookup key and the resolved config_version
value for the config payload key) so keys become unique per version (e.g.,
append the version string/number to the format! key); modify the first fetch
(CONFIG_VERSION_KEY_SUFFIX) and the second fetch (CONFIG_KEY_SUFFIX) to
incorporate the appropriate version identifier and ensure you propagate the same
version identifier into the second key after resolving config_version.
```rust
put_experiment_groups_in_redis(
    state.redis.clone(),
    &mut conn,
    &workspace_request.schema_name,
)
.await?;
```
Redis cache update failures should not fail successful mutations.
The same pattern appears in all mutation handlers (create, update, add-members, remove-members, delete, backfill): the Redis cache update uses `.await?`, so if Redis fails after a successful DB transaction, the client receives an error despite the operation having succeeded.
Consider logging Redis failures instead of propagating them, since the cache will be refreshed on the next mutation.
🔎 Suggested pattern
- put_experiment_groups_in_redis(
- state.redis.clone(),
- &mut conn,
- &workspace_request.schema_name,
- )
- .await?;
+ if let Err(e) = put_experiment_groups_in_redis(
+ state.redis.clone(),
+ &mut conn,
+ &workspace_request.schema_name,
+ )
+ .await
+ {
+ log::error!("Failed to update experiment groups cache: {}", e);
+ }

Apply this pattern to all mutation handlers.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
if let Err(e) = put_experiment_groups_in_redis(
    state.redis.clone(),
    &mut conn,
    &workspace_request.schema_name,
)
.await
{
    log::error!("Failed to update experiment groups cache: {}", e);
}
```
🤖 Prompt for AI Agents
In @crates/experimentation_platform/src/api/experiment_groups/handlers.rs around
lines 149 - 154, The handlers currently propagate Redis failures because they
call put_experiment_groups_in_redis(...).await? after successful DB mutations;
change these calls in all mutation handlers (create, update, add-members,
remove-members, delete, backfill) to await the future without the `?` and handle
errors by logging them (e.g., using the existing logger/tracing) instead of
returning an error—i.e., run let res = put_experiment_groups_in_redis(...).await
and if res.is_err() log the error with context including the workspace/schema
and operation name so the DB mutation still returns success even if the cache
update fails.
async fn list_handler(
    pagination_params: superposition_query::Query<PaginationParams>,
    filters: superposition_query::Query<ExpGroupFilters>,
    db_conn: DbConnection,
    schema_name: SchemaName,
    state: Data<AppState>,
) -> superposition::Result<Json<PaginatedResponse<ExperimentGroup>>> {
    let key = format!("{}::{}", schema_name.0, EXPERIMENT_GROUPS_LIST_KEY_SUFFIX);
    fetch_from_redis_else_writeback::<PaginatedResponse<ExperimentGroup>>(
        key,
        &schema_name,
        state.redis.clone(),
        state.db_pool.clone(),
        |db_pool| {
            let db_conn = get_db_connection(db_pool)?;
            list_experiment_groups_db(
                pagination_params,
                filters,
                db_conn,
                schema_name.clone(),
            )
        },
    )
    .await
    .map(Json)
    .map_err(|e| unexpected_error!(e))
}
Critical: Cache key format mismatch and pagination/filters bypassed when serving from cache.

Two issues here:

1. Key format mismatch: Line 323 uses `format!("{}::{}", schema_name.0, EXPERIMENT_GROUPS_LIST_KEY_SUFFIX)` with a `::` separator, but `put_experiment_groups_in_redis` in `helpers.rs` uses `format!("{}{EXPERIMENT_GROUPS_LIST_KEY_SUFFIX}", **schema_name)` without a separator. This will cause cache misses on reads and stale data accumulation.
2. Filters/pagination ignored from cache: When Redis returns cached data, `pagination_params` and `filters` are completely bypassed. The cache stores `PaginatedResponse::all(...)`, so filtered or paginated requests will receive the entire unfiltered list, breaking pagination and filtering functionality.
🔎 Suggested approach
Either:
- Include filter/pagination params in the cache key (complex), or
- Only cache the full list and apply filtering/pagination post-retrieval:
async fn list_handler(
pagination_params: superposition_query::Query<PaginationParams>,
filters: superposition_query::Query<ExpGroupFilters>,
schema_name: SchemaName,
state: Data<AppState>,
) -> superposition::Result<Json<PaginatedResponse<ExperimentGroup>>> {
- let key = format!("{}::{}", schema_name.0, EXPERIMENT_GROUPS_LIST_KEY_SUFFIX);
+ let key = format!("{}{EXPERIMENT_GROUPS_LIST_KEY_SUFFIX}", schema_name.0);
fetch_from_redis_else_writeback::<PaginatedResponse<ExperimentGroup>>(
key,
&schema_name,
state.redis.clone(),
state.db_pool.clone(),
|db_pool| {
let db_conn = get_db_connection(db_pool)?;
- list_experiment_groups_db(
- pagination_params,
- filters,
- db_conn,
- schema_name.clone(),
- )
+ // Fetch all groups for cache, apply filters after
+ list_all_experiment_groups_db(db_conn, schema_name.clone())
},
)
.await
+ .map(|cached| apply_filters_and_pagination(cached, &filters, &pagination_params))
.map(Json)
.map_err(|e| unexpected_error!(e))
}

Committable suggestion skipped: line range outside the PR's diff.
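As a rough sketch of the second option above (cache the full list, slice per request), pagination applied after retrieval might look like the following. The `apply_pagination` helper and its 1-indexed page convention are assumptions for illustration, not the crate's API:

```rust
// Hypothetical post-retrieval pagination: the cache holds the full list and
// each request slices it. `page` is assumed 1-indexed, as pagination params
// commonly are; none of these names come from the PR itself.
fn apply_pagination<T: Clone>(all: &[T], page: usize, count: usize) -> Vec<T> {
    all.iter()
        .skip(page.saturating_sub(1) * count) // offset into the cached list
        .take(count) // at most one page of results
        .cloned()
        .collect()
}

fn main() {
    let groups: Vec<i32> = (1..=10).collect();
    assert_eq!(apply_pagination(&groups, 2, 3), vec![4, 5, 6]);
    assert_eq!(apply_pagination(&groups, 4, 3), vec![10]); // last, partial page
    println!("{:?}", apply_pagination(&groups, 2, 3)); // prints "[4, 5, 6]"
}
```

A filters step would slot in the same way, applied to the cached slice before pagination.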
let result = db_conn
    .transaction::<Json<ExperimentGroup>, superposition::AppError, _>(|conn| {
        let marked_group = diesel::update(experiment_groups::experiment_groups)
            .filter(experiment_groups::id.eq(&id))
            .set((
                experiment_groups::last_modified_by.eq(user.email),
                experiment_groups::last_modified_at.eq(chrono::Utc::now()),
            ))
            .returning(ExperimentGroup::as_returning())
            .schema_name(&schema_name)
            .get_result(conn)?;
        if !marked_group.member_experiment_ids.is_empty() {
            return Err(bad_argument!(
                "Cannot delete experiment group {} since it has members",
                marked_group.name
            ));
        }
        diesel::delete(experiment_groups::experiment_groups)
            .filter(experiment_groups::id.eq(&id))
            .schema_name(&schema_name)
            .execute(conn)?;
        Ok(Json(marked_group))
    });
put_experiment_groups_in_redis(state.redis.clone(), &mut db_conn, &schema_name)
    .await?;
result
Inconsistent error handling: Redis failure returns error even when DB transaction succeeded.
If the transaction at lines 431-453 succeeds but put_experiment_groups_in_redis at lines 454-455 fails, the client receives an error despite the delete being committed. This creates a confusing state where the operation actually succeeded.
Additionally, put_experiment_groups_in_redis is called unconditionally, even if the transaction failed.
🔎 Suggested fix
let result = db_conn
.transaction::<Json<ExperimentGroup>, superposition::AppError, _>(|conn| {
// ... transaction logic ...
Ok(Json(marked_group))
- });
- put_experiment_groups_in_redis(state.redis.clone(), &mut db_conn, &schema_name)
- .await?;
- result
+ })?;
+ if let Err(e) = put_experiment_groups_in_redis(state.redis.clone(), &mut db_conn, &schema_name).await {
+ log::error!("Failed to update Redis cache after delete: {}", e);
+ // Cache will be refreshed on next write; don't fail the request
+ }
+ Ok(result)
}
🤖 Prompt for AI Agents
In @crates/experimentation_platform/src/api/experiment_groups/handlers.rs around
lines 431 - 456, The DB transaction result is being ignored and Redis upsert is
called unconditionally, which can return an error to the client even when the DB
delete succeeded; change the flow so you first capture the transaction result
from db_conn.transaction into result (the existing variable), check if result is
Ok(marked_group) before calling put_experiment_groups_in_redis, and if the Redis
call fails do not convert that into an overall error—either log the Redis error
and return the successful transaction Json(marked_group) to the caller or
enqueue an async retry, ensuring you never call put_experiment_groups_in_redis
when the transaction returned Err and never overwrite a successful DB result
with a Redis error; use the symbols result and put_experiment_groups_in_redis to
locate and update the control flow.
log::info!("Unable to get db connection from pool, error: {e}");
Err(unexpected_error!("Could not get a DB connection, contact an admin and check logs for further information"))
🛠️ Refactor suggestion | 🟠 Major
Use appropriate log level for connection failures.
Database connection failures should be logged at warn or error level, not info. Connection pool exhaustion or database unavailability are operational issues that warrant higher visibility.
🔎 Suggested fix
- log::info!("Unable to get db connection from pool, error: {e}");
+ log::error!("Unable to get db connection from pool: {e}");
🤖 Prompt for AI Agents
In @crates/service_utils/src/service.rs around lines 20 - 21, Replace the
low-severity info log used when obtaining a DB connection fails — change the
log::info!("Unable to get db connection from pool, error: {e}") call to a
higher-severity level such as log::warn! (or log::error! if you want strict
visibility) so connection pool or DB availability issues are surfaced
appropriately; update the same log invocation in service.rs (the log::info! with
the "Unable to get db connection" message) to log::warn! and keep the existing
error interpolation `{e}` and the following Err(unexpected_error!(...))
unchanged.
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
k
/**
 * # Safety
 *
 * Caller ensures that `ebuf` is a sufficiently long buffer to store the
 * error message.
 */
char *core_get_resolved_config(const char *default_config_json,
                               const char *contexts_json,
                               const char *overrides_json,
                               const char *dimensions,
                               const char *query_data_json,
                               const char *merge_strategy_str,
                               const char *filter_prefixes_json,
                               const char *experimentation_json,
                               char *ebuf);

/**
 * # Safety
 *
 * Caller ensures that `ebuf` is a sufficiently long buffer to store the
 * error message.
 */
char *core_get_resolved_config_with_reasoning(const char *default_config_json,
                                              const char *contexts_json,
                                              const char *overrides_json,
                                              const char *dimensions,
                                              const char *query_data_json,
                                              const char *merge_strategy_str,
                                              const char *filter_prefixes_json,
                                              const char *experimentation_json,
                                              char *ebuf);

int32_t core_test_connection(void);

/**
 * # Safety
 *
 * This function is unsafe because:
 * - `s` must be a valid pointer to a C string previously allocated by this library
 * - `s` must not be null
 * - The caller must ensure `s` is not used after this function is called
 * - Double-free will cause undefined behavior
 */
void core_free_string(char *s);

/**
 * # Safety
 *
 * Caller ensures that `ebuf` is a sufficiently long buffer to store the
 * error message.
 */
char *core_get_applicable_variants(const char *experiments_json,
                                   const char *experiment_groups_json,
                                   const char *dimensions,
                                   const char *query_data_json,
                                   const char *identifier,
                                   const char *filter_prefixes_json,
                                   char *ebuf);
🛠️ Refactor suggestion | 🟠 Major
Add include guards or #pragma once to prevent multiple inclusion.
This header file lacks include guards, which can cause redefinition errors when the header is included multiple times across different translation units.
🔎 Proposed fix
Add include guards at the beginning and end of the file:
+#ifndef SUPERPOSITION_CORE_H
+#define SUPERPOSITION_CORE_H
+
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

And at the end of the file:
const char *identifier,
const char *filter_prefixes_json,
char *ebuf);
+
+#endif /* SUPERPOSITION_CORE_H */

Alternatively, use #pragma once at the top of the file (more concise but slightly less portable):
+#pragma once
+
#include <stdarg.h>
🤖 Prompt for AI Agents
In @crates/superposition_core/include/superposition_core.h around lines 1 - 63,
This header is missing include guards; wrap the entire contents (all
declarations such as core_get_resolved_config,
core_get_resolved_config_with_reasoning, core_test_connection, core_free_string,
core_get_applicable_variants) in a conventional #ifndef/#define/#endif guard (or
add a single #pragma once at the top) so the file cannot be included multiple
times and avoid redefinition errors; ensure the chosen guard macro name is
unique and placed before the first include and the #endif after the last
declaration.
🛠️ Refactor suggestion | 🟠 Major
Add extern "C" guards for C++ compatibility.
The header should wrap all function declarations in extern "C" to ensure C++ compilers don't apply name mangling, which would prevent linking against the Rust-generated C ABI functions.
🔎 Proposed fix
Add extern "C" guards after the include guards:
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
+#ifdef __cplusplus
+extern "C" {
+#endif
+
/**
* # Safety

And close them at the end:
const char *filter_prefixes_json,
char *ebuf);
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* SUPERPOSITION_CORE_H */

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @crates/superposition_core/include/superposition_core.h around lines 1 - 63,
Wrap all function declarations in an extern "C" block so C++ compilers don't
mangle the Rust C ABI symbols: after the includes open `extern "C" {` (guarded
by #ifdef __cplusplus) and close it at the end of the header before any trailing
content; ensure the block encloses declarations such as
core_get_resolved_config, core_get_resolved_config_with_reasoning,
core_test_connection, core_free_string, and core_get_applicable_variants.
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
k
Remove the erroneous character.
Line 5 contains a single character k which is invalid C syntax and will cause a compilation error.
🔎 Proposed fix
-k
🤖 Prompt for AI Agents
In @crates/superposition_core/include/superposition_core.h at line 5, Remove the
stray character causing the syntax error by deleting the lone 'k' on line 5 of
the superposition_core.h header; ensure the header contains only valid C/C++
declarations and whitespace so it compiles cleanly.
/**
 * # Safety
 *
 * Caller ensures that `ebuf` is a sufficiently long buffer to store the
 * error message.
 */
char *core_get_resolved_config(const char *default_config_json,
                               const char *contexts_json,
                               const char *overrides_json,
                               const char *dimensions,
                               const char *query_data_json,
                               const char *merge_strategy_str,
                               const char *filter_prefixes_json,
                               const char *experimentation_json,
                               char *ebuf);
🛠️ Refactor suggestion | 🟠 Major
Expand safety documentation to cover all pointer parameters and return values.
The safety documentation only mentions the ebuf parameter but doesn't document requirements for the other const char* parameters. Callers need to know:
- Whether null pointers are acceptable for any parameters
- Validity requirements for input strings (null-terminated, UTF-8, etc.)
- Return value ownership: when the returned `char*` is non-null, the caller must free it using `core_free_string`
- What a null return value indicates (error condition with details in `ebuf`)
🔎 Example enhanced documentation
/**
* # Safety
*
+ * All pointer parameters must be valid:
+ * - Input JSON strings must be valid, non-null, null-terminated UTF-8
+ * - `ebuf` must be a pre-allocated buffer of sufficient size (recommend 1024+ bytes)
+ *
+ * # Return value
+ *
+ * Returns a pointer to a newly allocated JSON string on success, which must be
+ * freed by the caller using `core_free_string`. Returns null on error, with
+ * details written to `ebuf`.
+ *
- * Caller ensures that `ebuf` is a sufficiently long buffer to store the
- * error message.
*/

Also applies to: 22-36
🤖 Prompt for AI Agents
In @crates/superposition_core/include/superposition_core.h around lines 6 - 20,
Expand the Safety block for core_get_resolved_config (and the other functions in
the same range) to document every pointer parameter and the return: state which
const char* parameters may be NULL (or must be non-NULL), require input strings
to be null-terminated UTF-8, require ebuf to be non-NULL and sufficiently large
to receive error text, specify that a non-NULL return is heap-allocated and must
be freed by the caller via core_free_string, and clarify that a NULL return
signals an error with details written into ebuf (or if ebuf is NULL, behavior is
undefined). Include these same explicit ownership/validity rules for the other
functions mentioned in lines 22-36.
/**
 * # Safety
 *
 * This function is unsafe because:
 * - `s` must be a valid pointer to a C string previously allocated by this library
 * - `s` must not be null
 * - The caller must ensure `s` is not used after this function is called
 * - Double-free will cause undefined behavior
 */
void core_free_string(char *s);
Correct the safety documentation for null pointer handling.
The documentation states "s must not be null," but the implementation in ffi_legacy.rs explicitly handles null pointers as a no-op (if s.is_null() { return; }). The documentation should accurately reflect this behavior.
🔎 Proposed fix
/**
* # Safety
*
* This function is unsafe because:
* - `s` must be a valid pointer to a C string previously allocated by this library
- * - `s` must not be null
+ * - `s` may be null (no-op if null)
* - The caller must ensure `s` is not used after this function is called
* - Double-free will cause undefined behavior
*/
🤖 Prompt for AI Agents
In @crates/superposition_core/include/superposition_core.h around lines 40 - 49,
Update the safety doc for core_free_string to match the implementation: remove
the assertion that "s must not be null" and add a bullet stating that a null
pointer is accepted and treated as a no-op (the function returns immediately),
while keeping the other bullets about validity, non-usage after free, and
double-free UB; reference the function name core_free_string to locate and
update its comment.
4821295 to 821fd9f
- cache get config API responses
- cache get experiment list responses
- cache get experiment groups response
Problem
Provider APIs depend on DB connections and DB uptime, and tend to be slower.
Solution
Introduce redis as a cache store for get config response, get experiments list and get experiment groups list.
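The read path described here is the classic cache-aside pattern. A minimal, self-contained sketch follows; a `HashMap` stands in for Redis, and the function only loosely mirrors the PR's `fetch_from_redis_else_writeback`:

```rust
use std::collections::HashMap;

// Cache-aside sketch: check the cache, fall back to the DB on a miss, and
// write the result back. A HashMap stands in for Redis here; the real code
// works against a pooled Redis connection.
fn fetch_else_writeback<F>(
    cache: &mut HashMap<String, String>,
    key: &str,
    fetch_from_db: F,
) -> String
where
    F: FnOnce() -> String,
{
    if let Some(hit) = cache.get(key) {
        return hit.clone(); // cache hit: the DB is never touched
    }
    let value = fetch_from_db(); // cache miss: query the DB...
    cache.insert(key.to_string(), value.clone()); // ...and write back for next time
    value
}

fn main() {
    let mut cache = HashMap::new();
    let first = fetch_else_writeback(&mut cache, "ws1::config", || "cfg".to_string());
    // Second call is served from the cache; the DB closure would panic if run.
    let second = fetch_else_writeback(&mut cache, "ws1::config", || unreachable!());
    assert_eq!(first, second);
}
```

Mutation endpoints then keep the cache fresh by rewriting the cached entry after each successful write, rather than relying on expiry alone.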
Environment variable changes
REDIS_URL=""
REDIS_POOL_SIZE=""
REDIS_MAX_ATTEMPTS=""
REDIS_CONN_TIMEOUT=""
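For illustration, the new settings could be read and defaulted as below. Only the four variable names come from the PR; the struct, the default values, and the millisecond unit for the timeout are assumptions:

```rust
use std::env;
use std::time::Duration;

// Illustrative holder for the new settings; only the four environment
// variable names come from the PR. Defaults and units are assumptions.
#[derive(Debug)]
struct RedisConfig {
    url: String,
    pool_size: u32,
    max_attempts: u32,
    conn_timeout: Duration,
}

// Read an environment variable, falling back to a default when unset.
fn env_or(key: &str, default: &str) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

fn load_redis_config() -> RedisConfig {
    RedisConfig {
        url: env_or("REDIS_URL", "redis://127.0.0.1:6379"),
        pool_size: env_or("REDIS_POOL_SIZE", "10").parse().unwrap_or(10),
        max_attempts: env_or("REDIS_MAX_ATTEMPTS", "3").parse().unwrap_or(3),
        conn_timeout: Duration::from_millis(
            env_or("REDIS_CONN_TIMEOUT", "2000").parse().unwrap_or(2000),
        ),
    }
}

fn main() {
    let cfg = load_redis_config();
    println!("{cfg:?}");
}
```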