
Conversation

@Datron Datron (Collaborator) commented Dec 23, 2025

Problem

Provider APIs depend on DB connections and DB uptime, and DB-backed reads tend to be slower than serving from a cache.

Solution

Introduce Redis as a cache store for the get-config response, the experiments list, and the experiment groups list.

Environment variable changes

REDIS_URL=""
REDIS_POOL_SIZE=""
REDIS_MAX_ATTEMPTS=""
REDIS_CONN_TIMEOUT=""
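A minimal sketch of how these variables might be consumed; the struct, field names, defaults, and lookup closure are illustrative assumptions, not the crate's actual code. An unset or empty REDIS_URL leaves caching disabled:

```rust
use std::collections::HashMap;

// Hypothetical settings struct; the defaults below are illustrative.
#[derive(Debug, PartialEq)]
struct RedisSettings {
    url: Option<String>, // None => Redis caching disabled at runtime
    pool_size: usize,
    max_attempts: u32,
    conn_timeout_ms: u64,
}

// Takes a lookup function instead of reading std::env directly, so the
// sketch is testable without mutating process state.
fn redis_settings(get: impl Fn(&str) -> Option<String>) -> RedisSettings {
    let parse = |key: &str, default: u64| {
        get(key).and_then(|v| v.parse::<u64>().ok()).unwrap_or(default)
    };
    RedisSettings {
        url: get("REDIS_URL").filter(|s| !s.is_empty()),
        pool_size: parse("REDIS_POOL_SIZE", 4) as usize,
        max_attempts: parse("REDIS_MAX_ATTEMPTS", 3) as u32,
        conn_timeout_ms: parse("REDIS_CONN_TIMEOUT", 5_000),
    }
}

fn main() {
    // Empty strings (as in the example above) leave caching disabled.
    let env: HashMap<&str, &str> = HashMap::from([("REDIS_URL", "")]);
    let s = redis_settings(|k| env.get(k).map(|v| v.to_string()));
    assert_eq!(s.url, None);
    assert_eq!(s.pool_size, 4);
    println!("ok");
}
```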

Summary by CodeRabbit

Release Notes

  • New Features

    • Added Redis caching for configuration and experiment data lookups, improving performance with optional configuration support.
    • Introduced environment configuration options for Redis setup with connection pool and timeout settings.
  • Chores

    • Updated dependencies to include Redis support as a standard feature.
    • Added Redis service to development environment configuration.


semanticdiff-com bot commented Dec 23, 2025

@Datron Datron requested a review from a team as a code owner December 23, 2025 07:26
@Datron Datron changed the title from "Redis reads" to "feat: introduce redis for caching configurations" on Dec 23, 2025

coderabbitai bot commented Jan 6, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Removes the high-performance-mode compile-time feature flag and makes Redis caching always compiled but optionally enabled at runtime based on REDIS_URL configuration. Config and experiment handlers are refactored to use cache-first data retrieval with database fallback. Handler signatures updated to accept application state instead of direct database connections.
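The cache-first-with-DB-fallback shape described above can be sketched as follows, with the cache modeled as an in-memory map. The real helper is `fetch_from_redis_else_writeback` in service_utils; this simplified stand-in only illustrates the read path and write-back:

```rust
use std::collections::HashMap;

// Illustrative simplification of cache-first retrieval: try the cache,
// fall back to the database on a miss, and write the result back. A `None`
// cache models the "Redis not configured" case (no REDIS_URL).
fn fetch_else_writeback<F>(
    cache: Option<&mut HashMap<String, String>>,
    key: &str,
    from_db: F,
) -> Result<String, String>
where
    F: FnOnce() -> Result<String, String>,
{
    match cache {
        // Cache disabled: go straight to the database.
        None => from_db(),
        Some(map) => {
            if let Some(hit) = map.get(key) {
                return Ok(hit.clone()); // cache hit, DB untouched
            }
            // Cache miss: read from the DB, then write the value back.
            let value = from_db()?;
            map.insert(key.to_string(), value.clone());
            Ok(value)
        }
    }
}

fn main() {
    let mut cache = HashMap::new();
    let v1 = fetch_else_writeback(Some(&mut cache), "ws::config", || Ok("cfg-v1".to_string())).unwrap();
    // Second read is served from the cache even if the DB is now down,
    // which is exactly the uptime decoupling the PR is after.
    let v2 = fetch_else_writeback(Some(&mut cache), "ws::config", || Err("db down".to_string())).unwrap();
    assert_eq!(v1, v2);
    println!("ok");
}
```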

Changes

Cohort / File(s) / Summary
Dependency Management
.env.example, crates/context_aware_config/Cargo.toml, crates/experimentation_platform/Cargo.toml, crates/service_utils/Cargo.toml, crates/superposition/Cargo.toml
Removed optional flag from fred dependency across crates, making Redis always required. Dropped high-performance-mode feature entirely. Added redis module to service_utils public API.
Config Handler Refactoring
crates/context_aware_config/src/api/config/handlers.rs
Replaced feature-flagged high-performance path with unconditional Redis-backed caching for config data. Added public helpers: fetch_audit_id, generate_config_from_version. Introduced cache utilities: get_config_version_from_workspace, is_not_modified, add_last_modified_to_header, add_config_version_to_header. Refactored get_handler and resolve_handler to use Redis with DB fallback and populate response headers.
Context & Dimension Handlers
crates/context_aware_config/src/api/context/handlers.rs, crates/context_aware_config/src/api/dimension/handlers.rs, crates/context_aware_config/src/api/default_config/handlers.rs
Removed conditional compilation around Redis integration; put_config_in_redis now called unconditionally after create, update, delete, and bulk operations.
Redis Utilities & Helpers
crates/context_aware_config/src/helpers.rs, crates/service_utils/src/redis.rs
Added Redis configuration guard and constants (CONFIG_KEY_SUFFIX, AUDIT_ID_KEY_SUFFIX, CONFIG_VERSION_KEY_SUFFIX, LAST_MODIFIED_KEY_SUFFIX). Introduced generic fetch_from_redis_else_writeback and get_data_from_redis functions with serialization and fallback logic. Replaced hard-coded Redis strings with configurable constants.
Experiment Groups Handlers & Helpers
crates/experimentation_platform/src/api/experiment_groups/handlers.rs, crates/experimentation_platform/src/api/experiment_groups/helpers.rs
Added put_experiment_groups_in_redis helper. Changed list_handler and delete_handler signatures to accept state: Data<AppState> instead of DbConnection. Implemented Redis-backed read path with list_experiment_groups_db helper. Cache invalidation after mutations.
Experiments Handlers & Helpers
crates/experimentation_platform/src/api/experiments/handlers.rs, crates/experimentation_platform/src/api/experiments/helpers.rs
Added put_experiments_in_redis function. Refactored list_handler to use state parameter and Redis-backed reads. Added list_experiments_db helper. Cache refresh after create, conclude, discard, update, pause, resume operations.
Application State & Infrastructure
crates/superposition/src/app_state.rs, crates/service_utils/src/service/types.rs, crates/service_utils/src/service.rs, crates/service_utils/src/lib.rs
Made AppState redis field optional (Option<RedisPool>). Added conditional Redis pool initialization based on REDIS_URL; logs when disabled. Added get_db_connection helper and public types module. Updated auth_z middleware to use unwrap_or instead of panic.
C API & Infrastructure
crates/superposition_core/include/superposition_core.h, docker-compose.yaml
Added C FFI declarations for config resolution, connection testing, string management, and variant selection. Enabled redis service in docker-compose with standard configuration.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • feat: AuthZ setup #543: Modifies service_utils auth_z middleware (AUTH_Z_PROVIDER handling) with related code-level changes to get_auth_z_provider.

Suggested reviewers

  • knutties

Poem

🐰 A rabbit hops through Redis rows,
Cache-first speeds where fallback flows,
No flags, just config—feature-free,
Optional pools, optionally.
Caching blooms for all to see! 🌿

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check (✅ Passed): The title 'feat: introduce redis for caching configurations' directly matches the PR's main objective of adding Redis caching to improve API performance by reducing database dependency.
  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 15

🤖 Fix all issues with AI Agents
In @crates/context_aware_config/src/api/config/handlers.rs:
- Around line 773-801: The cache keys for both fetch_from_redis_else_writeback
calls ignore the requested version, causing wrong cached data to be returned for
versioned requests; update both keys to include the version parameter (use the
incoming query_filters.version for the version lookup key and the resolved
config_version value for the config payload key) so keys become unique per
version (e.g., append the version string/number to the format! key); modify the
first fetch (CONFIG_VERSION_KEY_SUFFIX) and the second fetch (CONFIG_KEY_SUFFIX)
to incorporate the appropriate version identifier and ensure you propagate the
same version identifier into the second key after resolving config_version.
- Around line 678-706: The cache keys built for fetch_from_redis_else_writeback
(using schema_name and CONFIG_VERSION_KEY_SUFFIX / CONFIG_KEY_SUFFIX) do not
incorporate the requested version from query_filters.version, causing
stale/wrong versions to be served; fix by including the version identifier
(e.g., format!("{}:{}{}", schema_name.0,
query_filters.version.unwrap_or("latest"), CONFIG_VERSION_KEY_SUFFIX) and
similarly for CONFIG_KEY_SUFFIX) or, alternatively, bypass cache when
query_filters.version is a specific version and only use caching for the
"latest" path; update the key construction where fetch_from_redis_else_writeback
is called (references: fetch_from_redis_else_writeback,
CONFIG_VERSION_KEY_SUFFIX, CONFIG_KEY_SUFFIX, query_filters.version,
get_config_version, generate_config_from_version, schema_name,
workspace_context) accordingly.

In @crates/experimentation_platform/src/api/experiment_groups/handlers.rs:
- Around line 149-154: The handlers currently propagate Redis failures because
they call put_experiment_groups_in_redis(...).await? after successful DB
mutations; change these calls in all mutation handlers (create, update,
add-members, remove-members, delete, backfill) to await the future without the
`?` and handle errors by logging them (e.g., using the existing logger/tracing)
instead of returning an error—i.e., run let res =
put_experiment_groups_in_redis(...).await and if res.is_err() log the error with
context including the workspace/schema and operation name so the DB mutation
still returns success even if the cache update fails.
- Around line 431-456: The DB transaction result is being ignored and Redis
upsert is called unconditionally, which can return an error to the client even
when the DB delete succeeded; change the flow so you first capture the
transaction result from db_conn.transaction into result (the existing variable),
check if result is Ok(marked_group) before calling
put_experiment_groups_in_redis, and if the Redis call fails do not convert that
into an overall error—either log the Redis error and return the successful
transaction Json(marked_group) to the caller or enqueue an async retry, ensuring
you never call put_experiment_groups_in_redis when the transaction returned Err
and never overwrite a successful DB result with a Redis error; use the symbols
result and put_experiment_groups_in_redis to locate and update the control flow.
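The control flow suggested above can be sketched as follows; the DB result stays authoritative and a cache failure is only logged (`eprintln!` stands in for the crate's logger so the snippet runs standalone, and the function name is illustrative):

```rust
// Sketch of the non-failing write-back suggested above: a successful DB
// mutation is returned to the caller even when the cache refresh fails.
fn mutate_then_refresh_cache(
    db_result: Result<String, String>,
    cache_result: Result<(), String>,
    op: &str,
) -> Result<String, String> {
    let value = db_result?; // a DB failure still propagates
    if let Err(e) = cache_result {
        // Do NOT use `?` here: log with context and keep the DB success.
        eprintln!("cache refresh failed during {op}: {e}");
    }
    Ok(value)
}

fn main() {
    // Cache down, DB fine: the handler still succeeds.
    let out = mutate_then_refresh_cache(
        Ok("deleted".to_string()),
        Err("connection refused".to_string()),
        "delete_handler",
    );
    assert_eq!(out.unwrap(), "deleted");
    println!("ok");
}
```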

In @crates/service_utils/src/middlewares/auth_z.rs:
- Around line 136-138: The function get_auth_z_provider currently silently falls
back to "DISABLED" when AUTH_Z_PROVIDER is missing; remove the permissive
fallback so missing config panics instead. Change get_auth_z_provider to call
get_from_env_unsafe("AUTH_Z_PROVIDER") and propagate the Result/Option as before
(e.g., use unwrap() or otherwise panic) rather than using
unwrap_or("DISABLED".to_string()), so absence of AUTH_Z_PROVIDER fails fast and
does not silently disable authorization.

In @crates/service_utils/src/redis.rs:
- Around line 34-36: The ok_or call creating `pool` is unreachable because
`redis_pool` is already handled by an early return when None; remove the `let
pool = redis_pool.ok_or(...)?;` expression and instead use the existing
`redis_pool` value (or unwrap it safely) where `pool` is needed, referencing the
`redis_pool` and `pool` identifiers in the `crates/service_utils/src/redis.rs`
diff so no dead fallback error branch remains.
- Around line 55-64: The Redis write-back in client.set inside
crates/service_utils/src/redis.rs currently maps and propagates errors with ?
which causes the whole operation to fail on transient Redis issues; change this
to mirror put_config_in_redis's non-failing behavior by swallowing the Redis
error: call client.set(...).await and if it returns Err(e) log an error via
log::error!(...) but do not convert it into an early return—i.e., remove the
map_err(...)? chain and instead use an if let Err(e) = ... { log::error!(...); }
so DB-successful operations aren’t aborted by Redis write-back failures.

In @crates/service_utils/src/service.rs:
- Around line 20-21: Replace the low-severity info log used when obtaining a DB
connection fails — change the log::info!("Unable to get db connection from pool,
error: {e}") call to a higher-severity level such as log::warn! (or log::error!
if you want strict visibility) so connection pool or DB availability issues are
surfaced appropriately; update the same log invocation in service.rs (the
log::info! with the "Unable to get db connection" message) to log::warn! and
keep the existing error interpolation `{e}` and the following
Err(unexpected_error!(...)) unchanged.
- Around line 14-16: Add an inline comment immediately above the call to
conn.set_prepared_statement_cache_size(diesel::connection::CacheSize::Disabled)
explaining that the prepared statement cache is explicitly disabled to preserve
schema-swapping compatibility (referencing the CHANGELOG rationale) and noting
the trade-off (reduced performance) so future maintainers understand the reason
for this configuration choice.

In @crates/superposition_core/include/superposition_core.h:
- Around line 6-20: Expand the Safety block for core_get_resolved_config (and
the other functions in the same range) to document every pointer parameter and
the return: state which const char* parameters may be NULL (or must be
non-NULL), require input strings to be null-terminated UTF-8, require ebuf to be
non-NULL and sufficiently large to receive error text, specify that a non-NULL
return is heap-allocated and must be freed by the caller via core_free_string,
and clarify that a NULL return signals an error with details written into ebuf
(or if ebuf is NULL, behavior is undefined). Include these same explicit
ownership/validity rules for the other functions mentioned in lines 22-36.
- Around line 40-49: Update the safety doc for core_free_string to match the
implementation: remove the assertion that "s must not be null" and add a bullet
stating that a null pointer is accepted and treated as a no-op (the function
returns immediately), while keeping the other bullets about validity, non-usage
after free, and double-free UB; reference the function name core_free_string to
locate and update its comment.
- Line 5: Remove the stray character causing the syntax error by deleting the
lone 'k' on line 5 of the superposition_core.h header; ensure the header
contains only valid C/C++ declarations and whitespace so it compiles cleanly.
- Around line 1-63: This header is missing include guards; wrap the entire
contents (all declarations such as core_get_resolved_config,
core_get_resolved_config_with_reasoning, core_test_connection, core_free_string,
core_get_applicable_variants) in a conventional #ifndef/#define/#endif guard (or
add a single #pragma once at the top) so the file cannot be included multiple
times and avoid redefinition errors; ensure the chosen guard macro name is
unique and placed before the first include and the #endif after the last
declaration.
- Around line 1-63: Wrap all function declarations in an extern "C" block so C++
compilers don't mangle the Rust C ABI symbols: after the includes open `extern
"C" {` (guarded by #ifdef __cplusplus) and close it at the end of the header
before any trailing content; ensure the block encloses declarations such as
core_get_resolved_config, core_get_resolved_config_with_reasoning,
core_test_connection, core_free_string, and core_get_applicable_variants.
🧹 Nitpick comments (5)
docker-compose.yaml (1)

26-33: Consider adding Redis data persistence for development.

The Redis service configuration is correct for basic usage. However, there's no volume mounted for data persistence, meaning all cached data will be lost when the container restarts.

💡 Optional: Add Redis persistence

If you want to persist Redis data across container restarts during development:

  redis:
    image: redis:7
    container_name: superposition_redis
    restart: on-failure
    ports:
      - 6379:6379
    command: redis-server
    network_mode: bridge
+   volumes:
+     - ./docker-compose/redis/data:/data
crates/context_aware_config/src/helpers.rs (1)

293-312: Consider logging Redis write failures instead of silently discarding errors.

The let _ = pattern discards Redis write errors. While cache writes are not critical for correctness (the DB is the source of truth), logging failures would improve observability and help diagnose Redis connectivity issues.

🔎 Proposed improvement for error visibility
-    let _ = redis_pool
+    if let Err(e) = redis_pool
         .set::<(), String, String>(config_key, parsed_config, None, None, false)
-        .await;
+        .await
+    {
+        log::warn!("Failed to write config to Redis: {}", e);
+    }
-    let _ = redis_pool
+    if let Err(e) = redis_pool
         .set::<(), String, String>(last_modified_at_key, last_modified, None, None, false)
-        .await;
+        .await
+    {
+        log::warn!("Failed to write last_modified to Redis: {}", e);
+    }
crates/context_aware_config/src/api/config/handlers.rs (3)

157-167: Unnecessary clone of i64 (a Copy type).

🔎 Suggested fix
fn add_config_version_to_header(
    config_version: &Option<i64>,
    resp_builder: &mut HttpResponseBuilder,
) {
    if let Some(val) = config_version {
        resp_builder.insert_header((
            AppHeader::XConfigVersion.to_string(),
-           val.clone().to_string(),
+           val.to_string(),
        ));
    }
}

180-195: Consider reducing log level from info to debug.

Line 192 logs the last_modified header value at info level. This will be logged for every request and may create noise. Consider using debug or trace level for request-specific diagnostic information.

-   log::info!("last modified {last_modified:?}");
+   log::debug!("last modified {last_modified:?}");

648-734: Consider batching Redis lookups for better performance.

The handler makes 4 sequential Redis calls (max_created_at, config_version, config, audit_id). For high-traffic endpoints, consider:

  1. Using Redis MGET/pipelining to batch reads
  2. Consolidating cached data into a single key with a composite struct

This would reduce network round-trips from 4 to 1.
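The batching idea can be sketched with the cache modeled as an in-memory map; `mget` here is a stand-in for Redis MGET, and the key names are illustrative:

```rust
use std::collections::HashMap;

// Stand-in for Redis MGET: resolve several keys in one round-trip instead
// of four sequential GETs. Missing keys come back as None, mirroring MGET.
fn mget(cache: &HashMap<String, String>, keys: &[&str]) -> Vec<Option<String>> {
    keys.iter().map(|k| cache.get(*k).cloned()).collect()
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert("ws1:config_version".to_string(), "42".to_string());
    cache.insert("ws1:config".to_string(), "{}".to_string());
    cache.insert("ws1:last_modified".to_string(), "2025-12-23".to_string());

    // One batched read replaces four sequential GETs; audit_id is a miss.
    let vals = mget(
        &cache,
        &["ws1:config_version", "ws1:config", "ws1:last_modified", "ws1:audit_id"],
    );
    assert_eq!(vals[0].as_deref(), Some("42"));
    assert!(vals[3].is_none());
    println!("ok");
}
```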

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 450b010 and 5ff904e.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (22)
  • .env.example
  • crates/context_aware_config/Cargo.toml
  • crates/context_aware_config/src/api/config/handlers.rs
  • crates/context_aware_config/src/api/context/handlers.rs
  • crates/context_aware_config/src/api/default_config/handlers.rs
  • crates/context_aware_config/src/api/dimension/handlers.rs
  • crates/context_aware_config/src/helpers.rs
  • crates/experimentation_platform/Cargo.toml
  • crates/experimentation_platform/src/api/experiment_groups/handlers.rs
  • crates/experimentation_platform/src/api/experiment_groups/helpers.rs
  • crates/experimentation_platform/src/api/experiments/handlers.rs
  • crates/experimentation_platform/src/api/experiments/helpers.rs
  • crates/service_utils/Cargo.toml
  • crates/service_utils/src/lib.rs
  • crates/service_utils/src/middlewares/auth_z.rs
  • crates/service_utils/src/redis.rs
  • crates/service_utils/src/service.rs
  • crates/service_utils/src/service/types.rs
  • crates/superposition/Cargo.toml
  • crates/superposition/src/app_state.rs
  • crates/superposition_core/include/superposition_core.h
  • docker-compose.yaml
💤 Files with no reviewable changes (1)
  • crates/context_aware_config/src/api/dimension/handlers.rs
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-02T20:59:01.233Z
Learnt from: ayushjain17
Repo: juspay/superposition PR: 543
File: crates/service_utils/src/middlewares/auth_z.rs:141-152
Timestamp: 2026-01-02T20:59:01.233Z
Learning: In crates/service_utils/src/middlewares/auth_z.rs, the AuthZHandler::init function is intentionally designed to panic on startup if AUTH_Z_PROVIDER environment variable is missing or set to an unknown value. This fail-fast behavior is expected and preferred for this critical authorization configuration.

Applied to files:

  • crates/service_utils/src/middlewares/auth_z.rs
📚 Learning: 2026-01-03T13:25:40.584Z
Learnt from: ayushjain17
Repo: juspay/superposition PR: 816
File: crates/frontend/src/pages/webhook.rs:136-137
Timestamp: 2026-01-03T13:25:40.584Z
Learning: In the superposition codebase (Rust frontend), the `Workspace` and `OrganisationId` newtype wrappers implement `Deref`, which allows `&Workspace` and `&OrganisationId` to be automatically coerced to `&str` when passed to functions expecting `&str` parameters. Manual `.0` dereferencing is not needed.

Applied to files:

  • crates/service_utils/Cargo.toml
🧬 Code graph analysis (8)
crates/service_utils/src/service/types.rs (1)
crates/service_utils/src/service.rs (1)
  • get_db_connection (9-24)
crates/superposition/src/app_state.rs (1)
crates/service_utils/src/helpers.rs (1)
  • get_from_env_or_default (57-71)
crates/experimentation_platform/src/api/experiment_groups/helpers.rs (3)
crates/service_utils/src/redis.rs (1)
  • serde_json (105-105)
crates/context_aware_config/src/helpers.rs (4)
  • redis_pool (293-294)
  • redis_pool (296-297)
  • redis_pool (306-307)
  • redis_pool (310-311)
crates/experimentation_platform/src/api/experiments/helpers.rs (1)
  • pool (954-955)
crates/context_aware_config/src/api/default_config/handlers.rs (1)
crates/context_aware_config/src/helpers.rs (1)
  • put_config_in_redis (268-314)
crates/context_aware_config/src/api/context/handlers.rs (1)
crates/context_aware_config/src/helpers.rs (2)
  • put_config_in_redis (268-314)
  • validate_change_reason (504-530)
crates/superposition_core/include/superposition_core.h (1)
crates/superposition_core/src/ffi_legacy.rs (5)
  • core_get_resolved_config (48-190)
  • core_get_resolved_config_with_reasoning (197-340)
  • core_test_connection (344-346)
  • core_free_string (356-360)
  • core_get_applicable_variants (367-454)
crates/context_aware_config/src/api/config/handlers.rs (3)
crates/service_utils/src/redis.rs (2)
  • serde_json (105-105)
  • fetch_from_redis_else_writeback (20-69)
crates/service_utils/src/service.rs (1)
  • get_db_connection (9-24)
crates/service_utils/src/middlewares/workspace_context.rs (1)
  • new (28-33)
crates/experimentation_platform/src/api/experiment_groups/handlers.rs (3)
crates/service_utils/src/redis.rs (1)
  • fetch_from_redis_else_writeback (20-69)
crates/service_utils/src/service.rs (1)
  • get_db_connection (9-24)
crates/experimentation_platform/src/api/experiment_groups/helpers.rs (6)
  • fetch_and_validate_members (34-74)
  • fetch_experiment_group (444-455)
  • put_experiment_groups_in_redis (457-494)
  • diesel (394-398)
  • experiment_groups (449-453)
  • experiment_groups (471-474)
⏰ Context from checks skipped due to timeout of 900000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Testing
  • GitHub Check: checks (aarch64-darwin)
🔇 Additional comments (26)
crates/superposition/Cargo.toml (1)

20-20: Implementation is sound. The change to make fred a required dependency is properly implemented with comprehensive runtime configuration handling. The code correctly initializes Redis only when REDIS_URL is set, gracefully falls back to database operations when Redis is unavailable, and all references to the removed high-performance-mode feature have been cleaned up. The AppState::redis field is correctly typed as Option<RedisPool> to handle both configured and unconfigured scenarios.

.env.example (1)

42-45: LGTM!

The Redis configuration examples are well-documented and follow the existing pattern for environment variable placeholders.

crates/service_utils/src/lib.rs (1)

7-7: LGTM!

The new redis module is correctly declared and follows the existing module structure.

crates/experimentation_platform/Cargo.toml (1)

19-19: LGTM!

Adding fred with the metrics feature enables Redis support for the experimentation platform, consistent with the PR objectives.

crates/service_utils/Cargo.toml (1)

18-18: LGTM!

The dependency updates support the new Redis utilities:

  • fred is now required (no longer optional) with metrics enabled
  • superposition_macros provides the unexpected_error! macro used in the new redis.rs module

Also applies to: 35-35

crates/context_aware_config/src/api/context/handlers.rs (1)

53-56: LGTM! Redis caching integration looks solid.

The refactoring successfully removes conditional compilation of Redis operations. All mutation handlers now unconditionally call put_config_in_redis after their database transactions commit. Based on the put_config_in_redis implementation (from helpers.rs), the function gracefully handles the case where Redis is not configured, returning early without error.

The integration follows a consistent pattern:

  1. Database transaction completes
  2. Version ID is captured
  3. Redis cache is updated with the new version
  4. Response includes the config version header

Also applies to: 129-130, 181-182, 244-245, 449-450, 649-649, 736-736

crates/context_aware_config/Cargo.toml (1)

20-20: No action required regarding fred dependency security.

Fred 9.2.1 (workspace version) has no known security advisories. The transitive rustls dependency (0.23.29) is already patched against RUSTSEC-2024-0399, which affected only rustls 0.23.13–0.23.17. The project also uses rustls 0.21.12, which predates the vulnerability.

crates/service_utils/src/service/types.rs (2)

52-52: LGTM: Redis field is now optional for runtime configuration.

Making redis an Option<RedisPool> aligns with the PR objective to enable Redis caching conditionally based on environment configuration rather than compile-time feature flags.


214-219: LGTM: Improved DB connection error handling.

The refactoring to use get_db_connection centralizes the prepared statement cache configuration and provides consistent error handling with better logging.

crates/context_aware_config/src/api/default_config/handlers.rs (4)

45-45: LGTM: Unconditional import for runtime-configured Redis.

The feature flag is now removed since put_config_in_redis internally handles the case when Redis is not configured by returning early.


152-152: LGTM: Redis cache update after create operation.

Correctly updates the Redis cache after a successful config creation.


274-274: LGTM: Redis cache update after update operation.

Correctly updates the Redis cache after a successful config update.


494-496: LGTM: Conditional Redis cache update after delete.

The Redis cache is only updated when the delete operation succeeds (resp.is_ok()), preventing cache updates on failed operations.

crates/context_aware_config/src/helpers.rs (3)

16-19: LGTM: Consistent key suffix constants.

Using centralized constants for Redis key suffixes improves maintainability and ensures consistency across the codebase.


274-281: LGTM: Proper Redis availability guard.

The early return when Redis is not configured prevents unnecessary work and aligns with the runtime-configurable Redis approach.


302-302: LGTM: Schema name scoping for multi-tenant support.

Adding schema_name to the event_log query ensures correct tenant isolation.

crates/experimentation_platform/src/api/experiments/helpers.rs (2)

12-12: LGTM: Required imports for Redis integration.

The imports align with the new Redis caching functionality being added to the experiments module.

Also applies to: 16-19, 25-25, 44-46


920-964: LGTM with one observation: Consistent Redis caching pattern for experiments.

The function correctly:

  • Guards against missing Redis configuration with early return
  • Filters only active experiments for caching
  • Uses proper error handling for both DB and Redis operations
  • Logs success for observability

One difference from put_config_in_redis: this function uses pool.next_connected() whereas the config helper uses the pool directly. Both approaches work, but next_connected() picks a client that is already connected, which makes the intent clearer.

crates/experimentation_platform/src/api/experiments/handlers.rs (5)

29-39: LGTM: Redis integration imports.

Required imports for Redis-backed caching functionality.


390-396: LGTM: Redis cache update after experiment creation.

Correctly updates the experiments cache in Redis after a successful creation.


465-471: LGTM: Consistent Redis cache updates after state changes.

All experiment lifecycle handlers (conclude, discard, update, pause, resume) now correctly refresh the Redis cache after successful mutations.

Also applies to: 758-764, 1781-1787, 1854-1860, 1963-1969


1027-1098: Solid Redis-backed list implementation with appropriate fallback.

The refactored list_handler:

  1. Uses fetch_from_redis_else_writeback for last-modified timestamp
  2. Conditionally reads from Redis only when fetching all active experiments (lines 1063-1067)
  3. Falls back to DB for other query patterns

The logic for read_from_redis correctly limits Redis reads to the specific case where the cached data would match the query (all active experiments).


1101-1237: LGTM: Well-structured DB query helper.

The list_experiments_db function cleanly encapsulates the database query logic, supporting:

  • Filtering by status, name, creator, IDs, dates
  • Sorting options
  • Pagination
  • In-memory dimension filtering when needed
crates/experimentation_platform/src/api/experiment_groups/helpers.rs (2)

7-7: LGTM: Required imports for Redis integration.

The imports align with the Redis caching pattern used in the experiments module.

Also applies to: 11-11, 29-29


457-494: LGTM: Consistent Redis caching for experiment groups.

The function follows the established pattern:

  • Early return when Redis is not configured
  • Loads all experiment groups ordered by last_modified
  • Wraps in PaginatedResponse::all() for API consistency
  • Proper error handling with logging

This mirrors the put_experiments_in_redis implementation, maintaining code consistency across the caching layer.

crates/superposition/src/app_state.rs (1)

37-76: Redis initialization is correct, but verify intended startup behavior.

The conditional Redis pool creation based on REDIS_URL environment variable is well-implemented. However, note that:

  1. Lines 44-46: Panics on invalid Redis URL format (appropriate for misconfiguration)
  2. Lines 48-62: Panics on pool creation failure
  3. Line 68: Panics if Redis connection fails after wait_for_connect()

This fail-fast behavior means the application will not start if Redis is configured (REDIS_URL set) but unreachable. Verify this is the intended behavior, as it differs from some cache-optional patterns where the app continues without cache on connection failure.
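The startup behavior described above can be sketched as follows; `init_redis` is a hypothetical stand-in for the app_state logic (an unset URL disables caching, a malformed URL fails fast), not the crate's actual code:

```rust
// Hypothetical sketch of the startup decision: no REDIS_URL means "run
// without cache"; a malformed REDIS_URL panics so the service fails fast
// instead of silently running misconfigured.
fn init_redis(url: Option<&str>) -> Option<String> {
    match url {
        None => {
            eprintln!("REDIS_URL not set; Redis caching disabled");
            None
        }
        Some(u) if u.starts_with("redis://") => Some(u.to_string()),
        Some(u) => panic!("invalid REDIS_URL: {u}"),
    }
}

fn main() {
    // Cache-optional path: the app still starts without Redis.
    assert!(init_redis(None).is_none());
    // Configured path: a valid URL yields a usable connection string.
    assert!(init_redis(Some("redis://127.0.0.1:6379")).is_some());
    println!("ok");
}
```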

Comment on lines 678 to 707
let mut version = fetch_from_redis_else_writeback::<Option<i64>>(
    format!("{}{CONFIG_VERSION_KEY_SUFFIX}", schema_name.0),
    &schema_name,
    state.redis.clone(),
    state.db_pool.clone(),
    |db_pool| {
        let DbConnection(mut conn) = get_db_connection(db_pool)?;
        get_config_version(&query_filters.version, &workspace_context, &mut conn)
    },
)
.await
.map_err(|e| unexpected_error!("Config version not found due to: {}", e))?;

let mut config = fetch_from_redis_else_writeback::<Config>(
    format!("{}{CONFIG_KEY_SUFFIX}", schema_name.0),
    &schema_name,
    state.redis.clone(),
    state.db_pool.clone(),
    |db_pool| {
        let DbConnection(mut conn) = get_db_connection(db_pool)?;
        generate_config_from_version(
            &mut version,
            &mut conn,
            &workspace_context.schema_name,
        )
    },
)
.await
.map_err(|e| unexpected_error!("failed to generate config: {}", e))?;

⚠️ Potential issue | 🔴 Critical

Critical: Cache keys do not include version parameter - wrong version may be served from cache.

The cache keys for config_version (line 679) and config (line 692) are schema-scoped but do not include the requested version from query_filters.version. This means:

  1. If a request for version "5" is cached, subsequent requests for version "10" (or "latest") will incorrectly receive version 5's data.
  2. The get_config_version logic at line 685 respects the version parameter, but this is only executed on cache miss.
🔎 Suggested approach

Include the version in the cache key, or only cache "latest" version and bypass cache for specific version requests:

    let query_filters = query_filters.into_inner();
+   // Only use cache for "latest" or unspecified version; bypass for specific versions
+   let use_cache = query_filters.version.as_ref().map_or(true, |v| v == "latest");
+   let redis_pool = if use_cache { state.redis.clone() } else { None };
+
    let mut version = fetch_from_redis_else_writeback::<Option<i64>>(
        format!("{}{CONFIG_VERSION_KEY_SUFFIX}", schema_name.0),
        &schema_name,
-       state.redis.clone(),
+       redis_pool.clone(),
        state.db_pool.clone(),
        |db_pool| {
            let DbConnection(mut conn) = get_db_connection(db_pool)?;
            get_config_version(&query_filters.version, &workspace_context, &mut conn)
        },
    )
    .await
    .map_err(|e| unexpected_error!("Config version not found due to: {}", e))?;

    let mut config = fetch_from_redis_else_writeback::<Config>(
        format!("{}{CONFIG_KEY_SUFFIX}", schema_name.0),
        &schema_name,
-       state.redis.clone(),
+       redis_pool,
        state.db_pool.clone(),
        // ...
    )

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @crates/context_aware_config/src/api/config/handlers.rs around lines 678 -
706, The cache keys built for fetch_from_redis_else_writeback (using schema_name
and CONFIG_VERSION_KEY_SUFFIX / CONFIG_KEY_SUFFIX) do not incorporate the
requested version from query_filters.version, causing stale/wrong versions to be
served; fix by including the version identifier (e.g., format!("{}:{}{}",
schema_name.0, query_filters.version.unwrap_or("latest"),
CONFIG_VERSION_KEY_SUFFIX) and similarly for CONFIG_KEY_SUFFIX) or,
alternatively, bypass cache when query_filters.version is a specific version and
only use caching for the "latest" path; update the key construction where
fetch_from_redis_else_writeback is called (references:
fetch_from_redis_else_writeback, CONFIG_VERSION_KEY_SUFFIX, CONFIG_KEY_SUFFIX,
query_filters.version, get_config_version, generate_config_from_version,
schema_name, workspace_context) accordingly.
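The core of either fix is that the key must vary with the requested version. A tiny, hypothetical key builder illustrating the invariant (the function name and `:` separator are illustrative, not the PR's actual helpers):

```rust
// Hypothetical: scope cache keys by schema AND requested version so a cached
// v5 payload can never satisfy a request for v10 or for "latest".
fn config_cache_key(schema: &str, version: Option<&str>, suffix: &str) -> String {
    format!("{schema}:{}:{suffix}", version.unwrap_or("latest"))
}

fn main() {
    assert_eq!(config_cache_key("ws1", Some("5"), "config"), "ws1:5:config");
    assert_eq!(config_cache_key("ws1", None, "config"), "ws1:latest:config");
    // Distinct versions must never collide on the same key.
    assert_ne!(
        config_cache_key("ws1", Some("5"), "config"),
        config_cache_key("ws1", Some("10"), "config")
    );
}
```

Note that keying by version also changes invalidation: a "latest"-scoped entry must be evicted or refreshed whenever a new version is published, while version-specific entries are immutable and can live longer.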

Comment on lines 773 to 802
    let mut config_version = fetch_from_redis_else_writeback::<Option<i64>>(
        format!("{}{CONFIG_VERSION_KEY_SUFFIX}", schema_name.0),
        &schema_name,
        state.redis.clone(),
        state.db_pool.clone(),
        |db_pool| {
            let DbConnection(mut conn) = get_db_connection(db_pool)?;
            get_config_version(&query_filters.version, &workspace_context, &mut conn)
        },
    )
    .await
    .map_err(|e| unexpected_error!("Config version not found due to: {}", e))?;

    let mut config = fetch_from_redis_else_writeback::<Config>(
        format!("{}{CONFIG_KEY_SUFFIX}", schema_name.0),
        &schema_name,
        state.redis.clone(),
        state.db_pool.clone(),
        |db_pool| {
            let DbConnection(mut conn) = get_db_connection(db_pool)?;
            generate_config_from_version(
                &mut config_version,
                &mut conn,
                &workspace_context.schema_name,
            )
        },
    )
    .await
    .map_err(|e| unexpected_error!("failed to generate config: {}", e))?;

⚠️ Potential issue | 🔴 Critical

Same cache key issue as get_handler - version parameter not respected from cache.

This handler has the identical problem: cache keys don't include the version, so specific version requests may receive wrong cached data.

Apply the same fix as suggested for get_handler.

🤖 Prompt for AI Agents
In @crates/context_aware_config/src/api/config/handlers.rs around lines 773 -
801, The cache keys for both fetch_from_redis_else_writeback calls ignore the
requested version, causing wrong cached data to be returned for versioned
requests; update both keys to include the version parameter (use the incoming
query_filters.version for the version lookup key and the resolved config_version
value for the config payload key) so keys become unique per version (e.g.,
append the version string/number to the format! key); modify the first fetch
(CONFIG_VERSION_KEY_SUFFIX) and the second fetch (CONFIG_KEY_SUFFIX) to
incorporate the appropriate version identifier and ensure you propagate the same
version identifier into the second key after resolving config_version.

Comment on lines 149 to 154
    put_experiment_groups_in_redis(
        state.redis.clone(),
        &mut conn,
        &workspace_request.schema_name,
    )
    .await?;

⚠️ Potential issue | 🟠 Major

Redis cache update failures should not fail successful mutations.

The same pattern appears in all mutation handlers (create, update, add-members, remove-members, delete, backfill): the Redis cache update uses await?, so if Redis fails after a successful DB transaction, the client receives an error despite the operation having succeeded.

Consider logging Redis failures instead of propagating them, since the cache will be refreshed on the next mutation.

🔎 Suggested pattern
-    put_experiment_groups_in_redis(
-        state.redis.clone(),
-        &mut conn,
-        &workspace_request.schema_name,
-    )
-    .await?;
+    if let Err(e) = put_experiment_groups_in_redis(
+        state.redis.clone(),
+        &mut conn,
+        &workspace_request.schema_name,
+    )
+    .await
+    {
+        log::error!("Failed to update experiment groups cache: {}", e);
+    }

Apply this pattern to all mutation handlers.

🤖 Prompt for AI Agents
In @crates/experimentation_platform/src/api/experiment_groups/handlers.rs around
lines 149 - 154, The handlers currently propagate Redis failures because they
call put_experiment_groups_in_redis(...).await? after successful DB mutations;
change these calls in all mutation handlers (create, update, add-members,
remove-members, delete, backfill) to await the future without the `?` and handle
errors by logging them (e.g., using the existing logger/tracing) instead of
returning an error—i.e., run let res = put_experiment_groups_in_redis(...).await
and if res.is_err() log the error with context including the workspace/schema
and operation name so the DB mutation still returns success even if the cache
update fails.
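A generic way to express the pattern once, instead of repeating the `if let Err` block in all six handlers, is a small best-effort wrapper (a sketch; the name `best_effort` is hypothetical and not part of the PR):

```rust
// Hypothetical helper: swallow and log cache-refresh errors so a committed
// DB mutation is never reported to the client as failed.
fn best_effort<T, E: std::fmt::Display>(res: Result<T, E>, op: &str) -> Option<T> {
    match res {
        Ok(v) => Some(v),
        Err(e) => {
            eprintln!("cache refresh failed during {op}: {e}");
            None
        }
    }
}

fn main() {
    // A failed cache write becomes a logged None, not a propagated error.
    assert_eq!(best_effort::<(), _>(Err("redis down"), "create"), None);
    assert_eq!(best_effort::<i32, String>(Ok(1), "update"), Some(1));
}
```

In the real handlers the argument would be the awaited result of `put_experiment_groups_in_redis`, and the logging would go through the project's logger rather than stderr.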

Comment on lines 317 to 357
async fn list_handler(
    pagination_params: superposition_query::Query<PaginationParams>,
    filters: superposition_query::Query<ExpGroupFilters>,
    db_conn: DbConnection,
    schema_name: SchemaName,
    state: Data<AppState>,
) -> superposition::Result<Json<PaginatedResponse<ExperimentGroup>>> {
    let key = format!("{}::{}", schema_name.0, EXPERIMENT_GROUPS_LIST_KEY_SUFFIX);
    fetch_from_redis_else_writeback::<PaginatedResponse<ExperimentGroup>>(
        key,
        &schema_name,
        state.redis.clone(),
        state.db_pool.clone(),
        |db_pool| {
            let db_conn = get_db_connection(db_pool)?;
            list_experiment_groups_db(
                pagination_params,
                filters,
                db_conn,
                schema_name.clone(),
            )
        },
    )
    .await
    .map(Json)
    .map_err(|e| unexpected_error!(e))
}

⚠️ Potential issue | 🔴 Critical

Critical: Cache key format mismatch and pagination/filters bypassed when serving from cache.

Two issues here:

  1. Key format mismatch: Line 323 uses format!("{}::{}", schema_name.0, EXPERIMENT_GROUPS_LIST_KEY_SUFFIX) with a :: separator, but put_experiment_groups_in_redis in helpers.rs uses format!("{}{EXPERIMENT_GROUPS_LIST_KEY_SUFFIX}", **schema_name) without a separator. This will cause cache misses on reads and stale data accumulation.

  2. Filters/pagination ignored from cache: When Redis returns cached data, pagination_params and filters are completely bypassed. The cache stores PaginatedResponse::all(...), so filtered or paginated requests will receive the entire unfiltered list, breaking pagination and filtering functionality.

🔎 Suggested approach

Either:

  • Include filter/pagination params in the cache key (complex), or
  • Only cache the full list and apply filtering/pagination post-retrieval:
async fn list_handler(
    pagination_params: superposition_query::Query<PaginationParams>,
    filters: superposition_query::Query<ExpGroupFilters>,
    schema_name: SchemaName,
    state: Data<AppState>,
) -> superposition::Result<Json<PaginatedResponse<ExperimentGroup>>> {
-   let key = format!("{}::{}", schema_name.0, EXPERIMENT_GROUPS_LIST_KEY_SUFFIX);
+   let key = format!("{}{EXPERIMENT_GROUPS_LIST_KEY_SUFFIX}", schema_name.0);
    fetch_from_redis_else_writeback::<PaginatedResponse<ExperimentGroup>>(
        key,
        &schema_name,
        state.redis.clone(),
        state.db_pool.clone(),
        |db_pool| {
            let db_conn = get_db_connection(db_pool)?;
-           list_experiment_groups_db(
-               pagination_params,
-               filters,
-               db_conn,
-               schema_name.clone(),
-           )
+           // Fetch all groups for cache, apply filters after
+           list_all_experiment_groups_db(db_conn, schema_name.clone())
        },
    )
    .await
+   .map(|cached| apply_filters_and_pagination(cached, &filters, &pagination_params))
    .map(Json)
    .map_err(|e| unexpected_error!(e))
}

Committable suggestion skipped: line range outside the PR's diff.
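Assuming the cache holds the full unfiltered list, the post-retrieval step might look like this sketch (struct and field names are hypothetical, not the PR's actual types):

```rust
// Hypothetical in-process filter + paginate over a cached full list.
#[derive(Clone, Debug, PartialEq)]
struct Group {
    name: String,
}

fn filter_and_page(all: &[Group], name_contains: Option<&str>, page: usize, size: usize) -> Vec<Group> {
    all.iter()
        .filter(|g| name_contains.map_or(true, |f| g.name.contains(f)))
        .skip(page.saturating_sub(1) * size) // pages are 1-indexed here
        .take(size)
        .cloned()
        .collect()
}

fn main() {
    let groups: Vec<Group> = ["alpha", "beta", "alphabet"]
        .iter()
        .map(|n| Group { name: n.to_string() })
        .collect();
    let hits = filter_and_page(&groups, Some("alpha"), 1, 10);
    assert_eq!(hits.len(), 2); // "alpha" and "alphabet"
    assert_eq!(filter_and_page(&groups, None, 2, 2).len(), 1); // second page
}
```

The trade-off: the cache stays simple (one entry per schema) at the cost of deserializing the whole list per request, which is usually acceptable for modestly sized group lists.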

Comment on lines 431 to 478
    let result = db_conn
        .transaction::<Json<ExperimentGroup>, superposition::AppError, _>(|conn| {
            let marked_group = diesel::update(experiment_groups::experiment_groups)
                .filter(experiment_groups::id.eq(&id))
                .set((
                    experiment_groups::last_modified_by.eq(user.email),
                    experiment_groups::last_modified_at.eq(chrono::Utc::now()),
                ))
                .returning(ExperimentGroup::as_returning())
                .schema_name(&schema_name)
                .get_result(conn)?;
            if !marked_group.member_experiment_ids.is_empty() {
                return Err(bad_argument!(
                    "Cannot delete experiment group {} since it has members",
                    marked_group.name
                ));
            }
            diesel::delete(experiment_groups::experiment_groups)
                .filter(experiment_groups::id.eq(&id))
                .schema_name(&schema_name)
                .execute(conn)?;
            Ok(Json(marked_group))
        });
    put_experiment_groups_in_redis(state.redis.clone(), &mut db_conn, &schema_name)
        .await?;
    result

⚠️ Potential issue | 🟠 Major

Inconsistent error handling: Redis failure returns error even when DB transaction succeeded.

If the transaction at lines 431-453 succeeds but put_experiment_groups_in_redis at lines 454-455 fails, the client receives an error despite the delete being committed. This creates a confusing state where the operation actually succeeded.

Additionally, put_experiment_groups_in_redis is called unconditionally, even if the transaction failed.

🔎 Suggested fix
     let result = db_conn
         .transaction::<Json<ExperimentGroup>, superposition::AppError, _>(|conn| {
             // ... transaction logic ...
             Ok(Json(marked_group))
-        });
-    put_experiment_groups_in_redis(state.redis.clone(), &mut db_conn, &schema_name)
-        .await?;
-    result
+        })?;
+    if let Err(e) = put_experiment_groups_in_redis(state.redis.clone(), &mut db_conn, &schema_name).await {
+        log::error!("Failed to update Redis cache after delete: {}", e);
+        // Cache will be refreshed on next write; don't fail the request
+    }
+    Ok(result)
}
🤖 Prompt for AI Agents
In @crates/experimentation_platform/src/api/experiment_groups/handlers.rs around
lines 431 - 456, The DB transaction result is being ignored and Redis upsert is
called unconditionally, which can return an error to the client even when the DB
delete succeeded; change the flow so you first capture the transaction result
from db_conn.transaction into result (the existing variable), check if result is
Ok(marked_group) before calling put_experiment_groups_in_redis, and if the Redis
call fails do not convert that into an overall error—either log the Redis error
and return the successful transaction Json(marked_group) to the caller or
enqueue an async retry, ensuring you never call put_experiment_groups_in_redis
when the transaction returned Err and never overwrite a successful DB result
with a Redis error; use the symbols result and put_experiment_groups_in_redis to
locate and update the control flow.

Comment on lines +20 to +21
log::info!("Unable to get db connection from pool, error: {e}");
Err(unexpected_error!("Could not get a DB connection, contact an admin and check logs for further information"))

🛠️ Refactor suggestion | 🟠 Major

Use appropriate log level for connection failures.

Database connection failures should be logged at warn or error level, not info. Connection pool exhaustion or database unavailability are operational issues that warrant higher visibility.

🔎 Suggested fix
-           log::info!("Unable to get db connection from pool, error: {e}");
+           log::error!("Unable to get db connection from pool: {e}");
🤖 Prompt for AI Agents
In @crates/service_utils/src/service.rs around lines 20 - 21, Replace the
low-severity info log used when obtaining a DB connection fails — change the
log::info!("Unable to get db connection from pool, error: {e}") call to a
higher-severity level such as log::warn! (or log::error! if you want strict
visibility) so connection pool or DB availability issues are surfaced
appropriately; update the same log invocation in service.rs (the log::info! with
the "Unable to get db connection" message) to log::warn! and keep the existing
error interpolation `{e}` and the following Err(unexpected_error!(...))
unchanged.

Comment on lines 1 to 63
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
k
/**
 * # Safety
 *
 * Caller ensures that `ebuf` is a sufficiently long buffer to store the
 * error message.
 */
char *core_get_resolved_config(const char *default_config_json,
                               const char *contexts_json,
                               const char *overrides_json,
                               const char *dimensions,
                               const char *query_data_json,
                               const char *merge_strategy_str,
                               const char *filter_prefixes_json,
                               const char *experimentation_json,
                               char *ebuf);

/**
 * # Safety
 *
 * Caller ensures that `ebuf` is a sufficiently long buffer to store the
 * error message.
 */
char *core_get_resolved_config_with_reasoning(const char *default_config_json,
                                              const char *contexts_json,
                                              const char *overrides_json,
                                              const char *dimensions,
                                              const char *query_data_json,
                                              const char *merge_strategy_str,
                                              const char *filter_prefixes_json,
                                              const char *experimentation_json,
                                              char *ebuf);

int32_t core_test_connection(void);

/**
 * # Safety
 *
 * This function is unsafe because:
 * - `s` must be a valid pointer to a C string previously allocated by this library
 * - `s` must not be null
 * - The caller must ensure `s` is not used after this function is called
 * - Double-free will cause undefined behavior
 */
void core_free_string(char *s);

/**
 * # Safety
 *
 * Caller ensures that `ebuf` is a sufficiently long buffer to store the
 * error message.
 */
char *core_get_applicable_variants(const char *experiments_json,
                                   const char *experiment_groups_json,
                                   const char *dimensions,
                                   const char *query_data_json,
                                   const char *identifier,
                                   const char *filter_prefixes_json,
                                   char *ebuf);

🛠️ Refactor suggestion | 🟠 Major

Add include guards or #pragma once to prevent multiple inclusion.

This header file lacks include guards, which can cause redefinition errors when the header is included multiple times across different translation units.

🔎 Proposed fix

Add include guards at the beginning and end of the file:

+#ifndef SUPERPOSITION_CORE_H
+#define SUPERPOSITION_CORE_H
+
 #include <stdarg.h>
 #include <stdbool.h>
 #include <stdint.h>
 #include <stdlib.h>

And at the end of the file:

                                    const char *identifier,
                                    const char *filter_prefixes_json,
                                    char *ebuf);
+
+#endif /* SUPERPOSITION_CORE_H */

Alternatively, use #pragma once at the top of the file (more concise but slightly less portable):

+#pragma once
+
 #include <stdarg.h>
🤖 Prompt for AI Agents
In @crates/superposition_core/include/superposition_core.h around lines 1 - 63,
This header is missing include guards; wrap the entire contents (all
declarations such as core_get_resolved_config,
core_get_resolved_config_with_reasoning, core_test_connection, core_free_string,
core_get_applicable_variants) in a conventional #ifndef/#define/#endif guard (or
add a single #pragma once at the top) so the file cannot be included multiple
times and avoid redefinition errors; ensure the chosen guard macro name is
unique and placed before the first include and the #endif after the last
declaration.

🛠️ Refactor suggestion | 🟠 Major

Add extern "C" guards for C++ compatibility.

The header should wrap all function declarations in extern "C" to ensure C++ compilers don't apply name mangling, which would prevent linking against the Rust-generated C ABI functions.

🔎 Proposed fix

Add extern "C" guards after the include guards:

 #include <stdarg.h>
 #include <stdbool.h>
 #include <stdint.h>
 #include <stdlib.h>
 
+#ifdef __cplusplus
+extern "C" {
+#endif
+
 /**
  * # Safety

And close them at the end:

                                    const char *filter_prefixes_json,
                                    char *ebuf);
 
+#ifdef __cplusplus
+}
+#endif
+
 #endif /* SUPERPOSITION_CORE_H */

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @crates/superposition_core/include/superposition_core.h around lines 1 - 63,
Wrap all function declarations in an extern "C" block so C++ compilers don't
mangle the Rust C ABI symbols: after the includes open `extern "C" {` (guarded
by #ifdef __cplusplus) and close it at the end of the header before any trailing
content; ensure the block encloses declarations such as
core_get_resolved_config, core_get_resolved_config_with_reasoning,
core_test_connection, core_free_string, and core_get_applicable_variants.
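For context on why mangling matters: the header's declarations must match unmangled symbols exported from Rust. The Rust side, sketched here for `core_test_connection` (whose body is assumed, not taken from the crate), looks like:

```rust
// `extern "C"` fixes the calling convention and #[no_mangle] keeps the symbol
// name exactly `core_test_connection`, so C (and C++ via `extern "C"` in the
// header) can link against it. The body below is a stand-in.
#[no_mangle]
pub extern "C" fn core_test_connection() -> i32 {
    0 // assumed: 0 signals success
}

fn main() {
    assert_eq!(core_test_connection(), 0);
}
```

Without the `extern "C"` guard in the header, a C++ translation unit would look for a mangled symbol such as `_Z20core_test_connectionv` and fail to link.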

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
k

⚠️ Potential issue | 🔴 Critical

Remove the erroneous character.

Line 5 contains a single character k which is invalid C syntax and will cause a compilation error.

🔎 Proposed fix
-k
🤖 Prompt for AI Agents
In @crates/superposition_core/include/superposition_core.h at line 5, Remove the
stray character causing the syntax error by deleting the lone 'k' on line 5 of
the superposition_core.h header; ensure the header contains only valid C/C++
declarations and whitespace so it compiles cleanly.

Comment on lines +6 to +20
/**
 * # Safety
 *
 * Caller ensures that `ebuf` is a sufficiently long buffer to store the
 * error message.
 */
char *core_get_resolved_config(const char *default_config_json,
                               const char *contexts_json,
                               const char *overrides_json,
                               const char *dimensions,
                               const char *query_data_json,
                               const char *merge_strategy_str,
                               const char *filter_prefixes_json,
                               const char *experimentation_json,
                               char *ebuf);

🛠️ Refactor suggestion | 🟠 Major

Expand safety documentation to cover all pointer parameters and return values.

The safety documentation only mentions the ebuf parameter but doesn't document requirements for the other const char* parameters. Callers need to know:

  • Whether null pointers are acceptable for any parameters
  • Validity requirements for input strings (null-terminated, UTF-8, etc.)
  • Return value ownership: when the returned char* is non-null, the caller must free it using core_free_string
  • What a null return value indicates (error condition with details in ebuf)
🔎 Example enhanced documentation
 /**
  * # Safety
  *
+ * All pointer parameters must be valid:
+ * - Input JSON strings must be valid, non-null, null-terminated UTF-8
+ * - `ebuf` must be a pre-allocated buffer of sufficient size (recommend 1024+ bytes)
+ *
+ * # Return value
+ *
+ * Returns a pointer to a newly allocated JSON string on success, which must be
+ * freed by the caller using `core_free_string`. Returns null on error, with
+ * details written to `ebuf`.
+ *
- * Caller ensures that `ebuf` is a sufficiently long buffer to store the
- * error message.
  */

Also applies to: 22-36

🤖 Prompt for AI Agents
In @crates/superposition_core/include/superposition_core.h around lines 6 - 20,
Expand the Safety block for core_get_resolved_config (and the other functions in
the same range) to document every pointer parameter and the return: state which
const char* parameters may be NULL (or must be non-NULL), require input strings
to be null-terminated UTF-8, require ebuf to be non-NULL and sufficiently large
to receive error text, specify that a non-NULL return is heap-allocated and must
be freed by the caller via core_free_string, and clarify that a NULL return
signals an error with details written into ebuf (or if ebuf is NULL, behavior is
undefined). Include these same explicit ownership/validity rules for the other
functions mentioned in lines 22-36.

Comment on lines +40 to +49
/**
* # Safety
*
* This function is unsafe because:
* - `s` must be a valid pointer to a C string previously allocated by this library
* - `s` must not be null
* - The caller must ensure `s` is not used after this function is called
* - Double-free will cause undefined behavior
*/
void core_free_string(char *s);

⚠️ Potential issue | 🟡 Minor

Correct the safety documentation for null pointer handling.

The documentation states "s must not be null," but the implementation in ffi_legacy.rs explicitly handles null pointers as a no-op (if s.is_null() { return; }). The documentation should accurately reflect this behavior.

🔎 Proposed fix
 /**
  * # Safety
  *
  * This function is unsafe because:
  * - `s` must be a valid pointer to a C string previously allocated by this library
- * - `s` must not be null
+ * - `s` may be null (no-op if null)
  * - The caller must ensure `s` is not used after this function is called
  * - Double-free will cause undefined behavior
  */
🤖 Prompt for AI Agents
In @crates/superposition_core/include/superposition_core.h around lines 40 - 49,
Update the safety doc for core_free_string to match the implementation: remove
the assertion that "s must not be null" and add a bullet stating that a null
pointer is accepted and treated as a no-op (the function returns immediately),
while keeping the other bullets about validity, non-usage after free, and
double-free UB; reference the function name core_free_string to locate and
update its comment.
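The behavior the review describes, a null pointer treated as a no-op, corresponds to a Rust free function along these lines (a sketch consistent with the described `ffi_legacy.rs` check, not its verbatim code):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

/// Reclaims a string previously handed out via `CString::into_raw`.
/// A null pointer is accepted and treated as a no-op.
#[no_mangle]
pub unsafe extern "C" fn core_free_string(s: *mut c_char) {
    if s.is_null() {
        return;
    }
    // Safety: the caller guarantees `s` came from `CString::into_raw`
    // and is not used again after this call.
    drop(CString::from_raw(s));
}

fn main() {
    // Null is a no-op; must not crash.
    unsafe { core_free_string(std::ptr::null_mut()) };
    // Round-trip an owned string through the FFI boundary and free it.
    let p = CString::new("hello").unwrap().into_raw();
    unsafe { core_free_string(p) };
}
```

Documenting the null tolerance, as the review suggests, lets C callers skip their own null checks before freeing.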

@Datron Datron force-pushed the redis-reads branch 2 times, most recently from 4821295 to 821fd9f on January 6, 2026 at 11:58
- cache get config API responses
- cache get experiment list responses
- cache get experiment groups response