refactor(agent): improve code quality and prompt format #3719
Conversation
The latest updates on your projects. Learn more about Vercel for GitHub.
2 Skipped Deployments

Updates to Preview Branch (db-agent-prompt)
Tasks are run on every commit but only new migration files are pushed.
View logs for this Workflow Run.
Walkthrough

Adds user input extraction in createGraph and passes it into convertRequirementsToPrompt. Updates convertAnalyzedRequirementsToPrompt to accept userInput and restructure the prompt with sections (Session Goal, Original User Request, Test Cases). Adjusts corresponding tests to the new signature and prompt format.
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant U as User
participant A as Agent Graph
participant AG as callDbAgent
participant PU as convertRequirementsToPrompt
participant L as LLM/DB Agent
U->>A: Send messages
A->>AG: Invoke with messages
AG->>AG: Extract first HumanMessage (isHumanMessage) → userInput
AG->>PU: convertRequirementsToPrompt(requirements, userInput, schemaIssues?)
PU-->>AG: Composed prompt
AG->>L: Use prompt for execution
L-->>AG: Result
AG-->>A: Updated state
    A-->>U: Response
```
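As a rough illustration of the resulting prompt layout, here is a minimal TypeScript sketch; the requirement and test-case shapes are simplified stand-ins for the real types in the agent package, and the optional schemaIssues handling is omitted:

```ts
// Hypothetical, simplified shapes; the real types live in the agent package.
type Testcase = { title: string; type: 'INSERT' | 'UPDATE' | 'DELETE' | 'SELECT' }
type AnalyzedRequirements = { goal: string; testcases: Record<string, Testcase[]> }

// Minimal sketch of the restructured prompt (Session Goal → Original User Request → Test Cases).
const convertRequirementsToPrompt = (
  requirements: AnalyzedRequirements,
  userInput: string,
): string => {
  const testCasesSection = Object.entries(requirements.testcases)
    .map(([category, cases]) =>
      [`### ${category}`, ...cases.map((tc) => `- [${tc.type}] ${tc.title}`)].join('\n'),
    )
    .join('\n\n')

  return [
    '## Session Goal',
    requirements.goal,
    '',
    '## Original User Request',
    userInput,
    '',
    '## Test Cases',
    testCasesSection,
  ].join('\n')
}
```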
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
🤖 Agent Deep Modeling Execution
Processing time: 4m 27s

Command Output

RUN v3.2.4 /home/runner/work/liam/liam/frontend/internal-packages/agent
(node:7578) ExperimentalWarning: WASI is an experimental feature and might change at any time

✅ [INFO] 2025-10-08T11:19:07.228Z

⏺ (pm): Purpose: Save the finalized goal and structured testcases for the simple user management system so QA and DB agents can proceed. Inputs: goal (string), testcases (object with categories and testcase arrays).

🧠 Thinking (pm)...

Analyzing BRD sections: I'm considering the necessary sections for a Business Requirements Document (BRD), like business context, goals, scope, actors, and user stories. I might include API endpoints for database design, but I need to specify requirements for the DB agent clearly. When thinking about test case categories, I'll cover users, roles, and permissions plus various SQL operation types. I need to specify handling scenarios like duplicate emails and whether to use soft or hard deletes.

Outlining test cases: I'm thinking about the test cases, where I can include DELETE operations for roles but only allow soft deletes for users. For SQL types, I'll use UPDATE for deactivation since physical deletion might be disallowed. I should include negative scenarios like trying to hard delete a user row, which should be blocked by policy. Using triggers could enforce this at the SQL level, and QA will need to verify compliance. Additionally, I need to ensure each tool call saves requirements with proper JSON and includes a test cases object while avoiding user dialogue for clarification.

Outlining the BRD structure: I'm laying out the structure for the Business Requirements Document (BRD). It will include an overview and goals, scope clarifications, stakeholders, and assumptions. I need to cover functional requirements like user authentication, roles, profile management, and admin features. The data model will list each entity with generic fields and constraints without actual SQL. Non-functional requirements will address security, privacy, performance, and more. I'll include edge cases and reporting queries, plus acceptance criteria. Test cases will be structured and embedded into the JSON for the saveRequirements tool, focusing on the "goal" summary and the "testcases" object only.

Preparing the tool call and BRD: I need to ensure that the tool call includes only the necessary fields while also providing a full analysis and the BRD. After calling the tool, I'll confirm success. This means incorporating the complete BRD in the message and using functions.saveRequirementsToArtifactTool with the goal string and test cases object. I should maintain a moderate verbosity while defining unique IDs for requirements, like FR-xxx for functional requirements. For example, FR-1 covers user registration with specific conditions like email normalization, a hashed password, and email verification. FR-2 addresses authentication details, including login attempts tracking and a lockout mechanism.

Detailing functional requirements and test cases: I'm mapping out functional requirements for password management, roles, user lifecycle, profile, audit logging, and various administrative operations. Each requirement will have unique identifiers like FR-3 for password management, indicating features such as token expiration, password policies, and logging critical events. For example, FR-4 outlines roles and permissions with a many-to-many mapping, while FR-6 focuses on user profile details. A robust set of test cases will be crafted, categorized by users, authentication, roles, and others, ensuring comprehensive coverage with around 24-30 test cases to validate functionalities effectively.

Preparing documentation for requirements: I need to annotate type fields properly for clarity. It's essential that category keys are strings, with values as arrays of objects containing a title and type, which should be either INSERT, UPDATE, DELETE, or SELECT in uppercase. I want to avoid having too many duplicates, so I'll prepare the content carefully. Then I'll call functions.saveRequirementsToArtifactTool with the goal of defining business and data requirements for a user management system, along with the test cases count noted. I'll keep formatting simple with bullet lists, confirming success once everything's saved.

🔧 saveRequirementsToArtifactTool(goal: "Define clear, pro...", testcases: {6 keys})
⎿ unknown: Requirements saved successfully to artifact

🧠 Thinking (db)...

Planning database schema: I need to start with a checklist for planning the database schema. First, I'll clarify the operation's purpose and minimal inputs before I call the schema design tool. I'll create several tables including users, roles, user_roles, permissions, role_permissions, password_reset_tokens, and audit_logs. I'll ensure unique case-insensitive emails using a CITEXT type or a lower-case unique index. Structuring email columns carefully with checks and normalization processes is key for future usability and data integrity. I'll also need to consider regex for email format checks.

Designing phone number validation and schema: I need to start with checking the E.164 phone format using regex, then add constraints and foreign keys before calling the tool to create tables. I'll validate the response after that. The tables I plan to create include users, roles, user_roles, permissions, role_permissions, password_reset_tokens, and audit_logs, using UUIDs for primary keys with a default generation. It's essential to incorporate timestamps and set appropriate cascading delete behaviors for roles while preventing deletion of system roles with checks. Overall, clarity and thoroughness in constraints are key!

Adding constraints in the schema: I need to decide whether to include a constraint, possibly a CHECK type, in the schema design. There might be limitations when it comes to referencing a delete operation. So, I think I should just proceed and call the tool to finalize this. It's all about making sure everything is correctly set up for proper data integrity. Let's see how it goes!

🔧 schemaDesignTool()

🧠 Thinking (db)... I'm considering that after the tool returns the data, I need to validate it in just 1-2 lines. Then, I should summarize the schema and confirm the results clearly. It's important to make sure everything is accurate and presented properly. I'm eager to ensure that the user has all the information they need in a concise manner. Let's keep it straightforward and efficient!

🔧 schemaDesignTool()

🧠 Thinking (db)...

Planning tool call: I need to call the tool right away since there's an immediate requirement. I've used it before, but now I have to build the schema again while avoiding the invalid gin index issue. I'll ensure that the tool state is reset first and create tables without the problematic index. I'll start by preparing the operations and reporting my checklist, followed by a purpose statement and minimal inputs. After that, I'll call the tool and validate the outcome in a couple of lines before reporting the changes. Let's get this done!

🔧 schemaDesignTool()

🔧 saveTestcase() (invoked repeatedly by the QA agent; the generated SQL for the archive/soft-delete UPDATE, the password-change verification, and the soft-delete trigger test cases is emitted interleaved across these parallel calls)
(node:7578) Warning: The 'import-in-the-middle' hook has already been initialized

⏺ (qa): I will adjust the query to remove the placeholder parameter and provide a ready-to-run version using a bound parameter style compatible with common clients (e.g., $1) or a concrete example. Since the test case requires SELECT to validate authorization when a user inherits permissions via assigned roles, here is a production-ready version that uses a parameterized approach. SQL to save:

🔧 saveTestcase()
⎿ unknown: 26/39 test cases passed, 13 failed

❌ Test Case: Reject registration when email already exists (duplicate email) Error:
Actionable comments posted: 5
🧹 Nitpick comments (8)
frontend/internal-packages/db/package.json (1)
19-19: Supabase CLI version and security verification
No known security advisories for version 2.45.4. The latest available version is 2.48.3; consider upgrading to benefit from recent fixes and features.
frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.module.css (1)
22-25: Consider adding a comment for selector clarity.
The `:not([data-disable-hover])` selector correctly checks for attribute presence (not value), enabling hover only when the parent container lacks the `data-disable-hover` attribute. Consider adding a comment to document this behavior for future maintainers.

Apply this diff to add a clarifying comment:

```diff
+/* Hover expansion only applies when not disabled by parent container */
 :not([data-disable-hover]) > .globalNav:hover {
   width: 11rem;
   box-shadow: 4px 0 16px rgba(0, 0, 0, 0.2);
 }
```
frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.tsx (1)
10-10: Implementation is correct; consider more explicit pattern.
The `enableHover` prop and `data-disable-hover` attribute work correctly together. The pattern `data-disable-hover={!enableHover || undefined}` correctly omits the attribute when `enableHover` is `true` and sets it when `false`.

For improved readability, consider using an explicit ternary:

```diff
 <div
   className={styles.globalNavContainer}
   data-global-nav-container
-  data-disable-hover={!enableHover || undefined}
+  data-disable-hover={!enableHover ? true : undefined}
 >
```

Also applies to: 18-18, 21-25
frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.test.ts (1)
4-16: Consider using proper setup/teardown for window.location manipulation.
Directly mutating `window.location.search` in tests can cause isolation issues when tests run in parallel or when previous values leak into subsequent tests. Vitest's test isolation may not fully protect against this global mutation.

Apply this diff to add proper cleanup:

```diff
+import { afterEach, beforeEach, describe, expect, it } from 'vitest'
-import { describe, expect, it } from 'vitest'
 import { getTableLinkHref } from './getTableLinkHref'

+describe('getTableLinkHref', () => {
+  let originalSearch: string
+
+  beforeEach(() => {
+    originalSearch = window.location.search
+  })
+
+  afterEach(() => {
+    // Restore to avoid test pollution
+    window.history.replaceState({}, '', `${window.location.pathname}${originalSearch}`)
+  })
+
-it('should return the "active" query parameter with the table name', () => {
-  window.location.search = ''
-
-  expect(getTableLinkHref('users')).toBe('?active=users')
-})
-
-describe('when other query parameters are present', () => {
-  it('should preserve existing query parameters', () => {
-    window.location.search = '?page=2&sort=asc'
-
-    expect(getTableLinkHref('users')).toBe('?page=2&sort=asc&active=users')
+  it('should return the "active" query parameter with the table name', () => {
+    window.history.replaceState({}, '', window.location.pathname)
+
+    expect(getTableLinkHref('users')).toBe('?active=users')
+  })
+
+  describe('when other query parameters are present', () => {
+    it('should preserve existing query parameters', () => {
+      window.history.replaceState({}, '', `${window.location.pathname}?page=2&sort=asc`)
+
+      expect(getTableLinkHref('users')).toBe('?page=2&sort=asc&active=users')
+    })
   })
 })
```

frontend/packages/erd-core/src/schemas/hash/schemas.test.ts (1)
12-16: Consider testing edge cases with empty table or column names.
The current validation allows hashes like `__columns__id` (empty table name) or `users__columns__` (empty column name) since they split into exactly two parts. If these should be invalid, add tests to verify behavior and potentially strengthen validation.

Example edge case tests to add:

```ts
it('should handle edge cases with empty parts', () => {
  // Decide if these should pass or fail based on requirements
  expect(() => parse(hashSchema, '__columns__id')).toThrowError()
  expect(() => parse(hashSchema, 'users__columns__')).toThrowError()
})
```

frontend/packages/schema/src/parser/index.test.ts (1)
29-56: Consider using a fixture file for test data.
The inline JSON object (lines 30-51) makes the test verbose. Extracting to a fixture file in `./liam/input/` would improve readability and maintainability, following the pattern used by other format tests.

Example refactor:

```ts
it('should parse liam schema JSON correctly', async () => {
  const schemaJson = fs.readFileSync(
    path.resolve(__dirname, './liam/input/schema1.in.json'),
    'utf-8',
  )

  const { value, errors } = await parse(schemaJson, 'liam')

  expect(errors).toEqual([])
  expect(value).toMatchSnapshot()
})
```

frontend/packages/schema/src/parser/liam/index.ts (1)
12-21: Consider preserving individual validation issues.
The error mapping joins all valibot issues into a single string. While functional, this loses the structure of individual validation errors, which could be helpful for debugging.

If you need more detailed error information in the future, consider preserving the structured issues:

```diff
 const parseSchema = (
   data: unknown,
 ): Result<v.InferOutput<typeof schemaSchema>, Error> => {
   const result = v.safeParse(schemaSchema, data)
   if (result.success) {
     return ok(result.output)
   }
-  const errorMessage = result.issues.map((issue) => issue.message).join(', ')
-  return err(new Error(`Invalid Liam Schema format: ${errorMessage}`))
+  return err(
+    new Error(
+      `Invalid Liam Schema format: ${JSON.stringify(result.issues.map(i => ({ path: i.path, message: i.message })))}`,
+    ),
+  )
 }
```

frontend/packages/erd-core/src/schemas/hash/schemas.ts (1)
3-9: Consider validating against empty segments.
The current validator allows empty table or column names (e.g., "__columns__x", "x__columns__", "__columns__"). If this is unintended, consider adding a non-empty check for the parts.

```diff
 export const hashSchema = custom<`${string}__columns__${string}`>(
   (input): input is `${string}__columns__${string}` => {
     if (typeof input !== 'string') return false
     const parts = input.split('__columns__')
-    return parts.length === 2
+    return parts.length === 2 && parts[0].length > 0 && parts[1].length > 0
   },
 )
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (42)
- .changeset/add-liam-schema-format.md (1 hunks)
- CLAUDE.md (1 hunks)
- frontend/apps/app/components/CommonLayout/GlobalNav/GlobalNav.tsx (2 hunks)
- frontend/apps/app/components/PublicLayout/PublicGlobalNav/PublicGlobalNav.tsx (1 hunks)
- frontend/internal-packages/agent/src/createGraph.ts (2 hunks)
- frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts (4 hunks)
- frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts (2 hunks)
- frontend/internal-packages/db/package.json (1 hunks)
- frontend/internal-packages/db/supabase/database.types.ts (1 hunks)
- frontend/internal-packages/schema-bench/README.md (1 hunks)
- frontend/internal-packages/schema-bench/package.json (2 hunks)
- frontend/internal-packages/schema-bench/src/cli/evaluateWithLangsmith.ts (1 hunks)
- frontend/internal-packages/schema-bench/src/cli/executeLiamDbShared.ts (3 hunks)
- frontend/internal-packages/schema-bench/src/cli/executeOpenaiUnified.ts (4 hunks)
- frontend/internal-packages/schema-bench/src/cli/uploadDatasetToLangsmith.ts (1 hunks)
- frontend/internal-packages/schema-bench/src/cli/utils/index.ts (1 hunks)
- frontend/internal-packages/schema-bench/src/cli/utils/loadJsonFiles.ts (2 hunks)
- frontend/internal-packages/schema-bench/src/langsmith/schemaEvaluator.ts (1 hunks)
- frontend/internal-packages/schema-bench/src/langsmith/types.ts (1 hunks)
- frontend/packages/cli/src/cli/erdCommand/runPreprocess.test.ts (1 hunks)
- frontend/packages/cli/src/cli/index.test.ts (1 hunks)
- frontend/packages/erd-core/src/features/erd/components/ERDContent/components/TableNode/TableDetail/Columns/ColumnsItem/ColumnsItem.tsx (2 hunks)
- frontend/packages/erd-core/src/features/erd/components/ERDRenderer/CommandPalette/CommandPaletteOptions/TableOptions.tsx (1 hunks)
- frontend/packages/erd-core/src/features/erd/utils/index.ts (1 hunks)
- frontend/packages/erd-core/src/features/erd/utils/url/getTableColumnElementId.ts (1 hunks)
- frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.test.ts (1 hunks)
- frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.ts (1 hunks)
- frontend/packages/erd-core/src/features/erd/utils/url/index.ts (1 hunks)
- frontend/packages/erd-core/src/schemas/hash/index.ts (1 hunks)
- frontend/packages/erd-core/src/schemas/hash/schemas.test.ts (1 hunks)
- frontend/packages/erd-core/src/schemas/hash/schemas.ts (1 hunks)
- frontend/packages/erd-core/src/schemas/hash/types.ts (1 hunks)
- frontend/packages/erd-core/src/schemas/index.ts (1 hunks)
- frontend/packages/schema/src/parser/index.test.ts (1 hunks)
- frontend/packages/schema/src/parser/index.ts (1 hunks)
- frontend/packages/schema/src/parser/liam/index.test.ts (1 hunks)
- frontend/packages/schema/src/parser/liam/index.ts (1 hunks)
- frontend/packages/schema/src/parser/supportedFormat/schema.ts (1 hunks)
- frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.module.css (1 hunks)
- frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.tsx (1 hunks)
- scripts/extract-supabase-anon-key.sh (1 hunks)
- scripts/extract-supabase-service-key.sh (1 hunks)
🧰 Additional context used
📓 Path-based instructions (10)
**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Name utility files in camelCase (e.g., mergeSchema.ts)
Files:
frontend/packages/cli/src/cli/erdCommand/runPreprocess.test.ts
frontend/packages/erd-core/src/features/erd/utils/url/index.ts
frontend/packages/erd-core/src/schemas/hash/types.ts
frontend/packages/schema/src/parser/index.ts
frontend/packages/erd-core/src/schemas/hash/schemas.test.ts
frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.ts
frontend/packages/erd-core/src/schemas/hash/index.ts
frontend/packages/cli/src/cli/index.test.ts
frontend/packages/erd-core/src/features/erd/utils/url/getTableColumnElementId.ts
frontend/internal-packages/db/supabase/database.types.ts
frontend/internal-packages/agent/src/createGraph.ts
frontend/packages/erd-core/src/schemas/hash/schemas.ts
frontend/internal-packages/schema-bench/src/cli/utils/loadJsonFiles.ts
frontend/internal-packages/schema-bench/src/langsmith/types.ts
frontend/packages/erd-core/src/schemas/index.ts
frontend/packages/schema/src/parser/liam/index.test.ts
frontend/internal-packages/schema-bench/src/cli/uploadDatasetToLangsmith.ts
frontend/packages/schema/src/parser/supportedFormat/schema.ts
frontend/packages/schema/src/parser/index.test.ts
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts
frontend/internal-packages/schema-bench/src/cli/utils/index.ts
frontend/packages/erd-core/src/features/erd/utils/index.ts
frontend/internal-packages/schema-bench/src/cli/executeLiamDbShared.ts
frontend/internal-packages/schema-bench/src/langsmith/schemaEvaluator.ts
frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.test.ts
frontend/internal-packages/schema-bench/src/cli/executeOpenaiUnified.ts
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts
frontend/internal-packages/schema-bench/src/cli/evaluateWithLangsmith.ts
frontend/packages/schema/src/parser/liam/index.ts
**/*.test.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Write unit tests with filenames ending in .test.ts or .test.tsx colocated near source
Files:
frontend/packages/cli/src/cli/erdCommand/runPreprocess.test.ts
frontend/packages/erd-core/src/schemas/hash/schemas.test.ts
frontend/packages/cli/src/cli/index.test.ts
frontend/packages/schema/src/parser/liam/index.test.ts
frontend/packages/schema/src/parser/index.test.ts
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts
frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.test.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use TypeScript/TSX across the codebase
**/*.{ts,tsx}
: Use runtime type validation with valibot for external data validation
Prefer early returns for readability
Write simple, direct code without backward compatibility shims; update all call sites together
Use const-assigned arrow functions instead of function declarations for small utilities (e.g., const toggle = () => {})
Follow existing import patterns and tsconfig path aliases
Files:
frontend/packages/cli/src/cli/erdCommand/runPreprocess.test.ts
frontend/packages/erd-core/src/features/erd/utils/url/index.ts
frontend/packages/erd-core/src/schemas/hash/types.ts
frontend/packages/erd-core/src/features/erd/components/ERDContent/components/TableNode/TableDetail/Columns/ColumnsItem/ColumnsItem.tsx
frontend/packages/schema/src/parser/index.ts
frontend/packages/erd-core/src/features/erd/components/ERDRenderer/CommandPalette/CommandPaletteOptions/TableOptions.tsx
frontend/packages/erd-core/src/schemas/hash/schemas.test.ts
frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.ts
frontend/packages/erd-core/src/schemas/hash/index.ts
frontend/packages/cli/src/cli/index.test.ts
frontend/packages/erd-core/src/features/erd/utils/url/getTableColumnElementId.ts
frontend/internal-packages/db/supabase/database.types.ts
frontend/internal-packages/agent/src/createGraph.ts
frontend/packages/erd-core/src/schemas/hash/schemas.ts
frontend/internal-packages/schema-bench/src/cli/utils/loadJsonFiles.ts
frontend/internal-packages/schema-bench/src/langsmith/types.ts
frontend/packages/erd-core/src/schemas/index.ts
frontend/packages/schema/src/parser/liam/index.test.ts
frontend/internal-packages/schema-bench/src/cli/uploadDatasetToLangsmith.ts
frontend/packages/schema/src/parser/supportedFormat/schema.ts
frontend/packages/schema/src/parser/index.test.ts
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts
frontend/apps/app/components/PublicLayout/PublicGlobalNav/PublicGlobalNav.tsx
frontend/internal-packages/schema-bench/src/cli/utils/index.ts
frontend/packages/erd-core/src/features/erd/utils/index.ts
frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.tsx
frontend/internal-packages/schema-bench/src/cli/executeLiamDbShared.ts
frontend/internal-packages/schema-bench/src/langsmith/schemaEvaluator.ts
frontend/apps/app/components/CommonLayout/GlobalNav/GlobalNav.tsx
frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.test.ts
frontend/internal-packages/schema-bench/src/cli/executeOpenaiUnified.ts
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts
frontend/internal-packages/schema-bench/src/cli/evaluateWithLangsmith.ts
frontend/packages/schema/src/parser/liam/index.ts
frontend/packages/**
📄 CodeRabbit inference engine (AGENTS.md)
Shared libraries and tools live under frontend/packages
Files:
frontend/packages/cli/src/cli/erdCommand/runPreprocess.test.ts
frontend/packages/erd-core/src/features/erd/utils/url/index.ts
frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.module.css
frontend/packages/erd-core/src/schemas/hash/types.ts
frontend/packages/erd-core/src/features/erd/components/ERDContent/components/TableNode/TableDetail/Columns/ColumnsItem/ColumnsItem.tsx
frontend/packages/schema/src/parser/index.ts
frontend/packages/erd-core/src/features/erd/components/ERDRenderer/CommandPalette/CommandPaletteOptions/TableOptions.tsx
frontend/packages/erd-core/src/schemas/hash/schemas.test.ts
frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.ts
frontend/packages/erd-core/src/schemas/hash/index.ts
frontend/packages/cli/src/cli/index.test.ts
frontend/packages/erd-core/src/features/erd/utils/url/getTableColumnElementId.ts
frontend/packages/erd-core/src/schemas/hash/schemas.ts
frontend/packages/erd-core/src/schemas/index.ts
frontend/packages/schema/src/parser/liam/index.test.ts
frontend/packages/schema/src/parser/supportedFormat/schema.ts
frontend/packages/schema/src/parser/index.test.ts
frontend/packages/erd-core/src/features/erd/utils/index.ts
frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.tsx
frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.test.ts
frontend/packages/schema/src/parser/liam/index.ts
**/*.module.css
📄 CodeRabbit inference engine (AGENTS.md)
Use CSS Modules named *.module.css and keep types via typed-css-modules
**/*.module.css
: Use CSS variables from @liam-hq/ui for styling tokens
Use spacing CSS variables only for margins/padding; use size units (rem, px, etc.) for width/height
Files:
frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.module.css
**/*.css
📄 CodeRabbit inference engine (CLAUDE.md)
Use CSS Modules for all styling (i.e., prefer *.module.css; avoid global CSS)
Files:
frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.module.css
**/*.tsx
📄 CodeRabbit inference engine (AGENTS.md)
Name React component files in PascalCase and use TSX (e.g., App.tsx)
**/*.tsx
: Prefix React event handler functions with "handle" (e.g., handleClick)
Import UI components from @liam-hq/ui when available
Import icons from @liam-hq/ui
Files:
frontend/packages/erd-core/src/features/erd/components/ERDContent/components/TableNode/TableDetail/Columns/ColumnsItem/ColumnsItem.tsx
frontend/packages/erd-core/src/features/erd/components/ERDRenderer/CommandPalette/CommandPaletteOptions/TableOptions.tsx
frontend/apps/app/components/PublicLayout/PublicGlobalNav/PublicGlobalNav.tsx
frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.tsx
frontend/apps/app/components/CommonLayout/GlobalNav/GlobalNav.tsx
**/!(page).tsx
📄 CodeRabbit inference engine (CLAUDE.md)
Use named exports only (no default exports) for React/TSX modules
Files:
frontend/packages/erd-core/src/features/erd/components/ERDContent/components/TableNode/TableDetail/Columns/ColumnsItem/ColumnsItem.tsx
frontend/packages/erd-core/src/features/erd/components/ERDRenderer/CommandPalette/CommandPaletteOptions/TableOptions.tsx
frontend/apps/app/components/PublicLayout/PublicGlobalNav/PublicGlobalNav.tsx
frontend/packages/ui/src/components/BaseGlobalNav/BaseGlobalNav.tsx
frontend/apps/app/components/CommonLayout/GlobalNav/GlobalNav.tsx
frontend/internal-packages/**
📄 CodeRabbit inference engine (AGENTS.md)
Infra and tooling (e2e, configs, storybook, agent) live under frontend/internal-packages
Files:
frontend/internal-packages/db/supabase/database.types.ts
frontend/internal-packages/agent/src/createGraph.ts
frontend/internal-packages/schema-bench/src/cli/utils/loadJsonFiles.ts
frontend/internal-packages/schema-bench/src/langsmith/types.ts
frontend/internal-packages/schema-bench/src/cli/uploadDatasetToLangsmith.ts
frontend/internal-packages/schema-bench/README.md
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts
frontend/internal-packages/db/package.json
frontend/internal-packages/schema-bench/src/cli/utils/index.ts
frontend/internal-packages/schema-bench/package.json
frontend/internal-packages/schema-bench/src/cli/executeLiamDbShared.ts
frontend/internal-packages/schema-bench/src/langsmith/schemaEvaluator.ts
frontend/internal-packages/schema-bench/src/cli/executeOpenaiUnified.ts
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts
frontend/internal-packages/schema-bench/src/cli/evaluateWithLangsmith.ts
frontend/apps/**
📄 CodeRabbit inference engine (AGENTS.md)
Next.js apps live under frontend/apps; target app-specific scripts and configs there
Files:
frontend/apps/app/components/PublicLayout/PublicGlobalNav/PublicGlobalNav.tsx
frontend/apps/app/components/CommonLayout/GlobalNav/GlobalNav.tsx
🧠 Learnings (4)
📚 Learning: 2025-07-30T05:52:56.270Z
Learnt from: hoshinotsuyoshi
PR: liam-hq/liam#2771
File: frontend/internal-packages/schema-bench/src/cli/executeLiamDb.ts:22-22
Timestamp: 2025-07-30T05:52:56.270Z
Learning: The schema-bench package (frontend/internal-packages/schema-bench) has been converted from ESM to CommonJS mode by removing "type": "module" from package.json, making __dirname available and correct to use in TypeScript files within this package.
Applied to files:
frontend/packages/erd-core/src/schemas/hash/index.ts
frontend/internal-packages/schema-bench/src/cli/utils/index.ts
frontend/internal-packages/schema-bench/package.json
frontend/internal-packages/schema-bench/src/langsmith/schemaEvaluator.ts
📚 Learning: 2025-08-27T03:01:10.501Z
Learnt from: FunamaYukina
PR: liam-hq/liam#3185
File: frontend/packages/erd-core/src/features/erd/components/ERDContent/components/TableNode/TableDetail/extractSchemaForTable.test.ts:53-54
Timestamp: 2025-08-27T03:01:10.501Z
Learning: For the liam project extensions feature: Phase 1 focuses on adding extensions field to Schema interface without ERD display functionality. Extensions are handled similarly to enums in extractSchemaForTable - returning empty objects since they're not displayed on ERD yet. Display/rendering of extensions will be implemented in later phases.
Applied to files:
.changeset/add-liam-schema-format.md
📚 Learning: 2025-09-02T05:53:14.356Z
Learnt from: CR
PR: liam-hq/liam#0
File: AGENTS.md:0-0
Timestamp: 2025-09-02T05:53:14.356Z
Learning: Before pushing, run formatting, linting, and tests: pnpm fmt && pnpm lint && pnpm test
Applied to files:
frontend/internal-packages/schema-bench/package.json
📚 Learning: 2025-09-02T05:53:14.356Z
Learnt from: CR
PR: liam-hq/liam#0
File: AGENTS.md:0-0
Timestamp: 2025-09-02T05:53:14.356Z
Learning: Applies to frontend/internal-packages/** : Infra and tooling (e2e, configs, storybook, agent) live under frontend/internal-packages
Applied to files:
CLAUDE.md
🧬 Code graph analysis (15)
frontend/packages/erd-core/src/schemas/hash/types.ts (1)
frontend/packages/erd-core/src/schemas/hash/schemas.ts (1)
hashSchema
(3-9)
frontend/packages/erd-core/src/features/erd/components/ERDContent/components/TableNode/TableDetail/Columns/ColumnsItem/ColumnsItem.tsx (1)
frontend/packages/erd-core/src/features/erd/utils/url/getTableColumnElementId.ts (1)
getTableColumnElementId
(3-6)
frontend/packages/schema/src/parser/index.ts (1)
frontend/packages/schema/src/parser/liam/index.ts (1)
processor
(23-37)
frontend/packages/erd-core/src/schemas/hash/schemas.test.ts (1)
frontend/packages/erd-core/src/schemas/hash/schemas.ts (1)
hashSchema
(3-9)
frontend/internal-packages/agent/src/createGraph.ts (1)
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts (1)
convertRequirementsToPrompt
(5-51)
frontend/packages/schema/src/parser/liam/index.test.ts (1)
frontend/packages/schema/src/parser/liam/index.ts (1)
processor
(23-37)
frontend/internal-packages/schema-bench/src/cli/uploadDatasetToLangsmith.ts (6)
- frontend/internal-packages/schema-bench/src/langsmith/types.ts (1): LangSmithDatasetConfig (12-15)
- frontend/internal-packages/schema-bench/src/cli/utils/loadJsonFiles.ts (1): loadJsonFiles (17-71)
- frontend/internal-packages/schema-bench/src/cli/utils/parseArgs.ts (1): parseArgs (17-37)
- frontend/internal-packages/schema-bench/src/cli/utils/workspace.ts (1): getWorkspacePath (14-16)
- frontend/internal-packages/schema-bench/src/cli/utils/selectTargetDatasets.ts (1): selectTargetDatasets (13-28)
- frontend/internal-packages/schema-bench/src/cli/utils/filterAndResolveDatasets.ts (1): filterAndResolveDatasets (10-30)
frontend/packages/schema/src/parser/index.test.ts (1)
frontend/packages/schema/src/parser/index.ts (1)
parse
(12-42)
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts (1)
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts (1)
convertRequirementsToPrompt
(5-51)
frontend/internal-packages/schema-bench/src/cli/executeLiamDbShared.ts (1)
frontend/internal-packages/schema-bench/src/cli/utils/loadJsonFiles.ts (1)
loadJsonFiles
(17-71)
frontend/internal-packages/schema-bench/src/langsmith/schemaEvaluator.ts (1)
frontend/internal-packages/schema-bench/src/evaluate/evaluate.ts (1)
evaluate
(220-284)
frontend/apps/app/components/CommonLayout/GlobalNav/GlobalNav.tsx (1)
frontend/packages/ui/src/icons/index.ts (1)
Link
(40-40)
frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.test.ts (1)
frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.ts (1)
getTableLinkHref
(3-7)
frontend/internal-packages/schema-bench/src/cli/executeOpenaiUnified.ts (1)
frontend/internal-packages/schema-bench/src/cli/utils/loadJsonFiles.ts (1)
loadJsonFiles
(17-71)
frontend/internal-packages/schema-bench/src/cli/evaluateWithLangsmith.ts (6)
- frontend/internal-packages/schema-bench/src/langsmith/types.ts (2): LangSmithInput (3-6), LangSmithOutput (8-10)
- frontend/internal-packages/schema-bench/src/langsmith/schemaEvaluator.ts (1): schemaEvaluator (20-61)
- frontend/internal-packages/schema-bench/src/cli/utils/parseArgs.ts (1): parseArgs (17-37)
- frontend/internal-packages/schema-bench/src/cli/utils/workspace.ts (1): getWorkspacePath (14-16)
- frontend/internal-packages/schema-bench/src/cli/utils/selectTargetDatasets.ts (1): selectTargetDatasets (13-28)
- frontend/internal-packages/schema-bench/src/cli/utils/filterAndResolveDatasets.ts (1): filterAndResolveDatasets (10-30)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: frontend-lint
- GitHub Check: frontend-ci
- GitHub Check: Supabase Preview
- GitHub Check: agent-deep-modeling
- GitHub Check: PGTap Tests
- GitHub Check: security-review
🔇 Additional comments (35)
frontend/internal-packages/agent/src/createGraph.ts (2)
1-1: LGTM! Using the library's type guard.
The import of `isHumanMessage` from `@langchain/core/messages` is the idiomatic approach and replaces manual type checks.

27-33: LGTM! Clean extraction of user input.
The logic correctly extracts the original user request from the first `HumanMessage` and passes it to the prompt converter. The use of the `.text` property and the fallback to an empty string are both appropriate.

frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts (2)
5-9: LGTM! Clean function signature update.
The addition of the `userInput` parameter extends the function's capability to include the original user request in the prompt, improving context for downstream consumers.

33-50: LGTM! Improved prompt formatting.
The refactoring improves readability by:

- Extracting `testCasesSection` to a named variable
- Using markdown headers (##) for better structure
- Ordering sections as Session Goal → Original User Request → Test Cases
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts (1)
59-269: LGTM! Comprehensive test coverage.
All test cases have been correctly updated to:

- Pass the new `userInput` parameter
- Include the "## Original User Request" section in snapshots
- Verify the new prompt structure across various scenarios (empty testcases, empty goal, schemaIssues filtering)

The test coverage remains thorough and validates the new functionality.

frontend/internal-packages/db/supabase/database.types.ts (1)

763-766: No `l2_normalize` usage found—type change safe.
Only SQL grants reference this function; no code invocations detected.

frontend/apps/app/components/PublicLayout/PublicGlobalNav/PublicGlobalNav.tsx (1)
8-8: LGTM!
The `enableHover={false}` prop correctly disables hover expansion for the public navigation, which is appropriate since public users don't need the expanded navigation features.

frontend/apps/app/components/CommonLayout/GlobalNav/GlobalNav.tsx (2)

8-8: Note potential naming confusion.
While this import is correct, be aware that `@liam-hq/ui` also exports a `Link` icon. The distinct import paths prevent actual conflicts, but developers might find this confusing when both are used in the same file.

32-39: Ensure visible keyboard focus styles on `.item`.
No `:focus` or `:focus-visible` rules were found in Item.module.css; please verify and add clear focus indicators for keyboard navigation.

frontend/packages/erd-core/src/schemas/index.ts (1)

1-1: LGTM!
The export addition is straightforward and follows the existing pattern for re-exporting modules.
frontend/packages/cli/src/cli/erdCommand/runPreprocess.test.ts (1)
75-75: LGTM!
The test expectation correctly includes the new "liam" format in the supported formats list, aligning with the broader Liam schema format support introduced in this PR.

frontend/packages/cli/src/cli/index.test.ts (1)

54-54: LGTM!
The format description correctly includes "liam" in the list of supported formats, consistent with the Liam schema format support added in this PR.
frontend/packages/erd-core/src/features/erd/components/ERDContent/components/TableNode/TableDetail/Columns/ColumnsItem/ColumnsItem.tsx (1)
10-10: LGTM!
The refactoring to use the centralized `getTableColumnElementId` utility improves code maintainability by eliminating duplication. The function signature and usage are correct.

Also applies to: 33-33
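For context, a plausible reconstruction of that helper, assuming it follows the `${table}__columns__${column}` pattern used by the hash schema elsewhere in this PR (the actual implementation may differ):

```ts
// Hypothetical sketch of the centralized helper; the real code lives in
// frontend/packages/erd-core/src/features/erd/utils/url/getTableColumnElementId.ts.
export const getTableColumnElementId = (
  tableName: string,
  columnName: string,
): string => `${tableName}__columns__${columnName}`
```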
frontend/internal-packages/schema-bench/package.json (1)
9-9: LGTM!
The additions support LangSmith integration for schema benchmarking:

- `@liam-hq/neverthrow` dependency provides workspace-specific neverthrow utilities
- New scripts follow the existing pattern for tsx execution

Also applies to: 34-35
frontend/packages/erd-core/src/features/erd/components/ERDRenderer/CommandPalette/CommandPaletteOptions/TableOptions.tsx (1)
6-6: LGTM!
The refactoring to use the centralized `getTableLinkHref` utility improves code maintainability by eliminating duplication. The import and usage are correct.
37-40
: LGTM!The new 'liam' format case follows the established pattern for dynamic imports and processor invocation, maintaining consistency with existing format handlers.
.changeset/add-liam-schema-format.md (1)
1-8
: LGTM!The changeset documentation clearly describes the new Liam Schema format support and specifies appropriate version bumps.
However, note the inconsistency between the PR objectives (which mention "agent package improvements and prompt format changes") and the actual changes (which introduce Liam schema format support, URL utilities, and hash modules). Consider updating the PR description to accurately reflect the implemented changes.
frontend/packages/erd-core/src/features/erd/utils/index.ts (1)
8-8: LGTM!
The new URL utilities re-export is correctly structured and follows the existing barrel export pattern.

frontend/internal-packages/schema-bench/src/cli/utils/index.ts (1)

5-5: LGTM!
The re-export target update from `loadInputFiles` to `loadJsonFiles` aligns with the broader refactoring mentioned in the PR objectives.

frontend/packages/erd-core/src/schemas/hash/index.ts (1)

1-2: LGTM!
The barrel exports correctly surface the hash schema and types for public consumption.

frontend/packages/erd-core/src/features/erd/utils/url/index.ts (1)

1-2: LGTM!
The barrel exports correctly expose the URL utility functions following the established module pattern.

frontend/packages/schema/src/parser/supportedFormat/schema.ts (1)

9-9: LGTM!
The addition of 'liam' to the supported formats picklist correctly extends the type and aligns with the new Liam schema parsing capability introduced in this PR.

frontend/packages/schema/src/parser/liam/index.ts (2)

1-10: LGTM!
The imports and `parseJson` wrapper are correct. Using `Result.fromThrowable` provides type-safe error handling for JSON parsing.

23-37: LGTM!
The processor function correctly chains JSON parsing and schema validation. Error handling is appropriate with a sensible default fallback.
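For readers unfamiliar with that chain, here is a minimal sketch of the parse-then-validate flow, with a simplified schemaSchema and empty-schema fallback standing in for the real exports:

```ts
import * as v from 'valibot'
import { err, ok, Result } from 'neverthrow'

// Simplified stand-ins for the real schema definition and fallback value.
const schemaSchema = v.object({ tables: v.record(v.string(), v.unknown()) })
const emptySchema: v.InferOutput<typeof schemaSchema> = { tables: {} }

// Wrap JSON.parse so a throw becomes an err() instead of an exception.
const parseJson = Result.fromThrowable(
  (input: string): unknown => JSON.parse(input),
  (error) => (error instanceof Error ? error : new Error(String(error))),
)

export const processor = (input: string) => {
  const result = parseJson(input).andThen((data) => {
    const parsed = v.safeParse(schemaSchema, data)
    if (parsed.success) return ok(parsed.output)
    const message = parsed.issues.map((issue) => issue.message).join(', ')
    return err(new Error(`Invalid Liam Schema format: ${message}`))
  })

  // On failure, fall back to an empty schema and surface the error.
  return result.match(
    (value) => ({ value, errors: [] as Error[] }),
    (error) => ({ value: emptySchema, errors: [error] }),
  )
}
```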
frontend/packages/erd-core/src/schemas/hash/types.ts (1)
1-4: LGTM!
The Hash type correctly derives from the hashSchema using valibot's InferOutput utility.
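The derivation the comment describes is essentially a one-liner; a sketch using the hashSchema shown in the nitpick above:

```ts
import * as v from 'valibot'
import { hashSchema } from './schemas'

// Deriving the type from the runtime validator keeps the two from drifting apart.
export type Hash = v.InferOutput<typeof hashSchema>
```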
frontend/internal-packages/schema-bench/src/cli/executeOpenaiUnified.ts (3)
3-3: LGTM!
Adding `join` import improves path handling for cross-platform compatibility.

14-14: LGTM!
The refactor from `loadInputFiles` to `loadJsonFiles` with the updated path structure is consistent and improves code organization. Using `join` for path composition ensures cross-platform compatibility.

Also applies to: 56-61

78-82: LGTM!
Renaming `input` to `data` in the batch processing aligns with the updated `loadJsonFiles` API and improves naming consistency.

frontend/packages/schema/src/parser/liam/index.test.ts (3)

5-65: LGTM!
The test thoroughly validates the successful parsing of a Liam schema with tables and columns. Using test builders (aSchema, aTable, aColumn) makes the assertions clear and maintainable.

67-90: LGTM!
Good coverage for enum handling in the schema.

92-112: LGTM!
Error handling tests properly validate that both invalid JSON and invalid schema structures are caught and return appropriate error responses with the default empty schema.

frontend/internal-packages/schema-bench/src/cli/executeLiamDbShared.ts (3)

3-3: LGTM!
Adding `join` import for improved path handling.

8-8: LGTM!
The refactor to `loadJsonFiles` with proper path composition is consistent with the broader CLI refactoring effort. The legacy format support in the normalize function maintains backward compatibility.

Also applies to: 55-60

79-83: LGTM!
Consistent renaming of `input` to `data` in batch processing aligns with the updated API.

frontend/packages/erd-core/src/features/erd/utils/url/getTableLinkHref.ts (1)

3-7: All uses of getTableLinkHref are in browser-only React components and JSDOM-based tests, so there’s no SSR break.
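A sketch consistent with the tests quoted earlier (an empty search yields `?active=users`, and existing parameters are preserved); the actual implementation may differ in details:

```ts
// Build the table link from the current browser location (client-side only).
export const getTableLinkHref = (tableName: string): string => {
  const params = new URLSearchParams(window.location.search)
  params.set('active', tableName)
  return `?${params.toString()}`
}
```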
```ts
// Filter out executor options (--xxx) for parseArgs
const datasetArgs = process.argv.filter((arg) => !arg.startsWith('--'))

// Parse dataset flags using existing utility
const cliOptions = parseArgs(datasetArgs)

// Parse executor and evaluation options
const options = parseExecutorAndOptions(process.argv)
```
Don’t drop `--all` / `--<dataset>` flags

Filtering out every argument that starts with `--` also strips `--all` and any `--<dataset>` tokens that `parseArgs` used to accept, so the CLI now exits with “No datasets found” for the standard `--all` workflow. Keep those flags and only remove the executor-specific options.
- const datasetArgs = process.argv.filter((arg) => !arg.startsWith('--'))
+ const datasetArgs = process.argv.filter(
+ (arg) =>
+ !arg.startsWith('--openai') &&
+ !arg.startsWith('--liamdb') &&
+ !arg.startsWith('--num-repetitions=') &&
+ !arg.startsWith('--max-concurrency=')
+ )
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
// Filter out executor options (--xxx) for parseArgs
const datasetArgs = process.argv.filter(
  (arg) =>
    !arg.startsWith('--openai') &&
    !arg.startsWith('--liamdb') &&
    !arg.startsWith('--num-repetitions=') &&
    !arg.startsWith('--max-concurrency=')
)

// Parse dataset flags using existing utility
const cliOptions = parseArgs(datasetArgs)

// Parse executor and evaluation options
const options = parseExecutorAndOptions(process.argv)
```
🤖 Prompt for AI Agents
In frontend/internal-packages/schema-bench/src/cli/evaluateWithLangsmith.ts
around lines 128-136, the current filter removes every token starting with `--`,
which strips valid dataset flags like `--all` and `--<dataset>`; instead call
parseExecutorAndOptions(process.argv) first, extract the set of
executor/evaluation option names it recognizes, and then build datasetArgs by
filtering process.argv to remove only those executor-specific flags (and their
values) while preserving `--all` and any `--<dataset>` tokens so parseArgs
receives the intended dataset flags.
```ts
const getOrCreateDataset = (client: Client, datasetName: string) => {
  return fromPromise(client.readDataset({ datasetName })).orElse(() =>
    fromPromise(client.createDataset(datasetName)),
  )
```
Fix dataset creation call

`Client#createDataset` expects an options object (e.g. `{ name }`). Passing the bare string sends malformed JSON to LangSmith and the first upload for a missing dataset fails with a 422. Wrap the dataset name in the expected object.
- return fromPromise(client.createDataset(datasetName))
+ return fromPromise(client.createDataset({ name: datasetName }))
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In frontend/internal-packages/schema-bench/src/cli/uploadDatasetToLangsmith.ts
around lines 25 to 28, the call to client.createDataset incorrectly passes the
datasetName string directly which produces malformed JSON; change the call to
pass an options object with the name field (e.g. client.createDataset({ name:
datasetName })) so the API receives the expected payload and the initial dataset
creation no longer fails with a 422.
frontend/packages/erd-core/src/features/erd/utils/url/getTableColumnElementId.ts
```diff
 ANON_KEY_LINE=$(echo "$STATUS_OUTPUT" | grep "^ANON_KEY=")

-# Clean up and extract just the key
-# Remove "anon key: " prefix
-ANON_KEY=$(echo "$ANON_KEY_LINE" | sed 's/.*anon key: \(.*\)/\1/' | tr -d ' ')
+# Clean up and extract just the key value
+# Remove ANON_KEY=" prefix and trailing "
+ANON_KEY=$(echo "$ANON_KEY_LINE" | sed 's/^ANON_KEY="\(.*\)"$/\1/')
```
Handle unquoted env output when stripping ANON_KEY.

`sed 's/^ANON_KEY="\(.*\)"$/\1/'` only works if the CLI wraps the value in double quotes. When `supabase status -o env` emits `ANON_KEY=<value>` (no quotes, which is the default), the substitution fails and the original string `ANON_KEY=<value>` propagates to `.env`, producing `NEXT_PUBLIC_SUPABASE_ANON_KEY=ANON_KEY=<value>`. That breaks downstream consumers expecting just the key. Please strip the prefix without assuming quotes, e.g.:
-ANON_KEY=$(echo "$ANON_KEY_LINE" | sed 's/^ANON_KEY="\(.*\)"$/\1/')
+ANON_KEY=${ANON_KEY_LINE#ANON_KEY=}
+ANON_KEY=${ANON_KEY%\"}
+ANON_KEY=${ANON_KEY#\"}
```diff
 SERVICE_KEY_LINE=$(echo "$STATUS_OUTPUT" | grep "^SERVICE_ROLE_KEY=")

-# Clean up and extract just the key
-# Remove "service_role key: " prefix
-SERVICE_KEY=$(echo "$SERVICE_KEY_LINE" | sed 's/.*service_role key: \(.*\)/\1/' | tr -d ' ')
+# Clean up and extract just the key value
+# Remove SERVICE_ROLE_KEY=" prefix and trailing "
+SERVICE_KEY=$(echo "$SERVICE_KEY_LINE" | sed 's/^SERVICE_ROLE_KEY="\(.*\)"$/\1/')
```
Don’t assume the service key is quoted.

`sed 's/^SERVICE_ROLE_KEY="\(.*\)"$/\1/'` fails when the env output is `SERVICE_ROLE_KEY=<value>` (no quotes, which is what `supabase status -o env` returns). The prefix survives, and `.env` ends up with `SUPABASE_SERVICE_ROLE_KEY=SERVICE_ROLE_KEY=<value>`, making the key unusable. Strip the prefix in a quote-agnostic way, for example:
-SERVICE_KEY=$(echo "$SERVICE_KEY_LINE" | sed 's/^SERVICE_ROLE_KEY="\(.*\)"$/\1/')
+SERVICE_KEY=${SERVICE_KEY_LINE#SERVICE_ROLE_KEY=}
+SERVICE_KEY=${SERVICE_KEY%\"}
+SERVICE_KEY=${SERVICE_KEY#\"}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```bash
SERVICE_KEY_LINE=$(echo "$STATUS_OUTPUT" | grep "^SERVICE_ROLE_KEY=")

# Clean up and extract just the key value
# Remove SERVICE_ROLE_KEY= prefix and any surrounding quotes
SERVICE_KEY=${SERVICE_KEY_LINE#SERVICE_ROLE_KEY=}
SERVICE_KEY=${SERVICE_KEY%\"}
SERVICE_KEY=${SERVICE_KEY#\"}
```
🤖 Prompt for AI Agents
In scripts/extract-supabase-service-key.sh around lines 8–12, the current sed
expects SERVICE_ROLE_KEY to be quoted and therefore leaves the prefix when
supabase outputs an unquoted value; change the extraction to be quote-agnostic
by stripping the SERVICE_ROLE_KEY= prefix first and then removing any
surrounding quotes if present (or use bash parameter expansion to remove the
prefix and then trim leading/trailing double quotes); also handle the case of an
empty match and fail early with a clear error if the key is missing.
- Import isHumanMessage from @langchain/core/messages
- Replace msg._getType() === 'human' with isHumanMessage() helper
- Replace content as string with message.text property

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>

- Change prompt format to use ## markdown headers
- Reorder sections: Session Goal → Original User Request → Test Cases
- Update all test snapshots to match new format

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Pull Request Overview
This PR improves the DB Agent prompt format by adding the original user request alongside analyzed requirements. This enhancement allows the agent to reference the user's exact words, providing better context for generating responses.
- Add user input extraction from HumanMessage and pass it to prompt generator
- Update prompt format to use markdown headers for better structure
- Use LangGraph's `isHumanMessage()` helper and `message.text` property for cleaner code
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
| File | Description |
| --- | --- |
| convertAnalyzedRequirementsToPrompt.ts | Add userInput parameter and restructure prompt with markdown headers including Original User Request section |
| convertAnalyzedRequirementsToPrompt.test.ts | Update all test cases to include userInput parameter and verify new markdown format |
| createGraph.ts | Extract user input from HumanMessage using LangGraph helper and pass to prompt generator |
Tip: Customize your code reviews with copilot-instructions.md. Create the file or learn how to get started.
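For orientation, the wiring described above boils down to roughly the sketch below. The types and the convertRequirementsToPrompt signature are simplified stand-ins rather than the exact code in createGraph.ts, and the state shape is assumed for illustration:

```ts
import { isHumanMessage } from '@langchain/core/messages'
import type { BaseMessage } from '@langchain/core/messages'

// Simplified, hypothetical shapes for illustration only.
type AnalyzedRequirements = { goal: string; testcases: Record<string, string[]> }
type WorkflowState = {
  messages: BaseMessage[]
  analyzedRequirements: AnalyzedRequirements
}

declare const convertRequirementsToPrompt: (
  requirements: AnalyzedRequirements,
  userInput: string,
) => string

const buildDbAgentPrompt = (state: WorkflowState): string => {
  // The first HumanMessage in the thread is treated as the original user request.
  // The PR reads .text here; see the review thread below on .text vs .content.
  const userInput =
    state.messages.find((msg) => isHumanMessage(msg))?.text ?? ''
  return convertRequirementsToPrompt(state.analyzedRequirements, userInput)
}
```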
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- frontend/internal-packages/agent/src/createGraph.ts (2 hunks)
- frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts (4 hunks)
- frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Name utility files in camelCase (e.g., mergeSchema.ts)
Files:
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts
frontend/internal-packages/agent/src/createGraph.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use TypeScript/TSX across the codebase
**/*.{ts,tsx}: Use runtime type validation with valibot for external data validation
Prefer early returns for readability
Write simple, direct code without backward compatibility shims; update all call sites together
Use const-assigned arrow functions instead of function declarations for small utilities (e.g., const toggle = () => {})
Follow existing import patterns and tsconfig path aliases
Files:
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts
frontend/internal-packages/agent/src/createGraph.ts
frontend/internal-packages/**
📄 CodeRabbit inference engine (AGENTS.md)
Infra and tooling (e2e, configs, storybook, agent) live under frontend/internal-packages
Files:
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts
frontend/internal-packages/agent/src/createGraph.ts
**/*.test.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
Write unit tests with filenames ending in .test.ts or .test.tsx colocated near source
Files:
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts
🧬 Code graph analysis (2)
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts (1)
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts (1)
convertRequirementsToPrompt (5-51)
frontend/internal-packages/agent/src/createGraph.ts (1)
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts (1)
convertRequirementsToPrompt (5-51)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Supabase Preview
- GitHub Check: Supabase Preview
- GitHub Check: Supabase Preview
🔇 Additional comments (6)
frontend/internal-packages/agent/src/createGraph.ts (2)

1-1: LGTM: Clean import addition. The addition of isHumanMessage aligns with the PR objective to use LangGraph's helper function for cleaner code.

31-35: LGTM: Correct parameter passing. The function call correctly passes the extracted userInput as the second parameter, matching the updated function signature in convertAnalyzedRequirementsToPrompt.ts.

frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.ts (3)

5-9: LGTM: Clean parameter addition. The addition of the userInput parameter aligns with the PR objective to include the original user request in the DB Agent prompt.

33-38: LGTM: Good readability improvement. Extracting the test cases section into a separate variable improves code readability and makes the final prompt construction clearer.

40-50: LGTM: Well-structured prompt format. The new prompt format successfully:
- Uses Markdown headers (##) for better structure as per PR objectives
- Includes the "Original User Request" section populated from userInput
- Maintains clear separation between sections
- Properly trims the final output

A simplified sketch of this layout follows below.
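The sketch below illustrates the section ordering only; the real builder in convertAnalyzedRequirementsToPrompt.ts may word its headings and format test cases differently, and the ### category sub-headers are an assumption:

```ts
type Testcases = Record<string, string[]>

// Illustrative stand-in: Session Goal → Original User Request → Test Cases.
const buildPrompt = (
  goal: string,
  userInput: string,
  testcases: Testcases,
): string => {
  // Assumed rendering of categories and cases; not copied from the repo.
  const testcasesSection = Object.entries(testcases)
    .map(([category, cases]) =>
      [`### ${category}`, ...cases.map((c) => `- ${c}`)].join('\n'),
    )
    .join('\n\n')

  return [
    '## Session Goal',
    goal,
    '',
    '## Original User Request',
    userInput,
    '',
    '## Test Cases',
    testcasesSection,
  ]
    .join('\n')
    .trim()
}
```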
frontend/internal-packages/agent/src/db-agent/utils/convertAnalyzedRequirementsToPrompt.test.ts (1)

59-270: LGTM: Comprehensive test coverage maintained. All test cases have been correctly updated to:
- Pass the new userInput parameter to convertRequirementsToPrompt
- Verify the inclusion of the "## Original User Request" section in the output
- Maintain comprehensive coverage across various scenarios (empty testcases, empty goal, schemaIssues filtering)

The test suite thoroughly validates the new functionality while preserving existing test coverage.
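As an illustration of the kind of assertion involved, a minimal test in the same spirit; the fixture values and the requirements shape here are hypothetical, not copied from the real test file:

```ts
import { describe, expect, it } from 'vitest'
import { convertRequirementsToPrompt } from './convertAnalyzedRequirementsToPrompt'

describe('convertRequirementsToPrompt', () => {
  it('includes the original user request under its own header', () => {
    // Hypothetical minimal fixture.
    const requirements = { goal: 'Manage application users', testcases: {} }
    const userInput = 'I want a simple user management system'

    const prompt = convertRequirementsToPrompt(requirements, userInput)

    expect(prompt).toContain('## Original User Request')
    expect(prompt).toContain(userInput)
  })
})
```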
// Extract user input from the first HumanMessage
const userInput =
  state.messages.find((msg) => isHumanMessage(msg))?.text || ''
🧩 Analysis chain
Verify the .text property on HumanMessage.
The code uses .text to extract the message content from a HumanMessage. LangGraph messages typically expose a .content property instead. Please verify that .text is the correct property to use here.
Run the following script to check the property usage across the codebase:
Additionally, search the web to confirm the correct property:
🏁 Script executed:
#!/bin/bash
# Description: Check how HumanMessage content is accessed elsewhere in the codebase
# Search for other usages of HumanMessage content extraction
rg -n --type=ts -A2 -B2 'HumanMessage.*\.(text|content)' frontend/internal-packages/agent/
Length of output: 467
🌐 Web query:
What property should be used to access the text content of a HumanMessage in LangChain/LangGraph TypeScript? Is it .text or .content?
💡 Result:
Use the .content property. LangChain message classes (including HumanMessage) expose their payload as .content (which may be a string or structured content blocks); there is no top-level .text property on HumanMessage. [1][2]
Sources:
[1] LangChain JS Messages docs — "Messages have a content attribute...".
[2] LangChain concepts / message types — "All messages have a role, content, and response_metadata property."
Replace .text with .content on HumanMessage
In frontend/internal-packages/agent/src/createGraph.ts:29, change ?.text to ?.content when extracting userInput.
🤖 Prompt for AI Agents
In frontend/internal-packages/agent/src/createGraph.ts around lines 27 to 29,
the code extracts the user's input from the first HumanMessage using ?.text
which is incorrect; update the property access to ?.content so it reads the
message body correctly (i.e., find the first HumanMessage and use its content
field, falling back to an empty string).
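If the suggestion is adopted, the call site has to account for content being either a plain string or an array of content blocks. A minimal sketch, assuming the helper name and block handling are ours rather than anything in the repo:

```ts
import { isHumanMessage } from '@langchain/core/messages'
import type { BaseMessage, MessageContent } from '@langchain/core/messages'

// Collapse message content (string or structured blocks) into plain text.
const contentToText = (content: MessageContent): string => {
  if (typeof content === 'string') return content
  return content
    .map((block) =>
      'text' in block && typeof block.text === 'string' ? block.text : '',
    )
    .join('')
}

const extractUserInput = (messages: BaseMessage[]): string => {
  const firstHuman = messages.find((msg) => isHumanMessage(msg))
  return firstHuman ? contentToText(firstHuman.content) : ''
}
```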
👍👍
Issue
Why is this change needed?
This PR adds the original user request to the DB Agent prompt, allowing the agent to reference the user's exact words alongside the analyzed requirements.
Changes:
- Extract user input from the first HumanMessage and pass it to the prompt generator
- Use LangGraph's isHumanMessage() helper and message.text property for cleaner code

All 205 tests passing ✅
Summary by CodeRabbit