feat(schema): Add support for Liam Schema format JSON files #3708
Conversation
- Add 'liam' to supported format schema
- Implement liam processor that validates JSON against schemaSchema
- Add comprehensive tests for liam format parser
- No conversion needed - Liam format is the internal target format

Fixes #5802

Co-Authored-By: hirotaka.miyagi@route06.co.jp <h.miyagi.cnw@gmail.com>
🦋 Changeset detected. Latest commit: 3276e9b. The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
Updates to Preview Branch (devin/1759816934-support-liam-schema-format) ↗︎
Tasks are run on every commit but only new migration files are pushed.
View logs for this Workflow Run ↗︎
Important: Review skipped. Bot user detected.
🤖 Agent Deep Modeling Execution
Processing time: 4m 12s

Command Output
RUN v3.2.4 /home/runner/work/liam/liam/frontend/internal-packages/agent
(node:7619) ExperimentalWarning: WASI is an experimental feature and might change at any time

✅ [INFO] 2025-10-07T08:00:36.361Z ⏺ (pm): Purpose: Save analyzed requirements artifact; Inputs: goal summary and categorized testcase list with titles and SQL operation types.

🧠 Thinking (pm)...
- Structuring BRD Requirements: review the input, turn ambiguities into clear requirements, structure them into the expected BRD format, and save the analyzed requirements with the tool, passing a summary rather than the full BRD and keeping verbosity moderate.
- Planning BRD Sections: clarify scope and assumptions for the "simple" user management system; define business goals, success criteria, actors, and user roles; specify data entities, validation rules, and core workflows (registration, login, password management); outline the access control model, audit requirements, and non-functional needs; include test case categories with positive and negative scenarios.
- Overview and Scope: manage users for internal apps with CRUD on users, roles, authentication, and password policies; registration and login via email and password; roles Admin, Manager, and User with per-resource permission checks; account statuses active, pending, disabled, and locked; password reset via token; MFA as an optional later feature. Out of scope: SSO, social logins, OAuth providers, complex organizational hierarchies, and UI design specifics.
- Data Model for DB Agent: users (id UUID, email, password hash, names, phone, status, timestamps), roles (id, unique name, description, system-role flag), a user_roles junction table, optionally a permissions entity for role-permission mapping, and sessions or auth tokens for managing login sessions.
- Designing Functional and Non-Functional Requirements: optional server-side sessions to support logout-all and "force logout"; email verification and password reset tokens; user registration, authentication, role management, account lockout, and user search; security, privacy, availability, and performance targets; acceptance criteria, data validations, reporting metrics, API endpoints, and test case categories, each covering different SQL operations.
- Outlining the BRD Sections: executive summary, scope, assumptions, success metrics, actors and roles, data entities with attributes, business rules, functional and non-functional requirements, data lifecycle, audit logging, access controls, and future considerations (MFA, SSO); simple headings and bullet lists, with test cases grouped by user registration, email verification, authentication, password management, profile management, roles, and related areas.
- Refining Test Cases and BRD Details: 29 test cases in total, grouped by the categories above; include entity attributes, unique indexes, and constraints; wrap purpose and inputs around significant tool calls before saving the analyzed requirements.
- Drafting Database Structures and Requirements: Role, UserRole, and token entities for email verification and password resets, each with constraints and indices; access rules for Admin, Manager, and User; map test cases to user registration and creation so QA can verify the functionality.
- Creating the BRD: include a short checklist, the sections above, the data model overview, and validation rules, then save everything with the tool and confirm completion.
🔧 saveRequirementsToArtifactTool(goal: "Prepare a product...", testcases: {9 keys})
⎿ unknown: Requirements saved successfully to artifact
🔧 schemaDesignTool()

⏺ (qa): generated SQL test cases through repeated saveTestcase() calls (the raw log interleaves the SQL fragments); the recoverable cases include:
- A login-flow UPDATE on users (last_login_at, failed_login_attempts, lockout_until_at, updated_at) guarded by an EXISTS check that a non-used, non-expired auth_tokens row exists for the user, with notes to bind the normalized login email and to drop the token check if the business rule differs.
- A user-creation INSERT that returns id, email, email_normalized, status, and created_at, with notes to supply a real password hash, respect locale/timezone constraints, and optionally assign a default role and write an audit_logs entry.
- "Changing email clears verification and requires re-verification": UPDATE users to store the new email, lowercase email_normalized, set is_email_verified = FALSE and updated_at = NOW(), and return the updated row.
- "Password Management - Reset password with valid token updates hash and invalidates token": in a single transaction, lock a valid auth_tokens row of type 'password_reset', update the user's password_hash and password_algorithm_version, clear failed_login_attempts, set last_login_at, mark the token as used, and insert an audit_logs entry for the reset.

(node:7619) Warning: The 'import-in-the-middle' hook has already been initialized

⎿ unknown: 18/29 test cases passed, 11 failed
❌ Test Case: Reject user creation with duplicate email (case-insensitive) Error:
- Update format option description test to expect 'liam' format
- Update error message test to include 'liam' in expected error text
- Fixes CI failure in frontend-ci check

Co-Authored-By: hirotaka.miyagi@route06.co.jp <h.miyagi.cnw@gmail.com>
Force-pushed the branch from 8cceb3e to 2010001.
Replace try-catch error handling with neverthrow's Result pattern for cleaner functional error composition, using Result.fromThrowable and explicit Result handling.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
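As a hedged illustration of the pattern this commit describes (not the actual diff), wrapping a throwing call such as JSON.parse with neverthrow's Result.fromThrowable turns exceptions into explicit Result values:

```typescript
import { Result } from 'neverthrow'

// Wrap JSON.parse, which throws on invalid input, into a function that returns a Result.
const safeJsonParse = Result.fromThrowable(
  JSON.parse,
  (e) => (e instanceof Error ? e : new Error('Invalid JSON')),
)

// Explicit Result handling replaces try-catch.
const message = safeJsonParse('{"tables": {}}').match(
  (value) => `parsed keys: ${Object.keys(value).join(', ')}`,
  (error) => `parse failed: ${error.message}`,
)
console.log(message)
```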
Pull Request Overview
This PR adds support for Liam Schema format JSON files to the ERD page parser, enabling users to import schema files that are already in Liam's internal format without requiring conversion. Users can specify this format using the ?format=liam query parameter.
Key changes:
- Added 'liam' as a supported format with direct JSON validation against the existing schema structure (see the sketch after this list)
- Created a new liam processor that performs JSON parsing and validation without data transformation
- Updated CLI tests and documentation to include the new format
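To make the first item concrete, here is a rough Valibot sketch of a supported-format schema with 'liam' added; the other format names and the exact helpers used are illustrative assumptions, not necessarily what supportedFormatSchema actually contains.

```typescript
import * as v from 'valibot'

// 'liam' joins the accepted ?format= values; the other entries here are assumed for illustration.
export const supportedFormatSchema = v.picklist([
  'schemarb',
  'postgres',
  'prisma',
  'tbls',
  'liam',
])

export type SupportedFormat = v.InferOutput<typeof supportedFormatSchema>
```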
Reviewed Changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated no comments.
File | Description |
---|---|
frontend/packages/schema/src/parser/supportedFormat/schema.ts | Added 'liam' to the supported format schema |
frontend/packages/schema/src/parser/liam/index.ts | New liam processor implementation with JSON parsing and validation |
frontend/packages/schema/src/parser/liam/index.test.ts | Comprehensive test suite for the liam processor |
frontend/packages/schema/src/parser/index.ts | Added routing for 'liam' format in main parser |
frontend/packages/schema/src/parser/index.test.ts | Integration test for liam format parsing |
frontend/packages/cli/src/cli/index.test.ts | Updated CLI test to include 'liam' in format description |
frontend/packages/cli/src/cli/erdCommand/runPreprocess.test.ts | Updated error message test to include 'liam' format |
.changeset/add-liam-schema-format.md | Changeset for version bumping with minor bump for schema package |
👍
Issue
Why is this change needed?
This PR adds support for Liam Schema format JSON files in the ERD page parser, enabling users to directly import schema files that are already in Liam's internal format without requiring conversion.
https://liam-app-git-devin-1759816934-support-liam-schema-format-liambx.vercel.app/erd/p/github.com/liam-hq/liam/blob/1789684aca9e708d0c496b2c37b581b1dca34bef/pm-testcases-case-001.json?format=liam
format=liam is required. Since we haven't modified the auto-detection logic this time, the tbls parser will be applied by default when JSON is passed. We had to consider how to handle json and write user documentation; currently, we intend to proceed with this as a developer feature without specific documentation.

Changes
Core Implementation
- Added 'liam' to the supported format schema (supportedFormatSchema)
- Created a new liam processor (frontend/packages/schema/src/parser/liam/index.ts) that parses the input JSON and validates it against schemaSchema using Valibot

Key Design Decision
The Liam format processor requires no data transformation: it only performs JSON parsing and validation. This assumes Liam format files are already in the target Schema structure (tables/enums/extensions as Records).
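For illustration only, a minimal sketch of what such a processor could look like, assuming schemaSchema is the existing Valibot definition of the internal Schema type; the import path, return shape, and error handling below are assumptions rather than the repository's actual code.

```typescript
import * as v from 'valibot'
// Assumed import path for the existing Valibot schema of the internal Schema type.
import { schemaSchema } from '../../schema/index.js'

const emptySchema = { tables: {}, enums: {}, extensions: {} }

// Parse the input string as JSON and validate it against schemaSchema.
// No transformation step: a valid document already has the internal Schema shape.
export const processor = (input: string) => {
  try {
    const result = v.safeParse(schemaSchema, JSON.parse(input))
    return result.success
      ? { value: result.output, errors: [] }
      : { value: emptySchema, errors: result.issues.map((issue) => new Error(issue.message)) }
  } catch (error) {
    // Invalid JSON: fall back to an empty schema, mirroring the other parsers.
    return {
      value: emptySchema,
      errors: [error instanceof Error ? error : new Error(String(error))],
    }
  }
}
```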
Testing
Versioning
- @liam-hq/schema (new feature)
- @liam-hq/cli (test updates)

Human Review Checklist
- Returning { tables: {}, enums: {}, extensions: {} } on errors matches other parsers' behavior

Additional validation:
- The ?format=liam query parameter works as expected

Link to Devin run: https://app.devin.ai/sessions/4c88b00aecdc4ee7961d8f159d2d86b7
Requested by: @MH4GF