content/modernizr/00-preparation/index.en.md (1 addition, 1 deletion)
@@ -53,7 +53,7 @@ Confirm the settings and initialize the connection:
## Step 4: Validating the Integration
-Verify the Bedrock connection is functioning correctly by sending the below test prompt to Cline. If you encounter an error asking you to please wait before trying again, press "proceed anyway" to retry the request.
+Verify the Bedrock connection is functioning correctly by sending the below test prompt to Cline. If you are throttled, press "proceed anyway" to retry the request.
```terminal
Hello and Welcome to this modernization project, can you confirm you can read and list all the files in the workspace?
content/modernizr/01-modernization/index.en.md (1 addition, 1 deletion)
@@ -22,7 +22,7 @@ The e-commerce application follows a standard three-tier architecture:
2. **Frontend Application** - React-based user interface for customer interactions
3. **MySQL Database** - Relational database storing all application data
-You'll need to start both the backend and frontend services to establish a baseline for analysis.
+While your application environment is already preconfigured, you'll still need to start both the backend and frontend services to establish a baseline for analysis.
content/modernizr/01-modernization/workflow-00.en.md (17 additions, 17 deletions)
@@ -8,51 +8,51 @@ chapter: false
## Your 7-Step Journey to Database Modernization
-Think of database modernization like renovating a house while people are still living in it. You can't just tear everything down and start over - you need a careful, step-by-step plan that keeps everything working while you upgrade piece by piece. That's exactly what our modernization workflow does!
+Think of database modernization like renovating a house while people are still living in it. You can't just tear everything down and start over — you need a careful, step-by-step plan that keeps everything working while you upgrade piece by piece. That's exactly what our modernization workflow does!
-Our process consists of seven carefully designed stages, each one building on the previous step. It's like following a recipe - each ingredient needs to be added at the right time and in the right order to get the perfect result.
+Our process consists of seven carefully designed stages, each one building on the previous step. It's like following a recipe — each ingredient needs to be added at the right time and in the right order to get the perfect result.
-## Stage 1: Understanding What You Have - Database Detective Work
+## Stage 1: Understanding What You Have — Database Detective Work
-The first stage is like being a detective investigating the current system. We need to understand everything about how the existing MySQL database works before we can improve it. This involves connecting to the database, examining how fast different operations run, looking at the structure of all the tables, and studying the application code to understand exactly how data flows through the system.
+The first stage is like being a detective investigating the current system. We need to understand everything about how the existing MySQL database works before we can improve it. This involves connecting to the database, examining which predicates queries use and what data they return, measuring how fast different operations run, looking at the structure of all the tables, and studying the application code to understand exactly how data flows through the system.
-Think of it like a mechanic who needs to understand every part of your current engine before they can recommend which parts to upgrade. We use a specialized AI tool (the MySQL MCP server) to help us gather all this information systematically.
+Think of it like doing a walkthrough with a designer before your remodel. They need to understand your taste and how you use your space to fully capture the requirements for the project. We use a specialized AI tool (the MySQL MCP server) to help us gather all this information systematically.
-## Stage 2: Designing the New Blueprint - Creating Your DynamoDB Model
+## Stage 2: Designing the New Blueprint — Creating Your DynamoDB Model
-This is where the real design work happens! Using all the information we gathered in Stage 1, we create a completely new data model designed specifically for DynamoDB. This stage is highly interactive - you'll work closely with the AI to make important decisions about how to structure your data.
+This is where the real design work happens! Using all the information we gathered in Stage 1, we create a completely new data model designed specifically for DynamoDB. This stage is highly interactive — you'll work closely with the AI to make important decisions about how to structure your data.
It's like working with an architect to design your dream house renovation. The AI provides technical expertise and suggestions, but you need to guide the process and make the final decisions about what works best for your specific needs. This collaboration ensures the new design fits your application's requirements.
-## Stage 3: Building the Bridge - Creating a Database Abstraction Layer
+## Stage 3: Building the Bridge — Creating a Database Abstraction Layer
Now we create a special "bridge" layer in your application code that can talk to both the old MySQL database and the new DynamoDB system at the same time. This follows AWS best practices and ensures you can switch between systems safely without breaking anything.
-Think of this like installing a smart electrical panel in your house that can work with both the old wiring and new smart home devices. Everything continues to work normally while you prepare for the upgrade.
+Think of this like renovating the guest room before you renovate the primary bedroom — you'll always have somewhere to sleep. Similarly, with a database abstraction layer, everything continues to work normally while you prepare for the upgrade.
-## Stage 4: Testing the Connection - Validating DynamoDB Integration
+## Stage 4: Testing the Connection — Validating DynamoDB Integration
-In this stage, we set up a local version of DynamoDB and test our bridge layer to make sure everything works correctly. It's like doing a test run of your renovated house systems before you actually move in permanently.
+In this stage, we set up a local version of DynamoDB and test our bridge layer to make sure everything works correctly. It's like the city inspector making sure your renovated house systems are up to code before you complete the project.
We validate that all the connections work properly and that data flows correctly through both systems. This gives us confidence that everything is ready for the next phase.
-## Stage 5: Running Both Systems - Application Refactoring and Dual Writes
+## Stage 5: Running Both Systems — Application Refactoring and Dual Writes
This is the most complex stage, where your application learns to write data to both databases simultaneously. We use a method called "test-driven development," which means we write tests first to ensure everything works correctly, then modify the code to pass those tests.
-During this stage, we also create a special admin control panel that lets you monitor and control the modernization process. It's like having a control room where you can watch both the old and new systems running side by side and manage the transition safely.
+During this stage, we also create a special admin control panel that lets you monitor and control the modernization process. You can watch both the old and new systems running side by side and manage the transition safely.
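The dual-write behavior this stage introduces can be sketched roughly as follows. All names are illustrative and plain dictionaries stand in for the two databases; a real implementation would also log failures and reconcile the stores, rather than silently swallowing errors.

```python
class DualWriteStore:
    """Writes to the primary store and, when enabled, mirrors to the new one."""

    def __init__(self, mysql_store: dict, dynamo_store: dict,
                 dual_writes_enabled: bool = False):
        self.mysql = mysql_store      # stand-in for the MySQL database
        self.dynamo = dynamo_store    # stand-in for the DynamoDB table
        # In this workshop's flow, a toggle like this would be flipped
        # from the admin control panel when you're ready.
        self.dual_writes_enabled = dual_writes_enabled

    def save_order(self, order_id: str, order: dict) -> None:
        # MySQL remains the source of truth during the transition.
        self.mysql[order_id] = order
        if self.dual_writes_enabled:
            try:
                # Mirror the write; a failure here must not break the request.
                self.dynamo[order_id] = order
            except Exception:
                pass  # a real system would log this and queue a retry
```

With the flag off, behavior is identical to the legacy path, which is what makes the rollout safe to stage and to roll back.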
-## Stage 6: Moving to the Cloud - Deploying the cloud Infrastructure
+## Stage 6: Moving to the Cloud — Deploying the Cloud Infrastructure
-Once everything is tested and working locally, we deploy your new DynamoDB tables to the actual AWS cloud environment. This is like moving your furniture into your newly renovated house - everything needs to be in the right place and working properly.
+Once everything is tested and working locally, we deploy your new DynamoDB tables to the actual AWS cloud environment. You've finally got your Certificate of Occupancy!
The deployment process ensures your cloud infrastructure is set up correctly and ready to handle real traffic.
-## Stage 7: The Great Migration - Moving Your Data
+## Stage 7: The Great Migration — Moving Your Data
-The final stage is where we actually move all your existing data from MySQL to DynamoDB. This is carefully controlled and monitored - you decide when you're ready to start using dual writes, and then we gradually migrate all your historical data.
+The final stage is where we actually move all your existing data from MySQL to DynamoDB. This is carefully controlled and monitored — you decide when you're ready to start using dual writes, and then we gradually migrate all your historical data.
We use specialized data processing tools (like the AWS Glue MCP Server) to handle this migration safely and efficiently. It's like having professional movers who ensure all your belongings get to the new house safely and end up in exactly the right places.
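At a high level, a bulk migration like this walks the source rows and writes them to the target in batches; DynamoDB's `BatchWriteItem` accepts at most 25 items per call, which is why a batch size of 25 appears below. The sketch uses a plain callback in place of a real Glue job or DynamoDB client, so the function names are illustrative only.

```python
def migrate_rows(rows, write_batch, batch_size=25):
    """Copy rows to the new store in batches of at most `batch_size` items."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            write_batch(batch)   # stand-in for one BatchWriteItem call
            batch = []
    if batch:
        write_batch(batch)       # flush the final partial batch


# Usage: collect batches into an in-memory list for illustration.
batches = []
migrate_rows([{"id": i} for i in range(60)], batches.append)
# 60 rows produce 3 batches: 25, 25, and 10 rows
```

A production migration would add checkpointing and retries around each batch so an interrupted run can resume where it left off.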
-### Requirements Documentation - Defining Objectives and Success Criteria
+### Requirements Documentation — Defining Objectives and Success Criteria
The Requirements Document establishes the foundational context by articulating the business objectives and technical constraints that drive the modernization initiative. This document defines explicit acceptance criteria and success metrics, creating what software engineers refer to as the "definition of done." By providing comprehensive context about the problem domain, the requirements document enables the LLM to understand not just what needs to be built, but why it needs to be built and how to validate that the implementation meets the specified goals.
-### Design Documentation - Technical Architecture and Implementation Strategy
+### Design Documentation — Technical Architecture and Implementation Strategy
The Design Document serves as the technical specification that translates high-level requirements into concrete architectural decisions and implementation strategies. This document defines the specific methodologies, data structures, and system workflows that will be employed throughout the modernization process. It includes detailed implementation guidelines, architectural patterns, and design rationale that provide the LLM with a comprehensive technical blueprint for executing the modernization according to established software engineering principles.
The Tasks Document functions as the bridge between abstract architectural design and concrete implementation by decomposing design specifications into discrete, executable development tasks. This document provides sequenced instructions that reference specific files, tools, and expected deliverables, ensuring the LLM receives actionable directives rather than abstract concepts. The task breakdown transforms architectural decisions into manageable development units that can be systematically executed and validated.
content/modernizr/01-modernization/workflow-02.en.md (7 additions, 7 deletions)
@@ -6,21 +6,21 @@ weight: 33
chapter: false
---
-Each stage generates "Artifacts" deliverables that will be used in the future stages across the solution, this project works sequentially, where you will see how artifacts from initial stages are re-used. Every stage will create a new folder `stage-xx` where all the artifacts will be stored. In addition another file `xx-working_log.md` will be generated, this file is used by the LLM to keep track of the work that has been done so far, consider it as a notepad, or scratch pad.
+Each stage generates "Artifacts" — deliverables that will be used in future stages across the solution. This project works sequentially, using output artifacts from initial stages as input for the next. Every stage will create a new folder `stage-xx` where all the artifacts will be stored. In addition, another file, `xx-working_log.md`, will be generated; this file is used by the LLM to keep track of the work that has been done so far. Consider it a notepad or scratch pad.
::alert[If you execute this workshop from scratch (as available in the `/clean-start` folder) it will take ~11.5 hours to complete, with most of that time spent on application re-factoring (stages 3, 4 and 5). For simplicity, and to streamline the duration of this workshop, these steps have already been completed for you.]{type="info"}
# Stage-01 artifacts
-Let's start exploring the artifacts available for the first stage `stage-01`. This stage is focused on capturing the data that is available from the source database and application backend logic. We use the MySQL MCP server to understand table structure, constraints and data, The MySQL query logs, to identify the data velocity and finally we explore the application logic to capture all the access patterns that we will need to modernize.
+Let's start exploring the artifacts available for the first stage `stage-01`. This stage is focused on capturing the data that is available from the source database and application backend logic. We use the MySQL MCP server to understand table structure, constraints and data. Next we use the MySQL query logs to identify the data velocity. Finally, we explore the application logic to capture all the access patterns that we will need to modernize.
-DynamoDB is all about application access patterns that we need to support, how the data in your application will be using the DynamoDB tables. The secret to DynamoDB data modelling is to store the data exactly in the format your application will consume it, while working with different entities if you happen to have some.
+DynamoDB is all about the application access patterns that we need to support. The secret to DynamoDB data modelling is to store data exactly in the format your application will consume it, structuring your data in a way that can be read as efficiently as possible with the smallest number of queries.
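As a concrete illustration of storing data the way the application reads it, one common convention composes generic partition and sort keys from the entity type and its identifiers. The key scheme below is a typical single-table pattern, not necessarily the model this workshop's Stage 2 will produce.

```python
def order_to_item(customer_id: str, order_id: str, order: dict) -> dict:
    # Group all of a customer's orders under one partition key so the
    # app's "list orders for a customer" pattern becomes a single Query
    # instead of a join. SK prefixes keep entity types sortable/filterable.
    return {
        "PK": f"CUSTOMER#{customer_id}",
        "SK": f"ORDER#{order_id}",
        **order,  # store the attributes exactly as the app consumes them
    }


item = order_to_item("c42", "o1001", {"total": 59.90, "status": "shipped"})
```

Querying `PK = "CUSTOMER#c42"` with an `SK` prefix of `ORDER#` would then return every order for that customer in one request.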
-Open the files available in the `stage-01` folder, familiarize with them and understand in detail the current application access patterns, remember this is the application logic that you will need to support in the modernized application.
+Open the files available in the `stage-01` folder, familiarize yourself with them, and understand in detail the current application access patterns. This is the application logic that you will need to support in the modernized application.
--`01_1_API_access_patterns.md`- This file should be your primary source of information, it contains an analysis of the backend code, it should contain at the end a numbered list of 48 different application access patterns! You can also learn more about this project by reading the README available in the front end folder `frontend/README.md` It contains a description of which patters requires authentication and a quick explanation on how to execute the API calls.
--`01_2_mysql_log_analysis.md`- This file is the MySQL log analysis, it contains a description of how many times some access patterns were detected, we run a small load test to simulate some traffic, and capture meaningful information, please notice the load test does not capture 100% of the application access patterns with the logs. If you plan to use a similar approach for your modrnization process, you need to enable the logs for a period of time that allows you to capture most of your traffic, however you should keep in mindthere might be some application access patters that might not be covered, since the API endpoints might not have enough traffic to be collected during the test.
--`01_3_table_structure_analysis.md`- Uses the MySQL MCP server to explore table structure and identify table contraints, relationships and data format.
+- `01_1_API_access_patterns.md` — This file should be your primary source of information. It contains an analysis of the backend code. When the LLM finishes creating it, it should contain a numbered list of 48 different application access patterns! If you want to better understand these access patterns, you can learn more about this project by reading the README available in the frontend folder `frontend/README.md`. It contains a description of which patterns require authentication and a quick explanation of how to execute the API calls.
+- `01_2_mysql_log_analysis.md` — This file is the MySQL log analysis, containing a description of how many times different access patterns were detected. We ran a small load test to simulate traffic and capture data on the results. Please note that the load test does not capture 100% of the application access patterns in the logs. If you plan to use a similar approach for your modernization process, you should use logs that capture live traffic for a period of time sufficient to capture all required patterns. Keep in mind, though, that there still might be some application access patterns that were not captured if they weren't exercised during the logging window.
+- `01_3_table_structure_analysis.md` — Uses the MySQL MCP server to explore table structure and identify table constraints, relationships, and data format.