Commit dfb4ffc

Merge branch 'master' into fix/update-lsql-and-ldms-instructions
2 parents: b254ec9 + 7cdcd7e

File tree: 6 files changed, +186 −10 lines

README.md

Lines changed: 5 additions & 0 deletions
@@ -23,5 +23,10 @@ Make a pull request with changes. PRs will be automatically checked to make sure

 On merge to master, a GitHub action will deploy the assets to amazon-dynamodb-labs.com and verify the build to ensure the markdown and other files are correctly formatted. From there, a maintainer must manually pull the changes and push to https://catalog.workshops.aws/dynamodb-labs/en-US

+#### Internal maintainer: sync changes to the internal copy of this repo
+This public repo is pushed to the internal Workshop Studio repo using a combination of rsync (we assume you are on macOS) and git. The file `sync.sh` copies the source files to the WS repo folder; after that, follow the internal README.md to complete the sync.
+1. Run sync.sh to sync the public repo to the amazon-dynamodb-immersion-day repo, e.g. `./sync.sh -d /Users/$USER/workspace/amazon-dynamodb-immersion-day`. Choose y to sync the files.
+2. Change into the amazon-dynamodb-immersion-day directory (the Workshop Studio version), open README.md, and follow the instructions there to git add and push the changes internally. Note that some assets, specifically the LEDA central account resources, are authored only in the internal repo; they have a separate set of commands for pushing updates because they must live in a special S3 bucket owned by WS.

 ## License

 This project is licensed under the Apache-2.0 License.

content/rdbms-migration/migration-chapter00.en.md

Lines changed: 8 additions & 8 deletions
@@ -6,17 +6,17 @@ weight: 10

 ---

 In this module, you will create an environment to host the MySQL database on Amazon EC2. This instance will be used to host source database and simulate on-premise side of migration architecture.

 All the resources to configure source infrastructure are deployed via [Amazon CloudFormation](https://aws.amazon.com/cloudformation/) template.

-There are two CloudFormation templates used in this exercise which will deploy following resources.
+There are two CloudFormation templates used in this exercise which deploy the following resources.

-CloudFormation MySQL Template Resources:
-- OnPrem VPC: Source VPC will represent an on-premise source environment in the N. Virginia region. This VPC will host source MySQL database on Amazon EC2
-- Amazon EC2 MySQL Database: Amazon EC2 Amazon Linux 2 AMI with MySQL installed and running
-- Load IMDb dataset: The template will create IMDb database on MySQL and load IMDb public dataset files into database. You can learn more about IMDb dataset inside [Explore Source Model](/hands-on-labs/rdbms-migration/migration-chapter03)
+CloudFormation MySQL Template Resources (**Already deployed**):
+- **OnPrem VPC**: Source VPC will represent an on-premise source environment in the workshop region. This VPC will host source MySQL database on Amazon EC2
+- **Amazon EC2 MySQL Database**: Amazon EC2 Amazon Linux 2 AMI with MySQL installed and running
+- **Load IMDb dataset**: The template will create IMDb database on MySQL and load IMDb public dataset files into database. You can learn more about IMDb dataset inside [Explore Source Model](/hands-on-labs/rdbms-migration/migration-chapter03)

-CloudFormation DMS Instance Resources:
-- DMS VPC: Migration VPC on in the N. Virginia region. This VPC will host DMS replication instance.
-- Replication Instance: DMS Replication instance that will facilitate database migration from source MySQL server on EC2 to Amazon DynamoDB
+CloudFormation DMS Instance Resources (**Needs deploying**):
+- **DMS VPC**: Migration VPC in the workshop region. This VPC will host DMS replication instance.
+- **Replication Instance**: DMS Replication instance that will facilitate database migration from source MySQL server on EC2 to Amazon DynamoDB

 ![Final Deployment Architecture](/static/images/migration-environment.png)

content/rdbms-migration/migration-chapter04.en.md

Lines changed: 1 addition & 0 deletions
@@ -10,6 +10,7 @@ You can often query the data from multiple tables and assemble at the presentati

 To support high-traffic queries with ultra-low latency, designing a schema to take advantage of a NoSQL system generally makes technical and economic sense.

 To start designing a target data model in Amazon DynamoDB that will scale efficiently, you must identify the common access patterns. For the IMDb use case we have identified a set of access patterns as described below:
+
 ![Final Deployment Architecture](/static/images/migration32.png)

 A common approach to DynamoDB schema design is to identify application layer entities and use denormalization and composite key aggregation to reduce query complexity.

content/relational-migration/setup/index2.en.md

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ The Lambda source code project has been setup as follows

 * **chalicelib/dynamodb_calls.py**

-1. Next, let's deploy the Chalice application stack. This step may take a few minutes to complete.
+1. Next, let's deploy the Chalice application stack. This step may take a few minutes to complete.
 ```bash
 chalice deploy --stage relational
 ```

design-patterns/cloudformation/C9.yaml

Lines changed: 61 additions & 1 deletion
@@ -85,8 +85,45 @@ Mappings:
   options:
     UserDataURL: "https://amazon-dynamodb-labs.com/assets/UserDataC9.sh"
     version: "1"
+  # AWS Managed Prefix Lists for EC2 InstanceConnect
+  AWSRegions2PrefixListID:
+    ap-south-1:
+      PrefixList: pl-0fa83cebf909345ca
+    eu-north-1:
+      PrefixList: pl-0bd77a95ba8e317a6
+    eu-west-3:
+      PrefixList: pl-0f2a97ab210dbbae1
+    eu-west-2:
+      PrefixList: pl-067eefa539e593d55
+    eu-west-1:
+      PrefixList: pl-0839cc4c195a4e751
+    ap-northeast-3:
+      PrefixList: pl-086543b458dc7add9
+    ap-northeast-2:
+      PrefixList: pl-00ec8fd779e5b4175
+    ap-northeast-1:
+      PrefixList: pl-08d491d20eebc3b95
+    ca-central-1:
+      PrefixList: pl-0beea00ad1821f2ef
+    sa-east-1:
+      PrefixList: pl-029debe66aa9d13b3
+    ap-southeast-1:
+      PrefixList: pl-073f7512b7b9a2450
+    ap-southeast-2:
+      PrefixList: pl-0e1bc5673b8a57acc
+    eu-central-1:
+      PrefixList: pl-03384955215625250
+    us-east-1:
+      PrefixList: pl-0e4bcff02b13bef1e
+    us-east-2:
+      PrefixList: pl-03915406641cb1f53
+    us-west-1:
+      PrefixList: pl-0e99958a47b22d6ab
+    us-west-2:
+      PrefixList: pl-047d464325e7bf465
+
 Resources:
-  #LADV Role
+  #LADV Role
   DDBReplicationRole:
     Type: AWS::IAM::Role
     Properties:
@@ -742,6 +779,11 @@ Resources:
           IpProtocol: tcp
           FromPort: 3306
          ToPort: 3306
+        - Description: "Allow Instance Connect"
+          FromPort: 22
+          ToPort: 22
+          IpProtocol: tcp
+          SourcePrefixListId: !FindInMap [AWSRegions2PrefixListID, !Ref 'AWS::Region', PrefixList]
       Tags:
         - Key: Name
           Value: MySQL-SecurityGroup
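The `!FindInMap` above resolves the port-22 source against the hardcoded per-region mapping of EC2 Instance Connect managed prefix lists. As a sketch of the equivalent runtime lookup (the AWS CLI call is commented out because it needs credentials; `com.amazonaws.<region>.ec2-instance-connect` is the managed prefix list naming convention):

```shell
region="us-east-1"   # example region, matching one key in the mapping above
pl_name="com.amazonaws.${region}.ec2-instance-connect"
echo "$pl_name"

# With AWS credentials configured, the ID could be resolved dynamically
# instead of being hardcoded per region:
# aws ec2 describe-managed-prefix-lists \
#   --filters "Name=prefix-list-name,Values=${pl_name}" \
#   --query 'PrefixLists[0].PrefixListId' --output text
```

A hardcoded mapping avoids an API call at stack-creation time, at the cost of maintaining the table when new regions are added.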
@@ -802,6 +844,24 @@ Resources:
             mysql -u root "-p${DbMasterPassword}" -e "GRANT ALL PRIVILEGES ON *.* TO '${DbMasterUsername}'"
             mysql -u root "-p${DbMasterPassword}" -e "FLUSH PRIVILEGES"
             mysql -u root "-p${DbMasterPassword}" -e "CREATE DATABASE app_db;"
+            ## Setup MySQL Tables
+            cd /var/lib/mysql-files/
+            curl -O https://www.amazondynamodblabs.com/static/rdbms-migration/rdbms-migration.zip
+            unzip -q rdbms-migration.zip
+            chmod 775 *.*
+            mysql -u root "-p${DbMasterPassword}" -e "CREATE DATABASE imdb;"
+            mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_akas (titleId VARCHAR(200), ordering VARCHAR(200),title VARCHAR(1000), region VARCHAR(1000), language VARCHAR(1000), types VARCHAR(1000),attributes VARCHAR(1000),isOriginalTitle VARCHAR(5),primary key (titleId, ordering));"
+            mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_basics (tconst VARCHAR(200), titleType VARCHAR(1000),primaryTitle VARCHAR(1000), originalTitle VARCHAR(1000), isAdult VARCHAR(1000), startYear VARCHAR(1000),endYear VARCHAR(1000),runtimeMinutes VARCHAR(1000),genres VARCHAR(1000),primary key (tconst));"
+            mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_crew (tconst VARCHAR(200), directors VARCHAR(1000),writers VARCHAR(1000),primary key (tconst));"
+            mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_principals (tconst VARCHAR(200), ordering VARCHAR(200),nconst VARCHAR(200), category VARCHAR(1000), job VARCHAR(1000), characters VARCHAR(1000),primary key (tconst,ordering,nconst));"
+            mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.title_ratings (tconst VARCHAR(200), averageRating float,numVotes integer,primary key (tconst));"
+            mysql -u root "-p${DbMasterPassword}" -e "CREATE TABLE imdb.name_basics (nconst VARCHAR(200), primaryName VARCHAR(1000),birthYear VARCHAR(1000), deathYear VARCHAR(1000), primaryProfession VARCHAR(1000), knownForTitles VARCHAR(1000),primary key (nconst));"
+            mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_ratings.tsv' IGNORE INTO TABLE imdb.title_ratings FIELDS TERMINATED BY '\t';"
+            mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_basics.tsv' IGNORE INTO TABLE imdb.title_basics FIELDS TERMINATED BY '\t';"
+            mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_crew.tsv' IGNORE INTO TABLE imdb.title_crew FIELDS TERMINATED BY '\t';"
+            mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_principals.tsv' IGNORE INTO TABLE imdb.title_principals FIELDS TERMINATED BY '\t';"
+            mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/name_basics.tsv' IGNORE INTO TABLE imdb.name_basics FIELDS TERMINATED BY '\t';"
+            mysql -u root "-p${DbMasterPassword}" -e "LOAD DATA INFILE '/var/lib/mysql-files/title_akas.tsv' IGNORE INTO TABLE imdb.title_akas FIELDS TERMINATED BY '\t';"
       Tags:
         - Key: Name
           Value: MySQL-Instance
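The `LOAD DATA INFILE ... FIELDS TERMINATED BY '\t'` commands above expect plain tab-separated files. A self-contained sketch of the row shape for `title_ratings` (the values below are made up for illustration, not real IMDb data):

```shell
# Fabricate a tiny title_ratings-style TSV: tconst, averageRating, numVotes
printf 'tt0000001\t5.7\t1900\ntt0000002\t6.0\t250\n' > /tmp/title_ratings_sample.tsv

# Columns split on tabs, exactly as LOAD DATA would see them
cut -f1 /tmp/title_ratings_sample.tsv                        # tconst column
awk -F'\t' '{print $2, $3}' /tmp/title_ratings_sample.tsv    # rating and votes
```

The `IGNORE` keyword on each load skips rows that would violate the primary key, so re-running the user data script does not abort on duplicates.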

sync.sh

Lines changed: 110 additions & 0 deletions
@@ -0,0 +1,110 @@
#!/bin/bash

usage() {
    echo "Usage: $0 [--dry-run] -d DEST_REPO"
    echo "  --dry-run    Show what would be synced without making changes"
    echo "  -d, --dest   Destination repository path"
    echo
    echo "Example:"
    echo "  $0 --dry-run -d /Users/$USER/workspace/amazon-dynamodb-immersion-day"
    exit 1
}

# Get the directory where the script is located
SOURCE_REPO="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

# Parse command line arguments
DRY_RUN=false
DEST_REPO=""

while [[ $# -gt 0 ]]; do
    case $1 in
        --dry-run)
            DRY_RUN=true
            shift
            ;;
        -d|--dest)
            DEST_REPO="$2"
            shift 2
            ;;
        *)
            usage
            ;;
    esac
done

# Validate required parameters
if [ -z "$DEST_REPO" ]; then
    usage
fi

# Define source and destination directory pairs
src_dirs=(
    "content"
    "static"
)

dest_dirs=(
    "content"
    "static"
)

# Define source and destination file pairs
src_files=(
    "design-patterns/cloudformation/C9.yaml"
)

dest_files=(
    "static/ddb.yaml"
)

# Function to perform sync
perform_sync() {
    local rsync_options="-avz"

    if [ "$DRY_RUN" = true ]; then
        rsync_options="$rsync_options --dry-run"
        echo "Performing dry run..."
    else
        echo "Performing actual sync..."
    fi

    echo "Source repository: $SOURCE_REPO"
    echo "Destination repository: $DEST_REPO"

    # Sync directories
    for i in "${!src_dirs[@]}"; do
        echo "Syncing directory: ${src_dirs[$i]}/ -> ${dest_dirs[$i]}/"
        mkdir -p "$DEST_REPO/${dest_dirs[$i]}"
        rsync $rsync_options "$SOURCE_REPO/${src_dirs[$i]}/" "$DEST_REPO/${dest_dirs[$i]}/"
    done

    # Sync individual files
    for i in "${!src_files[@]}"; do
        echo "Syncing file: ${src_files[$i]} -> ${dest_files[$i]}"
        dest_dir=$(dirname "$DEST_REPO/${dest_files[$i]}")
        mkdir -p "$dest_dir"
        rsync $rsync_options "$SOURCE_REPO/${src_files[$i]}" "$DEST_REPO/${dest_files[$i]}"
    done
    echo "Great! Now follow instructions in the amazon-dynamodb-immersion-day README.md document to complete the sync."
}

# Verify destination repository exists
if [ ! -d "$DEST_REPO" ]; then
    echo "Error: Destination repository does not exist: $DEST_REPO"
    exit 1
fi

# Execute sync
if [ "$DRY_RUN" = true ]; then
    perform_sync
else
    read -p "This will perform an actual sync. Are you sure? (y/n) " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        perform_sync
    else
        echo "Sync cancelled."
        exit 1
    fi
fi
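sync.sh pairs `src_dirs`/`dest_dirs` (and `src_files`/`dest_files`) by index rather than using an associative array, which keeps it compatible with the bash 3.2 that ships on macOS. A minimal sketch of that pairing pattern:

```shell
#!/bin/bash
# Index-paired arrays: element i of src maps to element i of dst
src=("content" "static" "design-patterns/cloudformation/C9.yaml")
dst=("content" "static" "static/ddb.yaml")

# "${!src[@]}" expands to the array indices (0 1 2)
for i in "${!src[@]}"; do
  echo "${src[$i]} -> ${dst[$i]}"
done
# prints:
# content -> content
# static -> static
# design-patterns/cloudformation/C9.yaml -> static/ddb.yaml
```

The pattern relies on both arrays staying the same length and order, so additions must be made to both lists together.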
