`docs/best_practices/environment_variables.md`
The best practice for handling environment variables is to validate & parse them.
In case of misconfiguration, a validation exception is raised with all the relevant exception details.
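As a minimal sketch of this validate-and-parse approach (using Pydantic directly rather than the modeler linked below; the variable names here are illustrative, not the project's actual configuration):

```python
import os

from pydantic import BaseModel, HttpUrl, ValidationError


class HandlerEnvVars(BaseModel):
    # Illustrative variables; replace with your function's actual configuration
    POWERTOOLS_SERVICE_NAME: str
    REST_API_URL: HttpUrl


def get_env_vars() -> HandlerEnvVars:
    # Extra environment variables are ignored; missing or malformed ones
    # raise a ValidationError that lists every offending field at once
    return HandlerEnvVars(**os.environ)


def lambda_handler(event: dict, context) -> dict:
    try:
        env = get_env_vars()  # fail fast on misconfiguration
    except ValidationError as exc:
        return {'statusCode': 500, 'body': str(exc)}
    return {'statusCode': 200, 'body': f'calling {env.REST_API_URL}'}
```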
## **Open source**
The code in this post has been moved to an open source project you can use:
The [AWS Lambda environment variables modeler](https://github.com/ran-isenberg/aws-lambda-env-modeler){:target="_blank" rel="noopener"}
## **Blog Reference**
Read more about the importance of validating environment variables and how this utility works. Click [**HERE**](https://www.ranthebuilder.cloud/post/aws-lambda-cookbook-environment-variables){:target="_blank" rel="noopener"}
`docs/best_practices/monitoring.md`
## **Key Concepts**
Utilizing AWS CloudWatch dashboards enables centralized monitoring of API Gateway, Lambda functions, and DynamoDB, providing real-time insights into their performance and operational health.
By aggregating metrics, logs, and alarms, CloudWatch facilitates swift issue diagnosis and analysis across your serverless applications. Additionally, setting up alarms ensures immediate alerts during anomalous activities, enabling proactive issue mitigation.
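A rough CDK sketch of the idea (resource names, metrics, and statistics are illustrative placeholders, not the project's actual dashboard code):

```python
from aws_cdk import aws_cloudwatch as cloudwatch
from constructs import Construct


def build_dashboard(scope: Construct, api_name: str, function_name: str) -> cloudwatch.Dashboard:
    # One dashboard aggregating API Gateway and Lambda health at a glance
    dashboard = cloudwatch.Dashboard(scope, 'ServiceDashboard', dashboard_name='service-health')
    dashboard.add_widgets(
        cloudwatch.GraphWidget(
            title='API Gateway p90 latency',
            left=[cloudwatch.Metric(
                namespace='AWS/ApiGateway',
                metric_name='Latency',
                dimensions_map={'ApiName': api_name},
                statistic='p90',
            )],
        ),
        cloudwatch.GraphWidget(
            title='Lambda errors',
            left=[cloudwatch.Metric(
                namespace='AWS/Lambda',
                metric_name='Errors',
                dimensions_map={'FunctionName': function_name},
                statistic='Sum',
            )],
        ),
    )
    return dashboard
```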
## **Service Architecture**
As for DynamoDB tables, we have the primary database and the idempotency table.
Personas that use this dashboard: developers, SREs.
### **Alarms**
Having visibility and information is one thing, but being proactive and knowing beforehand that a significant error is looming is another. This is where CloudWatch alarms come in.
For latency-related issues, we define the following alarm:
For the P90 and P50 metrics, see this [explanation](https://www.dnv.com/article/terminology-explained-p10-p50-and-p90-202611#:~:text=Proved%20(P90)%3A%20The%20lowest,equal%20or%20exceed%20P10%20estimate.){:target="_blank" rel="noopener"}.
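A hedged CDK sketch of such a latency alarm (the threshold, names, and period are placeholders, not the project's actual values):

```python
from aws_cdk import Duration, aws_cloudwatch as cloudwatch
from constructs import Construct


def add_p90_latency_alarm(scope: Construct, api_name: str) -> cloudwatch.Alarm:
    # Fires when the slowest 10% of requests exceed one second over five minutes
    return cloudwatch.Alarm(
        scope, 'ApiP90LatencyAlarm',
        metric=cloudwatch.Metric(
            namespace='AWS/ApiGateway',
            metric_name='Latency',
            dimensions_map={'ApiName': api_name},
            statistic='p90',
            period=Duration.minutes(5),
        ),
        threshold=1000,  # milliseconds; placeholder value
        evaluation_periods=1,
        comparison_operator=cloudwatch.ComparisonOperator.GREATER_THAN_THRESHOLD,
    )
```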
For the internal server error rate, we define the following alarm:
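The original definition is not shown in this excerpt; below is a sketch of one possible implementation, using CloudWatch metric math to express 5XX errors as a percentage of total requests (threshold and names are placeholders):

```python
from aws_cdk import Duration, aws_cloudwatch as cloudwatch
from constructs import Construct


def add_5xx_rate_alarm(scope: Construct, api_name: str) -> cloudwatch.Alarm:
    error_rate = cloudwatch.MathExpression(
        expression='100 * errors / requests',  # 5XX errors as a percentage of all requests
        using_metrics={
            'errors': cloudwatch.Metric(
                namespace='AWS/ApiGateway', metric_name='5XXError',
                dimensions_map={'ApiName': api_name}, statistic='Sum'),
            'requests': cloudwatch.Metric(
                namespace='AWS/ApiGateway', metric_name='Count',
                dimensions_map={'ApiName': api_name}, statistic='Sum'),
        },
        period=Duration.minutes(5),
    )
    # Alarm when more than 1% of requests fail with an internal server error
    return error_rate.create_alarm(scope, 'Api5xxRateAlarm', threshold=1, evaluation_periods=1)
```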
We use the open-source [cdk-monitoring-constructs](https://github.com/cdklabs/cdk-monitoring-constructs) library.
You can find the monitoring CDK construct [here](https://github.com/ran-isenberg/aws-lambda-handler-cookbook/blob/main/cdk/service/monitoring.py).
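In rough terms, usage looks like the sketch below, assuming the library's Python bindings follow the usual jsii snake_case naming; the resources passed in are placeholders:

```python
from aws_cdk import aws_dynamodb as dynamodb, aws_lambda as lambda_
from cdk_monitoring_constructs import MonitoringFacade
from constructs import Construct


def add_monitoring(scope: Construct, handler: lambda_.IFunction, table: dynamodb.ITable) -> None:
    # A single facade collects widgets and alarms onto one generated dashboard
    monitoring = MonitoringFacade(scope, 'Monitoring')
    monitoring.monitor_lambda_function(lambda_function=handler)
    monitoring.monitor_dynamo_table(table=table)
```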
## **Further Reading**
If you wish to learn more about this concept and go over details on the CDK code, check out my [blog post](https://www.ranthebuilder.cloud/post/how-to-effortlessly-monitor-serverless-applications-with-cloudwatch-part-one).
* [poetry](https://pypi.org/project/poetry/){:target="_blank"} - Make sure to run `poetry config --local virtualenvs.in-project true` so all dependencies are installed in the project '.venv' folder.
* For Windows-based machines, use the Makefile_windows version (rename it to Makefile). The default Makefile is for Mac/Linux.
* [**Learn How to Write AWS Lambda Functions with Three Architecture Layers**](https://www.ranthebuilder.cloud/post/learn-how-to-write-aws-lambda-functions-with-architecture-layers){:target="_blank" rel="noopener"}
While the code examples are written in Python, the principles are valid for any supported AWS Lambda handler programming language.
The GitHub CI/CD pipeline includes the following steps.
The pipeline uses environment secrets (under the defined environments 'dev', 'staging' and 'production') for code coverage and for the role used to deploy to AWS.
When you clone this repository, be sure to define the environments in your [repo settings](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment) and create a secret per environment:
- AWS_ROLE - the role to assume for your GitHub worker, as defined [here](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services)
### **Makefile Commands**
All steps can be run locally using the makefile. See details below:
- Code coverage tests - run `make coverage-tests` in the IDE after CDK deployment
- Update GitHub documentation branch
### **Other Capabilities**
- Automatic Python dependencies update with Dependabot
- Easy-to-use makefile allows you to run all the GitHub Actions commands locally
- Run a local docs server prior to pushing to the pipeline - run `make docs` in the IDE
- Prepare a PR and run all checks with one command - run `make pr` in the IDE
## **Environments & Pipelines**
All GitHub workflows are stored under the `.github/workflows` folder.
The two most important ones are `pr-serverless-service` and `main-serverless-service`.
### **pr-serverless-service**
<img alt="alt_text" src="../media/cicd_pr.png" />
`pr-serverless-service` runs for every pull request you open. It expects you to have defined a GitHub environment named `dev` that includes a secret named `AWS_ROLE`.
It includes two jobs, 'quality_standards' and 'tests', where a failure in 'quality_standards' does not trigger 'tests'. Both jobs MUST pass in order to merge.
'quality_standards' includes all linters, pre-commit checks and unit tests, while 'tests' deploys the service to AWS and runs code coverage checks, security checks and E2E tests. The stack is destroyed at the end and has a 'dev' prefix as part of its name.
Once merged, `main-serverless-service` will run.
### **main-serverless-service**
<img alt="alt_text" src="../media/cicd_main.png" />
`main-serverless-service` runs for every MERGED pull request on the main branch. It expects you to have defined GitHub environments named `staging` and `production`, each including a secret named `AWS_ROLE`.
It includes three jobs: 'staging', 'production' and 'publish_github_pages'.
'staging' does not run any of the 'quality_standards' checks, since they already ran before the code was merged. It runs only coverage tests and E2E tests. The stack is not deleted and has a 'staging' prefix as part of its name.
Any failure in staging stops the pipeline, and the production environment does not get updated with the new code.
Like 'staging', 'production' skips the 'quality_standards' checks. It does not run any tests at the moment. The stack is not deleted and has a 'production' prefix as part of its name.