diff --git a/python/cloudfront-v2-logging/README.md b/python/cloudfront-v2-logging/README.md
new file mode 100644
index 000000000..0c7591129
--- /dev/null
+++ b/python/cloudfront-v2-logging/README.md
@@ -0,0 +1,202 @@
+# CloudFront V2 Logging with AWS CDK (Python)
+
+This project demonstrates how to set up Amazon CloudFront with the new CloudFront Standard Logging V2 feature using AWS CDK in Python. The example shows how to configure multiple logging destinations for CloudFront access logs, including:
+
+1. Amazon CloudWatch Logs
+2. Amazon S3 (with Parquet format)
+3. Amazon Kinesis Data Firehose (with JSON format)
+
+## Architecture
+
+![CloudFront V2 Logging Architecture](./architecture.drawio.png)
+
+The project deploys the following resources:
+
+- An S3 bucket to host a simple static website
+- A CloudFront distribution with Origin Access Control (OAC) to serve the website
+- A logging S3 bucket with appropriate lifecycle policies
+- CloudFront Standard Logging V2 configuration with multiple delivery destinations
+- Kinesis Data Firehose delivery stream
+- CloudWatch Logs group
+- Necessary IAM roles and permissions
+
+## Prerequisites
+
+- [AWS CLI](https://aws.amazon.com/cli/) configured with appropriate credentials
+- [AWS CDK](https://aws.amazon.com/cdk/) installed (v2.x)
+- Python 3.9 or later (required by recent aws-cdk-lib releases)
+- Node.js 18.x or later (for the CDK CLI)
+
+## Setup
+
+1. Create and activate a virtual environment:
+
+```bash
+python3 -m venv .venv
+source .venv/bin/activate  # On Windows: .venv\Scripts\activate.bat
+```
+
+2. Install the required dependencies:
+
+```bash
+pip install -r requirements.txt
+```
+
+3. Synthesize the CloudFormation template:
+
+```bash
+cdk synth
+```
+
+4. Deploy the stack:
+
+```bash
+cdk deploy
+```
+
+You can customize the log retention periods by providing parameters:
+
+```bash
+cdk deploy --parameters LogRetentionDays=90 --parameters CloudWatchLogRetentionDays=60
+```
+
+5. After deployment, the CloudFront distribution domain name will be displayed in the outputs. You can access your website using this domain.
+
+## How It Works
+
+This example demonstrates CloudFront Standard Logging V2, which provides more flexibility in how you collect and analyze CloudFront access logs:
+
+- **CloudWatch Logs**: Logs are delivered in JSON format for real-time monitoring and analysis
+- **S3 (Parquet)**: Logs are delivered in Parquet format with Hive-compatible paths for efficient querying with services like Amazon Athena
+- **Kinesis Data Firehose**: Logs are streamed in JSON format, allowing for real-time processing and transformation
+
+The CDK stack creates all necessary resources and configures the appropriate permissions for log delivery.
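+
+Under the hood, Standard Logging V2 wires together three resources: a *delivery source* (the distribution), one or more *delivery destinations* (the log targets), and a *delivery* linking a source to a destination. The sketch below distills that trio from the stack in this project; `self`, `distribution_arn`, and `log_group` stand in for the stack scope and resources defined elsewhere:
+
+```python
+from aws_cdk import aws_logs as logs
+
+# The distribution is registered as a source of access logs
+source = logs.CfnDeliverySource(
+    self, "DeliverySource",
+    name="distribution-source",
+    log_type="ACCESS_LOGS",
+    resource_arn=distribution_arn,  # arn:aws:cloudfront::<account>:distribution/<id>
+)
+
+# A destination wraps a target resource (here, a CloudWatch log group)
+destination = logs.CfnDeliveryDestination(
+    self, "DeliveryDestination",
+    name="cloudwatch-logs-destination",
+    destination_resource_arn=log_group.log_group_arn,
+    output_format="json",
+)
+
+# The delivery connects the two; repeat per destination
+logs.CfnDelivery(
+    self, "Delivery",
+    delivery_source_name=source.name,
+    delivery_destination_arn=destination.attr_arn,
+)
+```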
+
+## Example Log Outputs
+
+### CloudWatch Logs (JSON format)
+```json
+{
+  "timestamp": "2023-03-15T20:12:34Z",
+  "c-ip": "192.0.2.100",
+  "time-to-first-byte": 0.002,
+  "sc-status": 200,
+  "sc-bytes": 2326,
+  "cs-method": "GET",
+  "cs-uri-stem": "/index.html",
+  "cs-protocol": "https",
+  "cs-host": "d111111abcdef8.cloudfront.net",
+  "cs-user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
+  "cs-referer": "https://www.example.com/",
+  "x-edge-location": "IAD79-C2",
+  "x-edge-request-id": "tLAGM_r7TyiRgwgk_4U5Xb-vv4JHOjzGCh61ER9nM_2UFY8hTKdEoQ=="
+}
+```
+
+### S3 Parquet Format
+Parquet is a columnar storage format that provides efficient compression and encoding schemes.
+The logs are stored in a Hive-compatible directory structure:
+
+```
+s3://your-logging-bucket/s3_delivery/DistributionId=EDFDVBD6EXAMPLE/year=2023/month=03/day=15/hour=20/
+```
+
+### Kinesis Data Firehose (JSON format)
+Firehose delivers logs in JSON format with a timestamp-based prefix:
+
+```
+s3://your-logging-bucket/firehose_delivery/year=2023/month=03/day=15/delivery-stream-1-2023-03-15-20-12-34-a1b2c3d4.json.gz
+```
+
+## Querying Logs with Athena
+
+You can use Amazon Athena to query the Parquet logs stored in S3. Here's an example query to get started:
+
+```sql
+CREATE EXTERNAL TABLE IF NOT EXISTS cloudfront_logs (
+  `timestamp` string,
+  `c-ip` string,
+  `time-to-first-byte` float,
+  `sc-status` int,
+  `sc-bytes` bigint,
+  `cs-method` string,
+  `cs-uri-stem` string,
+  `cs-protocol` string,
+  `cs-host` string,
+  `cs-user-agent` string,
+  `cs-referer` string,
+  `x-edge-location` string,
+  `x-edge-request-id` string
+)
+PARTITIONED BY (
+  `distributionid` string,
+  `year` string,
+  `month` string,
+  `day` string,
+  `hour` string
+)
+STORED AS PARQUET
+LOCATION 's3://your-logging-bucket/s3_delivery/';
+
+-- Update partitions
+MSCK REPAIR TABLE cloudfront_logs;
+
+-- Example query to find the top requested URLs
+-- (hyphenated column names must be double-quoted in Athena DML)
+SELECT "cs-uri-stem", COUNT(*) AS request_count
+FROM cloudfront_logs
+WHERE year='2023' AND month='03' AND day='15'
+GROUP BY "cs-uri-stem"
+ORDER BY request_count DESC
+LIMIT 10;
+```
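+
+If you prefer to run the query from code, here is a minimal boto3 sketch; the database name and query-results bucket are placeholders you would substitute:
+
+```python
+import time
+
+import boto3
+
+athena = boto3.client("athena")
+
+# Start the top-URLs query from the example above
+execution = athena.start_query_execution(
+    QueryString=(
+        'SELECT "cs-uri-stem", COUNT(*) AS request_count '
+        'FROM cloudfront_logs GROUP BY "cs-uri-stem" '
+        "ORDER BY request_count DESC LIMIT 10"
+    ),
+    QueryExecutionContext={"Database": "default"},
+    ResultConfiguration={"OutputLocation": "s3://your-query-results-bucket/"},
+)
+query_id = execution["QueryExecutionId"]
+
+# Poll until the query finishes, then print each result row
+while True:
+    status = athena.get_query_execution(QueryExecutionId=query_id)
+    state = status["QueryExecution"]["Status"]["State"]
+    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
+        break
+    time.sleep(1)
+
+if state == "SUCCEEDED":
+    results = athena.get_query_results(QueryExecutionId=query_id)
+    for row in results["ResultSet"]["Rows"]:
+        print([col.get("VarCharValue") for col in row["Data"]])
+```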
+
+## Troubleshooting
+
+### Common Issues
+
+1. **Logs not appearing in CloudWatch**
+   - Check that the CloudFront distribution is receiving traffic
+   - Verify the IAM permissions for the log delivery service
+   - Check CloudWatch service quotas if you have high traffic volumes
+
+2. **Parquet files not appearing in S3**
+   - Verify bucket permissions allow the log delivery service to write
+   - Check for any errors in CloudTrail related to log delivery
+
+3. **Firehose delivery errors**
+   - Check the Firehose error prefix in S3 for error logs
+   - Verify IAM role permissions for Firehose
+   - Monitor Firehose metrics in CloudWatch
+
+### Useful Commands
+
+- Check CloudFront distribution status:
+  ```bash
+  aws cloudfront get-distribution --id <distribution-id>
+  ```
+
+- List log files in S3:
+  ```bash
+  aws s3 ls s3://your-logging-bucket/s3_delivery/ --recursive
+  ```
+
+- View CloudWatch logs:
+  ```bash
+  aws logs get-log-events --log-group-name <log-group-name> --log-stream-name <log-stream-name>
+  ```
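+
+The same logs can be read from Python with boto3; a short sketch (`<log-group-name>` is the group reported in the stack outputs):
+
+```python
+import boto3
+
+logs_client = boto3.client("logs")
+
+# Fetch the ten most recent CloudFront access-log events
+response = logs_client.filter_log_events(
+    logGroupName="<log-group-name>",
+    limit=10,
+)
+for event in response["events"]:
+    print(event["message"])
+```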
+
+## Cleanup
+
+To avoid incurring charges, delete the deployed resources when you're done:
+
+```bash
+cdk destroy
+```
+
+## Security Considerations
+
+This example includes several security best practices:
+
+- S3 buckets are configured with encryption, SSL enforcement, and public access blocking
+- CloudFront uses Origin Access Control (OAC) to secure S3 content
+- IAM permissions follow the principle of least privilege
+- The logging bucket has appropriate lifecycle policies to manage log retention
diff --git a/python/cloudfront-v2-logging/app.py b/python/cloudfront-v2-logging/app.py
new file mode 100644
index 000000000..35c1719f6
--- /dev/null
+++ b/python/cloudfront-v2-logging/app.py
@@ -0,0 +1,54 @@
+#!/usr/bin/env python3
+import os
+import aws_cdk as cdk
+from aws_cdk import Aspects
+from cdk_nag import AwsSolutionsChecks, NagSuppressions
+
+from cloudfront_v2_logging.cloudfront_v2_logging_stack import CloudfrontV2LoggingStack
+
+app = cdk.App()
+stack = CloudfrontV2LoggingStack(app, "CloudfrontV2LoggingStack")
+
+# Add cdk-nag to check for best practices
+Aspects.of(app).add(AwsSolutionsChecks())
+
+# Add suppressions at the stack level
+NagSuppressions.add_stack_suppressions(
+    stack,
+    [
+        {
+            "id": "AwsSolutions-IAM4",
+            "reason": "Suppressing managed policy warning as permissions are appropriate"
+        },
+        {
+            "id": "AwsSolutions-L1",
+            "reason": "The Lambda runtime (Python 3.11) is managed by the CDK BucketDeployment construct and so is out of scope for this project"
+        },
+        {
+            "id": "AwsSolutions-CFR1",
+            "reason": "Geo restrictions not required for this demo"
+        },
+        {
+            "id": "AwsSolutions-CFR2",
+            "reason": "WAF integration not required for this demo"
+        },
+        {
+            "id": "AwsSolutions-CFR3",
+            "reason": "Using CloudFront V2 logging instead of traditional access logging"
+        },
+        {
+            "id": "AwsSolutions-S1",
+            "reason": "S3 access logging not required for this demo as we're demonstrating CloudFront V2 logging"
+        },
+        {
+            "id": "AwsSolutions-IAM5",
+            "reason": "Wildcard permissions are required for PUT actions for the CDK BucketDeployment construct and Firehose role"
+        },
+        {
+            "id": "AwsSolutions-CFR4",
+            "reason": "Raised because we use the default CloudFront viewer certificate, which is acceptable for this demonstration case"
+        }
+    ]
+)
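+
+# Note: stack-level suppressions apply to every resource in the stack. Where
+# practical, cdk-nag's NagSuppressions.add_resource_suppressions() can scope a
+# suppression to a single construct instead.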
+
+app.synth()
\ No newline at end of file
diff --git a/python/cloudfront-v2-logging/architecture.drawio.png b/python/cloudfront-v2-logging/architecture.drawio.png
new file mode 100644
index 000000000..a02082e44
Binary files /dev/null and b/python/cloudfront-v2-logging/architecture.drawio.png differ
diff --git a/python/cloudfront-v2-logging/cdk.json b/python/cloudfront-v2-logging/cdk.json
new file mode 100644
index 000000000..3f7ab4728
--- /dev/null
+++ b/python/cloudfront-v2-logging/cdk.json
@@ -0,0 +1,86 @@
+{
+  "app": "python3 app.py",
+  "watch": {
+    "include": [
+      "**"
+    ],
+    "exclude": [
+      "README.md",
+      "cdk*.json",
+      "requirements*.txt",
+      "source.bat",
+      "**/__init__.py",
+      "**/__pycache__",
+      "tests"
+    ]
+  },
+  "context": {
+    "@aws-cdk/aws-lambda:recognizeLayerVersion": true,
+    "@aws-cdk/core:checkSecretUsage": true,
+    "@aws-cdk/core:target-partitions": [
+      "aws",
+      "aws-cn"
+    ],
+    "@aws-cdk-containers/ecs-service-extensions:enableDefaultLogDriver": true,
+    "@aws-cdk/aws-ec2:uniqueImdsv2TemplateName": true,
+    "@aws-cdk/aws-ecs:arnFormatIncludesClusterName": true,
+    "@aws-cdk/aws-iam:minimizePolicies": true,
+    "@aws-cdk/core:validateSnapshotRemovalPolicy": true,
+    "@aws-cdk/aws-codepipeline:crossAccountKeyAliasStackSafeResourceName": true,
+    "@aws-cdk/aws-s3:createDefaultLoggingPolicy": true,
+    "@aws-cdk/aws-sns-subscriptions:restrictSqsDescryption": true,
+    "@aws-cdk/aws-apigateway:disableCloudWatchRole": true,
+    "@aws-cdk/core:enablePartitionLiterals": true,
+    "@aws-cdk/aws-events:eventsTargetQueueSameAccount": true,
+    "@aws-cdk/aws-ecs:disableExplicitDeploymentControllerForCircuitBreaker": true,
+    "@aws-cdk/aws-iam:importedRoleStackSafeDefaultPolicyName": true,
+    "@aws-cdk/aws-s3:serverAccessLogsUseBucketPolicy": true,
+    "@aws-cdk/aws-route53-patters:useCertificate": true,
+    "@aws-cdk/customresources:installLatestAwsSdkDefault": false,
+    "@aws-cdk/aws-rds:databaseProxyUniqueResourceName": true,
+    "@aws-cdk/aws-codedeploy:removeAlarmsFromDeploymentGroup": true,
+    "@aws-cdk/aws-apigateway:authorizerChangeDeploymentLogicalId": true,
+    "@aws-cdk/aws-ec2:launchTemplateDefaultUserData": true,
+    "@aws-cdk/aws-secretsmanager:useAttachedSecretResourcePolicyForSecretTargetAttachments": true,
+    "@aws-cdk/aws-redshift:columnId": true,
+    "@aws-cdk/aws-stepfunctions-tasks:enableEmrServicePolicyV2": true,
+    "@aws-cdk/aws-ec2:restrictDefaultSecurityGroup": true,
+    "@aws-cdk/aws-apigateway:requestValidatorUniqueId": true,
+    "@aws-cdk/aws-kms:aliasNameRef": true,
+    "@aws-cdk/aws-autoscaling:generateLaunchTemplateInsteadOfLaunchConfig": true,
+    "@aws-cdk/core:includePrefixInUniqueNameGeneration": true,
+    "@aws-cdk/aws-efs:denyAnonymousAccess": true,
+    "@aws-cdk/aws-opensearchservice:enableOpensearchMultiAzWithStandby": true,
+    "@aws-cdk/aws-lambda-nodejs:useLatestRuntimeVersion": true,
+    "@aws-cdk/aws-efs:mountTargetOrderInsensitiveLogicalId": true,
+    "@aws-cdk/aws-rds:auroraClusterChangeScopeOfInstanceParameterGroupWithEachParameters": true,
+    "@aws-cdk/aws-appsync:useArnForSourceApiAssociationIdentifier": true,
+    "@aws-cdk/aws-rds:preventRenderingDeprecatedCredentials": true,
+    "@aws-cdk/aws-codepipeline-actions:useNewDefaultBranchForCodeCommitSource": true,
+    "@aws-cdk/aws-cloudwatch-actions:changeLambdaPermissionLogicalIdForLambdaAction": true,
+    "@aws-cdk/aws-codepipeline:crossAccountKeysDefaultValueToFalse": true,
+    "@aws-cdk/aws-codepipeline:defaultPipelineTypeToV2": true,
+    "@aws-cdk/aws-kms:reduceCrossAccountRegionPolicyScope": true,
+    "@aws-cdk/aws-eks:nodegroupNameAttribute": true,
+    "@aws-cdk/aws-ec2:ebsDefaultGp3Volume": true,
+    "@aws-cdk/aws-ecs:removeDefaultDeploymentAlarm": true,
+    "@aws-cdk/custom-resources:logApiResponseDataPropertyTrueDefault": false,
+    "@aws-cdk/aws-s3:keepNotificationInImportedBucket": false,
+    "@aws-cdk/aws-ecs:enableImdsBlockingDeprecatedFeature": false,
+    "@aws-cdk/aws-ecs:disableEcsImdsBlocking": true,
+    "@aws-cdk/aws-ecs:reduceEc2FargateCloudWatchPermissions": true,
+    "@aws-cdk/aws-dynamodb:resourcePolicyPerReplica": true,
+    "@aws-cdk/aws-ec2:ec2SumTImeoutEnabled": true,
+    "@aws-cdk/aws-appsync:appSyncGraphQLAPIScopeLambdaPermission": true,
+    "@aws-cdk/aws-rds:setCorrectValueForDatabaseInstanceReadReplicaInstanceResourceId": true,
+    "@aws-cdk/core:cfnIncludeRejectComplexResourceUpdateCreatePolicyIntrinsics": true,
+    "@aws-cdk/aws-lambda-nodejs:sdkV3ExcludeSmithyPackages": true,
+    "@aws-cdk/aws-stepfunctions-tasks:fixRunEcsTaskPolicy": true,
+    "@aws-cdk/aws-ec2:bastionHostUseAmazonLinux2023ByDefault": true,
+    "@aws-cdk/aws-route53-targets:userPoolDomainNameMethodWithoutCustomResource": true,
+    "@aws-cdk/aws-elasticloadbalancingV2:albDualstackWithoutPublicIpv4SecurityGroupRulesDefault": true,
+    "@aws-cdk/aws-iam:oidcRejectUnauthorizedConnections": true,
+    "@aws-cdk/core:enableAdditionalMetadataCollection": true,
+    "@aws-cdk/aws-lambda:createNewPoliciesWithAddToRolePolicy": true
+  }
+}
diff --git a/python/cloudfront-v2-logging/cloudfront_v2_logging/__init__.py b/python/cloudfront-v2-logging/cloudfront_v2_logging/__init__.py
new file mode 100644
index 000000000..e69de29bb
+ """ + + def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None: + super().__init__(scope, construct_id, **kwargs) + + # CloudFormation parameters for customization + s3_log_retention_days = CfnParameter( + self, "LogRetentionDays", + type="Number", + default=30, + min_value=1, + max_value=365, + description="Number of days to retain CloudFront logs in S3" + ) + + cloudwatch_log_retention_days = CfnParameter( + self, "CloudWatchLogRetentionDays", + type="Number", + default=30, + description="Number of days to retain CloudFront logs in CloudWatch Logs", + allowed_values=[ + "1", "3", "5", "7", "14", "30", "60", "90", + "120", "150", "180", "365", "400", "545", "731", + "1827", "3653", "0" + ] + ) + + # Create the S3 logging bucket for CloudFront + # This bucket will store logs from the S3 output in Parquet format and also be the target for our Firehose delivery + logging_bucket = s3.Bucket( + self, "CFLoggingBucket", + removal_policy=RemovalPolicy.DESTROY, + encryption=s3.BucketEncryption.S3_MANAGED, + block_public_access=s3.BlockPublicAccess.BLOCK_ALL, + auto_delete_objects=True, + object_ownership=s3.ObjectOwnership.OBJECT_WRITER, # Enable ACLs for log delivery + enforce_ssl=True, + lifecycle_rules=[ + s3.LifecycleRule( + expiration=Duration.days(s3_log_retention_days.value_as_number), # Configurable log retention + ) + ] + ) + + # Create the main S3 bucket for your application + # This bucket will host the static website content + main_bucket = s3.Bucket( + self, "OriginBucket", + removal_policy=RemovalPolicy.DESTROY, + encryption=s3.BucketEncryption.S3_MANAGED, + block_public_access=s3.BlockPublicAccess.BLOCK_ALL, + enforce_ssl=True, # Enforce SSL for all requests + auto_delete_objects=True # Clean up objects when stack is deleted + ) + + # Deploy the static website content to the S3 bucket with improved options + s3_deployment.BucketDeployment( + self, "DeployWebsite", + sources=[s3_deployment.Source.asset("website")], # Directory containing your website files + destination_bucket=main_bucket, + content_type="text/html", # Set content type for HTML files + cache_control=[s3_deployment.CacheControl.max_age(Duration.days(7))], # Cache for 7 days + prune=False + ) + + # Create CloudWatch Logs group with configurable retention + log_group = logs.LogGroup( + self, + "DistributionLogGroup", + retention=self._get_log_retention(cloudwatch_log_retention_days.value_as_number) + ) + + # Create Kinesis Firehose delivery stream to buffer and deliver logs to S3 using L2 construct + # Define S3 destination for Firehose with dynamic prefixes + s3_destination = S3Bucket( + bucket=logging_bucket, + data_output_prefix="firehose_delivery/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/", + error_output_prefix="errors/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/!{firehose:error-output-type}/", + buffering_interval=Duration.seconds(300), # Buffer for 5 minutes + buffering_size=Size.mebibytes(5), # Or until 5MB is reached + compression=Compression.HADOOP_SNAPPY # Compress data for efficiency + ) + + # Create Kinesis Firehose delivery stream using L2 construct + firehose_stream = firehose.DeliveryStream( + self, "LoggingFirehose", + delivery_stream_name="cloudfront-logs-stream", + destination=s3_destination, + encryption=firehose.StreamEncryption.aws_owned_key() + ) + + # Grant permissions for the delivery service to write logs to the S3 bucket + logging_bucket.add_to_resource_policy( + iam.PolicyStatement( + sid="AllowCloudFrontLogDelivery", + 
actions=["s3:PutObject"], + principals=[iam.ServicePrincipal("delivery.logs.amazonaws.com")], + resources=[f"{logging_bucket.bucket_arn}/*"], + conditions={ + "StringEquals": { + "aws:SourceAccount": Stack.of(self).account + } + } + ) + ) + + # Add GetBucketAcl permission required by the log delivery service + logging_bucket.add_to_resource_policy( + iam.PolicyStatement( + sid="AllowCloudFrontLogDeliveryAcl", + actions=["s3:GetBucketAcl"], + principals=[iam.ServicePrincipal("delivery.logs.amazonaws.com")], + resources=[logging_bucket.bucket_arn], + conditions={ + "StringEquals": { + "aws:SourceAccount": Stack.of(self).account + } + } + ) + ) + + # Create CloudFront distribution with S3BucketOrigin + distribution = cloudfront.Distribution( + self, "LoggedDistribution", + comment="CloudFront distribution with STD Logging V2 Configuration Examples", + default_behavior=cloudfront.BehaviorOptions( + origin=origins.S3BucketOrigin.with_origin_access_control(main_bucket), + viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS, + cache_policy=cloudfront.CachePolicy.CACHING_OPTIMIZED, + compress=True + ), + default_root_object="index.html", + minimum_protocol_version=cloudfront.SecurityPolicyProtocol.TLS_V1_2_2021, # Uses TLS 1.2 as minimum + http_version=cloudfront.HttpVersion.HTTP2, + enable_logging=False # We're using CloudFront V2 logging instead of traditional logging + ) + + # SECTION: CLOUDFRONT STANDARD LOGGING V2 CONFIGURATION + + # 1. Create the delivery source for CloudFront distribution logs + # This defines the CloudFront distribution as the source of logs + distribution_delivery_source = logs.CfnDeliverySource( + self, + "DistributionDeliverySource", + name="distribution-source", + log_type="ACCESS_LOGS", + resource_arn=Stack.of(self).format_arn( + service="cloudfront", + region="", # CloudFront is a global service + resource="distribution", + resource_name=distribution.distribution_id + ) + ) + + # 2. CLOUDWATCH LOGS DESTINATION + + # Create a CloudWatch delivery destination + cf_distribution_delivery_destination = logs.CfnDeliveryDestination( + self, + "CloudWatchDeliveryDestination", + name="cloudwatch-logs-destination", + destination_resource_arn=log_group.log_group_arn, + output_format="json" + ) + + # Create the CloudWatch Logs delivery configuration + cf_delivery = logs.CfnDelivery( + self, + "CloudwatchDelivery", + delivery_source_name=distribution_delivery_source.name, + delivery_destination_arn=cf_distribution_delivery_destination.attr_arn + ) + cf_delivery.node.add_dependency(distribution_delivery_source) + cf_delivery.node.add_dependency(cf_distribution_delivery_destination) + + # 3. 
+
+        # 3. S3 PARQUET DESTINATION
+        # Configure S3 as a delivery destination with Parquet format
+        s3_distribution_delivery_destination = logs.CfnDeliveryDestination(
+            self,
+            "S3DeliveryDestination",
+            name="s3-destination",
+            destination_resource_arn=logging_bucket.bucket_arn,
+            output_format="parquet",
+        )
+
+        # Create the S3 delivery configuration with Hive-compatible paths
+        s3_delivery = logs.CfnDelivery(
+            self,
+            "S3Delivery",
+            delivery_source_name=distribution_delivery_source.name,
+            delivery_destination_arn=s3_distribution_delivery_destination.attr_arn,
+            s3_enable_hive_compatible_path=True,  # Enable Hive-compatible paths for Athena
+            s3_suffix_path="s3_delivery/{DistributionId}/{yyyy}/{MM}/{dd}/{HH}"
+        )
+        s3_delivery.node.add_dependency(distribution_delivery_source)
+        s3_delivery.node.add_dependency(s3_distribution_delivery_destination)
+        # Deliveries for the same source are created one after another to avoid conflicts
+        s3_delivery.node.add_dependency(cf_delivery)
+
+        # 4. KINESIS DATA FIREHOSE DESTINATION
+        # Configure Firehose as a delivery destination for CloudFront logs
+        firehose_delivery_destination = logs.CfnDeliveryDestination(
+            self, "FirehoseDeliveryDestination",
+            name="cloudfront-logs-destination",
+            destination_resource_arn=firehose_stream.delivery_stream_arn,
+            output_format="json"
+        )
+
+        # Create the Firehose delivery configuration
+        delivery = logs.CfnDelivery(
+            self,
+            "Delivery",
+            delivery_source_name=distribution_delivery_source.name,
+            delivery_destination_arn=firehose_delivery_destination.attr_arn
+        )
+        delivery.node.add_dependency(distribution_delivery_source)
+        delivery.node.add_dependency(firehose_delivery_destination)
+        delivery.node.add_dependency(s3_delivery)  # Sequence the Firehose delivery after the S3 delivery
+
+        # Output the CloudFront distribution domain name for easy access
+        CfnOutput(
+            self, "DistributionDomainName",
+            value=distribution.distribution_domain_name,
+            description="CloudFront distribution domain name"
+        )
+
+        # Output the S3 bucket name where logs are stored
+        CfnOutput(
+            self, "LoggingBucketName",
+            value=logging_bucket.bucket_name,
+            description="S3 bucket for CloudFront logs"
+        )
+
+        # Output the CloudWatch log group name and retention period
+        CfnOutput(
+            self, "CloudWatchLogGroupName",
+            value=f"{log_group.log_group_name} (retention: {cloudwatch_log_retention_days.value_as_number} days)",
+            description="CloudWatch log group for CloudFront logs"
+        )
+
+    def _get_log_retention(self, days):
+        """Convert numeric days to a logs.RetentionDays enum value.
+
+        Note: this expects a synth-time literal; an unresolved CloudFormation
+        token falls through to the ONE_MONTH default below.
+        """
+        retention_map = {
+            0: logs.RetentionDays.INFINITE,
+            1: logs.RetentionDays.ONE_DAY,
+            3: logs.RetentionDays.THREE_DAYS,
+            5: logs.RetentionDays.FIVE_DAYS,
+            7: logs.RetentionDays.ONE_WEEK,
+            14: logs.RetentionDays.TWO_WEEKS,
+            30: logs.RetentionDays.ONE_MONTH,
+            60: logs.RetentionDays.TWO_MONTHS,
+            90: logs.RetentionDays.THREE_MONTHS,
+            120: logs.RetentionDays.FOUR_MONTHS,
+            150: logs.RetentionDays.FIVE_MONTHS,
+            180: logs.RetentionDays.SIX_MONTHS,
+            365: logs.RetentionDays.ONE_YEAR,
+            400: logs.RetentionDays.THIRTEEN_MONTHS,
+            545: logs.RetentionDays.EIGHTEEN_MONTHS,
+            731: logs.RetentionDays.TWO_YEARS,
+            1827: logs.RetentionDays.FIVE_YEARS,
+            3653: logs.RetentionDays.TEN_YEARS
+        }
+        return retention_map.get(int(days), logs.RetentionDays.ONE_MONTH)
diff --git a/python/cloudfront-v2-logging/requirements.txt b/python/cloudfront-v2-logging/requirements.txt
new file mode 100644
index 000000000..75a8f6ffc
--- /dev/null
+++ b/python/cloudfront-v2-logging/requirements.txt
@@ -0,0 +1,3 @@
+aws-cdk-lib==2.211.0
+constructs>=10.0.0,<11.0.0
+cdk-nag>=2.0.0
diff --git a/python/cloudfront-v2-logging/website/index.html b/python/cloudfront-v2-logging/website/index.html
new file mode 100644
index 000000000..82107655b
--- /dev/null
+++ b/python/cloudfront-v2-logging/website/index.html
@@ -0,0 +1,10 @@
+<!DOCTYPE html>
+<html>
+<head>
+    <title>Welcome</title>
+</head>
+<body>
+    <h1>Welcome to my CloudFront-enabled website!</h1>
+    <p>This is a simple page served via CloudFront from S3. Requests to this distribution will be logged to multiple outputs via CloudFront Standard Logging V2.</p>
+</body>
+</html>
\ No newline at end of file