🌎 Read this in other languages: Português (Brasil)
- Overview
- Architecture
- Prerequisites
- Project Structure
- Building the Project
- AWS Console Configuration
- Testing the System
- Resource Cleanup
- Next Steps
This project demonstrates a complete serverless architecture on AWS for automatic image processing. When an image is uploaded to S3, it is automatically processed (resizing, optimization) using Lambda Functions.
- Image upload to S3
- Asynchronous processing via Lambda
- Automatic thumbnail generation
- Automatic retry on failure (via SQS)
- Dead Letter Queue (DLQ) for problematic messages
- Logs on CloudWatch
- Upload: User uploads `photo.jpg` to bucket `image-processor-in`
- Event: S3 sends a notification to SQS queue `product-image-queue`
- Trigger: Lambda is triggered automatically by SQS
- Processing: Lambda downloads the image, then creates thumbnail and optimized versions
- Result: Lambda saves the processed images to bucket `image-processor-out`
- Retry: If processing fails, SQS retries automatically (up to 3 times)
- DLQ: Messages that failed 3 times go to `product-image-queue-dlq`
For a detailed sequence diagram showing the error handling and retry mechanism, see: Error Flow Diagram
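Each SQS message delivered to the Lambda carries an S3 event notification as its body. A stdlib-only sketch of decoding that payload (the real handler presumably uses the aws-lambda-go event types; the struct and function names here are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// s3Event mirrors the fields of the S3 event notification that S3
// delivers as the SQS message body. Only the fields this flow needs
// are declared; extra JSON fields are ignored by encoding/json.
type s3Event struct {
	Records []struct {
		EventName string `json:"eventName"`
		S3        struct {
			Bucket struct {
				Name string `json:"name"`
			} `json:"bucket"`
			Object struct {
				// Note: S3 URL-encodes object keys in the
				// notification (e.g. spaces become '+').
				Key  string `json:"key"`
				Size int64  `json:"size"`
			} `json:"object"`
		} `json:"s3"`
	} `json:"Records"`
}

// parseS3Event decodes one SQS message body into a bucket/key pair.
func parseS3Event(body string) (bucket, key string, err error) {
	var ev s3Event
	if err = json.Unmarshal([]byte(body), &ev); err != nil {
		return "", "", err
	}
	if len(ev.Records) == 0 {
		return "", "", fmt.Errorf("no records in event")
	}
	return ev.Records[0].S3.Bucket.Name, ev.Records[0].S3.Object.Key, nil
}

func main() {
	body := `{"Records":[{"eventName":"ObjectCreated:Put",` +
		`"s3":{"bucket":{"name":"image-processor-in-giovano"},` +
		`"object":{"key":"photo.jpg","size":123456}}}]}`
	b, k, _ := parseS3Event(body)
	fmt.Println(b, k) // image-processor-in-giovano photo.jpg
}
```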
- Go (Golang) (Installation)
- AWS CLI (Installation)
- AWS Account (Free Tier works!)
# Configure credentials
aws configure
# AWS Access Key ID: AKIAIOSFODNN7EXAMPLE
# AWS Secret Access Key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Default region name: us-east-1
# Default output format: json
# Test
aws s3 ls

aws-image-processor/
├── cmd/
│   └── lambda/
│       └── main.go                          # Lambda function entry point
├── internal/
│   ├── aws/
│   │   └── s3_client.go                     # Client for S3 operations
│   └── processor/
│       └── image_processor.go               # Main image processing logic
├── scripts/
│   └── build.sh                             # Build script
├── docs/
│   ├── architecture-flow.puml               # PlantUML architecture diagram
│   ├── architecture-flow.pt-br.puml         # PlantUML architecture diagram (Portuguese)
│   ├── architecture-flow-error.puml         # Error flow diagram
│   └── architecture-flow-error.pt-br.puml   # Error flow diagram (Portuguese)
├── go.mod                                   # Go project dependencies
├── go.sum                                   # Dependency checksums
├── .gitignore                               # Files ignored by Git
├── README.md                                # English project documentation
└── README.pt-br.md                          # Brazilian Portuguese project documentation
Before configuring the AWS resources, you need to build the Lambda function. This project includes a build script that automates the entire process.
The build script (scripts/build.sh) will:
- Compile the Go code for Linux (required for Lambda)
- Create a ZIP file with the compiled binary
- Leave the `lambda.zip` artifact at the project root
To build the project:
# Make the script executable (first time only)
chmod +x scripts/build.sh
# Run the build script
./scripts/build.sh

After running the script:
- The `lambda.zip` file will be created at the root of the project
- This ZIP file is the artifact that you'll upload to AWS Lambda (see Step 3 in AWS Console Configuration)
Prerequisites for building:
- Go must be installed
- The `zip` utility must be installed (install with `sudo apt install zip unzip` on Ubuntu/Debian)
📝 Note: S3 bucket names must be globally unique. Add your name, initials, or a random number.
- Go to: https://console.aws.amazon.com/s3/
- Click "Create bucket"
Configuration:
- Bucket name: `image-processor-in-{your-name}` (e.g. `image-processor-in-giovano`)
- Region: `us-east-1` (or your preference)
- Block Public Access:
  - ✅ Keep blocked (recommended for security)
- Versioning: Enabled (optional, but recommended)
- Click "Create bucket"
Configuration:
- Bucket name: `image-processor-out-{your-name}` (e.g. `image-processor-out-giovano`)
- Region: Same region as the source bucket (`us-east-1`)
- Block Public Access:
  - ⚠️ Uncheck if you want to serve thumbnails publicly
  - ✅ OR keep blocked and use CloudFront later
- Click "Create bucket"
📦 Buckets Created:
✅ Source Bucket: image-processor-in-{your-name}
✅ Destination Bucket: image-processor-out-{your-name}
- Go to: https://console.aws.amazon.com/sqs/
- Click "Create queue"
Configuration:
- Name: `product-image-queue`
- Type: Standard
- Configuration:
  - Visibility timeout: `300 seconds` (5 min; keep this above the Lambda timeout so in-flight messages aren't redelivered mid-processing)
  - Message retention period: `86400 seconds` (1 day)
  - Receive message wait time: `20 seconds` (long polling)
- Dead-letter queue: (configure after creating the DLQ)
- Click "Create queue"
Configuration:
- Name: `product-image-queue-dlq`
- Type: Standard
- Configuration: Use defaults
- Click "Create queue"
- Go back to queue `product-image-queue`
- Click "Edit"
- In the Dead-letter queue section:
  - ✅ Enabled
  - Choose existing queue: `product-image-queue-dlq`
  - Maximum receives: `3`
- Click "Save"
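For reference, the console settings above amount to setting a redrive policy attribute on the main queue; the resulting JSON looks roughly like this (replace the account ID placeholder as before):

```json
{
  "deadLetterTargetArn": "arn:aws:sqs:us-east-1:YOUR-ACCOUNT-ID:product-image-queue-dlq",
  "maxReceiveCount": 3
}
```

With `maxReceiveCount: 3`, a message that has been received three times without being deleted is moved to the DLQ instead of being redelivered again.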
📋 Queues Created:
✅ Main Queue: product-image-queue
✅ Dead Letter Queue: product-image-queue-dlq
- Access Lambda Console
  - Go to: https://console.aws.amazon.com/lambda/
  - Click "Create function"
- Basic Configuration
  - Option: Author from scratch
  - Function name: `ProcessImageFunction`
  - Runtime: Amazon Linux 2023 (custom runtime)
  - Architecture: x86_64
  - Click "Create function"
- Upload Code
  - In the "Code source" section
  - Click "Upload from" → ".zip file"
  - Select `lambda.zip` (the compiled binary)
  - Click "Save"
- Configure Handler
  - In the "Runtime settings" section → Edit
  - Handler: `bootstrap`

💡 Explanation: For the custom runtime, the executable MUST be named `bootstrap`.
- Function Settings
  - General configuration → Edit:
    - Memory: `512 MB`
    - Timeout: `1 min`
    - Ephemeral storage: `512 MB`
  - Environment variables → Edit → Add environment variable:
    - Key: `OUTPUT_BUCKET`
    - Value: `image-processor-out-{your-name}`
- Configure Permissions (IAM Role)
  - In tab "Configuration" → "Permissions"
  - Click on the Role name (this opens the IAM console)
  - "Add permissions" → "Attach policies"
  - Add:
    - ✅ AmazonS3FullAccess
    - ✅ AWSLambdaSQSQueueExecutionRole
    - ✅ CloudWatchLogsFullAccess
Before creating the event notification, S3 needs permission to send messages to the queue:
- Go to SQS Console
- Select queue `product-image-queue`
- Tab "Access policy" → Edit
- Replace all content with the policy below:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "sqs:SendMessage",
"Resource": "arn:aws:sqs:us-east-1:YOUR-ACCOUNT-ID:product-image-queue",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "arn:aws:s3:::image-processor-in-{your-name}"
}
}
}
]
}

- Replace `YOUR-ACCOUNT-ID` with your Account ID (12 digits)
- Click Save
💡 How to find your Account ID:
Option 1 - AWS Console:
- Click on your name in the upper right corner
- The 12-digit number appears
Option 2 - AWS CLI:
aws sts get-caller-identity --query Account --output text
- Go to bucket `image-processor-in-{your-name}`
- Tab "Properties"
- Scroll to "Event notifications"
- Click "Create event notification"
Configuration:
- Event name: `ImageUploadedEvent`
- Prefix: (leave empty to process all images)
- Suffix: `.jpg` (optional: only JPGs, or leave empty for all types)
- Event types:
  - ✅ s3:ObjectCreated:Put
  - ✅ s3:ObjectCreated:Post
  - ✅ s3:ObjectCreated:CompleteMultipartUpload
- Destination: SQS queue
- SQS queue: `product-image-queue`
- Click "Save changes"
✅ If everything is correct, you will see a success message!
- Function: `ProcessImageFunction`
- Click "Add trigger"
- Select a source: SQS
- SQS queue: `product-image-queue`
- Batch size: `10` (processes up to 10 messages at a time)
- Batch window: `5` seconds (optional)
- Expand "Additional settings":
  - ✅ Report batch item failures (IMPORTANT!)
  - ✅ Enable trigger
- Click "Add"
- On the Lambda page, the diagram should appear: SQS (product-image-queue) → Lambda (ProcessImageFunction)
🎉 Complete Flow Configured:
S3 (image-processor-in-{your-name})
  ↓ event
SQS (product-image-queue)
  ↓ trigger
Lambda (ProcessImageFunction)
  ↓ output
S3 (image-processor-out-{your-name})
# Upload a test image
aws s3 cp test-image.jpg s3://image-processor-in-{your-name}/test-image.jpg
# Check messages in queue
aws sqs get-queue-attributes \
--queue-url https://sqs.us-east-1.amazonaws.com/ACCOUNT-ID/product-image-queue \
--attribute-names ApproximateNumberOfMessages
# View Lambda logs
aws logs tail /aws/lambda/ProcessImageFunction --follow
# Check result in output bucket
aws s3 ls s3://image-processor-out-{your-name}/ --recursive
# View created thumbnails
aws s3 ls s3://image-processor-out-{your-name}/thumbnails/
aws s3 ls s3://image-processor-out-{your-name}/medium/

Upload via S3 Console:
- Go to bucket `image-processor-in-{your-name}`
- Click "Upload"
- Drag a JPG image
- Click "Upload"
- Wait for processing (~5-10 seconds)
- Check bucket `image-processor-out-{your-name}`
- It should have folders `thumbnails/` and `medium/` with the processed images
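The resize step behind the `thumbnails/` and `medium/` variants has to preserve aspect ratio. A stdlib-only sketch of the dimension math (the 150 px and 800 px target widths are illustrative; the actual sizes live in `internal/processor` and are not shown in this README):

```go
package main

import "fmt"

// fitWidth scales (w, h) down to targetW while preserving aspect ratio.
// Images narrower than targetW are left unchanged to avoid upscaling.
func fitWidth(w, h, targetW int) (int, int) {
	if w <= targetW {
		return w, h
	}
	// Integer math rounded to the nearest pixel: newH = h * targetW / w.
	return targetW, (h*targetW + w/2) / w
}

func main() {
	// Illustrative sizes: 150 px thumbnails, 800 px "medium" versions.
	fmt.Println(fitWidth(3000, 2000, 150)) // 150 100
	fmt.Println(fitWidth(3000, 2000, 800)) // 800 533
}
```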
# View logs in real time
aws logs tail /aws/lambda/ProcessImageFunction --follow --format short
# Search for errors
aws logs filter-log-events \
  --log-group-name /aws/lambda/ProcessImageFunction \
  --filter-pattern "ERROR" \
  --start-time $(date -u -d '10 minutes ago' +%s)000
# Metrics
aws cloudwatch get-metric-statistics \
--namespace AWS/Lambda \
--metric-name Invocations \
--dimensions Name=FunctionName,Value=ProcessImageFunction \
--start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 300 \
--statistics Sum

Some AWS resources generate costs even when not in use:
- S3: Charges for storage (GB/month)
- Lambda: No charge when not invoked (billed per invocation and duration)
- SQS: Usually free within Free Tier limits
- CloudWatch Logs: Charges for log storage
It's important to follow the order to avoid dependency errors:
1. Disable Triggers and Event Notifications
2. Delete Lambda Function
3. Delete SQS Queues
4. Empty and Delete S3 Buckets
5. Delete IAM Roles and Policies (optional)
6. Delete CloudWatch Log Groups
Currently, this project requires manual configuration through the AWS Console. As a next step, Infrastructure as Code (IaC) can be implemented using Terraform to provision all AWS resources directly.
To improve performance and reduce latency when serving processed images, a CloudFront distribution can be configured to deliver content from the output S3 bucket. This enables global caching and faster image delivery to end users.
⭐ If this project was useful, consider giving it a star on GitHub!