A comprehensive backend system demonstrating enterprise-level technologies: Node.js, Redis, MySQL, Message Queues, and Clean Architecture.
What it does:
- Employees can check-in/check-out for work
- Managers get notified when employees are late
- Real-time dashboard showing who's currently in the office
- Attendance reports with intelligent caching
Technologies demonstrated:
- Node.js + Express - REST API server
- MySQL - Employee and attendance data storage
- Redis - Session management, caching, real-time data
- Bull Queue - Background email processing
- NGINX - Reverse proxy and load balancing
- Clean Architecture - Proper separation of concerns
attendance-system/
├── package.json                         # Dependencies and scripts
├── server.js                            # Main Express server
├── nginx.conf                           # NGINX reverse proxy config
├── .env.example                         # Environment template
├── database/
│   └── schema.sql                       # Database schema and sample data
├── src/
│   ├── config/
│   │   ├── database.js                  # MySQL connection pool
│   │   └── redis.js                     # Redis client configuration
│   ├── api/
│   │   ├── controllers/                 # HTTP request/response handling
│   │   │   ├── authController.js        # Authentication endpoints
│   │   │   └── attendanceController.js  # Attendance endpoints
│   │   ├── middleware/                  # Cross-cutting concerns
│   │   │   └── auth.js                  # Authentication middleware
│   │   └── routes/                      # API route definitions
│   │       ├── auth.js                  # Authentication routes
│   │       └── attendance.js            # Attendance routes
│   ├── services/                        # Business logic layer
│   │   ├── authService.js               # Authentication business logic
│   │   └── attendanceService.js         # Attendance business logic
│   └── workers/
│       └── email-worker.js              # Background job processor
└── tests/
    └── *.postman_collection.json        # API testing collection
Think of the layers with a restaurant analogy:
// Just lists what's available - no logic
router.post("/checkin", authenticate, attendanceController.checkIn);
- Like: Restaurant menu - just lists dishes
- Does: Defines available endpoints
- Contains: No business logic, just routing
const authenticate = async (req, res, next) => {
  // Check if the customer has a valid ID before entering
  const token = req.headers.authorization;
  if (!token) return res.status(401).json({ error: "No ID" });
  next(); // Let them proceed
};
- Like: Security guard checking IDs at restaurant entrance
- Does: Validates, authenticates, processes requests
- Runs: Before reaching controller (cross-cutting concerns)
async checkIn(req, res) {
  try {
    const { userId, userName } = req.user; // attached by the auth middleware
    const result = await attendanceService.checkIn(userId, userName);
    res.json(result); // Serve the dish
  } catch (error) {
    res.status(500).json({ error: 'Kitchen problem' }); // Handle complaints
  }
}
- Like: Waiter taking orders and serving food
- Does: Handles HTTP requests/responses, error handling
- Contains: No business logic - just coordinates between request and service
async checkIn(userId, userName) {
  // Cook the meal - complex business logic
  const today = this.getTodayDate();
  const checkInTime = new Date();
  if (await this.hasCheckedInToday(userId, today)) {
    throw new Error('Already checked in today');
  }
  await this.recordCheckIn(userId, today, checkInTime);
  return { message: 'Checked in successfully' };
}
- Like: Chef cooking in kitchen
- Does: Business logic, database operations, calculations
- Contains: Pure business logic, no HTTP concerns
This is Layered Architecture (also called N-Tier Architecture):
┌──────────────────────┐
│        Routes        │  ← Presentation Layer (API endpoints)
├──────────────────────┤
│      Middleware      │  ← Cross-cutting concerns (auth, validation)
├──────────────────────┤
│     Controllers      │  ← Application Layer (orchestration)
├──────────────────────┤
│       Services       │  ← Business Logic Layer (domain rules)
├──────────────────────┤
│    Database/Redis    │  ← Data Access Layer (persistence)
└──────────────────────┘
- Each layer has ONE responsibility
- Easy to modify without breaking others
- Test business logic (services) without HTTP
- Mock services for controller testing
- Services can be used by different controllers
- Middleware reused across routes
- Clear structure - developers know where to find/add code
- Changes in one layer don't affect others
1. Client: POST /api/attendance/checkin
2. Route: Matches endpoint
3. Middleware: authenticate → validates token
4. Controller: checkIn → handles HTTP request
5. Service: checkIn → business logic (check duplicates, save to DB)
6. Controller: Returns HTTP response
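To make the flow concrete, here is a minimal sketch of how these layers could be wired together in the Express entry point. It is illustrative only: the file paths match the project tree above, but the exact wiring in server.js may differ.

// server.js (illustrative sketch of the layer wiring)
const express = require("express");
const authRoutes = require("./src/api/routes/auth");
const attendanceRoutes = require("./src/api/routes/attendance");

const app = express();
app.use(express.json()); // parse JSON request bodies

// Presentation layer: routes mount the middleware + controllers shown above
app.use("/api/auth", authRoutes);
app.use("/api/attendance", attendanceRoutes);

// Simple liveness endpoint (see GET /health below)
app.get("/health", (req, res) => res.json({ status: "ok" }));

app.listen(process.env.PORT || 3000);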
This is Enterprise-level Clean Architecture used in production systems!
# Required services
- Node.js 16+
- MySQL 8.0+
- Redis 6.0+
# 1. Clone and setup
git clone https://github.com/yourusername/attendance-system.git
cd attendance-system
npm install
# 2. Setup environment
cp .env.example .env
# Edit .env with your database credentials
# 3. Create database
mysql -u root -p
CREATE DATABASE attendance_db;
exit
# 4. Import schema
mysql -u root -p attendance_db < database/schema.sql
# 5. Start services
redis-server # Terminal 1
npm run dev # Terminal 2
npm run worker # Terminal 3
# Optional: Start with NGINX (production setup)
sudo nginx -t # Test NGINX config
sudo nginx -s reload # Reload NGINX config
- Technology: Redis for session storage
- Why: Fast session lookup, automatic expiration
- Endpoint:
POST /api/auth/login
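A minimal sketch of how the login session could be written to Redis with automatic expiration. It assumes the node-redis v4 client; the key format follows the examples in this README, and the 8-hour TTL and helper name are illustrative.

// authService.js (sketch) - store the session in Redis so lookups skip MySQL
const { randomUUID } = require("crypto");

async function createSession(redisClient, user) {
  const token = `token_${Date.now()}_${user.id}_${randomUUID().slice(0, 6)}`;
  // EX makes Redis delete the session automatically after 8 hours
  await redisClient.set(`session:${token}`, JSON.stringify(user), { EX: 60 * 60 * 8 });
  return token;
}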
- Technology: MySQL for persistence + Redis for caching
- Why: Prevents duplicate check-ins, tracks real-time office presence
- Endpoints:
POST /api/attendance/checkin
POST /api/attendance/checkout
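A hedged sketch of the duplicate-prevention key behind check-in. Helper names are illustrative, and the node-redis v4 client is assumed.

// attendanceService.js (sketch) - dated Redis key blocks a second check-in
async function hasCheckedInToday(redisClient, userId, today) {
  return (await redisClient.exists(`checkin:user${userId}:${today}`)) === 1;
}

async function markCheckedIn(redisClient, userId, today) {
  // 24-hour TTL is a simplification; expiring at midnight would be tighter
  await redisClient.set(`checkin:user${userId}:${today}`, "1", { EX: 60 * 60 * 24 });
}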
- Technology: Redis Sets for fast membership queries
- Why: Instant lookup of who's currently in office
- Endpoint:
GET /api/attendance/current
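A sketch of the Set operations behind this endpoint (node-redis v4 client assumed; function names are illustrative):

// attendanceService.js (sketch) - office presence tracked in a Redis Set
async function addToOffice(redisClient, userId) {
  await redisClient.sAdd("employees_in_office", String(userId)); // on check-in
}

async function removeFromOffice(redisClient, userId) {
  await redisClient.sRem("employees_in_office", String(userId)); // on check-out
}

async function whoIsInOffice(redisClient) {
  return redisClient.sMembers("employees_in_office"); // instant lookup of everyone present
}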
- Technology: Bull Queue with Redis
- Why: Non-blocking email processing for late arrivals
- Trigger: Automatic when check-in time > 9:00 AM
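A sketch of how lateness could be computed from WORK_START_TIME and handed off to the queue. The helper name and exact threshold logic are assumptions, not the project's verbatim code.

// attendanceService.js (sketch) - detect lateness, then queue the email job
async function handleLateArrival(emailQueue, userId, userName, checkInTime) {
  const [hours, minutes] = (process.env.WORK_START_TIME || "09:00").split(":").map(Number);
  const workStart = new Date(checkInTime);
  workStart.setHours(hours, minutes, 0, 0);

  const minutesLate = Math.max(0, Math.round((checkInTime - workStart) / 60000));
  if (minutesLate > 0) {
    // Non-blocking: the background worker sends the manager email later
    await emailQueue.add("late-notification", { employeeId: userId, employeeName: userName, minutesLate });
  }
  return minutesLate;
}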
- Technology: Redis with TTL (Time To Live)
- Why: Expensive database queries cached for 1 hour
- Endpoint:
GET /api/attendance/reports
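A sketch of the cache-aside pattern behind this endpoint. It assumes a mysql2 promise pool and the node-redis v4 client; the query itself is illustrative.

// attendanceService.js (sketch) - serve reports from Redis, fall back to MySQL
async function getReport(redisClient, dbPool, startDate, endDate) {
  const cacheKey = `report:${startDate}:${endDate}`;
  const cached = await redisClient.get(cacheKey);
  if (cached) return JSON.parse(cached); // cache hit: no database work at all

  const [rows] = await dbPool.query(
    `SELECT e.name, COUNT(*) AS days_present
       FROM attendance a JOIN employees e ON e.id = a.employee_id
      WHERE a.date BETWEEN ? AND ?
      GROUP BY e.id`,
    [startDate, endDate]
  );
  await redisClient.set(cacheKey, JSON.stringify(rows), { EX: 3600 }); // 1-hour TTL
  return rows;
}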
POST /api/auth/login # Login and get session token
POST /api/auth/logout # Invalidate session
POST /api/attendance/checkin # Check into office
POST /api/attendance/checkout # Check out of office
GET /api/attendance/current # Who's currently in office
GET /api/attendance/reports # Attendance reports (cached)
GET /health # Service health check
curl -X POST http://localhost:3000/api/auth/login \
-H "Content-Type: application/json" \
-d '{"email":"alice@company.com"}'
Response:
{
"token": "token_1703473200000_2_abc123",
"user": {
"id": 2,
"name": "Alice Dev",
"email": "alice@company.com",
"department": "IT"
}
}
curl -X POST http://localhost:3000/api/attendance/checkin \
-H "Authorization: token_1703473200000_2_abc123"
Response:
{
"message": "Checked in successfully",
"checkInTime": "2024-01-15T08:45:00.000Z",
"isLate": false
}
curl http://localhost:3000/api/attendance/current \
-H "Authorization: token_1703473200000_2_abc123"
Response:
{
"employeesInOffice": [
{ "id": 2, "name": "Alice Dev", "department": "IT" },
{ "id": 3, "name": "Bob Designer", "department": "Design" }
]
}
-- Employees table
CREATE TABLE employees (
id INT PRIMARY KEY AUTO_INCREMENT,
name VARCHAR(100) NOT NULL,
email VARCHAR(100) UNIQUE NOT NULL,
department VARCHAR(50) NOT NULL,
manager_id INT,
FOREIGN KEY (manager_id) REFERENCES employees(id)
);
-- Attendance records
CREATE TABLE attendance (
id INT PRIMARY KEY AUTO_INCREMENT,
employee_id INT NOT NULL,
date DATE NOT NULL,
check_in_time DATETIME,
check_out_time DATETIME,
UNIQUE(employee_id, date),
FOREIGN KEY (employee_id) REFERENCES employees(id)
);
| Use Case        | Redis Data Type | Example Key                  | Purpose             |
|-----------------|-----------------|------------------------------|---------------------|
| Sessions        | String          | session:token123             | User authentication |
| Check-in Cache  | String          | checkin:user1:2024-01-15     | Prevent duplicates  |
| Office Presence | Set             | employees_in_office          | Real-time tracking  |
| Report Cache    | String          | report:2024-01-01:2024-01-31 | Query optimization  |
// Late notification email
emailQueue.add("late-notification", {
employeeId: 123,
employeeName: "Alice",
minutesLate: 15,
});
# Development
npm run dev # Start with nodemon (auto-restart)
npm start # Production start
npm run worker # Start background job processor
# Testing
npm test # Run test suite
npm run test:watch # Watch mode for development
- Session lookups: Redis (sub-millisecond)
- Employee data: 1-hour cache TTL
- Report queries: 1-hour cache for expensive joins
- Real-time data: Redis Sets for O(1) membership tests
- Email notifications: Non-blocking queue processing
- Report generation: Async processing for large datasets
- Graceful shutdown: Proper cleanup of connections
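For the graceful-shutdown point, a minimal sketch (the variables server, redisClient, and dbPool are assumed to exist in server.js):

// server.js (sketch) - stop taking traffic, then close connections cleanly
process.on("SIGTERM", () => {
  server.close(async () => {      // finish in-flight requests, accept no new ones
    await redisClient.quit();     // close the Redis connection
    await dbPool.end();           // drain the MySQL connection pool
    process.exit(0);
  });
});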
After building this project, you'll understand:
- Service Layer Pattern
- Controller Pattern
- Middleware Pattern
- Dependency Injection
- Session management without database hits
- Caching strategies for performance
- Real-time data with Sets
- TTL (Time To Live) for automatic cleanup
- Asynchronous job processing
- Background email notifications
- Queue monitoring and failure handling
- Authentication middleware patterns
- Error handling and validation
- Database connection pooling
- Environment-based configuration
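As a combined example of connection pooling and environment-based configuration, a sketch of what src/config/database.js might look like (pool option values are illustrative):

// src/config/database.js (sketch) - one shared mysql2 pool for all services
const mysql = require("mysql2/promise");

const pool = mysql.createPool({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME,
  waitForConnections: true,
  connectionLimit: 10, // reuse a small, bounded set of connections
});

module.exports = pool;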
# Server Configuration
PORT=3000
NODE_ENV=production
# Database
DB_HOST=your-db-host
DB_USER=your-db-user
DB_PASSWORD=your-secure-password
DB_NAME=attendance_db
# Redis
REDIS_HOST=your-redis-host
REDIS_PASSWORD=your-redis-password
# Application
WORK_START_TIME=09:00
LATE_THRESHOLD_MINUTES=15
# 1. Copy NGINX configuration
sudo cp nginx.conf /etc/nginx/sites-available/attendance-system
sudo ln -s /etc/nginx/sites-available/attendance-system /etc/nginx/sites-enabled/
# 2. Test configuration
sudo nginx -t
# 3. Start multiple Node.js instances
npm start # Instance 1 (port 3000)
PORT=3001 npm start # Instance 2 (port 3001)
PORT=3002 npm start # Instance 3 (port 3002)
# 4. Start NGINX
sudo systemctl restart nginx
- Environment variables configured
- Database connections secured
- Redis authentication enabled
- Rate limiting configured (NGINX + Express)
- Health checks implemented
- NGINX reverse proxy configured
- SSL certificates installed
- Load balancing tested
- Logging configured
- Error monitoring setup
Imagine building an attendance system for a large office with 1000+ employees. Here's how our enterprise system components work together:
graph TB
Client[CLIENT<br/>Mobile/Web<br/>- Login<br/>- Check-in<br/>- Dashboard]
NGINX[NGINX<br/>Reverse Proxy<br/>- Load Balancing<br/>- SSL Termination<br/>- Rate Limiting]
subgraph API["NODE.js API (Express)"]
Routes[Routes<br/>API Endpoints]
Middleware[Middleware<br/>Authentication]
Controllers[Controllers<br/>Request Handlers]
Services[Services<br/>Business Logic]
end
subgraph Redis["REDIS (Caching Layer)"]
Sessions[Sessions<br/>session:token123]
RealTime[Real-time Data<br/>employees_in_office]
Cache[Cache Layer<br/>report:2025-08-15]
DupePrev[Duplicate Prevention<br/>checkin:user1:today]
end
subgraph Queue["BULL QUEUE (Job Processing)"]
JobQueue[Jobs Queue<br/>late-notification<br/>daily-report<br/>overtime-calc]
end
MySQL[(MySQL Database<br/>Permanent Storage<br/>- employees<br/>- attendance)]
Worker[Background Worker<br/>Job Processor<br/>1. Pick job from queue<br/>2. Send email notifications<br/>3. Log results<br/>4. Mark job complete]
Client -.HTTPS.-> NGINX
NGINX -.Load Balance.-> API
API -.Query.-> MySQL
API -.Cache/Session.-> Redis
API -.Queue Jobs.-> Queue
Queue -.Process.-> Worker
Worker -.Email.-> Client
Routes --> Middleware
Middleware --> Controllers
Controllers --> Services
graph TD
Start["Employee Check-In Request<br/>Time: 09:15 AM"]
subgraph RealTime ["REAL TIME PROCESSING"]
Validate["VALIDATION<br/>Authentication<br/>Request parsing<br/>Middleware checks"]
BusinessLogic["BUSINESS LOGIC<br/>Duplicate check<br/>Late detection<br/>Data persistence"]
Respond["API RESPONSE<br/>Success status<br/>Late indicator<br/>Timestamp"]
end
subgraph Parallel ["PARALLEL OPERATIONS (Simultaneous)"]
MySQL["MySQL Database<br/>PERMANENT STORAGE<br/><br/>INSERT attendance<br/>record<br/><br/>Status: Saved"]
Redis["Redis Cache<br/>FAST ACCESS<br/><br/>SET checkin cache<br/>UPDATE office presence<br/><br/>Status: Updated"]
Queue["Job Queue<br/>BACKGROUND TASKS<br/><br/>ADD notification job<br/>late-employee-alert<br/><br/>Status: Queued"]
end
subgraph Background ["BACKGROUND PROCESSING (Asynchronous)"]
WorkerPick["Job Processing<br/>Retrieve queued job<br/>Process employee data<br/>Prepare notification"]
SendEmail["Email Notification<br/>Recipient: Manager<br/>Subject: Late arrival<br/>Content: Employee details"]
Complete["Job Completion<br/>Mark as processed<br/>Log execution result<br/>Clean up resources"]
end
Start --> Validate
Validate --> BusinessLogic
BusinessLogic --> Respond
BusinessLogic --> MySQL
BusinessLogic --> Redis
BusinessLogic --> Queue
Queue --> WorkerPick
WorkerPick --> SendEmail
SendEmail --> Complete
graph LR
subgraph Simple["SIMPLE SYSTEM (Sequential Processing)"]
S1[Authentication<br/>50ms] --> S2[Database Save<br/>100ms]
S2 --> S3[Email Processing<br/>3000ms]
S3 --> S4[Office Status<br/>200ms]
S4 --> S5[User Response<br/>TOTAL: 3350ms]
end
subgraph Enterprise["ENTERPRISE SYSTEM (Parallel + Caching)"]
E1[Authentication<br/>50ms] --> E2[Database Save<br/>50ms]
E2 --> E3[Cache Update<br/>1ms]
E3 --> E4[Queue Job<br/>1ms]
E4 --> E5[User Response<br/>TOTAL: 102ms]
E4 -.Background Process.-> BG[Email Processing<br/>Non-blocking operation]
end
Simple -.Performance Comparison.-> Enterprise
upstream attendance_backend {
server localhost:3000;
server localhost:3001;
server localhost:3002;
}
location /api/ {
limit_req zone=api burst=20;
proxy_pass http://attendance_backend;
}
Function:
- Load Balancing (distribute requests across multiple Node.js instances)
- SSL Termination (handle HTTPS encryption/decryption)
- Rate Limiting (additional DDoS protection)
- Static File Serving (serve frontend assets efficiently)
- Security Headers (additional HTTP security layer)
Analogy: Like the main security desk at a corporate building - checks everyone coming in, directs them to the right floor/office, and handles multiple elevators for efficiency
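The limit_req zone=api directive above needs a matching limit_req_zone definition in the http context. A hedged sketch of how the fuller nginx.conf could fit together (certificate paths and rates are illustrative, not the project's exact config):

# nginx.conf (sketch) - rate-limit zone, SSL termination, and the proxy block
http {
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

    upstream attendance_backend {
        server localhost:3000;
        server localhost:3001;
        server localhost:3002;
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/ssl/certs/attendance.crt;      # illustrative path
        ssl_certificate_key /etc/ssl/private/attendance.key;    # illustrative path

        location /api/ {
            limit_req zone=api burst=20;
            proxy_pass http://attendance_backend;
        }
    }
}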
employees: Employee master data (name, email, department)
attendance: Daily attendance records (permanent history)
Function:
- Store permanent data (survives system restarts)
- Master source of truth for all operations
- Historical records for compliance & reports
Analogy: Like the company's main filing cabinet with all official documents
session:token123 → Currently logged in users
checkin:user1:2025-08-15 → Today's check-in cache
employees_in_office → Who's in office right now
report:2025-08-01:2025-08-31 → Cached monthly reports
Function:
- Session storage (who's logged in)
- Prevent duplicates (no double check-ins)
- Real-time tracking (who's in office now)
- Caching (frequently requested reports)
Analogy: Like the office whiteboard for quick info that changes often
emailQueue.add("late-notification", {
employeeName: "Alice",
minutesLate: 15,
});
Function:
- Background processing (send emails without making user wait)
- Non-blocking (user gets instant response)
- Reliable (retry if email fails)
Analogy: Like a task box for the secretary - handle tasks later, don't make visitors wait
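A sketch of how the retry behaviour could be configured when the job is added, using Bull's attempts and backoff options (the Redis URL and the retry values are illustrative):

// Reliability (sketch): retry a failed email up to 3 times with exponential backoff
const Queue = require("bull");
const emailQueue = new Queue("emails", process.env.REDIS_URL || "redis://127.0.0.1:6379");

emailQueue.add(
  "late-notification",
  { employeeName: "Alice", minutesLate: 15 },
  { attempts: 3, backoff: { type: "exponential", delay: 5000 }, removeOnComplete: true }
);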
// Background worker processes emails
emailQueue.process("late-notification", async (job) => {
console.log(`Send email: ${job.data.employeeName} was late`);
});
Function:
- Process background jobs (handle queued tasks)
- Email notifications (notify managers about late employees)
- Separate process (runs independently from main API)
Analogy: Like a dedicated staff member who only handles emails and notifications
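A sketch of how the worker could also surface job outcomes using Bull's event listeners; the mailer call is a placeholder, and emailQueue is the queue from the sketch above.

// email-worker.js (sketch) - process jobs and monitor completions/failures
emailQueue.process("late-notification", async (job) => {
  await sendManagerEmail(job.data); // placeholder for the real mailer
});

emailQueue.on("completed", (job) => {
  console.log(`Job ${job.id} done: manager notified about ${job.data.employeeName}`);
});

emailQueue.on("failed", (job, err) => {
  console.error(`Job ${job.id} failed (attempt ${job.attemptsMade}): ${err.message}`);
});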
graph TB
subgraph Simple["SIMPLE SYSTEM (Sequential Processing)"]
S1[User Request<br/>Check-in at 9:15 AM] --> S2[Authentication<br/>50ms]
S2 --> S3[Database Query<br/>Check duplicates<br/>200ms]
S3 --> S4[Save Record<br/>MySQL INSERT<br/>100ms]
S4 --> S5[Send Email<br/>Blocking operation<br/>3000ms]
S5 --> S6[Office Status<br/>Complex DB query<br/>500ms]
S6 --> S7[User Response<br/>TOTAL: 3850ms]
S8[Problems:<br/>- Single server bottleneck<br/>- User waits for email<br/>- Crashes at 100+ users<br/>- No load balancing]
end
subgraph Enterprise["ENTERPRISE SYSTEM (Parallel + Optimized)"]
E1[User Request<br/>Check-in at 9:15 AM] --> E2[NGINX Load Balancer<br/>Route to available instance<br/>2ms]
E2 --> E3[Authentication<br/>Redis session lookup<br/>1ms]
E3 --> E4[Duplicate Check<br/>Redis cache hit<br/>1ms]
E4 --> E5[Save Record<br/>MySQL with connection pool<br/>50ms]
E5 --> E6[Update Cache<br/>Redis real-time data<br/>1ms]
E6 --> E7[Queue Email Job<br/>Background processing<br/>1ms]
E7 --> E8[User Response<br/>TOTAL: 56ms]
E7 -.Background Process.-> BG[Email Worker<br/>Processes jobs async<br/>Non-blocking]
E9[Benefits:<br/>- Multiple server instances<br/>- Background processing<br/>- Handles 1000+ users<br/>- Sub-second response]
end
Simple -.68x Faster.-> Enterprise
Scenario | Simple System | Enterprise System (with NGINX) |
---|---|---|
1 user check-in | 3.8 seconds | 55ms |
100 users simultaneously | System crashes | Handles smoothly |
1000+ concurrent users | Server overload | Load balanced across instances |
"Who's in office?" | 500ms DB query | 1ms Redis lookup |
Monthly reports | 2-5 seconds | Instant (cached) |
Email notifications | Blocks user | Background process |
SSL/HTTPS | Manual setup | NGINX handles automatically |
# Check NGINX status and active connections
sudo nginx -t # Test configuration
sudo systemctl status nginx # Service status
# Monitor access logs (see load balancing in action)
sudo tail -f /var/log/nginx/attendance-access.log
# You'll see requests distributed across instances:
# 127.0.0.1:3000 GET /api/attendance/checkin
# 127.0.0.1:3001 GET /api/attendance/current
# 127.0.0.1:3002 POST /api/auth/login
# Connect to Redis and explore
redis-cli
KEYS * # See all cached data
SMEMBERS employees_in_office # Who's currently in office
GET session:your_token_here # Your login session data
# Watch worker terminal when someone checks in late
npm run worker
# You'll see:
# "Processing late notification for Alice (15 minutes late)"
# "EMAIL SENT: Alice was 15 minutes late at 2025-08-15T09:15:00Z"
-- Heavy database query (slow)
SELECT e.name FROM employees e
JOIN attendance a ON e.id = a.employee_id
WHERE a.date = CURDATE() AND a.check_out_time IS NULL;
-- vs Redis lookup (lightning fast)
-- redis> SMEMBERS employees_in_office
This enterprise architecture shows you understand how to build scalable, production-ready systems that can handle real-world load!
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.