tedkimdev/poison-queue-cli
Poison Queue CLI

A command-line interface tool built in Rust for managing poison messages in Apache Kafka. This tool helps analyze and manage problematic messages that repeatedly fail processing, ensuring system reliability and preventing message processing bottlenecks.

Dead Letter Queue (DLQ) Message Structure

Each message sent to the DLQ is a JSON object with all operational context stored in the metadata field. This allows consumers or inspectors to handle messages without relying on Kafka headers.

Example DLQ Message

{
  "correlationId": "dafc5816-268a-442c-9981-1937ed1a4b9f",
  "id": "200103dc-25c9-4678-ae80-f3f1a5833366",
  "metadata": {
    "failureReason": "Max retries",
    "movedToDlqAt": "2025-10-10T10:48:58Z",
    "originalTopic": "user-events",
    "retryCount": 5,
    "source": "test"
  },
  "payload": {
    "action": "test",
    "data": {
      "email": "missing_email",
      "name": "name"
    },
    "userId": "user-123"
  }
}
  • Metadata contains the operational info: retry counts, failure reasons, source, original topic, and DLQ timestamps (the correlation ID and message ID sit at the top level alongside it).
  • Payload contains only business data.
  • Kafka headers are optional: since all necessary info is in metadata, headers do not need to be inspected for DLQ management.
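The message shape above can be sketched as plain Rust structs. Field names mirror the JSON example; the types and the opaque `payload` string are illustrative assumptions, not the project's actual definitions.

```rust
// Sketch of the DLQ message shape described above. Field names mirror
// the JSON example; the actual types used by poison-queue-cli may differ.

struct DlqMetadata {
    failure_reason: String,  // "failureReason": why processing gave up
    moved_to_dlq_at: String, // "movedToDlqAt": RFC 3339 timestamp
    original_topic: String,  // "originalTopic": where the message came from
    retry_count: u32,        // "retryCount": attempts before giving up
    source: String,          // "source": producing service
}

struct DlqMessage {
    correlation_id: String,  // ties the failure back to the original request
    id: String,              // unique id of this DLQ record
    metadata: DlqMetadata,   // all operational context lives here
    payload: String,         // business data, kept opaque here for brevity
}

fn sample_message() -> DlqMessage {
    DlqMessage {
        correlation_id: "dafc5816-268a-442c-9981-1937ed1a4b9f".into(),
        id: "200103dc-25c9-4678-ae80-f3f1a5833366".into(),
        metadata: DlqMetadata {
            failure_reason: "Max retries".into(),
            moved_to_dlq_at: "2025-10-10T10:48:58Z".into(),
            original_topic: "user-events".into(),
            retry_count: 5,
            source: "test".into(),
        },
        payload: r#"{"action":"test","userId":"user-123"}"#.into(),
    }
}

fn main() {
    let msg = sample_message();
    // Everything needed to route or triage the message lives in
    // metadata, so no Kafka headers have to be consulted.
    println!(
        "{} failed after {} retries: {}",
        msg.metadata.original_topic,
        msg.metadata.retry_count,
        msg.metadata.failure_reason
    );
}
```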

DLQ Message Handling

  • Messages are processed in offset order (oldest first).
  • After processing a message (archiving/reprocessing), the consumer commits its offset.
  • Committing ensures all earlier messages in the same partition are considered handled.
  • This guarantees no messages are skipped and avoids inconsistencies when archiving or reprocessing.
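The commit semantics above can be simulated without a broker. This minimal sketch models a single partition where committing an offset marks that offset and every earlier one as handled; it is an illustration of the idea, not the project's actual Kafka consumer.

```rust
// Single-partition simulation of the commit semantics described above:
// messages are handled oldest-first, and committing an offset marks
// that offset and every earlier one as handled.

struct Partition {
    committed: Option<u64>, // highest committed offset, if any
}

impl Partition {
    fn new() -> Self {
        Partition { committed: None }
    }

    // Commit after processing a message (archiving or reprocessing).
    fn commit(&mut self, offset: u64) {
        self.committed = Some(offset);
    }

    // A message counts as handled once any later-or-equal offset has
    // been committed: Kafka commits are per-partition watermarks,
    // not per-message acknowledgements.
    fn is_handled(&self, offset: u64) -> bool {
        matches!(self.committed, Some(c) if offset <= c)
    }
}

fn main() {
    let mut p = Partition::new();

    // Process offsets 0..=2 in order, committing after each one.
    for offset in 0..=2u64 {
        // ... archive or reprocess the message at `offset` here ...
        p.commit(offset);
    }

    assert!(p.is_handled(0));
    assert!(p.is_handled(2));
    assert!(!p.is_handled(3)); // offset 3 has not been processed yet
    println!("committed up to {:?}", p.committed);
}
```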

Optional Consideration: Database Tracking for DLQ

Kafka commits offsets as per-partition watermarks: committing the offset of a message in the middle of a partition implicitly marks every earlier message in that partition as consumed, even if those messages were never actually handled.

One way to handle messages individually is to maintain a database table that tracks each DLQ message’s status, for example:

  • pending — message not yet processed
  • archived — message safely stored in archive topic
  • reprocessed — message sent back to the original topic
  • discarded — message intentionally removed

This approach allows selective processing, auditing, and safe reprocessing.

Note: This is just a conceptual approach; it is not implemented in this project.
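To make the idea concrete, here is a minimal in-memory sketch of such a status table, using a `HashMap` in place of a real database. The type and method names are hypothetical; nothing like this exists in the project.

```rust
use std::collections::HashMap;

// Conceptual sketch of the per-message status table described above.
// In practice the map would be a database table keyed by the DLQ
// message id; this in-memory version only illustrates the idea.

#[derive(Debug, Clone, Copy, PartialEq)]
enum DlqStatus {
    Pending,     // not yet processed
    Archived,    // safely stored in the archive topic
    Reprocessed, // sent back to the original topic
    Discarded,   // intentionally removed
}

struct DlqTracker {
    statuses: HashMap<String, DlqStatus>,
}

impl DlqTracker {
    fn new() -> Self {
        DlqTracker { statuses: HashMap::new() }
    }

    // Register a newly seen DLQ message as pending.
    fn record(&mut self, id: &str) {
        self.statuses.entry(id.to_string()).or_insert(DlqStatus::Pending);
    }

    // Update a message's status after handling it individually,
    // independent of its position in the partition.
    fn set(&mut self, id: &str, status: DlqStatus) {
        self.statuses.insert(id.to_string(), status);
    }

    // Selective processing: list ids still awaiting a decision.
    fn pending(&self) -> Vec<String> {
        self.statuses
            .iter()
            .filter(|(_, s)| **s == DlqStatus::Pending)
            .map(|(id, _)| id.clone())
            .collect()
    }
}

fn main() {
    let mut tracker = DlqTracker::new();
    tracker.record("msg-1");
    tracker.record("msg-2");
    tracker.set("msg-1", DlqStatus::Archived);

    // Only msg-2 is still pending, even though msg-1 came first:
    // per-message status makes out-of-order handling safe to audit.
    println!("pending: {:?}", tracker.pending());
}
```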

Running Locally

1. Start Kafka Infrastructure

docker-compose up -d

2. Create test topics

docker-compose exec kafka kafka-topics --bootstrap-server localhost:9092 --create --topic user-events --partitions 1 --replication-factor 1 2>/dev/null

docker-compose exec kafka kafka-topics --bootstrap-server localhost:9092 --create --topic dlq-user-events --partitions 1 --replication-factor 1 2>/dev/null

2.1 Create the archive topic

docker-compose exec kafka kafka-topics --bootstrap-server localhost:9092 --create --topic dlq-archive --partitions 1 --replication-factor 1 2>/dev/null

3. Send sample normal messages

./send-message.sh user-events normal

4. Send a sample poison message

./send-message.sh user-events poison

5. Send a sample DLQ message

./send-message.sh dlq-user-events dlq

6. List topics

docker-compose exec kafka kafka-topics --bootstrap-server localhost:9092 --list

7. Show messages in dlq-user-events

docker-compose exec kafka kafka-console-consumer --bootstrap-server localhost:9092 --topic dlq-user-events --from-beginning --max-messages 10

docker-compose exec kafka kafka-console-consumer --bootstrap-server localhost:9092 --topic dlq-user-events --from-beginning | jq .
