
Commit d50abca

Andrew Schofield authored and committed
Add config for distributed mode and secret config
1 parent 659cc98 commit d50abca

File tree (4 files changed: +52 additions, -9 deletions)

- Dockerfile
- README.md
- config/mq-sink.json
- kafka-connect.yaml

Dockerfile

Lines changed: 3 additions & 2 deletions
@@ -8,9 +8,10 @@ RUN addgroup --gid 5000 --system esgroup && \
 
 COPY --chown=esuser:esgroup --from=builder /opt/kafka/bin/ /opt/kafka/bin/
 COPY --chown=esuser:esgroup --from=builder /opt/kafka/libs/ /opt/kafka/libs/
-COPY --chown=esuser:esgroup --from=builder /opt/kafka/config/ /opt/kafka/config/
+COPY --chown=esuser:esgroup --from=builder /opt/kafka/config/connect-distributed.properties /opt/kafka/config/
+COPY --chown=esuser:esgroup --from=builder /opt/kafka/config/connect-log4j.properties /opt/kafka/config/
 RUN mkdir /opt/kafka/logs && chown esuser:esgroup /opt/kafka/logs
-COPY --chown=esuser:esgroup target/kafka-connect-mq-sink-1.1.0-jar-with-dependencies.jar /opt/kafka/libs/
+COPY --chown=esuser:esgroup target/kafka-connect-mq-sink-1.1.1-jar-with-dependencies.jar /opt/kafka/libs/
 
 WORKDIR /opt/kafka
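As a usage note, the commands below are a minimal sketch of how the updated image is typically built and run, combining the build/run steps and the custom-config volume mount described in the README changes in this commit. The local `./config` directory is an assumption about your setup, and the image tag simply reuses the one from the README steps.

``` shell
# Build the connector JAR and the Kafka Connect image (commands from the README in this commit).
mvn clean package
docker build -t kafkaconnect-with-mq-sink:0.0.1 .

# Run with the properties files baked into the image...
docker run -p 8083:8083 kafkaconnect-with-mq-sink:0.0.1

# ...or mount a local ./config folder containing customised
# connect-distributed.properties and connect-log4j.properties over /opt/kafka/config.
docker run -v $(pwd)/config:/opt/kafka/config -p 8083:8083 kafkaconnect-with-mq-sink:0.0.1
```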

README.md

Lines changed: 29 additions & 5 deletions
@@ -5,6 +5,7 @@ The connector is supplied as source code which you can easily build into a JAR f
 
 **Note**: A source connector for IBM MQ is also available on [GitHub](https://github.com/ibm-messaging/kafka-connect-mq-source).
 
+
 ## Contents
 
 - [Building the connector](#building-the-connector)
@@ -19,6 +20,7 @@ The connector is supplied as source code which you can easily build into a JAR f
 - [Issues and contributions](#issues-and-contributions)
 - [License](#license)
 
+
 ## Building the connector
 To build the connector, you must have the following installed:
 * [git](https://git-scm.com/)
@@ -57,6 +59,8 @@ To run the connector, you must have:
 
 The connector can be run in a Kafka Connect worker in either standalone (single process) or distributed mode. It's a good idea to start in standalone mode.
 
+### Running in standalone mode
+
 You need two configuration files, one for the configuration that applies to all of the connectors such as the Kafka bootstrap servers, and another for the configuration specific to the MQ sink connector such as the connection information for your queue manager. For the former, the Kafka distribution includes a file called `connect-standalone.properties` that you can use as a starting point. For the latter, you can use `config/mq-sink.properties` in this repository.
 
 The connector connects to MQ using either a client or a bindings connection. For a client connection, you must provide the name of the queue manager, the connection name (one or more host/port pairs) and the channel name. In addition, you can provide a user name and password if the queue manager is configured to require them for client connections. If you look at the supplied `config/mq-sink.properties`, you'll see how to specify the configuration required. For a bindings connection, you must provide the name of the queue manager and also run the Kafka Connect worker on the same system as the queue manager.
@@ -67,26 +71,42 @@ To run the connector in standalone mode from the directory into which you instal
 bin/connect-standalone.sh connect-standalone.properties mq-sink.properties
 ```
 
+### Running in distributed mode
+
+You need an instance of Kafka Connect running in distributed mode. The Kafka distribution includes a file called `connect-distributed.properties` that you can use as a starting point, or follow [Running with Docker](#running-with-docker) or [Deploying to Kubernetes](#deploying-to-kubernetes).
+
+To start the MQ connector, you can use `config/mq-sink.json` in this repository after replacing all placeholders, using a command like this:
+
+``` shell
+curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
+  --data "@./config/mq-sink.json"
+```
+
+
 ## Running with Docker
 
-This repository includes a Dockerfile to run Kafka Connect in distributed mode. It also adds in the MQ Sink Connector as an available connector plugin. It uses the default connect-distributed.properties file, to provide a custom one add a `COPY` line to the Dockerfile with your customised connect-distributed.properties file.
+This repository includes a Dockerfile to run Kafka Connect in distributed mode. It also adds in the MQ Sink Connector as an available connector plugin. It uses the default connect-distributed.properties and connect-log4j.properties files.
 
 1. `mvn clean package`
 1. `docker build -t kafkaconnect-with-mq-sink:0.0.1 .`
 1. `docker run -p 8083:8083 kafkaconnect-with-mq-sink:0.0.1`
 
+**NOTE:** To provide custom properties files, create a folder called `config` containing the `connect-distributed.properties` and `connect-log4j.properties` files and use a Docker volume to make them available when running the container:
+`docker run -v $(pwd)/config:/opt/kafka/config -p 8083:8083 kafkaconnect-with-mq-sink:0.0.1`
+
+
 ## Deploying to Kubernetes
 
 This repository includes a Kubernetes yaml file called `kafka-connect.yaml`. This will create a deployment to run Kafka Connect in distributed mode and a service to access the deployment.
 
-The deployment assumes the existence of two ConfigMaps; `connect-distributed-config` and `connect-log4j-config`. These can be created using the default files in your Kafka install, however it is easier to edit them later if comment and whitespace is trimmed before creation.
+The deployment assumes the existence of a Secret called `connect-distributed-config` and a ConfigMap called `connect-log4j-config`. These can be created using the default files in your Kafka install; however, it is easier to edit them later if comments and whitespace are trimmed before creation.
 
-### Creating Kafka Connect configuration ConfigMaps
+### Creating Kafka Connect configuration Secret and ConfigMap
 
-Create ConfigMap for Kafka Connect configuration:
+Create Secret for Kafka Connect configuration:
 1. `cp kafka/config/connect-distributed.properties connect-distributed.properties.orig`
 1. `sed '/^#/d;/^[[:space:]]*$/d' < connect-distributed.properties.orig > connect-distributed.properties`
-1. `kubectl -n <namespace> create configmap connect-distributed-config --from-file=connect-distributed.properties`
+1. `kubectl -n <namespace> create secret generic connect-distributed-config --from-file=connect-distributed.properties`
 
 Create ConfigMap for Kafka Connect Log4j configuration:
 1. `cp kafka/config/connect-log4j.properties connect-log4j.properties.orig`
@@ -101,6 +121,7 @@ Create ConfigMap for Kafka Connect Log4j configuration:
 1. `kubectl -n <namespace> apply -f kafka-connect.yaml`
 1. `curl <serviceIP>:<servicePort>/connector-plugins` to see the MQ Sink connector available to use
 
+
 ## Data formats
 Kafka Connect is very flexible but it's important to understand the way that it processes messages to end up with a reliable system. When the connector encounters a message that it cannot process, it stops rather than throwing the message away. Therefore, you need to make sure that the configuration you use can handle the messages the connector will process.
 
@@ -267,15 +288,18 @@ mq.password=${file:mq-secret.properties:secret-key}
 
 To use a file for the `mq.password` in Kubernetes, you create a Secret using the file as described in [the Kubernetes docs](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
+
 ## Troubleshooting
 
 ### Unable to connect to Kafka
 
 You may receive an `org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed` error when trying to run the MQ sink connector using SSL to connect to your Kafka cluster. In the case that the error is caused by the following exception: `Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching XXXXX found.`, Java may be replacing the IP address of your cluster with the corresponding hostname in your `/etc/hosts` file. For example, to push Docker images to a custom Docker repository, you may add an entry in this file which corresponds to the IP of your repository e.g. `123.456.78.90 mycluster.icp`. To fix this, you can comment out this line in your `/etc/hosts` file.
 
+
 ## Support
 A commercially supported version of this connector is available for customers with a support entitlement for [IBM Event Streams](https://www.ibm.com/cloud/event-streams).
 
+
 ## Issues and contributions
 For issues relating specifically to this connector, please use the [GitHub issue tracker](https://github.com/ibm-messaging/kafka-connect-mq-sink/issues). If you do submit a Pull Request related to this connector, please indicate in the Pull Request that you accept and agree to be bound by the terms of the [IBM Contributor License Agreement](CLA.md).
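Editor's note on the new "Running in distributed mode" section: the README changes stop at the POST request, and a quick way to confirm the worker accepted the connector is its REST API. The sketch below assumes the worker is reachable on localhost:8083 and that the connector was registered under the name `mq-sink` from `config/mq-sink.json`; the status endpoint is the standard Kafka Connect REST API, not something added by this commit.

``` shell
# List the available connector plugins and the registered connectors.
curl http://localhost:8083/connector-plugins
curl http://localhost:8083/connectors

# Check the state of the mq-sink connector and its task.
curl http://localhost:8083/connectors/mq-sink/status
```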

config/mq-sink.json

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@
+{
+    "name": "mq-sink",
+    "config":
+    {
+        "connector.class": "com.ibm.eventstreams.connect.mqsink.MQSinkConnector",
+        "tasks.max": "1",
+        "topics": "<TOPIC>",
+
+        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
+        "value.converter": "org.apache.kafka.connect.storage.StringConverter",
+
+        "mq.queue.manager": "<QUEUE_MANAGER>",
+        "mq.connection.name.list": "<CONNECTION_NAME_LIST>",
+        "mq.channel.name": "<CHANNEL_NAME>",
+        "mq.queue": "<QUEUE>",
+        "mq.message.builder": "com.ibm.eventstreams.connect.mqsink.builders.DefaultMessageBuilder"
+    }
+}
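The placeholders in `config/mq-sink.json` have to be replaced before the file is POSTed to the worker. A minimal sketch is shown below; the values (`TSINK`, `QM1`, `localhost(1414)`, `DEV.APP.SVRCONN`, `DEV.QUEUE.1`) are hypothetical examples, not part of this commit, so substitute your own queue manager details.

``` shell
# Fill in the placeholders with example values (all hypothetical) and submit the connector.
sed -e 's/<TOPIC>/TSINK/' \
    -e 's/<QUEUE_MANAGER>/QM1/' \
    -e 's/<CONNECTION_NAME_LIST>/localhost(1414)/' \
    -e 's/<CHANNEL_NAME>/DEV.APP.SVRCONN/' \
    -e 's/<QUEUE>/DEV.QUEUE.1/' \
    config/mq-sink.json > /tmp/mq-sink.json

curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors \
  --data "@/tmp/mq-sink.json"
```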

kafka-connect.yaml

Lines changed: 2 additions & 2 deletions
@@ -54,8 +54,8 @@ spec:
           subPath: connect-log4j.properties
       volumes:
       - name: connect-config
-        configMap:
-          name: connect-distributed-config
+        secret:
+          secretName: connect-distributed-config
       - name: connect-log4j
         configMap:
           name: connect-log4j-config
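With the `connect-config` volume now backed by a Secret, the Kubernetes steps from the README in this commit fit together roughly as follows. This is a sketch, not part of the commit: `<namespace>`, `<serviceIP>` and `<servicePort>` are placeholders, the paths assume a Kafka install under `kafka/`, and the ConfigMap creation command for Log4j simply mirrors the Secret steps.

``` shell
# Create the Secret holding the worker configuration (comments/blank lines stripped first).
sed '/^#/d;/^[[:space:]]*$/d' < kafka/config/connect-distributed.properties > connect-distributed.properties
kubectl -n <namespace> create secret generic connect-distributed-config --from-file=connect-distributed.properties

# Create the ConfigMap holding the Log4j configuration.
sed '/^#/d;/^[[:space:]]*$/d' < kafka/config/connect-log4j.properties > connect-log4j.properties
kubectl -n <namespace> create configmap connect-log4j-config --from-file=connect-log4j.properties

# Deploy Kafka Connect and check that the MQ sink plugin is available.
kubectl -n <namespace> apply -f kafka-connect.yaml
curl <serviceIP>:<servicePort>/connector-plugins
```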
