This repository was archived by the owner on Apr 16, 2024. It is now read-only.

Commit f882ccb

[DK-2795] Tests (#8)
Squashed commit messages:

- refactor to kafkaclient interface
- test add new topic
- tests for change replication factor
- Tests for partitions
- generic config tests
- the rest of configs tests
- rename
- Move test data to separate file
- test address
- test adding/removing config options
- scaffold e2e tests
- increase timeout
- don't run controller tests twice in e2e
- fix kustomization
- offset
- wait logs
- ns
- order
- 1 rec
- setup kafka
- Remove tests that are too flaky in CI environments
- 5.5.0
- 0.5.0
- add create-topic test
- describe
- more logs
- lowercase
- fix kubebuilder annotations
- change svc
- kafka-client
- assert
- fix
- fix2
- remove echo
- wait for kafka pods to be up
- fix
- add partitions
- fix
- 5m
- requeue on conn error
- fix
- assert
- assert
- echo
- remove tail
- log
- fix log
- remove log
- test change cleanup policy
- fix
- test delete topic
- catch error code
- fix
- Dk 2795 testcontainers (#7)
- replace mocks for kafka server with testcontainers
- remove unneeded ns
- Reorder steps
- revert
- cleanup
- go 16
- fixes (#9)
- try again without waiting for pods to be up
- disable uneeded kafka components
- fix break line
- longer timeout
- two timeouts
- move e2e tests
- fix paths
- path
- bash
- remove tty
- remove
- various
- go version
- go 17
- remove branch
- 17
1 parent: 6b34b8e · commit: f882ccb

38 files changed: +2406 additions, −171 deletions

.github/workflows/e2e.yaml

Lines changed: 88 additions & 0 deletions
name: e2e

on:
  pull_request:
  push:
    branches:
      - main

jobs:
  kind:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Restore Go cache
        uses: actions/cache@v1
        with:
          path: ~/go/pkg/mod
          key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
          restore-keys: |
            ${{ runner.os }}-go-
      - name: Setup Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.17.x
      - name: Setup Kubernetes
        uses: engineerd/setup-kind@v0.5.0
        with:
          version: v0.11.1
          image: kindest/node:v1.21.1@sha256:69860bda5563ac81e3c0057d654b5253219618a22ec3a346306239bba8cfa1a6
      - name: Setup Kustomize
        uses: fluxcd/pkg/actions/kustomize@main
      - name: Setup envtest
        uses: fluxcd/pkg/actions/envtest@main
        with:
          version: "1.19.2"
      - name: Setup Helm
        uses: fluxcd/pkg/actions/helm@main
      - name: Run controller tests
        run: make test
      - name: Check if working tree is dirty
        run: |
          go version
          if [[ $(git diff --stat) != '' ]]; then
            git --no-pager diff
            echo 'run make test and commit changes'
            exit 1
          fi
      - name: Build container image
        run: make docker-build-without-tests IMG=test/k8skafka-controller:latest BUILD_PLATFORMS=linux/amd64 BUILD_ARGS=--load
      - name: Load test image
        run: kind load docker-image test/k8skafka-controller:latest
      - name: Deploy controller
        run: make deploy IMG=test/k8skafka-controller:latest
      - name: Setup Kafka
        env:
          KAFKA_VERSION: ${{ '0.5.0' }}
        run: |
          kubectl create ns kafka
          helm repo add confluentinc https://confluentinc.github.io/cp-helm-charts/
          helm upgrade --wait -i kafka confluentinc/cp-helm-charts \
            --version $KAFKA_VERSION \
            --namespace kafka \
            --set cp-schema-registry.enabled=false \
            --set cp-kafka-rest.enabled=false \
            --set cp-kafka-connect.enabled=false \
            --set cp-ksql-server.enabled=false \
            --set cp-control-center.enabled=false
      - name: Setup Kafka client
        run: |
          kubectl -n kafka apply -f ./config/testdata/test-kafka-client.yaml
          kubectl -n kafka wait --for=condition=ready pod -l app=kafka-client
      - name: Run Kafka e2e tests
        run: ./scripts/tests/e2e/test_suite.sh
        shell: bash
      - name: Logs
        run: |
          kubectl -n k8skafka-system wait --for=condition=ready pod -l app=k8skafka-controller && kubectl -n k8skafka-system logs deploy/k8skafka-controller
      - name: Debug failure
        if: failure()
        run: |
          kubectl -n kube-system describe pods
          kubectl -n k8skafka-system describe pods
          kubectl -n k8skafka-system get kafkatopic -oyaml
          kubectl -n k8skafka-system describe kafkatopic
          kubectl -n k8skafka-system get all
          kubectl -n k8skafka-system logs deploy/k8skafka-controller
          kubectl -n kafka get all

Dockerfile

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 # Build the manager binary
-FROM golang:1.15 as builder
+FROM golang:1.17 as builder
 
 WORKDIR /workspace
 # Copy the Go Modules manifests

Jenkinsfile

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ podTemplate(label: 'k8skafka-controller',
     containers: [
         containerTemplate(
             name: 'golang',
-            image: 'bitnami/golang:1.15',
+            image: 'bitnami/golang:1.16',
             ttyEnabled: true
         ),
         containerTemplate(

Makefile

Lines changed: 4 additions & 1 deletion
@@ -16,7 +16,7 @@ all: test manager
 
 # Run tests
 test: generate fmt vet manifests
-	go test ./... -coverprofile cover.out
+	go test ./... -test.v -coverprofile cover.out
 
 # Build manager binary
 manager: generate fmt vet
@@ -59,6 +59,9 @@ generate: controller-gen
 docker-build: test
 	docker build . -t ${IMG}
 
+docker-build-without-tests: generate fmt vet manifests
+	docker build . -t ${IMG}
+
 # Push the docker image
 docker-push:
 	docker push ${IMG}

api/v1beta1/kafkatopic_types.go

Lines changed: 20 additions & 14 deletions
@@ -18,6 +18,7 @@ package v1beta1
 
 import (
 	apimeta "k8s.io/apimachinery/pkg/api/meta"
+	"k8s.io/apimachinery/pkg/api/resource"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 )
 
@@ -58,13 +59,13 @@ type KafkaTopicConfig struct {
 	// The default policy ("delete") will discard old segments when their retention time or size limit has been reached.
 	// The "compact" setting will enable log compaction on the topic.
 	// +optional
-	CleanupPolicy *CleanupPolicy `json:"cleanupPolicy,omitempty"`
+	CleanupPolicy *string `json:"cleanupPolicy,omitempty"`
 
 	// Final compression type for a given topic.
 	// Supported are standard compression codecs: 'gzip', 'snappy', 'lz4', 'zstd').
 	// It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.
 	// +optional
-	CompressionType *CompressionType `json:"compressionType,omitempty"`
+	CompressionType *string `json:"compressionType,omitempty"`
 
 	// The amount of time to retain delete tombstone markers for log compacted topics. Specified in milliseconds.
 	// This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0
@@ -134,7 +135,7 @@ type KafkaTopicConfig struct {
 	// Define whether the timestamp in the message is message create time or log append time.
 	// The value should be either `CreateTime` or `LogAppendTime`
 	// +optional
-	MessageTimestampType *MessageTimestampType `json:"messageTimestampType,omitempty"`
+	MessageTimestampType *string `json:"messageTimestampType,omitempty"`
 
 	// This configuration controls how frequently the log compactor will attempt to clean the log (assuming LogCompaction is enabled).
 	// By default we will avoid cleaning a log where more than 50% of the log has been compacted.
@@ -144,12 +145,16 @@ type KafkaTopicConfig struct {
 	// (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the MinCompactionLagMs duration,
 	// or (ii) if the log has had dirty (uncompacted) records for at most the MaxCompactionLagMs period.
 	// +optional
-	MinCleanableDirtyRatio *int64 `json:"minCleanableDirtyRatio,omitempty"`
+	MinCleanableDirtyRatio *resource.Quantity `json:"minCleanableDirtyRatio,omitempty"`
 
 	// The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.
 	// +optional
 	MinCompactionLagMs *int64 `json:"minCompactionLagMs,omitempty"`
 
+	// The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.
+	// +optional
+	MaxCompactionLagMs *int64 `json:"maxCompactionLagMs,omitempty"`
+
 	// When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful.
 	// If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).
 	// When used together, MinInsyncReplicas and acks allow you to enforce greater durability guarantees.
@@ -194,16 +199,12 @@ type KafkaTopicConfig struct {
 	UncleanLeaderElectionEnable *bool `json:"uncleanLeaderElectionEnable,omitempty"`
 }
 
-type CleanupPolicy string
-
 const (
 	CleanupPolicyDelete        = "delete"
 	CleanupPolicyCompact       = "compact"
 	CleanupPolicyDeleteCompact = "delete,compact"
 )
 
-type CompressionType string
-
 const (
 	CompressionTypeGZIP   = "gzip"
 	CompressionTypeSnappy = "snappy"
@@ -213,8 +214,6 @@ const (
 	CompressionTypeProducer = "producer"
 )
 
-type MessageTimestampType string
-
 const (
 	MessageTimestampTypeCreateTime    = "CreateTime"
 	MessageTimestampTypeLogAppendTime = "LogAppendTime"
@@ -319,14 +318,14 @@ func (in *KafkaTopic) GetReplicationFactor() int64 {
 	return *in.Spec.ReplicationFactor
 }
 
-func (in *KafkaTopic) GetCleanupPolicy() *CleanupPolicy {
+func (in *KafkaTopic) GetCleanupPolicy() *string {
 	if in.Spec.KafkaTopicConfig == nil {
 		return nil
 	}
 	return in.Spec.KafkaTopicConfig.CleanupPolicy
 }
 
-func (in *KafkaTopic) GetCompressionType() *CompressionType {
+func (in *KafkaTopic) GetCompressionType() *string {
 	if in.Spec.KafkaTopicConfig == nil {
 		return nil
 	}
@@ -410,14 +409,14 @@ func (in *KafkaTopic) GetMessageTimestampDifferenceMaxMs() *int64 {
 	return in.Spec.KafkaTopicConfig.MessageTimestampDifferenceMaxMs
 }
 
-func (in *KafkaTopic) GetMessageTimestampType() *MessageTimestampType {
+func (in *KafkaTopic) GetMessageTimestampType() *string {
 	if in.Spec.KafkaTopicConfig == nil {
 		return nil
 	}
 	return in.Spec.KafkaTopicConfig.MessageTimestampType
 }
 
-func (in *KafkaTopic) GetMinCleanableDirtyRatio() *int64 {
+func (in *KafkaTopic) GetMinCleanableDirtyRatio() *resource.Quantity {
 	if in.Spec.KafkaTopicConfig == nil {
 		return nil
 	}
@@ -431,6 +430,13 @@ func (in *KafkaTopic) GetMinCompactionLagMs() *int64 {
 	return in.Spec.KafkaTopicConfig.MinCompactionLagMs
 }
 
+func (in *KafkaTopic) GetMaxCompactionLagMs() *int64 {
+	if in.Spec.KafkaTopicConfig == nil {
+		return nil
+	}
+	return in.Spec.KafkaTopicConfig.MaxCompactionLagMs
+}
+
 func (in *KafkaTopic) GetMinInsyncReplicas() *int64 {
 	if in.Spec.KafkaTopicConfig == nil {
 		return nil

api/v1beta1/zz_generated.deepcopy.go

Lines changed: 11 additions & 5 deletions
Some generated files are not rendered by default.

config/crd/bases/kafka.infra.doodle.com_kafkatopics.yaml

Lines changed: 11 additions & 2 deletions
@@ -125,6 +125,12 @@ spec:
                 or alternatively the wildcard '*' can be used to throttle all
                 replicas for this topic.
               type: string
+            maxCompactionLagMs:
+              description: The maximum time a message will remain ineligible
+                for compaction in the log. Only applicable for logs that are
+                being compacted.
+              format: int64
+              type: integer
             maxMessageBytes:
               description: The largest record batch size allowed by Kafka. If
                 this is increased and there are consumers older than 0.10.2,
@@ -170,6 +176,9 @@ spec:
                 or `LogAppendTime`
               type: string
             minCleanableDirtyRatio:
+              anyOf:
+              - type: integer
+              - type: string
               description: 'This configuration controls how frequently the log
                 compactor will attempt to clean the log (assuming LogCompaction
                 is enabled). By default we will avoid cleaning a log where more
@@ -184,8 +193,8 @@ spec:
                 (uncompacted) records for at least the MinCompactionLagMs duration,
                 or (ii) if the log has had dirty (uncompacted) records for at
                 most the MaxCompactionLagMs period.'
-              format: int64
-              type: integer
+              pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+              x-kubernetes-int-or-string: true
             minCompactionLagMs:
              description: The minimum time a message will remain uncompacted
                in the log. Only applicable for logs that are being compacted.
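With the schema change, `minCleanableDirtyRatio` is validated as a Kubernetes int-or-string Quantity, so fractional ratios like "0.5" become expressible, and `maxCompactionLagMs` is a new int64 field. A hedged sketch of a manifest using them: the `kafkaTopicConfig` keys come from the CRD above, but the topic name, namespace, and the top-level spec fields (`partitions`, `replicationFactor`) are assumptions about this controller's API, not confirmed by the diff:

```yaml
apiVersion: kafka.infra.doodle.com/v1beta1
kind: KafkaTopic
metadata:
  name: demo-topic            # illustrative name
  namespace: k8skafka-system
spec:
  partitions: 3               # assumed spec field
  replicationFactor: 1        # assumed spec field
  kafkaTopicConfig:
    cleanupPolicy: compact
    maxCompactionLagMs: 86400000     # new field: 1 day, in milliseconds
    minCleanableDirtyRatio: "0.5"    # Quantity string; an integer also validates
```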

config/crd/kustomization.yaml

Lines changed: 5 additions & 0 deletions
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- bases/kafka.infra.doodle.com_kafkatopics.yaml
# +kubebuilder:scaffold:crdkustomizeresource

config/default/kustomization.yaml

Lines changed: 7 additions & 0 deletions
apiVersion: kustomize.config.k8s.io/v1beta1
namespace: k8skafka-system
bases:
- ../crd
- ../rbac
- ../manager
- namespace.yaml

config/default/namespace.yaml

Lines changed: 6 additions & 0 deletions
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller
  name: k8skafka-system

0 commit comments