Improve inserter delivery guarantees #212

@nstylo

Description

I am using the inserter to periodically write batches of messages from a Kafka topic into ClickHouse. I wonder what happens to the data when an error occurs on write/commit. It seems the internal buffer is dropped and the unfinished binary data stream is aborted.
This essentially means that any buffered data is lost on error. Is that correct? What are some best practices for guaranteeing delivery, and how would I best implement a retry mechanism?
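One common pattern for this situation is to keep each batch in a caller-owned buffer and discard it only after the commit succeeds, so a failed insert can be retried from scratch with a fresh insert. The sketch below shows that retry shape in plain Rust; `send_batch` is a hypothetical stand-in for the real client call (writing the rows and committing), not an API of this crate. When the source is Kafka, committing the consumer offsets only after `deliver_with_retry` returns `Ok` gives at-least-once delivery.

```rust
/// Retry delivering a batch up to `max_attempts` times.
/// The batch stays owned by the caller, so a failed attempt loses nothing:
/// the same rows are simply re-sent through a fresh insert.
/// `send_batch` is a hypothetical closure standing in for a real client call.
fn deliver_with_retry<E>(
    batch: &[String],
    max_attempts: u32,
    mut send_batch: impl FnMut(&[String]) -> Result<(), E>,
) -> Result<(), E> {
    assert!(max_attempts > 0, "need at least one attempt");
    let mut last_err = None;
    for _ in 0..max_attempts {
        match send_batch(batch) {
            Ok(()) => return Ok(()),
            // The batch buffer is untouched; loop around and try again.
            Err(e) => last_err = Some(e),
        }
    }
    Err(last_err.expect("loop ran at least once"))
}
```

A backoff between attempts and a dead-letter path after `max_attempts` would round this out in production, but the core guarantee comes from not handing ownership of the rows to the client until the commit is confirmed.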

Metadata

Assignees

No one assigned

    Labels

enhancement (New feature or request)

    Projects

    No projects

    Relationships

    None yet

    Development

    No branches or pull requests

    Issue actions