
docker_logs stops watching files when there is an error in communication with Docker daemon #23847

@ryn9

Description

A note for the community

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Problem

Every so often, docker_logs stops watching container logs when there is an error in communication with the Docker daemon.

From the vector logs:

2025-09-24T15:30:51.555071Z ERROR source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Error in communication with Docker daemon. error=RequestTimeoutError error_type="connection_failed" stage="receiving" container_id=Some("80c9afe0ae10635aa4932a2beacb2f2ef367aadbd4fa45433213e64f5ad4ce12") internal_log_rate_limit=true
2025-09-24T15:30:51.555152Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=80c9afe0ae10635aa4932a2beacb2f2ef367aadbd4fa45433213e64f5ad4ce12
2025-09-24T15:30:51.555155Z ERROR source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Internal log [Error in communication with Docker daemon.] is being suppressed to avoid flooding.
2025-09-24T15:30:51.555264Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=0d69d960c68a873d51928a7e1fc7df96f003b58cc09a3e8157a276c330573f69
2025-09-24T15:30:51.555284Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=6637bc22e279a7dbd9090b7bfca9783a1cb842e54cf752fc6d5d056ff7ab80c4
2025-09-24T15:30:51.555314Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=0a4d2d3381604ce2aea548cf145516df5f557dcb8e08676f608e550e72431e3f
2025-09-24T15:30:51.558530Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=cdf0acdf83163a2a3487a8f9f3ad39e6f42a5b3a5deb38e603b24997f87f240f
2025-09-24T15:30:51.565792Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=7088eb2e51c5d2763211e63a2554a897f7386f952b44a9b953fe0ec35b35248a
2025-09-24T15:30:51.565820Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Stopped watching for container logs. container_id=0628c278e25ffad543d60b1de8e7426d0160f5f124bb66ea6590892356658f3b
2025-09-24T15:30:53.560771Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Started watching for container logs. container_id=0a4d2d3381604ce2aea548cf145516df5f557dcb8e08676f608e550e72431e3f
2025-09-24T15:30:53.566436Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Started watching for container logs. container_id=6637bc22e279a7dbd9090b7bfca9783a1cb842e54cf752fc6d5d056ff7ab80c4
2025-09-24T15:30:53.566789Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Started watching for container logs. container_id=cdf0acdf83163a2a3487a8f9f3ad39e6f42a5b3a5deb38e603b24997f87f240f
2025-09-24T15:30:53.566800Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Started watching for container logs. container_id=80c9afe0ae10635aa4932a2beacb2f2ef367aadbd4fa45433213e64f5ad4ce12
2025-09-24T15:30:53.566927Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Started watching for container logs. container_id=0d69d960c68a873d51928a7e1fc7df96f003b58cc09a3e8157a276c330573f69
2025-09-24T15:30:53.573481Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Started watching for container logs. container_id=7088eb2e51c5d2763211e63a2554a897f7386f952b44a9b953fe0ec35b35248a
2025-09-24T15:30:53.579774Z  INFO source{component_kind="source" component_id=docker_logs component_type=docker_logs}: vector::internal_events::docker_logs: Started watching for container logs. container_id=0628c278e25ffad543d60b1de8e7426d0160f5f124bb66ea6590892356658f3b

It's unclear whether, when the containers start being watched again, watching resumes where it previously left off.

This could lead to missing or duplicate data ingestion.

Additionally, retry_backoff_secs is left at its default in my config (currently 2 seconds), but the connection failure appears to stop the watchers immediately.

Perhaps some additional tunables for timeouts, retries, etc. would make sense? A rough sketch of what that could look like is below.
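For reference, a hedged sketch of how this might be configured. retry_backoff_secs is the only relevant option I am aware of today; the request_timeout_secs key is hypothetical and is shown only to illustrate the kind of tunable being proposed:

sources:
  docker_logs:
    type: docker_logs
    retry_backoff_secs: 2        # existing option, default value shown explicitly
    # request_timeout_secs: 30   # hypothetical: a per-request timeout tunable does not exist today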

Configuration

sources:
  docker_logs:
    type: docker_logs

sinks:
  stdout:
    type: console
    inputs:
      - docker_logs
    encoding:
      codec: json

Version

vector 0.50.0 (x86_64-unknown-linux-gnu 9053198 2025-09-23 14:18:50.944442940)

Debug Output


Example Data

No response

Additional Context

No response

References

No response

    Labels

    source: docker_logs (anything `docker_logs` source related)
    type: bug (a code related bug)
