Linux Benchmark Library

Benchmark orchestration for Linux nodes. Repeatable workloads, rich metrics, and clean outputs.

Docs | CLI | API Reference | Releases

Highlights

  • Layered architecture: runner, controller, app, UI, analytics.
  • Workload plugins via entry points and a user plugin directory (see the discovery sketch after this list).
  • Remote orchestration with Ansible and run journaling.
  • Organized outputs: results, reports, and exports per run and host.
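
Because plugins are exposed as standard Python entry points, they can also be discovered with the standard library. A minimal sketch (Python 3.10+), assuming a hypothetical entry-point group named "lb_runner.plugins"; check pyproject.toml for the group name the project actually registers:

from importlib.metadata import entry_points

# "lb_runner.plugins" is an assumed group name, not confirmed by the project
for ep in entry_points(group="lb_runner.plugins"):
    print(ep.name, "->", ep.value)  # plugin name and the "module:attr" it loads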

Quickstart (CLI)

lb config init -i                    # initialize a benchmark configuration (interactive)
lb plugin list --enable stress_ng    # list workload plugins and enable stress_ng
lb run --remote --run-id demo-run    # run the enabled workloads on the remote hosts

Dev-only provisioning (requires .lb_dev_cli or LB_ENABLE_TEST_CLI=1):

LB_ENABLE_TEST_CLI=1 lb run --docker --run-id demo-docker

Quickstart (Python)

from lb_controller.api import (
    BenchmarkConfig,
    BenchmarkController,
    RemoteExecutionConfig,
    RemoteHostConfig,
)

# Two repetitions of each workload against a single SSH-reachable host.
config = BenchmarkConfig(
    repetitions=2,
    remote_hosts=[
        RemoteHostConfig(name="node1", address="192.168.1.10", user="ubuntu")
    ],
    remote_execution=RemoteExecutionConfig(enabled=True),  # orchestrate via Ansible
)

controller = BenchmarkController(config)
summary = controller.run(["stress_ng"], run_id="demo-run")  # execute the stress_ng workload
print(summary.per_host_output)  # output recorded for each host
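
If per_host_output behaves like a mapping from host name to that host's output (an assumption; only the attribute name is shown above), results can be listed host by host:

# assumption: per_host_output maps host names to their output entries
for host, output in summary.per_host_output.items():
    print(f"{host}: {output}")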

Installation

uv venv              # create a virtual environment
uv pip install -e .  # editable install of the base package

Extras:

uv pip install -e ".[controller]"  # Ansible + analytics
uv pip install -e ".[dev]"         # test + lint tools
uv pip install -e ".[docs]"        # mkdocs

Switch dependency sets:

bash scripts/switch_mode.sh base
bash scripts/switch_mode.sh controller
bash scripts/switch_mode.sh dev

Project layout

linux-benchmark-lib/
|-- lb_runner/        # Runner (plugins, collectors, local execution helpers)
|-- lb_controller/    # Orchestration and journaling
|-- lb_app/           # Stable API for CLI/UI integrations
|-- lb_ui/            # CLI/TUI implementation
|-- lb_analytics/     # Reporting and analytics
|-- lb_provisioner/   # Docker/Multipass helpers
|-- tests/            # Unit and integration tests
|-- scripts/          # Helper scripts
`-- pyproject.toml

Logging policy

  • Configure logging via lb_common.configure_logging() in your entrypoint (see the sketch after this list).
  • lb_ui configures logging automatically; lb_runner and lb_controller do not.
  • Keep stdout clean for LB_EVENT streaming when integrating custom UIs.
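
A minimal entrypoint sketch following this policy; the no-argument configure_logging() call comes straight from the first bullet, while routing records away from stdout is an assumption inferred from the third:

import logging

from lb_common import configure_logging

configure_logging()  # call once, in your entrypoint (lb_ui already does this itself)
log = logging.getLogger(__name__)
log.info("starting benchmark run")  # assumed to go to the configured handlers, not stdout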

Contributing

See docs/contributing.md for development and testing guidance.