Add representative benchmarks #2

@NthTensor

Description

We need benchmarks to measure proposed improvements against. We probably need several different test cases, each focusing on a different type of workload.

  • Fork-join: A compute-heavy problem that presents an optimal case for divide and conquer (a rough sketch follows this list).
  • Web-server: A large number of IO-bound futures and asynchronous closures, perhaps simulating an HTTP server.
  • Bevy: A large number of compute-bound synchronous closures added to the injector at intervals, with durations following a Zipf distribution.
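
For the fork-join case, a minimal sketch of what such a benchmark could look like, assuming a `criterion` harness and `rayon::join` as the baseline scheduler; the cutoff, input size, and names are placeholders rather than decisions:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

// Recursive divide-and-conquer sum: the classic fork-join shape.
fn parallel_sum(data: &[u64]) -> u64 {
    // Placeholder cutoff below which we stop splitting and sum sequentially.
    const SEQUENTIAL_CUTOFF: usize = 4096;
    if data.len() <= SEQUENTIAL_CUTOFF {
        return data.iter().sum();
    }
    let (left, right) = data.split_at(data.len() / 2);
    let (l, r) = rayon::join(|| parallel_sum(left), || parallel_sum(right));
    l + r
}

fn fork_join(c: &mut Criterion) {
    let data: Vec<u64> = (0..1_000_000).collect();
    c.bench_function("fork_join_sum", |b| {
        b.iter(|| parallel_sum(std::hint::black_box(&data)))
    });
}

criterion_group!(benches, fork_join);
criterion_main!(benches);
```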

Most workloads will probably be a mix of these three types of tasks, but it makes sense to benchmark them separately in a standard way. When a given benchmark is also applicable to rayon (e.g., it has no async jobs), we should make it possible to run it with either executor and compare the two.
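
One way the rayon comparison could be laid out, again assuming `criterion`; `rayon::scope` serves as the reference implementation, and the entry point for this crate's executor is left as a commented placeholder since its API is not decided here:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

// Arbitrary compute-bound kernel standing in for real work.
fn compute_kernel(n: u64) -> u64 {
    (0..n).fold(0u64, |acc, i| acc.wrapping_add(i.wrapping_mul(2_654_435_761)))
}

fn compute_bound_closures(c: &mut Criterion) {
    let mut group = c.benchmark_group("compute_bound_closures");

    group.bench_function("rayon", |b| {
        b.iter(|| {
            rayon::scope(|s| {
                for _ in 0..1_000 {
                    s.spawn(|_| {
                        std::hint::black_box(compute_kernel(10_000));
                    });
                }
            })
        })
    });

    // Placeholder for this crate's executor once its spawning API exists:
    // group.bench_function("this_crate", |b| b.iter(|| todo!()));

    group.finish();
}

criterion_group!(benches, compute_bound_closures);
criterion_main!(benches);
```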
