88 changes: 30 additions & 58 deletions analysis/notebooks/example.ipynb
@@ -4,8 +4,7 @@
"metadata": {
"colab": {
"name": "FuzzBench custom analysis",
"provenance": [],
"collapsed_sections": []
"provenance": []
},
"kernelspec": {
"name": "python3",
@@ -16,18 +15,16 @@
{
"cell_type": "markdown",
"metadata": {
"id": "ksFDTbRv4wJq",
"colab_type": "text"
"id": "ksFDTbRv4wJq"
},
"source": [
"This Colab demonstrates how to use the FuzzBench analysis library to show experiment results that might not be included in the default report. "
"This Colab demonstrates how to use the FuzzBench analysis library to show experiment results that might not be included in the default report."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jHp878kXx7o5",
"colab_type": "text"
"id": "jHp878kXx7o5"
},
"source": [
"# Get the data\n",
@@ -40,21 +37,18 @@
{
"cell_type": "code",
"metadata": {
"id": "uencDT44n_rH",
"colab_type": "code",
"colab": {}
"id": "uencDT44n_rH"
},
"source": [
"!wget https://www.fuzzbench.com/reports/sample/data.csv.gz"
],
"execution_count": 0,
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "PSk0fl6zxdNL",
"colab_type": "text"
"id": "PSk0fl6zxdNL"
},
"source": [
"# Get the code"
@@ -63,9 +57,7 @@
{
"cell_type": "code",
"metadata": {
"id": "jpCRiMGSnel9",
"colab_type": "code",
"colab": {}
"id": "jpCRiMGSnel9"
},
"source": [
"# Install requirements.\n",
@@ -75,14 +67,13 @@
"# Add fuzzbench to PYTHONPATH.\n",
"import sys; sys.path.append('fuzzbench')"
],
"execution_count": 0,
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "HAKEUNgVxKay",
"colab_type": "text"
"id": "HAKEUNgVxKay"
},
"source": [
"# Experiment results"
@@ -91,9 +82,7 @@
{
"cell_type": "code",
"metadata": {
"id": "nUKg76lKohGX",
"colab_type": "code",
"colab": {}
"id": "nUKg76lKohGX"
},
"source": [
"import pandas\n",
@@ -106,14 +95,13 @@
"plotter = plotting.Plotter(fuzzer_names)\n",
"results = experiment_results.ExperimentResults(experiment_data, '.', plotter)"
],
"execution_count": 0,
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "F-sulHbYzv2U",
"colab_type": "text"
"id": "F-sulHbYzv2U"
},
"source": [
"## Top level results"
@@ -123,7 +111,6 @@
"cell_type": "code",
"metadata": {
"id": "1_925XfIslG1",
"colab_type": "code",
"outputId": "68c9731b-eac5-43f6-d61f-ca9c5ec44462",
"colab": {
"base_uri": "https://localhost:8080/",
@@ -133,7 +120,7 @@
"source": [
"results.summary_table"
],
"execution_count": 4,
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -613,8 +600,7 @@
{
"cell_type": "markdown",
"metadata": {
"id": "83M0iBha1gwi",
"colab_type": "text"
"id": "83M0iBha1gwi"
},
"source": [
"### Rank by median on benchmarks, then by average *rank*\n"
@@ -624,7 +610,6 @@
"cell_type": "code",
"metadata": {
"id": "ArkdFarF1DlY",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 333
@@ -635,7 +620,7 @@
"# The critical difference plot visualizes this ranking\n",
"SVG(results.critical_difference_plot)"
],
"execution_count": 5,
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -656,7 +641,6 @@
"cell_type": "code",
"metadata": {
"id": "3cqNjyrcs1t5",
"colab_type": "code",
"outputId": "4a6197f6-0fad-41f3-e072-798f58cd3a60",
"colab": {
"base_uri": "https://localhost:8080/",
@@ -666,7 +650,7 @@
"source": [
"results.rank_by_median_and_average_rank.to_frame()"
],
"execution_count": 6,
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -792,8 +776,7 @@
{
"cell_type": "markdown",
"metadata": {
"id": "OiL_eV1o1Z5J",
"colab_type": "text"
"id": "OiL_eV1o1Z5J"
},
"source": [
"### Rank by pair-wise statistical test wins on benchmarks, then by average rank\n"
@@ -803,7 +786,6 @@
"cell_type": "code",
"metadata": {
"id": "mo7zxADose5u",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 521
@@ -813,7 +795,7 @@
"source": [
"results.rank_by_stat_test_wins_and_average_rank.to_frame()"
],
"execution_count": 7,
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -939,8 +921,7 @@
{
"cell_type": "markdown",
"metadata": {
"id": "FtlqJC3911wJ",
"colab_type": "text"
"id": "FtlqJC3911wJ"
},
"source": [
"### Rank by median on benchmarks, then by average normalized score"
@@ -950,7 +931,6 @@
"cell_type": "code",
"metadata": {
"id": "vemqVRl318H-",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 521
@@ -960,7 +940,7 @@
"source": [
"results.rank_by_median_and_average_normalized_score.to_frame()"
],
"execution_count": 8,
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -1086,8 +1066,7 @@
{
"cell_type": "markdown",
"metadata": {
"id": "aPTUjyAU2FGW",
"colab_type": "text"
"id": "aPTUjyAU2FGW"
},
"source": [
"### Rank by average rank on benchmarks, then by average rank"
@@ -1097,7 +1076,6 @@
"cell_type": "code",
"metadata": {
"id": "XM7fd6ib2Gjj",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 521
@@ -1107,7 +1085,7 @@
"source": [
"results.rank_by_average_rank_and_average_rank.to_frame()"
],
"execution_count": 9,
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -1233,8 +1211,7 @@
{
"cell_type": "markdown",
"metadata": {
"id": "q8NaP_ox2Mrz",
"colab_type": "text"
"id": "q8NaP_ox2Mrz"
},
"source": [
"# Benchmark level results"
@@ -1244,7 +1221,6 @@
"cell_type": "code",
"metadata": {
"id": "iH000cvXuAlv",
"colab_type": "code",
"outputId": "99ef80d0-b6e9-4bb0-daa6-568beaa56e87",
"colab": {
"base_uri": "https://localhost:8080/",
@@ -1256,7 +1232,7 @@
"benchmarks = {b.name:b for b in results.benchmarks}\n",
"for benchmark_name in benchmarks.keys(): print(benchmark_name)"
],
"execution_count": 10,
"execution_count": null,
"outputs": [
{
"output_type": "stream",
@@ -1291,7 +1267,6 @@
"cell_type": "code",
"metadata": {
"id": "X62DQuHkulQk",
"colab_type": "code",
"outputId": "43e27357-be2b-4845-df55-a07fb90b2196",
"colab": {
"base_uri": "https://localhost:8080/",
@@ -1302,7 +1277,7 @@
"sqlite = benchmarks['sqlite3_ossfuzz']\n",
"SVG(sqlite.violin_plot)"
],
"execution_count": 11,
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -1323,7 +1298,6 @@
"cell_type": "code",
"metadata": {
"id": "lW3X-NnAw2ak",
"colab_type": "code",
"outputId": "8a3bcfe7-b75b-47cb-d717-bfde027fece5",
"colab": {
"base_uri": "https://localhost:8080/",
@@ -1333,7 +1307,7 @@
"source": [
"SVG(sqlite.coverage_growth_plot)"
],
"execution_count": 12,
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -1354,7 +1328,6 @@
"cell_type": "code",
"metadata": {
"id": "EOe1M95LwA2I",
"colab_type": "code",
"outputId": "36774705-120a-4a50-9b36-9744cbb7bdae",
"colab": {
"base_uri": "https://localhost:8080/",
@@ -1364,7 +1337,7 @@
"source": [
"SVG(sqlite.mann_whitney_plot)"
],
"execution_count": 13,
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -1385,7 +1358,6 @@
"cell_type": "code",
"metadata": {
"id": "1jo-5PlAv7gt",
"colab_type": "code",
"outputId": "1b24b804-cd4e-481f-a2e7-c3dc437336ec",
"colab": {
"base_uri": "https://localhost:8080/",
@@ -1396,7 +1368,7 @@
"# Show p values\n",
"sqlite.mann_whitney_p_values"
],
"execution_count": 14,
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
@@ -1760,4 +1732,4 @@
]
}
]
}
}
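The notebook above builds its tables with FuzzBench's own `experiment_results` and `plotting` modules, which need the repo on `PYTHONPATH`. For readers without Colab, the core "rank by median on each benchmark, then by average rank" idea can be sketched with pandas alone. The column names below (`benchmark`, `fuzzer`, `trial_id`, `edges_covered`) mirror the sample `data.csv.gz`, but the synthetic data and the exact schema are assumptions, not the library's implementation:

```python
import pandas as pd

# Synthetic stand-in for FuzzBench's data.csv.gz: one row per trial with
# its final edge coverage. Real data files carry more columns and snapshots.
data = pd.DataFrame(
    [
        ("sqlite3_ossfuzz", "afl", 0, 100),
        ("sqlite3_ossfuzz", "afl", 1, 120),
        ("sqlite3_ossfuzz", "libfuzzer", 0, 150),
        ("sqlite3_ossfuzz", "libfuzzer", 1, 160),
        ("freetype2", "afl", 0, 300),
        ("freetype2", "afl", 1, 320),
        ("freetype2", "libfuzzer", 0, 330),
        ("freetype2", "libfuzzer", 1, 340),
    ],
    columns=["benchmark", "fuzzer", "trial_id", "edges_covered"],
)

# Median final coverage of each fuzzer on each benchmark.
medians = (
    data.groupby(["benchmark", "fuzzer"])["edges_covered"].median().unstack()
)

# Rank fuzzers within each benchmark (1 = best), then average across
# benchmarks -- the same two-stage idea as the notebook's ranking tables.
ranks = medians.rank(axis=1, ascending=False)
average_rank = ranks.mean(axis=0).sort_values()
print(average_rank)
```

On this toy data `libfuzzer` ranks first on both benchmarks, so it gets average rank 1.0 and `afl` gets 2.0; the real notebook additionally offers pairwise statistical-test wins and normalized scores as tie-breakers.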
2 changes: 1 addition & 1 deletion service/experiment-config.yaml
@@ -3,7 +3,7 @@
# will not work with your setup.

trials: 20
max_total_time: 82800 # 23 hours, the default time for preemptible experiments.
max_total_time: 129600 # 36 hours
cloud_project: fuzzbench
docker_registry: gcr.io/fuzzbench
cloud_compute_zone: us-central1-c
16 changes: 16 additions & 0 deletions service/experiment-requests.yaml
@@ -20,6 +20,22 @@
# Please add new experiment requests towards the top of this file.
#

- experiment: 2025-10-11-fuzz36-bug
description: ""
type: bug
fuzzers:
- honggfuzz
- afl
- aflfast
- aflplusplus
- aflsmart
- fairfuzz
- mopt
- eclipser
- darwin
- libfuzzer


- experiment: 2023-06-12-aflpp
description: "Benchmark afl++ releases and newmutation"
fuzzers:
1 change: 1 addition & 0 deletions service/gcbrun_experiment.py
@@ -16,6 +16,7 @@
"""Entrypoint for gcbrun into run_experiment. This script will get the command
from the last PR comment containing "/gcbrun" and pass it to run_experiment.py
which will run an experiment."""
# a dummy comment!

import logging
import os
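The `gcbrun_experiment.py` docstring says the service takes its command from the last PR comment containing "/gcbrun". A minimal sketch of that extraction step (the real service fetches comments via the GitHub API; the function name and parsing here are assumptions for illustration):

```python
from typing import Optional


def parse_gcbrun_command(comment_body: str) -> Optional[str]:
    """Return the arguments following "/gcbrun" in a comment, if any."""
    for line in comment_body.splitlines():
        line = line.strip()
        if line.startswith("/gcbrun"):
            # Everything after the trigger word is the run_experiment command.
            return line[len("/gcbrun"):].strip()
    return None


comment = "LGTM\n/gcbrun run_experiment.py --experiment-name 2025-10-11-fuzz36-bug"
print(parse_gcbrun_command(comment))
```

A comment with no trigger yields `None`, which is how the service can skip PRs that have not requested an experiment.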