This directory contains the build benchmarking system. It uses gradle-profiler to measure and track build performance over time.
The build benchmark system measures various build scenarios to track performance metrics such as:
- Total execution time: Complete build duration
- Gradle configuration time: Time spent in the configuration phase (before task execution starts)
Results are processed to calculate statistical metrics (mean, median, standard deviation, min, max) and can be reported as Pixels for tracking and visualization.
The main Python script that orchestrates the benchmarking process. It:
- Runs gradle-profiler with the scenarios defined in build-benchmark.scenarios
- Processes results from the generated CSV files
- Calculates statistics (mean, median, standard deviation, min, max) for each scenario
- Displays a formatted summary in the terminal
- Optionally reports results to an external API endpoint
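For illustration, here is a minimal sketch of the processing step, assuming one measured value per scenario per iteration in the CSV; the column names `scenario` and `value` are assumptions for this sketch, and the actual layout produced by gradle-profiler may differ:

```python
import csv
import statistics
from collections import defaultdict

def summarise(csv_path):
    """Group measured values by scenario and compute summary statistics."""
    samples = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # "scenario" and "value" are assumed column names for this sketch.
            samples[row["scenario"]].append(float(row["value"]))

    return {
        scenario: {
            "mean": statistics.mean(values),
            "median": statistics.median(values),
            "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
            "min": min(values),
            "max": max(values),
        }
        for scenario, values in samples.items()
    }

if __name__ == "__main__":
    for scenario, stats in summarise("build-benchmarks/results/benchmark.csv").items():
        print(scenario, stats)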
The script can be run from anywhere in the repository:
```
python3 build-benchmarks/run-benchmark.py [OPTIONS]
```

Options:

- `--gradle-user-home PATH`: Specify the Gradle user home directory (default: `~/.gradle`)
- `--github-action-run-id ID`: GitHub Action run ID for linking results with CI workflows
- `--git-commit-sha SHA`: Git commit SHA for linking results to specific code versions
- `--report-pixel`: Enable sending results to the reporting API (requires `--github-action-run-id` and `--git-commit-sha`)
- `--skip-profiler`: Skip running gradle-profiler (only process existing results)
- `--skip-processing`: Skip processing results (only run gradle-profiler)
- `--csv-file PATH`: Path to the CSV file to process (only used with `--skip-profiler`). Defaults to `build-benchmarks/results/benchmark.csv`
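For example, to process a previously generated results file from a custom location (the path below is illustrative):

```
python3 build-benchmarks/run-benchmark.py --skip-profiler --csv-file path/to/benchmark.csv
```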
Make sure that you have gradle-profiler installed.
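If it is not installed yet, gradle-profiler is typically available through SDKMAN! or Homebrew (verify the package name against the gradle-profiler documentation):

```
sdk install gradleprofiler
# or
brew install gradle-profiler
```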
If your Gradle user home is not `~/.gradle`, provide that as a parameter as well; see `--gradle-user-home PATH` above.
Run full benchmark locally:
```
python3 build-benchmarks/run-benchmark.py
```

Run benchmark and report to API (this should only be run from CI in controlled conditions):

```
python3 build-benchmarks/run-benchmark.py \
  --github-action-run-id "12345" \
  --git-commit-sha "abc123" \
  --report-pixel
```

Process existing results without re-running:

```
python3 build-benchmarks/run-benchmark.py --skip-profiler
```

Run profiler only (skip processing):

```
python3 build-benchmarks/run-benchmark.py --skip-processing
```

To add a new scenario, edit `build-benchmark.scenarios` and add a new scenario block. Refer to the gradle-profiler documentation for available options.
Example:
```
build-new-scenario {
    title = "New Scenario"
    tasks = ["app:assembleDebug"]
    warm-ups = 3
    iterations = 5
}
```
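A new scenario can be sanity-checked by invoking gradle-profiler directly before relying on the script, assuming the scenarios file lives at `build-benchmarks/build-benchmark.scenarios` (adjust the path if it differs):

```
gradle-profiler --benchmark --scenario-file build-benchmarks/build-benchmark.scenarios build-new-scenario
```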
The benchmarks are automatically run via the GitHub Actions workflow defined in .github/workflows/build-benchmark-nightly.yml.
The workflow runs nightly on the develop branch.
It uses the `android-large-runner`, which should not be changed. Switching runners, or any change in the underlying resources available to the runner, will reset our build metric baselines.
The workflow can be manually triggered from the GitHub Actions UI:
- Navigate to the Actions tab in the GitHub repository
- Select Build Benchmark - Nightly from the workflow list
- Click Run workflow
- Select the branch you want to benchmark (defaults to `develop` if not specified)
- Click Run workflow to start the benchmark
Note: When manually triggered, the workflow will not publish results to the Pixels repository. You can inspect the terminal output to see results and compare them against the baseline or other builds.
Benchmark results are stored in build-benchmarks/results/:
- `benchmark.csv`: Main results file in long format
- Additional files may be generated by gradle-profiler
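For a quick local look at the results file, a minimal sketch like the following works; it only assumes the file is a standard CSV and makes no assumptions about specific column names:

```python
import csv
from pathlib import Path

# Print each benchmark row as a header -> value mapping for quick inspection.
results = Path("build-benchmarks/results/benchmark.csv")
with results.open(newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    for row in reader:
        print(dict(zip(header, row)))
```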