Commit e4df17f

feat: Merge benchmarks
* Update tRIBS benchmarks for version 5.3.0
* Add new Python scripts for automated user benchmarking
* Update README files with instructions for running benchmark verification
1 parent 57f37ee commit e4df17f


64 files changed: +185033 / -126 lines

.gitignore

Lines changed: 6 additions & 0 deletions
@@ -1,2 +1,8 @@
 data/.DS_Store
 .DS_Store
+watershed-scale-big-spring/results/test/parallel/*
+watershed-scale-big-spring/results/test/serial/*
+point-scale-happy-jack/results/test/*
+!point-scale-happy-jack/results/test/.gitkeep
+!watershed-scale-big-spring/results/test/serial/.gitkeep
+!watershed-scale-big-spring/results/test/parallel/.gitkeep

README.html

Lines changed: 0 additions & 102 deletions
This file was deleted.

README.md

Lines changed: 72 additions & 23 deletions
@@ -1,36 +1,85 @@
-# TIN-Based Real-Time Integrated Basin Simulator point-scale benchmark
-This repository hosts the setup for executing a point-scale model of the [Happy Jack SNOTEL Site](https://wcc.sc.egov.usda.gov/nwcc/site?sitenum=969) in Northern Arizona, USA, using the TIN-Based Real-Time Integrated Basin Simulator ([tRIBS](https://tribshms.readthedocs.io/en/latest/)). tRIBS v5.3 (or later) uses CMake as a build system, instructions on downloading and building tRIBS can be found [here](https://tribshms.readthedocs.io/en/latest/man/Model_Execution.html#compilation-instructions). Or alternatively one may use the tRIBS docker image, more information on this can be found [here](https://tribshms.readthedocs.io/en/latest/man/Docker.html#docker).
+# tRIBS Official Benchmark Cases
 
-## Model Execution
-From the command line tRIBS can be executed as follows, assuming the executable is stored in the sub-directory `bin`:
+This repository provides a set of official benchmark cases for the tRIBS (TIN-based Real-time Integrated Basin Simulator) distributed hydrological model. Its primary purpose is to allow users to:
 
+1. **Validate a new tRIBS installation** by ensuring it can replicate official results.
+2. **Provide standardized example datasets** for learning how to set up and run tRIBS simulations.
+
+This repository is designed to be a companion to the main [tRIBS model source code repository](https://github.com/tRIBS-Model/tRIBS).
+
+## Available Benchmarks
+
+This repository contains the input files for the following benchmark simulations:
+
+* **`point-scale-happy-jack/`**: A single-point simulation at the Happy Jack SNOTEL site in Arizona. This case is ideal for testing the model's vertical soil moisture and energy balance components.
+* **`watershed-scale-big-spring/`**: A full watershed simulation for the Big Spring basin. This case is designed to test the model's hydrologic routing, spatial processes, and mass balance at the basin scale. It includes input files for both **serial** and **parallel (MPI)** model runs.
+
+## Quickstart: How to Verify Your tRIBS Installation
+
+Follow these steps to run a benchmark and verify that your tRIBS installation is producing correct results.
+
+### Step 1: Prerequisites
+
+Before you begin, ensure you have the following:
+
+* A successful compilation and installation of the tRIBS model.
+* A Python 3 environment.
+* The [pytRIBS](https://github.com/tRIBS-Model/pytRIBS) Python package and its dependencies.
+
+### Step 2: Run a Benchmark Simulation
+
+Navigate into one of the benchmark directories. Each directory contains a specific `README.md` with detailed instructions on how to execute the tRIBS simulation for that case.
+
+For example:
+```bash
+cd watershed-scale-big-spring
+# Follow the instructions in watershed-scale-big-spring/README.md to run the model
+```
+The model will generate output files in the `results/` subdirectory within the benchmark folder.
+
+### Step 3: Verify Your Results
+
+After the simulation is complete, return to the root directory of this repository. We provide an automated Python script to compare your model's output against the official reference values.
+
+Run the `verify.py` script, specifying which benchmark you ran.
+Example usage:
+
+To verify your results for the **serial watershed** run:
+```bash
+python verification/verify.py watershed-scale-serial
 ```
-bin/tRIBS <path/to/infile>
+To verify your results for the **point scale** run:
+```bash
+python verification/verify.py point-scale
 ```
+**Expected Output**
+
+The script will analyze your results and provide a clear, color-coded summary.
 
-In this case the happy jack input files can be found in ```src/in_files/happy_jack.in``` and can be further modified to explore tRIBS functionality. Note: the current input file is setup with relative paths and results will be saved to ```results/test```. Also, any modification in ```data/model``` may require an update of the .in file.
+**On success, you will see a message like this:**
+```text
+Verifying benchmark 'watershed-scale-serial' against references for tRIBS v5.3.0...
 
-## Benchmark
-The benchmark case results are stored as a zip file under ```results/reference```. These results can be visualized in comparison to SNOTEL[^1]<sup>,</sup>[^2] data for the Happy Jack site as demonstrated in the jupyter notebook in ```src/tRIBS_snotel_comparison.ipynb```.
+-> Analyzing your model output in 'watershed-scale-big-spring'...
 
-## Directory Structure:
-### data
-Contains all the necessary data to run tRIBS at Happy Jack and includes Snotel and SWANN data as a calibration/validation set.
-### doc
-Contains notebooks for running and analyzing this specific benchmark case, along side additional documentation.
-### src
-Is designed to contain source code for for the tRIBS executable, which can be obtained [here](https://github.com/tribshms/tRIBS).
-### bin
-Directory for building and storing tRIBS executable, with instructions [here](https://tribshms.readthedocs.io/en/latest/man/Model_Execution.html#compilation-instructions).
-### results
-Directory for results, with a _reference_ and _test_ sub-directories. The former contains reference outputs from the Happy Jack point tRIBS simulation, while the later is empty and intended to store additional model simulations. Note the main results also contains a zip of the _reference_ sub-directory.
+--- Comparison Results ---
+Total Change in Storage (mm)  | Ref: 9.3000    | Yours: 9.3000    | [PASS]
+Total Evapotranspiration (mm) | Ref: 650.8000  | Yours: 650.8000  | [PASS]
+Total Precipitation (mm)      | Ref: 1250.2000 | Yours: 1250.2000 | [PASS]
+Total Runoff (mm)             | Ref: 590.1000  | Yours: 590.1000  | [PASS]
+--------------------------
+
+Success! All checks passed.
+Your tRIBS installation appears to be working correctly for this benchmark.
+```
 
 
+---
 
-## References
+## Advanced Analysis and Visualization
 
-[^1]: Sun N, H Yan, M Wigmosta, R Skaggs, R Leung, and Z Hou. 2019. “Regional snow parameters estimation for large-domain hydrological applications in the western United States.” Journal of Geophysical Research: Atmospheres. doi: 10.1029/2018JD030140
+The `doc/` directory contains additional Jupyter Notebooks that were used in previous versions for plotting and analysis. These can be used for a more in-depth visual comparison of your results.
 
-[^2]: Yan H, N Sun, M Wigmosta, R Skaggs, Z Hou, and R Leung. 2018. “Next-generation intensity-duration-frequency curves for hydrologic design in snow-dominated environments.” Water Resources Research, 54(2), 1093–1108.
-BCQC Data Format
+## For Maintainers
 
+The official `reference_values.json` file can be updated by running the `doc/verification/generate_references.py` script. This should only be done after a new set of results has been generated following an official tRIBS model update.
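The pass/fail comparison the verification step performs can be sketched in a few lines of Python. This is a hypothetical illustration of the idea (checking each mass-balance total against a stored reference within a tolerance), not the actual `verification/verify.py` script; the metric names mirror the sample output shown in the README diff, and the real references live in `reference_values.json`.

```python
import math

# Illustrative reference values only (mirroring the README's sample output);
# the real ones are read from reference_values.json by the actual script.
REFERENCES = {
    "Total Precipitation (mm)": 1250.2,
    "Total Runoff (mm)": 590.1,
}

def compare_results(yours: dict, refs: dict, rel_tol: float = 1e-3) -> bool:
    """Print one PASS/FAIL line per metric; return True if all pass."""
    ok = True
    for name, ref in refs.items():
        val = yours.get(name, float("nan"))  # missing metric -> NaN -> FAIL
        passed = math.isclose(val, ref, rel_tol=rel_tol)
        ok = ok and passed
        print(f"{name:<32} | Ref: {ref:.4f} | Yours: {val:.4f} | "
              f"[{'PASS' if passed else 'FAIL'}]")
    return ok
```

A run whose totals match the references within the relative tolerance passes every check; any missing or out-of-tolerance metric fails the whole benchmark.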

doc/BigSpring.kmz

184 KB
Binary file not shown.

doc/notebooks/Results.ipynb

Lines changed: 163 additions & 0 deletions
@@ -0,0 +1,163 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "1011976b-d400-46b6-a91c-1f72a6ef9c77",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# required import\n",
+    "import os\n",
+    "import geopandas as gpd\n",
+    "import pandas as pd\n",
+    "import matplotlib.pyplot as plt\n",
+    "import matplotlib.font_manager as fm\n",
+    "import matplotlib as mpl\n",
+    "import numpy as np\n",
+    "from matplotlib_scalebar.scalebar import ScaleBar\n",
+    "\n",
+    "# helper scripts to read in spatial results using pandas and geopandas\n",
+    "import read_voi\n",
+    "\n",
+    "os.chdir('../..')\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "cb743616-7f43-41af-9f99-b8f054d0544b",
+   "metadata": {},
+   "source": [
+    "## Import Results Using Pandas and Geopandas"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "e984864f-99f9-4949-b9c6-2447c96e16a9",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "par_results = 'results/test/parallel/' # change 2nd directory to test or reference\n",
+    "ser_results = 'results/test/serial/'"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "f88a0b89-deae-467c-a1e2-015352951fa9",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# read in integrated spatial model results using pandas\n",
+    "int_df_par = read_voi.merge_parallel_spatial_files(f'{par_results}bigsp',35072)\n",
+    "int_df_ser = pd.read_csv(f'{ser_results}bigsp.35072_00i')\n",
+    "\n",
+    "# read in voronoi from serial, they should be identical between but reading in both for demonstrative purposes\n",
+    "voi_ser,_ = read_voi.read_voi_file(f'{ser_results}bigsp_voi',join=int_df_ser)\n",
+    "voi_par = read_voi.merge_parallel_voi(f'{par_results}bigsp_voi',join=int_df_par['35072'])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "e3b04c08-821b-45fd-af2f-4b4eecdf705f",
+   "metadata": {},
+   "source": [
+    "## Plot Spatial Maps of Mean Evapotranspiration Rates Averaged Over The Length of Simulation"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "cf01bf53-b2ca-4f62-98cc-af1cbad0617a",
+   "metadata": {},
+   "source": [
+    "### Parallel"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "d8f26c70-c5bb-4116-a07f-406eb6543e38",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "cm = 1/2.54 # centimeters in inches\n",
+    "fig,ax = plt.subplots(figsize=[18*cm,18*cm],layout='constrained')\n",
+    "low = np.percentile(voi_par['AvET'], 2.5)\n",
+    "high = np.percentile(voi_par['AvET'], 97.5)\n",
+    "voi_par.plot(ax=ax,column='AvET',cmap='YlOrBr',legend=True,vmin=low,vmax=high,legend_kwds={'label': r'ET in mm/hr','orientation': 'horizontal',\"shrink\":.5})\n",
+    "ax.add_artist(ScaleBar(1,location='lower left'))\n",
+    "plt.title('Parallel, Big Spring, Arizona, USA: Map of Mean Evapotranspiration Rate')\n",
+    "plt.axis('off')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "6e71d0c1-477d-4def-ab2c-d31c3670c526",
+   "metadata": {},
+   "source": [
+    "### Serial"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "4cdba6e1-04d6-4d48-bfc7-9d043bdcb404",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "fig,ax = plt.subplots(figsize=[18*cm,18*cm],layout='constrained')\n",
+    "low = np.percentile(voi_ser['AvET'], 2.5)\n",
+    "high = np.percentile(voi_ser['AvET'], 97.5)\n",
+    "voi_ser.plot(ax=ax,column='AvET',cmap='YlOrBr',legend=True,vmin=low,vmax=high,legend_kwds={'label': r'ET in mm/hr','orientation': 'horizontal',\"shrink\":.5})\n",
+    "ax.add_artist(ScaleBar(1,location='lower left'))\n",
+    "plt.title('Serial, Big Spring, Arizona, USA: Map of Mean Evapotranspiration Rate')\n",
+    "plt.axis('off')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "4aa52490-5c34-49b4-9f8f-8e963461f007",
+   "metadata": {},
+   "source": [
+    "## Plot Parallel Partitioning\n",
+    "The figure below shows how individual voronoi cells are assigned or partitioned to a given core. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "82a6ac3a-cbe8-4bdd-8799-c57ca516e2c0",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "fig,ax = plt.subplots(figsize=[18*cm,18*cm],layout='constrained')\n",
+    "voi_par.plot(ax=ax,column='processor',cmap='Set2',legend=True,legend_kwds={'label': 'Core #','orientation': 'horizontal',\"shrink\":.5})\n",
+    "ax.add_artist(ScaleBar(1,location='lower left'))\n",
+    "plt.title('Big Spring, Arizona, USA: Partition map')\n",
+    "plt.axis('off')"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.12.2"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
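The notebook above relies on the repository's `read_voi` helpers to stitch per-processor (MPI rank) output tables into a single table. As a rough sketch of that merging idea only — the actual tRIBS output file naming, column names, and `read_voi` function signatures may differ — a minimal pandas version could look like this:

```python
import io
import pandas as pd

def merge_parallel_spatial_files(buffers):
    """Concatenate per-rank result tables into one DataFrame.

    Hypothetical stand-in for read_voi.merge_parallel_spatial_files: in the
    real workflow each MPI rank writes its own spatial results file; here we
    simply read each table and stack them, one row per Voronoi cell.
    """
    frames = [pd.read_csv(buf) for buf in buffers]
    return pd.concat(frames, ignore_index=True)

# Fake per-rank outputs (illustrative columns: cell ID and mean ET rate).
rank0 = io.StringIO("ID,AvET\n0,0.11\n1,0.09\n")
rank1 = io.StringIO("ID,AvET\n2,0.12\n3,0.10\n")
merged = merge_parallel_spatial_files([rank0, rank1])
print(merged.shape)  # (4, 2): four Voronoi cells, two columns
```

Because the serial run writes one file covering the whole domain, a merged parallel table built this way should line up cell-for-cell with the serial output, which is what the notebook's side-by-side ET maps check visually.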
571 KB
Binary file not shown.
