Forecast Accuracy

“How far off were we?” — Explore hourly error vs. lead time, plus a clean daily Forecast vs Observed comparison.

Typical miss (MAE)
Average absolute error across selected hours
Big misses weighted (RMSE)
Large errors count extra
Lean (Bias)
+ means forecast ran high; – means low
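The three metric cards above follow the standard definitions. A minimal sketch, assuming forecast and observed values come as paired sequences of hourly readings:

```python
import math

def metrics(forecast, observed):
    """Compute MAE, RMSE, and bias for paired forecast/observed values."""
    errors = [f - o for f, o in zip(forecast, observed)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n             # typical miss
    rmse = math.sqrt(sum(e * e for e in errors) / n)  # big misses weighted
    bias = sum(errors) / n                            # + = forecast ran high
    return mae, rmse, bias

mae, rmse, bias = metrics([21.0, 19.5, 18.0], [20.0, 20.0, 18.5])
```

Note that RMSE squares each error before averaging, which is why a single large miss moves it more than it moves MAE, and that positive and negative errors cancel in bias but not in the other two.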

What am I looking at?

For the selected city and day(s), we compare hourly forecasts to what actually happened. The line shows error vs. “hours ahead” (how far in advance the forecast was made). Toggle “All days” to overlay multiple days, and click a chip to isolate one.

Data flow

Open-Meteo API
Docker
Airflow
Postgres
dbt
CSV export
GitHub Pages

Batch DAGs compute MAE, RMSE, and Bias by horizon and export docs/data/metrics_history.csv (and metrics_latest.csv), which power this page. dbt builds the staging and mart models, runs its tests, and we publish the dbt docs.
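The per-horizon aggregation step can be sketched with pandas. The column names here (`horizon_h`, `error`) are illustrative, not the real export schema produced by the batch DAGs:

```python
import pandas as pd

# Toy error table: signed forecast error per hourly record, tagged with
# the lead time ("horizon") at which the forecast was issued.
df = pd.DataFrame({
    "horizon_h": [1, 1, 6, 6],
    "error":     [0.5, -1.0, 2.0, -2.0],
})

# Group by horizon, then reduce to the three headline metrics.
summary = (
    df.assign(abs_error=df["error"].abs(), sq_error=df["error"] ** 2)
      .groupby("horizon_h")
      .agg(mae=("abs_error", "mean"),
           mse=("sq_error", "mean"),
           bias=("error", "mean"))
)
summary["rmse"] = summary["mse"] ** 0.5

# Export one row per horizon, as the page's CSVs do.
summary.drop(columns="mse").to_csv("metrics_history.csv")
```

Grouping by horizon is what lets the chart plot error against “hours ahead”: each exported row summarizes every forecast issued at that lead time.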

Source: github.com/nicolaaswanepoel-hue/CV

Data generated:

dbt status

Latest dbt build & source freshness from the pipeline.

MODELS (OK / TOTAL)
TESTS (PASS / TOTAL)
DOCS
Last run:
Freshness:

Forecast vs Observed — Daily

Hourly forecasts are aggregated per day (mean for temperature and wind, sum for precipitation), then compared to the observed daily values.

⬇️ Download compare
Tip: Hover to see daily values. For each hour, the forecast series uses the most recently issued forecast, i.e. the smallest non-negative horizon.
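The two-step rule above (pick the freshest forecast per hour, then roll up to daily) can be sketched as follows. The schema is hypothetical; the real tables are built by dbt:

```python
import pandas as pd

# Toy forecast table: each hour may have several forecasts at
# different lead times ("horizon_h").
fc = pd.DataFrame({
    "valid_time": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:00",
                                  "2024-01-01 12:00"]),
    "horizon_h":  [1, 6, 3],
    "temp_c":     [10.0, 11.0, 14.0],
    "precip_mm":  [0.2, 0.0, 1.0],
})

# Step 1: per hour, keep the smallest non-negative horizon
# (the most recently issued forecast).
latest = (fc[fc["horizon_h"] >= 0]
          .sort_values("horizon_h")
          .groupby("valid_time", as_index=False)
          .first())

# Step 2: aggregate to daily — mean for temperature, sum for precipitation.
daily = (latest.assign(day=latest["valid_time"].dt.date)
               .groupby("day")
               .agg(temp_c=("temp_c", "mean"),
                    precip_mm=("precip_mm", "sum")))
```

Summing precipitation while averaging temperature matters: a daily rainfall total is additive across hours, whereas a daily “temperature” is conventionally a central value.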