fcScore
Purpose
Compute forecast scoring rules for point and density forecasts.
Format
- sc = fcScore(actual, fc)
- sc = fcScore(actual, fc, train=y_train)
- sc = fcScore(actual, draws=D)
- Parameters:
actual (hx1 or hxm matrix) – realized values.
fc (struct or matrix) – Optional, a forecastResult struct or hxm matrix of point forecasts.
train (Nx1 or Nxm matrix) – Optional keyword, training data for MASE normalization.
season (scalar) – Optional keyword, seasonality for MASE. Default = 1.
draws ((n_draws)x(h*m) matrix) – Optional keyword, raw forecast draws for density scores (CRPS, LPS). From dfc.draws of bvarSvForecast() with store_draws = 1.
quiet (scalar) – Optional keyword, set to 1 to suppress output. Default = 0.
- Returns:
sc (struct) – An instance of a scoreResult structure containing RMSE, MASE, SMAPE, CRPS, LPS, energy score, PI coverage, and PI width.
Examples
Point Forecast Scores
new;
library timeseries;
data = loadd(getGAUSSHome("pkgs/timeseries/examples/macro.dat"));

// Hold out the last 12 observations as the evaluation sample
y_train = trimr(data, 0, 12);
actual = trimr(data, rows(data)-12, 0);

result = varFit(y_train, 4, quiet=1);
fc = varForecast(result, 12, quiet=1);
sc = fcScore(actual, fc, train=y_train);
print "RMSE:" sc.rmse;
print "MASE:" sc.mase;
print "sMAPE:" sc.smape;
Density Forecast Scores
new;
library timeseries;
data = loadd(getGAUSSHome("pkgs/timeseries/examples/macro.dat"));

// Hold out the last 12 observations as the evaluation sample
y_train = trimr(data, 0, 12);
actual = trimr(data, rows(data)-12, 0);

result = bvarSvFit(y_train, quiet=1);
fctl = svForecastControlCreate();
fctl.mode = "simulate";
fctl.store_draws = 1;
dfc = bvarSvForecast(result, 12, fctl, quiet=1);
sc = fcScore(actual, draws=dfc.draws);
print "CRPS:" sc.crps;
print "LPS:" sc.lps;
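For intuition, the CRPS for a single variable at a single horizon can be reproduced directly from the draws using the sample (energy) form of the score. This is an illustrative sketch, not library source; x (an nx1 vector of draws for one variable/horizon) and y (the realized scalar) are hypothetical names:

// CRPS = mean|x_i - y| - 0.5 * mean|x_i - x_j|
term1 = meanc(abs(x - y));
term2 = meanc(meanc(abs(x - x')));   // nx1 - 1xn expands to an nxn matrix
crps_1 = term1 - 0.5*term2;

Lower CRPS is better; the score rewards draws that are both close to the outcome (first term) and not overly dispersed (second term).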
Remarks
Point scores (RMSE, MASE, SMAPE) require point forecasts. Density scores
(CRPS, LPS) require the raw draw matrix. Interval scores (PI coverage, PI
width) require a forecastResult with lower/upper bounds.
MASE requires training data for the naive-forecast normalization. If train is not provided, sc.mase is returned as a missing value.
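The MASE normalization can be sketched as follows. This is illustrative only, not library source; y_train, actual, and point_fc are hypothetical names for the training sample, holdout sample, and point-forecast matrix:

// scale = mean absolute error of the seasonal-naive forecast on training data
s = 1;                                          // seasonality (see 'season')
naive_err = trimr(y_train, s, 0) - trimr(y_train, 0, s);
scale = meanc(abs(naive_err));                  // per-series scale
mase = meanc(abs(actual - point_fc)) ./ scale;

Dividing by the seasonal-naive training error makes MASE scale-free, so it can be averaged and compared across series with different units.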
Model
Point scores:

RMSE = sqrt( (1/h) Σ_{t=1}^{h} (y_t − ŷ_t)² )

MASE = [ (1/h) Σ_{t=1}^{h} |y_t − ŷ_t| ] / [ (1/(N−s)) Σ_{t=s+1}^{N} |y_t − y_{t−s}| ]

sMAPE = (100/h) Σ_{t=1}^{h} 2|y_t − ŷ_t| / (|y_t| + |ŷ_t|)

Density scores:

CRPS = (1/n) Σ_i |X_i − y| − (1/(2n²)) Σ_i Σ_j |X_i − X_j|

LPS = −log p̂(y)

Here y_t is the realized value, ŷ_t the point forecast over horizon h, s the seasonality, N the number of training observations, X_1, …, X_n the forecast draws, and p̂ the predictive density.
where CRPS (Continuous Ranked Probability Score) is a proper scoring rule for density forecasts and LPS (Log Predictive Score) is the negative log predictive likelihood evaluated at the realized value.
References
Gneiting, T. and A.E. Raftery (2007). “Strictly proper scoring rules, prediction, and estimation.” Journal of the American Statistical Association, 102(477), 359-378.
Hyndman, R.J. and A.B. Koehler (2006). “Another look at measures of forecast accuracy.” International Journal of Forecasting, 22(4), 679-688.
Library
timeseries
Source
scoring.src
See also
Functions dmTest(), pitTest(), fcMetrics()