hdCompute#
Purpose#
Compute historical decomposition of observed series into structural shock contributions.
Format#
- hd = hdCompute(result)
- hd = hdCompute(result, n_steps)
- Parameters:
result (struct) – An instance of a varResult or bvarResult structure.
n_steps (scalar) – Optional, number of MA steps for the decomposition. Default = T-p (full sample).
var_names (Mx1 string array) – Optional keyword, override variable names.
quiet (scalar) – Optional keyword, set to 1 to suppress printed output. Default = 0.
- Returns:
hd (struct) – An instance of an hdResult structure containing:
hd.hd – Array of m (T-p)xm matrices. hd.hd[j] is the contribution of shock j to all variables over time. Column i of hd.hd[j] is the time series of shock j’s contribution to variable i.
hd.shocks – (T-p)xm matrix, structural (Cholesky-rotated) shocks.
hd.initial – (T-p)xm matrix, contribution of initial conditions to each variable.
hd.t_eff – Scalar, effective number of observations (T-p).
hd.m – Scalar, number of variables.
hd.var_names – Mx1 string array, variable names.
Examples#
Full Historical Decomposition#
new;
library timeseries;
data = loadd(getGAUSSHome("pkgs/timeseries/examples/macro.dat"));
struct varResult result;
result = varFit(data, 4);
struct hdResult hd;
hd = hdCompute(result);
Shock Contributions to a Variable#
new;
library timeseries;
data = loadd(getGAUSSHome("pkgs/timeseries/examples/macro.dat"));
struct varResult result;
result = varFit(data, 4);
struct hdResult hd;
hd = hdCompute(result, quiet=1);
// FFR shock (shock 3) contribution to GDP (variable 1) over time
ffr_to_gdp = hd.hd[3, ., 1];
print "FFR shock contribution to GDP:";
print ffr_to_gdp;
// CPI shock (shock 2) contribution to GDP
cpi_to_gdp = hd.hd[2, ., 1];
Verify Decomposition Sums to Observed#
new;
library timeseries;
data = loadd(getGAUSSHome("pkgs/timeseries/examples/macro.dat"));
struct varResult result;
result = varFit(data, 4);
struct hdResult hd;
hd = hdCompute(result, quiet=1);
// Reconstruct GDP from shock contributions + initial conditions
gdp_reconstructed = hd.initial[., 1];
for j (1, hd.m, 1);
gdp_reconstructed = gdp_reconstructed + hd.hd[j, ., 1];
endfor;
// Compare with observed GDP (should match within numerical precision)
gdp_observed = result.y[result.p+1:rows(result.y), 1];
print "Max reconstruction error:" maxc(abs(gdp_reconstructed - gdp_observed));
Extract Structural Shocks#
new;
library timeseries;
data = loadd(getGAUSSHome("pkgs/timeseries/examples/macro.dat"));
struct varResult result;
result = varFit(data, 4, quiet=1);
struct hdResult hd;
hd = hdCompute(result, quiet=1);
// Structural (orthogonalized) shocks
print "Structural shocks:";
print hd.shocks[1:5, .];
Remarks#
Historical decomposition expresses each observed variable as the sum of contributions from each structural shock plus initial conditions:
\[ y_t = y_t^{\text{init}} + \sum_{j=1}^{m} \text{hd}_{j,t} \]
The contributions are computed by filtering the structural shocks through the MA(\(\infty\)) representation (truncated at n_steps).
Structural shocks are obtained by applying the Cholesky decomposition of \(\Sigma\) to the reduced-form residuals: \(\varepsilon_t = P^{-1} u_t\) where \(\Sigma = PP'\).
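Since \(\Sigma = PP'\) with \(P\) lower triangular, applying \(P^{-1}\) to the reduced-form residuals yields orthogonal, unit-variance shocks. A minimal NumPy sketch of this rotation (the matrices and sample size are illustrative, not part of the hdCompute API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Reduced-form residuals u_t with a known covariance Sigma (illustrative numbers)
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
P = np.linalg.cholesky(Sigma)            # lower triangular, Sigma = P P'
u = rng.multivariate_normal(np.zeros(2), Sigma, size=100000)

# Structural shocks: eps_t = P^{-1} u_t (solve a triangular system
# rather than forming the inverse explicitly)
eps = np.linalg.solve(P, u.T).T

# The rotated shocks are approximately orthogonal with unit variance
print(np.round(np.cov(eps.T), 2))
```

Because the Cholesky factor is triangular, this rotation makes the first shock affect all variables on impact while later shocks are restricted by the ordering, which is what makes the decomposition ordering-dependent.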
Interpretation: hd.hd[j, t, i] answers the question: “How much of
variable i’s value at time t is attributable to the cumulative effect of
shock j up to time t?”
For BVAR, the decomposition is computed at the posterior mean of B and \(\Sigma\).
Model#
The observed series is decomposed as:
\[ y_t = y_t^{\text{init}} + \sum_{h=0}^{t-p-1} \Theta_h \hat\varepsilon_{t-h} \]
where \(\Theta_h\) is the structural IRF at horizon h, \(P = \text{chol}(\Sigma)'\), and \(\hat{u}_t\) are the reduced-form residuals. The structural shocks are \(\hat\varepsilon_t = P^{-1} \hat{u}_t\).
The contribution of shock \(j\) to variable \(i\) at time \(t\) is:
\[ \text{hd}_{j,t,i} = \sum_{h=0}^{t-p-1} [\Theta_h]_{ij}\, \hat\varepsilon_{j,t-h} \]
Algorithm#
1. Compute structural shocks \(\hat\varepsilon_t = P^{-1} \hat{u}_t\) for all \(t\).
2. Compute IRF matrices \(\Theta_0, \ldots, \Theta_{T-p-1}\).
3. For each time \(t\), accumulate shock contributions via MA convolution.
4. Compute initial conditions as the residual: \(y_t^{\text{init}} = y_t - \sum_j \text{hd}_{j,t}\).
Complexity: \(O(T^2 m^2)\) — quadratic in sample size due to the convolution.
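The four steps above can be sketched in NumPy for a bivariate VAR(1) with known coefficients. Everything here (the coefficient matrix `A`, covariance `Sigma`, starting values `y0`) is an illustrative assumption, not the hdCompute implementation; for a VAR(1), \(\Theta_h = A^h P\) and the effective sample is the full simulated path:

```python
import numpy as np

rng = np.random.default_rng(1)
m, T = 2, 50
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])                    # VAR(1) coefficients (illustrative)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
P = np.linalg.cholesky(Sigma)

# Simulate y_t = A y_{t-1} + u_t from a nonzero initial condition y0
y0 = np.array([2.0, -1.0])
u = rng.multivariate_normal(np.zeros(m), Sigma, size=T)
y = np.zeros((T, m))
prev = y0
for t in range(T):
    y[t] = A @ prev + u[t]
    prev = y[t]

# Step 1: structural shocks eps_t = P^{-1} u_t
eps = np.linalg.solve(P, u.T).T

# Step 2: structural IRFs Theta_h = A^h P, h = 0..T-1
Theta = np.empty((T, m, m))
Theta[0] = P
for h in range(1, T):
    Theta[h] = A @ Theta[h - 1]

# Step 3: accumulate contributions via MA convolution:
# hd[j, t, i] = sum_{h=0}^{t} Theta[h][i, j] * eps[t - h, j]
hd = np.zeros((m, T, m))
for j in range(m):
    for t in range(T):
        for h in range(t + 1):
            hd[j, t] += Theta[h][:, j] * eps[t - h, j]

# Step 4: initial conditions as the residual y_t - sum_j hd[j, t]
initial = y - hd.sum(axis=0)

# Sanity check: with all shock contributions accounted for, `initial`
# is pure propagation of the start values, A^(t+1) y0
expected_init = np.array([np.linalg.matrix_power(A, t + 1) @ y0
                          for t in range(T)])
print(np.allclose(initial, expected_init))
```

The triple loop in step 3 is the source of the \(O(T^2 m^2)\) cost: each time point convolves all earlier shocks with the IRF sequence.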
Troubleshooting#
Reconstruction error is not zero: The decomposition should reconstruct the observed series exactly (to machine precision). If the error exceeds \(10^{-10}\), there may be a mismatch between the estimation result and the data. Re-estimate the model.
One shock dominates the decomposition: Same as FEVD — check the variable ordering and consider alternative identification.
References#
Lütkepohl, H. (2005). New Introduction to Multiple Time Series Analysis. Springer. Section 2.3.4.
Kilian, L. and H. Lütkepohl (2017). Structural Vector Autoregressive Analysis. Cambridge University Press.
Library#
timeseries
Source#
hd.src
See also
Functions irfCompute(), fevdCompute(), varFit(), bvarFit()