How Level II tests trend choice, autoregression, stationarity, seasonality, ARCH, and forecast evaluation in time-series problems.
Time-series questions at Level II are about choosing a usable forecast structure, not about admiring a historical chart. The exam wants to know whether the series is stable enough to model, whether mean reversion is plausible, whether seasonality matters, and whether the chosen forecast actually outperforms alternatives.
Candidates often underestimate how much is being asked here and treat the topic as vocabulary recall. Level II is much more demanding: it expects model choice to follow the behavior of the series itself.
```mermaid
flowchart TD
A["What kind of time-series problem is this?"] --> B["Deterministic trend dominates"]
A --> C["Series may follow an autoregressive process"]
A --> D["Volatility dynamics matter"]
A --> E["Several time-series variables may be linked"]
B --> F["Choose linear or log-linear trend"]
C --> G["Check stationarity, unit roots, and seasonality"]
D --> H["Consider ARCH-type volatility modeling"]
E --> I["Check nonstationarity and cointegration before regression"]
```
That sequence is more important than memorizing isolated vocabulary.
A deterministic-trend model may be written as:
$$ \hat{Y}_t = b_0 + b_1 t $$
or, in log-linear form,
$$ \ln(\hat{Y}_t) = b_0 + b_1 t $$
| Trend choice | When it tends to fit better |
|---|---|
| Linear trend | Absolute changes are more stable |
| Log-linear trend | Percentage growth is more stable |
The exam often tests whether the candidate knows when a log-linear model is better because growth compounds rather than rises in equal absolute steps.
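The contrast can be made concrete with a short sketch. The series below is hypothetical, constructed to grow at a steady 5% per period, so the log-linear trend fits it almost exactly while the linear trend leaves systematic errors:

```python
import math

# Hypothetical series compounding at ~5% per period, so percentage
# growth is stable and a log-linear trend should fit better.
t = list(range(20))
y = [100 * 1.05 ** ti for ti in t]

def ols_trend(ts, ys):
    """Closed-form OLS intercept and slope for y = b0 + b1 * t."""
    n = len(ts)
    t_bar, y_bar = sum(ts) / n, sum(ys) / n
    b1 = sum((ti - t_bar) * (yi - y_bar) for ti, yi in zip(ts, ys)) \
         / sum((ti - t_bar) ** 2 for ti in ts)
    return y_bar - b1 * t_bar, b1

def rmse(actual, fitted):
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, fitted)) / len(actual))

# Linear trend fit in levels.
b0_lin, b1_lin = ols_trend(t, y)
fit_lin = [b0_lin + b1_lin * ti for ti in t]

# Log-linear trend: fit ln(y) on t, then exponentiate back to levels.
b0_log, b1_log = ols_trend(t, [math.log(yi) for yi in y])
fit_log = [math.exp(b0_log + b1_log * ti) for ti in t]

print(rmse(y, fit_lin), rmse(y, fit_log))
```

Because the series compounds, the fitted log slope recovers ln(1.05) and the log-linear fit is essentially perfect, while the linear fit is not.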
An AR(1) process can be written as:
$$ Y_t = b_0 + b_1 Y_{t-1} + \varepsilon_t $$
If $|b_1| < 1$, the process is covariance stationary and the mean-reverting level is:
$$ \frac{b_0}{1-b_1} $$
| Concept | Why it matters |
|---|---|
| Covariance stationarity | Supports stable mean, variance, and autocovariance structure |
| Unit root | Signals nonstationarity and a weak basis for standard AR interpretation |
| Mean reversion | Makes long-run level interpretation possible |
A nonstationary series can make a respectable-looking regression or forecast deeply misleading.
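Mean reversion in an AR(1) can be seen directly in simulation. The coefficients below are hypothetical, chosen so the long-run level $b_0/(1-b_1) = 2.0/(1-0.6) = 5$:

```python
import random

random.seed(0)

# Hypothetical AR(1): Y_t = b0 + b1 * Y_{t-1} + eps, with |b1| < 1.
b0, b1 = 2.0, 0.6
mean_reverting_level = b0 / (1 - b1)   # = 5.0

# Start far above the long-run level and simulate forward.
y = 20.0
path = [y]
for _ in range(500):
    y = b0 + b1 * y + random.gauss(0, 0.5)
    path.append(y)

# After a burn-in, the sample average should sit near b0 / (1 - b1).
long_run_avg = sum(path[100:]) / len(path[100:])
print(mean_reverting_level, round(long_run_avg, 2))
```

Starting at 20, the path is pulled back toward 5 and then oscillates around it; with $b_1 = 1$ (a unit root) no such pull exists and the level formula is undefined.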
The exam often contrasts models that fit history well with models that forecast better. A common accuracy measure is root mean squared error:
$$ \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}(Y_t-\hat{Y}_t)^2} $$
| Comparison | Why Level II cares |
|---|---|
| In-sample versus out-of-sample | Guards against overfitting historical noise |
| Lower RMSE versus prettier theory | Forecast usefulness matters more than elegance |
| Stable coefficients versus drifting coefficients | Time-series relations can break across regimes |
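The RMSE formula is straightforward to apply on a holdout window. The realized values and forecasts below are hypothetical, set up so one model makes small steady errors while the other occasionally misses badly:

```python
import math

def rmse(actual, forecast):
    """Root mean squared error: sqrt of the mean squared forecast error."""
    n = len(actual)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)

# Hypothetical out-of-sample window: realized values and two competing forecasts.
actual     = [2.0, 2.4, 1.9, 2.6, 2.2]
forecast_a = [2.1, 2.3, 2.0, 2.5, 2.3]   # small, stable errors
forecast_b = [2.0, 2.9, 1.5, 3.0, 1.8]   # occasionally far off

# Squaring punishes large misses, so B's big errors dominate its RMSE.
print(rmse(actual, forecast_a), rmse(actual, forecast_b))
```

Model A's errors are 0.1 every period, giving an RMSE of exactly 0.1; model B's occasional large misses push its RMSE well above that even though it is sometimes spot-on.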
| Issue | What it changes |
|---|---|
| Seasonality | Forecasts may need seasonal lags or seasonal adjustments |
| ARCH behavior | The variance process itself becomes forecastable and relevant |
| Cointegration | Multiple nonstationary variables may still have a stable long-run relation |
This is where Level II becomes practical. The right answer is usually the model that respects the data-generating behavior, not the model that looks easiest to compute.
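The core ARCH diagnostic can be sketched in a few lines: regress squared residuals on their own lag, and a meaningful positive slope says the variance process is forecastable. The simulation below uses hypothetical ARCH(1) parameters purely to illustrate the test logic:

```python
import math
import random

random.seed(1)

# Simulate hypothetical ARCH(1) residuals: var_t = a0 + a1 * e_{t-1}^2.
a0, a1 = 0.2, 0.5
e_prev = 0.0
resid = []
for _ in range(5000):
    sigma = math.sqrt(a0 + a1 * e_prev ** 2)
    e = sigma * random.gauss(0, 1)
    resid.append(e)
    e_prev = e

# ARCH test idea: regress e_t^2 on e_{t-1}^2 and look at the slope.
x = [e ** 2 for e in resid[:-1]]
y = [e ** 2 for e in resid[1:]]
x_bar, y_bar = sum(x) / len(x), sum(y) / len(y)
slope = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
        / sum((xi - x_bar) ** 2 for xi in x)
print(round(slope, 2))   # a clearly positive slope signals ARCH behavior
```

On a homoskedastic series the same regression would produce a slope near zero, which is exactly why the diagnostic discriminates between the two cases.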
Two models forecast monthly volatility. One fits the historical data more tightly, but the residual pattern shows volatility clustering and the out-of-sample error is worse.
A weak answer still prefers the tighter historical fit.
A stronger answer recognizes that the forecast problem is about usable forward variance behavior, which makes the volatility structure and out-of-sample evidence more important.
What is the strongest reason to test for a unit root before using a time series in an autoregressive model?
Best answer: A unit root indicates nonstationarity, which can invalidate the usual interpretation and forecasting framework of the AR model.
Why: Level II often tests model choice and diagnostic logic before computation.