The power of value-at-risk lies in its generality. Unlike market risk metrics such as the Greeks, duration, convexity or beta, which are applicable to only certain asset categories or certain sources of market risk, value-at-risk is general. It is based on the probability distribution for a portfolio’s market value. All liquid assets have uncertain market values, which can be characterized with probability distributions. All sources of market risk contribute to those probability distributions. Being applicable to all liquid assets and encompassing, at least in theory, all sources of market risk, value-at-risk is a broad metric of market risk.
The generality of value-at-risk poses a computational challenge. In order to measure market risk in a portfolio using value-at-risk, some means must be found for determining the probability distribution of that portfolio’s market value. Obviously, the more complex a portfolio is—the more asset categories and sources of market risk it is exposed to—the more challenging that task becomes.
It is worth distinguishing two concepts:
- A value-at-risk measure is an algorithm with which we calculate a portfolio’s value-at-risk.
- A value-at-risk metric is our interpretation of the output of the value-at-risk measure.
A value-at-risk metric, such as one-day 90% USD VaR, is specified with three items:
- a time horizon;
- a probability;
- a currency.
A value-at-risk measure calculates an amount of money, measured in that currency, such that there is that probability of the portfolio not losing more than that amount of money over that horizon. In the terminology of mathematics, this is called a quantile, so one-day 90% USD VaR is just the .90-quantile of a portfolio’s one-day loss.
This is worth emphasizing: value-at-risk is a quantile of loss. The task of a value-at-risk measure is to calculate such a quantile.
For a given value-at-risk metric, measure time in units of the value-at-risk horizon. Let time 0 be now, so time 1 represents the end of the horizon. We know a portfolio’s current market value 0p. Its market value 1P at the end of the horizon is unknown. Define portfolio loss 1L as
1L = 0p – 1P
If 0p exceeds 1P, the loss will be positive. If 0p is less than 1P, the loss will be negative, which is another way of saying the portfolio makes a profit.
Because we don’t know the portfolio’s future value 1P, we don’t know its loss 1L. Both are random variables, and we can assign them probability distributions. That is exactly what a value-at-risk measure does—it assigns a distribution to 1P and/or 1L, so it can calculate the desired quantile of 1L. Most typically, value-at-risk measures work directly with the distribution of 1P and use that to infer the quantile of 1L. This is illustrated in Exhibit 1 for a 90% VaR metric.
Exhibit 1 shows how the .90-quantile of 1L (the portfolio’s value-at-risk) can be obtained as the portfolio’s current value 0p minus the .10-quantile of 1P. Other value-at-risk metrics can be calculated similarly. So if we know the distribution for 1P, calculating value-at-risk is easy. The challenge for any value-at-risk measure is constructing that distribution of 1P. Value-at-risk measures do so in various ways, but all practical value-at-risk measures share certain features described below.
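The calculation Exhibit 1 illustrates can be sketched in a few lines of Python. The simulated end-of-horizon values below are hypothetical placeholders, not output of any real transformation procedure:

```python
import statistics

# Hypothetical simulated end-of-horizon portfolio values 1P (in USD).
# In practice these would come from a value-at-risk measure; here they
# are illustrative numbers only.
current_value = 100.0                      # 0p, the known current value
simulated_values = [92.0, 97.0, 99.0, 101.0, 103.0,
                    104.0, 106.0, 108.0, 110.0, 113.0]

# One-day 90% VaR is the .90-quantile of loss 1L = 0p - 1P.
losses = sorted(current_value - v for v in simulated_values)
var_90 = statistics.quantiles(losses, n=10)[-1]   # .90-quantile of loss
```

Equivalently, one could take 0p minus the .10-quantile of the simulated 1P values, exactly as Exhibit 1 depicts.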
Because value-at-risk measures are probabilistic, they deal with various random financial variables. Three types are particularly significant and are given standard notation:
- a portfolio value 1P;
- asset values 1Si; and
- key factors 1Ri.
We have already discussed portfolio value 1P, which is the portfolio’s market value at time 1—the end of the value-at-risk horizon. It has current value 0p. Mathematically, a portfolio is defined as an ordered pair (0p,1P).
Asset values 1Si represent the accumulated value at time 1 of individual assets held by the portfolio. Individual assets might be stocks, bonds, futures, options, physical commodities, etc. Current asset values are denoted 0si. Mathematically, we define an asset as an ordered pair (0si,1Si). The m asset values 1Si comprise an ordered set (or “vector”) called the asset vector, which we denote 1S. Its current value 0s is the ordered set of asset current values 0si.
Key factors 1Ri represent values at time 1 of financial variables that can be used to value the assets. Depending on the composition of the portfolio, key factors might represent exchange rates, interest rates, commodity prices, spreads, implied volatilities, etc. The n key factors 1Ri comprise an ordered set called the key vector, which we denote 1R. Value-at-risk measures utilize not only the current value 0r for the key vector but also other historical values –1r, –2r, –3r, … , –αr.
Where are we going with this? The quantities 1P, 1Si and 1Ri are all random. But the portfolio’s value 1P is a function of the values 1Si of the assets it holds. Those in turn are a function of the key factors 1Ri. For example, a bond portfolio’s value 1P is a function of the values 1Si of the individual bonds it holds. Their values are in turn functions of applicable interest rates 1Ri. Because a function of a function is a function, 1P is a function θ of 1R:
1P = θ(1R)
Value-at-risk measures apply time series analysis to historical data 0r, –1r, –2r, … , –αr to construct a joint probability distribution for 1R. They then exploit the functional relationship θ between 1P and 1R to convert that joint distribution into a distribution for 1P. From that distribution for 1P, value-at-risk is calculated, as illustrated in Exhibit 1 above.
Let’s formalize this. Exhibit 2 summarizes the components common to all practical value-at-risk measures:
A value-at-risk measure accepts two inputs:
- historical data 0r, –1r, –2r, … , –αr for 1R, and
- the portfolio’s holdings ω.
The portfolio holdings comprise a row vector ω whose components indicate the number of units held of each asset. For example, if a portfolio holds 1000 shares of IBM stock, 5000 shares of Google stock and a short position of 3000 shares of Microsoft stock, its holdings are
ω = (1000 5000 –3000)
The two inputs—historical data and portfolio holdings—are processed separately by two procedures within the value-at-risk measure:
- An inference procedure applies methods of time series analysis to the historical data 0r, –1r, –2r, … , –αr to construct a joint distribution for 1R.
- A mapping procedure uses the portfolio’s holdings ω to construct a function θ such that 1P = θ(1R).
The mapping procedure uses a set of pricing functions φi that value each asset 1Si in terms of 1R:
1Si = φi (1R)
For example, if asset 1S1 is a bond, pricing formula φ1 will be a bond pricing formula. If asset 1S2 is an equity option, pricing formula φ2 will be an equity option pricing formula. A functional relationship 1P = θ(1R) is then defined as a weighted sum of the pricing formulas φi, with the weights being the holdings ωi:
1P = ω11S1 + ω21S2 + … + ωm1Sm
= ω1φ1(1R) + ω2φ2(1R) + … + ωmφm(1R)
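A primary mapping can be sketched as follows. The pricing functions and all numbers here are hypothetical stand-ins, not real pricing formulas:

```python
# Hypothetical pricing functions phi_i expressing each asset value 1S_i
# in terms of a key vector 1R = (R1, R2).
def phi_1(r):
    # Stand-in for a bond pricing formula driven by rate R1.
    return 100.0 / (1.0 + r[0])

def phi_2(r):
    # Stand-in for an asset quoted directly as factor R2.
    return r[1]

holdings = (1000, -3000)   # omega: units held of each asset

def theta(r):
    """Primary mapping: 1P = omega_1*phi_1(1R) + omega_2*phi_2(1R)."""
    return holdings[0] * phi_1(r) + holdings[1] * phi_2(r)

# Valuing the portfolio at one hypothetical realization of 1R.
portfolio_value = theta((0.05, 20.0))
```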
This is called a primary mapping. If a portfolio is large or holds complex instruments, such as derivatives or mortgage-backed securities, a primary mapping may be computationally expensive to value. Many mapping procedures replace a primary mapping θ with a simpler approximation. Such approximations are called remappings. They can take many forms. Two common examples are remappings that are constructed, using the method of least squares, as either a linear polynomial or quadratic polynomial approximation of θ. Such remappings are called, respectively, linear remappings and quadratic remappings.
Most of the literature on value-at-risk is either elementary or theoretical, so remappings receive little mention. This is unfortunate. As a practical tool for making production value-at-risk measures tractable, remappings can be indispensable.
Returning to Exhibit 2, we have discussed the two inputs to a value-at-risk measure as well as the inference procedure and mapping procedure that process them. If you think about it, the two outputs of those procedures correspond to the two components of risk. As explained by Holton (2004), every risk has two components: exposure and uncertainty.
In the context of market risk, we are uncertain if we don’t know what will happen in the markets. We are exposed if we have holdings in instruments traded in those markets. A value-at-risk measure characterizes uncertainty with the joint distribution for 1R constructed by its inference procedure. It characterizes exposure with the portfolio mapping θ constructed by its mapping procedure. A value-at-risk measure must combine those two components to measure a portfolio’s market risk, and it does so with a transformation procedure.
A transformation procedure accepts as inputs
- a joint distribution for 1R, and
- a portfolio mapping θ, which can be either a primary mapping or a remapping.
It uses these to construct a distribution for 1P from which it calculates the portfolio’s value-at-risk.
Transformation procedures take various forms, but there are essentially three types:
- Linear transformation procedures apply if the portfolio mapping θ is a linear polynomial. They employ a standard formula from probability theory for calculating the variance of a linear polynomial of a random vector. For certain asset categories, such as equities or futures, primary mappings can be linear polynomials. Alternatively, θ may be a linear remapping.
- Quadratic transformation procedures apply if the portfolio mapping θ is a quadratic polynomial and the joint distribution of 1R is joint-normal. Primary mappings are almost never quadratic polynomials, so quadratic transformations assume use of a quadratic remapping.
- Monte Carlo transformation procedures employ the Monte Carlo method and are applicable to all portfolio mappings. This advantage comes with potentially significant computational expense, as Monte Carlo transformation procedures entail revaluing the portfolio under numerous scenarios. A subcategory of Monte Carlo transformation procedures does not randomly generate scenarios but instead constructs them directly from historical data for 1R. These are called historical transformation procedures.
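The first of these, a linear transformation procedure, can be sketched as follows. Assuming a hypothetical linear mapping and joint-normal 1R (all numbers illustrative), the variance of 1P follows from the standard quadratic-form formula, and the 90% VaR, measured relative to the portfolio's expected value, is the corresponding quantile of a normal loss:

```python
from statistics import NormalDist

# Hypothetical linear portfolio mapping 1P = a + b1*R1 + b2*R2, with
# 1R assumed joint-normal; all numbers are illustrative.
b = (1000.0, -3000.0)          # sensitivities to the two key factors
cov = [[4.0, 1.0],             # covariance matrix of 1R
       [1.0, 9.0]]

# Standard formula: Var(1P) = b' Cov b for a linear polynomial of 1R.
var_p = sum(b[i] * cov[i][j] * b[j] for i in range(2) for j in range(2))
std_p = var_p ** 0.5

# For normal 1P, the .90-quantile of loss relative to expected value is
# z_{.90} standard deviations.
z = NormalDist().inv_cdf(0.90)     # roughly 1.2816
var_90 = z * std_p
```

No revaluation of the portfolio under scenarios is needed, which is why linear transformation procedures are so much cheaper than Monte Carlo ones.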
Elementary treatments of value-at-risk often mention “methods” for calculating value-at-risk. Mostly, these reference the transformation procedures used. For example, the terms “parametric method” or “variance-covariance method” refer to value-at-risk measures that employ a linear transformation procedure. The “delta-gamma method” refers to those that use a quadratic transformation procedure. The “Monte Carlo method” and “historical method” refer, of course, to value-at-risk measures that use Monte Carlo or historical transformation procedures.
This article provides a broad introduction to value-at-risk measures. If you want to delve more deeply into the details, I have written an entire book on the subject, which I distribute for free on the internet. You can start reading now.