The volatility of the price of a security is a statistical measure of the risk of holding it.
Volatility is the standard deviation of the return on a security, and it therefore changes with the period of time over which it is measured.
Beta is a slightly more sophisticated measure of risk, based on both relative volatility and the correlation between movements in the price of a security and the market as a whole. Although beta is useful for valuation, volatility is a better measure of the risk an investor takes, and it is the basis of measures such as value at risk.
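As a rough illustration of that relationship (with made-up return series), beta can be computed as the covariance of a security's returns with the market's returns divided by the variance of the market's returns, which works out to the same thing as the correlation multiplied by the relative volatility:

```python
# Illustrative only: beta from two aligned series of periodic returns (hypothetical numbers).
from statistics import mean, pstdev

security_returns = [0.012, -0.004, 0.007, -0.011, 0.009, 0.003]
market_returns   = [0.010, -0.002, 0.005, -0.008, 0.007, 0.002]

def covariance(xs, ys):
    # Population covariance of two equal-length series.
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# Beta: covariance with the market divided by the variance of the market.
beta = covariance(security_returns, market_returns) / covariance(market_returns, market_returns)

# The same number, expressed as correlation multiplied by relative volatility.
correlation = covariance(security_returns, market_returns) / (pstdev(security_returns) * pstdev(market_returns))
relative_volatility = pstdev(security_returns) / pstdev(market_returns)

print(f"beta = {beta:.3f}")
print(f"correlation x relative volatility = {correlation * relative_volatility:.3f}")
```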
The problem with volatility is that it can only be measured with certainty for the past (realised volatility), but what matters to investors is the volatility now — over time periods starting from the current time and ending at some future time. There are two simple approaches:
- Assume that the volatility is unchanged in the long run and therefore use the average volatility over a past period (see the sketch below).
- Use the implied volatility.
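As a minimal sketch of the first of these approaches (hypothetical prices, and the usual convention of roughly 252 trading days a year), realised volatility can be measured as the standard deviation of daily returns over a past window and then scaled up to an annual figure:

```python
# Minimal sketch: realised (historical) volatility from daily closing prices.
import math
from statistics import stdev

closes = [100.0, 101.2, 100.6, 102.3, 101.9, 103.5, 102.8, 104.1]   # hypothetical prices

# Daily log returns over the past window.
returns = [math.log(later / earlier) for earlier, later in zip(closes, closes[1:])]

daily_volatility = stdev(returns)                      # standard deviation of daily returns
annual_volatility = daily_volatility * math.sqrt(252)  # scaled up on the usual assumption of
                                                       # ~252 independent trading days a year

print(f"daily volatility:      {daily_volatility:.4f}")
print(f"annualised volatility: {annual_volatility:.4f}")
```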
More complex models can be used. Approaches such as using a weighted average of past volatilities (with more recent data getting a higher weighting) have proved successful.
Methods for estimating volatilities usually have the advantage of being well suited to back-testing.
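One common instance of such a weighting scheme is an exponentially weighted moving average (EWMA) of squared returns, sketched below with made-up returns; the decay factor of 0.94 is the value popularised by RiskMetrics, chosen here purely for illustration.

```python
# Minimal sketch: an exponentially weighted (EWMA) volatility estimate.
import math

returns = [0.004, -0.012, 0.007, 0.015, -0.006, 0.010, -0.020, 0.003]  # hypothetical daily returns
decay = 0.94   # weight kept by the previous estimate; 0.94 is the classic RiskMetrics choice

variance = returns[0] ** 2   # crude seed for the recursion
for r in returns[1:]:
    # Each day mixes the old estimate with the newest squared return,
    # so recent observations carry geometrically more weight than older ones.
    variance = decay * variance + (1 - decay) * r ** 2

print(f"EWMA estimate of daily volatility: {math.sqrt(variance):.4f}")
```

Because an estimate like this is built from past data alone, it can be compared day by day with the volatility subsequently realised, which is what makes such methods straightforward to back-test.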
Criticisms of the use of volatility
There are many criticisms of the use of volatility as a risk measure. However, most of them either assume circumstances in which an investor has identified market inefficiencies, or fail to understand why volatility measures upside as well as downside probabilities.
The argument for using volatility as a risk measure rests on the case for the standard deviation of returns as a measure of risk: volatility is the historical standard deviation of returns and is therefore a good ex-post measure. So, is the standard deviation of expected returns the best measure of risk?
It is easiest to illustrate with an example. Suppose you have to choose between the following investments, all of which will cost £1,000:
- Will guarantee that you will receive £1,100 in a year
- Gives you a 50% chance of getting back £1,300, with a 50% chance of only getting back £900
- Gives you a 50% chance of getting back £1,500, with a 50% chance of getting back just £600
All three investments have much the same expected return. How would you order them by riskiness? It is clear that the first is the safest and the third the riskiest, and volatility ranks them in the same order, so it is a measure that makes sense.
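The ranking can be checked with a few lines of arithmetic; the sketch below simply restates the three payoffs above and computes the expected value and standard deviation of each.

```python
# Expected value and standard deviation of each investment's payoff.
import math

investments = {
    "guaranteed £1,100": [(1.0, 1100)],
    "£1,300 or £900":    [(0.5, 1300), (0.5, 900)],
    "£1,500 or £600":    [(0.5, 1500), (0.5, 600)],
}

for name, outcomes in investments.items():
    expected = sum(p * payoff for p, payoff in outcomes)
    variance = sum(p * (payoff - expected) ** 2 for p, payoff in outcomes)
    print(f"{name:18s} expected £{expected:7.2f}   standard deviation £{math.sqrt(variance):6.2f}")
```

The expected values are within £50 of each other, and the standard deviations (nil, £200 and £450) come out in the same order as the intuitive ranking of riskiness.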
Now consider another investment that has a 50% chance of getting back £600, a 40% chance of getting back £1,200, and a 10% chance of getting back £3,200. Once again, it has much the same expected return. Is this more or less risky than the third investment above?
It is clear that it is more risky, yet the only difference from the third investment lies in the distribution of the upside odds. A risk measure must take upside risk into account in order to capture the effect that different distributions of upside risk have on the expected value.
The adjustment is not necessary simply because of the higher upside risk; it is necessary because that higher upside risk is what increases the expected value.
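Running the same calculation for this last investment (again, just restating the payoffs given above) makes both points concrete:

```python
# The fourth investment: the same 50% downside as the third, but a more dispersed upside.
import math

outcomes = [(0.5, 600), (0.4, 1200), (0.1, 3200)]

expected = sum(p * payoff for p, payoff in outcomes)
variance = sum(p * (payoff - expected) ** 2 for p, payoff in outcomes)

print(f"expected value:     £{expected:.2f}")              # £1,100 against £1,050 for the third
print(f"standard deviation: £{math.sqrt(variance):.2f}")   # about £755 against £450 for the third
```

The extra expected value comes entirely from the 10% chance of the £3,200 payoff, and the standard deviation rises from £450 to roughly £755 to reflect how far the outcome can now stray from that expectation.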
The biggest weakness of the critics' case is their inability to find a better measure, and not measuring risk at all is often not an acceptable option.