One of the attractive qualities of Value-at-Risk (VaR) is that it’s easy to understand. When we compute a 10-day 99% VaR for a position (or portfolio of assets), we are basically saying: “I’m 99% certain that over a ten-day period I won’t lose more than x% on this position (or portfolio).”
Unfortunately, the financial crisis exposed two problems with that approach: VaR says nothing about how severe losses can be on the occasions it is breached, and it assumes every position can be exited or hedged within the same ten-day horizon, regardless of how liquid it actually is.
Cue BCBS FRTB, aka the Fundamental Review of the Trading Book (BCBS 219) and now known as the Revised Framework for Market Risk Capital Requirements. Two ‘fundamental’ aspects of BCBS’s review involve i) shifting the industry’s long-standing reliance on VaR towards Expected Shortfall (ES), and ii) attempting to incorporate liquidity risk into the calculation of capital buffers.
Obviously, the Revised Framework for Market Risk Capital Requirements will have other profound implications when it is implemented (due in 2019). Banks will face new controls on moving assets between banking and trading books (to restrict capital arbitrage) and will generally be hit by much higher capital charges, making trading and market making much more expensive. For the purposes of this blog, however, we have focused solely on the data management implications of moving from VaR to a liquidity-adjusted measure of ES.
Whereas VaR can tell you ‘I am 99% certain that my loss over a ten-day period won’t be more than x%’, ES looks to quantify tail risk by estimating how much you would expect to lose when things go badly wrong: your average loss in the worst 2.5% of outcomes.
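To make the difference concrete, here is a minimal sketch of the two measures computed by plain historical simulation. The 250 simulated daily P&L figures and the normal distribution used to generate them are illustrative assumptions, not FRTB-prescribed inputs.

```python
# Hypothetical illustration: 99% VaR vs 97.5% ES on simulated daily P&L.
import numpy as np

rng = np.random.default_rng(seed=42)
pnl = rng.normal(loc=0.0, scale=1.0, size=250)  # hypothetical daily P&L (% of position)

confidence = 0.99       # VaR confidence level under the current framework
es_confidence = 0.975   # ES confidence level proposed under FRTB

# VaR: the loss threshold exceeded in only (1 - confidence) of cases.
var_99 = -np.percentile(pnl, (1 - confidence) * 100)

# ES: the average loss in the worst (1 - es_confidence) of cases,
# i.e. the mean of everything beyond the 97.5% cut-off.
cutoff = np.percentile(pnl, (1 - es_confidence) * 100)
es_975 = -pnl[pnl <= cutoff].mean()

print(f"99% VaR:  {var_99:.2f}%  (loss not exceeded in 99% of cases)")
print(f"97.5% ES: {es_975:.2f}%  (average loss in the worst 2.5% of cases)")
```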
By looking to understand the anatomy of a tail, you inherently place more emphasis on the quality of historical data as an input into risk models. That emphasis is further magnified by the criteria set out by the BCBS to determine whether a risk factor is modellable: only 24 ‘real’ observable prices are required in a year (an average of just two per month). You don’t have to be a mathematician to appreciate that even a single erroneous spike can seriously skew your results if you are modelling tail risk using just two price points per month.
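The sketch below illustrates that sensitivity under some loose assumptions: 24 observations (two ‘real’ prices per month), a single fat-fingered or stale price injected into the series, and the same historical-simulation ES as above. The spike size and the data are made up purely to show the effect.

```python
# Hypothetical illustration: one bad data point out of 24 distorts ES badly.
import numpy as np

rng = np.random.default_rng(seed=7)
clean_returns = rng.normal(loc=0.0, scale=1.0, size=24)  # two 'real' prices per month

def expected_shortfall(returns, confidence=0.975):
    """Average loss beyond the given confidence level (historical simulation)."""
    cutoff = np.percentile(returns, (1 - confidence) * 100)
    return -returns[returns <= cutoff].mean()

# Introduce a single erroneous spike, e.g. a mis-keyed or stale price.
dirty_returns = clean_returns.copy()
dirty_returns[0] = -15.0  # one bad observation out of 24

print(f"ES on clean data:  {expected_shortfall(clean_returns):.2f}")
print(f"ES with one spike: {expected_shortfall(dirty_returns):.2f}")
```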
Failing to properly validate outliers before you start modelling risk factors and calculating ES could therefore have serious financial repercussions (more so than with a VaR approach). The same goes for quantifying liquidity risk. Good, clean data management will be key to accurately categorising positions in terms of their liquidity horizons.
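As a rough sketch of why those categorisations matter, the snippet below broadly follows the liquidity-horizon scaling approach in the revised framework: ES is computed at a 10-day base horizon and scaled up by sqrt((LH_j − LH_{j−1}) / T) for buckets of progressively less liquid risk factors. The per-bucket ES figures used here are entirely hypothetical; only the horizon buckets and the scaling logic reflect the framework.

```python
# Hedged sketch of a liquidity-adjusted ES aggregation across liquidity horizons.
import math

T = 10                            # base liquidity horizon in days
horizons = [10, 20, 40, 60, 120]  # FRTB liquidity-horizon buckets (days)

# es_by_bucket[j] is assumed to be the 10-day ES when only risk factors with a
# liquidity horizon >= horizons[j] are shocked (others held constant).
# The figures below are made up for illustration.
es_by_bucket = [5.0, 3.2, 2.1, 1.4, 0.6]

def liquidity_adjusted_es(es_by_bucket, horizons, base=T):
    total = es_by_bucket[0] ** 2  # first bucket is already at the base horizon
    for j in range(1, len(horizons)):
        scale = math.sqrt((horizons[j] - horizons[j - 1]) / base)
        total += (es_by_bucket[j] * scale) ** 2
    return math.sqrt(total)

print(f"Liquidity-adjusted ES: {liquidity_adjusted_es(es_by_bucket, horizons):.2f}")
```

Misclassifying a position into the wrong bucket changes the scaling factor applied to it, which is why clean reference data on instrument liquidity feeds directly into the size of the capital number.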
Ultimately, as the industry moves its primary measure of market risk from VaR to a liquidity-adjusted measure of Expected Shortfall, risk models will be even more sensitive to outliers, and any incorrect data that creeps into modelling processes could end up being punitive.
Those that have not already done so will need to incorporate enterprise data management (EDM) tools and processes into their risk management operations, or adapt their centralised EDM functions to better service the requirements of risk managers. With capital charges set to increase across the board, the last thing firms need is to add to their burden through poor data management practices.