Stress-testing banks: are econometric models growing young again?
Speech by Ignazio Angeloni, Member of the Supervisory Board of the ECB,
at the Inaugural Conference for the Program on Financial Stability,
School of Management, Yale University,
1 August 2014
It is a pleasure to be here today at this inaugural conference of the Yale Program on Financial Stability. I am grateful to the organisers, and to Andrew Metrick in particular, for inviting me. I think that establishing a financial stability programme at Yale is a very good idea, in terms of both substance and timing. I will give some reasons in a moment.
This is my first time at Yale, but in many ways, I feel at home. Economists of my generation have been intellectually shaped by James Tobin’s articles on liquidity preference, portfolio theory, and banking and financial sector modelling (the latter mainly together with Bill Brainard). For me, the sense of familiarity is even stronger. As a PhD student at Penn in the late 1970s and early 1980s, I remember well the references made by the late Lawrence Klein, in his advanced econometrics classes, to Ray Fair’s monumental work on model building and testing. Among all living economists, Fair is probably the one who knows econometric models best, their virtues and defects; his work is relevant also for what we are discussing here today. At Penn, I also had the privilege of taking the macro and monetary field with Bob Shiller, who later moved here. While not anticipating what he would become, we students at the time already admired his uniquely original way of seeing and analysing economic questions. Being here today gives me another welcome opportunity to meet him.
This conference covers issues that are hotly debated on both sides of the Atlantic: bank stability, systemic risks, capital requirements and other bank supervisory and regulatory issues. The crisis, with all its woes, has had at least one advantage: it has forced economists to focus on more important things, some of which we had previously lost sight of. The report presented at this conference by the team from the Federal Reserve Bank of New York deals with bank stress-testing, a technique that has gained considerable ground since 2007 and is now one of the main tools used for calculating prudential capital buffers for banking institutions. In its bare bones, the stress-testing approach consists of analysing, by means of models, the effect of certain macroeconomic shocks on the balance sheets of banks and, ideally – although this was not done in the New York Fed report – also the feedback effects from the banks to the economy. In the language of Klein and Fair, stress-testing is, therefore, a particular form of scenario analysis, where attention is placed on banks and on the transmission of impulses to and from them.
To me, today’s fervour for model-based stress-testing is both surprising and exciting. Econometric models have lost much of their reputation lately, accused rightly or wrongly of not having predicted the financial crisis. Against this background, it is surprising to see that an econometric practice that shares much of the same approach and potential pitfalls is being taken so seriously that key policy and business decisions, such as the setting of banks’ capital requirements, are often based on its simulation results. Are we placing too much trust in stress tests, thus perhaps preparing some other “spectacular failure”? Or is there something special about stress-testing models that makes them inherently more reliable than traditional macro-econometric models? The new technique holds the promise of building a bridge between macroeconomics and micro-banking analysis, thereby making supervisory and regulatory policies, often considered opaque and arbitrary, more systematic, transparent and accountable. Whether this promise will be fulfilled is too early to say. But even at this moment, for someone with my background, it is exciting to see that econometric models, far from dying, are living a second youth. And as a former student, I can only welcome yet another confirmation, if ever needed, of what the Nobel committee wrote in 1980: “few, if any, research workers in the empirical field of economic science have had so many successors and such a large impact as Lawrence Klein”.
In the rest of my remarks, I will do three things. First, I will offer some reflections on the econometrics of stress-testing models. I believe that some of the lessons learned in building and using macro-econometric models in recent decades can help develop the new technique as well. Second, taking a more policy-oriented perspective, I will briefly discuss how stress-testing can complement existing supervisory practices in periods ahead, making them more transparent and effective. Finally, I will briefly describe the stress-testing exercise we are now conducting in Europe, as the ECB prepares itself for assuming its responsibility as banking supervisor in November. All three points will give me the opportunity to comment on the New York Fed report, which I found interesting and useful.
Stress tests as part of model-based policy analysis
The essence of the stress-testing approach for banks is well explained in Section 2 of the New York Fed report. The design of the exercise can involve different levels of complexity. 
At a first, simpler level, stress-testing consists only of mapping a set of macroeconomic risk factors into a number of balance sheet variables of relevance to banks over a given time horizon – typically two years.  The goal is to gauge how macro-risk factors impact on bank profitability, liquidity and solvency. Solvency is generally measured by a Tier 1 capital ratio and is assessed against a pre-set benchmark. Most often, capital adequacy is the main focus of attention in the whole exercise, with other variables being disclosed merely for transparency. The capital benchmark is a measure of the risk appetite of the supervisor; all other factors being equal, a higher benchmark implies a larger capital shortfall for each banking institution for any outcome of the simulation, and means that the authority is willing to accept a lower risk of insolvency of the banks after the exercise has been completed. By and large, the degree of “severity” of the exercise is determined jointly by the macroeconomic risk factors and by the capital benchmark(s), although several other (often apparently innocuous) technical features can have a significant impact on its outcome.
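The role of the capital benchmark described above can be made concrete with a small sketch. The bank figures and benchmark levels below are invented for illustration; the point is only that, all else equal, a higher benchmark mechanically enlarges the shortfall for any given simulation outcome.

```python
# Hypothetical sketch: how the supervisor's capital benchmark drives the
# measured shortfall. Figures are illustrative, not from any real exercise.

def capital_shortfall(stressed_cet1_ratio, rwa, benchmark):
    """Shortfall (in currency units) if the stressed ratio falls below the benchmark."""
    gap = benchmark - stressed_cet1_ratio
    return max(0.0, gap * rwa)

# One bank: stressed Tier 1 ratio of 5.0% on 200bn of risk-weighted assets.
rwa = 200.0  # bn
for benchmark in (0.045, 0.055, 0.065):
    s = capital_shortfall(0.050, rwa, benchmark)
    print(f"benchmark {benchmark:.1%}: shortfall {s:.1f} bn")
```

Raising the benchmark from 4.5% to 6.5% turns a passing bank into one with a sizeable shortfall, which is why the benchmark is best read as a measure of the supervisor's risk appetite.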
The way the macro-factors are determined is a crucial element of the whole design. Typically, there are two “scenarios”. The “baseline” scenario is given by the expected (mean or median) outcome of the main macro-variables of interest. The second, “adverse” scenario represents a particular realisation of their joint distribution that implies a strong negative effect on banks’ capital. Naturally, the first scenario measures the implications of macroeconomic risks in line with expectations, whereas the second measures the impact of very negative capital outcomes that could materialise under unlikely macroeconomic circumstances. The design of the exercise can vary in its specifications, complexity and degree of detail. Typically, the macro factors are specified consistently with historical correlation structures, both in the baseline scenario and in the adverse scenario. Such consistency is most easily established with an econometric model, which, as a by-product, also facilitates the external communication of the results.
The macro scenarios are then mapped onto the bank balance sheet variables; Figure 1 of the New York Fed report provides an illustration. Macro data are translated into revenue and loss implications, pre-provision and projected losses, as well as write-offs, for the main exposures in the different markets (credit and securities). Post-provision profit and loss projections, together with dividend policy assumptions, are used to determine the build-up (or reduction) of reserves and capital, and the latter is then compared with the benchmarks, to see if any capital shortfalls arise over the pre-set time horizon. Regression models are used for this stage, estimated on the basis of historical data; the simple AR(1) specification on page 2 of the New York Fed report is a typical example:
ratio(t,i) = α + β(1) ratio(t-1,i) + β(2) macro(t,i) + β(3) X(t,i) + ε(t,i)    (1)
where ratio(t,i) is a bank-specific financial ratio or another variable of interest (for example, write-offs on a particular class of exposures for bank i at time t). For credit exposures, the dependent variable can be a probability of default (PD(t,i)) or a corresponding loss given default (LGD (t,i)); these two parameters are in turn used to calculate expected losses. For securities accounted for as “held for trading” or “available for sale”, mark-to-market losses are calculated, with due consideration for developments in market discount factors.
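A minimal sketch of how an equation like (1) might be estimated in practice is shown below, using simulated data. The coefficient values, the single macro factor and the absence of the extra controls X are all assumptions made for illustration; real stress-test equations are estimated on historical bank data.

```python
import random

# Illustrative OLS estimation of an AR(1)-type equation like (1) on
# simulated data; "true" coefficients and variable names are assumptions.

def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k, n = len(X[0]), len(y)
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):                          # forward elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):                # back substitution
        beta[i] = (b[i] - sum(A[i][c] * beta[c] for c in range(i + 1, k))) / A[i][i]
    return beta

random.seed(1)
alpha, b1, b2 = 0.5, 0.7, -2.0                  # assumed "true" coefficients
ratio, X, y = 1.0, [], []
for t in range(500):
    m = random.gauss(0.0, 1.0)                  # exogenous macro factor
    new = alpha + b1 * ratio + b2 * m + random.gauss(0.0, 0.1)
    X.append([1.0, ratio, m])                   # [constant, lagged ratio, macro]
    y.append(new)
    ratio = new
print([round(c, 2) for c in ols(X, y)])         # estimates close to [0.5, 0.7, -2.0]
```

With a reasonably long sample the OLS estimates recover the assumed coefficients; the specification and estimation pitfalls of doing this on short real-world samples are discussed below.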
For the exercise to be complete, one critical assumption that needs to be made concerns the reaction of banks to economic developments. While it is natural to think that banks can put in place alternative strategies in response to such developments, especially under the adverse scenario (since that scenario implies larger changes relative to current developments), specifying individual reaction functions for banks makes the exercise very complex and arbitrary. Hence, it is typically assumed that balance sheet items are “static”, i.e. do not react to macro developments. This strong assumption maintains a rough “equality of treatment” across banks; it is an acceptable simplification if the stress test is intended to provide just a rough measure of resilience to shocks on impact, but it becomes a serious limitation if one adopts a more protracted definition of resilience, as is the case especially for macro-prudential applications, to which I will now turn.
A second level of complexity which can be added to the first consists of enriching stress-testing models with feedback effects from bank balance sheets to macroeconomic developments – aggregate demand, government finances and asset market conditions, including contagion within the banking system via the network of bilateral exposures. Incorporating this second layer transforms the exercise from a simple micro-prudential resilience test into a full-fledged macro-prudential analysis. This matters in particular when the stress-testing horizon extends beyond the short run, because the assumption of static balance sheets then becomes restrictive. Banks react to economic and risk conditions by adjusting their business plans; this mitigates the impact of those conditions on the balance sheets, but it may also affect those conditions if banks are of systemic relevance. Modelling those feedbacks entails considerable difficulties, which is why these more complex models are still at an early stage of development and have thus far been applied only to a limited extent by supervisory authorities.
It is worth reviewing where stress-testing models, at both levels of complexity, stand in terms of the challenges faced by more traditional models. In the remainder of this speech, I will examine four of the challenges that have haunted econometricians most: identification, specification errors, regime dependence and estimation problems.
Econometricians have struggled with identification since the early days of their profession. The Cowles Commission (which moved to Yale in 1955) advocated using prior and disaggregated information to identify structural models, or relying on reduced forms for forecasting purposes. How serious are identification problems in bank stress-testing models? I can only offer some tentative thoughts. First, for models limited to the first level of complexity, where macro conditions can be assumed to be exogenous given the limited time horizon and the high data frequency, the identification of the relationships between aggregate business cycle conditions and bank risk measures (PDs, LGDs, or directly bank losses and related provisions) should not be a major constraint. Second, the growing wealth of micro information on individual banks can be used to solve identification issues. Extra information is also available, for example in the form of independent ratings on disaggregated exposures. This information is still valuable when we move to more complex models with feedbacks, although other problems may arise there (as I will indicate later). Moreover, in many cases identification can be facilitated by recourse to technical relationships, which are tantamount to prior identifying information; for example, accounting rules or conventions linking expected losses to provisions. All in all, I think identification is likely to become a serious issue only in stress-testing models that are specified at a very aggregate level.
Specification errors are probably a more concrete risk. Practitioners have long been aware of the difficulty of estimating equations like (1) with time series data. The lagged dependent term in the regression often masks specification errors, due to omitted or invalid explanatory variables or wrong dynamic specifications. The properties of the error term need to be analysed jointly with the equation’s dynamics. R-squared and Durbin-Watson statistics are not very informative, especially when variables are trending. The fact that stress-test modellers work with short samples compounds the problems. To deal with them, the literature has proposed a full battery of regression diagnostics, which have not as yet, however, become part of the daily toolkit of stress-test modellers. I think they should become so, especially in cases where models have a short time horizon and are estimated on data with a relatively high frequency (quarterly or higher), which is likely to be the rule in most practical applications. Experience suggests, in my view, that regression diagnostics are not sufficient to isolate a single data-coherent model; some ambiguity always remains. But they help a great deal in reducing the range of acceptable models, and they do so at low cost. Their usefulness is likely to be great for models at both levels of complexity described earlier.
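One of the standard diagnostics mentioned above can be sketched in a few lines: the Durbin-Watson statistic on regression residuals, which is close to 2 when residuals show no first-order autocorrelation and drops towards 0 when they are persistent, a common symptom of dynamic misspecification. The residual series below are synthetic illustrations.

```python
import random

# Minimal sketch of a residual diagnostic: the Durbin-Watson statistic.
# Values near 2 suggest no first-order autocorrelation in the residuals;
# values near 0 (or 4) are warning signs of misspecification.

def durbin_watson(resid):
    num = sum((resid[t] - resid[t - 1]) ** 2 for t in range(1, len(resid)))
    den = sum(e ** 2 for e in resid)
    return num / den

random.seed(0)
white = [random.gauss(0, 1) for _ in range(1000)]       # well-behaved residuals
persistent, e = [], 0.0
for _ in range(1000):
    e = 0.9 * e + random.gauss(0, 1)                    # AR(1) residuals
    persistent.append(e)
print(round(durbin_watson(white), 2))                   # close to 2
print(round(durbin_watson(persistent), 2))              # well below 2
```

As noted in the text, passing such a test does not isolate a single data-coherent model, but failing it cheaply rules a specification out.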
Regime dependence – first raised by Lucas in his critique of traditional econometric models – arises when optimising agents (banks in our case) incorporate changes in policy regimes in their expectations and, hence, change their own reaction rules. The Lucas critique has now been generally accepted as a concept, although its empirical relevance remains controversial. Whatever the judgement on this, in the context of stress-testing models regime dependence should not be a serious issue for baseline scenario analyses, but may be for adverse ones, which, by their nature, embody drastic changes relative to history. In the logic of the “critique”, changes in expectation formation schemes occur in the case of significant changes in the policy regime or of other discontinuities; baseline scenarios do not contain such changes. Moreover, regime dependence is unlikely to be important in the simpler exercises characterised earlier as “first level”, provided the time horizon is short enough to make the static balance sheet assumption (no reaction by banks) plausible. It becomes especially important when the feedback from banks to the real economy is taken into account – as in the “second level”.
Regarding estimation, the rule in stress-testing work nowadays is to apply simple regression techniques (ordinary least squares). This assumes, as is well known, residuals that are white noise and uncorrelated with the explanatory variables. I would expect these assumptions to be plausible in most practical applications of models belonging to the “first level” of complexity, but in any event they can and should be verified more systematically than is currently done. Financial markets are also prone to sudden regime changes and nonlinearities that make estimation more complex. Again, diagnostics are available for this purpose, and their application should become standard practice.
All in all, the elements just discussed portray a relatively favourable picture of the prospects for building reliable stress-testing models, at least where those belonging to the “first level” of complexity are concerned. In a data-rich environment, with different sources of micro-statistical information at the disposal of the researcher/supervisor, identification and regime-dependence problems should not be overwhelming.
However, it is important that model-building techniques and performance diagnostics are upgraded, along the lines pursued in other areas of econometric work over the years. At present, most practitioners’ work does not match the standards reached in those areas. This is an area where the Program on Financial Stability established at Yale could make a contribution, as I will mention in my conclusions. Let me now turn to some policy-related problems in using stress-testing in banking supervision.
The twin goals of stress tests: transparency and repairs
Stress tests are often used by bank supervisors and regulators to achieve two objectives at one and the same time, namely to communicate to financial markets the current state of the banking sector and to identify corrections that can improve the state of the balance sheets of banks.  Both objectives (“informing” and “repairing”) serve the common ultimate goal of promoting financial stability, but they are distinct and can give rise to policy dilemmas, especially in crisis situations.
The “information-providing” motive involves uncovering potential vulnerabilities of balance sheets in relation to the economic environment. To be effective, information must be sound and credible, which means that it must be based on exhaustive data and analysis, but also be frank and truthful. When weaknesses are identified, they need to be communicated openly, not concealed. This is where a delicate issue arises. Disclosing bank weaknesses can, in certain conditions, worsen expectations and exacerbate the problems one is trying to solve. Banks can, for example, see their funding prospects deteriorate or their chances of raising capital worsen. Conversely, if the authority eschews full transparency, the “information-providing” and the “repair” functions are jeopardised, and the authority may suffer a loss of credibility.
Similar trade-offs arise in other policy areas as well. The difference is that banking supervision has traditionally been particularly cautious and guarded in its approach to communications, partly due to the elements of confidentiality involved. Supervisory authorities may need, in the years ahead, to develop communication more extensively as they move in today’s increasingly transparent and interactive policy environment.  In the following, I will mention three factors that can contribute to making the “information-providing” and the “repair” purposes more effective and mutually consistent.
First, stress tests must have a reliable accounting and statistical starting point. Since the exercise consists of projecting balance sheet variables into the future, there must be clarity and certainty on the initial conditions. One way of achieving this is to carry out an asset quality review (AQR) prior to the stress test. AQRs are detailed reviews entailing a number of checks of bank exposures: correct accounting classifications, counterparties’ performance, adequate provisioning (from both accounting and prudential perspectives), collateral values, etc. AQRs are resource-intensive exercises; they are not performed routinely and must be conducted on representative samples of exposures. AQRs can be very effective in providing updated and reliable information, and in ensuring a sound informational basis for the stress test. This is the approach adopted in the current “comprehensive assessment” that I will describe in the next part of my presentation.
Second, designing a “realistic” adverse scenario is a challenge. Adverse scenarios by nature portray unlikely adverse circumstances. Yet, in their internal consistency and narrative they must be sufficiently well constructed so as to be plausible. Individual risk factors affect banks differently, depending on their balance sheet structures. There are several approaches that one may follow. The most common is to let an econometric model provide internally consistent sets of variables for a given composite exogenous shock. Alternatively, one can make separate judgemental assumptions concerning the main risk factors (economic activity, risk spreads, etc.). Both approaches depend heavily on the narrative and hinge on subjective beliefs that may not necessarily be well grounded. Often, for example, adverse scenarios tend to reflect the most recent experience (“the last crisis”), which is unlikely to repeat itself. A possible alternative, worth exploring, would be to construct model-based joint probability distributions for the main macro variables and to focus on the “adverse region” in terms of the impact on the banks’ capital ratios.
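The "adverse region" idea floated above can be illustrated with a toy simulation: draw the macro variables from a joint distribution (here a bivariate normal) and keep the draws whose implied impact on capital lies in the worst tail. The correlation, means and impact weights below are all illustrative assumptions, not calibrated values.

```python
import math, random

# Toy sketch of selecting an adverse scenario region from a model-based
# joint distribution of macro variables. All parameters are assumptions.

random.seed(42)
rho = -0.6                                   # assumed GDP-growth / yield-shock correlation
draws = []
for _ in range(10_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    gdp = 1.0 + 2.0 * z1                     # GDP growth, % (assumed mean and sd)
    yld = rho * z1 + math.sqrt(1 - rho ** 2) * z2   # bond yield shock, pp
    impact = 0.8 * gdp - 0.5 * yld           # assumed capital-ratio impact, pp
    draws.append((impact, gdp, yld))

draws.sort()                                 # most adverse (lowest impact) first
adverse = draws[: len(draws) // 20]          # worst 5% region by capital impact
avg_gdp = sum(d[1] for d in adverse) / len(adverse)
avg_yld = sum(d[2] for d in adverse) / len(adverse)
print(f"adverse region: mean GDP growth {avg_gdp:.1f}%, mean yield shock {avg_yld:.1f} pp")
```

Defining the adverse scenario this way ties its severity directly to the outcome of interest (the capital impact) rather than to a narrative about the last crisis.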
Finally, communicating stress test results is safe only if markets perceive that there is a reliable backstop to the exercise. If balance sheet weaknesses are disclosed, it is crucial to ensure that the resources necessary to repair them will be available. The approaches to achieve this are likely to differ, depending on market conditions. When these are calm, reliance on the market is possible and arguably the best strategy. In crisis situations (also potential ones), a solid public backstop is necessary. 
I will now, before concluding, turn to the stress-testing exercise currently being conducted in Europe.
The 2014 European stress test
An element that often escapes attention is that the 2014 European stress test is actually composed of two processes: an exercise conducted by the European Banking Authority (EBA) covering the European Union (EU), which is a recurrent event and follows on previous exercises in 2009 and 2011 (as mentioned in the New York Fed report), and the exercise coordinated by the ECB for the 18 countries belonging to the euro area, in preparation for the ECB assuming the role of banking supervisor on 4 November this year. The ECB stress test follows the EBA’s methodology, but is based on the results of a preliminary AQR. The two elements are combined in a “comprehensive assessment”, a large-scale supervisory exercise carried out for the 128 most important banks in the euro area (covering 85% of the whole euro area banking sector). The outcome will be communicated in late October. Almost all of these banks will then fall under the ECB’s direct supervisory responsibility as of 4 November.
Let me first focus on some key features of the stress test, which entails close cooperation between the ECB, the EBA and the national supervisors. In particular, the ECB has collaborated closely with the EBA on the stress test methodology and with the European Systemic Risk Board (ESRB) in producing the adverse scenario, while the baseline scenario is taken from the European Commission.
I have already mentioned the large number of banks involved, which stands in sharp contrast to the much more limited number of banks analysed in the United States. The exercise follows a “constrained bottom-up approach”: banks implement a common methodology and scenarios defined by the authorities, using their own internal models. More specifically, they are asked to calculate the impact of the defined scenarios on risk factors and parameters, which ultimately translates into the impact on available capital and on risk-weighted assets (RWA), and to submit the results to the supervisors. This differs from top-down approaches (such as that in the New York Fed report), whereby supervisors calculate the impact of the defined scenarios on capital and RWA using their own models, with less involvement of the banks themselves.
Both the bottom-up and the top-down approaches have advantages and disadvantages. The first draws on more granular proprietary information, and is hence potentially more data-rich. It also links up more easily with supervisory practices, which use internal models for a variety of decisions, including capital plans. The advantages of the bottom-up approach increase with the informational advantage banks have relative to supervisory authorities. Conversely, the top-down method used in the United States can be operated more flexibly at the centre, and guarantees an in-built level playing field. Ideally, one would want to combine the two approaches in order to benefit from their respective advantages. In our comprehensive assessment, this is done by using the ECB’s top-down model to carry out quality assurance checks on the bottom-up results submitted by the banks. Banks are asked to submit explanatory notes specifying how they have arrived at their bottom-up results. When banks’ calculations diverge significantly, or are deemed not to be sufficiently conservative, a further round of in-depth analysis is conducted. Banks are asked to explain their results and, if necessary, to adjust their assumptions and calculations.
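The quality-assurance step just described can be sketched schematically. The bank names, loss figures and tolerance level below are invented for illustration; the actual ECB checks are far richer, but the basic logic is to flag bottom-up results that fall well short of the top-down benchmark and therefore look insufficiently conservative.

```python
# Hypothetical sketch of a quality-assurance check: flag banks whose
# bottom-up loss projections are much lower than the supervisor's
# top-down benchmark. All names and figures are invented.

def flag_for_review(bottom_up, top_down, tolerance=0.25):
    """Return banks needing a further round of in-depth analysis."""
    flagged = []
    for bank, bu_losses in bottom_up.items():
        td_losses = top_down[bank]
        if bu_losses < td_losses * (1 - tolerance):  # far below benchmark:
            flagged.append(bank)                     # possibly not conservative
    return flagged

bottom_up = {"Bank A": 3.1, "Bank B": 1.2, "Bank C": 4.0}  # projected losses, bn
top_down  = {"Bank A": 3.0, "Bank B": 2.4, "Bank C": 4.2}
print(flag_for_review(bottom_up, top_down))  # → ['Bank B']
```

In the real exercise a flagged bank would then be asked to explain its results and, if necessary, adjust its assumptions, as described above.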
The time horizon for the 2014 European stress test spans three years – December 2013 to December 2016 – a longer period than in the US case. It is being carried out on the basis of a static balance sheet assumption: the growth of banks’ balance sheets over the three years concerned is assumed to be zero, and the maturity profile of their respective assets and liabilities is assumed to remain unchanged.  Assets and liabilities that mature within the time horizon are assumed to be replaced with similar financial instruments in terms of type, credit quality on the date of maturity and original maturity as at the start of the exercise. Furthermore, it is assumed that banks maintain the same business mix and model in terms of operations, product strategies and geographical presence throughout the time horizon. The projections of banks’ profits and losses over the three years are to be made in line with the assumptions of zero growth and a stable business mix. I have already discussed some implications of the static balance sheet assumption in the second part of my presentation.
Let me now turn to the scenarios applied. The 2014 European stress test entails, as usual, a baseline scenario and an adverse scenario, which contain forward-looking paths for key macroeconomic and financial variables for all EU countries and a large number of non-EU countries. The adverse scenario captures the prevailing view of current risks facing the financial system in the EU, which include (i) an increase in global bond yields that is amplified by an abrupt reversal in risk assessments, especially towards emerging market economies, (ii) a further deterioration of credit quality in countries with feeble demand, (iii) stalling policy reforms jeopardising confidence in the sustainability of public finances, and (iv) the lack of necessary bank balance sheet repairs to maintain affordable market funding.
The scenario captures these risks in the form of a number of financial and economic shocks that are reflected in indicators such as GDP growth, HICP inflation, unemployment, interest rates and stock prices. In a nutshell, the overall narrative can be described as follows.
An increase in investors’ aversion to long-term fixed income securities results in a generalised re-pricing of assets and related sell-offs. Global long-term bond yields rise, yield curves steepen and emerging markets face additional market turbulence. The assumed increase in long-term US government bond yields, while varying over the three-year horizon, peaks at 250 basis points above the baseline, while stock prices are set to decline by, on average, approximately 18-19% in the euro area and the EU as a whole, with the country-specific shocks in the EU varying between -11% and -27%. These disturbances in the financial sector have spill-over effects on real economic activity in the EU, partly driven by a fall in foreign demand for EU exports as a result of declines in real economic activity outside the EU. Overall, the scenario implies a cumulative deviation of EU GDP from its baseline level by ‑2.2% in 2014, by ‑5.6% in 2015, and by ‑7.0% in 2016, a sharper decline than that assumed in previous exercises coordinated by the EBA. EU unemployment is set in the scenario to exceed its baseline level by 0.6 percentage point in 2014, by 1.9 percentage points in 2015 and by 2.9 percentage points in 2016.
The global financial shock also acts as a trigger for re-differentiation of EU sovereign bond yields according to associated perceptions of sovereign risk, which translate into funding difficulties for the respective banking sectors. The upward shocks to long-term bond yields in the EU range, at their peak, from 137 basis points in Germany to 380 basis points in Greece. In terms of bank funding costs, the shocks altogether entail an assumed permanent 80 basis point increase in short-term interbank rates, whereas longer-term bank funding costs are assumed to follow the pattern of government bond yields more closely.
Based on the impact of this scenario, the stress test covers credit risk, market risk, sovereign risk, funding risk and securitisation as bank-specific risk factors. In addition, capital requirements for operational risk are also taken into account in the exercise, using a simplified approach. Credit risk is assessed through the impact of the aforementioned scenario on the default and loss parameters for all counterparties (e.g. sovereigns, institutions, financial and non‐financial firms, and households) and for all positions exposed to risks stemming from the default of a counterparty (loan portfolio positions, positions in held-to-maturity securities and positions in the categories “available for sale” and “designated at fair value through profit and loss”).
Market risk is assessed by applying a common set of stressed market parameters to all positions exposed to risks stemming from changes in market prices, including counterparty credit risk. This also covers held-for-trading positions, available-for-sale positions and positions at fair value through profit and loss. Credit spread risk in accounting categories that are sensitive to market risk developments is also subject to the stressed market parameters.
Sovereign exposures are treated in line with the methodology for the risk types I have just mentioned, depending on their accounting classification. For sovereign positions in the regulatory banking book (excluding fair value positions subject to the market risk approach), banks are required to estimate impairments/losses in line with sovereign downgrades, as projected in the macro‐economic scenario, and to compute risk-weighted assets based on stressed inputs accordingly. Sovereign positions in the categories available for sale, designated at fair value through profit and loss, and held for trading are subject to the market risk parameters and to the haircuts defined by the authorities.
As I have already mentioned, one important factor that is specific to the ECB’s comprehensive assessment is the fact that the stress test will incorporate the results of an in-depth asset quality review (AQR). While the stress test is, by definition, a forward-looking exercise, the AQR assesses valuations of assets and collateral, as well as related provisions, at the specific point in time that marks the starting point of the stress test, namely 31 December 2013. The AQR conducted in the euro area is an exercise of unprecedented scope and depth, and is now being concluded after approximately nine months of highly intense and granular work, which included a review of over 120,000 individual credit files and 170,000 collateral items across the 128 banks involved.
The results of the AQR will be used to adjust the starting-point balance sheet of the banks for use in the stress test. The starting-point capital ratios will incorporate adjustments determined by the AQR, such as increased provisioning needs due to the re-classification of exposures from performing to non-performing. In addition, wherever the findings for portfolios covered in the AQR show material differences with the banks’ own figures, parameters to forecast total losses in the stress test will be adjusted so as to reflect those differences. This “join-up” of the two components constitutes a novelty, and materially enhances the strictness, consistency and credibility of the comprehensive assessment in comparison with previous stress tests.
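To fix ideas, the adjustment just described can be reduced to a stylised calculation. The figures, names and the simplified formula below are mine, purely for illustration; the actual AQR methodology also adjusts risk-weighted assets and accounts for tax and other effects.

```python
# Stylised sketch of how AQR findings adjust a bank's starting-point
# CET1 ratio before the stress test projections are run.
# The simple netting of extra provisions against capital is an
# illustrative simplification, not the ECB's actual methodology.

def aqr_adjusted_ratio(cet1_capital: float,
                       risk_weighted_assets: float,
                       extra_provisions: float) -> float:
    """Starting-point CET1 ratio after deducting the additional
    provisioning needs identified by the AQR."""
    return (cet1_capital - extra_provisions) / risk_weighted_assets

# Hypothetical example: EUR 10bn of CET1 capital, EUR 100bn of
# risk-weighted assets, and EUR 1.5bn of extra provisions arising
# from the re-classification of exposures as non-performing.
print(aqr_adjusted_ratio(10e9, 100e9, 1.5e9))  # 0.085, i.e. 8.5%
```

In this sketch the AQR lowers the bank's starting-point ratio from 10% to 8.5% before any stress is applied, which is precisely why the “join-up” matters: the adverse scenario then works on a corrected, not a self-reported, balance sheet.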
At the end of the exercise, banks’ stressed capital ratios will be assessed against pre-defined thresholds. These thresholds are set at 8% Common Equity Tier 1 (CET1) capital for the baseline scenario and at 5.5% CET1 capital for the adverse scenario. Whenever a bank’s capital ratio is found to fall below those thresholds, it will be required to recapitalise within a period of six months for shortfalls stemming from the baseline scenario (or the AQR by itself), and within nine months for shortfalls stemming from the adverse scenario. The results of the comprehensive assessment will be published in late October 2014, and banks facing capital shortfalls will have to submit capital plans within two weeks after the publication date. In those plans, which will be assessed and validated by the ECB, the banks concerned will have to set out in detail how they intend to increase their capital positions to the required levels within the time frame of six or nine months.
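The threshold logic can be sketched in a few lines. Again, the function name, the figures and the one-line formula are hypothetical, meant only to make the mechanics concrete; the published methodology is far more detailed.

```python
# Illustrative sketch of the capital shortfall logic: a bank whose
# stressed CET1 ratio falls below the relevant threshold must raise
# the difference, expressed in currency terms against its
# risk-weighted assets. This is a simplification for exposition.

def capital_shortfall(stressed_cet1_ratio: float,
                      risk_weighted_assets: float,
                      threshold: float) -> float:
    """Shortfall in currency units if the stressed CET1 ratio is
    below the threshold; zero otherwise."""
    gap = threshold - stressed_cet1_ratio
    return max(0.0, gap * risk_weighted_assets)

# Hypothetical example: under the adverse scenario (5.5% threshold),
# a bank with EUR 100bn of risk-weighted assets sees its CET1 ratio
# fall to 4.5%, implying roughly a EUR 1bn shortfall.
print(capital_shortfall(0.045, 100e9, 0.055))
```

A bank above both thresholds produces a zero shortfall and faces no recapitalisation requirement from the exercise.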
As I mentioned already, establishing a Program on Financial Stability at the Yale School of Management strikes me as an excellent idea at this juncture. I look forward to learning more about your plans and to following your future work. For the moment, I have briefly looked at your website and noted a strong emphasis on high-level education and training. This is very welcome, because policy-making organisations (including the ECB) will need, to an increasing extent, university graduates with a strong set of skills for their supervisory and financial stability functions. The financial industry and the community of auditors and consultants also need similar professional profiles.
In addition, I think the Yale Program on Financial Stability is well-placed to provide impetus and guidance to research. Let me mention three specific areas (this is certainly not an exhaustive list):
First, it would be useful to have a standing forum where new ideas, by academics and practitioners alike, are exchanged and discussed. This conference marks an excellent beginning; I hope there will be others. A well-focused series of working papers, open to outside contributions, could also be useful.
Second, a systematic forum to compare best practices, globally, would be useful. This may include model comparison exercises, as well as standing databases that monitor, in a consistent and comparable way, the performance of stress-testing models against ex-post developments. This may help improve the quality of the models as well as the design of the adverse scenarios, which is, as we have seen, a major challenge in the whole approach.
Finally, as I have mentioned, there is a need to bring current stress-testing practices up to speed with the progress made in practical econometric modelling over past decades. Given its brilliant tradition, I can hardly think of a better place to do this than Yale.
Thank you for your attention.
I would like to thank Til Schuermann, Thierry Timmermans, Julian Ebner and Cécile Meys for helpful exchanges. The ideas expressed here are my own and do not necessarily coincide with those of the ECB.
Beverly Hirtle, Anna Kovner, James Vickery and Meru Bhanot, “The Capital and Loss Assessment under Stress Scenarios (CLASS) Model”, Federal Reserve Bank of New York, 2014.
This is the term used by Lucas and Sargent to characterise the purported failure of macro econometric models in predicting the Great Inflation (see R. Lucas and T. Sargent, “After Keynesian Macroeconomics”, Quarterly Review, Federal Reserve Bank of Minneapolis, 1979).
See also T. Schuermann, “Stress Testing Banks”, Working Papers, Wharton Financial Institutions Center, 2013.
This simpler avenue is the one typically pursued by supervisors. For a survey of practices among supervisory authorities, see A. Foglia, “Stress Testing Credit Risk: A Survey of Authorities’ Approaches”, International Journal of Central Banking, 2009.
A survey of stress-testing models, including the pass-through from banks to the economy, is contained in Macrofinancial Stress Testing – Principles and Practices, International Monetary Fund, 2012.
T.J. Koopmans, “Identification in Econometric Model Construction”, Econometrica, Vol. 17 (2), 1949, pp. 125–144.
C. Granger and P. Newbold, “Spurious Regressions in Econometrics”, Journal of Econometrics, Vol. 2, 1974, pp. 111-120.
For a comprehensive study of the very large body of literature on regression diagnostics, see D.F. Hendry, Dynamic Econometrics, Oxford University Press, 1995.
See R. Lucas, "Econometric Policy Evaluation: A Critique", in K. Brunner and A. Meltzer, The Phillips Curve and Labor Markets, Carnegie-Rochester Conference Series on Public Policy 1, American Elsevier, New York, 1976.
See, for example, the remarks and the references on this point in J. Fernandez-Villaverde, “Horizons of Understanding: A Review of Ray Fair’s Estimating how the Economy Works”, Journal of Economic Literature, 2008.
The twin objectives are stressed in recent ECB communications; see, for example, the press release on the ECB’s comprehensive assessment, issued on 23 October 2013, and the transcript of the related press conference, both available at http://www.ecb.europa.eu/ssm/html/index.en.html.
A discussion of this eventuality is contained in A. Orphanides, “Bank stress tests as a policy tool: The European experience during the crisis”, in E. Faia, A. Hackethal, M. Haliassos and A.K. Langenbucher (eds.), Financial Regulation: A Transatlantic Perspective, Cambridge University Press, forthcoming.
On this, see D. Tarullo, Lessons from the Crisis Stress Tests, Speech presented at the International Research Forum on Monetary Policy, Washington D.C., 2010.
For example, in the context of the Supervisory Capital Assessment Program in 2009, the US Treasury established a Capital Assistance Program to provide any needed capital to banks through TARP funds. See again D. Tarullo, Lessons from the Crisis Stress Tests, Speech at the International Research Forum on Monetary Policy, Washington D.C., 2010. In the current “comprehensive assessment”, European authorities have devised a combined strategy including bail-in rules, recourse to markets, as well as backstops at national and (as a last resort) European level. On this, see the 8 July 2014 Ecofin Terms of Reference on shortfalls and burden-sharing following the comprehensive assessment, available at www.consilium.europa.eu/uedocs/NewsWord/en/ecofin/143781.doc, and the 17 July ECB Note on the comprehensive assessment, available at http://www.ecb.europa.eu/ssm/html/index.en.html.
The euro area countries are automatically members of the new supervisory mechanism led by the ECB. Other European Union countries will be able to join as well, with specific provisions for access.
The only cases in which exemptions from the static balance sheet assumption are accepted are those in which banks are subject to mandatory restructuring plans that have been formally agreed with the European Commission and publicly announced before 31 December 2013. In those cases, the banks are to consistently account for the evolution of balance sheets prescribed by the respective plans in all aspects of the projections.