Published by THE FINANCE RESEARCH CENTER of Universidad Francisco Marroquín

ISSN: XXXX-XXXX frc.ufm.edu

EXECUTIVE BOARD

Massimiliano Neri, Universidad Francisco Marroquín

Gabriel Calzada, Universidad Francisco Marroquín

Helmuth Chávez, Universidad Francisco Marroquín

Olav A. Dirkmaat, Universidad Francisco Marroquín

ADVISORY BOARD

Peter Boettke, George Mason University

Jerry L. Jordan, Pacific Academy for Advanced Studies

Edmundo Rivera, Foreign Trade Bank of Latin America (Bladex)

Journal of New Finance

EDITORIAL BOARD

Lawrence White, George Mason University

Kevin Dowd, Durham University Business School

Philip Booth, Cass Business School

Jorge Rojas Arzú, Universidad Francisco Marroquín

Carlos Méndez, Universidad Francisco Marroquín

Szabolcs Blazsek, Universidad Francisco Marroquín

ASSOCIATE EDITORS

Mónica Río Nevado de Zelaya, Universidad Francisco Marroquín

Clynton R. López F., Universidad Francisco Marroquín


For the Latest Information, Visit Our Website

For the latest information about the Journal of New Finance, about our meetings, or about other announcements, consult our website at:

https://frc.ufm.edu

Open Access Policy

This journal provides immediate open access to its content on the principle that making research freely available supports a greater global exchange of knowledge.

Permission to Reprint Materials from the Journal of New Finance

The Journal of New Finance publishes articles under the terms of the Creative Commons Attribution License 4.0, which allows use, distribution, and reproduction in any medium, provided the original work is properly cited.

Contact Information

The Journal of New Finance is a publication of the Finance Research Center of Universidad Francisco Marroquín. For correspondence, please use the following contact information:

Finance Research Center, Universidad Francisco Marroquín, School of Business, Calle Manuel F. Ayau (6 Calle final), zona 10, Guatemala, Guatemala 01010


Journal of New Finance
January - March, 2017, Volume 1, Number 1

CONTENTS for JANUARY-MARCH 2017

7   Letter from the Editors: Is Modern Finance Geared Up to Support Financial Regulation? (The Editors)

33  A Call for Model Modesty (Mark Calabria)

37  Assessing the Systemic Importance of Asset Managers (Gustave Laurent, Massimiliano Neri)

59  Hyman Minsky: An Advocate of Big Government (Juan Ramón Rallo)

81  Macroeconomics and the Financial Cycle: Hamlet Without the Prince? (Claudio Borio)

87  Beyond Mechanical Markets: Asset Price Swings, Risk, and the Role of the State (Book Review) (Adrian Ravier)


Letter from the Editors: Is Modern Finance Geared Up to Support Financial Regulation?

The Editors of the Journal of New Finance, Finance Research Center @ UFM

Abstract
The chief intellectual assumptions behind financial regulation are that capital markets are efficient and that market participants act rationally. These assumptions have always been subject to some challenge, and their empirical verification occupies a large portion of the modern finance literature. Nevertheless, they were the leading theory of financial markets during the decades preceding the 2007-8 crisis. The crisis has shown that modern theory does not allow for solid risk assessment or reliable macroeconomic forecasting. Such challenges suggest that modern finance may be facing a paradigm crisis. In this paper, we take the measure of that crisis. The reaction of regulators to the financial crisis was immediate and massive, but the reforms they brought about were built on top of the same theoretical framework that supported the pre-crisis environment. On one side, we suggest that a debate must be opened to assess how to move forward from the current mainstream paradigm; on the other, we concede that today there is no viable alternative that can replace it. Finally, we invite readers to challenge the rationale behind new financial regulation based on models that have failed in the past. The goal of regulators should be neither to eliminate financial crises nor financial risk, but rather to foster an environment in which risk assessment is made more reliable.

Keywords: financial crisis, rational markets, market efficiency, behavioral finance, modern finance, paradigm, financial regulation

JEL Classification: G01, G11, G14, G18, G32

1. Problems Exposed by the 2007-8 Crisis

The chief intellectual assumptions behind financial regulation are that capital markets are efficient and that market participants (in well regulated and liquid markets) are rational.[1] Stock prices in liquid markets follow 'random walks' and adjust instantaneously to new information, thanks to the assessment of a widely distributed network of independently rational economic agents. Such a mechanism creates a tendency toward an efficient equilibrium. These assumptions have always been subject to some challenge, and their empirical verification occupies probably the most conspicuous portion of the modern finance literature. Nevertheless, they have been the leading theory of financial markets during the decades preceding the 2007-8 crisis.

The crisis has been characterized by such massive turmoil in the capital markets that to find a similar episode one has to go back to 1929.[2] The S&P 500 index fell 57% from its October 2007 peak of 1,565 to a trough of 676 in March 2009. The Dow Jones Industrial Average plunged 54% from its high of 14,164.43, reached on Oct. 9, 2007, to 6,443.27 by March 6, 2009. On Sept. 29, 2008, financial markets experienced the biggest single-day point drop in Dow Jones history (777.68 points, or 6.98%), a consequence of the news that the U.S. House of Representatives had rejected the proposed $700 billion government bank bailout. The second biggest single-day loss happened just six days later, and during 2008 the Dow also saw the fourth, fifth, and tenth largest single-day crashes in its history.

In the US, three of the top 5 investment banks (Goldman Sachs, Morgan Stanley, Merrill Lynch, Lehman Brothers, and Bear Stearns) either filed for bankruptcy or were acquired after coming close to bankruptcy. In March 2008, JP Morgan acquired Bear Stearns for a mere $10 a share, with the Fed guaranteeing large parts of Bear's liabilities. On September 15, 2008, Lehman Brothers filed for the largest bankruptcy in US history. During the same month, Merrill Lynch, struggling for survival, sold itself to Bank of America. With a $180 billion federal government bailout, AIG was nationalized the day after Lehman failed. In the US, the rate of bank failures went through the roof.[3] In 2007 only 3 banks failed, but in the following years the rate of failure accelerated to its peak in 2010 (25 in 2008, 140 in 2009, 157 in 2010); after that, the rate of bank failures began a gradual return to lower levels (92 in 2011, 51 in 2012, 24 in 2013, 18 in 2014, 8 in 2015, 5 in 2016). In Europe, a number of similarly dramatic events reshaped the banking industry. On September 18, 2008, Scottish HBOS was acquired by British Lloyds TSB, and the group was bailed out by the British government just one month later, together with Royal Bank of Scotland. On October 5, 2008, the French BNP Paribas acquired the Belgian and Luxembourg assets of Fortis, a Belgian bank bailed out just a week before. In September 2008, the Franco-Belgian Dexia Group was bailed out by the Belgian government and reorganized in 2012, with the healthy operations renamed Belfius and the remaining part left in a "bad bank". Swiss-based UBS was bailed out in December 2007 by the Government of Singapore Investment Corporation (US$9.7 billion; one of the bank's largest shareholders) and in 2008 by the Swiss National Bank ($60 billion) and the Federal administration of Switzerland. Between 2010 and 2012, the Spanish sector of cajas (savings and loan associations) was completely restructured, with the $19 billion bailout of Caja Madrid (renamed Bankia) being the most notable case.

The capital market crisis anticipated what we would later observe in the broader economy: the beginning of what has been called the Great Recession, which for the US began in December 2007 and ended in June 2009 (the longest recession since World War II). American real gross domestic product (GDP) fell 4.3% from 2007Q4 to 2009Q2. The US unemployment rate rose from 5% in December 2007 to 9.5% in June 2009, peaking at 10% in October 2009, and US home prices fell approximately 30%, on average, from their mid-2006 peak to mid-2009. The reaction of policy makers was not mild. The Fed reduced its policy interest rate (the Fed Funds rate) from 5.25% in September 2007 to 0-0.25% in December 2008. At that point, the Fed also initiated quantitative easing programs to relieve financial stress by purchasing housing-related debt, short- and long-term Treasuries, and other assets. The ECB followed the rate reduction, but with a one-year lag, as it initiated the decrease only after Lehman collapsed.

Our objective here is not to add to the abundant literature on the causes of the crisis. Our goal is to highlight that, in the face of such dramatic developments, macroeconomic forecasts completely missed what was coming, and the ultra-sophisticated quant models were unable to provide a realistic view of the risks financial institutions were exposed to. Did market prices correctly reflect company valuations just before the crisis? Did momentum, herd behavior, or other market anomalies affect these valuations? Did market participants act rationally (in the sense defined by mainstream finance) before and during the crisis? The dramatic events described above suggest an easy negative answer to these questions. Furthermore, a growing share of academics and industry practitioners, for example Shiller (2008) and Cooper (2008), believe that irrational market behavior contributed to the crisis.

The reaction of financial regulators to the crisis was instantaneous and colossal. For example, a substantial amount of literature has been published by the Basel Committee on Banking Supervision to analyze the crisis and identify its causes. As part of this effort, and in order to fix the gaps in the existing regulatory framework, a major capital requirement upgrade called Basel III was published between 2010 and 2011. Similarly, in the US, the Dodd-Frank Act was approved in 2010. The bill encompassed a number of reforms, including provisions to support the Fed by enhancing capital requirements and financial stability. However, these reforms were built on top of the same theoretical framework as the previous ones. Moreover, they leveraged and extended the same risk measurement techniques, such as Value at Risk (VaR).
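To make concrete the kind of technique at issue, the following is a minimal sketch of one-day historical-simulation VaR, the simplest VaR variant: the loss threshold that past returns exceeded only a small fraction of the time. The return series and the function name are hypothetical illustrations, not drawn from any regulatory text.

```python
def historical_var(returns, confidence=0.99):
    """One-day historical-simulation VaR: the loss that past returns
    exceeded only (1 - confidence) of the time, reported as a positive number."""
    ordered = sorted(returns)                       # worst returns first
    cutoff = int((1.0 - confidence) * len(ordered))
    return -ordered[cutoff]

# Illustrative daily returns (hypothetical, not market data)
daily = [0.012, -0.034, 0.005, -0.071, 0.020, -0.008, 0.015,
         -0.026, 0.003, -0.049, 0.011, -0.002, 0.030, -0.017]
var_95 = historical_var(daily, confidence=0.95)     # 7.1% of portfolio value
```

Note that the estimate is entirely backward-looking: it can never report a loss worse than the worst day in the sample, which is precisely the inductive weakness discussed later in this paper.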

In order to assess whether regulators acted in the right direction to improve financial stability, in Section 2 we assess the current status of mainstream finance and highlight the limitations that were uncovered during the 2007-8 financial crisis. In Section 3, we draw the conclusion that the current mainstream in finance is in crisis, and that a debate must be opened in order to assess how to move forward. The options for evolving the current paradigms are evaluated in Section 4. The editors of this new journal conclude that modern finance and the current financial regulatory frameworks are outdated, and that a profound intellectual debate is required to define how and in which direction the finance discipline should move forward.

2. Current Status of Mainstream Finance

According to Ardalan (2008), mainstream modern finance embraces a broad spectrum of theories, namely: portfolio theory, the efficient market hypothesis, the capital asset pricing model, option theory, agency theory, arbitrage pricing theory, capital budget policy, capital structure policy, and dividend policy.[4] The first three of this list are commonly grouped under the label Modern Portfolio Theory. In this section we review the elements of mainstream modern finance that came under scrutiny after the 2007-8 crisis: Modern Portfolio Theory, Value at Risk (the main technique used by the industry and regulators to measure market risk), and Dynamic Stochastic General Equilibrium models.

2.1 Portfolio Theory and Diversification

Although Modern Portfolio Theory has been the object of severe criticism during its short life, Markowitz's principles have always been treated with gratitude and respect for the leap forward they allowed the investment discipline. As a demonstration that innovation always encounters resistance, it must be pointed out that the impact of these new principles was initially insignificant. They were originally published in 1952, but it took the 1973-74 stock market crash (the S&P 500 fell 43% from December 1972 to September 1974, or 50% after adjusting for inflation) to convince the industry that risk should be part of decision making in investment management.[5]

The first key insight introduced by Markowitz was the notion of diversification as a means to reduce the variance of an investment portfolio. The second important idea Markowitz pioneered was a new framework, known today as mean-variance, which allows asset allocation decisions to be based on two dimensions that can be measured quantitatively: expected return and risk. Although this was not Markowitz's explicit intention, the investment industry generalized this framework, adopting the rule that the standard measure of an asset's risk is its volatility.
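The mean-variance bookkeeping can be illustrated with a minimal sketch. The two-asset figures below are hypothetical, chosen only to show the diversification effect: with weak correlation, portfolio volatility falls below the average of the individual volatilities.

```python
def portfolio_stats(weights, means, cov):
    """Mean-variance bookkeeping: expected return is the weighted mean of
    asset returns; variance is the double sum w_i * w_j * cov[i][j]."""
    exp_ret = sum(w * m for w, m in zip(weights, means))
    n = len(weights)
    var = sum(weights[i] * weights[j] * cov[i][j]
              for i in range(n) for j in range(n))
    return exp_ret, var ** 0.5   # (expected return, volatility)

# Hypothetical annual figures: asset vols of 20% and 10%, low covariance
means = [0.08, 0.05]
cov = [[0.040, 0.002],
       [0.002, 0.010]]
ret, vol = portfolio_stats([0.5, 0.5], means, cov)
```

Here the equal-weighted portfolio has an expected return of 6.5% and a volatility of roughly 11.6%, below the 15% average of the two stand-alone volatilities; that gap is the whole promise, and, as argued below, the whole fragility, of the framework.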

Markowitz's work revolutionized the investment management profession, but it also received strong criticism. The first three criticisms in the list that follows were highlighted by Peter Bernstein.[6]

First, before Markowitz, investment decisions were based on historical returns and a qualitative, expert-judgment-based assessment of risk. Portfolio selection abolishes intuition and provides a prescriptive recipe for which investments to undertake, given an arbitrary dose of risk.

Second, critics questioned whether variance is the proper proxy for risk. Value investors have demonstrated multiple times the weaknesses of this approach. According to Portfolio Theory, high volatility corresponds to high risk and, consequently, higher potential gain. However, a stock may have high volatility due to strong buying pressure (irrational exuberance) just before the end of its bull cycle; clearly, that is a risky stock with low potential gains. Conversely, a high-volatility stock may be bottoming out of its bear cycle and represent a great potential buy.[7] Finally, a stock with low volatility due to low momentum but high upside potential (based on fundamental analysis) may be seen by an orthodox viewer as a low-volatility, low-risk security, when instead it hides a great high-return, low-risk opportunity. An additional problem with volatility as a proxy for risk is Portfolio Theory's assumption that volatility is constant, when in reality it is not. Some solutions have been found to overcome this issue (especially stochastic models such as the GARCH family), but their use remains limited due to computational complexity. Furthermore, investment managers are aware of the importance of assessing an investment against the risk taken (risk-adjusted performance management); however, by measuring risk as volatility, risk-adjusted measures become biased toward low volatility rather than low risk. The idea that a portfolio can be composed solely around two numbers (risk and return) holds only if returns are normally distributed (a normal distribution is fully identified by its mean and standard deviation), but the limitations of the normality assumption (above all: how to address the tails?) are well known; we will come back to this point later.[8]

Third, what if the positive monotonic relationship between risk and return is falsified? Empirical evidence such as Murphy (1977) has highlighted anomalies in the volatility-return relationship (the capital market line); the proportional relationship therefore does not hold. Additionally, should we treat extra return as a risk premium?

As a fourth issue, the debate around the performance of Portfolio Theory (PT) has been ongoing for decades. DeMiguel, Garlappi & Uppal (2009), for example, were able to show that PT did not outperform a portfolio based on equal allocation. The debate has been fueled by industry practitioners who seek to demonstrate the superiority of active management (where the portfolio manager implements a specific investment strategy with the goal of outperforming a benchmark reference, such as the S&P 500) over passive management (PT).[9]

A fifth limitation involves the diversification principle. Blanque (2014) is very vocal about the impossibility of obtaining reliable results with diversification as an investment strategy. In order to diversify properly, one has to invest in market segments that are weakly correlated, which requires specialization in a broad spectrum of investment types. As a result, an investor may be diversified but have only a weak idea of what is going on in his portfolio from a fundamental analysis perspective. According to Iyiola, Munirat & Nwufo (2012), diversification (or PT) forces investors to buy assets without analyzing security fundamentals, solely for the benefit of eliminating non-systematic risk. This puts upward pressure on assets with low fundamental value but with historical mean-variance characteristics that help reach the diversification goal. The same point is made by Scott (2011): "As highly-diversified strategies gain assets, inefficiencies become more prevalent because share prices are increasingly driven by factors other than fundamentals". Vice versa, a concentrated investor may be exposed to a reduced range of risk factors but have a solid understanding of the investments made in his field of specialization. Moreover, large funds (especially mutual funds) reach such a gargantuan size that, given the number of stocks they hold, they end up "over-diversified", with the consequence that it becomes tougher and tougher to beat the indexes. Finally, if every investor follows this strategy, we obtain herd behavior toward a low-volatility portfolio, with the logical consequence that in times of crisis every investor begins to sell as soon as volatility increases (systemic risk).

As a sixth point, we should underline the amount of research that has been dedicated for decades to the computational complexity of the mean-variance framework. In order to calculate the optimal asset allocation, one has to overcome the problem of estimating means, variances, and correlations. Blanque (2014) emphasizes that the complexity is due in particular to the heavy data and computational requirements for assessing correlations between arrays of assets. Using historical values to estimate future risk, return, and correlations is an option that entails adopting an inductive view of the future. What if the past does not contain sufficient information about the risks we could face in the future? In this respect, correlations are very difficult to estimate, and correlation breakdowns in times of crisis remain an unsolved issue, which can be partially moderated with frequent (and computationally expensive) recalibration. The approach of using historical values stands in contrast to Probabilistic Risk Assessment, used in nuclear plants and other complex engineering undertakings (typically associated with low-frequency events), where risk is assessed through the probabilistic assessment of the risk factors.
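To make the estimation burden concrete, here is a minimal pure-Python sketch of a historical correlation matrix. The return series are hypothetical; the point is only that an N-asset universe requires on the order of N(N-1)/2 pairwise estimates, each sensitive to the sample window chosen.

```python
from statistics import mean, pstdev

def correlation(xs, ys):
    """Population correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (pstdev(xs) * pstdev(ys))

def correlation_matrix(series):
    """All pairwise historical correlations; for N assets this is
    N*(N-1)/2 distinct estimates, the burden Blanque emphasizes."""
    n = len(series)
    return [[correlation(series[i], series[j]) for j in range(n)]
            for i in range(n)]

# Two short hypothetical return histories
a = [0.010, -0.020, 0.030, 0.000, -0.016]
b = [0.008, -0.012, 0.025, 0.002, -0.010]
corr = correlation_matrix([a, b])
```

Re-running the estimate on a different window can shift the off-diagonal entries substantially, which is the sample-dependence and breakdown problem described above in miniature.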

Seventh, Portfolio Theory is focused on a single-period perspective. While the capital asset pricing model (CAPM) introduces this assumption explicitly (as we will see below), Portfolio Theory requires it in order to enable comparative evaluation among different investment alternatives.

Finally, Portfolio Theory assumes infinite access to liquidity. This has been abundantly challenged by academics (Pedersen, 2015). In addition, the financial crisis has shown that portfolio managers were on average unprepared to cope with its dramatic fluctuations.


The financial crisis has not been kind to investors following Portfolio Theory and the principle of diversification. The already mentioned DeMiguel, Garlappi & Uppal (2009) confirm that a majority of investors following the diversification principle were unprepared for the crisis. One can only wonder whether Portfolio Theory can be of any value to investors facing such large market swings.

2.2 The Efficient Market Hypothesis

2.2.1 The Random Walk Hypothesis

A random walk is a game in which the outcome is determined by chance (like a coin flip). Kendall (1953) was the first to show that the prices of stocks and commodities look like a random walk, followed by evidence from Samuelson (1965) and Fama (1965). Stock prices are said to follow a random walk because no pattern can be found in share price changes. Adopting such an assumption is not a minor step, since it enables the reuse of the entire theoretical architecture built by physicists for the random walk of physical particles (Einstein, Brownian motion, martingales). We simply assume that market prices move in the same way.

The point of the Random Walk Hypothesis (RWH) is that the price changes of an individual stock are independent from one another. A good way to see this is to assess the degree of dependence of price changes on successive days. For example, if we consider pairs of days, we can assess the correlation of the price change on day t with that on day t+1 and calculate the correlation coefficient (in this case we speak of autocorrelation or serial correlation, since it is about the correlation of a stock's price changes with themselves). The literature abounds with empirical studies confirming that stocks have very weak serial correlation.
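The serial-correlation check described above can be sketched in a few lines. The "price changes" below are simulated coin flips, the textbook random walk, so the lag-one autocorrelation should come out near zero; real return series are what the empirical studies feed in instead.

```python
import random
from statistics import mean

def lag1_autocorrelation(changes):
    """Serial correlation between the day-t and day-(t+1) price changes;
    values near zero are what the random-walk literature reports for stocks."""
    m = mean(changes)
    num = sum((changes[t] - m) * (changes[t + 1] - m)
              for t in range(len(changes) - 1))
    den = sum((c - m) ** 2 for c in changes)
    return num / den

random.seed(1)
coin_flips = [random.choice([-1.0, 1.0]) for _ in range(10_000)]
rho = lag1_autocorrelation(coin_flips)   # close to zero for independent steps
```

A perfectly alternating series, by contrast, would show strong negative autocorrelation, which is exactly the kind of exploitable pattern the RWH rules out.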

The RWH is justified on the grounds that past prices do not allow one to predict future prices. If this were possible, that is, if past prices enabled the identification of future price trends, then an easy profit would appear in the market, and investors would immediately react by buying or selling until the market price matched the value of the asset (Net Present Value, intrinsic value, etc.). As a result, prices adjust to new information.

Value investors have pointed out that the RWH is a solution for the problem that we do not understand the determinants of price changes. The random walk approach offers an easy shortcut around this lack of visibility, rather than an explanation. If we do not understand soccer, we can find evidence that there is a 33% chance that one team will win, a 33% chance that the other will win, and a 33% chance of a draw. In the same way, stock prices are the result of random chance only insofar as we lack a basic understanding of why they occur. Moreover, according to Fisher (1996), the problem with this type of thinking is that it focuses on the short term. Since we cannot find recurring patterns to obtain a profit by trading stocks based on short-term forecasts, RWH proponents concluded that prices follow a random walk. If one moves the time horizon to the longer term (taking the perspective of the investor rather than the trader), prices become "efficient"[10].

2.2.2 The Market Efficiency Hypothesis

The Random Walk Hypothesis says that prices in one period (yesterday) are uncorrelated with prices in the following period (today). One could wonder why we should stop at the information represented by past prices and not consider all the information available to investors. If we extend the idea to all information, we obtain that today's prices reflect all information available to investors; therefore nobody can profit from some bit of information regarding a certain company, because its stock price already embeds it.

According to the original wording used by Eugene Fama: "An efficient capital market is a market that is efficient in processing information. The prices of securities at any time are based on correct evaluation of all information available at that time. In an efficient capital market, prices fully reflect available information"[11]. The adjective "efficient" could lead to semantic confusion, since in this context it does not refer to the mechanical (or static) efficiency of the market; the concept refers to how "informationally" efficient market prices are. Prices should reflect the available information for the simple reason that, if this were not the case, arbitrage opportunities would trigger a process that closes the gap.

2.2.3 Empirical Challenges to the Market Efficiency Hypothesis

In its early stages, the theory gathered quasi-plebiscitary consensus among researchers, quickly becoming the mainstream.[12] Eventually, the drive toward originality promoted a set of studies aimed at dissecting deviations from the stock value forecast by the EMH. Such deviations were defined as "anomalies", that is, market situations in which the security price differs from the fundamental value (commonly defined as the Net Present Value, where the cost of capital is determined by the capital asset pricing model). Empirical testing of the EMH represents one of the most debated subjects in the history of the financial literature. Michael Jensen was unequivocal in his editorial introducing the 1978 special issue of the new Journal of Financial Economics dedicated to the testing of the EMH: "I believe there is no other proposition in economics which has more solid empirical evidence supporting it than the Efficient Market Hypothesis".[13]

Nevertheless, the number of empirical findings published against the predictions of the EMH is overwhelming. We provide here merely a concise list of the main anomalies:

1. Random Walk Hypothesis: In a 1988 paper titled "Stock market prices do not follow random walks: evidence from a simple specification test", Lo and MacKinlay use weekly US stock return indexes from 1962 to 1985 to derive a positive serial correlation in weekly returns. For holding periods longer than one week (three to five years), different studies – for example Fama and French (1988) and Poterba and Summers (1988) – obtain a non-zero serial correlation in US stock return indexes. However, the amount of data used in these studies to reject the RWH was not sufficient to reach the usual level of significance.

2. Value investing, overreaction, underreaction, and reversion to the mean: Value investing was invented by Benjamin Graham and David Dodd well before Fama's idea. When the debate on market anomalies began, scholars found that stocks with a low price multiple over earnings delivered on average higher returns than stocks with higher price multiples. This was a substantial confirmation of the main principles of value investing. Moreover, empirical testing confirmed that investors may overreact, buying stocks that had recent gains or selling stocks that suffered recent losses. Similarly, they may underreact to news. Overreactions push prices away from their 'equilibrium' or 'rational' value and are brought back in line by the 'arbitrage' activity of rational investors. Reversion to the mean refers to the general phenomenon of the market price reverting to the equilibrium value after a certain time lag.[14] The debate was not settled, and it was actually the object of a revival starting in 2007.[15]

3. Size effect (small firms outperform large ones): Fama and French (1992) is a cornerstone in the study of the size effect on stock returns.

4. Calendar effects: Different effects fall under this category. The January effect implies that the main portion of the relative outperformance of small firms occurs at the turn of the calendar year.[16] Other effects are: Monday returns are lower, and most of the daily returns come at the beginning and at the end of the trading day.

5. Momentum effect: An investment strategy that buys winners (stocks with high returns over a short period of 3 to 12 months) and sells losers. This is the anomaly that most troubled Fama[18], and the key reference is Jegadeesh & Titman (1993). This anomaly appeals to the behavioral assumption that investors underreact to new information. The strategy is widely exploited in trading; see for example Jegadeesh & Titman (2001), MacKenzie's interview with Ross and Roll Asset Management[19], and the research published by AQR Capital.

6. Post-earnings announcement drift: the idea that investors underreact to good news was documented for the first time by Ball and Brown (1968).[20]

7. Bubbles and market efficiency: Financial bubbles represent deviations from the trajectory of "efficient" stock prices. It is not straightforward to identify whether a bubble is developing in a particular asset market; conversely, it is quite simple to identify a bubble ex post, when a bust makes it evident retrospectively. What is less easy is to set up a reference in order to determine the amplitude of the asset price inflation during the boom and, eventually, how long it took to return to normality after the bust. The same difficulty econometricians find in studying bubbles is encountered by investors trying to evaluate single stocks. It takes strong rigor and discipline to assess the prospective value of a public company using fundamental analysis. Many investors therefore rely on easier decision-making practices, such as assessing the current market price against a reference benchmark (comparable securities, etc.)[21].

8. Smooth dividends with volatile market prices: Shiller's (1981) and LeRoy and Porter's (1981) volatility tests found that stock market volatility was far greater than could be justified by changes in dividends. By showing that prices were more volatile than they should be, Shiller implied that markets cannot be an efficient mechanism that perfectly reflects all the relevant information.
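Shiller's test rests on a simple variance bound. In standard notation (a sketch: $p_t$ is the actual price, $p_t^*$ the ex-post "rational" price computed from realized dividends $d_{t+k}$ with discount factor $\gamma$):

```latex
p_t^* = \sum_{k=0}^{\infty} \gamma^{k+1} d_{t+k},
\qquad
p_t = \mathbb{E}_t\!\left[p_t^*\right]
\;\Rightarrow\;
p_t^* = p_t + u_t, \quad \operatorname{Cov}(p_t, u_t) = 0,
```

so that $\operatorname{Var}(p_t^*) = \operatorname{Var}(p_t) + \operatorname{Var}(u_t) \ge \operatorname{Var}(p_t)$, i.e. efficient pricing implies $\sigma(p) \le \sigma(p^*)$. Shiller found this inequality grossly violated in the data.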

9. Other notable anomalies: closed-end funds, index inclusion (when the shares of a company are included in a stock price index, their price suddenly increases), Siamese twins (dual-listed companies sharing the same underlying cash flow but having different stock quotations).

There is also a large body of literature that has supported the EMH, arguing that these anomalies are instead evidence in its favor, because the market opportunities behind the anomalies cannot be exploited to a significant extent due to risk and transaction costs. BMA (2011) supports this line by citing the words of Professor Roll, an advocate of market efficiency and an authority in the field of anomalies: "Over the past decade, I have attempted to exploit many of the seemingly most promising "inefficiencies" by actually trading significant amounts of money according to a trading rule suggested by the "inefficiencies" . . . I have never yet found one that worked in practice, in the sense that it returned more after cost than a buy-and-hold strategy"[22]. According to Lo (2007), the possibility of profiting from anomalies, given the existence of transaction costs, liquidity issues, institutional rigidities and non-stationarities, cannot be demonstrated scientifically. Therefore, the economic value of the anomalies must be assessed in the long term.

2.2.4 Theoretical Challenges to the Efficient Markets Hypothesis

In the 1990s a new "behavioral" approach to finance gained attention. According to its advocates, human decision making cannot be rational because it is affected by systematic biases (endowment effect, sunk costs, hyperbolic discounting, difficulties in maximizing utility, etc.; the catalog of biases can be long). For example, investors may follow the herd, assuming that market leaders or the majority of people know what will happen. If investors' decisions are not 100% rational, then market prices cannot be efficient[23]. Behavioral finance has fostered much of the empirical research that challenged the EMH. The list of anomalies that are not compatible with the notion of market efficiency has grown to such an extent that behaviorists like Shleifer (2000) are confident that asset prices do not reflect neoclassical fundamentals.

When the 2013 Nobel Prize in economics was awarded to both Fama (representing the EMH) and Shiller (representing the most prominent behavioral challenger to the EMH), Clifford Asness and John Liew, who had written their doctoral theses under Fama twenty years before, provided an interesting view of the EMH in Asness & Liew (2014). According to them, it is not possible to test the EMH without adopting an asset pricing model, which is why the EMH and the capital asset pricing model (CAPM) go hand in hand. So when one observes efficient market anomalies, they must be referred to the joint pair EMH + CAPM. According to them, the empirical evidence against the EMH has divided academia into two camps. According to one side of the debate, the CAPM is wrong, because risk is not only about beta but also about other factors (in line with the position taken by Fama against the CAPM).[24] According to the other side (the behaviorists), markets are not efficient because people are not rational: their thinking is imperfect and unconsciously follows heuristics (biases), hence market prices do not fully reflect the available information. Surprisingly, Asness and Liew take a mid-way position between Fama and Shiller. They believe the EMH is mostly right, except for a couple of anomalies that are difficult to deal with: value and momentum. One would expect their opinion to be due to the fact that the CAPM does not tell the full story and a multifactor model would be more appropriate. Not fully. They believe that the absence of market players willing to act intentionally as the counterparty of a value + momentum investment strategy (long on cheap stocks and short on expensive ones) suggests that there is a behavioral component to take into account.

Another theoretical challenge came from the idea that it is impossible to reach an "informationally efficient" equilibrium (Grossman & Stiglitz 1980). If the market cannot be beaten, the rational strategy is to invest in the market portfolio. However, if every investor adopts this strategy, nobody has the incentive to be the first to bid stock prices based on newly available information. As a result, prices will never reflect 100% of all information; otherwise the incentive to gather information would vanish. While the idea is appealing, according to MacKenzie (2006) this approach never reached mainstream acceptance[25].


2.2.5 Market Episodes That Challenge the Market Efficiency Hypothesis

The stock market crash of 1987 was the first episode that generated an ample debate around the EMH. It is characteristic of that period to recall the positions of the leaders of the two main fronts of that debate. On one side, Robert Shiller took a bold position against the EMH: "The efficient market hypothesis is the most remarkable error in the history of economic theory. This is just another nail in its coffin."[26] On the other side, Eugene Fama admitted his frustration at not being able to identify the news that triggered the crash; other fathers of the theory, such as William Sharpe and Fischer Black, were not able to provide an account of the event fully consistent with the theory.[27]

Fast-forwarding thirteen years, a vast amount of research has been dedicated to studying the Internet bubble: the Nasdaq index stood at 1140 in March 1996, reached its apex at 5048 in March 2000, and then started a steady decline that returned it to 1140 in October 2002. The excesses observed especially during the last two years of the bubble were defined as Irrational Exuberance by Nobel Prize winner Robert Shiller. EMH advocates engaged in intellectual contortions to justify the validity of the theory. For example, Pastor & Veronesi (2006) claim that the fundamental value of a firm increases with uncertainty, so that stock market prices were consistent with this view and therefore the theory was not necessarily invalidated. Surely, fundamental investors must have been impressed by the argument that we must pay more when we know less. Eugene Fama defended the EMH claiming that markets were victims of the recession (rather than the other way around), but conceded that "poorly informed investors could theoretically lead the market astray" and that stock prices could become "somewhat irrational" as a result.[28] However, the most challenging capital markets episode for the EMH was the financial crisis of 2007–08. A number of industry practitioners have claimed that the EMH was responsible for it. Jeremy Grantham stated that the belief in the hypothesis led investors to a "chronic underestimation of the dangers of asset bubbles breaking".[29] Roger Lowenstein declared: "The upside of the current Great Recession is that it could drive a stake through the heart of the academic nostrum known as the efficient-market hypothesis."[30]

Academics and policy makers argued along the same lines. Paul Volcker chimed in, saying it is "clear that among the causes of the recent financial crisis was an unjustified faith in rational expectations [and] market efficiencies."[31] Siegel (2010) said that "By 2007–2009, you had to be a fanatic to believe in the literal truth of the EMH".[32]

2.3. Capital Asset Pricing Model

The year 2014 was the 50th anniversary of the capital asset pricing model (CAPM). This model is still the cornerstone of MBA investment courses, and it is often the only asset pricing model taught in these programs. As a result, today the model is used to estimate the firms’ cost of capital and to evaluate the performance of managed portfolios.

The CAPM rests on three pillars[33]. The first one is Harry Markowitz's Portfolio Theory. The main point of this theoretical element is that investors only care about the mean and variance of their one-period investment return. As a result, investors choose "mean-variance-efficient" portfolios. The second and third pillars represent two assumptions introduced in Sharpe (1964), Lintner (1965) and Mossin (1966), who independently came to the same conclusions. The first assumption, introduced by Sharpe (1964), is the possibility of borrowing and lending infinite amounts at a risk-free rate. According to the second assumption, all investors agree on the distribution of expected returns at t+1, and therefore they all see the same opportunity set. Hence, all investors hold the same portfolio of risky assets, and this portfolio must be the market portfolio. As a result, the expected return of a portfolio consisting of riskless and risky assets can be calculated as a function of the market beta (systematic risk), the sensitivity (or correlation) of the asset's return to the fluctuations of the market portfolio.
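In standard notation, the resulting pricing relation is the familiar security market line, with $R_f$ the risk-free rate and $R_m$ the return on the market portfolio:

```latex
\mathbb{E}[R_i] = R_f + \beta_i \left(\mathbb{E}[R_m] - R_f\right),
\qquad
\beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}.
```

The expected excess return of any asset is thus proportional to its beta alone; no other characteristic of the asset is priced.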

The CAPM introduced a heavy set of assumptions which we will not examine in detail here. These assumptions immediately attracted an array of criticism. Fama and MacBeth (1973) was one of the first empirical studies to successfully test the returns forecasted by the CAPM. Other studies demonstrated the opposite, the most prominent of which was Black, Jensen & Scholes (1972). They found that low-beta portfolios outperformed market portfolios (and, vice versa, high-beta portfolios had lower returns), which led them to conclude that their evidence was “sufficiently strong to warrant rejection of the traditional form” of the CAPM.[34]

The most serious theoretical critique of the CAPM came from Richard Roll, who had previously been one of Fama's Ph.D. students at the University of Chicago. In Roll (1977), he highlighted the issue that the market portfolio must necessarily include every asset available in the economy; it cannot be represented by the S&P alone, as a broader definition is required, including other asset types such as real estate, bonds, precious metals, rare art, collections, human capital, etc. Many of these assets do not have readily observable market prices, therefore the market portfolio is by definition unknowable. As a consequence, the CAPM cannot be tested, a conclusion that was conceded even by Professor Sharpe.[35]

The financial literature accumulated empirical refutations of the CAPM. Fama himself revised his initial assessment and, with Fama and French (1992), put forward what MacKenzie called the most influential empirical critique[36]. According to the paper, the linear relationship between beta and average return predicted by the CAPM was confirmed only in the period 1941–1965, while it was refuted with data after 1965. Fama and French's paper has become known in the literature as "the death of beta", and it represents, together with Roll's critique, the most important blow to the CAPM. It is interesting to note that even Markowitz presented his formal dismissal of the CAPM. In Markowitz (2005), he contested the realism of two of the model's assumptions, namely that a) investors can borrow infinite amounts of money at the risk-free rate, and b) investors can short without limit (which allows them to obtain significant leverage). If the assumptions are made more realistic, then the market portfolio is no longer efficient, making passive asset management nonsensical[37].

One of the more disputed assumptions of the CAPM was the infinite access to liquidity. However, the role of liquidity in the global financial crisis of 2007-8 taught us a few lessons[38].

First, the assumption of infinite access to liquidity was clearly proven wrong by the crisis. Liquidity can suddenly dry up and evaporate. Moreover, the presence of a lender of last resort may introduce a moral hazard component into liquidity assessment. In this case, liquidity premia will be lower than in the absence of a lender of last resort, since the provider of liquidity of last resort will always provide support at prices that are sub-Bagehot standards[39].

Second, liquidity risk was mispriced before the crisis. Regardless of the origins of the issue, and of the tools used by the monetary authorities to address the crisis, investors observed low liquidity premia before the crisis and higher liquidity premia after it. This taught that standard theory needed to be updated in order to embed liquidity risk into asset valuation models. The lesson was also learnt by banking regulators, who moved from Basel II to Basel III, one of whose most important updates is the introduction of quantitative capital requirements for liquidity risk, which were completely absent in the previous version of the regulation.

Third, liquidity risk is particularly important for investments where asset-liability matching is a concern. Exposures to illiquid assets are usually suitable for long-term investors that are not subject to asset/liability mismatch risk; they hunt for opportunities where (il)liquidity premia may allow higher long-term returns. According to Pedersen, liquidity risk consists of three components: i) market liquidity risk: the risk of giving up a portion of the market price when urged to sell an asset quickly; ii) funding liquidity risk: due to asset-liability mismatch; iii) demand pressure risk: when, for example, a hedge fund needs to accommodate large demand pressure (buying low and selling high).[40]

2.4. Value at Risk

Value at Risk (VaR) is a simple market risk measurement technique developed at JP Morgan in 1994 to tell an executive how much a trading portfolio can lose, probabilistically speaking, that is, at a certain confidence level and over a given time horizon. The VaR gained tremendous success both in academia and industry thanks to two appealing characteristics. First, it provides a common, consistent measure of risk across different positions and risk factors (so it makes it possible to compare a fixed income position with an equity position)[41]. Second, it accounts for the correlations among positions, so that if two risks offset each other, the overall risk measure will be lower.
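Formally, for a loss random variable $L$ over the chosen horizon and a confidence level $\alpha$ (e.g., 95% or 99%), VaR is a loss quantile:

```latex
\operatorname{VaR}_{\alpha}(L)
\;=\;
\inf\{\, \ell \in \mathbb{R} : \Pr(L > \ell) \le 1 - \alpha \,\},
```

i.e., the smallest loss level that is exceeded with probability at most $1-\alpha$.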

2.4.1 Main Theoretical Critiques

A number of critiques have been raised against the VaR since its inception, the main ones are:

• Failure to capture fat-tail risks

As highlighted in Dowd (2002), the VaR was conceived to provide a statistical estimate valid 95% or 99% of the time. It does not provide any information regarding what happens on the remaining 5% or 1% of occasions, therefore when a tail event occurs, the VaR provides no indication. As a consequence, investment decisions based on the VaR will favor assets with a low VaR under most circumstances but potentially significant losses on rare occasions. In other words, the VaR introduces a distortion into basic risk-return analysis, since it breaks down the monotonic risk-return relationship once we enter tail territory.[42]

• VaR's lack of sub-additivity defies diversification

According to the sub-additivity property, the risk of the sum should not be greater than the sum of the risks; in other words, aggregating individual risks should not increase the overall risk. The VaR is not sub-additive except when built on top of normal or, more generally, elliptical distributions[43]. This provides an incentive to build less diversified portfolios, since a diversified portfolio may have a higher VaR than a less diversified one.
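This failure can be demonstrated with a standard textbook-style example (hypothetical numbers: two independent bonds, each defaulting with 4% probability for a loss of 100):

```python
import itertools

def var(dist, alpha=0.95):
    """Smallest loss l such that P(L > l) <= 1 - alpha.
    dist maps loss levels to probabilities."""
    for level in sorted(dist):
        if sum(p for loss, p in dist.items() if loss > level) <= 1 - alpha:
            return level
    return max(dist)

# One bond: loss 100 with probability 4%, zero otherwise.
bond = {0: 0.96, 100: 0.04}

# Portfolio of two independent such bonds.
portfolio = {}
for (l1, p1), (l2, p2) in itertools.product(bond.items(), repeat=2):
    portfolio[l1 + l2] = portfolio.get(l1 + l2, 0.0) + p1 * p2

print(var(bond))       # 0   -> each bond alone looks riskless at 95% VaR
print(var(portfolio))  # 100 -> the diversified portfolio has a higher VaR
```

Each bond's 4% default probability falls below the 5% tail, so its 95% VaR is zero; but the two-bond portfolio loses at least 100 with probability 7.84%, so its VaR jumps to 100, exceeding the sum of the individual VaRs.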

• Built looking at the past

The two main methods to calculate the VaR are based on past data. Historical simulation (the favorite of banks) simulates portfolio behavior over a preselected historical period and takes the worst loss. The variance-covariance method (the original method, more computationally intense) uses past market data to estimate future volatilities and correlations between portfolio components[44]. In both cases, if a crisis is preceded by a long period of low volatility (the pre-2007 period was labeled the "death of volatility" or the "great moderation"), the VaR will underestimate the real risk. In addition, if the institution can select the historical period over which the historical simulation is performed (as is the case in banking regulation), there is an incentive to cherry-pick the past data that minimize the capital requirement.
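The two approaches can be sketched on synthetic data (illustrative only: a normal daily return series; the constant 2.326 is the one-sided 99% normal quantile):

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, 750)  # ~3 years of synthetic daily returns

# Historical simulation: the empirical 1% worst daily outcome, as a loss.
hist_var_99 = -np.percentile(returns, 1)

# Variance-covariance: assume normality and scale the estimated std dev.
Z_99 = 2.326                              # one-sided 99% normal quantile
param_var_99 = Z_99 * returns.std() - returns.mean()

print(round(hist_var_99, 4), round(param_var_99, 4))
```

Both numbers are driven entirely by the look-back window: feed either method a calm period and the resulting VaR shrinks, regardless of the risks ahead.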

• It increases concentration risk, is pro-cyclical, and fails to capture systemic risk

The VaR provides an incentive to invest in high-return/low-VaR instruments, that is, securities that experienced low volatility in the recent past. The market players that use it as an investment strategy or a regulatory capital optimization strategy will gather their portfolios around the same positions. As Persaud (2000) has shown, the model encourages investors to identify calm areas in the financial sea in order to exploit their low-VaR characteristics and settle there for a while. As soon as the wind picks up in these spots, a generalized sell-off will occur and investors will move to the next placid spot. This creates an incentive to discriminate against fundamentally sound investments in favor of shakier ones with a low-VaR profile. Most importantly, such an incentive creates a self-feeding effect. A localized sell-off will increase the volatility of the security, triggering a contagion that could build up into a liquidity shock (haircuts rise and correlations go to 1). This is what happened, for example, during the crises of 1997 (Asia) and 1998 (Russia), where low VaR numbers were hit by a sudden increase in volatility, kicking off a liquidation cascade that led, in the end, to the LTCM default.[45]

This concentration risk was highlighted in Jorion & Taleb (1997), which according to Dowd (2002) is the most convincing argument on this issue, since it was written before the 1998 financial crisis.[46] At the same time, one could note that diversity in risk assessment would bring diversification to risk approaches and position-building, mitigating the pro-cyclicality issue and lowering systemic risk.

• It ignores the fundamental characteristics of assets

What matters for the VaR are not the intrinsic characteristics of an asset, but its recent past behavior. As an example, the VaR can treat a toxic CDO and a Treasury bond in the same way.

• It enables a false sense of security around highly leveraged portfolios

The issues above show why it is possible to build a portfolio whose VaR underestimates the real risks. Consequently, it is possible to engineer highly leveraged portfolios with a low VaR figure. To give an idea: before the crisis, on Wall Street and in the City of London, institutions following the VaR regulatory rules were able to design portfolios leveraged 100 to 1, that is, the bank had to post 1% of capital for its trading book.[47]

2.4.2 Adoption by Regulators

VaR was quickly adopted by regulators in their market risk capital frameworks to determine the regulatory capital required for the trading book of banking institutions. The 1996 Amendment to Basel I allowed banks to adopt an internal model based on a 10-day 99% VaR computed on a minimum of one year of historical data, with correlations permitted within and across asset families.[48] Basel II was released in 2004, and it basically introduced new requirements for credit risk (banking book) while keeping the same arrangements for market risk (trading book). The recommendations of the Basel Committee on Banking Supervision (BCBS) do not have the force of law, but the regulatory frameworks proposed by the Committee (Basel I and Basel II) were adopted by almost all supervisors worldwide, except for the US, where the SEC initially refused to substitute its own requirements. However, in 2004 the SEC also incorporated the VaR model as a capital charge calculation methodology for the trading book, following a methodology similar to Basel's.[49] The new rule incentivized a race toward the lowest capital figure. Before April 2004, the regulatory capital for toxic, illiquid assets (for example subprime CDOs) was much higher than for traditionally less risky assets (Treasuries, T-Bonds). The new rule allowed the two asset classes to be treated in the same way.
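In practice, the 10-day figure was typically obtained from a 1-day VaR via the square-root-of-time rule, which is valid only under i.i.d. returns (the dollar figure below is illustrative):

```python
import math

one_day_var_99 = 1_000_000                       # assumed 1-day 99% VaR, in USD
ten_day_var_99 = one_day_var_99 * math.sqrt(10)  # square-root-of-time scaling

print(round(ten_day_var_99))  # 3162278
```

The rule mechanically multiplies the 1-day number by about 3.16, so any underestimation at the daily horizon is carried over, scaled up, into the regulatory capital figure.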

2.4.3 The Challenge of the Financial Crisis to the VaR

The 1998 LTCM crisis had already raised tough challenges to the VaR as a measure of market risk. Risk controls at LTCM relied on a VaR model (one-day 99% VaR) which systematically failed to detect the fundamental market movements during the six months leading to the crash. The literature on this case is abundant,[50] and, probably because it was considered a unique episode (due to the idiosyncrasies of the hedge fund), it did not lead to a revision of the mainstream view of VaR. Above all, it did not represent a sufficient case to deter regulators from adopting VaR on a large scale.

The 2007-8 financial crisis severely challenged the VaR framework. Triana (2012) defends the thesis that the crisis was caused by the introduction of innovative toxic instruments bearing risks that were underestimated by VaR, the official measure followed by supervisors globally. The thesis is supported by solid evidence. In 2007, financial institutions were on average highly leveraged (30/1 leverage, with trading book leverage of 10/1), with mortgage positions in the trading book larger than the equity capital base[51]. Turner (2009) shows that this was a trend that started in the early 2000s. However, the capital requirements for the trading book did not account for such extreme leverage or for the riskiness of the new toxic positions in the trading book. For example, Turner (2009) shows that trading risk capital was 4-11% of total capital requirements, and the trading book's market risk capital requirements as a percentage of total capital requirements were in the range 0.1%-1.1%. The trading book capital requirements based on the 10-day 99% VaR severely underestimated the underlying risk, especially because of its inability to account for tail risk.

The failure of VaR to estimate the risks embedded in the trading books was immediately recognized by the BCBS in the publications that led to the release of Basel III. BCBS (2009) recognized the weaknesses of VaR in accounting for tail risk and large price movements developing over a long period of time: "the current VaR framework ignores differences in the underlying liquidity of trading book positions. In addition, these VaR calculations are typically based on a 99%/one-day VaR which is scaled up to 10 days. Consequently, the VaR capital charge may not fully reflect large daily losses that occur less frequently than two to three times per year as well as the potential for large cumulative price movements over periods of several weeks or months". The bias toward recent historical data and the inability to account for tail risk were reiterated in BCBS (2013): "the 10-day VaR calculation did not adequately capture credit risk or market liquidity risks; incentivised banks to take on tail risk; inadequately captured basis risk and proved procyclical due to its reliance on relatively recent historical data"; as well as in BCBS (2016): "A shift from Value-at-Risk (VaR) to an Expected Shortfall (ES) measure of risk under stress. Use of ES will help to ensure a more prudent capture of "tail risk" and capital adequacy during periods of significant financial market stress". Finally, BCBS (2013) recognized the violation of the assumption of unlimited liquidity supply upon which the regulatory VaR was built: "The recent financial crisis was characterised by a sudden and severe impairment of liquidity across a range of asset markets. As a result, banks were often unable to promptly exit or hedge certain illiquid risk positions without materially affecting market prices. This violated a key assumption that was implicit in the 10-day VaR treatment of market risk".
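The contrast between the two measures can be illustrated on synthetic heavy-tailed losses (illustrative only: Student-t draws; the 97.5% ES and 99% VaR confidence levels mirror the shift described in BCBS (2016)):

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic heavy-tailed daily losses (Student-t with 3 degrees of freedom).
losses = rng.standard_t(df=3, size=100_000) * 0.01

var_99 = np.percentile(losses, 99)                  # VaR: where the tail begins
tail = losses[losses >= np.percentile(losses, 97.5)]
es_97_5 = tail.mean()                               # ES: average loss inside the tail

print(round(var_99, 4), round(es_97_5, 4))
```

VaR only marks the threshold where the tail begins, while ES averages over the tail itself, so ES responds to how severe the extreme losses actually are.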

2.5. Dynamic Stochastic General Equilibrium

The 2007-2008 crisis has highlighted once again the inability of macroeconomists to forecast economic events. However, the debate about the adequacy of the tools used by mainstream macroeconomics has been going on for a long time and has involved severe ideological confrontations.

The orthodox approach is anchored in the New Keynesian Dynamic Stochastic General Equilibrium (DSGE) model, an approach based on the idea that the economy gravitates around a single equilibrium. The approach was upgraded over the years with additional features, such as the ability to deal with external disturbances like technological shocks.

DSGE has been criticized from at least three angles. First, situations of secular stagnation or a savings glut cannot be explained using a single macroeconomic equilibrium that is supposed to revert to its previous path after a crisis (consider here also the debate between Lawrence Summers and Ben Bernanke about the deep causes of the 2007-2009 Great Recession). Second, the model relies on a linear view of events, which is in evident contrast with the nonlinear events observed during 2007 and 2008. (As a side note, stochastic models used in finance incorporate the notion of jumps, so one could wonder whether mainstream macroeconomic models should not do the same.) Third, as highlighted in Borio (2014), financial factors were put aside in the study of business fluctuations. The financial crisis that began in 2007 brought in, on one side, the need to reincorporate financial factors into macroeconomic models. On the other side, it did so almost "elusively", by adding so-called "financial frictions" to the existing equilibrium macroeconomic models.

These criticisms are known to the profession but have never generated a remarkable reaction (Münchau 2015). At the same time, these models have become part of the standard toolbox used by the major players in the financial markets (as an example, the ECB uses DSGE models in its economic forecasts).

3. The Crisis of the Current Paradigm

The debate over the component elements of the current mainstream is controversial and generates stormy reactions. However, there are two conclusions that are difficult to oppose: 1) portfolio managers were on average unprepared to cope with the dramatic fluctuations of the financial crisis of 2007-8; 2) the financial regulation in place at the time did not prevent the financial crisis. According to Cliff Asness, few people think the markets are perfectly efficient; rather, they are aware of anomalies and constantly try to exploit them, albeit this is not so easy to do (Buttonwood 2015).

Pascal Blanqué, Chief Investment Officer at Amundi, has suggested that Modern Portfolio Theory depicted the financial markets as a heaven populated by "sacred cows" that dominated investment thinking and were proven wrong by the crisis. For example, in Blanqué (2014), he elaborates on the false promises of diversification, implemented in a way that did not deliver safe and good portfolio returns. He also points to the notion of the risk-free asset, which in theory should be a low-return asset with zero correlation with risky assets, while in practice we have observed US 10-year Treasuries violating these basic premises over the past three decades.

Triana (2011) has provided evidence that the crisis that started in 2007 brought to the surface serious malfunctions of the financial mathematical models broadly adopted across the industry. These models also provided a sense of false security, displaying the tendency to behave according to expectations in normal times and to break down in times of crisis, exactly the opposite of what one would expect from risk models. The adoption of sophisticated mathematical models is at the center of an endless methodological debate, mainly for three reasons. First, mainstream academia assumes that markets can be mathematized. The mathematization of finance began after World War II, under the propulsion of neoclassical economics, which fostered the formal treatment of rational and optimizing economic agents interacting together and generating a tractable and efficient equilibrium. In 1951, 2% of the pages of the American Economic Review contained an equation; in 1978 the percentage was 44%. The mathematization of finance followed in parallel the developments in the wider discipline of economics.[52] The trend of the mathematization of finance, and the risks that it embeds, are today recognized also by academia. Robert Shiller, for example, has affirmed that "theorists like models with order, harmony and beauty. […] Academics like ideas that will lead to econometric studies. […] People in ambiguous situations will focus on the person who has the most coherent model" (Buttonwood 2015). Dowd (2014) and Dowd (2002) refer to the same concern, and if we go slightly back in time, Jorion & Taleb (1997) represents an excellent debate, just before the LTCM debacle, around the presumptuous use of mathematics to model risky financial events. Likewise, Hoppe (1999) offers a critical assessment of LTCM's risk models, which considered statistically impossible the type of event that brought down the hedge fund (as a reminder: an 8-standard-deviation event should not happen during the entire lifetime of the universe; LTCM was hit by a 14-standard-deviation event; in 2007 we observed 25-standard-deviation events several days in a row). These studies warned against the transfer of mathematical and statistical models to the social sciences, where economic agents learn and react, making the environment non-stationary and prone to changes in behavioral patterns and correlations.

The second and third reasons are suggested by Turner (2009). If we assume, for the sake of the argument, that the events under scrutiny can be mathematized, are we placing misplaced reliance on these models, as happened with VaR before the 2007-8 crisis? Moreover, is top management able to understand the complexity underlying the mathematical models used for risk management, or are the latter used as a box-ticking exercise to communicate a false sense of assurance?
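To make the VaR point concrete, the following is a stylized historical-simulation VaR sketch, not any bank's actual model; the return series is synthetic (Gaussian, calm-period parameters chosen for illustration). Calibrated on quiet data, the model's "worst case" is dwarfed by a crisis-sized move:

```python
import random

def historical_var(returns, confidence=0.99):
    """Historical-simulation VaR: the loss exceeded on only
    (1 - confidence) of days in the lookback sample."""
    losses = sorted(-r for r in returns)
    idx = int(confidence * len(losses))
    return losses[min(idx, len(losses) - 1)]

random.seed(42)
# Hypothetical calm-period sample: 500 days of mild Gaussian returns.
calm = [random.gauss(0.0005, 0.01) for _ in range(500)]
var_99 = historical_var(calm, 0.99)
print(f"99% one-day VaR estimated from calm data: {var_99:.2%}")
# A crisis-style -8% day far exceeds the model's reported risk:
print(f"Crisis-day loss relative to VaR: {0.08 / var_99:.1f}x")
```

The design flaw the paragraph describes is visible in the sketch: the estimate is only as pessimistic as the lookback window, so a model fed tranquil data mechanically reports tranquility.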

Finally, some commentators would defend the position that financial regulation even exacerbated the crisis. Studies in this direction are Friedman & Kraus (2011) and Cooper (2008). The latter suggests that economics and finance need the kind of scientific revolution that physics witnessed with the contributions of Newton and Einstein. In the section above, we have shown that the 2007-8 crisis has severely undermined the current paradigm of modern finance. The current mainstream does not allow investment managers to be prepared for critical market fluctuations. The authors of this paper also believe that modern finance is not equipped to provide a solid theoretical framework for financial regulation either.

A paradigm crisis that calls for a scientific revolution naturally invites the ideas that Thomas Kuhn proposed for the scientific paradigms of theoretical physics. Kuhn (1996) defines a scientific paradigm as a framework of concepts, results, and procedures within which subsequent work is structured. If we adopt a Kuhnian view of the potential scientific revolution the discipline may need, three points are worthy of further elaboration. First, a theory never elucidates completely and with absolute precision all the challenges that it encounters, because it is bound to the empirical context in which it was conceived. Second, the falsification of a theory does not occur because of the emergence of a falsifying observation. “Instead, it is a subsequent and separate process that might equally be called verification since it consists in the triumph of a new paradigm over the old one.”[53] In the history of science, paradigm competition has been settled (meaning the rejection of a theory) not by falsification but by a more complex mixture of elements, defined by Kuhn as the “incommensurability of competing paradigms”. In this context: a) “the proponents of the competing paradigms will often disagree about the list of problems that any candidate for paradigm must resolve”; b) the new paradigm will involve partial usage of the terminology associated with the old one, and this will inevitably generate misunderstandings between the competing schools; c) the supporters of a theory see the world through a different prism than the supporters of the competing paradigms. “Before they can hope to communicate fully, one group or the other must experience the conversion that we have been calling a paradigm shift.”[54]

MacKenzie (2006) reports that the accumulation of a large number of anomalies in a theory of the natural sciences was seen by Kuhn as a sign of a coming scientific revolution, and that Jensen indeed suggested that the accumulation of EMH anomalies pointed to “a coming mini-revolution in the field”, albeit one that required a more accurate and general notion of market efficiency, rather than its abandonment.[55]

4. What Are the Options on the Table?

Buttonwood (2015) affirms: “The best hope lies with the behavioural school”. Many value investors (who have criticized the EMH since its inception and are among the main beneficiaries of its anomalies) believe that the capricious behavior of market prices can be explained by behavioral finance. Moreover, in academia, the body of knowledge supporting this school of thought has grown tremendously.

The major theoretical and empirical contributions of behavioral finance come from renowned scholars. Its fathers are the cognitive psychologists Daniel Kahneman and Amos Tversky, who focused on the cognitive biases and heuristics involved in economic decision-making. The former received the 2002 Nobel Prize in Economics precisely for his analysis of rationality in economics. Two additional key contributors are the economist Richard Thaler, who was able to connect human psychology with market anomalies, and Robert Shiller who, as mentioned above, received the Nobel Prize in Economics for his work on asset pricing irrationalities.

Behavioral economics has advanced our understanding of biases in decision making such as the endowment effect, sunk costs, and hyperbolic discounting. Thaler, for example, has challenged unrealistic mainstream assumptions such as the assumption that firms pursue profit maximization until marginal cost equals marginal revenue; surveys have shown that corporate executives do not follow such an approach at all.
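Hyperbolic discounting, for instance, can be illustrated in a few lines. The discount parameters below (k = 1.0 per day, delta = 0.95) are arbitrary illustrative choices; the point is the preference reversal, which exponential discounting can never produce:

```python
def hyperbolic(amount, days, k=1.0):
    """Hyperbolic discounting: value falls as 1 / (1 + k * delay)."""
    return amount / (1 + k * days)

def exponential(amount, days, delta=0.95):
    """Standard exponential discounting used in mainstream models."""
    return amount * delta ** days

# Today, $100 now beats $110 tomorrow under hyperbolic discounting...
assert hyperbolic(100, 0) > hyperbolic(110, 1)
# ...but push both options 30 days out and the preference reverses:
assert hyperbolic(100, 30) < hyperbolic(110, 31)
# Exponential discounting cannot reverse: the ratio of the two
# discounted values is the same at every horizon.
assert (exponential(100, 0) > exponential(110, 1)) == \
       (exponential(100, 30) > exponential(110, 31))
```

This time-inconsistency is exactly the kind of systematic deviation from the rational benchmark that behavioral studies document in the laboratory.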

Behavioral economics has been around for 40 years, but its conclusions were long dismissed by the mainstream. Behavioral studies have found empirical evidence against market efficiency and have accumulated a large body of laboratory experiments demonstrating human biases in decision making. However, these studies are still considered impractical as an explanation of the behavior of an entire economy. Moreover, behavioral economics has not yet produced a coherent model that generates testable predictions.

The main criticisms of behavioral finance came, predictably, from the EMH advocates, who fired back. Fama (1998) holds that in the long term market efficiency survives the behavioral challenge: many short-term anomalies compensate each other, canceling out in the long term. Moreover, some anomalies can be due to methodology, and tend to disappear with changes in technique.

Even if behavioral economics has not been accepted by the mainstream, its influence on policy making has grown. The US government has adopted its core ideas in reforms such as consumer protection. This has been another source of criticism. According to the EMH, there exists an optimal market equilibrium that dispenses with the need for government intervention. If the EMH is flawed, behaviorists reason, then we need the government to tow the economy back to the optimal equilibrium. This is the reason why behavioral economics has often served as the justification for increased policy making to fix supposed market failures, which has attracted further criticism of behavioral economics from the advocates of unregulated markets.

There are also critics of behavioral economics outside the realm of the EMH advocates. For example, Frydman and Goldberg (2011) argue that both the rational and the behavioral theories of the market rest on the notion of rational expectations. Rationalists believe the framework provides exact predictions, while behaviorists believe human behavior is biased away from the rational benchmark, so that policies are required in order to correct suboptimal human behavior toward the benchmark. In other words, both camps believe there is an optimal benchmark, and that if decision-making is purged of biases, markets can become fully predictable. As an alternative, the authors present a framework based on imperfect knowledge economics. On this view, the 2007 bubble was not fueled by herd behavior, but rather by market players’ attempts to interpret (imperfectly) the economic fundamentals. This is consistent with Borio (2014), according to whom we should move away from rational expectations: although legitimate as a modeling device, pretending that economic agents have a full understanding of the economy is unrealistic. Relaxing the rational-expectations assumption does not imply bringing irrational behavior or model inconsistency into the model. Furthermore, acknowledging the existence of heterogeneous and incomplete knowledge, coupled with the need to deal with fundamental uncertainty, brings more realism to the model.

Besides Rizzo (2012), there is limited Austrian research in relation to behavioral economics. Austrian economics intrinsically provides an alternative to both rational and behavioral theory, namely a view of the economic agent centered not on a deterministic utility maximizer (as with rational expectations), but rather on the entrepreneurial function as the equilibrating factor.[56] Entrepreneurship and economics have always had an awkward relationship. In economics, entrepreneurship does not belong to the mainstream, since rational theory is based on the notion of equilibrium and therefore has no need for equilibrating forces. Conversely, management scholars are well familiar with the work of Kirzner, Schumpeter, Baumol, etc. In that domain, entrepreneurship is one of the pillars of the ‘paradigm’.

Buttonwood (2015) suggests that there are alternatives to behavioral finance, such as the adaptive market hypothesis (Lo) and the fractal market hypothesis (Joshi). In addition, there are alternative explanations of the momentum effect, such as the presence of a principal-agent problem (Woolley and Vayanos).

5. Conclusions

In this paper we have shown why mainstream modern finance has been challenged by the 2007-8 crisis and is undergoing a paradigm crisis. We are also observing the inability of the tools used by standard macroeconomics to provide reliable forecasts. In both disciplines, critics have launched a disorganized offensive, but for the moment orthodoxy still rules.

Regulators have also admitted the inadequacy of the pre-crisis regulatory environment. Still, in order to address the crisis and guarantee financial stability in the future, they have added stacks of new requirements based on the theories belonging to modern finance. What certainty do we have that the same flaws that led to error ten years ago are not at work today?

Thomas Kuhn taught that a paradigm shift can only occur when a new paradigm exists and has gained acceptance. Today, the editors of this journal do not yet see any viable alternative that can replace the current paradigm. The idea to launch this journal comes from this realization and from the hope to create a forum that fosters creativity and the audacity to challenge the orthodoxy.

We believe that a methodological approach that welcomes any mathematization of financial events should be questioned, and that greater discipline should be applied in testing whether financial models are able to cope with reality. Researchers should not hesitate when facing resistance from traditionalists.[57]

It is paramount to advance quantitative finance by analyzing what went wrong with quant models. We believe that an ambitious research program should make the analysis of the discipline’s failures standard practice, emphasizing model flaws and potential unintended consequences.

We also invite researchers to challenge the rationale behind new financial regulation based on models that have failed in the past. The goal of regulators should not be to eliminate financial crises or the exposure to financial risk altogether. After all, Cliff Asness warned that: “Making people understand that there is a risk (and a separate issue, making them bear that risk) is far more important, and indeed far more possible than making a riskless world. And if I may go further, trying to create and worse, giving the impression you have created, a riskless world makes things much more dangerous.”

Bibliography:

Asness, C.S. & Liew, J. (2014) “The Great Divide over Market Efficiency”, Institutional Investor, March 3rd, 2014.

Asness, C.S. (1994) “Variables that explain stock returns.” Ph.D. Dissertation, University of Chicago


Ball, Ray & Brown, Philip (1968) “An Empirical Evaluation of Accounting Income Numbers.” Journal of Accounting Research 6: 159–178.

BCBS (1996), “Overview of the amendment to the capital accord to incorporate market risks”, document 221, January 1996, http://www.bis.org/publ/bcbsc221.pdf

BCBS (2009), “Guidelines for computing capital for incremental risk in the trading book”, Bank for International Settlements 2009, http://www.bis.org/publ/bcbs149.pdf

BCBS (2013), “Fundamental review of the trading book: A revised market risk framework”, Bank for International Settlements 2013, http://www.bis.org/publ/bcbs265.pdf

BCBS (2016), “Minimum capital requirements for market risk”, Bank for International Settlements 2016, http://www.bis.org/bcbs/publ/d352.pdf

Bernstein, Peter L. (1992). Capital Ideas: The Improbable Origins of Modern Wall Street. Free Press.

Bernstein, Peter L. (1996), Against the Gods: The Remarkable Story of Risk, New York: John Wiley & Sons.

Black, Fischer (1972) “Capital Market Equilibrium with Restricted Borrowing.” Journal of Business 45: 444–454.

Black, Fischer, Michael C. Jensen, & Myron Scholes (1972) “The Capital Asset Pricing Model: Some Empirical Tests.” In Studies in the Theory of Capital Markets, ed. M. Jensen. Praeger.

Blanque, Pascal (2014), Essays in Positive Investment Management, ISBN: 978-2-7178-6700-8.

Borio, Claudio (2014), “The financial cycle and macroeconomics: What have we learnt?”, Journal of Banking & Finance, August 2014

Buttonwood (2014), “Rational or not? Markets probably aren’t efficient but that doesn’t make them easy to beat”, The Economist, March 4th, 2014.

Buttonwood (2015), “What’s wrong with finance. An essay on what economists and financial academics learned, and haven’t learned, from the crisis. The best hope lies with the behavioural school”, The Economist, May 1st, 2015.

Brealey, Myers, Allen or BMA (2011), Principles of Corporate Finance, McGraw-Hill/Irwin, 10th Edition, 2011.

Chopra, N. Lakonishok, J. & Ritter, J. (1992) “Measuring Abnormal Performance: Do Stocks Overreact?”, Journal of Financial Economics 31, 235–86.

Cooper, George (2008), The Origin of Financial Crises: Central Banks, Credit Bubbles, and the Efficient Market Fallacy, Vintage, 1st ed., October 29, 2008.

Crouhy, M., Galai, D., & Mark, R. (2001). Risk Management. McGraw-Hill, 45-51.

Davies, Greg B. & De Servigny, Arnaud (2012), Behavioral Investment Management, 2012

Davis, E. P. (1999). Russia/LTCM and Market Liquidity Risk, Bank of England, 13-17.

DeMiguel, V., Garlappi, L., Uppal, R. (2009). “Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy?” Review of Financial Studies 22 (5), pp.1915-1953.

Dowd, K. (2002), An Introduction to Market Risk Measurement, Chichester and New York, John Wiley & Sons, Paperback.

Dowd, K. (2014), “Math Gone Mad: Regulatory Risk Modeling by the Federal Reserve”, Policy Analysis – Cato Institute, September 3, 2014 | Number 754

Fama, Eugene F. & Kenneth R. French (1988) Permanent and temporary components of stock prices. Journal of Political Economy 96, 246–73.


Fama, Eugene F. & French, Kenneth R. (1992) “The Cross-Section of Expected Stock Returns”. Journal of Finance 47 (2): 427–465

Fama, Eugene F. (1965) “The behavior of stock market prices”. Journal of Business 38, 34–105.

Fama, Eugene F. (1998), “Market efficiency, long-term returns, and behavioral finance”, Journal of Financial Economics 49 (1998), 283–306.

Fama, Eugene F. (2009), “Fama on Market Efficiency in a Volatile Market”, Eugene Fama interviewed by David Salisbury, Chairman of DFA Europe, on Aug 11th, 2009.

Fama, Eugene F., & MacBeth, James D. (1973) “Risk, Return, and Equilibrium: Empirical Tests.” Journal of Political Economy 81: 607–636.

Fisher, Phillip (1996), Common Stocks and Uncommon Profits, Wiley, 1996.

Fox, Justin (2009), The Myth of the Rational Market: A History of Risk, Reward, and Delusion on Wall Street, Harper Business, 2009

Friedman, Jeffrey & Kraus, Wladimir (2011), Engineering the Financial Crisis: Systemic Risk and the Failure of Regulation, University of Pennsylvania Press, 2011

Frydman, Roman & Goldberg, Michael D. (2011), Beyond Mechanical Markets: Asset Price Swings, Risk, and the Role of the State, Princeton University Press (February 27, 2011)

GAO (1999), “Long-Term Capital Management: Regulators Need to Focus Greater Attention on Systemic Risk”, GGD-00-3, Oct 29, 1999,http://www.gao.gov/products/GGD-00-3

Greenwald, Kahn, Sonkin, van Biema (2001), Value Investing, Wiley Finance.

Grossman, S. J. & Stiglitz, J. E. (1980), “On the Impossibility of Informationally Efficient Markets,” American Economic Review 70 (June 1980), pp. 393–408.

Haugen, R.A. & Lakonishok, J. (1988). “The Incredible January Effect: The Stock Market’s Unsolved Mystery”. Dow Jones-Irwin.

Hilsenrath, Jon E. (2004), “Stock Characters: As Two Economists Debate Markets, The Tide Shifts”. Wall Street Journal, Updated Oct. 18, 2004

Hoppe, R. (1999), “Finance is not physics”. Risk Professional, October 1999. (Vol 1, No. 7)

Iyiola, Munirat, Nwufo (2012), The modern portfolio theory as an investment decision tool, Journal of Accounting and Taxation, Vol.4(2), pp. 19-28 , March 2012.

Jegadeesh, Narasimhan & Titman, Sheridan (1993), “Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency”, The Journal of Finance, Vol. 48, No. 1 (Mar., 1993), pp. 65-91

Jegadeesh, Narasimhan & Titman, Sheridan (2001) “Profitability of Momentum Strategies: An Evaluation of Alternative Explanations.” Journal of Finance 56: 699–720.

Jorion, Philippe (2000), “Risk Management Lessons from Long-Term Capital Management”, European Financial Management, 2000.

Jorion, P. & Taleb, N. (1997), “The Jorion/Taleb debate”, Derivatives Strategy, April 1997, http://www.derivativesstrategy.com/magazine/archive/1997/0497fea2.asp

Ardalan, Kavous (2008), On the Role of Paradigms in Finance, Ashgate.

Kendall, M.G. (1953), “The Analysis of Economic Time Series, Part I. Prices,” Journal of the Royal Statistical Society 96 (1953), pp. 11–25.

Kirzner, I. (1979). Perception, Opportunity, and Profit: Studies in the Theory of Entrepreneurship. Chicago: University of Chicago Press.


Kirzner, I. (1997) “Entrepreneurial discovery and the competitive market process: An Austrian approach”. Journal of Economic Literature 35: 60–85.

Kuhn, T. S. (1996). The Structure of Scientific Revolutions. 3rd ed. Chicago(IL): University of Chicago Press.

Lintner, John (1965) “The Valuation of Risk Assets and the Selection of Risky Investments in Stock Portfolios and Capital Budgets.” Review of Economics and Statistics. 47:1, pp. 13–37.

Lo, Andrew (2007), “Efficient Markets Hypothesis”, in L. Blume and S. Durlauf, The New Palgrave: A Dictionary of Economics, Second Edition, 2007. New York: Palgrave Macmillan.

Lord Turner (2009), The Turner Review, UK Financial Service Authority, March 2009, http://www.fsa.gov.uk/pubs/other/turner_review.pdf

Lowenstein, Roger (2000). When Genius Failed: The Rise and Fall of Long-Term Capital Management. Random House.

Lowenstein, Roger (7 June 2009). “Book Review: ‘The Myth of the Rational Market’ by Justin Fox”. Washington Post. Retrieved 5 August 2011.

MacKenzie, Donald (2006), An Engine, Not a Camera: How Financial Models Shape Markets, 2006, The MIT Press, Cambridge, Massachusetts.

Markowitz, Harry M. (2005), “Market Efficiency: A Theoretical Distinction and So What?” Financial Analysts Journal, September/October 2005, Vol. 61, No. 5: 17-30.

Mehrling, Perry G. (2010), The New Lombard Street: How the Fed Became the Dealer of Last Resort, Princeton University Press (November 28, 2010)

Mossin, Jan (1966) “Equilibrium in a Capital Asset Market,” Econometrica, Vol. 34, No. 4, October 1966, pp. 768-783.

Münchau, Wolfgang (2015), “Macroeconomists need new tools to challenge consensus” , The Financial Times, April 12, 2015

Murphy, J. Michael (1977), “Efficient Markets, Index Funds, Illusion, and Reality”, The Journal of Portfolio Management, Fall 1977, Vol. 4, No. 1: pp. 5–20

Nocera, Joe (5 June 2009). “Poking Holes in a Theory on Markets”. New York Times. Retrieved 8 June 2009

Pastor, L. & Veronesi, P. (2006), “Was There a NASDAQ Bubble in the Late 1990s?”, Journal of Financial Economics 81, 61–100.

Pedersen, Lasse Heje (2015), Efficiently Inefficient: How Smart Money Invests and Market Prices Are Determined, Princeton University Press, ISBN: 9780691166193

Persaud, Avinash (2000), “Sending the herd off the cliff edge: the disturbing interaction between herding and market-sensitive risk management practices”, Institute of International Finance, Washington, 2000. This essay won first prize in the Institute of International Finance’s Jacques de Larosiere Awards in Global Finance 2000. http://www.bis.org/publ/bppdf/bispap02l.pdf

Posner (2010), “After the Blowup”. The New Yorker. 11 January 2010. Retrieved 12 January 2010.

Poterba, J. & Summers, L. (1988). Mean reversion in stock returns: evidence and implications. Journal of Financial Economics 22, 27–60.

Rich, Robert (2013), “The Great Recession of 2007–09”, Federal Reserve Bank of New York, November 22, 2013, http://www.federalreservehistory.org/Events/DetailView/58

Rizzo, Mario J. (2012). “Austrian Economics Meets Behavioral Economics: The Problem of Rationality” (2012)


Roll, Richard (1977). “A Critique of the Asset Pricing Theory’s Tests. Part I: On Past and Potential Testability of the Theory.” Journal of Financial Economics 4: 129–176.

Samuelson, Paul A. (1965). “Proof That Properly Anticipated Prices Fluctuate Randomly.” Industrial Management Review 6, no. 2: 41–49.

Samuelson, Paul A. (1974), “Challenge to Judgment”, Journal of Portfolio Management, 1 (1): 17-19.

Scott (2011), “Is Portfolio Theory Harming Your Portfolio?” The Journal of Applied Research In Accounting And Finance, Vol. 6, No. 1, pp. 2-13, 2011.

SEC (2004), “Alternative Net Capital Requirements For Broker-Dealers That Are Part Of The Consolidated Supervised Entities”, 17 CFR Part 240, http://www.sec.gov/rules/final/34-49830.pdf

Sharpe, William F. (1964), “Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk.” Journal of Finance. 19:3, pp. 425–42.

Shiller, Robert J. (1988), “Portfolio Insurance and Other Investor Fashions as Factors in the 1987 Stock Market Crash.” NBER Macroeconomics Annual: 287–295.

Shiller, Robert J. (2008), The Subprime Solution: How Today’s Global Financial Crisis Happened, and What to Do about It, Princeton University Press, Aug 4, 2008

Shleifer, Andrei. 2000. Inefficient Markets: An Introduction to Behavioral Finance. Oxford University Press.

Siegel, Laurence B. (2010). “Black Swan or Black Turkey? The State of Economic Knowledge and the Crash of 2007–2009”. Financial Analysts Journal. 66 (4): 6–10.

Smith, Mark (2003), A History of the Global Stock Market, The University of Chicago Press, 2003.

Spierdijk, Laura & Bikker, Jacob (2013), “Mean Reversion in Stock Prices: Implications for Long-Term Investors”, Journal of Investment Strategies Volume 2/Number 1, Winter 2012/13; http://www.risk.net/digital_assets/6129/jis_bikker_web.pdf


Triana, Pablo (2012), The Number That Killed Us: A Story of Modern Banking, Flawed Mathematics, and a Big Financial Crisis, Wiley.

Volcker, Paul (27 October 2011). “Financial Reform: Unfinished Business”. New York Review of Books. Retrieved 22 November 2011.

Notes:

1. Turner (2009): “The predominant assumption behind financial market regulation – in the US, the UK and increasingly across the world – has been that financial markets are capable of being both efficient and rational and that a key goal of financial market regulation is to remove the impediments which might produce inefficient and illiquid markets”.

2. See Rich (2013).

3. See FDIC: https://www.fdic.gov/bank/historical/bank/2007/index.html

4. See Ardalan (2008), p. 10, for an overview of the literature for each one of these theories.

5. See Bernstein (1996), pp. 250-51.

6. See Bernstein (1996), pp. 257-58.

7. Consider, as an example, Warren Buffett’s valuation of the Washington Post, when he purchased the newspaper in 1973. See Greenwald, Kahn, Sonkin, van Biema (2001), p. 11.


8. Alternative measures to variance have been proposed. See, for example, Dowd (2002), p. 17, box 2.1.

9. The historical environment of low interest rates around 2016, the low margins the active management industry had to face, and the strong performance of index- and ETF-based investment strategies have revamped the debate.

10. See Fisher (1996), p. 245.

11. See Fama (1976).

12. See Cootner (1962; 1964), Fama (1963; 1965a), Fama and Blume (1966), and Osborne (1959), all cited in Lo (2007).

13. See Jensen (1978, p. 95), quoted in MacKenzie (2006), p. 95.

14. See Lo (2007), where it is also possible to find the main references in the literature for such phenomena; the classical studies are DeBondt and Thaler (1985), Poterba and Summers (1988), Summers (1986), Conrad & Kaul (1988), Chopra, Lakonishok and Ritter (1992), and Lehmann (1990).

15. See Spierdijk and Bikker (2013).

16. Lo (2007) also recommends Banz (1981), Keim (1983), Roll (1983), and Rozeff and Kinney (1976). We also recommend Haugen and Lakonishok (1988) and Chopra, Lakonishok, Ritter (1992).

17. See BMA (2011), p. 322.

18. In a 2009 interview, Fama affirmed: “Well there is evidence that there is somewhat more momentum in stock returns that can’t easily be explained by a risk theory-that gives me a little trouble…then there is another one that says that the market returns following earnings announcements tend to persist a little more than you would expect if the markets were completely efficient-but neither of these present a lot of opportunity on which a lot of money can be made-because it involves so much trading and trading costs…but as far as I know those two are the biggest contradictions or potentially biggest contradictions of market efficiency…”. Fama (2009).

19. See MacKenzie (2006), p. 102.

20. BMA (2011) also recommends Bernard and Thomas (1990) and Chordia and Shivakumar (2005).

21. See BMA (2011), p. 325.

22. See Roll (1994), cited in BMA (2011), p. 322.

23. See MacKenzie (2006), p. 97.

24. “The dominant response of Eugene Fama and his University of Chicago students to the growing list of anomalies was to suggest that the fault lay in the Capital Asset Pricing Model, not in the efficient-market hypothesis”. MacKenzie (2006), p. 98.

25. See MacKenzie (2006), p. 327, note 7.

26. See Fox (2009), p. 232.

27. See Smith (2003), p. 234.

28. See Hilsenrath (2004).

29. See Nocera (2009).

30. See Lowenstein (2009).

31. See Volcker (2011).

32. See Siegel (2010), p. 7.

33. See Bernstein (1996), pp. 257-58.

34. MacKenzie (2006), p. 90. Black (1972) also represented an attempt to provide an alternative form of CAPM in which Sharpe’s assumption of unlimited borrowing at the risk-free rate was dropped.

35. See MacKenzie (2006), p. 93.

36. See MacKenzie (2006), p. 91.

37. See Bernstein (1992), p. 129.

38. See Blanque (2014), p. 31. Although the liquidity crisis is generally located around 2008, the first signals reached the public in 2007: on August 9, 2007, BNP Paribas suspended withdrawals from three hedge funds citing “a complete evaporation of liquidity”.

39. On the role of central banks as providers of liquidity of last resort, see Mehrling (2010).

40. See Pedersen (2015), p. 42.

41. See Dowd (2002), pp. 10, 25.

42. This distortion is overcome only in specific circumstances, such as when risks are elliptically distributed or rankable by first-order stochastic dominance, empirically not very common situations. See Yoshiba and Yamai (2001), pp. 16-17, referenced in Dowd (2002), p. 26.

43. See Artzner et al. (1999), p. 217, quoted in Dowd (2002), p. 27. This study sets out the properties that a risk measure must satisfy in order to be coherent, that is, in order to correctly reflect diversification effects and facilitate decentralized decision-making.

44. See Triana (2012), p. 17.

45. See Triana (2012), p. 29.

46. Other studies supporting this position are: Danielsson (2001), Danielsson and Zigrand (2001), Danielsson et al. (2001), Basak and Shapiro (2001).

47. See Triana (2012), p. xvii.

48. See BCBS (1996): “The main feature of the 1995 Amendment was to respond to the industry’s request to allow banks to use proprietary in-house models for measuring market risks as an alternative to a standardized measurement framework originally put forward in April 1993”.

49. See SEC (2004).

50. See Davis (1999), GAO (1999), Lowenstein (2000), Crouhy, Galai (2001), Triana (2012).

51. See Triana (2012).

52. See MacKenzie (2006), p. 7. Part of these developments was Friedman’s 1953 “The Methodology of Positive Economics,” and the debate that followed (about, among other subjects, the oversimplistic assumptions that the approach allowed), especially with its main opponent, Paul Samuelson, the champion of mathematical rigor in economics. It is not the goal of this paper to resuscitate that methodological debate, but it is interesting to highlight that Samuelson (1974) argued that active portfolio managers should “go out of business – take up plumbing, teach Greek, or help produce the annual GNP by serving as corporate executives” and that investors should invest their capital in highly diversified and passively managed funds.

53. See Kuhn (1996), p. 147.

54. See Kuhn (1996), pp. 148-150.

55. See MacKenzie (2006), p. 97.

56. See Kirzner (1979) and Kirzner (1997).

57. The anecdote recounted by Münchau (2015) is representative: “Revolutions are always countered by traditionalists. It is instructive to go back to one episode, concerning the German mathematician Richard Dedekind. He was one of the rebels of his time, and used a new technique to prove an important result. His method would be considered standard stuff today, but was revolutionary then. The response from the traditionalists was harsh. Leopold Kronecker, another German mathematician, decried Dedekind’s proof as useless on the grounds that it had no practical applications. Dedekind retorted, not helpfully, that he wanted “to put thoughts in the place of calculations”.”

58. See Buttonwood (2015).


Journal of New Finance, January-March 2017

A Call for Model Modesty in Banking and Insurance

Mark Calabria
Cato Institute

Abstract
Modern financial regulation has increasingly come to rely upon the imposition of capital standards. To some degree capital regulation has become synonymous with prudential regulation. At its most basic level, capital, in the form of shareholder equity, provides a cushion to cover unexpected increases in liabilities or decreases in asset values.

Keywords
risk-weighted capital, tail risks, risk models, sovereign debt, financial regulation

JEL Classification
G01, G18, G23

Creditors, including policyholders in the case of insurance, could of course absorb unexpected losses, but for a variety of reasons corporate shareholders, who are generally well diversified, are usually considered to be in the best position to carry that risk – at least in terms of being in a first-loss position.

Naturally, a substantial amount of prudential regulation could go away if only governments were willing to impose losses on creditors in the event of insolvency. Doing so would also have the benefit of increasing the overall monitoring of financial institutions, since creditors would then have more powerful incentives to monitor. A credible resolution regime can assist in this regard, but ultimately government officials must commit not to, or be prohibited from, rescuing creditors.

Historically, capital or solvency standards were either vague or based upon simple leverage ratios. For example, a simple leverage ratio could take the form of 5 percent of total assets. Capital can also, in part, take the form of a fixed minimum level of owner equity.

For instance, insurance companies typically are required to maintain $2 to $3 million in minimum capital regardless of activity levels.[1] All U.S. states require capital in excess of these minimums as determined by risk-weighting. The National Association of Insurance Commissioners (NAIC) developed model risk-based capital standards for life-health and property-liability companies in the early 1990s. Specific risk-based standards were further developed for health insurers in the late 1990s. Federal bank regulators embraced a similar approach under the Basel Accords.

Risk-weighted capital standards begin from the perhaps obvious observation that not all assets and liabilities are equally risky. NAIC’s model

formula for company risk includes the following elements: 1) interest rate risk; 2) asset credit risk; 3) pricing/underwriting risk; 4) business risk; and 5) off-balance sheet guarantees.
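To make the contrast between a flat leverage ratio and a risk-weighted requirement concrete, here is a minimal sketch. The asset classes and risk weights are hypothetical illustrations, not NAIC or Basel factors.

```python
# Hypothetical balance sheet: asset class -> (amount, risk weight).
# Both the amounts and the weights are invented for illustration.
balance_sheet = {
    "cash":       (200.0, 0.00),
    "gov_bonds":  (300.0, 0.05),
    "corp_bonds": (300.0, 0.10),
    "equities":   (200.0, 0.30),
}

total_assets = sum(amount for amount, _ in balance_sheet.values())
leverage_capital = 0.05 * total_assets  # flat 5% of total assets
risk_weighted_capital = sum(amount * weight
                            for amount, weight in balance_sheet.values())

print(round(leverage_capital, 2))       # 50.0
print(round(risk_weighted_capital, 2))  # 105.0
```

The flat ratio charges every asset equally; the risk-weighted version charges nothing for cash and most for equities, so the two can diverge sharply for the same balance sheet.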

In addition to setting capital standards, risk models can also be used for projecting cash flows (liquidity), calculating the fair value of liabilities, pricing of risk (and hence premiums), as well as general business planning. Although the discussion here will focus on capital standards, the presented concerns apply to these other areas as well.

Regulatory risk models are basically computer algorithms that estimate potential outcomes.[2] The more complex models may also offer a range of probabilities for those outcomes. The goal of these models is to help calculate the capital that would be required to avoid insolvency, with a given degree of certainty.

These models can also be reverse engineered, in the sense that a target probability of failure can be chosen first, which then allows an estimated level of capital to be calculated to achieve the target probability of insolvency. As an aside, few, if any, policymakers or industry participants would advocate a target probability of failure of zero. Embedded in current capital risk models is some low, but positive, probability of failure. A zero probability is likely both impossible and extremely costly.
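This reverse engineering can be sketched in a few lines; the numbers are hypothetical and, for simplicity, the sketch assumes normally distributed losses, which is precisely the kind of assumption criticized below.

```python
from statistics import NormalDist

def required_capital(expected_loss, loss_std, target_failure_prob):
    """Capital such that losses exceed it only with the chosen target
    probability, under a (simplistic) normal-loss assumption."""
    z = NormalDist().inv_cdf(1.0 - target_failure_prob)
    return expected_loss + z * loss_std

# Hypothetical firm: expected annual loss 2, loss std 5, 0.1% failure target.
print(round(required_capital(2.0, 5.0, 0.001), 2))
```

A lower target probability of failure pushes the quantile further into the tail and so demands more capital; a target of exactly zero would demand unbounded capital, which is why no one advocates it.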

If a risk model is wrong, which is certain to be the case to some degree, then financial companies could be at a much greater risk of failure than recognized or they could misallocate excess capital. There are at least three general avenues by which a risk-based capital model can be “wrong.”

The first of these errors is the model itself. A model is a simplified version of reality. Does the simplification eliminate the actual drivers of failure? Is the model, even when simplified, a reliable guide to reality? For instance, in the area of mortgage regulation, it is well established that borrower credit and loan-to-value are the predominant drivers of residential mortgage default; yet the promulgation of Dodd-Frank’s qualified residential mortgage rule abandoned these factors for less predictive, yet more politically acceptable, drivers of default.[3] A similar model choice was incorporated in the treatment of sovereign default in the Basel II framework.

It had been well understood before the Euro crisis that Greece had a long history of serial default, yet regulators chose to assign a zero-risk weight to Greek sovereign debt. Put simply, regulators in these instances chose models of financial risk they knew to be faulty. Perhaps more troubling is that many regulatory models are flawed but in a manner not widely recognized: Donald Rumsfeld’s “unknown unknowns”.

While we can never really know if a particular model is “correct,” we can gauge the assumptions behind that model. Those assumptions are likely to be built upon other models as well as observations of past data.

One of the biggest errors going into the crisis was the widely used assumption that loss probabilities were “normally distributed,” that is, that they followed the well-known “bell-shaped” probability distribution. It has long been recognized that the normal distribution has “thin” tails, which assign a relatively low probability to extreme observations. Distributions with “fatter” tails have also long been understood to better represent financial markets.

The choice of a normal distribution was made not for the sake of accuracy but for the sake of tractability. Normal distributions have a number of characteristics that make them useful in a manner that other distributions are not.
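The thin-tail point can be illustrated numerically. The sketch below is a toy comparison, not a calibrated market model: it contrasts the tail of a standard normal with a simple two-regime mixture (mostly calm days, occasionally turbulent ones), a common textbook stand-in for fat tails.

```python
from statistics import NormalDist

std_normal = NormalDist()

def mixture_cdf(x, p_calm=0.99, sigma_stress=5.0):
    """P(X <= x) for a toy fat-tailed mixture: 99% of days drawn
    from N(0, 1), 1% from N(0, 5). Parameters are illustrative."""
    return (p_calm * std_normal.cdf(x)
            + (1.0 - p_calm) * NormalDist(0.0, sigma_stress).cdf(x))

for k in (3, 4, 5):
    thin = std_normal.cdf(-k)
    fat = mixture_cdf(-k)
    print(f"{k}-sigma loss: normal {thin:.1e}, mixture {fat:.1e}")
```

Under the normal assumption a 5-sigma loss is a once-in-millennia event; under even this crude mixture it is thousands of times more likely, which is exactly the gap that sank normal-based capital models.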

Quite simply, going into the crisis certain model characteristics were chosen because they yielded unique, but inaccurate, solutions; whereas more accurate assumptions would have


offered less “useful” solutions.

Even if one has developed a model that

includes the appropriate variables, both the impact of those variables and the relationship between those variables could be inaccurate. For instance, one can look at mortgage defaults as a function of house prices.

We could represent this in functional form as D = f(P) = B·P, where D is defaults, B is the sensitivity of defaults to prices, and P is prices. It was well understood before the financial crisis that house prices influenced defaults, so that model was sound. What was not sound was the assumed relationship B. The sensitivity of mortgage defaults to house prices was significantly underestimated prior to the crisis.

As a result, financial institutions held far less capital behind residential mortgages than was needed. One reason this relationship was misunderstood was the lack of sufficient data on subprime mortgage performance. Extensive data on prime mortgage performance go back several decades, but data on subprime performance only go back to the early 1990s. Analysts commonly used estimates derived from prime mortgages in their calculations of subprime performance.

These estimates proved wildly inaccurate. But since they gave what at the time appeared to be reasonable results, they were widely accepted.
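A toy calculation, with made-up numbers, shows how much an underestimated B matters in the linear model above:

```python
def default_rate(base_rate, sensitivity, price_drop):
    """Toy linear model: the default rate rises with house-price
    declines in proportion to `sensitivity` (the B in the text)."""
    return base_rate + sensitivity * price_drop

price_drop = 0.30  # hypothetical 30% house-price decline

# Sensitivity as calibrated on prime-loan history versus the (higher)
# sensitivity subprime loans actually exhibited -- both values invented.
assumed = default_rate(0.02, 0.20, price_drop)
actual = default_rate(0.02, 0.60, price_drop)

print(f"capitalized for {assumed:.0%} defaults, realized {actual:.0%}")
```

With these illustrative inputs the institution capitalizes for an 8 percent default rate while realizing 20 percent: a two-and-a-half-fold shortfall produced by a single misestimated parameter, not by a missing variable.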

The relationship between variables is also a crucial component of financial modeling. Many analysts believed that mortgage defaults would have little correlation across U.S. states. History had suggested that Texas or New England could have a contained housing boom and bust. Yet, as we painfully learned, defaults can be highly correlated. With the expansion of subprime lending, mortgage defaults became increasingly sensitive to national economic trends. Losses across insurance lines may also be more highly correlated than is recognized. Large natural disasters can result in both life and property claims.

Getting the correlations between variables wrong is one of the most significant flaws in modeling. Of course, as we learn in Finance 101, holding a portfolio of different assets, of varying risk, can result in a portfolio with a total risk lower than that of its parts. A risk model that does not appropriately recognize the benefits of asset (and liability) diversification can result not only in excessive levels of capital but also serve as a deterrent to diversification, ultimately increasing, rather than reducing, risk.
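A small Monte Carlo sketch (all parameters hypothetical) makes the correlation point concrete: independent defaults produce a narrow loss distribution, while a common "national downturn" shock produces occasional devastating years and a much fatter loss tail.

```python
import random

random.seed(7)

def simulate_year(n_loans=500, p_calm=0.02, p_stress=0.15,
                  p_downturn=0.0):
    """Fraction of loans defaulting in one simulated year. With
    probability p_downturn a common shock raises every loan's default
    probability (correlated defaults); otherwise loans default
    independently at p_calm."""
    p = p_stress if random.random() < p_downturn else p_calm
    return sum(random.random() < p for _ in range(n_loans)) / n_loans

def loss_99th(p_downturn, sims=1000):
    """Approximate 99th-percentile portfolio loss across simulations."""
    losses = sorted(simulate_year(p_downturn=p_downturn)
                    for _ in range(sims))
    return losses[int(0.99 * sims)]

indep = loss_99th(0.0)
corr = loss_99th(0.10)
print("99th-percentile loss, independent:", indep)
print("99th-percentile loss, correlated: ", corr)
```

The average year looks similar in both regimes; it is the 99th-percentile year, the one capital is meant to cover, that the correlation assumption transforms.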

We are all familiar with the phrase “garbage in, garbage out,” which reminds us that even the best theoretical model is dependent upon the quality of its data for its predictive power. One example: a significant portion of mortgages on second homes were coded as being for a primary residence.

Mortgage default rates are considerably higher for investment and vacation properties. If mortgages on those properties are instead believed to be for primary residences, whether due to fraud or reporting errors, then expected defaults will be underestimated. Similarly, if loan-to-value calculations do not include second mortgages, default rates will also be underestimated. Such problems are not limited to mortgages.

For instance, the flood maps used in the National Flood Insurance Program have long-standing flaws, including being out-of-date. An overreliance on such maps can lead to estimates of actual flood risk that are grossly inaccurate for both individual properties and the program as a whole.

A model can also be flawed in the choice of a risk measure. A commonly used measure of risk in banking is Value-at-Risk (VaR). A VaR measure is generally reported in terms of X days out of 100. For example, a 99 percent VaR attempts to measure the worst loss that can be expected on the best 99 out of a given 100 days. Obviously this measure tells us nothing about


that 1 day out of 100 that could sink the firm. In other words, VaR by design ignores tail risk.

This design flaw is made all the worse if the VaR measure is based upon a normal distribution which underestimates financial tail risk. A parallel in the insurance world would be extreme natural disasters, such as Hurricane Katrina. The approach of VaR is essentially to say, what’s the worst risk we face if we assume no Katrinas, Sandys or 9/11s. Obviously such an approach leaves a company (and industry) quite vulnerable to such tail risks.
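A deliberately stark example (synthetic loss numbers, and one simple percentile convention among several in use) shows the blind spot: a 99 percent VaR is identical whether the worst day is mildly bad or catastrophic, while a tail measure such as expected shortfall is not.

```python
def var_99(losses):
    """99% VaR: the worst loss on the best 99 out of 100 days."""
    ordered = sorted(losses)
    n_tail = max(1, len(ordered) // 100)   # the worst 1% of days
    return ordered[-n_tail - 1]

def expected_shortfall_99(losses):
    """Average loss over the worst 1% of days (the tail VaR ignores)."""
    ordered = sorted(losses)
    n_tail = max(1, len(ordered) // 100)
    return sum(ordered[-n_tail:]) / n_tail

routine = [1.0] * 99             # ordinary trading-day losses
calm = routine + [2.0]           # worst day only slightly worse
katrina = routine + [500.0]      # worst day is a catastrophe

print(var_99(calm), var_99(katrina))  # 1.0 1.0 -- VaR cannot tell them apart
print(expected_shortfall_99(calm), expected_shortfall_99(katrina))  # 2.0 500.0
```

Both loss histories report the same VaR because VaR truncates at the 99th percentile; only a measure that averages over the tail registers the Katrina-sized day.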

None of this is to say we should abandon models. Regulators, academics, risk managers, and executives should continue to improve our current models. A rare silver lining of the 2008 crisis is that we now have a solid inflection point with which to stress financial models. But we must also not let models substitute for common sense and judgment.

A model that cannot be explained to executives, market participants and regulators will be of limited value. We must also be cautious not to let a handful of models come to dominate the financial services industry.

Such dominance could easily result in the herding of institutions into similar assets, exaggerating the damage of fire sales.[4] A robust financial system is one with a great diversity of business models and balance sheets. The current obsession with financial models, as demonstrated by the Federal Reserve’s stress tests, runs the very real risk of undermining financial stability rather than improving it.

Again, the point is not to stop modeling risk. We all, either implicitly or explicitly, make decisions based upon some “model” of the world. The point is to approach such models with considerable skepticism and modesty.

(This article was first published online by Cayman Financial Review on April 22, 2015. We express our gratitude to both the publisher and the author for their permission to republish this article.)

Notes:

1. For comparisons of minimum capital standards by state, see: http://www.naic.org/documents/industry_ucaa_chart_min_capital_surplus.pdf

2. For a summary of how internal models are used in insurance regulation, see: http://www.naic.org/cipr_newsletter_archive/vol9_internal_models.pdf

3. See Floros and White, 2014. Qualified Residential Mortgages and Default Risk. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2480579

4. See Kevin Dowd, Math Gone Mad: Regulatory Risk Modeling by the Federal Reserve. Policy Analysis. Cato Institute. September 2014. http://www.cato.org/publications/policy-analysis/math-gone-mad


Journal of New Finance, January-March 2017

Assessing the Systemic Importance of Asset Managers and Investment Funds

Gustave Laurent
University of York

Massimiliano Neri
Moody’s Analytics, UFM Finance Research Center

Abstract
Financial regulation has recently focused on “too big to fail” institutions such as banks and insurance companies by assigning to some of them the designation of “systemically important”, which brings additional regulatory requirements on top of the traditional ones. In 2015 various international regulatory bodies (the FSB and IOSCO) proposed a similar regulation for other large financial institutions, such as asset managers and investment funds. This new regulatory methodology was a real disappointment for the investment industry. The two elements that attracted the most criticism were the entity-based designation of individual funds and asset managers, and the proposed materiality thresholds for systemic importance designation. Additionally, the proposed methodologies failed to grasp the uniqueness of the investment management industry and, if applied, could potentially enhance systemic risk instead of diminishing it. In this paper, we propose a critical review of the consultative responses of the main industry players to the new regulatory proposal, and highlight the main areas in which the regulation could be revised.

Keywords
asset managers, commitment approach, credit risk, derivatives, FSB, exchange traded funds, gross notional exposure, hedge funds, interconnectedness, investment funds, IOSCO, liquidity risk, market risk, notional value, SIFI, substitutability, systemic risk

JEL Classification
G01, G18, G23

In March 2015, the Financial Stability Board (FSB) and the International Organization of Securities Commissions (IOSCO) released a consultative document proposing new “Assessment Methodologies for Identifying Non-Bank Non-Insurer Global Systemically Important Financial Institutions” (FSB-IOSCO, 2015 – henceforth, the FSB-IOSCO framework). In this paper we analyse these methodologies by summarising the main responses to the consultation gathered from industry participants.

Although highly awaited, the FSB-IOSCO framework has proved a real disappointment for the investment industry. The industry firms that responded to the consultation argued that the FSB-IOSCO framework fails to grasp the uniqueness of the investment management industry and, if applied, could potentially enhance systemic risk instead of diminishing it. The two elements that attracted the most criticism were the entity-based approach of the methodology (which entails focusing on a limited number of designated firms, rather than on activities that are deemed risky across the whole industry), and the proposed materiality thresholds, whereby firms are designated as systemically important not on the basis of the riskiness of their activities but rather on the basis of their size.

This paper has two goals. First, it seeks to review the consultative responses of thirteen leading individual investment firms and asset managers to the FSB-IOSCO framework, and to present a concise overview of their views on the proposed methodologies.[1]

The firms were chosen for their expertise and importance within the investment world and were thought to accurately reflect the broader opinion of the investment industry. It is noteworthy that, although the proposed global systemically important financial institutions (G-SIFI) designation process for investment funds and asset managers would apply globally, in practice it would only result in United States (US) regulators applying these designation procedures to a handful of individual US firms. Consequently, it comes as no surprise that in the past two years the FSB and IOSCO have summoned BlackRock, Fidelity, PIMCO, and Vanguard, among others, “to discuss whether large asset managers should be considered as systemically important financial institutions” (Flood, 2014). These market players have been particularly active, not only in pre-consultation meetings, but also with vigorous reactions to the consultation; we have therefore delved particularly into their responses to the FSB-IOSCO framework. Second, we seek to present the opinion of the Finance Research Center of Universidad Francisco Marroquín (UFM) on the proposed methodologies and to support

the activities-based recommendations of the industry.

Although the FSB-IOSCO framework identifies finance companies, market intermediaries, investment funds, and asset managers as the four non-bank non-insurer (NBNI) financial entity types, methodologies concerning finance companies and market intermediaries will not be discussed in this paper, our focus being solely on asset managers and investment funds.

The paper is organised as follows: In section 2 we review the definitions adopted to assess the systemic importance of asset managers and investment funds. Section 3 is dedicated to the investigation of the materiality threshold, which represents the pivotal criterion for defining which entities should be targeted for systemic importance designation (entity-based approach). In section 4 we analyse the main issues associated with the indicators proposed in the FSB-IOSCO framework in order to assess the systemic importance of an institution. One of the most problematic indicators is based on the notional value as a measure of the riskiness of derivative exposures. We explain why the approach is defective, and propose as an alternative measure the ‘Commitment Approach’. Finally, in section 5 we explore a few additional weaknesses of the methodologies, which were not considered in the FSB-IOSCO framework. We believe that the proposed methodology lacks supporting empirical evidence, that it does not take into account the unique features of the investment industry, and that it adds unnecessary entropy to an already crowded pipeline of new regulatory programmes launched for the investment industry in the US and in the European Union (EU).
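As a preview of the argument in section 4, the difference between the two measures can be sketched with a hypothetical derivatives book (positions and numbers invented for illustration): gross notional adds up absolute exposures, while a commitment-style measure first nets offsetting positions on the same underlying.

```python
# Hypothetical book: (underlying, signed notional in USD millions).
# A long and a short on the same underlying largely offset each other.
positions = [("US10Y", +100.0), ("US10Y", -90.0), ("SPX", +50.0)]

def gross_notional(book):
    """Sum of absolute notionals: a pure size measure that ignores
    hedging and netting entirely."""
    return sum(abs(notional) for _, notional in book)

def commitment(book):
    """Commitment-style measure: net offsetting positions on the same
    underlying first, then sum the absolute residual exposures."""
    net = {}
    for underlying, notional in book:
        net[underlying] = net.get(underlying, 0.0) + notional
    return sum(abs(notional) for notional in net.values())

print(gross_notional(positions))  # 240.0
print(commitment(positions))      # 60.0
```

On this toy book the gross notional is four times the committed exposure, which is why a materiality threshold expressed in gross notional can flag a heavily hedged fund as "large" while saying little about its actual riskiness.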


2. Systemic Importance of Asset Managers and Investment Funds

The principal aim of the methodology set out in the FSB-IOSCO framework is to mitigate systemic risk in order to enhance financial stability. However, we found significant definitional blurriness around the concept of systemic risk and the transmission mechanisms which enable an event to have systemic repercussions. The methodology scrutinises three potential channels of contagion (counterparty channel, market channel, and substitutability). In this section, we review them by taking into account both the existing academic literature and the remarks set forth in the industry’s consultative responses.

2.1 Definitional Blurriness

The methodologies identified to address the systemic risks potentially caused by investment funds and asset managers do not have solid definitional foundations. Although the concept of systemic risk occupied the forefront of all the regulatory and academic debates after the financial crisis of the period 2007-2008, no consensual definition has been reached. This definitional challenge was officially recognised by the Financial Stability Oversight Council (FSOC, 2011). This gap is probably due to the fact that the precise meaning of ‘systemic risk’ is ambiguous, and that different people assign to it different meanings, as highlighted by Elliott (2014). Additionally, Bisias et al. (2012) surveyed thirty-one different methods of measuring systemic risk.

While policy makers often use ‘systemic risk’ to generically address the risk of global instability in financial markets, scholars have attempted to outline its definitional components in more detail. For example, an often-cited

definition in the prudential regulation literature is that which was drafted by the International Monetary Fund (IMF), the FSB and the Bank for International Settlements (BIS) for the G20. They define systemic risk as “a risk of disruption to financial services that is caused by an impairment of all or parts of the financial system and has the potential to have serious negative consequences for the real economy” (IMF, FSB, and BIS, 2009), a definition that has been supported by Caruana (2010) and Chakrabarty (2012).

First published online: January 12, 2015

The literature on systemic risk assessment is strongly banking-focused and, according to Cerutti, Claessens, and McGuire (2012), can be classified into three main categories. In the first category, balance sheet links are responsible for the amplification of shocks and their transmission across borders. The second category groups empirically-based methodologies which use market data to measure systemic risk premia and the correlation of shocks across markets. The third category relies on simulations to understand how specific types of shocks can escalate into severe systemic events. Adapting this classification to asset managers is difficult since, as we see below, asset managers carry a small balance sheet compared to assets under management, and there are too few historical cases to feed an empirically-based assessment. These may be the reasons why the third category is the only one for which further development would make good sense. In this respect, one of the most cited contributions in this domain comes from de Bandt and Hartmann (2000), according to whom systemic risk can be defined as the risk that a shock will affect one or more institutions (or markets) and thereby determine the failure of one or more institutions (or markets). This definition lays down the key definitional components of systemic risk: the shock - idiosyncratic or systematic - and the propagation mechanism. Yet, according to Borio and Drehmann (2009), it remains elusive in its relation to the broader issue of financial stability. For a recent literature survey, see also Hartmann, De Bandt, and Peydró-Alcalde (2009), who define systemic risk as some trigger which leads to problems in a larger number of financial intermediaries or markets, due to the presence of some market failure(s). The usual reference in the literature for a discussion of the definitional components of systemic risk is still Kaufman and Scott (2003). These authors identify three main elements. The first element is the concept of a macroshock, which may produce adverse effects on a broad system of institutions. The second is the propagation mechanism, from one individual institution to a broader system, under the implicit assumption that a macroshock will have a direct causal effect on one institution. The third component is similar to the second, since it also refers to transmission mechanisms, but differs from it in addressing indirect, rather than direct, links. Under conditions of market uncertainty, risk-averse agents may opt for a flight to safety, meaning that they will transfer portions of their portfolio toward safer asset classes without properly differentiating between solvent and insolvent counterparties. Such runs will put downward pressure on the price of securities, with potential further spillover effects into markets that are not directly involved with the original shock.

Given the above, it is surprising that the FSB-IOSCO framework neither addresses the definitional challenges around systemic risk nor refers to an authoritative definition provided elsewhere in the macroprudential literature. In our view, the methodology under consideration cannot be defined without a formal and operational definition of systemic risk. The second element the FSB-IOSCO framework seeks to address is the transmission mechanisms.

Yet again, we found that the document neither defines nor describes such mechanisms, invoking the complexity of the various business models involved. Notwithstanding, it advances three types of mechanisms or channels: (1) Exposures / Counterparty channel; (2) Asset liquidation / Market channel; and (3) Critical function or service / Substitutability. As we shall illustrate later in this paper, the remainder of the FSB-IOSCO framework document addresses, in an unsystematic way, the mechanisms of contagion through these three channels. We address the deficiencies of these mechanisms in the following section.

Although it is not the goal of this paper to propose our own definition of systemic risk, in reviewing the above-mentioned literature we have highlighted that scholars identify two crucial definitional components related to the transmission of systemic risk: the triggering shock and the transmission mechanism. Notwithstanding this, FSB-IOSCO (2015) does not advance, anywhere, a formal definition of the triggering shock. This blurriness creates a twofold problem. On the one hand, an assessment methodology that does not define the triggering event generating systemic risk is both methodologically and technically incomplete. On the other hand, and as a result of the above, the document is pervaded by inconsistent references to the triggering event. For example, in the section ‘scope of assessment’ (p. 10), the triggering event is clearly identified as the financial entity’s failure, while in the following pages (pp. 17, 18) the focus moves towards the distress that may be experienced by financial institutions. We have identified at least four different events which are referenced across the FSB-IOSCO framework: financial distress, default, insolvency, and failure. These are different events, and yet each one lacks a clear definition. Specifically, ‘financial distress’ has not been defined at all,


while ‘default’ could have been defined according to the standard credit risk literature as a credit event or as the actual event of a missed payment. Similarly, ‘insolvency’ should have been properly defined, whether it refers to potential future insolvency based on balance sheet projections or to the actual event of insolvency. Finally, ‘failure’ could have been associated with bankruptcy, even though it receives a different treatment in each jurisdiction.

In conclusion, we recommend that the FSB and IOSCO provide more solid foundations for their methodologies by defining the systemic risk they seek to address and by formally defining the event that can trigger contagion.

2.2 Contagion through Counterparty Channel

Counterparty risk, as a mechanism of systemic contagion, refers to the effects that the failure of a financial entity would have on its creditors, counterparties, investors, or other market participants through their exposures to the failing entity. One of the key concerns expressed by the FSB and IOSCO, with respect

to large asset managers and investment funds, is the ‘Exposures / Counterparty channel’ associated with securities lending. While the FSB-IOSCO framework appears to assume that asset managers follow the same originate-to-distribute lending model as banks and other sorts of financial institutions, BlackRock points out that it “typically requires borrowers to post collateral between 102% and 112% of the value of the securities lent”, and that “this overcollateralization provides an additional ‘safety cushion’ in the event that a borrower fails to return the security that is out on loan” (BlackRock, 2015, p. 12).

An asset manager such as BlackRock does not pose systemic risk from the ‘exposures / counterparties channel’ point of view, for several reasons. First, as aforementioned, asset managers do not act as counterparties to client or investment fund trades or derivative transactions. Second, in the case of securities lending arrangements, asset managers are legally required to meet overcollateralisation requirements, which insulate them from losses in the event of en masse borrower defaults.

For example, as disclosed in BlackRock’s 2014 10-K, while the amount of securities on loan and subject to indemnification was USD 145.7 billion, borrowers posted USD 155.8 billion as collateral. BlackRock even mentions that it “never had its indemnification agreements triggered or had to use its own monies to repurchase a security on a lending client’s behalf ” (BlackRock, 2015, p. 13). Finally, together with the above-mentioned reasons, one should also contextualise the relatively small size of the asset management industry in the overall market ecosystem,

[Figure 1: Estimated breakdown of global investable assets – assets managed by asset owners (~75%); collective investment vehicles (CIVs), registered and unregistered (~15%); separate accounts managed by an asset manager (~10%). Source: BlackRock (2015)]


corresponding to less than 10% of the global investable assets, as shown in Figure 1.
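The overcollateralisation cushion cited above can be checked directly against the 10-K figures quoted in the text:

```python
# Figures from BlackRock's 2014 10-K, as quoted above (USD billions).
securities_on_loan = 145.7
collateral_posted = 155.8

ratio = collateral_posted / securities_on_loan
print(f"collateralisation ratio: {ratio:.1%}")

# Consistent with the 102%-112% collateral band BlackRock reports requiring.
assert 1.02 <= ratio <= 1.12
```

The aggregate book sits at roughly 107 percent collateralisation, comfortably inside the band BlackRock describes, which is the quantitative basis for its ‘safety cushion’ claim.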

2.3 Contagion through Market Channel

In the FSB-IOSCO framework, contagion through the ‘Asset Liquidation / Market Channel’ refers to the “impact of distress or liquidation of an investment fund on other market participants through asset sales that negatively impact market prices”. For the FSB and IOSCO, this channel is of crucial importance in determining the systemic importance of a fund or asset manager (FSB-IOSCO, 2015, p. 33). However, empirical data from both PIMCO and the Investment Company Institute (ICI) suggest otherwise.

On the one hand, the case of PIMCO’s flagship ‘Total Return Strategy’ fund undermines the FSB-IOSCO framework’s argument. In September and October 2014, following the announcement of the decision of PIMCO’s co-founder, Chief Investment Officer (CIO), and star manager, Bill Gross, to leave the firm, PIMCO experienced heavy redemptions that could have, according to the FSB and IOSCO, negatively impacted market prices. To empirically illustrate the extent to which Gross’ departure influenced redemptions, according to Grind (2014), 70% of the USD 48 billion outflows from PIMCO came from funds that were previously managed by Gross (USD 23.5 billion in net redemptions in September 2014, and USD 27.5 billion in October 2014). Although these redemptions were substantial, PIMCO was able to “meet them in an orderly and timely way, while also maintaining risk exposures and maintaining and dynamically replenishing its cash buffers across its mutual funds”. As PIMCO further notes in its consultative response, there were no fire sales or forced selling, and PIMCO never had to, nor did it even consider, supporting the fund or any of its other funds (PIMCO, 2015, pp. 13, 14).

Several reasons inherent to asset managers explain why PIMCO dealt with these heavy redemptions without problems. Most importantly, large asset managers, bound by a fiduciary duty to deliver the best services to their clients, have no choice but to actively manage liquidity and redemption risks. As PIMCO itself notes, it manages "liquidity on a daily and intra-daily basis and constructs portfolios with liquidity as a principal consideration" (PIMCO, 2015, p. 14), thus alleviating the negative impact of heavy redemptions and undermining the FSB-IOSCO framework's view that the market channel may be a source of systemic risk to the financial system.

On the other hand, the ICI notes that even during the extremely volatile period of 2008, redemptions from US mutual funds were limited, corroborating the above point regarding the orderly replenishing of funds. The study specifically points out that, during the tumultuous months between September and November 2008, US mutual funds experienced net redemptions of about USD 229 billion over a three-month period, yet returned to positive net purchases of approximately USD 25 billion in January of the following year (ICI, 2013).

Finally, given the fiduciary duty of asset managers, any potential losses would be widely dispersed across the fund’s shareholders, thus preventing asset managers and investment funds from suffering from spiralling fire sales and deteriorating balance sheets (Richardson, 2012).

Hence, the reasons noted above cast serious doubt on whether the market channel per se could fuel systemic risk when a large asset manager faces heavy redemption pressure.

2.4 Substitutability

In the FSB-IOSCO framework, substitutability refers to "cases where it is difficult for other entities in the system to provide the same or similar services in a particular business line or segment in the global market in the event of a failure" (FSB/IOSCO, 2015, p. 6). The FSB and IOSCO have retained this factor for consistency with the methodology adopted to identify globally systemically important banks and insurers. However, the decision to adopt substitutability as a basic impact factor for funds and asset managers represents a clear step backward from the first consultation, wherein the FSB recognised that "the investment fund industry is highly competitive with numerous substitutes existing for most investment fund strategies" (FSB/IOSCO, 2014, p. 30).

A few points are, therefore, worth highlighting. Firstly, unlike banks, asset management firms lack access to central bank safety nets or governmental support, making them much more prone to closure and, consequently, highly substitutable (Richardson, 2012).

Secondly, and following from the above, asset management is a highly competitive industry in which managers are replaced routinely and clients are offered a plethora of investment possibilities matching their preferences and risk appetite. To illustrate this point quantitatively, we refer to PIMCO, which, in its consultative response, draws on the Herfindahl-Hirschman Index (HHI), a measure of market concentration, and notes that "the asset management industry is highly substitutable and competitive with a HHI Index of 481 as of December 2013"; markets with an index score below 1,000 are considered unconcentrated (PIMCO, 2015, p. 22). It might nevertheless be argued that, despite a low value indicating low concentration, the HHI may fail to account for very large asset managers; further research could therefore be undertaken to identify firms with substantial advantages in certain markets. Bearing this in mind, the asset management industry is in no way monopolistic or lacking in competitiveness, as recognised by Capital Group ("competition to manage assets is intense and asset managers are highly substitutable", Capital Group, 2015, p. 12) and echoed by Fidelity (2015) and Invesco (2015). In the first consultation, the FSB even noted that "from 2000 to 2012, on average 671 new funds were launched per year, compared to an average of 291 liquidations" (FSB/IOSCO, 2014, p. 30). Furthermore, the ICI notes that "of the largest 25 fund complexes in 2000, only 13 remained in this top group in 2013" (ICI, 2014). Given the above, it comes as no surprise that leading academics also recognise that there is "high substitutability between asset management companies" (Roncalli and Weisang, 2015, p. 14).
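The HHI computation underlying PIMCO's figure can be sketched as follows. Only the industry value of 481 comes from PIMCO; the two markets below are hypothetical, constructed to show how the index responds to concentration.

```python
# Herfindahl-Hirschman Index (HHI): the sum of squared market shares,
# with shares expressed in percentage points (0-100). PIMCO reports an
# actual industry HHI of 481; the two markets below are hypothetical.

def hhi(shares_pct):
    """Return the HHI for a list of market shares in percent."""
    return sum(s ** 2 for s in shares_pct)

fragmented = [2.0] * 50            # 50 firms with 2% each
concentrated = [80.0] + [5.0] * 4  # one dominant firm plus four small ones

print(hhi(fragmented))    # 200.0 -- well below 1,000: unconcentrated
print(hhi(concentrated))  # 6500.0 -- far above 1,000: highly concentrated
```

At 481, the asset management industry sits closer to the fragmented case than to the concentrated one.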

Third, the FSB-IOSCO framework assumes that large funds and asset managers are potentially systemically riskier. However, one should consider Table 1 below, showing the survival rates of US mutual fund asset managers.

Table 1: Survival Rate of US Mutual Fund Asset Managers, 2004-2014

Asset manager size decile in 2004    Survived to 2014    Did not survive to 2014
10 (Largest)                         98.3%               1.7%
9                                    91.5%               8.5%
8                                    83.3%               16.7%
7                                    71.2%               28.8%
6                                    61.7%               38.3%
5                                    61.0%               39.0%
4                                    45.0%               55.0%
3                                    52.5%               47.5%
2                                    36.7%               63.3%
1 (Smallest)                         40.7%               59.3%
Average                              64.2%               35.8%

Source: Fidelity (2015), exhibit 7


Table 1 shows that the largest funds, supposedly more likely to be sources of systemic risk according to the FSB and IOSCO, are actually more resilient, with a survival rate of 98.3%, while most mid-size and small asset managers tend to disappear and let new actors take their places. Indeed, in every decile from the first to the seventh, at least 28.8% of the asset managers active in 2004 had not survived to 2014 (with more than half disappearing in the first, second, and fourth deciles), revealing the high degree of substitutability in the industry.

Finally, it is important to recognise that the substitutability factor is not only inappropriate, but also has the potential of actually enhancing, rather than reducing, systemic risk in the financial system. Although a fund's size would seemingly matter most in times of market distress, when its value would fall in line with dropping equity prices, the fact that large funds trade in highly liquid markets would allow their managers to easily reallocate funds without creating further distortions. Indeed, as noted by Richardson (2015, p. 3), "the price sensitivity of fund investors...would lead investors to re-allocate investments to either different funds or even different fund families", thus offsetting the possibly significant losses and fire sales inflicted upon the portfolio.

If the SIFI tag proposed in the FSB-IOSCO framework were to be endorsed, there is little doubt that a designated fund or manager "would be placed in an unfair and inappropriate competitive disadvantage vis-à-vis non-designated competitors" (Invesco, 2015, p. 4). This could in turn lead to important asset shifts toward non-designated managers, which, although smaller in size, may undertake riskier activities that are not captured by the methodologies of the FSB-IOSCO framework. According to Richardson (2015, p. 16): "Given the high level of substitutability in the asset management industry [...] is it not only possible but likely that such a narrowly focused regulatory intervention would simply shift the assets to less regulated (and potentially more systemically risky) parts of the financial system?". Richardson further argues that "imposing regulations on a small set of funds based largely on asset size will not reduce systemic risk" and could instead increase it (Richardson, 2015, p. 4). The most evident example of the risky effects the FSB-IOSCO framework would produce is "shifting assets out of regulated funds and reducing the appeal of funds in the very markets with which the FSB purports to be most concerned" (Fidelity, 2015, p. 2). Indeed, the higher operational costs of SIFI-labelled large funds and asset managers would push clients' assets toward less-regulated and, arguably, more complex and illiquid vehicles.

In light of the industry opinion and the academic research presented here, it can be argued that the inclusion of substitutability as one of the basic impact factors for assigning the SIFI label is not only irrelevant to the investment industry but could also lead to unintended consequences that enhance systemic risk.

3. Materiality Threshold as the Pivotal Criterion

The FSB-IOSCO framework adopts specific materiality thresholds as an initial filter to restrict the universe of asset managers and investment funds under consideration. There is a consensus among the industry responses to FSB-IOSCO (2015) that the proposed materiality thresholds fail to account for the specific activities of asset managers and investment funds. We shall delve in more detail into the proposed thresholds in order to clarify the technical and industry-specific reasons why they are arbitrary and would represent an important step backward in reducing systemic risk. To this end, we will first outline the proposed materiality thresholds, and then explain why, in this respect, the FSB-IOSCO framework is a step backward from the first consultation.

According to the FSB and IOSCO, the following materiality thresholds should apply (FSB/IOSCO, 2015, pp. 36, 50-51):

Table 2: Materiality Thresholds of the FSB-IOSCO Framework

Private funds (hedge and private equity):
USD 400 billion of gross notional exposure (GNE).

Traditional investment funds:
Option 1: USD 30 billion net asset value (NAV) and balance-sheet financial leverage of 3 times NAV, with a size-only backstop of USD 100 billion net assets under management (AUM).
Option 2: USD 200 billion gross AUM, unless it can be demonstrated that the investment fund is not a dominant player in its markets.

Asset managers:
USD 100 billion in total balance-sheet assets, to capture managers whose large balance sheet "could indicate the existence of potentially significant non-asset management activities".
USD 1 trillion in AUM, to capture a fund's "potential systemic impact on the global markets in situations where the risks are transferred through the assets they manage".
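As a minimal illustration, the screens above can be expressed as simple checks. All fund figures below are hypothetical; Option 2 for traditional funds and the qualitative "dominant player" carve-out are omitted for brevity, and net AUM is proxied by NAV.

```python
# A minimal sketch of the Table 2 screens (all figures in USD billions).
# Fund names and numbers are hypothetical; Option 2 for traditional funds
# and the qualitative "dominant player" carve-out are omitted.

def private_fund_flagged(gne_bn: float) -> bool:
    """Private funds (hedge, private equity): GNE of USD 400bn or more."""
    return gne_bn >= 400

def traditional_fund_flagged(nav_bn: float, leverage_x: float) -> bool:
    """Option 1: NAV of 30bn with 3x balance-sheet leverage, or the
    size-only backstop of 100bn (net AUM proxied here by NAV)."""
    return (nav_bn >= 30 and leverage_x >= 3) or nav_bn >= 100

def asset_manager_flagged(balance_sheet_bn: float, aum_bn: float) -> bool:
    """Asset managers: 100bn in balance-sheet assets or 1tn in AUM."""
    return balance_sheet_bn >= 100 or aum_bn >= 1000

print(private_fund_flagged(450))        # True
print(traditional_fund_flagged(40, 1))  # False: large but unleveraged
print(asset_manager_flagged(5, 1200))   # True: small balance sheet, big AUM
```

Even this sketch makes the entity-based logic visible: the screens look only at headline size figures, not at what the entity actually does.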

3.1 A Step Backward from the Previous Consultation

The consultation organised in 2015 to propose the FSB-IOSCO framework followed a first consultation organised a year earlier (FSB/IOSCO, 2014). Reading the consultative responses to the second consultation, it appears clearly that the investment industry regards the newly proposed methodologies as a step backward from the first consultation, wherein it was recommended that asset managers ought not to be included. As most industry players have emphasised, the second consultation simply omits the conclusions reached in the first document. Two examples can easily support this point.

Firstly, as pointed out by PIMCO, the first consultation conceded that the size threshold – the cornerstone of the newly proposed methodologies – rests on unproven theoretical foundations (PIMCO, 2015, pp. A-1-5). Secondly, as the Fidelity Investments team suggests in its response, instead of building upon the many responses to the first document, which advocated that an activity-based approach was more in line with the characteristics of the investment industry, FSB-IOSCO (2015) patently ignores the evidence given and instead expands the G-SIFI designation proposal (Fidelity, 2015, p. 2).

As such, it is little wonder that comments of outright dissatisfaction and disappointment from industry players are found in every consultative response to FSB-IOSCO (2015). FRC believes that the FSB-IOSCO framework is an illustration of regulatory initiatives that ignore the complex technicalities of the target industry. Hence, we believe that only by actively coordinating their research and efforts with industry actors will the FSB and IOSCO manage to deliver sound regulations for the increasingly important asset management and investment industries.

3.2 Entity-Based versus Activity-Based Approach

One of the key concerns echoed in the industry consultative responses regards the arbitrariness of the proposed G-SIFI designation of individual funds and asset managers. Indeed, it is feared that an entity-based designation focused on size could ultimately be counterproductive and potentially risk-enhancing, thus achieving the very opposite of its initial aim. In light of the works of Richardson (2015) and Roncalli and Weisang (2015), regulators should weigh any form of SIFI designation of individual funds and asset managers against the likely costs it would inflict on both investors and capital market activity. Such a designation is not appropriate to the investment and asset management industry for several reasons.

First, the designation of a handful of "nonbank-affiliated firms would increase moral hazard concern" by allowing several firms to, for example, use their 'systemic title' to undertake riskier activities while benefiting from access to central bank safety nets or governmental support (Capital Group, 2015, p. 4). The deleterious consequences of moral hazard are well known and well documented (Dowd, 2009; Farhi and Tirole, 2012). As such, creating new incentives for moral hazard should be avoided at all costs if important actors of the investment industry are not to become the source of future financial distress.

Second, by designating individual funds and asset managers on the basis of their size instead of their activities, the proposed regulation would fail to address the very issue of systemic risk it seeks to grapple with. Applying the proposed methodologies to currently active investment funds, the list of potential G-SIFIs would be relatively small and limited to large, passive, equity long-only funds, thus failing to capture those actors whose activities – not size – "were at the origins of past systemic shocks or more importantly the future likely sources of these shocks" (Roncalli and Weisang, 2015, p. 34).

Table 3: Investment Funds Greater Than or Equal to USD 30 Billion in AUM

Rank  Fund company    Fund name                                AUM (USD mn, Dec 31, 2014)  Investment type
1     Vanguard        Vanguard Total Stock Market Index Fund   383,003                     Open-End Fund
2     State Street    SPDR® S&P 500 ETF                        215,908                     Exchange-Traded Fund
3     Vanguard        Vanguard Five Hundred Index Fund         198,712                     Open-End Fund
4     Vanguard        Vanguard Institutional Index Fund        187,725                     Open-End Fund
5     PIMCO           PIMCO Total Return Fund                  143,358                     Open-End Fund
6     American Funds  American Funds Growth Fund of America    142,631                     Open-End Fund
7     Vanguard        Vanguard Total Bond Market Index Fund    136,673                     Open-End Fund
8     Vanguard        Vanguard Total Intl Stock Idx Fund       134,442                     Open-End Fund
9     Vanguard        Vanguard Prime Money Market Fund         132,692                     Money Market Fund
10    American Funds  American Funds Europacific Growth Fund   120,868                     Open-End Fund
11    JP Morgan       JPMorgan Prime Money Mkt Fund            118,520                     Money Market Fund
12    Fidelity        Fidelity® Cash Reserves                  113,946                     Money Market Fund
13    Fidelity        Fidelity® Contrafund® Fund               109,845                     Open-End Fund

Source: Fidelity (2015)

To make the point that large funds per se do not pose risk, Fidelity presents the thirteen investment funds with more than USD 100 billion in AUM (Table 3). Six of them are index funds and three are money market funds, which pose negligible, if any, systemic risk to the financial system.

The same point can be made for large exchange-traded funds (Table 4). The only exchange-traded fund that would be caught by the proposed methodologies is the SPDR S&P 500 ETF Trust (USD 173.9 billion in AUM), a large, although traditional, long-only US equity investment fund tracking the S&P 500.

Such a designation would inflict higher costs on millions of investors simply seeking to gain exposure to the S&P 500, would possibly distort competition, and would create incentives that could ultimately shift asset allocation towards less-regulated, riskier, and less-liquid instruments, thus achieving the very opposite of what was sought. For example, the FSB-IOSCO framework would fail to identify the risks created by, and shifted among, say, ten smaller funds, such as hedge funds, of USD 15 billion each relying on leveraged short-selling and highly illiquid over-the-counter (OTC) derivative positions. Although individually less significant, these funds, once their values and risks are aggregated – including a significant volume of non-centrally cleared derivatives – may have the potential to inflict much more extensive damage on the financial system than large but long-only funds. Furthermore, such riskier funds, if heavily leveraged in positions in illiquid assets and "either strategically well interconnected with other large SIFI or being a strategic player in a particular market", have the potential to generate greater disruption than a very large fund invested in highly liquid assets, such as the SPDR S&P 500 ETF Trust evoked above (Roncalli and Weisang, 2015, p. 32), which would be labelled as systemically important irrespective of its leverage. As a reminder, investment funds and asset managers are already subject to extensive leverage regulation under SEC rules in the US and UCITS/AIFMD[2] in Europe; therefore, the "failure of a long-only manager is not something we need to worry about" (Metrick, 2014, p. 236).

Table 4: Largest Exchange-Traded Funds (in USD Billion)

Fund                                    AUM
SPDR S&P 500 ETF Trust                  173.9
iShares Core S&P 500 ETF                 68.1
iShares MSCI EAFE ETF                    60.2
Vanguard Total Stock Market ETF          55.9
Vanguard FTSE Emerging Markets ETF       50.1
PowerShares QQQ Trust                    39.5
iShares MSCI Emerging Markets ETF        33.0
Vanguard S&P 500 ETF                     31.4
iShares Russell 1000 Growth ETF          29.0
Vanguard FTSE Developed Markets ETF      27.5
Vanguard Total Bond Market ETF           27.2
Vanguard REIT ETF                        26.7
iShares Russell 2000 ETF                 26.7
iShares Russell 1000 Value ETF           25.8
iShares Core S&P Mid-Cap ETF             25.3
iShares Core US Aggregate Bond ETF       24.3

Source: Roncalli & Weisang (2015)

There is a consensus in the industry that the prudential emphasis on size rather than on the nature of activities is at the core of the FSB-IOSCO framework's failure to deliver sound assessment methodologies for preventing systemic risk in the asset management and investment industry. In this respect, PIMCO (2015) and Vanguard (2015) are at the forefront of advancing alternative, activities-based metrics for determining whether a fund or asset manager should be subject to SIFI regulation. The view of the industry is corroborated by academic research, which points out that the proposed materiality thresholds overlook key aspects of systemic risk: liquidity risk, leverage risk, and the risks associated with a given investment strategy.

In conclusion, by attempting to regulate investment funds and asset managers with prudential-like regulation focused on size rather than on activities, the proposed measures could lead to important market distortions, eventually enhancing rather than preventing systemic risk among investment funds and asset managers. All thirteen consultative responses and academic works used in this paper agree that the FSB and IOSCO should adopt a more comprehensive set of assessment methodologies acknowledging the various sources of systemic risk stemming from funds' and asset managers' particular activities, rather than their size.

4. Indicators for Assessing Systemic Importance

In its high-level framework, FSB-IOSCO (2015) identifies a basic set of impact factors (size, interconnectedness, substitutability, complexity, cross-jurisdictional activities) from which it develops a set of quantitative and qualitative indicators to further identify systemically important asset managers and investment funds.[3] In this section, we address the indicators that raised the greatest concerns. Note that we do not address the indicators intended to measure the substitutability channel, since we have already questioned the claim that this channel can generate systemic consequences (see section 2.4, "Substitutability").

4.1 Size

In terms of size, given the variety of business models present in the asset management universe, the methodology identifies different indicators for different entities. In theory, the larger an investment fund, the larger its impact on counterparties (counterparty channel) and markets (market channel). For unleveraged funds, size is measured by the net AUM (assets under management net of any liabilities), while for leveraged funds the Gross Notional Exposure (GNE) has been proposed, covering both balance-sheet leverage (repos, prime broker financing, secured and unsecured lending) and synthetic leverage (exposure through derivatives, considering the resulting exposure to the underlying asset or reference). Since existing regulation imposes strict leverage limitations on investment funds other than hedge funds, the latter indicator is intended to address mainly hedge funds.

The situation is entirely different for asset managers. For them, net AUM is proposed again, but this time as a proxy of interconnectedness arising from the trading activities they conduct for their clients. Events that jeopardise the reputation of an asset manager may trigger outflows from the funds it manages, which could put downward pressure on market prices through the asset liquidation/market channel. Since asset managers traditionally have small balance sheets (they invest clients' assets), total balance-sheet assets are proposed as a second indicator to measure non-asset management activities.

We have already summarised, in the discussion above, the industry's concerns regarding the use of size as a proxy of systemic importance. The industry has also expressed concerns, in its responses to the FSB-IOSCO framework, about the adoption of the GNE as a measure of derivative exposures. For this reason, we dedicate a specific section below to this subject and to assessing whether the GNE should be used as a size indicator for investment funds.

• Weaknesses of the notional value as an indicator of derivative exposures

The notional value of a derivatives portfolio is often considered a good indicator of the systemic risk posed by an institution, because it represents the maximum amount the institution could owe to other institutions in case of market distress (Gerding and Blair, 2009). Although the notional amount is a straightforward measure that gives an indication of the equivalent position of a derivative in the cash markets, it does not represent a good indicator for systemic risk purposes, for several reasons.

First, the notional amount is affected by the double counting of offsetting positions. For example, "if J.P. Morgan and Chase Manhattan Bank engage in a swap with USD 5 billion in notional amount, this will be counted as a USD 5 billion transaction for each firm. Thus, this USD 5 billion transaction will anecdotally be listed as USD 10 billion in notional principle. Careless use of such elephantine figures may lead to overblown fears of the systemic risks presented by derivatives" (Waldman, 1994, p. 1033). Second, many positions hedge each other. Third, what really matters is not only the current market value, but also the potential changes due to market stress. Fourth, since different derivatives may have different risk profiles, the notional amount can convey a distorted measure of risk (the same notional amount on two different derivatives may wrongly suggest that they carry the same risk). Jorion (2007) estimates that the gross market value of all OTC derivative contracts is only 3.7% of the total notional amount. Similarly, Miller (1997) considers that the notional value should be viewed merely as a bookkeeping number, and estimates the percentage of the global notional amount that actually represents risk at only 2%.
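The arithmetic behind the first and fourth objections can be made explicit. The swap mirrors Waldman's example; the portfolio total is hypothetical, and only the 3.7% estimate comes from Jorion (2007).

```python
# The arithmetic behind the double-counting and market-value objections.
# Figures are hypothetical except the 3.7% estimate, taken from Jorion (2007).

# 1) Double counting: a single USD 5bn swap appears on the books of BOTH
#    counterparties, so reported notional doubles the economic transaction.
swap_notional_bn = 5.0
reported_notional_bn = swap_notional_bn * 2
print(reported_notional_bn)  # 10.0

# 2) Notional vs market value: the gross market value of OTC derivatives is
#    estimated at roughly 3.7% of total notional.
total_notional_bn = 10_000.0
gross_market_value_bn = total_notional_bn * 0.037
print(round(gross_market_value_bn, 1))  # 370.0
```

A reported USD 10 trillion of notional would thus correspond to roughly USD 370 billion of gross market value: still large, but two orders of magnitude smaller than the headline figure.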

• Critical assessment of the GNE as a size indicator

The FSB-IOSCO framework proposes to adopt the GNE as an indicator for determining the size of private funds, with the following definition: "GNE is calculated as the absolute sum of all long and short positions, considering notional value (delta-adjusted when applicable) for derivatives" (FSB/IOSCO, 2015, p. 35, n.54). The underlying assumption is that for general investment funds the use of derivatives is not intended to obtain (synthetic) leverage, but rather to hedge exposures to certain asset classes.
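Applied to a small hypothetical book, the framework's definition reduces to the following computation; the positions are invented for illustration, and the contrast with the net figure anticipates the netting objection discussed below.

```python
# GNE per the framework's definition: the absolute sum of all long and short
# positions, with derivatives entering at their delta-adjusted notional.
# The book below is hypothetical (USD millions); delta is 1.0 for cash.

positions = [
    (+500.0, 1.0),   # long cash equities
    (-200.0, 1.0),   # short cash equities
    (+300.0, 0.5),   # long call options (notional 300, delta 0.5)
    (-400.0, 1.0),   # short index futures
]

gne = sum(abs(size * delta) for size, delta in positions)
net = sum(size * delta for size, delta in positions)

print(gne)  # 1250.0 -- gross measure: short and offsetting positions add up
print(net)  # 50.0   -- netting paints a very different risk picture
```

The same book thus registers a GNE twenty-five times larger than its net market exposure, precisely because offsetting positions are added rather than netted.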

In the case of hedge funds, the GNE can be appropriate for measuring traditional balance-sheet leverage (repos, prime broker financing, secured and unsecured lending), while, for the reasons set out above, it does not represent an appropriate measure of systemic leverage gained through derivative positions. The framework's case for the GNE follows the reasoning presented in the previous section: although it does not measure the money that the fund is at risk of losing, it provides an indication of the fund's impact on the markets in case of market distress; additionally, it is simple to measure.

Responses to the first consultation had already cast doubts along these lines, particularly pointing out that the GNE does not take basic risk-mitigation aspects into account, such as offsetting positions and hedging. Nevertheless, the methodology proposed in FSB-IOSCO (2015) omits to apply adjustments to the measure for these basic risk-mitigation aspects. Three justifications are provided.

First, adjustments "may introduce complexity and model risk as risk mitigation techniques employed will vary considerably across funds" (FSB/IOSCO, 2015, p. 39). This argument can be challenged: offsetting positions are relatively easy to assess without introducing controversial assumptions and, therefore, do not introduce model risk. Furthermore, the argument implies that if the industry were to develop heterogeneous models, this would lead to increased complexity. On the contrary, we think that crowding out competition in risk innovation, in both academia and the industry, prevents progress in our knowledge of risk models. Moreover, homogeneity in risk measurement produces homogeneous responses and herd behaviour in times of crisis, which exacerbate distress and increase systemic risk.

Second, the data needed for risk-mitigation adjustments are not available at a sufficiently granular level. Although this is a limitation, we do not believe it is appropriate to adopt the wrong methodology because of a lack of data. Historically, regulators have not been shy in requesting regulatory reporting with granular data, and institutions could be interested in obtaining a more idiosyncratic risk adjustment of their derivative exposures in exchange for supervisory disclosure of the required information for validation.

Third, risk mitigation techniques may not function as intended in times of market distress. This is certainly true for some market risk measurement techniques mandated by banking or insurance regulation, such as Value at Risk (VaR), which has been seriously questioned since the 2008 crisis. Other techniques have shown more promising prospects in assessing stress conditions; indeed, stress testing has attracted a lot of attention after the 2008-2010 crisis, especially from banking regulators. Moreover, providing the industry and academia with incentives to develop alternative models would promote, rather than restrain, the development of better market risk techniques for addressing adverse scenarios. As mentioned above, we believe that heterogeneity in risk modelling will favour the progress of our knowledge in this domain.

• Alternatives to the notional value

It is unlikely that the notional amount bears no relation at all to risk, which is why scholars keep debating this topic. According to Jorion (2007), the risks of a position are better assessed through mark-to-market valuation under stress conditions, a conclusion consistent with the observations we provided in this report regarding the Long-Term Capital Management (LTCM) debacle. Although it has not yet been proven that the mark-to-market value actually represents a measure of a derivative's risk, it seems safe to say, at least, that it is a better proxy than the notional amount. Still, mark-to-market valuation has procyclicality issues; therefore, as an alternative, we present the so-called Commitment Approach (CA), mentioned in the consultative responses of, among others, Amundi (2015) and Union Investments (2015).

This approach was initially presented by the Committee of European Securities Regulators (CESR) as part of the Alternative Investment Fund Managers Directive (AIFMD) framework (CESR, 2010). Initially focused only on UCITS funds, it could represent a viable proxy for calculating the global exposure of non-UCITS funds, too. Unlike the GNE, the CA does not seek to determine the overall gross exposure, but rather offers a more leverage-focused methodology by i) converting derivatives into "the equivalent position in the underlying assets of those derivatives", and ii) calculating global exposure "using efficient portfolio management techniques". Accordingly, the CA for standard derivatives is always defined as "the market value of the equivalent position in the underlying asset" (Amundi, 2015, pp. 6-7).

Given the manner in which it is defined, this approach does not capture the overall derivative exposure which, as we have seen, is not representative of a fund's systemic riskiness. Rather, it limits itself to net market exposures and thus becomes a better-suited method for valuing a fund's global exposure than the GNE. Moreover, whereas the GNE applies the same criterion homogeneously to the whole portfolio, the CESR puts forward a detailed, product-tailored methodology as part of the CA.[4] Finally, it should also be pointed out that the CA takes netting and hedging positions into account in order to better reflect a fund's or asset manager's global exposure (CESR, 2010, pp. 13-20).
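The conversion-and-netting step can be sketched in stylised form. All instruments and figures below are hypothetical, and the sketch deliberately simplifies: the actual CESR (2010) guidelines prescribe product-by-product conversion rules rather than the single formula used here.

```python
# A stylised sketch of the Commitment Approach conversion step: each standard
# derivative is converted into the market value of the equivalent position in
# its underlying, and positions on the same underlying are netted. All
# instruments and figures are hypothetical (USD); the actual CESR (2010)
# guidelines prescribe product-by-product conversion rules.

from collections import defaultdict

# (underlying, contracts, contract_size, underlying_price, delta)
derivatives = [
    ("EQ_INDEX", +100, 50, 40.0, 1.0),   # long index futures
    ("EQ_INDEX",  -60, 50, 40.0, 1.0),   # short futures on the same index
    ("GOV_BOND",  +80, 10, 150.0, 1.0),  # long bond futures
]

exposure = defaultdict(float)
for underlying, n, size, price, delta in derivatives:
    exposure[underlying] += n * size * price * delta  # equivalent position

# Global exposure: sum of absolute NET exposures per underlying.
commitment = sum(abs(v) for v in exposure.values())
print(commitment)  # 200000.0: 80,000 net equity + 120,000 bond exposure
```

Where a gross measure would add the long and short index futures together, the CA nets them first, which is precisely why it tracks economic leverage more closely than the GNE.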

We now turn to assessing the methodological relevance and effectiveness of the CA for the investment industry. Although this approach offers more appropriate tools for valuing global exposure than the GNE, several methodological flaws remain. It is hoped that, by concisely shedding light on and analysing these flaws, this paper will provide the industry with insights to successfully alleviate such shortcomings. First and foremost, the CA represents only one element of an individual fund or asset manager’s overall risk management strategy. As such, the CA, like the GNE, is not liquidity sensitive. In other words, the CA does not take asset liquidity into consideration when applied to defining appropriate materiality thresholds (Roncalli and Weisang, 2015). The same authors propose a solution to this issue by combining the CA with a so-called “scoring system”, where the AUM is weighted by leverage and a measure of liquidity: S_i = AUM_i × LEV_i × λ_i (where the score S_i is determined by LEV_i, the fund’s portfolio leverage, and λ_i, an asset liquidity factor that depends on the asset class of the portfolio). For individual funds with diversified portfolios, a weight can be assigned to each asset class. The drawback of this technique is that Roncalli and Weisang (2015) do not propose a definition for λ. Although defining lambda is beyond the scope of this paper, this definitional gap leaves asset and fund managers with the task of calculating their asset ‘liquidity factor’ using their own in-house models. It should be noted, nonetheless, that it is possible to calibrate the lambdas using the methodology suggested in the Bank for International Settlements’ paper on trading book capital requirements as part of Basel III (BIS, 2014, p. 23).
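As a rough sketch of the scoring system, the score and the asset-class weighting can be written as follows. The lambda values below are purely hypothetical, precisely because Roncalli and Weisang do not define the liquidity factor.

```python
# Minimal sketch of the Roncalli and Weisang (2015) scoring idea,
# S_i = AUM_i x LEV_i x lambda_i, with assumed (hypothetical) lambdas.

def fund_score(aum, leverage, liquidity_factor):
    """Liquidity- and leverage-weighted size score for a single fund."""
    return aum * leverage * liquidity_factor

def blended_lambda(weights_by_class, lambda_by_class):
    """Portfolio-level lambda as a weighted average over asset classes."""
    return sum(w * lambda_by_class[c] for c, w in weights_by_class.items())

weights = {"large-cap equity": 0.6, "high-yield credit": 0.4}
lambdas = {"large-cap equity": 0.2, "high-yield credit": 0.9}  # assumed

lam = blended_lambda(weights, lambdas)   # 0.6*0.2 + 0.4*0.9 = 0.48
score = fund_score(aum=50e9, leverage=2.0, liquidity_factor=lam)
print(f"blended lambda = {lam:.2f}, score = {score:,.0f}")
```

In practice the lambdas would have to be calibrated, for instance against the trading-book liquidity horizons in BIS (2014), rather than assumed as here.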

A second issue with the CA is that it should not be used by funds “using, to a large extent and in a systematic way, financial derivative instruments as part of complex investment strategies” (CESR, 2010, p. 6). This point should be of great importance to closed-ended funds, such as hedge funds, that rely on highly technical strategies involving highly illiquid derivative positions. Yet the number of such funds is very limited and not significant enough to qualify as NBNI SIFIs. Thirdly, the CA neither adequately nor fully captures non-directional risks, such as volatility risk, gamma risk, or basis risk, that are often found in hedge fund and hedge fund-like strategies, such as arbitrage, option, complex long-short, and market-neutral strategies. Lastly, being based on portfolio optimisation, the CA is not appropriate for addressing situations of disequilibrium. We therefore reiterate the importance of combining the CA with other risk management techniques to successfully evaluate the risks associated with derivatives exposure.


4.2 Interconnectedness

For an investment fund, a greater degree of interconnectedness corresponds to greater counterparty credit exposure and diversification, which actually enables funds and managers to mitigate the negative effects of counterparty defaults.[5] As such, interconnectedness does not seem to be relevant to traditional asset management activities.

4.3 Complexity: The Resolution Process

While the FSB-IOSCO framework endorses a prudential-based approach to regulating the asset manager, it is worth emphasising that the resolution process of asset managers is very different from that of banks and other highly leveraged financial institutions. Of considerable interest are the responses from Vanguard and BlackRock regarding the resolution process of an asset management firm in case of reputational crisis or market distress.

Where banks have “commitments to repay billions of dollars in fixed obligations”, among many more complex commitments, investment managers/advisers can exit the business in an orderly fashion because they lack this complexity (Vanguard, 2015, p. 18). To illustrate the argument, Vanguard mentions the example of Strong Financial, an asset management firm whose founder and CEO, Richard Strong, was prosecuted in 2004 for market-timing trading for favoured clients using his company’s funds. And yet, in spite of such reputational damage, the firm’s board of trustees appointed a new manager, John Widmer, to manage the firm’s funds. More importantly, the funds were bought by Wells Fargo in 2005 and were efficiently integrated into the Wells Fargo Funds family without requiring any government or regulatory intervention.

Figure 2: U.S. Mutual Funds and Mutual Fund Sponsors Routinely Exit the U.S. Mutual Fund Market

Source: Stadler & Graham (2014)

Vanguard’s point is further confirmed by the empirical evidence on the resolution of mutual funds and their managers included in ICI (2015), from which Figure 2 is taken. The data show that funds and asset managers are regularly reorganised or closed down without government intervention or taxpayer assistance, even during severe market distress. In 2013 alone, 424 funds were merged or liquidated “with little notice” and without generating “distress in the financial markets” (ICI, 2015, p. 2).

Furthermore, history shows that even in times of severe market stress, many asset management firms are both willing and able to take on additional fund inflows. For example, ICI (2015) refers to a study by Grail Partners LLC, which notes that in 2008, despite the chaos in financial markets, global merger and acquisition activity in the asset management industry totalled $2 trillion, increasing to $4 trillion in 2009. Additionally, despite the exceptional situation in which financial markets found themselves at the height of the crisis, asset managers were able to close their funds swiftly and smoothly. Here, the resolution process of Putnam Investments is highly instructive. In mid-September 2008, Putnam’s Money Market Fund had suffered heavy redemption pressures from institutional investors, and it was decided to close the fund rather than sell the securities, thus protecting investors from losses in spite of the liquidity-constrained market. The fund, in the vein of the Strong Financial case, was successfully merged six days later with another fund belonging to Federated Investors.

According to the evidence presented above, historical episodes do not support the preoccupation that the resolution process of asset managers can generate externalities of systemic amplitude. Although the absence of historical episodes does not prove that they cannot occur, we think that the FSB and IOSCO should reconsider whether their assumptions on this subject are empirically ill-founded.

4.4 Cross-Jurisdictional Activity and Systemic Risk

Although this point may seem minor compared to the other issues covered in this paper, in a world of globalised financial markets it is crucial to recognise that the cross-jurisdictional activities of investment funds and asset managers do not enhance risk, but actually limit it by diversifying different types of risk across different jurisdictions. The FSB-IOSCO framework argues that “the greater the number of markets a fund invests in or has interaction with, the greater its global footprint and its importance for global financial stability”. Accordingly, “where managers invest significant amounts of investors’ funds in one or more foreign jurisdictions [...] the occurrence of a fund liquidation may create contagion that would transmit across borders.” (FSB/IOSCO, 2015, p. 44).

On the one hand, we agree with Amundi (2015) that the number of jurisdictions where a fund has activities (and thus counterparties) can be a good indicator of the complexity, and associated costs, of potential litigation. On the other hand, the more diversified the client base, investments, and counterparties, the lower the risk of having a dominant position and, thus, of posing a systemic threat to financial stability (Vanguard, 2015; Franklin Templeton, 2015). Indeed, cross-jurisdictional activities make it possible to disperse counterparty risk, to mitigate the impact of actions taken by any one group of clients, and to diversify country risk. Moreover, from an operational point of view, asset managers, unlike banks, “do not have complex liabilities and intercompany funding arrangements that lead to confusion over claims from creditors in multiple jurisdictions.” (Vanguard, 2015, p. 9).

5. The Erroneous Methodology at the Wrong Moment

Thus far, we have enumerated the issues identified in the methodology proposed in FSB-IOSCO (2015). Nevertheless, there are matters that go beyond technical details and plague the very roots of the regulatory draft. First, the proposed regulation does not rely on strong empirical evidence: it attempts to provide safety measures against possible scenarios that have never occurred in the history of financial markets. Second, the proposed methodology fails to recognise the uniqueness of the industry by applying prudential measures that are more appropriate for banks and insurance companies. Third, it adds a further regulatory proposal to an already busy agenda for the compliance departments of asset managers and investment funds.

5.1 Lack of Empirical Evidence

According to the methodologies adopted in the FSB-IOSCO framework, the four NBNI financial entity types have been identified in light of “…historical examples of financial distress or failures in these four sectors that had an impact (or potential impact) on the global financial system.” (FSB/IOSCO, 2015, p. 8). Yet when looking at financial history, there is little evidence of episodes of distress, caused by the scenarios described in the proposed methodology, that produced systemically important consequences.

The FSB-IOSCO framework addresses the concern that outflows from long-term mutual funds might destabilise financial markets. Such concern dates back to the Great Depression and has recently been studied in Collins and Plantier (2014). The mechanism they study is a tripartite one. First, stress in the financial markets triggers heavy redemptions by fund investors. Second, investment funds sell securities to meet redemptions. Third, sales of securities put further downward pressure on stocks and bonds, creating a feedback loop and exacerbating the situation. The authors’ findings are in line with the existing literature on the subject: there is no evidence of a direct feedback effect from aggregate fund outflows to market returns. In other words, investment fund distress does not pose systemic risk from a market channel perspective.

FSB-IOSCO (2015) presents only one historical case, associated with the LTCM crisis, classified as an excess of leverage originated by the lack of regulatory leverage limits. Since the creditors and counterparties of the company failed to enforce their own risk management standards with LTCM, there are reasons to believe that leverage could represent a risk that becomes systemic through the counterparty channel.

Associating the LTCM debacle with its excess of leverage is one of the classical criticisms of the fund’s conduct, as found, for example, in Hull (2012). However, Allen (2003) has disagreed with this position, arguing that the leverage ratio is not always an appropriate measure of default risk, since it does not take the asset portfolio’s volatility into account. The excess of leverage as the main explanation for LTCM’s failure is also undermined by the fact that, in the same period, LTCM was not the most highly leveraged among its peers. The US Government Accountability Office (GAO) published a report which throws light on this aspect (GAO, 1999) (see Table 5).


Table 5: Leverage Ratios of Major Financial Institutions in 1999

Financial Institution    Leverage Ratio

LTCM                     28/1
Goldman Sachs            34/1
Merrill Lynch            30/1
Lehman Brothers          28/1
Morgan Stanley           22/1

Source: Government Accountability Office (1999)
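The leverage ratios in Table 5 are simple balance-sheet ratios (total assets over equity capital). A minimal sketch, using hypothetical balance-sheet figures calibrated so that the resulting ratios mirror the reported table:

```python
# Balance-sheet leverage ratio behind Table 5: total assets / equity.
# The asset and equity figures are illustrative assumptions; only the
# resulting ratios follow the GAO (1999) table.

def leverage_ratio(total_assets, equity):
    return total_assets / equity

# Hypothetical balance sheets calibrated to the reported ratios.
institutions = {
    "LTCM":            (125e9, 125e9 / 28),
    "Goldman Sachs":   (340e9, 340e9 / 34),
    "Merrill Lynch":   (300e9, 300e9 / 30),
    "Lehman Brothers": (280e9, 280e9 / 28),
    "Morgan Stanley":  (220e9, 220e9 / 22),
}

for name, (assets, equity) in institutions.items():
    print(f"{name:<16} {leverage_ratio(assets, equity):>4.0f}/1")
```

As Allen (2003) notes, a ratio of this kind says nothing about the volatility of the asset portfolio, which is precisely why two institutions with the same 28/1 ratio can carry very different default risks.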

Other researchers have pointed to the failure of the fund to adopt an appropriate market risk measurement methodology. For example, according to Jorion (2000), LTCM failed to adopt appropriate liquidity risk measures to control fire sales and to complement its VaR model with stress testing based on appropriate worst-case scenarios. In reality, the fund’s worst-case scenario (a loss of USD 2.3 billion, calculated by stressing the 12 major transactions it had completed with each of its 20 major counterparties) turned out to be overwhelmed by the dramatic behaviour of market correlations observed from 17 August 1998 onwards (the fund lost USD 4.4 billion during 1998). The VaR model used by LTCM has been further criticised by Triana (2012) as an inadequate methodology for measuring the fund’s market risk exposure. The need to adopt appropriate stress tests in market risk measurement was underlined by the Counterparty Risk Management Policy Group (1999), which recommended a better use of extreme stress tests to assess credit risk and a greater emphasis on integrating market risk and credit risk stress tests to account for overlapping risks.

In conclusion, we believe that the LTCM episode alone does not represent a clear-cut empirical example of distress due to an excess of leverage that could in turn generate systemic risk. Additionally, the recommendations that have been advanced to avoid the risk measurement failures of LTCM are consistent with an activity-based approach, rather than an entity-based one.

5.2 Recognising the Uniqueness of the Industry

Of crucial concern is the FSB-IOSCO framework’s neglect of the investment industry’s uniqueness. Despite recognising that asset/fund managers are subject “to a number of regulatory, legal and contractual limits [...] such as an investment fund’s governing documents or the contractual arrangements for a separately managed account, securities laws, market conduct regulations, and corporate laws that create fiduciary duties to investors” (FSB/IOSCO, 2015, p. 47), the methodologies proposed in the framework seem to omit all of the aforementioned elements that differentiate asset managers and investment funds.

The suggested methodologies fail to account for several unique features of the industry. Firstly, asset managers, unlike banking and other financial institutions, act as fiduciaries to their clients. As pointed out by Capital Group, “managers act as agents for the funds and serve in a fiduciary capacity. Subject to a written contract, they manage each portfolio in accordance with the fund’s investment objectives and policies, as disclosed in the fund’s prospectus” (Capital Group, 2015, pp. 11-12). As such, where a bank’s investment model creates complex and highly interconnected exposures (with and between its creditors, the financial institutions it does business with, central banks, etc.), the connections and exposures of the managed portfolio are decoupled from the manager’s balance sheet due to his or her fiduciary duty. Because managers act as fiduciaries to their clients, “client assets are held separately from the asset manager by a custodian” (BlackRock, 2015, p. 3), thus considerably limiting the size of an individual manager’s balance sheet. Given the fiduciary duty of asset and fund managers, investment firms “have never used [their] balance sheet to support or guarantee performance of a fund.” (PIMCO, 2015, p. 1). Secondly, and deriving from the asset/fund manager’s fiduciary duty, is the fact that managers’ balance sheets are relatively limited in size, rendering irrelevant the FSB-IOSCO framework’s assumed relationship between AUM and balance sheet size. Finally, traditional fund and asset managers do not present the typical risks addressed by the prudential regulation of banks and insurers, and should not be dealt with under the same rubric used for shadow banks. They do not rely on short-term funding or use extensive leverage, nor do they guarantee a return on capital.

In summary, attempting to regulate asset managers in a prudential-like manner, similar to that applied to banks and insurers, is to fail to understand the operational and legal uniqueness of the industry.

5.3 The Regulatory Pipe Is Already Crowded

The industry’s comments have highlighted that the FSB-IOSCO framework should take into consideration how industry-related risks might already be reduced or eliminated by existing or proposed regulations in both the US and Europe (Capital Group, 2015; PIMCO, 2015). Invesco takes this view even further, noting that the proposed framework could have negative unintended consequences, “given the stark differences between the regulatory approach being undertaken with respect to bank and insurance G-SIFI and the existing regulatory regime” for asset managers and investment funds (Invesco, 2015, p. 1).

Without a doubt, it is difficult to disagree that the release of the proposed assessment methodologies is ill-timed, given the plethora of existing and pending regulations. Consider the following non-exhaustive list:

• In the US: SEC rules, CFTC rules, Dodd-Frank Act Title VII on derivatives, the Investment Company Act, and the Investment Company Reporting Modernization rules;

• In the EU: AIFMD, UCITS, EMIR, MiFID, CCR I, and Solvency II.

In the case of the US, since the enactment of the Dodd-Frank Act, approximately eleven different agencies have been charged with implementing about 250 new regulations covering derivatives, counterparty clearing, and the role of Financial Market Utilities, among other matters. Unsurprisingly, such regulations often result in overlapping decision-making and in high legal and operational costs for investment firms (Culp, 2010). Moreover, the SEC, under the Investment Company Reporting Modernization Securities Act, has recently proposed new rules, forms, and amendments to modernise and enhance the reporting and disclosure of information by registered investment companies and investment advisers.

In the case of the European Union, AIFMD, EMIR, and MiFID II (which is due to enter into force in June 2016 and to be applied in January 2017) provide a fully comprehensive regulatory framework for investment funds and asset managers in the European Economic Area (EEA). Furthermore, as pointed out in ESMA (2014), the recently agreed regulations will apply to most NBNI institutions.[6]

These examples corroborate the industry’s view that the FSB and IOSCO ought to give precedence to national regulators in fully implementing recently proposed regulations, rather than adding a new supervisory layer in an uncoordinated manner. This is consistent with the status of the FSB and IOSCO as non-legally binding organisations that were neither required nor empowered by the G20 to produce assessment methodologies or to apply them to specific funds or asset managers (Fidelity, 2015).

6. Conclusion

The journey toward prudential measures for investment funds and asset managers has not yet defined a destination, and the mileage to get there is still significant. While the regulatory bodies in charge aim at the traditional goal of financial stability, how this objective can be met with mandates that take the unique features of the industry into consideration is far from being understood. The FSB and IOSCO have not taken a consistent stance either, proposing a second consultation document that represented a step backward compared to the progress made with the first consultation just one year before. In this report, we have analysed the methodology proposed to identify systemically important asset managers and investment funds, and have backed our analysis by assessing the consultative responses of thirteen financial institutions.

Our analysis followed the structure of the consultation document, which starts by describing the high-level framework of the methodology. The framework provides blurred definitions of what is meant by systemic risk and of the transmission mechanisms that trigger contagion at the systemic level. Moreover, in order to study contagion, one has to define the triggering event, yet we have noted that the consultation document deals with this inconsistently, sometimes referring to financial distress and at other times focusing on default, insolvency, or failure.

The proposed framework analyses three possible channels of contagion: the counterparty channel, the market channel, and substitutability. These channels are among the most troublesome issues raised by the industry. In terms of the counterparty channel, the consultation document aims mainly at addressing securities lending risk. Here, as in other areas of the methodology, the assumption emerges that asset managers and investment funds follow an originate-to-distribute lending model similar to the one used by banks. However, asset managers do not act as counterparties to clients, and securities borrowers are required to post more than 100% of the loan as collateral (overcollateralisation). Moreover, it is doubtful that this channel can fuel systemic consequences when the asset management industry accounts for only 10% of global investable assets. The market channel refers to spiralling sales originated by heavy redemptions that could put downward pressure on market prices. We have presented empirical evidence from PIMCO and ICI that redemptions have always been met in an orderly and timely way, even during historical episodes of market collapse, such as in 2008.

The substitutability channel was the most criticised by industry participants. According to the proposed methodology, systemic risk may arise when, in the event of a failure, it is difficult for other entities to provide a similar service. There was a consensus among the consultative responses that asset management is a highly competitive industry with numerous substitutes for most investment fund strategies, as the FSB itself underlined in the first consultation. The empirical data show that a large number of funds are launched every year, net of those that are liquidated, and that among the larger funds there is a high degree of turnover. Treating substitutability as a basic impact factor for systemic importance designation could also produce unintended consequences, such as the incentive to shift funds from designated to non-designated entities, which may take riskier positions.

Industry participants also agreed that FSB-IOSCO (2015) was a step backward from the previous consultation. Among the many issues, by far the most criticised was the adoption of an entity-based approach, based on size, to identify SIFIs, whereas the first consultation had recommended an activity-based approach as more appropriate to asset managers and investment funds. The entity-based approach could actually enhance rather than mitigate systemic risk: it introduces moral hazard (the SIFI safety net could create incentives to adopt riskier behaviour), it confines regulation to large long-only equity funds while failing to identify the riskier and more systemically important players, and it increases the cost structure of the designated SIFIs, driving a reallocation of assets toward non-regulated institutions that are potentially riskier and less liquid.

Among the indicators proposed to identify a SIFI, the industry criticised the adoption of asset portfolio size as a measure of systemic importance. The size threshold can return false positives (large USD 150 billion asset managers with no leverage) and false negatives (smaller funds, such as hedge funds, relying on leveraged short-selling and highly illiquid OTC derivatives). The materiality threshold could even enhance, rather than reduce, systemic risk by creating the incentive to shift assets out of regulated funds toward potentially more complex and illiquid funds.

To measure the size of an investment fund, FSB-IOSCO (2015) recommends using net AUM for institutions without derivative leverage and GNE where derivatives exposures are concerned. The latter indicator is based on the notional value of an instrument which, as scholars like Philippe Jorion and Merton Miller have shown, does not represent a realistic proxy of the riskiness of the exposure. For these reasons, we think that the GNE is not an appropriate indicator for measuring derivative exposures. The industry had already advised against the GNE in its responses to the first consultation, not least because it does not take basic risk-mitigation measures into account. FSB-IOSCO (2015) intentionally excluded risk-mitigation adjustments because they introduce complexity and model risk, but we do not agree with this position. Offsetting positions are relatively easy to assess and do not increase model risk. Moreover, risk modelling heterogeneity enables innovation in risk management, while mandated risk modelling homogeneity promotes herd behaviour which, in turn, increases systemic risk.

Mark-to-market value could act as a better proxy measure than the notional amount, as confirmed by Jorion and the findings from the LTCM case that we have illustrated. Keeping in mind its procyclicality issues, we have proposed the Commitment Approach as an alternative.

In terms of indicators of complexity, we have addressed the resolution process. Here, we echo Vanguard, which holds that investment managers can exit the business in an orderly fashion because they lack the complexity that characterises banks in terms of “commitments to repay billions of dollars in fixed obligations”. We use the historical cases of Strong Financial and Putnam Investments, as well as the empirical evidence provided by ICI, to show that investment funds and managers are regularly reorganised or closed down without government intervention or taxpayer assistance, even in times of severe market distress.

Cross-jurisdictional activity is typically measured by assuming that the number of jurisdictions can be a good indicator of the complexity of potential litigation. However, for investment funds and asset managers, this type of activity does not enhance risk, but rather limits it by diversifying across geographies (country risk diversification). Furthermore, there are no intercompany transactions to increase the complexity, as in the case of banks with creditors from multiple jurisdictions or insurers with reinsurance.


In conclusion, the industry feedback has shown that the proposed methodology should be deeply reconsidered. There are no historical episodes of systemic instability caused by the risk scenarios addressed by the FSB-IOSCO framework. The only historical case mentioned in the consultation document is LTCM, but we have presented evidence that the causes of LTCM’s failure do not align with the motivations for this regulation. Moreover, the standard empirical literature does not back up the idea that stress in the financial markets may trigger heavy redemptions that, in turn, generate fire sales that put downward pressure on asset prices at a systemic level.

The FSB and IOSCO should recognise that investment and asset managers are extremely different businesses from banks and insurance undertakings. Asset managers act as fiduciaries to their clients, do not rely on short-term funding or the use of extensive leverage, and do not guarantee a return on capital. The same applies to investment funds, which never use their balance sheet to support the performance of their funds.

The regulatory pipe for investment funds and asset managers is already crowded. In the US, approximately eleven different agencies are charged with implementing about 250 new regulations covering derivatives, counterparty clearing, and the role of Financial Market Utilities, among others. In the European Union the regulatory agenda is equally busy with the AIFMD, UCITS, EMIR, MiFID, CCR I, and Solvency II. The FSB and IOSCO ought to give precedence to national regulators in fully implementing recently proposed regulations, rather than adding a new supervisory layer in an uncoordinated manner.

To conclude, let us recall the recent intervention of Federal Reserve Board Governor Powell (2015): “[T]he Fed and other prudential and market regulators should resist interfering with the role of markets in allocating capital to issuers and risk to investors unless the case for doing so is strong and the available tools can achieve the objective in a targeted manner and with a high degree of confidence”.

Bibliography:

Allen & Overy (2013), Understanding EMIR: A guide for funds and their managers (20 February 2015). Available at: http://www.allenovery.com/publications/en-gb/Pages/Understanding-EMIR-A-guide-for-funds-and-their-managers.aspx

Allen Steve (2003), Financial Risk Management: A Practitioner’s Guide to Managing Market and Credit Risk. New York: John Wiley & Sons.

Amundi (2015), Amundi asset management response to FSB/IOSCO (2015).

BIS (2014), “Fundamental review of the trading book: outstanding issues”, Bank for International Settlements, December 2014. Available at: http://www.bis.org/bcbs/publ/d305.htm

Bisias Dimitrios, Mark Flood, Andrew W. Lo, and Stavros Valavanis (2012), “A Survey of Systemic Risk Analytics,” Working Paper 0001, Office of Financial Research.

BlackRock (2015), “Comments on the Consultative Document (2nd) Assessment Methodologies for Identifying Non-Bank Non-Insurer Global Systemically Important Financial Institutions”, May 29, 2015, http://www.blackrock.com/corporate/en-gb/literature/publication/2nd-NBNI-gsifi-fsb-iosco-052915.pdf

Blair, Margaret M. and Gerding, Erik F. (2009), “Sometimes Too Great a Notional: Measuring the ‘Systemic Significance’ of OTC Credit Derivatives”, Lombard Street, Vol. 1, No. 11, August 31, 2009; Vanderbilt Law and Economics Research Paper No. 09-22. Available at SSRN: http://ssrn.com/abstract=1475366

Bord Vitaly, and Santos Joao (2012), “The Rise of the Originate-to-Distribute Model and the Role of Banks in Financial Intermediation”, Economic Policy Review, 18(2), 21-34. Available at SSRN: http://ssrn.com/abstract=2149136

Borio Claudio and Drehmann Mathias (2009), “Towards an operational framework for financial stability: ‘fuzzy’ measurement and its consequences”, BIS Working Paper, No. 284 (June). Available at: http://www.bis.org/publ/work284.htm

Capital Group (2015), Letter from James Rothenberg, Chairman, Capital Group Companies to Secretariat of the FSB, regarding the FSB-IOSCO framework (May 29 2015).

Caruana Jaime (2010), “Systemic risk: how to deal with it?” Speech at the Melbourne Centre for Financial Studies. (February 12th). Available at: http://www.bis.org/publ/othp08.htm#P01

Cerutti Eugenio, Claessens Stijn and McGuire Patrick (2012), “Systemic Risks in Global Banking: What Can Available Data Tell Us and What More Data Are Needed?”, BIS Working Papers, No 376. (April). Available at: www.bis.org/publ/work376.htm

CESR (2010), "Guidelines on Risk Measurement and the Calculation of Global Exposure and Counterparty Risk for UCITS", Committee of European Securities Regulators, 2010. Available at: esma.europa.eu/system/files/10_788.pdf

Chakrabarty, K. C. (2012), "Systemic risk assessment: the cornerstone for the pursuit of financial stability", published in Economic Developments in India: Analysis, Reports, Policy Documents, Academic Foundation's continuing series, 2012, 174, pp. 65-76. http://www.bis.org/review/r120404a.pdf

Collins, Sean and Plantier, L. Christopher (2014), "Are Bond Mutual Fund Flows Destabilizing? Examining the Evidence from the 'Taper Tantrum'" (September 1, 2014). Available at SSRN: http://ssrn.com/abstract=2510666 or http://dx.doi.org/10.2139/ssrn.2510666

Counterparty Risk Management Policy Group (1999), "Improving Counterparty Risk Management Practices", June 1999. http://archives.financialservices.house.gov/banking/62499crm.pdf

Culp, Christopher (2010), "OTC-Cleared Derivatives: Benefits, Costs, and Implications of the 'Dodd-Frank Wall Street Reform and Consumer Protection Act'", Journal of Applied Finance, 2(1), 1-27.

de Bandt, Olivier and Hartmann, Philipp, Systemic Risk: A Survey (November 2000). ECB Working Paper No. 35. Available at SSRN: http://ssrn.com/abstract=258430

Dowd, Kevin. (2009), “Moral Hazard and the Financial Crisis”, Cato Journal, 29(1), 141-166.

Elliott Douglas (2014), “Systemic Risk and the Asset Management Industry”, Economic Studies at Brookings (May 2014).

European Securities and Markets Authority (2014), “Consultation Paper MiFID II/MiFIR”, ESMA/2014/549. Available at: http://www.esma.europa.eu/consultation/Discussion-Paper-MiFID-IIMiFIR

Farhi, E. and Tirole, J. (2012), “Collective Moral Hazard, Maturity Mismatch, and Systemic Bailouts”, American Economic Review, 102(1), pp. 60-93.

Fidelity (2015), Letter from Scott C. Goebel, Senior Vice President, Fidelity Management & Research Company to Secretariat of the FSB, in response to FSB/IOSCO (2015) (May 27, 2015).

Financial Stability Board (2013), "Progress and Next Steps Toward Ending 'Too-Big-To-Fail' (TBTF): Report of the Financial Stability Board to the G-20" (2 September 2013). Available at: https://www.financialstabilityboard.org/publications/r_130902.pdf

FSOC (2011), Financial Stability Oversight Council’s 2011 Annual Report. Available at: http://www.treasury.gov/initiatives/fsoc/Pages/annual-report2011.aspx

Flood Chris (2014), “Big four fund groups summoned to talks over size”, Financial Times, 16th February 2014. Available at: http://www.ft.com/cms/s/0/ea69ff2e-94c7-11e3-9146-00144feab7de.html

FSB/IOSCO (2014), “Assessment Methodologies for Identifying Non-Bank Non-Insurer Global Systemically Important Financial Institutions”, 8 January 2014, http://www.financialstabilityboard.org/2014/01/pr_140108/

FSB/IOSCO (2015), “Assessment Methodologies for Identifying Non-Bank Non-Insurer Global Systemically Important Financial Institutions”, 4 March 2015, http://www.financialstabilityboard.org/2015/03/assessment-methodologies-for-identifying-non-bank-non-insurer-global-systemically-important-financial-institutions/

GAO (1999), “Long-Term Capital Management: Regulators Need to Focus Greater Attention on Systemic Risk”, GGD-00-3, Oct 29, 1999, http://www.gao.gov/products/GGD-00-3

Grind, K. (2014), "PIMCO Sees $48 Billion in Outflows After Gross Departure", The Wall Street Journal (November 5). Available at: http://www.marketwatch.com/story/pimco-sees-48-billion-in-outflows-after-gross-departure-2014-11-05

Hartmann, P., De Bandt, O., and Peydró-Alcalde, J. L. (2009), "Systemic risk in banking: An update". In A. N. Berger, P. Molyneux, and J. Wilson (Eds.), The Oxford Handbook of Banking. Oxford: Oxford University Press, 2009.

HFR (2015), “Global Hedge Fund Industry Report: Third Quarter 2015”, Hedge Fund Research, https://www.hedgefundresearch.com/?fuse=products-irglo? (Nov 12, 2015)

Hull, J. (2012), Options, Futures, and Other Derivatives, 8th ed. New York: Pearson Prentice Hall.

IMF, FSB and BIS (2009), "Guidance to assess the systemic importance of financial institutions, markets and instruments: initial considerations", October 2009.

Invesco (2015), “Comments on the Consultative Document (2nd) Assessment Methodologies for Identifying Non-Bank Non-Insurer Global Systemically Important Financial Institutions”, 2015.

ICI (2013), Investment Company Fact Book (53rd edition).

ICI (2014), Investment Company Fact Book (54th edition).

ICI (2015). “Orderly Resolution” of Mutual Funds and Their Managers. (July 15). Available at: https://www.ici.org/pdf/14_ici_orderly_resolution.pdf

Jorion, Philippe (2000), "Risk Management Lessons From Long-Term Capital Management", European Financial Management, 6(3): 277-300, September 2000.

Jorion, Philippe (2007), Value-at-Risk: The New Benchmark for Managing Financial Risk, 3rd Edition. New York: McGraw-Hill, 2007.

Kaufman, G. and Scott, K. (2003), "What Is Systemic Risk, and Do Bank Regulators Retard or Contribute to It?", The Independent Review, 371-391, Winter.

Metrick, A. (2014), Remarks at the Financial Stability Oversight Council Conference on Asset Management in Washington D.C., 236-37 (May 19, 2014), quoted in Fidelity (2015).

Miller, Merton H. (1997), Merton Miller on Derivatives, Wiley, September 1997.

PIMCO (2015), Letter from Douglas M. Hodge, Chief Executive Officer, Pacific Investment Management Company LLC to Secretariat of the FSB, regarding the FSB-IOSCO framework (May 29, 2015).

Powell, J. H. (2015), “Financial Institutions, Financial Markets, and Financial Stability”, Speech at the Stern School of Business, New York University (February 18). Available at: http://www.federalreserve.gov/newsevents/speech/powell20150218a.htm

Richardson, M. (2015). Asset Management and Systemic Risk: A Framework for Analysis. 16-34 (Mar. 19, 2015) (on file with the Fin. Stability Oversight Council, Docket No. FSOC-2014-0001).

Roncalli, Thierry and Weisang, Guillaume (2015), “Asset Management and Systemic Risk”, Paris December 2015 Finance Meeting EUROFIDAI - AFFI. Available at SSRN: http://ssrn.com/abstract=2610174 or http://dx.doi.org/10.2139/ssrn.2610174

Stadler, Frances and Graham, Rachel (2014), "Living Wills and an Orderly Resolution Mechanism? A Poor Fit for Mutual Funds and Their Managers", ICI Viewpoints, Investment Company Institute, August 12, 2014. https://www.ici.org/viewpoints/view_14_orderly_resolution

Franklin Templeton (2015), "Comments on the Consultative Document (2nd) Assessment Methodologies for Identifying Non-Bank Non-Insurer Global Systemically Important Financial Institutions", 2015.

Triana, Pablo (2012), The Number That Killed Us: A Story of Modern Banking, Flawed Mathematics, and a Big Financial Crisis, Wiley, 2012.

Union Investment (2015), "Comments on the Consultative Document (2nd) Assessment Methodologies for Identifying Non-Bank Non-Insurer Global Systemically Important Financial Institutions", 2015.

Vanguard (2015), "Comments on the Consultative Document (2nd) Assessment Methodologies for Identifying Non-Bank Non-Insurer Global Systemically Important Financial Institutions", 2015.

Waldman, Adam P. (1994), “OTC Derivatives & Systemic Risk: Innovative Finance or the Dance into the Abyss?” American University Law Review 43, no. 3 (Spring 1994): 1023-1090. http://digitalcommons.wcl.american.edu/cgi/viewcontent.cgi?article=1546

Notes:

1. The FSB-IOSCO framework received responses from 47 participants, including academics, trade groups, and the US Chamber of Commerce. We have analysed the responses of the thirteen individual firms that are present in the list: Amundi, BlackRock, Capital Group Companies, Federated Investors, Fidelity Management and Research Company, Franklin Resources, Invesco, KfW, LCH Clearnet, PIMCO, State Street Global Advisors, Union Investment, and Vanguard. Since the methodology under scrutiny was based on the size of the institution, the largest and most representative players were selected.

2. Undertakings for Collective Investment in Transferable Securities (UCITS); Alternative Investment Fund Managers Directive (AIFMD).

3. The impact factors are broadly consistent with the factors identified by the Basel Committee on Banking Supervision (BCBS) and the International Association of Insurance Supervisors (IAIS) to identify systemically important bank and insurance undertakings.

4. While we shall not list the various conversion methodologies proposed in their paper, we invite the reader to consult the typology proposed in CESR (2010, pp. 7-12).

5. The indicators suggested by the FSB-IOSCO framework are: balance sheet leverage (not appropriate to capture derivative leverage), leverage ratio, GNE to NAV ratio (intended to capture derivative leverage), ratio of collateral posted by the investment fund, counterparty credit exposure to the investment fund, liabilities to G-SIFIs, and nature of investors of the fund (FSB/IOSCO, 2015, pp. 40-41).

6. They will apply to: investment firms; third-country firms (entities incorporated outside the EU) providing investment services or activities within the Union; credit institutions when providing investment services and/or performing investment activities; central counterparties and persons with proprietary rights to benchmarks; market operators, including any trading venues they operate; and all financial and non-financial counterparties as defined in EMIR (including AIFs, UCITS, and probably any holding or operating companies owned by an AIF, whether established in the EEA). See Allen & Overy (2013).


Hyman Minsky: An Advocate of Big Government

Juan Ramón Rallo
OMMA, ISEAD, Instituto Juan de Mariana

Journal of New Finance, January-March 2017

Abstract
In The End of Laissez Faire (1926), John Maynard Keynes advocated government growth "not to [do] those activities which private individuals are already fulfilling, but to [do] those functions which fall outside the sphere of the individual, to [do] those decisions which are made by no one if the State does not make them". The three major functions that Keynes assigned to his new model of the State, and which put an end to laissez faire, were: governmental control of money and credit, governmental coordination of savings and investment, and governmental demographic planning. Ten years later, in The General Theory of Employment, Interest and Money (1936), Keynes insisted that "a somewhat comprehensive socialization of investment" was "the only means of securing an approximation to full employment" (Keynes 1936, p. 378). Clearly, Keynes advocated the expansion of government to reach macroeconomic stability through monetary, fiscal and even demographic control of investment. One of the most original and modern thinkers within this Keynesian tradition in defense of "Big Government" was the post-Keynesian economist Hyman Minsky. Indeed, the Great Recession has revived and reinvigorated the Minskyan positions to the point of becoming "required reading", especially for central bankers (Yellen 2009). Here we critically review the Minskyan defense of Big Government as a mechanism to achieve macroeconomic stability. We begin by summarizing his thesis in favor of an expansionary fiscal policy and then discuss its major problems.

Keywords
Hyman Minsky, fiscal policy, Big Government, State, Keynes, full employment

JEL Classification
X1

According to Minsky, prices in a capitalist society have the function not only of allocating scarce resources among competing ends, but also of guaranteeing the creation of a surplus of current production above current consumption which, once materialized into cash flows, makes it possible to cover debt and to provide shareholders a remuneration that makes investment in new capital goods profitable (Minsky 1986).

The way in which prices guarantee the creation of that surplus is by preventing workers from buying all that they have produced, meaning: "market prices of consumption goods have to be greater than the labor income per unit of output that is earned in the production of these goods" (Minsky 1992). Therefore, the more workers employed in the investment goods industries and the higher their wages, the higher consumer goods prices per real wage unit must be: i.e., consumer goods produced through capital-intensive processes require a larger mark-up on wages than those produced through less capital-intensive processes (Minsky 1986, p. 187). Additionally, this mark-up in consumer goods prices over real wages will also depend positively on the volume of the public deficit, on the amount of corporate taxes and on the capitalist propensity to spend, and negatively on the workers' propensity to save (Minsky 1986, p. 170). Therefore:
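The displayed equation following "Therefore:" was lost in extraction. A hedged reconstruction, consistent with the variable definitions that follow and with the profit relations in Minsky (1986, ch. 7), is:

```latex
P_c \;=\; \frac{W_c}{A_c}\left[\,1 \;+\; \frac{W_i N_i \;+\; Df \;+\; c\,\dot{\pi} \;-\; s\,\dot{W}}{W_c N_c}\,\right],
\qquad \dot{\pi} \;=\; (1 - T_{\pi})\,\pi
```

On this rendering the corporate tax rate enters through after-tax profits π̇, and the mark-up rises with the investment-sector wage bill, the deficit and capitalist consumption, and falls with workers' saving, as the text states.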

where,
Wc is the wage rate in the consumer goods industries
Nc is the number of workers in the consumer goods industries
Ac is the average productivity of labour
Wi is the wage rate in the investment goods industries
Ni is the number of workers in the investment goods industries
Df is the budget deficit
π are profits before taxes
Tπ is the corporate tax rate
c is the propensity to consume
π̇ are after-tax profits
s is the propensity to save
Ẇ is the wage rate after taxes

As we were saying, the creation of a surplus in current production over current consumption through the forced rationing imposed by the prices of consumer goods should allow debt repayment and an adequate remuneration of shareholders to stimulate investment in new capital goods. However, the puzzle in the previous equation is that one of the components making up the price of consumer goods is entrepreneurial profits, with investment (I) being one of their main determinants (Minsky 1986, p. 170).

Therefore, the existence of the cash flows needed to validate outstanding debts and asset prices depends on the investment decisions made in the immediate past: an insufficient volume of investment will cause a drop in consumer goods prices, which will lead to an insufficient surplus to validate inherited debts and asset prices. For this reason, according to Minsky, it is a mistake to assume that the free market can self-regulate to achieve full-employment equilibrium: a fall in consumer goods prices is not self-limiting through increased consumption and employment in the consumer goods industries; rather, it paralyzes entrepreneurial investment throughout the whole economy due to the investment industries' inability to cover outstanding debts and sustain asset prices (Minsky 1986, pp. 197-198).

Hence, the correct functioning of a capitalist economy requires sustaining a large volume of investment that generates, through its influence on the price of consumer goods, a surplus high enough to enable debt repayment and profit reinvestment. At the same time, the volume of investment depends on the spread between the demand price and the supply price of capital goods: the higher that spread, the higher the propensity to invest. The demand price (Pk) of an asset is the present value of its expected cash flows, while the supply price (P0) is technologically determined by the relation between the wage rate and the average productivity of labor times


a mark-up which depends on the minimum profit asked by investors as compensation and protection for the risks involved in investment (Minsky 1986, p. 195 and p. 253):

where,
K is the capitalization function
M is the mark-up on unitary labor costs
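The elided display can be reconstructed, on the assumption that it states the two capital-asset pricing relations of Minsky (1986, pp. 195, 253):

```latex
P_K \;=\; K(Q), \qquad P_0 \;=\; \frac{W}{A}\,M
```

that is, the demand price as the capitalization K of expected cash flows Q, and the supply price as unit labor cost times the mark-up M.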

This potential investment demand will not become effective demand unless it is supplemented by financing. Financing comes from three sources: cash and financial assets on hand, free cash flows, and external funds (Minsky 1986, pp. 211-212). Since some portion of aggregate investment will be debt-financed, interest rates will exert some influence on both the supply and the demand price of capital goods: on the one hand, supply prices will be increased by the cost of the short-term financing required to produce capital goods (Minsky 1986, p. 206); on the other hand, the demand prices which investors are willing to pay will be lowered to provide them a larger margin of safety that compensates for the higher risk implied by the periodic obligation to repay long-term debt (Minsky 1986, p. 213).

High short-term and high long-term interest rates will generate a small volume of investment, which will make it hard to repay debts and sustain asset prices. Low interest rates will ease the financing of a large volume of investment (especially with debt), which will positively reinforce itself by generating a large volume of aggregate profits that validates inherited debts and encourages new investments. Lastly, the existence of a normal yield curve (long-term interest rates above short-term interest rates) will provide the incentives for a larger volume of investment through arbitrage between short-term and long-term interest rates: agents will borrow at low short-term interest rates and invest in assets with a higher long-term yield (Minsky, 1990).

This kind of arbitrage between interest rates along the yield curve propagates two types of financial structures that Minsky labels "speculative" and "Ponzi" finance (Minsky 1990, pp. 371-379). The common characteristic of these two financial structures is that cash payment commitments (CC) for some periods are larger than the expected cash flows (Q) for those same periods:
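The missing display presumably states this common characteristic; a reconstruction in the text's notation:

```latex
\exists\, t \;:\quad CC_t \;>\; Q_t
```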

The difference between the two is that, in speculative finance, cash flows net of capital consumption (Qy) are able to cover interest payments (CCy) but not principal repayments for some periods, the expectation being that future cash flows will be able to meet that principal.

In other words, speculative finance requires refinancing the principal of the debt for some time, although the present value of the purchased asset (Pk) remains larger than the present value of the cash payment commitments [K(CC)] if and only if the set of interest rates at which the debt is being refinanced (r) does not rise above a certain threshold (r̂):
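A hedged reconstruction of the elided solvency condition for speculative finance:

```latex
P_K \;=\; K(Q) \;>\; K(CC) \quad\Longleftrightarrow\quad r \;<\; \hat{r}
```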

By contrast, in Ponzi finance, the cash flows net of capital consumption (Qy) of an asset will


not even cover interest payments (CCy) in every period; therefore it will be necessary not only to refinance the principal, but also to increase total indebtedness just to meet the payment of accrued interest:
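The elided display presumably states the Ponzi cash-flow condition; in the text's notation:

```latex
Q_{y,t} \;<\; CC_{y,t} \quad\text{for some periods } t
```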

In other words, debt refinancing in Ponzi finance may actually increase the outstanding principal up to a point where the present value of the expected cash flows of the asset ends up below the present value of its cash payment commitments for an array of interest rates (ř) much lower than those that eroded the net present value of the speculative financial schemes (r̂):
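A hedged reconstruction of the elided condition, writing ř for the (lower) Ponzi threshold and r̂ for the speculative one:

```latex
K(Q) \;<\; K(CC) \quad\text{for } r \;>\; \check{r}, \qquad \check{r} \;<\; \hat{r}
```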

Therefore, speculative and Ponzi finance are very sensitive to movements in short-term interest rates. Only so-called hedge finance is robust against short-term interest rate movements, since hedge finance is characterized by cash flows that cover its cash payment commitments in every period, such that:
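The elided displays here presumably state the hedge-finance condition and its implication; a reconstruction in the text's notation:

```latex
Q_t \;>\; CC_t \;\;\forall t \quad\Longrightarrow\quad K(Q) \;>\; K(CC) \;\;\text{for every } r
```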

As we have indicated before, the transition from an economy dominated by hedge finance to an economy dominated by speculative and Ponzi finance will cause a short-term investment boom (Minsky 1986, p. 235) that will end up as a self-defeating prophecy (Minsky 1986, p. 242): the investment boom will bring about several half-finished entrepreneurial projects with an inelastic demand for short-term refinancing, thus increasing short-term interest rates (Minsky 1986, p. 239) and consequently eroding the margin of safety of every speculative and Ponzi financial scheme. In that scenario, the cash payment commitments of many speculative and Ponzi units will exceed their cash inflows, leaving them unable either to meet their outstanding liabilities or to provide an adequate compensation for their shareholders.

All these imbalances will lead investors to liquidate part of their assets, pushing their demand prices below their supply prices. Therefore, investment spending will necessarily fall. Additionally, aggregate investment will also be negatively affected by three other factors: the supply price of capital goods will rise due to the increase in short-term interest rates; the demand price will be reduced due to the smaller cash flows available to shareholders and the higher general uncertainty; and lastly, some part of the demand for assets will be met through the previously mentioned liquidation of assets. Those consequences will be especially severe among the entrepreneurial projects that are more capital-intensive (and, therefore, in greater need of larger surpluses to meet their debts and to remunerate their shareholders). At the same time, as second-round effects, this reduction of investment will lessen the revenues of many other companies, causing problems even in those entrepreneurial projects with a hedge financial structure (Minsky, 1982, p. 108).

In short, the normal functioning of a capitalist economy implies a tendency towards instability (Minsky 1982, p. 111): something Minsky called "the financial instability hypothesis". The more capitalistic an economy becomes, the more unstable the system becomes (Minsky 1986, p. 222). And, precisely, Minsky defends the intervention of Big Government to fight against



this intrinsic instability of the capitalist system.

On the one hand, Minsky advocates the intervention of central banks as lenders of last resort to avoid a collapse of asset prices that would degenerate into deflationary stagnation (Minsky 1986, p. 44). However, Minsky is not a fan of an unlimited and immediate refinancing of all maturing debt, since that would only contribute to prolonging the fragile exuberance of the boom (Minsky 1986, p. 153).

On the other hand, Minsky advocates budget deficits to compensate for the collapse in aggregate investment. As we have previously seen, both after-tax profits and the prices of consumer goods (needed to create a surplus that remunerates the investment industries) depend both on the volume of aggregate investment and on the size of the budget deficit. This means that a transitory collapse of aggregate investment can be offset through increases in the budget deficit, thus preventing a reduction of aggregate profits and, as a consequence, allowing the repayment of inherited debts and the remuneration of shareholders. In other words, the effects of an investment collapse can be compensated through a budget deficit large enough to avoid the reduction of profits and the resulting contraction of production and employment. How large should that budget deficit be? As large as the reduction of aggregate investment (Minsky 1986, p. 330):
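The elided display after the colon presumably equates the deficit increase with the investment shortfall; a reconstruction from the profit relation used above:

```latex
\pi \;=\; I \;+\; Df \;+\; c\,\dot{\pi} \;-\; s\,\dot{W}
\quad\Longrightarrow\quad
\Delta Df \;=\; -\,\Delta I \;\;\text{(holding } \pi \text{ constant)}
```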

As a rule of thumb, Minsky proposes that the weight of the budget deficit in GDP should be "at least the same order of magnitude as investment" (Minsky 1986, p. 332). Given that aggregate investment in modern economies tends to lie between 20% and 30% of GDP, the size of the government should match at least that same percentage.

These two mechanisms of Big Government (central banks and budget deficits) will allow Minsky to state the following: "Big Government capitalism is more stable than Small Government capitalism" (Minsky 1986, p. 325). The financial instability hypothesis vindicates Big Government.

The Problems of Big Government

The problems of Big Government as a stabilizing mechanism for the economy can be classified into three groups: short-term problems (problems with a given capital structure), long-term problems (problems with a variable capital structure within a given institutional framework) and very long-term problems (problems with both a variable institutional framework and capital structure).

• Short-term problems

As previously described, Minsky classifies consumer goods according to the degree of capital intensity with which they are produced. Capital intensity can be measured through “the ratio to the technologically determined wage bill of the after-tax profits that are required to validate the prices that were paid for capital assets” (Minsky 1986, p. 187).

In other words, the prices of consumer goods produced by more capital-intensive methods will exhibit larger mark-ups than those produced by less capital-intensive ones: i.e., the weight of the wage bill in total costs will be smaller in the former than in the latter, given their more intense investment in labor-saving assets (conversely, the weight of the depreciation charge will be larger).

The choice among more or less capital-intensive production processes will depend on their relative profitability: that production process with a higher capitalized value (higher


demand price) will be chosen. Therefore, if s is a less capital-intensive production process than t, s will only be chosen as an investment if its demand price exceeds that of t (PKs > PKt). The two elements that determine the capitalized values of the different possible structures of production are, on the one hand, interest rates and, on the other, the expected cash flows from the final sale of the consumer goods they contribute to produce. These two elements might seem rather independent variables, but they are in fact deeply interdependent.

Capital-intensive production processes deliver consumer goods much later than less capital-intensive ones, since the former first need to produce the capital goods that will replace workers: in financial terms, the more capital-intensive production processes exhibit a longer Macaulay duration (Lewin and Cachanosky 2013). Hence, either an increase in the demand for present consumer goods or an increase in interest rates will tend to affect the intensiveness of the capital structure in the same way: they will drive the relatively more capital-intensive production processes out in favor of the relatively less capital-intensive ones (and vice versa: a decrease in the demand for present consumer goods or a reduction in interest rates will drive the relatively less capital-intensive production processes out). In the end, an increase in interest rates reduces the present value of later cash flows, while an increase in the demand for present goods relative to future goods raises the value of earlier cash flows.

Therefore, any sustainable increase of investment should go hand in hand with an increase in savings. Following the Minskyan equation of consumer goods price determination, we can show that the only way to ensure that an increase of investment does not raise the price of present consumer goods (Pc), which would in consequence drive out the more capital-intensive production processes, is through more government savings (deficit reduction: Df), more entrepreneurial savings (lower corporate taxes or less consumption out of after-tax profits: Tπ · π; c · π̇) or more savings among workers (∆s · Ẇ).

Since:

Then:
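The displays referenced by "Since:" and "Then:" were lost in extraction; a hedged reconstruction from the price equation stated earlier:

```latex
P_c \;=\; \frac{W_c}{A_c}\left[\,1+\frac{W_i N_i + Df + c\,\dot{\pi} - s\,\dot{W}}{W_c N_c}\,\right]
\quad\Longrightarrow\quad
\Delta(W_i N_i) > 0 \;\text{ with }\; \Delta P_c = 0
\;\;\text{requires}\;\;
\Delta Df < 0,\;\; \Delta(c\,\dot{\pi}) < 0,\;\;\text{or}\;\; \Delta(s\,\dot{W}) > 0
```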

All this means that, according to Minsky, in a non-exogenously manipulated capital market where hedge finance dominates speculative and Ponzi finance (and where, therefore, short-term savings are not channeled into long-term investments), intertemporal coordination between consumption and production will be achieved through the movements of interest rates: an increase in long-term savings will lower long-term interest rates and stimulate investment in more capital-intensive processes (i.e., long-duration processes); a reduction in long-term saving will increase long-term interest rates and stimulate investment in less capital-intensive production processes (i.e., short-duration processes).

On the contrary, in an exogenously manipulated capital market, or in a market where there is a preponderance of speculative or Ponzi finance, investment in capital-intensive production processes will be increased without a corresponding delay in present consumption (long-term investments will not be financed by long-term savings).


That intertemporal imbalance will increase the relative prices of consumer goods, which will in turn promote investment in less capital-intensive production processes, when actually the exact opposite is needed to validate the previous increase in long-term investment: i.e., a relative increase in the prices of those consumer goods produced by the more capital-intensive production processes (Minsky 1986, pp. 188-189). That is to say, in the absence of enough long-term saving, an increase in investment to deepen the duration of the structure of production will reappraise those processes that are less capital-intensive. And that blowback will later lead to a reduction of investment in the more capital-intensive production processes, bringing about negative real and financial consequences for the whole economy.

With regards to the real consequences, the reduction of investment in the more capital-intensive industries will decrease wages and profits in those industries, easing the excess demand that those incomes exerted over consumer goods and which contributed to stimulating the less capital-intensive production processes: this is what Friedrich Hayek called the "Ricardo Effect" (Hayek 1937, pp. 9-10; O'Driscoll 1977, chap. 5). With regards to the financial consequences, the reduction of investment in the capital-intensive industries will leave those industries unable to meet their debts and to remunerate their shareholders, bringing about a rise in general uncertainty that could spread over the rest of the economy (including the financial system): this is what Minsky called "the financial instability hypothesis".

If, in that context of an economic crisis caused by overinvestment in capital-intensive production processes, the government runs large public deficits to sustain aggregate profits (and more particularly, the profits of the investment goods industries most affected by the crisis: that is, the more capital-intensive ones), Big Government will be helping to contain financial instability at the cost of accentuating the Ricardo Effect. That is necessarily true inside Minsky’s model as long as the government deficit is one of the variables determining consumer goods prices: the larger the budget deficit, the larger the rise in consumer goods prices and therefore the larger the contractionary pressure on the more capital-intensive industries.

In other words, the larger the share of the budget deficit channeled into consumer goods demand (i.e., the larger the Keynesian multiplier effect), the larger the contraction of the capital-intensive industries. This does not necessarily mean that the net effect of any budget deficit on GDP will always be negative: in cases where a deep economic depression has brought about huge volumes of idle resources, some public spending on the more capital-intensive production processes could be compatible with a parallel increase of investment in the less capital-intensive processes (Hayek 1937, pp. 42-42). It does mean, however, that the Ricardo Effect fueled by budget deficits constitutes an additional crowding-out effect hardly mentioned in the mainstream economic literature.

After all, it is frequently argued that crowding-out occurs due to the higher interest rates stemming either from the issuance of public debt or from the increased money demand associated with the multiplier effect (Blanchard 2006). But the Ricardo Effect is a crowding-out effect that takes place even when interest rates are not increased (Hayek 1937, pp. 32-33): it is the necessary result of direct competition among production processes with different time profiles, which will necessarily end up raising the costs of those factors of production with a more inelastic supply (bottlenecks). As Hayek stated, the Ricardo Effect is a mechanism by which the increase in the demand for consumer goods forces a reduction in the demand for (some) investment goods even when interest rates remain constant (Hayek 1969).

Estimates of the multiplier effect of budget deficits during recessions should take into account this additional crowding-out effect: the one that derives from fueling the demand for less capital-intensive production processes at the expense of decreasing investment in the more capital-intensive ones. Given that current estimates of the multiplier effect —between 0.8 and 1.2 (Ramey 2011) or even lower under more realistic hypotheses (Cogan et al. 2010)— already accept that budget deficits contract net private spending even without tax hikes (Ramey 2013), the inclusion of this additional crowding-out effect would further lessen the relevance of Big Government’s public deficits as a short-term stabilization mechanism. In many contexts, it could actually make them wholly undesirable.
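As a stylized illustration of this arithmetic (a hypothetical sketch of our own: the 0.25 crowding-out coefficient is invented for the example, not an estimate from the literature):

```python
def net_gdp_effect(deficit, multiplier, ricardo_crowding_out=0.0):
    """Net short-run change in GDP from a deficit-financed stimulus.

    multiplier           -- conventional spending multiplier (the text cites
                            estimates between 0.8 and 1.2)
    ricardo_crowding_out -- hypothetical additional contraction of investment in
                            capital-intensive industries per unit of deficit
                            (the Ricardo Effect discussed above)
    """
    return deficit * multiplier - deficit * ricardo_crowding_out

# With a multiplier of 1.0, a deficit of 100 raises GDP by 100...
print(net_gdp_effect(100, 1.0))         # -> 100.0
# ...but an extra crowding-out of 0.25 per unit cuts the net effect to 75.
print(net_gdp_effect(100, 1.0, 0.25))   # -> 75.0
```

Any positive crowding-out coefficient shrinks the effective multiplier one-for-one, which is the sense in which the Ricardo Effect weakens the stabilizing power of deficits.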

• Long-term problems

The long-term problems of the stabilizer role of Big Government are essentially three: its negative contribution to economic growth, the bailout of zombie industries, and the chronification of inflation.

With regard to the first problem: the market, as an institution for the decentralized and competitive allocation of resources, is better than the state, an institution that allocates resources in a centralized and monopolized manner (Kornai 1992), at solving the typical problems of incentives (Demsetz 1967) and information (Hayek 1945) which affect any division-of-labor economy. In fact, there is ample empirical evidence of a negative correlation between government consumption and economic growth (Barro 1989; Fölster and Henrekson 2001; Afonso and Furceri 2010). Therefore, even if Big Government could stabilize economic fluctuations in the short run, this advantage would come at a large price in the form of lower potential economic growth: something Minsky does not take into consideration in his analysis.

With regard to the second problem, let us remember that, according to Minsky, entrepreneurial income (R) accomplishes three purposes within a capitalist system: covering the “technologically determined costs and overhead” (OV), the repayment of debt (D) and the remuneration of shareholders (S). All this means that any company must fulfill the following condition in order to remain viable (Minsky 1986, p. 177):

R > OV + D + S
From this, we can arrive at three possible scenarios. The first one is that revenues allow the repayment of debts and the covering of the costs of production, but are not able to compensate shareholders for their cost of capital. In this case, the company will stay in business but shareholders will stop any reinvestment unless new profit opportunities arise: they will regret having invested originally in that company, but they will keep it afloat unless its liquidation value is higher than the present value of its future net profits.

The second possibility is that revenues allow the covering of the costs of production but neither the repayment of debt nor the remuneration of shareholders. In this case, the company will enter into bankruptcy and its debts will need to be restructured for it to stay in business under new financial arrangements:

OV < R < OV + D


The final case is that where revenues do not even allow the covering of the costs of production: here even the business model of the company will need to be restructured, or otherwise the company will be liquidated:

R < OV

The first two cases mean that, although the company generates operating profits (R > OV), it is not correctly financed: consequently, its future sustainability depends on finding either new shareholders willing to accept lower yields or new creditors willing to accept lower interest rates, so that the operating surplus becomes large enough to cover the repayment of debt and the remuneration of shareholders (R - OV - D > S). Certainly, there might be no investors with the specific time and risk preferences compatible with the type of funding needed by the business, in which case the company will cease its operations either in the long run (no new reinvestment) or in the short run (liquidation). But if those investors exist, this incorrectly financed company will become sustainable through financial restructuring.

By contrast, the third scenario implies that the business model is not generating economic value: therefore the company will need to close under any possible financial arrangement (unless investors accepted negative yields).

In times of economic depression, aggregate investment falls due to a general rise in uncertainty. In such cases, the revenues of countless businesses will go down and many of them will fall into one of the three previous situations. In this context, there could be some reasonable arguments for a governmental bailout of those businesses that suffer from a transitory collapse of revenues: i.e., of those businesses that may regain profitability in times of financial tranquility (businesses that in normal times can achieve R > OV + D + S) or even of those businesses that could be adequately refinanced in times of financial tranquility (businesses with R > OV in normal times but which need to change their financial structure to become sustainable).

What cannot be justified under any economic reasoning is bailing out either companies that would not manage to cover their costs of production (OV > R) or companies whose profitability would be so tiny that no investor would be willing to reinvest in them in normal, healthy economic times. In either case, a governmental bailout would only help consolidate an inefficient allocation of resources: an allocation that either does not cover the opportunity costs of production or does not cover the opportunity cost of financing that production process. These two types of businesses can properly be called “zombie businesses”.
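The decision rule implicit in these scenarios can be condensed into a short sketch (an illustrative classification of our own, not Minsky's own formulation; variable names follow the text):

```python
def classify_firm(R, OV, D, S):
    """Classify a firm by Minsky's viability condition R > OV + D + S.

    R  -- entrepreneurial income (revenues)
    OV -- technologically determined costs and overhead
    D  -- repayment of debt falling due
    S  -- required remuneration of shareholders
    """
    if R > OV + D + S:
        return "viable"                    # income covers costs, debt and capital
    if R > OV + D:
        return "needs cheaper equity"      # scenario 1: shareholders under-compensated
    if R > OV:
        return "needs debt restructuring"  # scenario 2: operating surplus below debt service
    return "restructure or liquidate"      # scenario 3: costs of production not covered

# Revenues of 100 cover overhead (60) and debt (30) but not shareholders (20):
print(classify_firm(100, 60, 30, 20))  # -> needs cheaper equity
```

A bailout criterion that cannot distinguish the first three outcomes from the last one is precisely what allows zombie businesses to survive.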

When a financial crisis is not caused by an unfounded financial panic but by some imbalance between the intertemporal preferences of savers and the intertemporal plans of investors (Lewin 2011, chap. 6; Manish and Powell 2014), many business structures will be forced to readjust in the face of losses or even bankruptcy (Lachmann 1956, p. 122). Bailing out such companies would only consolidate zombie businesses, shifting the burden of the readjustment onto the rest of the economy and reducing overall productivity (Caballero et al. 2008).

Minsky does not explain how public deficits would ensure that only sustainable (non-zombie) businesses are bailed out. He simply states that, once the deflationist collapse has been prevented through the stabilization of aggregate profits, the economic system will be able to absorb any individual bankruptcies (Minsky 1986, p. 354): but sustaining aggregate profits includes the possibility of sustaining the profits of zombie businesses that should have entered into bankruptcy. In other words, Minsky does not provide a solution to the problem that zombie businesses could become predominant in the whole economy thanks to Big Government’s deficits: as a matter of fact, Minsky’s work only considers the possibility of an economy becoming stagnant as a consequence of a deflationist depression, not as a result of the spread of zombie businesses (Minsky 1970).

Up to this point, we have seen that Minsky does not take into account two of the most important long-run problems of Big Government. The same cannot be said of the third one: chronic inflation. In fact, Minsky goes as far as to state (Minsky 1986, p. 315) that Big Government can be considered both a blessing (for its stabilizing role along the economic cycle) and a curse (for its role in generating inflation).

From Minsky’s basic price equation, Pc = (Wc / Ac) * (1 + m) (where Ac is labor productivity in the consumer goods industries and m the mark-up), inflation in consumer goods can happen either because wages increase more than productivity or because the mark-up goes up: in particular, Big Government’s influence is felt on the mark-up (Minsky 1986, p. 284). That mark-up depends, as we have previously analyzed, on the volume of investment, on the public deficit, on consumption out of profits and on savings out of wages. Therefore, an increase in investment, in the deficit or in consumption out of profits will raise the mark-up and, unless those movements are accompanied by an increase in productivity or in savings out of wages, prices of consumer goods will go up. Nonetheless, this initial inflationist movement will not continue on its own if nominal wages do not increase and real wages are therefore reduced as a consequence of the higher prices (Minsky 1986, p. 288).
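A toy numerical sketch of this mechanism, with consumer prices modeled as unit labor costs times one plus the mark-up (all figures hypothetical):

```python
def consumer_price(wage, productivity, markup):
    """Mark-up pricing of consumer goods: P = (W / A) * (1 + m)."""
    return (wage / productivity) * (1 + markup)

base = consumer_price(wage=20.0, productivity=10.0, markup=0.25)
# A larger deficit channeled into consumer demand raises the mark-up...
deficit_boom = consumer_price(wage=20.0, productivity=10.0, markup=0.40)
# ...unless it is matched by a productivity gain that offsets the price rise.
offset = consumer_price(wage=20.0, productivity=11.2, markup=0.40)

print(base, deficit_boom, offset)  # roughly 2.5, 2.8, 2.5
```

The middle case is the inflationary one: a deficit-driven rise in the mark-up with unchanged wages and productivity feeds directly into consumer prices.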

Otherwise, if nominal wages increase in parallel with consumer goods prices, then open-ended inflation will occur (Minsky 1986, p. 290). Several reasons could lead to this result, since the evolution of nominal wages after price increases depends on the interaction between the institutional framework and the expectations of economic agents. In Small Government capitalism, however, open-ended inflation cannot happen, since the initial inflationist surge of investment will necessarily degenerate into a deflationist recession where prices cease to increase (Minsky 1986, pp. 299-300). These natural limits to open-ended inflation disappear with Big Government: budget deficits and central bank refinancing sustain the aggregate demand for consumer goods, thus generalizing the expectation that inflation will not be self-limited (Minsky 1986, p. 301).

Therefore, the risk of open-ended inflation is another important cost of Big Government: high inflation not only causes the famous menu and shoe-leather costs but, when it is unevenly distributed throughout the economy, it also modifies the structure of relative prices and, as a consequence, the whole structure of production (this is the main lesson behind the well-known Cantillon Effect). At the same time, when a monetary standard becomes a bad store of value, the relative prices of the different classes of assets change, artificially lowering the cost of those financial liabilities which act as close substitutes of money and which serve as partial hedges against inflation. Furthermore, a bad store of value also fuels the demand for financial-industry services to handle the resulting monetary uncertainty, thereby absorbing real resources that would otherwise be available for the economy (Friedman 1986; Horwitz 2003). As a matter of fact, Minsky himself recognized that an unconstrained open-ended inflation derived from a structural public deficit could finally lead to the repudiation of the currency unit (Minsky 1986, p. 337): an event with gigantic economic costs (Hutchison and Noy 2005; Paoli et al. 2009).

Given all the previous adverse effects, Minsky advocates avoiding unconstrained open-ended inflation through a structural budget balance (he presupposes the existence of a non-Ricardian fiscal regime where inflation depends essentially on fiscal, not monetary, policy): i.e., price stability is guaranteed by ensuring that the outstanding public debt can be completely met with future budget surpluses (Minsky 1986, pp. 338-339). Therefore, Minsky thinks that budget deficits during recessions must be compensated by public surpluses during expansions.

However, let us remember that, according to Minsky, budget surpluses are contractionary, as they depress aggregate profits. This means that, unless the initial stimulus is capable of bringing the economy sustainably out of recession in the short run, the need to guarantee price stability will end up imposing a contractionary fiscal policy during the depression: something that will slow down any potential recovery.

To sum up: in the long run, Big Government hinders economic growth, and its large budget deficits contribute to bailing out zombie industries and to chronifying inflation unless those deficits are reversed in the short run. Governments could therefore adopt one of two strategies. If they adhere to price stability, Big Government deficits become an inefficient and insufficient policy to counteract those economic crises characterized by long deleveraging processes and deep real readjustments. If, on the contrary, governments do not stick to price stability and budget deficits are therefore not reversed in the short run, then zombie industries will be perpetuated and inflation will become chronic, contributing to a suboptimal allocation of capital in an inflationary stagnation (stagflation). Consequently, Big Government deficits could only contribute to stabilizing economic activity when the crisis results from an unfounded collapse of expectations or from minor real and financial imbalances (and even in those cases we should consider the crowding-out effects previously analyzed), and that would still come at the cost of hindering economic growth.

• Very long-term problems

Big Government is not only destabilizing in the short run and stagnating in the long run: in the very long run, it also reshapes the institutional framework within which economic agents operate. As a result, it is particularly interesting to study whether Big Government promotes the spread of the speculative or Ponzi finance which eventually degenerates into the very deflationist depression whose consequences Big Government initially tried to avoid.

Minsky does acknowledge that a Small Government institutional framework, where monetary and fiscal policies are not used to stabilize economic activity during recessions, would encourage economic agents to avoid and correct fragile financial structures, as well as to learn from past financial mistakes. More specifically, without the expectation of governmental bailouts, lenders would have an incentive to control borrowers’ over-indebtedness: in the case of banks, their level of short-term leverage would be overseen by depositors and other financial institutions, who would force a run on any insolvent bank (Minsky 1986, p. 271 and p. 282). Moreover, after an economic crisis, entrepreneurs would become more conservative, shifting away from speculative or Ponzi finance toward the kind of hedge financial structures which do not lead to a new inflationist boom (Minsky 1986, p. 234), at least until both overconfidence returns to the markets and agents’ liquidity substantially increases (Minsky 1986, p. 235).

Big Government notably worsens this institutional framework. On the one hand, the expectation of governmental bailouts (whether through fiscal or monetary policy) reduces creditors’ incentives to control their debtors, so that “there are no effective market barriers to bank expansion and thus to the destabilizing impact of banks upon demand” (Minsky 1986, p. 277). On the other hand, the reemergence of speculative or Ponzi finance after a crisis develops much faster when the government guarantees entrepreneurial profits and when the central bank commits to refinancing any maturing debt, thus minimizing credit risk (Minsky 1986, p. 235 and p. 364).

To summarize: Minsky acknowledges that Big Government increases “moral hazard” and therefore the financial fragility of the whole economic system. This is tantamount to saying that Big Government’s promise to intervene in order to guarantee macroeconomic stability will endogenously create the conditions that make that intervention compulsory (Minsky 1985). How can these unintended institutional consequences of Big Government be prevented? According to Minsky, through regulation.

More specifically, Minsky argues that the central bank must acquire regulatory powers to determine the capital ratios of financial institutions and to influence their liquidity structure through its discount window (Minsky 1986, pp. 356-358). However, governments all around the world already possess those powers, and they were unable to prevent the so-called Great Recession (the worst deflationist depression since the Great Depression). Perhaps the chief cause behind this failure is that, as Minsky recognizes, financial innovation will always be faster than regulators’ mandates: “In a world of businessmen and financial intermediaries who aggressively seek profit, innovators will always outpace regulators” (Minsky 1986, p. 281). Decentralized financial innovation and regulatory arbitrage by millions of economic agents who possess deep and specific knowledge of their market segment will tend to beat centralized government orders (Kling 2009, chap. 2).

As a consequence, the optimal strategy for achieving financial stability in the very long run should consist in spreading the institutional incentives that promote the self-regulation of every economic agent: something that can be accomplished by linking financial survival to the maintenance of good liquidity and solvency standards (hedge finance, in Minsky’s terms). Certainly, if the abuse of speculative or Ponzi finance led to notable losses of capital, economic agents would learn the lesson and would sooner or later adopt hedge financial structures to avoid future losses (something that even Minsky admits).

Therefore, the typical institutional incentives of a free market system would lead to the development of robust financial structures in the long run (Selgin 1989). On the contrary, Big Government’s interventions to sustain aggregate profits in the short run reward reckless financial behavior by insulating economic agents from the consequences of their bad financing choices. Big Government’s unintended failures ultimately create the need for further Big Government interventions, which in turn degenerate into new problems that require new interventions (Mises 1929, p. 28).

Conclusion

Minsky defends Big Government as a tool for stabilizing aggregate spending through budget deficits. However, this policy is not exempt from notable problems which are not taken into account by the post-Keynesian economist: in the short run, the stabilization of aggregate spending is at best quite moderate once every crowding-out effect is considered; in the long run, Big Government hinders economic growth and tends to stagnate the economy by bailing out zombie industries and by chronifying inflation; in the very long run, it reshapes the institutional structure of incentives by rewarding financial imprudence.

We should consequently consider Small Government as a serious alternative for stabilizing the economy, even within the Minskyan framework of analysis. Small Government capitalism would promote hedge-finance self-regulation, thus minimizing economic crises. At the same time, its free, flexible and stable markets would allow the reabsorption of any past investment error and would ease the relaunch of private investment as soon as economic agents discover new profit opportunities. Debt-deflation cycles would not ensue as long as the expectation of future profits does not collapse, thanks to the ability of the whole economy to readjust failed investments and to restructure inherited debts. Obviously, one cannot absolutely rule out coordination failures that could eventually be solved faster by some moderate budget deficit: but the costs of institutionalizing Big Government seem much higher than any occasional differential advantage that it could provide.

Bibliography:

Afonso, António and Furceri, Davide. “Government size, composition, volatility and economic growth”, European Journal of Political Economy, vol. 26, no. 4, 2010.

Barro, Robert. “A Cross-Country Study of Growth, Saving, and Government”, in National Saving and Economic Performance, Bernheim and Shoven, University of Chicago Press, 1989.

Blanchard, Olivier. “Crowding Out”, The New Palgrave Dictionary of Economics, 2006.

Caballero, Ricardo; Hoshi, Takeo; Kashyap, Anil. “Zombie Lending and Depressed Restructuring in Japan”, American Economic Review, vol. 98, no. 5, 2008.

Cogan, John; Cwik, Tobias; Taylor, John; and Wieland, Volker. “New Keynesian versus old Keynesian government spending multipliers”, Journal of Economic Dynamics and Control, 2010.

Demsetz, Harold. “Toward a Theory of Property Rights”, The American Economic Review, Vol. 57, No. 2 (1967).

Fölster, Stefan and Henrekson, Magnus, “Growth effects of government expenditure and taxation in rich countries”, European Economic Review, vol. 45, no. 8, 2001.

Friedman, Milton. “The Resource Cost of Irredeemable Paper Money”, Journal of Political Economy, vol. 94, No. 3, 1986.

Keynes, John Maynard. The End of Laissez-Faire. L. & Virginia Woolf, 1926.

-- The General Theory of Employment, Interest and Money. Palgrave Macmillan, 1936.

Kling, Arnold. Unchecked and Unbalanced: How the Discrepancy Between Knowledge and Power Caused the Financial Crisis and Threatens Democracy. Hoover Studies in Politics, Economics, and Society, 2009.

Kornai, János. The Socialist System: The Political Economy of Communism. Princeton University Press, 1992.

Hayek, Friedrich. “The Use of Knowledge in Society”, American Economic Review, Vol. 35, No. 4, 1945.

-- Profits, Interest, and Investment. Augustus M. Kelley Publishers, 1939 [1975]

-- “Three Elucidations of the Ricardo Effect”, Journal of Political Economy, Vol. 77, No. 2, 1969.

Horwitz, Steven. “The Costs of Inflation Revisited”, The Review of Austrian Economics, vol. 16, no. 1, 2003.

Hutchison, Michael and Noy, Ilan. “How Bad Are Twins? Output Costs of Currency and Banking Crises”, Journal of Money, Credit and Banking, Vol. 37, No. 5, 2005.


Lachmann, Ludwig. Capital and its Structure. Institute for Humane Studies, 1956 [1978].

Lewin, Peter. Capital in Disequilibrium. Ludwig von Mises Institute, 2011.

-- & Cachanosky, Nicolas. “Roundaboutness is Not a Mysterious Concept: A Financial Application to Capital Theory”, Review of Political Economy, 2013.

Manish G. and Powell, Benjamin. “Capital Theory and the Process of Inter-Temporal Coordination: The Austrian Contribution to the Theory of Economic Growth”, Atlantic Economic Journal, vol. 42, no. 2, 2014.

Minsky, Hyman. “Financial Instability Revisited: The Economics of Disaster”, Hyman P. Minsky Archive, 1970.

-- Can “It” Happen Again? Essays on Instability and Finance. Taylor & Francis, 1982.

-- “Money and the Lender of Last Resort”, Hyman P. Minsky Archive, 1985.

-- “Fragility and Resilience of the International Financial Structure: Some General Conditions and their Applicability to Current Conditions”, Hyman P. Minsky Archive, 1990.

-- “Prices in a Financially Sophisticated Capital-Using Capitalist Economy”, Hyman P. Minsky Archive, 1992.

-- Stabilizing an Unstable Economy. McGraw-Hill Books, 1986 [2008].

Mises, Ludwig von. A Critique of Interventionism. Ludwig von Mises Institute, 1929 [2011].

O’Driscoll, Gerald. Economics as a Coordination Problem: The Contributions of Friedrich A. Hayek. Sheed Andrews and McMeel, Inc., 1977.

Paoli, Bianca de; Hoggarth, Glenn; Saporta, Victoria. “Output costs of sovereign crises: some empirical estimates”, Bank of England Working Paper, No. 362, 2009.

Ramey, Valerie. “Can Government Purchases Stimulate the Economy?”, Journal of Economic Literature, Vol. 49, No. 3, 2011.

-- “Government Spending and Private Activity”, in Fiscal Policy After the Financial Crisis, Alesina and Giavazzi, University of Chicago Press, 2013.

Selgin, George. “Legal Restrictions, Financial Weakening, and the Lender of Last Resort”, Cato Journal, vol. 9, no. 2, 1989.

Yellen, Janet. “A Minsky Meltdown: Lessons for Central Bankers”, FRBSF Economic Letter, 2009.

Notes:

1. In an economy with no savings out of wages, with no consumption out of profits and no government, the revenues in the consumer goods industries (Pc * Qc) must be equal to the total wage bill in both the consumer (Wc * Nc) and the investment goods industries (Wi * Ni):

Pc * Qc = Wc * Nc + Wi * Ni

Allowing for the government, the revenues in the consumer goods industries must be equal to the total after-tax wage bill in the consumer and investment goods industries plus the after-tax wage bill in the governmental sector (Wg * Ng) plus governmental transfers (TR). Denoting the tax rate on wages by Tw:

Pc * Qc = (1 - Tw) * (Wc * Nc + Wi * Ni + Wg * Ng) + TR


As the budget deficit (Df) is just the difference between public spending and public revenues (including corporate taxes: Tπ * π):

Df = Wg * Ng + TR - Tw * (Wc * Nc + Wi * Ni + Wg * Ng) - Tπ * π

then:

Pc * Qc = Wc * Nc + Wi * Ni + Df + Tπ * π.

Lastly, allowing for consumption out of after-tax profits (c * π) and savings out of after-tax wages (s * W, with W the total after-tax wage bill), revenues in the consumer goods industries will have to cover also these two magnitudes:

Pc * Qc = (1 - s) * W + TR + c * π

And therefore:

Pc * Qc = Wc * Nc + Wi * Ni + Df + Tπ * π + c * π - s * W.

2. In an economy with no savings out of wages, with no consumption out of profits and without government, the revenues in the consumer goods industries (Pc * Qc) must be equal to the total wage bill in both the consumer (Wc * Nc) and investment goods industries (Wi * Ni):

Pc * Qc = Wc * Nc + Wi * Ni

Therefore, profits in the consumer goods industries (πc) will be equal to the wage bill in the investment goods industries, while profits in the investment goods industries (πi) will be equal to the difference between total investment spending (I) and their wage bill:

πc = Pc * Qc - Wc * Nc = Wi * Ni
πi = I - Wi * Ni

Hence:

π = πc + πi = I

Allowing for the government:

π = I + Df + Tπ * π

Therefore, total after-tax profits will be:

π * (1 - Tπ) = I + Df

And lastly, allowing for consumption out of profits and savings out of wages, total after-tax profits will be:

π * (1 - Tπ) = I + Df + c * π - s * W
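The two-sector accounting in this note can be checked numerically (toy figures of our own):

```python
# Toy two-sector economy: consumer goods and investment goods industries.
Wc_Nc = 60.0   # wage bill in the consumer goods industries (Wc * Nc)
Wi_Ni = 30.0   # wage bill in the investment goods industries (Wi * Ni)
I = 50.0       # total investment spending

# No savings out of wages, no consumption out of profits, no government:
Pc_Qc = Wc_Nc + Wi_Ni            # revenues of the consumer goods industries

profit_c = Pc_Qc - Wc_Nc         # = Wi_Ni
profit_i = I - Wi_Ni
total_profit = profit_c + profit_i
print(total_profit == I)         # True: aggregate profits equal investment

# Allowing for a government deficit Df that adds to consumer goods demand:
Df = 10.0
Pc_Qc_gov = Wc_Nc + Wi_Ni + Df
profit_c_gov = Pc_Qc_gov - Wc_Nc
total_profit_gov = profit_c_gov + profit_i
print(total_profit_gov == I + Df)  # True: profits equal investment plus deficit
```

The second half of the check is precisely the mechanism by which Big Government deficits sustain aggregate profits: every unit of deficit spent on consumer goods shows up one-for-one in profits.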


Journal of New Finance, January-March 2017

Macroeconomics and the Financial Cycle: Hamlet Without the Prince?

Claudio Borio
Bank for International Settlements

Abstract

Since the early 1980s, the financial cycle has re-emerged as a major force driving the macroeconomy, but economic analysis has not caught up. This article argues that macroeconomics without the financial cycle is like Hamlet without the Prince. Economic analysis and policies – monetary, fiscal, and prudential – should be adjusted to fully account for the financial cycle, but here more analytical work is needed. The question of how we address the bust and balance-sheet recession that follow the boom deserves special attention.

Keywords

financial cycles, economic policy, economic analysis, balance sheet recessions

JEL Classification

G01, G02, G12, G14, G18

We thought we knew; we have since forgotten. It is high time we rediscovered the role of the financial cycle in macroeconomics. In the environment that has prevailed for at least three decades now, it is not possible to understand business fluctuations and the corresponding analytical and policy challenges without understanding the financial cycle. This perspective was largely taken for granted as far back as the 19th century and all the way up to the Great Depression (Overstone 1857); it barely survived at the periphery of economics in the post-war period; and it has slowly been regaining ground, in modern guise, after the Great Financial Crisis. Yet, sadly, it is still far from becoming part of our intellectual furniture.[1]

The Financial Cycle: Key Features

The financial cycle is best thought of as the self-reinforcing interactions between perceptions of value and risk, attitudes towards risk, and financing constraints, which translate into booms followed by busts (Borio 2012a). These interactions can amplify economic fluctuations and possibly lead to serious financial distress and economic dislocations.

A growing body of empirical work, not least that carried out at the BIS (Drehmann et al 2012), suggests that the financial cycle has several key properties.

• First, its most parsimonious description is in terms of the behaviour of private-sector credit and property prices.

Equity prices can be a distraction: they exhibit shorter cycles and tend to be more closely related to short-term fluctuations in GDP, which may leave the financial sector largely unscathed.


Figure 1: The financial and business cycles in the US

Notes: The orange and green bars indicate peaks and troughs of the financial cycle measured by the combined behaviour of the component series (credit, the credit-to-GDP ratio and house prices) using the turning-point method. The blue line traces the financial cycle measured as the average of the medium-term cycles in the component series using frequency-based filters. The red line traces the GDP cycle identified by the traditional shorter-term frequency filter used to measure the business cycle.

Source: Drehmann et al (2012)

• Second, the financial cycle has a much lower frequency than the traditional business cycle.

Since financial liberalisation, its typical length has been of the order of 16 to 20 years; by contrast, as generally conceived in academic and policy work, business-cycle frequencies run up to eight years. Figure 1 illustrates this for the US, based on both frequency filters and peak-to-trough analysis.
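For intuition, the frequency-based decomposition behind Figure 1 can be mimicked with a crude FFT band-pass filter on synthetic quarterly data. This is only a sketch: Drehmann et al (2012) use Christiano-Fitzgerald band-pass filters on actual credit and property-price series, whereas the series, band edges and noise level below are invented for illustration.

```python
import numpy as np

def bandpass(x, low_q, high_q):
    """Keep only cyclical components with periods between low_q and high_q
    quarters, via a crude FFT band-pass (illustrative, not the CF filter)."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0)      # frequency in cycles per quarter
    fx = np.fft.rfft(x - x.mean())         # demean before filtering
    periods = np.full(freqs.shape, np.inf)
    periods[1:] = 1.0 / freqs[1:]          # convert frequency to period
    keep = (periods >= low_q) & (periods <= high_q)
    return np.fft.irfft(fx * keep, n)

# Synthetic 60-year quarterly series: a 16-year "financial" cycle plus a
# 5-year "business" cycle plus noise (all parameters invented).
rng = np.random.default_rng(0)
t = np.arange(240)
financial = np.sin(2 * np.pi * t / 64)
business = 0.5 * np.sin(2 * np.pi * t / 20)
series = financial + business + 0.1 * rng.standard_normal(240)

medium = bandpass(series, 32, 120)   # medium-term (8- to 30-year) band
short = bandpass(series, 5, 32)      # traditional business-cycle band
```

Each filtered series tracks its corresponding underlying cycle closely, which is the sense in which the medium-term filter "sees" a financial cycle that the business-cycle filter misses.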

• Third, peaks in the financial cycle tend to coincide with episodes of systemic financial distress.

For example, in a sample of seven industrial countries (Australia, Germany, Japan, Norway, Sweden, the UK and the US), all post-liberalisation financial-cycle peaks are associated with either full-blown crises or serious financial strains. And those banking systems that experienced stress away from the peak did so because they were exposed to financial cycles elsewhere (e.g. Germany and Switzerland in 2008).

• Fourth, the financial-cycle regularities inform the construction of real-time leading indicators of banking crises that provide fairly reliable signals with quite a good lead – between two and four years, depending on the calibration (e.g. Borio and Drehmann 2009).

Not surprisingly, such indicators are best based on the (private-sector) credit-to-GDP ratio and property prices jointly exceeding certain thresholds that fall outside normal historical ranges. One can think of these indicators as proxies for the build-up of financial imbalances and as tools that help policymakers distinguish sustainable booms from unsustainable ones. The evidence also indicates that, during such credit booms, the cross-border component of credit tends to outpace the purely domestic one (e.g. Borio et al 2011).
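A purely illustrative sketch of such a joint-threshold indicator follows. The window, threshold values and synthetic data are invented; Borio and Drehmann (2009) calibrate the actual gaps with a one-sided HP filter and formal noise-to-signal analysis rather than the rolling mean used here.

```python
import numpy as np

def one_sided_gap(series, window=20):
    """Deviation of a series from a backward-looking rolling-mean trend.
    A crude real-time stand-in for the one-sided HP filter: only past data
    are used at each date, so there is no look-ahead bias."""
    gaps = np.full(len(series), -np.inf)   # -inf = not enough history yet
    for t in range(window, len(series)):
        gaps[t] = series[t] - series[t - window:t].mean()
    return gaps

def warning_signal(credit_to_gdp, property_prices,
                   credit_thresh=10.0, prop_thresh=15.0):
    """Flag quarters where the credit-to-GDP gap (percentage points) and the
    property-price gap (per cent, via logs) jointly breach their thresholds.
    Threshold values here are invented, not the calibrated BIS ones."""
    credit_gap = one_sided_gap(np.asarray(credit_to_gdp, dtype=float))
    prop_gap = one_sided_gap(100 * np.log(np.asarray(property_prices, dtype=float)))
    return (credit_gap > credit_thresh) & (prop_gap > prop_thresh)

# Synthetic economy: ten calm years, then a joint credit and house-price boom.
credit = np.concatenate([np.full(40, 140.0), 140 + 4 * np.arange(1, 11)])
prices = np.concatenate([np.full(40, 100.0), 100 * 1.05 ** np.arange(1, 11)])
signal = warning_signal(credit, prices)
```

The signal stays off during the calm decade and switches on only once both gaps move outside their normal range, mirroring the joint-threshold logic described above.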

• Fifth, for much the same reasons, financial-cycle information also helps construct real-time estimates of sustainable output that, compared with traditional potential-output estimates, are much more reliable in real time and statistically more precise (Borio et al 2013).

Such estimates, for instance, would have shown that, during the boom that preceded the Great Financial Crisis, output in the US was growing well beyond sustainable levels. By contrast, the more commonly used approaches, such as the production-function methodology, detected this pattern only well after the crisis, if at all.

• Finally, the financial cycle depends critically on policy regimes.

Financial liberalisation weakens financing constraints. Monetary-policy frameworks focused on short-term inflation control provide less resistance to the build-up of financial imbalances whenever inflation remains low and stable. And positive supply-side developments (e.g. the globalisation of the real economy) fuel the financial boom while putting downward pressure on inflation. Not surprisingly, financial cycles have become twice as long since the financial liberalisation of the early 1980s and have been especially virulent since the early 1990s (see Figure 1).

The Financial Cycle: Analytical Challenges

Analytically, modelling the financial cycle requires capturing three key features.

• The booms should not just precede but cause the busts: busts are fundamentally endogenous, the result of the vulnerabilities and distortions built up during the boom.

• The busts should generate debt and capital stock overhangs – the natural legacy of the preceding unsustainable expansion.

• And potential output should not just be identified with non-inflationary output: as the previous evidence indicates, output may be on an unsustainable trajectory even if inflation is stable.

How could one best capture these features? Most likely, one would need to:

• Drop ‘rational’ (model-consistent) expectations.

• Allow for state-varying risk tolerance, i.e. for attitudes towards risk that vary with the state of the economy, wealth, and balance sheets.

• And, last but not least, capture more deeply the monetary nature of our economies: the banking sector does not just allocate given resources but creates purchasing power out of thin air. All this may well require us to rediscover the merits of disequilibrium analysis.[2]

The Financial Cycle: Policy Challenges

Dealing with the financial cycle calls for policies that are more symmetrical across booms and busts: policies need to lean against the booms and tackle the debt and asset-quality problems head-on during the busts. A medium-term focus is essential.

During the boom, the key question is how to address the build-up of financial imbalances.

• For prudential policy, it means containing the procyclicality of the financial system through macroprudential measures (Borio 2009).

• For fiscal policy, it means extra prudence, fully recognising the hugely flattering effect of financial booms on the fiscal accounts: potential output and growth are overestimated; financial booms are tax-revenue-rich; and large contingent liabilities are needed to address the bust.

• For monetary policy, it means leaning against the build-up of financial imbalances even if short-term inflation remains subdued.

During the bust, the key question is how to address the balance-sheet recession that follows, i.e. how to prevent a stock problem from becoming a persistent and serious flow problem in the form of anaemic output and expenditures. Once the system has been stabilised (crisis management phase), it is necessary to move swiftly to tackle the over-indebtedness and asset-quality problems head-on (crisis resolution phase).

The crisis resolution phase is critical and less well understood.

For prudential policy, it means repairing banks’ balance sheets aggressively through the full recognition of losses, asset disposals, recapitalisations subject to strict conditionality, and the reduction of operational excess capacity, a precondition for sustainable profitability. This is what the Nordic countries did, and what Japan failed to do, following the busts of their respective financial cycles in the early 1990s; this partly explains their subsequent divergent economic performance.

For fiscal policy, it means creating the fiscal space needed to use the sovereign’s balance sheet to support private-sector balance-sheet repair while avoiding a sovereign crisis down the road. This can be done through bank recapitalisations, including via temporary public-sector ownership, and selective debt relief for the non-financial sector (e.g. households). In fact, contrary to received wisdom, pump-priming – where it can be afforded – may well be less effective in a balance-sheet recession, as agents tend to save the extra money to repay debt, resulting in a low multiplier. By contrast, by relieving debt burdens and asset-quality problems, the alternative use of fiscal space could set the basis for a self-sustaining recovery.

For monetary policy, it means recognising its limitations and avoiding overburdening it. Monetary tools are blunt when overindebted sectors are unwilling to borrow and banking-system strains impair the transmission chain. As a result, when policymakers press harder on the gas pedal, the engine revs up without gaining traction. Over time, this amplifies any side effects that policy may have. These include the possibility of delaying balance-sheet adjustment, such as by facilitating evergreening; of undermining the profitability of banks, by compressing interest margins; of masking market signals; and of raising political-economy concerns, not least because of the quasi-fiscal nature of the large-scale deployment of central bank balance sheets.

The risk is that policies which fail to address the balance-sheet problems aggressively buy time but also make it easier to waste it. This can prolong weakness and delay a strong, self-sustaining recovery; it is as if the economy operated in a state of suspended animation. Some new empirical evidence that carefully differentiates between the nature of recessions is broadly consistent with this picture (Bech et al 2012).

The longer-term risk is that policies that fail to recognise the financial cycle will be too asymmetric and generate a serious bias over time. Failing to tighten policy in a financial boom while facing strong, if not overwhelming, incentives to loosen it during the bust would erode both the economy’s defences and the authorities’ room for manoeuvre. In the end, policymakers would be left with a much bigger problem on their hands and without the ammunition to deal with it – a new form of ‘time inconsistency’. The root causes here are horizons that are too short and a failure to appreciate the cumulative impact of flows on stocks. This could entrench instability in the system over successive cycles (Borio 2012b).

Conclusion

Macroeconomics without the financial cycle is very much like Hamlet without the Prince: a play that has lost its main character. Post-crisis, both policymakers and academics are making efforts, to varying degrees, to understand and respond to the challenges the financial cycle poses. But these efforts are still falling short of the mark. The stakes are high; the road ahead a long one.

(This article was first published online by VoxEU (http://voxeu.org/article/macroeconomics-and-financial-cycle-hamlet-without-prince) on February 2, 2013. We express our gratitude to both the publisher and the author for their permission to republish this article.)

Bibliography:

Adrian, T and H Shin (2010), “Financial intermediaries and monetary economics”, in B Friedman and M Woodford (eds), Handbook of Monetary Economics, vol 3, Amsterdam: North Holland, pp 601–50.

Aikman, D, A Haldane and B Nelson (2010), “Curbing the credit cycle”, paper presented at the Columbia University Center on Capitalism and Society Annual Conference, New York, November.

Bech, M, L Gambacorta and E Kharroubi (2012), “Monetary policy in a downturn: are financial crises special?”, BIS Working Papers, no 388, September.

Bernanke, B, M Gertler and S Gilchrist (1999), “The financial accelerator in a quantitative business cycle framework”, in J Taylor and M Woodford (eds), Handbook of Macroeconomics, Amsterdam, pp 1341–93.

Borio, C (2009), “We are all macroprudentialists now”, VoxEU.org, 14 April.

Borio, C (2012a), “The financial cycle and macroeconomics: what have we learnt?”, BIS Working Papers, no 395, December.

Borio, C (2012b), “On time, stocks and flows: understanding the global challenges”, lecture at the Munich Seminar series, CESifo Group and Süddeutsche Zeitung, 15 October, BIS Speeches.

Borio, C and P Disyatat (2011), “Global imbalances and the financial crisis: link or no link?”, BIS Working Papers, no 346, May.

Borio, C, P Disyatat and M Juselius (2013), “Rethinking potential output: embedding information about the financial cycle”, BIS Working Papers, forthcoming.

Borio, C and M Drehmann (2009), “Assessing the risk of banking crises – revisited”, BIS Quarterly Review, March, pp 29–46.

Borio, C, R McCauley and P McGuire (2011), “Global credit and domestic credit booms”, BIS Quarterly Review, September, pp 43–57.

Claessens, S, M Kose and M Terrones (2011), “Financial cycles: What? How? When?”, IMF Working Paper no WP/11/76.

Drehmann, M, C Borio and K Tsatsaronis (2012), “Characterising the financial cycle: don’t lose sight of the medium term!”, BIS Working Papers, no 380, June.

Gertler, M and N Kiyotaki (2010), “Financial intermediation and credit policy in business cycle analysis”, in B Friedman and M Woodford (eds), Handbook of Monetary Economics, vol 3, Amsterdam: North Holland, pp 547–99.

Hayek, F (1933), Monetary Theory and the Trade Cycle, Clifton, New Jersey: Augustus M Kelley, 1966 reprint.

Kindleberger, C (2000), Manias, Panics and Crashes, Cambridge: Cambridge University Press, 4th edition.

Minsky, H (1982), Can “It” Happen Again? Essays on Instability and Finance, Armonk: M E Sharpe.

Mises, L von (1912), The Theory of Money and Credit, New York: Foundation for Economic Education, 1971 reprint.

Overstone, Lord (1857), Tracts and Other Publications on Metallic and Paper Currency, London: J R McCulloch. Reprinted 1972, Clifton, NJ: Augustus Kelley.

Reinhart, C and K Rogoff (2009), This Time Is Different: Eight Centuries of Financial Folly, Princeton: Princeton University Press.

Taylor, A (2012), “The Great Leveraging”, BIS Working Papers, no 398, December.

Notes:

1. Well-known references go back to at least Lord Overstone (1857). Aspects of the financial cycle, with special reference to credit, were later famously refined by the likes of von Mises (1912) and Hayek (1933). In the post-war period, Kindleberger (2000) and Minsky (1982) highlighted its role in financial instability. By contrast, in the mainstream pre-crisis literature, financial factors were seen as factors that at most enhanced the persistence of the exogenous shocks that buffeted the economy (e.g. Bernanke et al (1999)). Recent, mainly empirical, work stressing again the importance of credit and financial cycles includes Reinhart and Rogoff (2009), Aikman et al (2010), Claessens et al (2011) and Taylor (2012). The prevailing theoretical literature has continued to try to incorporate financial factors within the DSGE mainstream; see Gertler and Kiyotaki (2010) for a recent review; this field is expanding exponentially. For a notable exception, see Adrian and Shin (2010).

2. Borio and Disyatat (2011) elaborate on the third point above (the monetary nature of our economies) in the context of the role (or non-role) of current-account imbalances in the Great Financial Crisis.

Journal of New Finance, January-March 2017

Beyond Mechanical Markets: Asset Price Swings, Risk, and the Role of the State (Book Review)

Adrian Ravier
National University of La Pampa

Abstract

Frydman and Goldberg’s book makes important methodological contributions by questioning the rational expectations hypothesis, drawing in its opening section on the writings of Knight, Keynes and Hayek. In its second part, however, it does not propose a convincing model that would help avoid the formation of new financial bubbles. While accepting to some extent that a government entity has no greater knowledge than economic agents, it ignores the perverse public-sector incentives that James M. Buchanan and the Public Choice school have explored in recent decades. Furthermore, although the repeated reference to Hayek is encouraging, the authors seem to have misunderstood the implications of his most important insight, namely the knowledge problem as it affects public-sector decision making. Paradoxically, this oversight leads Frydman and Goldberg themselves to adopt a pretense-of-knowledge stance, despite explicitly criticising traditional models for this very error.

Keywords

asset prices, price swings, financial bubbles, Friedrich Hayek, John Maynard Keynes, rational expectations, financial markets, imperfect knowledge, irrationality, knowledge problem

JEL Classification

Y30, G01, G02, G12, G14, G18

This book is not an isolated contribution by its authors, but constitutes a component of a broader assault on modern mainstream economics and the macro-finance paradigm it justifies. These frameworks assume rational expectations, but the unanticipated dot-com crash of 2001 and the subprime crisis of 2008 refute the claim that agents have sufficient relevant information to make appropriate decisions. Future crises will inevitably serve to underline this fundamental inadequacy of New Classical Macroeconomics.

Frydman and Goldberg make a number of valid observations which refute prevailing mainstream thinking in economic theory, particularly those often categorized as ‘the knowledge problem’. They are not content, however, to merely criticize: they suggest improvements. The basis of the alternative model they advocate is an elaboration of work previously presented in a 2007 book entitled Imperfect Knowledge Economics.

It should be emphasized at the outset that the authors do not endorse a socialist position of central, anti-market planning.

On the contrary, and in concert with Keynes, they propose an intermediate route which acknowledges a tendency towards efficiency in markets but then assigns a complementary role to the State. Their consideration of what constitutes an effective or appropriate role for government in an imperfect knowledge economy is the most thought-provoking contribution of the book.

To be effective, state action would have to avoid the financial bubbles which have been the hallmark of public policy justified by mainstream economics, sparing society their inevitable and adverse political, economic and social consequences. Unfortunately, there is a disconnect between the section which critiques traditional models and the section elaborating the alternative: the criticisms outlined in the former are apparently forgotten when the alternative model is developed.

The book comprises 12 chapters plus an epilogue. The first six chapters form the first part, which critiques the standard “mechanistic models”, with their obvious “pretense of knowledge” and problematic conceptions of “rationality” and “efficiency”. The second part, also divided into six chapters, deals with the design of the alternative model they label imperfect knowledge economics (IKE). The epilogue addresses the question of what economists can know, criticizing the quest for omniscience and thereby defining the limits of our knowledge and discipline.

In Chapter 1, Frydman and Goldberg set out their agenda:

“The central premise of this book is that the conceptual framework underpinning the debate triggered by the global financial crisis is grossly inadequate for understanding what went wrong with our economies and what should be done to reform them. The reason is simple: contemporary macroeconomic and finance theory attempts to account for risk and swings in asset prices with models that suppose that nonroutine change is irrelevant, as if nothing genuinely new can ever happen.” (2011, p. 165)

One of the most remarkable aspects of this book is the number of positive references to both John Maynard Keynes and Friedrich Hayek. These authors are often interpreted as advancing completely incompatible interpretations of economic life. Yet Frydman and Goldberg (2011) cite Keynes or Keynesianism 156 times, and Hayek about 33 times, in support of their IKE model. The reader may find it paradoxical that these intellectual enemies of the twentieth century are regarded as the common foundation upon which these authors construct their alternative model. But they recognize that both Keynes and Hayek were preoccupied with the disequilibrium conditions which inevitably develop when uncertainty is pervasive. It would benefit the discipline if such insights were more widely appreciated.

Of course, not all Keynesians are orthodox followers of their intellectual mentor. A prime example is the question posed by Axel Leijonhufvud: what have the moderns done with Keynes? He pointed out that the so-called ‘Keynesian’ neoclassical synthesis associated with John Hicks and Paul Samuelson resulted when the implications of uncertainty were ignored and rational expectations were adopted instead as the point of departure.

A post-‘Keynesianism’ which rediscovered Keynes’ appreciation of the critical importance of uncertainty would share a common agenda with Hayek and the Austrian School, as well as with Frydman and Goldberg: namely, the need to develop a more realistic economic model, one that includes space for contingent events, irrationality, and imperfect knowledge. After all, people are not robots that carefully calculate the impact of each of their decisions (Lucas, 2002: 21): they are people who make decisions in a framework of uncertainty, decisions which are frequently recognised ex post as inappropriate means to the end sought.

It may seem a harsh assessment, but the comparison these authors draw between the divergent approaches evident in modern economics and the psychological distinction between the neurotic and the psychotic is apt. In psychology, neurotics are said to construct castles in the air; psychotics actually live in the castles they have constructed in their minds. A great deal of modern macroeconomics is ‘psychotic’ for the same reason: those who believe in the rational expectations hypothesis “live” in their models and do not always know how to distinguish reality from the assumptions on which their models are based.

In other words, the defect that prevented economists wedded to the rational expectations hypothesis – including behavioural economists – from noticing the formation of the bubbles which burst in 2001 and 2007 was a model dependent on deterministic optimization in which markets are believed to act mechanically and in which economic change is completely predictable.

Frydman and Goldberg (2011) address the issues raised by behavioral theories in Chapter 6. While they agree that psychological factors are important, they do not deny that fundamentals remain the anchor around which prices swing.

The IKE model, originally presented in the now-famous 2007 book, is put forward not only for its timely recognition of the fundamental deficiency of models that failed to predict the subsequent crisis, but also as an alternative which offers a more flexible, predictive model of empirical events. This brings us to what I consider the book’s major flaw: although Hayek and Keynes shared a common philosophy regarding the presumption of knowledge, their views diverged when they considered what would constitute effective government action consistent with that philosophy. At this point in their argument, Frydman and Goldberg neglect the position argued by Hayek and concentrate their attention on the position associated with Keynes. Readers familiar with the Austrian school will not find their arguments persuasive. They will therefore be justified in considering Hayek’s views on appropriate state intervention more consistent with the presumption-of-knowledge philosophy Keynes and Hayek shared.

The critical issue which Frydman and Goldberg fail to appreciate is how markets really work. Adoption of a rational expectations framework leads inexorably to the conclusion that markets ‘work’, although it is usually a deep-seated belief that markets are effective that leads mainstream economists to adopt rational expectations as the explanation.

However, we agree with Frydman and Goldberg’s recognition that markets are imperfect, and that Keynes (1936) was correct to develop his ideas within a framework characterized by imbalance and uncertainty. Hayek made a similar point in his famous 1945 paper: markets coordinate on the basis of a price system generated by imperfect markets. His assertion (1980, p. 32) that ‘[m]arket prices are not perfect but the best available’ succinctly summarized his position. He elaborated on this theme using tin as an example (Hayek, 1945).

It does not, however, follow logically that markets do not work if they are imperfect. Or, to be fairer to Frydman and Goldberg, acknowledging that markets do not coordinate perfectly is no justification for the conclusion that a State conversant with the implications of an imperfect knowledge economy can ensure they coordinate better.

This linkage hinges on the fundamental question of whether a government agency can obtain better information than that provided by markets, and whether it has tools at its disposal which can effectively warn market participants when asset prices deviate dangerously relative to their fundamentals (see, for example, Bernanke 2002, p. 5). Only when these preconditions exist would Frydman and Goldberg be justified in demanding state-imposed limits on the purchase of certain assets in the midst of a speculative bubble, or even requiring banks to increase interest rates when the risk that an upward price trend may be reversed rises above a designated threshold.

Personally, I consider the arguments advanced in support of this contention weak. I am not persuaded that IKE enhances the ability of the state to detect the existence of bubbles. From our point of view, it is economic agents who must arbitrate, and the only effective role the state can play is to define the rules of the game and enforce the rule of law which enables markets to operate with maximum possible, albeit still imperfect, efficiency.

Almost without realizing it, these authors fall into the same pretense of knowledge on the part of government agents (the implications of which Hayek pointed out in 1974), while criticizing the rational expectations mainstream economists attribute to market participants. This inconsistency may be due to the emphasis they place on the allegedly divergent incentive structures faced by private and public sector decision makers.

The explanation offered by Frydman and Goldberg for their prescription runs as follows:

The need for state intervention in key asset markets arises not because policy officials have superior knowledge about asset values, but because profit-seeking market participants do not internalize the huge social costs associated with excessive upswings and downswings in these markets. (2011, p. 3318)

In fact, Chapter 10 shows how persistent profit-seeking on the part of economic agents creates a tendency for speculators to buy assets at prices inconsistent with their underlying value, despite an awareness that this divergence is occurring. Such bubbles in turn justify an entity outside the market – a government agency – able to identify the associated risks and social costs and effectively intervene to prevent catastrophic failure and enhance ‘market efficiency’.

This returns us once again to the fundamental question: can the government entity identify the existence of deviations or bubbles which economic agents operating in the market are unable to observe without government help? Frydman and Goldberg rely heavily on Bernanke’s acknowledgement that, in a presentation made to the Federal Reserve in December 1996, Robert Shiller of Yale University used dividend ratios to argue that the stock market was already overvalued.

The case is interesting, but at the same time controversial if we recall that the Nasdaq bubble continued to inflate for at least another four years, until 2000. The underlying problem may be illustrated by means of a thought experiment: suppose that economic agents have rational expectations and are warned about the existence of this stock-market bubble. Assume they also have sufficiently adequate information to construct a model which indicates that the bubble will continue to inflate for another four years. Would they continue to buy assets in 1996? It would appear rational for them to do so, given that their objective is to realize profits. However, in order to capture those benefits, it is necessary to resell those same assets before the crash occurs. This raises a different issue: perfect rationality implies rational expectations which are homogeneous across all market analysts who heeded the warning. The question that therefore arises is who will be able to sell their assets in the year before the crash, if all were aware that the bubble would burst at a specified point in time. In this case, it is likely that no one would buy the assets after the warning issued in 1996, and as a consequence the bubble would not develop. In reality, information is inherently imperfect, that imperfection varies significantly across the universe of interested parties, and it is not interpreted in the same way by each participant. Nonetheless, this thought experiment admits many variants, depending on the methodological limitations imposed at the outset.

Turning once again to Frydman and Goldberg, the fact is that they recognize that the government entity does not have better knowledge than the market when it comes to identifying deviations or the existence of bubbles.

No one can know precisely when an asset price swing becomes excessive, and as we argue shortly, policymakers will need to consider more than just departures from historical benchmark values. But the overinvestment in technology and communication companies and the sharp and prolonged downturn in stock prices that began in 2000 show that the upswing in stocks had indeed reached excessive levels. The market did eventually self-correct, but it did so too late. This boom-bust dynamic led to an economic recession and a prolonged period of subpar rates of private investment and employment. Only the state and collective action can minimize the social costs of such delayed corrections to excessive asset price swings (2011, p. 3329).

The explanation seems neither clear nor convincing. The authors may accept that the government entity will not necessarily have better knowledge than agents; instead they emphasize the different incentives faced by public and private actors. Given their interest in realizing profits, private actors will pursue those gains to the point where the psychological attributes of herd mentality become evident, without regard to the unreasonable social costs that loom ahead. A government entity aware of these social costs, and lacking the ‘perverse’ incentives of the private sector, might conceivably intervene to avoid the worst of the bubble.

However, it seems even more likely that the government entity will be unable to assess the potential social costs associated with a future event of some unknowable probability and, furthermore, that government intervention unwittingly goes a long way towards magnifying rather than reducing those costs. Indeed, absent the interventions that fed them, both the Nasdaq and the housing market would arguably have experienced a timely correction, avoiding the bubbles that did in fact occur. Lax credit policy was prominent among the interventions which inflated these respective bubbles and exaggerated the social costs when they burst.

But the Hayekian argument about the inadequacy of knowledge available to public-sector actors, which these authors recognize, cannot be swept under the carpet by switching the argument to the divergent incentives which public and private decision makers face. The work of James Buchanan and the Public Choice school of economics demonstrates that ‘perverse’ incentives are not restricted to the private sector, and public-sector decision makers may also lack the motivation to avoid these costs. Unfortunately, Frydman and Goldberg make no reference to the public choice literature.

Modern economists frequently try to formulate a variety of rational expectations alternatives among which agents can choose. To account adequately for all the possibilities, they would have to develop as many models of heterogeneous rational expectations as there are people participating in the market, which is clearly unhelpful. Each person shapes their expectations according to their own experience, that is, their past, their present and how they interpret the future, which can hardly be synthesized in any formal way. It was Ludwig Lachmann, an author whose contribution is also ignored by Frydman and Goldberg, who first connected Hayek’s theory of capital and economic cycles without recourse to rational expectations. Lachmann identified subjective expectations as a potential cause of financial bubbles when they are exaggerated by counterproductive public policy. Subjective expectations may well be an old concept, but it is, as I see it, a framework compatible with IKE that avoids the need to posit some large number of heterogeneous “rational” expectations which are impossible to specify.

Nor can we agree with Frydman and Goldberg when they apportion blame for the formation of the bubbles. They fail to grasp the fundamental role that the US government and the Federal Reserve played in feeding these bubbles by unwittingly creating the incentives that fueled their development. In the absence of such policies, as John Taylor puts it, the housing bubble would have been just a knoll (Lewin and Ravier, 2012).

Yet in the epilogue the authors acknowledge that lax monetary policy, that is, low interest rates, may fuel a bubble (p. 579), but they do not relate this idea to what happened before 2007. Rather, they insist, following Keynes, that bubbles are phenomena driven primarily by psychological factors:

Bubbles are thought to arise because, instead of trading rationally on the basis of movements in fundamental factors, many market participants succumb to waves of market psychology, indulge in irrationalities of various kinds, or engage in technical trading based on charts of asset-price movements. According to bubble models, markets behave like casinos, often allocating society’s capital haphazardly. (p. 281)

In contrast to behavioral economics (e.g., Akerlof, 2001), these authors also recognize a certain value which underpins asset prices (Chapter 6). Although psychological factors can divert asset prices from their fundamentals for a while, in the end these values prevail and prices fluctuate close to them. Following Keynes, “[w]e should not conclude from this that everything depends on waves of irrational psychology” (Keynes, 1936, p. 162).

If the fundamentals ultimately point in the opposite direction, “waves of irrational psychology” cannot keep asset prices on a divergent trajectory indefinitely. Indeed, the authors acknowledge that the fundamental value of assets will, over time, discipline psychological sentiment and perceived risk (see especially Chapter 7 for examples).

This leads Frydman and Goldberg to assert that in normal times the role of government should be minimal, merely supervisory. But if the authorities detect a bubble, then they must act.

In our proposed scheme, so long as asset-price fluctuations remain within reasonable bounds, the state’s involvement is limited to setting the rules of the game: ensuring transparency and adequate competition, and eliminating other market distortions (such as those that the recent crisis exposed). But officials should also devise guidance ranges for asset prices. In doing so, they should not rely solely on historically based valuations, which, because they ignore nonroutine change, are unreliable as a guide to likely thresholds of excess during asset-price swings. Once prices move beyond such a nonroutine guidance range, Imperfect Knowledge Economics suggests that policy officials should cautiously and gradually implement dampening measures, as well as requiring banks to prepare for the eventual reversal by increasing their loan-loss provisions. (p. 377)

Again, the same questions arise: How does the government entity determine that prices have become disengaged from an ‘appropriate’ range? Who sets these limits, and under what criteria? If we grant, as above, that such knowledge does not surpass that of the market, how can officials expect to restrain the market better than the market itself through ad hoc intervention?

If we consider once again the subprime crisis, it was an excess of regulation of the banks that generated the crisis, not a lack of it (Lewin and Ravier, 2012). There is a wealth of literature explaining the credit origin of this bubble, and numerous authors have shown how this credit was channeled especially into the real estate market. Paul Krugman even broached the idea of a housing bubble to replace the Nasdaq bubble as early as 2002, suggesting that the most recent bubble may well have been an intentional policy choice. Would it not be appropriate to question the incentives of those responsible? Inevitably, additional post-crisis questions of the moral-hazard type would follow.

In conclusion, Frydman and Goldberg’s book lays out important methodological contributions in its opening section, questioning the rational expectations hypothesis on the basis of the writings of Knight, Keynes and Hayek. But in its second part it does not propose a convincing framework for avoiding the formation of new financial bubbles. While accepting to some extent that the government entity has no greater knowledge than economic agents, it ignores the perverse public-sector incentives that James M. Buchanan and the Public Choice school have explored in recent decades. Furthermore, although the repeated references to Hayek are encouraging, the authors seem to have misunderstood the implications of his most important insight, namely the knowledge problem as it affects public-sector decision making. Paradoxically, this oversight leads Frydman and Goldberg themselves to adopt a pretence-of-knowledge stance, despite explicitly criticising traditional models for this very error.

Bibliography:

Akerlof, George A. (2001), “Behavioral Macroeconomics and Macroeconomic Behavior”, Nobel Address, Stockholm: Nobel Foundation.

Bernanke, Ben S. (2002), “Asset-Price ‘Bubbles’ and Monetary Policy,” speech at the New York Chapter of the National Association for Business Economics, New York, October 15.

Frydman, Roman, and Michael D. Goldberg (2007), Imperfect Knowledge Economics: Exchange Rates and Risk, Princeton, NJ: Princeton University Press.

Hayek, Friedrich A. (1945), “The Use of Knowledge in Society”, American Economic Review 35: 519-30.

Hayek, Friedrich A. (1948), Individualism and Economic Order, Chicago: University of Chicago Press.

Hayek, Friedrich A. (1978), “The Pretence of Knowledge”, 1974 Nobel Address in New Studies in Philosophy, Politics, Economics and History of Ideas, Chicago: University of Chicago Press.

Hayek, Friedrich A. (1984), 1980s Unemployment and the Unions, 2nd edition, London: The Institute of Economic Affairs.

Keynes, John Maynard (1936), The General Theory of Employment, Interest and Money, Harcourt, Brace and World.

Lewin, Peter and Adrián Ravier (2012), “The Subprime Crisis”, The Quarterly Journal of Austrian Economics, Vol. 15, No. 1, Spring 2012, pp. 45-74.

Lucas, Robert E. Jr. (2002), Lectures on Economic Growth, Cambridge, MA: Harvard University Press.


Journal of New Finance

For correspondence: Escuela de Negocios, Calle Manuel F. Ayau (6 Calle final), zona 10, Edificio Escuela de Negocios, Guatemala, Guatemala 01010

https://frc.ufm.edu