On June 10, 2000, Queen Elizabeth II opened the high-tech Millennium Bridge, which spans the River Thames from the Tate Modern to St. Paul’s Cathedral. Thousands of people lined up to walk across the new structure, which consisted of a narrow aluminum footway flanked by steel balustrades projecting out at obtuse angles. Within minutes of the official opening, the footway started to tilt and sway alarmingly, forcing some of the pedestrians to cling to the side rails. Some reported feeling seasick. The authorities shut the bridge, claiming that too many people were using it. The next day, it reopened with strict limits on the number of pedestrians, but it began to shake again. Two days after it had opened, with the source of the wobble still a mystery, the bridge was closed for an indefinite period.
Some commentators suspected the bridge’s foundations; others blamed unusual wind patterns. The real problem was that the designers of the bridge, who included the architect Sir Norman Foster and the engineering firm Ove Arup, had not taken into account how the footway would react to all the pedestrians walking on it. When a person walks, lifting and dropping each foot in turn, he or she produces a slight sideways force. If hundreds of people are walking in a confined space, and some happen to walk in step, they can generate enough lateral force to move a footbridge—just a little. Once the footway starts swaying, however subtly, more and more pedestrians adjust their gait to get comfortable, stepping to and fro in synch. As a positive-feedback loop develops between the bridge’s swing and the pedestrians’ stride, the sideways forces can increase dramatically and the bridge can lurch violently. The investigating engineers termed this process “synchronous lateral excitation,” and came up with a mathematical formula to describe it.
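Arup’s formula itself isn’t reproduced here, but the feedback mechanism is simple enough to sketch in a few lines of code. In the toy simulation below (my own illustration, not Arup’s model, with every constant invented for the purpose) the pedestrians push sideways in proportion to the deck’s velocity, acting as a kind of “negative damping”: below a critical crowd size the wobble dies away, and above it the sway feeds on itself.

```python
# A toy model of synchronous lateral excitation -- not Arup's formula.
# Walkers push sideways in proportion to deck velocity ("negative
# damping"); all constants below are invented for illustration.

def peak_sway(n_pedestrians, seconds=30.0, dt=0.001):
    mass, damping, stiffness = 1.1e5, 1.1e4, 4.7e6  # assumed deck properties
    gain = 300.0        # assumed sideways force per person per (m/s) of sway
    x, v, peak = 0.001, 0.0, 0.0    # start from a barely perceptible wobble
    for _ in range(int(seconds / dt)):
        force = gain * n_pedestrians * v           # walkers follow the sway
        a = (force - damping * v - stiffness * x) / mass
        v += a * dt                                # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Below roughly damping/gain (about 37 walkers here) the wobble dies
# away; above it, the feedback loop makes the sway grow on its own.
for n in (20, 50, 200):
    print(f"{n:>3} pedestrians -> peak sway {peak_sway(n):.4f} m")
```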
What does all this have to do with financial markets? Quite a lot, as the Princeton economist Hyun Song Shin pointed out in a prescient 2005 paper. Most of the time, financial markets are pretty calm, trading is orderly, and participants can buy and sell in large quantities. Whenever a crisis hits, however, the biggest players—banks, investment banks, hedge funds—rush to reduce their exposure, buyers disappear, and liquidity dries up. Where previously there were diverse views, now there is unanimity: everybody’s moving in lockstep. “The pedestrians on the bridge are like banks adjusting their stance and the movements of the bridge itself are like price changes,” Shin wrote. And the process is self-reinforcing: once liquidity falls below a certain threshold, “all the elements that formed a virtuous circle to promote stability now will conspire to undermine it.” The financial markets can become highly unstable.
This is essentially what happened in the lead-up to the Great Crunch. The trigger was, of course, the market for subprime-mortgage bonds—bonds backed by the monthly payments from pools of loans that had been made to poor and middle-income home buyers. In August, 2007, with house prices falling and mortgage delinquencies rising, the market for subprime securities froze. By itself, this shouldn’t have caused too many problems: the entire stock of outstanding subprime mortgages was about a trillion dollars, a figure dwarfed by nearly twelve trillion dollars in total outstanding mortgages, not to mention the eighteen-trillion-dollar value of the stock market. But then banks, which couldn’t estimate how much exposure other firms had to losses, started to pull back credit lines and hoard their capital—and they did so en masse, confirming Shin’s point about the market imposing uniformity. An immediate collapse was averted when the European Central Bank and the Fed announced that they would pump more money into the financial system. Still, the global economic crisis didn’t ease up until early this year, and by then governments had committed an estimated nine trillion dollars to propping up the system.
A number of explanations have been proposed for the great boom and bust, most of which focus on greed, overconfidence, and downright stupidity on the part of mortgage lenders, investment bankers, and Wall Street C.E.O.s. According to a common narrative, we have lived through a textbook instance of the madness of crowds. If this were all there was to it, we could rest more comfortably: greed can be controlled, admittedly with some difficulty; overconfidence gets punctured; even stupid people can be educated. Unfortunately, the real causes of the crisis are much scarier and less amenable to reform: they have to do with the inner logic of an economy like ours. The root problem is what might be termed “rational irrationality”—behavior that, on the individual level, is perfectly reasonable but that, when aggregated in the marketplace, produces calamity.
Consider the freeze that started in August of 2007. Each bank was adopting a prudent course by turning away questionable borrowers and holding on to its capital. But the results were mutually ruinous: once credit stopped flowing, many financial firms—the banks included—were forced to sell off assets in order to raise cash. This round of selling caused stocks, bonds, and other assets to decline in value, which generated a new round of losses.
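The loop is easy to make concrete. The sketch below is a back-of-the-envelope illustration in the spirit of Shin’s argument, not his actual model, and every figure in it is assumed: banks that target a fixed ratio of assets to equity respond to a loss by selling, and the selling itself pushes the price down, handing them a fresh loss.

```python
# A toy fire-sale spiral; every figure here is assumed for illustration.
price = 100.0            # market price of the asset
units = 1_000.0          # the banking sector's holdings
debt = 90_000.0          # equity starts at 10,000, i.e. 10x leverage
target_leverage = 10.0   # assets/equity ratio the banks try to maintain
impact = 0.01            # assumed: price falls 1 cent per unit dumped

price *= 0.95            # the trigger: a 5% shock to asset values
for round_no in range(1, 6):
    assets = units * price
    equity = assets - debt
    if equity <= 0:
        print(f"round {round_no}: equity wiped out")
        break
    # Sell just enough, at the current price, to restore target leverage,
    # paying down debt with the proceeds -- prudent for each bank alone.
    excess = max(assets - target_leverage * equity, 0.0)
    sold = excess / price
    units -= sold
    debt -= excess
    price -= impact * sold          # the sales depress the price...
    print(f"round {round_no}: sold {sold:,.0f} units, price {price:.2f}")
    # ...which produces a new round of mark-to-market losses.
```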
A similar feedback loop was at work during the boom stage of the cycle, when many mortgage companies extended home loans to low- and middle-income applicants who couldn’t afford to repay them. In hindsight, that looks like reckless lending. It didn’t at the time. In most cases, lenders had no intention of holding on to the mortgages they issued. After taking a generous fee for originating the loans, they planned to sell them to Wall Street banks, such as Merrill Lynch and Goldman Sachs, which were in the business of pooling mortgages and using the monthly payments they generated to issue mortgage bonds. When a borrower whose home loan has been “securitized” in this way defaults on his payments, it is the buyer of the mortgage bond who suffers a loss, not the issuer of the mortgage.
This was the climate that produced business successes like New Century Financial Corporation, of Orange County, which originated $51.6 billion in subprime mortgages in 2006, making it the second-largest subprime lender in the United States, and which filed for Chapter 11 on April 2, 2007. More than forty per cent of the loans it issued were stated-income loans, also known as liar loans, which didn’t require applicants to provide documentation of their supposed earnings. Michael J. Missal, a bankruptcy-court examiner who carried out a detailed inquiry into New Century’s business, quoted a chief credit officer who said that the company had “no standard for loan quality.” Some employees questioned the company’s lax approach to lending, to no effect. Senior management’s primary concern was that the loans it originated could be sold to Wall Street. As long as investors were eager to buy subprime securities, with few questions asked, expanding credit recklessly was a highly rewarding strategy.
When the subprime-mortgage market faltered, the business model of giving loans to all comers no longer made sense. Nobody wanted mortgage-backed securities any longer; nobody wanted to buy the underlying mortgages. Some of the Wall Street firms that had financed New Century’s operations, such as Goldman Sachs and Citigroup, made margin calls. Federal investigators began looking into New Century’s accounts, and the company rapidly became one of the first major casualties of the subprime crisis. Then again, New Century’s executives were hardly the only ones who failed to predict the subprime crash; Alan Greenspan and Ben Bernanke didn’t, either. Sharp-dealing companies like New Century may have been reprehensible. But they weren’t simply irrational.
The same logic applies to the decisions made by Wall Street C.E.O.s like Citigroup’s Charles Prince and Merrill Lynch’s Stanley O’Neal. They’ve been roundly denounced for leading their companies into the mortgage business, where they suffered heavy losses. In the midst of a credit bubble, though, somebody running a big financial institution seldom has the option of sitting it out. What boosts a firm’s stock price, and the boss’s standing, is a rapid expansion in revenues and market share. Privately, he may harbor reservations about a particular business line, such as subprime securitization. But, once his peers have entered the field, and are making money, his firm has little choice except to join them. C.E.O.s certainly don’t have much personal incentive to exercise caution. Most of them receive compensation packages loaded with stock options, which reward them for delivering extraordinary growth rather than for maintaining product quality and protecting their firm’s reputation.
Prince’s experience at Citigroup provides an illuminating case study. A corporate lawyer by profession, he had risen to prominence as the legal adviser to Citigroup’s creator, Sandy Weill. After Weill got caught up in Eliot Spitzer’s investigation of Wall Street analysts and resigned, in 2003, Prince took over as C.E.O. He was under pressure to boost Citigroup’s investment-banking division, which was widely perceived to be falling behind its competitors. At the start of 2005, Citigroup’s board reportedly asked Prince and his colleagues to develop a growth strategy for the bank’s bond business. Robert Rubin, the former Treasury Secretary, who served as the chairman of the board’s executive committee, advised Prince that the company could take on more risk. “We could afford to seek more opportunities through intelligent risk-taking,” Rubin later told the Times. “The key word is ‘intelligent.’ ”
Prince could have rejected Rubin’s advice and told the board that he didn’t think it was a good idea for Citigroup to take on more risk, however intelligently it was done. But Citigroup’s stock price hadn’t moved much in five years, and maintaining a cautious approach would have involved forgoing the kind of growth that some of the firm’s rivals—UBS and Bank of America—were already enjoying. To somebody in Prince’s position, the risky choice would have been standing aloof from the subprime craze, not joining the crowd.
In July, 2007, he intimated as much, in an interview with the Financial Times. At that stage, three months after New Century’s collapse, the problems in the subprime market could no longer be ignored. But the private-equity business, in which Citigroup had become a major presence, was still thriving, and Blackstone, one of the biggest buyout firms, had just issued stock on the New York Stock Exchange. Prince conceded that a collapse in the credit markets could leave Citigroup and other banks exposed to the prospect of large losses. Despite the danger, he insisted that he had no intention of pulling back. “When the music stops, in terms of liquidity, things will be complicated,” Prince said. “But as long as the music is playing, you’ve got to get up and dance.”
The reference to the game of musical chairs was a remarkably candid description of the situation in which executives like Prince found themselves, and of the logic of rational irrationality. Whether Prince knew it or not, he was channelling John Maynard Keynes, who, in “The General Theory of Employment, Interest, and Money,” pointed to the inconvenient fact that “there is no such thing as liquidity of investment for the community as a whole.” Whatever the asset class may be—stocks, bonds, real estate, or commodities—the market will seize up if everybody tries to sell at the same time. Financiers were accordingly obliged to keep a close eye on the “mass psychology of the market,” which could change at any moment. Keynes wrote, “It is, so to speak, a game of Snap, of Old Maid, of Musical Chairs—a pastime in which he is victor who says Snap neither too soon nor too late, who passes the Old Maid to his neighbour before the game is over, who secures a chair for himself when the music stops.”
Keynes’s jaundiced view of finance reflected his own experience as an investor and as a director of an insurance company. Every morning, in his rooms at King’s College, Cambridge, he spent about half an hour in bed studying the financial pages and various brokerage reports. He compared investing to newspaper competitions in which “the competitors have to pick out the six prettiest faces from a hundred photographs, the prize being awarded to the competitor whose choice most nearly corresponds to the average preferences of the competitors as a whole; so that each competitor has to pick, not those faces which he himself finds prettiest, but those which he thinks likeliest to catch the fancy of the other competitors, all of whom are looking at the problem from the same point of view.” If you want to win such a contest, you’d better try to select the outcome on which others will converge, whatever your personal opinion might be. “It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest,” Keynes explained. “We have reached the third degree, where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees.”
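Keynes’s higher “degrees” are nowadays often formalized as the p-beauty-contest game, a classroom staple rather than anything Keynes himself wrote down: everyone picks a number between zero and a hundred, and the winner is whoever comes closest to two-thirds of the average pick. Each additional degree of anticipation shrinks the rational guess, as this sketch shows.

```python
# The p-beauty-contest game: a standard formalization of Keynes's
# "degrees" of anticipation, used here purely as an illustration.
p = 2 / 3      # the winning guess is the one closest to p * average
guess = 50.0   # a naive player who ignores the others picks mid-range

for degree in range(1, 6):
    guess *= p   # each degree best-responds to the degree below it
    print(f"degree {degree}: guess {guess:.1f}")

# Iterate far enough and the guess goes to zero -- but real contests
# are won by stopping exactly as many steps ahead as the crowd is.
```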
The beauty-contest analogy helps explain why real-estate developers, condo flippers, and financial investors continued to pour money into the real-estate market and the mortgage-securities market, even though many of them may have believed that home prices had risen too far. Alan Greenspan and other free-market economists failed to recognize that, during a speculative mania, attempting to “surf” the bubble can be a perfectly rational strategy. According to orthodox economics, professional speculators play a stabilizing role in the financial markets: whenever prices rise above fundamentals, they step in and sell; whenever prices fall too far, they step in and buy. But history has demonstrated that much of the so-called “smart money” aims at getting in ahead of the crowd rather than betting against it, which only adds to the mispricing.
Markus Brunnermeier, an economist at Princeton, and Stefan Nagel, an economist at Stanford, obtained data from S.E.C. filings for fifty-three hedge-fund managers during the dot-com bubble. In the third quarter of 1999, they discovered, the funds raised their portfolio weightings in technology stocks from sixteen to twenty-nine per cent. By March of 2000, when the Nasdaq peaked, the funds had invested roughly a third of their assets in tech. “From an efficient-markets perspective, these results are puzzling,” Brunnermeier and Nagel noted. “Why would some of the most sophisticated investors in the market hold these overpriced technology stocks?” We know that many such investors had no illusions about the prospects of the financial products they traded. But their strategy was to capture the upside of the bubble while avoiding most of the downside—and, with timely selling, many of them succeeded.
Because financial markets consist of individuals who react to what others are doing, the theories of free-market economics are often less illuminating than the Prisoner’s Dilemma, an analysis of strategic behavior that game theorists associated with the RAND Corporation developed during the early nineteen-fifties. Much of the work done at RAND was initially applied to the logic of nuclear warfare, but it has proved extremely useful in understanding another explosion-prone arena: Wall Street.
Imagine that you and another armed man have been arrested and charged with jointly carrying out a robbery. The two of you are being held and questioned separately, with no means of communicating. You know that, if you both confess, each of you will get ten years in jail, whereas if you both deny the crime you will be charged only with the lesser offense of gun possession, which carries a sentence of just three years in jail. The best scenario for you is if you confess and your partner doesn’t: you’ll be rewarded for your betrayal by being released, and he’ll get a sentence of fifteen years. The worst scenario, accordingly, is if you keep quiet and he confesses.
What should you do? The optimal joint result would require the two of you to keep quiet, so that you both got a light sentence, amounting to a combined six years of jail time. Any other strategy means more collective jail time. But you know that you’re risking the maximum penalty if you keep quiet, because your partner could seize a chance for freedom and betray you. And you know that your partner is bound to be making the same calculation. Hence, the rational strategy, for both of you, is to confess, and serve ten years in jail. In the language of game theory, confessing is a “dominant strategy,” even though it leads to a disastrous outcome.
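The arithmetic is worth checking. A few lines of code, using the jail terms exactly as stated above, confirm that confessing is the better reply to either move your partner can make.

```python
# Payoffs from the story above: years in jail for you, given your
# move and your partner's, with the numbers as stated in the text.
years = {
    ("confess", "confess"): 10,
    ("confess", "deny"):     0,   # you betray him and walk free
    ("deny",    "confess"): 15,   # you keep quiet and he betrays you
    ("deny",    "deny"):     3,   # both held only for gun possession
}

for partner in ("confess", "deny"):
    best = min(("confess", "deny"), key=lambda me: years[(me, partner)])
    print(f"if your partner plays '{partner}', your best reply is '{best}'")

# "confess" wins either way -- the dominant strategy -- even though
# mutual denial minimizes the combined sentence (6 years versus 20).
```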
In a situation like this, what I do affects your welfare; what you do affects mine. The same applies in business. When General Motors cuts its prices or offers interest-free loans, Ford and Chrysler come under pressure to match G.M.’s deals, even if their finances are already stretched. If Merrill Lynch sets up a hedge fund to invest in collateralized debt obligations, or some other shiny new kind of security, Morgan Stanley will feel obliged to launch a similar fund to keep its wealthy clients from defecting. A hedge fund that eschews an overinflated sector can lag behind its rivals, and lose its major clients. So you can go bust by avoiding a bubble. As Charles Prince and others discovered, there’s no good way out of this dilemma. Attempts to act responsibly and achieve a coöperative solution cannot be sustained, because they leave you vulnerable to exploitation by others. If Citigroup had sat out the credit boom while its rivals made huge profits, Prince would probably have been out of a job earlier. The same goes for individual traders at Wall Street firms. If a trader has one bad quarter, perhaps because he refused to participate in a bubble, the results can be career-threatening.
As the credit bubble continued, even the credit-rating agencies, which exist to provide investors with objective advice, got caught up in the same sort of competitive behavior that had persuaded banks like Citigroup, UBS, and Merrill Lynch to plunge into the subprime sector. Under the rating industry’s business model, the issuers of securities pay the agencies to rate them, which makes the agencies dependent on Wall Street for their revenues. Instead of adopting an arm’s-length approach and establishing a uniform set of standards for issuers of mortgage securities, the big three rating agencies—Fitch, Moody’s, and Standard & Poor’s—worked closely with Wall Street banks, and ended up giving AAA ratings to financial junk.
Before Goldman Sachs, say, issued a hundred million dollars of residential-mortgage bonds, it would pay an agency like Moody’s at least thirty or forty thousand dollars to issue a credit rating on the deal. As the boom continued, investment bankers played the agencies off one another, shopping around for a favorable rating. If one agency didn’t think a bond deserved an investment-grade rating, the business would go to a more generously disposed rival. To stay in business, and certainly to maintain market share, credit analysts had to accentuate the positive.
The Prisoner’s Dilemma is the obverse of Adam Smith’s theory of the invisible hand, in which the free market coördinates the behavior of self-seeking individuals to the benefit of all. Each businessman “intends only his own gain,” Smith wrote in “The Wealth of Nations,” “and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention.” But in a market environment the individual pursuit of self-interest, however rational, can give way to collective disaster. The invisible hand becomes a fist.
In February of 2002, the Millennium Bridge was reopened. The engineers at Ove Arup had figured out how the collective behavior of pedestrians caused the bridge to sway, and installed dozens of shock absorbers—under the bridge, around its supporting piers, and at one end of it. The embarrassing debacle of its début hasn’t entirely faded from memory, but there have been no further problems.
It won’t be as easy to deal with the bouts of instability to which our financial system is prone. But the first step is simply to recognize that they aren’t aberrations; they are the inevitable result of individuals going about their normal business in a relatively unfettered marketplace. Our system of oversight fails to account for how sensible individual choices can add up to collective disaster. Rather than blaming the pedestrians for swarming the footway, governments need to reinforce the structure by installing the financial equivalent of shock absorbers. “Our system failed in basic fundamental ways,” Treasury Secretary Timothy Geithner acknowledged earlier this year. “To address this will require comprehensive reform. Not modest repairs at the margin, but new rules of the game.”
Despite this radical statement of intent, serious doubts remain over whether the Obama Administration’s proposed regulatory overhaul goes far enough in dealing with the problem of rational irrationality. Much of what the Administration has proposed is welcome. It would force issuers of mortgage securities to keep some of the bonds on their own books, and it would impose new capital requirements on any financial firm “whose combination of size, leverage, and interconnectedness could pose a threat to financial stability if it failed.” None of these terms have been defined explicitly, however, and it isn’t clear what the new rules will mean for big hedge funds, private-equity firms, and the finance arms of industrial companies. If there is any wiggle room, excessive risk-taking and other damaging behavior will simply migrate to the unregulated sector.
A proposed central clearinghouse for derivatives transactions is another good idea that perhaps doesn’t go far enough. The clearinghouse plan applies only to “standardized” derivatives. Firms like JPMorgan Chase and Morgan Stanley would still be allowed to trade “customized” derivatives with limited public disclosure and no central clearing mechanism. Given the creativity of the Wall Street financial engineers, it wouldn’t take them long to exploit this loophole.
The Administration has also proposed setting up a Consumer Financial Protection Agency, to guard individuals against predatory behavior on the part of banks and other financial firms, but its remit won’t extend to vetting complex securities—like those notorious collateralized debt obligations—that Wall Street firms trade among themselves. Limiting the development of those securities would stifle innovation, the financial industry contends. But that’s precisely the point. “The goal is not to have the most advanced financial system, but a financial system that is reasonably advanced but robust,” Viral V. Acharya and Matthew Richardson, two economists at N.Y.U.’s Stern School of Business, wrote in a recent paper. “That’s no different from what we seek in other areas of human activity. We don’t use the most advanced aircraft to move millions of people around the world. We use reasonably advanced aircrafts whose designs have proved to be reliable.”
During the Depression, the Glass-Steagall Act was passed in order to separate the essential utility aspects of the financial system—customer deposits, check clearing, and other payment systems—from the casino aspects, such as investment banking and proprietary trading. Its key provisions were repealed in 1999. The Administration has shown no interest in reinstating them, which means that “too big to fail” financial supermarkets, like Bank of America and JPMorgan Chase, will continue to dominate the financial system. And, since the federal government has now demonstrated that it will do whatever is necessary to prevent the collapse of the largest financial firms, their top executives will have an even greater incentive to enter perilous lines of business. If things turn out well, they will receive big bonuses and the value of their stock options will increase. If things go wrong, the taxpayer will be left to pick up some of the tab.
Executive pay is yet another issue that remains to be tackled in any meaningful way. Even some top bankers have conceded that current Wall Street remuneration schemes lead to excessive risk-taking. Lloyd Blankfein, the chief executive of Goldman Sachs, has suggested that traders and senior executives should receive some of their compensation in deferred payments. A few firms, including Morgan Stanley and UBS, have already introduced “clawback” schemes that allow the firm to rescind some or all of a trader’s bonus if his investments turn sour. Without direct government involvement, however, the effort to reform Wall Street compensation won’t survive the next market upturn. It’s another version of the Prisoner’s Dilemma: although Wall Street as a whole has an interest in controlling rampant short-termism and irresponsible risk-taking, individual firms have an incentive to hire away star traders from rivals that have introduced pay limits, so the reforms are bound to break down. In this case, as in many others, the only way to reach a socially desirable outcome is to enforce compliance, and the only body that can do that is the government.
This doesn’t mean that government regulators would be setting the pay of individual traders and executives. It does mean that the Fed, as the agency primarily responsible for insuring financial stability, should issue a set of uniform rules for Wall Street compensation. Firms might be obliged to hold some, or all, of their traders’ bonuses in escrow accounts for a period of some years, or to give executive bonuses in the form of restricted stock that doesn’t vest for five or ten years. (This was similar to one of Blankfein’s suggestions.) In one encouraging sign, officials from the Fed and the Treasury are reportedly working on the details of Wall Street pay guidelines that would explicitly aim at preventing the reëmergence of rationally irrational behavior. “You don’t want people being paid for taking too much risk, and you want to make sure that their compensation is tied to long-term performance,” Geithner told the Times recently.
The Great Crunch wasn’t just an indictment of Wall Street; it was a failure of economic analysis. From the late nineteen-nineties onward, the Fed stubbornly refused to recognize that speculative bubbles encourage the spread of rationally irrational behavior; convinced that the market was a self-regulating mechanism, it turned away from its traditional role, which is—in the words of a former Fed chairman, William McChesney Martin—“to take away the punch bowl just when the party gets going.” A formal renunciation of the Greenspan doctrine is overdue. The Fed has a congressional mandate to insure maximum employment and stable prices. Morgan Stanley’s Stephen Roach has suggested that Congress alter that mandate to include the preservation of financial stability. The addition of a third mandate would mesh with the Obama Administration’s proposal to make the Fed the primary monitor of systemic risk, and it would also force the central bank’s governors and staff to think more critically about the financial system and its role in the broader economy.
It’s a pity that economists outside the Fed can’t be legally obliged to acknowledge their errors. During the past few decades, much economic research has “tended to be motivated by the internal logic, intellectual sunk capital and esthetic puzzles of established research programmes rather than by a powerful desire to understand how the economy works—let alone how the economy works during times of stress and financial instability,” notes Willem Buiter, a professor at the London School of Economics who has also served on the Bank of England’s Monetary Policy Committee. “So the economics profession was caught unprepared when the crisis struck.”
In creating this state of unreadiness, the role of free-market ideology cannot be ignored. Many leading economists still have a vision of the invisible hand satisfying wants, equating costs with benefits, and otherwise harmonizing the interests of the many. In a column that appeared in the Times in May, the Harvard economist Greg Mankiw, a former chairman of the White House Council of Economic Advisers and the author of two leading textbooks, conceded that teachers of freshman economics would now have to mention some issues that were previously relegated to more advanced courses, such as the role of financial institutions, the dangers of leverage, and the perils of economic forecasting. And yet “despite the enormity of recent events, the principles of economics are largely unchanged,” Mankiw stated. “Students still need to learn about the gains from trade, supply and demand, the efficiency properties of market outcomes, and so on. These topics will remain the bread-and-butter of introductory courses.”
Note the phrase “the efficiency properties of market outcomes.” What does that refer to? Builders constructing homes for which there is no demand? Mortgage lenders foisting costly subprime loans on the cash-strapped elderly? Wall Street banks leveraging their equity capital forty to one, so that a fall of barely 2.5 per cent in asset values would wipe them out? The global economy entering its steepest downturn since the nineteen-thirties? Of course not. Mankiw was referring to the textbook economics that he and others have been teaching for decades: the economics of Adam Smith and Milton Friedman. In the world of such utopian economics, the latest crisis of capitalism is always a blip.
As memories of September, 2008, fade, many will say that the Great Crunch wasn’t so bad, after all, and skip over the vast government intervention that prevented a much, much worse outcome. Incentives for excessive risk-taking will revive, and so will the lobbying power of banks and other financial firms. “The window of opportunity for reform will not be open for long,” Hyun Song Shin wrote recently. Before the political will for reform dissipates, it is essential to reckon with the financial system’s fundamental design flaws. The next time the structure starts to lurch and sway, it could all fall down. ♦