Using models within economics, or within any other social science, is especially treacherous. That's because social science involves a higher degree of complexity than the natural sciences. The reason social science is so complex is that its basic units, which economists call agents, are strategic, whereas the basic units of the natural sciences are not. Economics can be thought of as physics with strategic atoms, atoms that keep trying to foil any effort to understand them and bring them under control. Strategic agents complicate modeling enormously; they make a perfect model impossible, because they increase the number of calculations required to solve the model beyond what the fastest computer one can hypothesize could process in a finite amount of time.
Put simply, the formal study of complex systems is really, really hard. Inevitably, complex systems exhibit path dependence, nested systems, multiple-speed variables, sensitive dependence on initial conditions, and other non-linear dynamical properties. This means that at any moment in time, right when you think you have a result, all hell can break loose. Formally studying complex systems requires rigorous training at the cutting edge of mathematics and statistics. It's not for neophytes.
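Sensitive dependence on initial conditions, one of the non-linear dynamical properties mentioned above, can be illustrated with a minimal sketch (my own illustration, not part of the original testimony) using the logistic map, a standard textbook example of chaos:

```python
# Illustrative sketch: sensitive dependence on initial conditions,
# shown with the logistic map x_{t+1} = r * x_t * (1 - x_t).
# The function name and parameters are hypothetical, chosen for clarity.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 for the given number of steps."""
    x = x0
    path = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        path.append(x)
    return path

# Two starting points that differ by only one part in a million.
a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)

# After a few dozen iterations the tiny initial gap has grown so large
# that the two trajectories bear no resemblance to each other.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The point of the sketch is the modeling lesson: in a system like this, even a measurement error of one part in a million eventually destroys the model's predictive power, which is exactly why "right when you thought you had a result, all hell can break loose."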
This recognition that the economy is complex is not a new discovery. Earlier economists, such as John Stuart Mill, recognized the economy's complexity and were very modest in their claims about the usefulness of their models. They carefully presented their models as aids to a broader informed common sense. They built this modesty into their policy advice, telling policy makers that the most we can expect from models is half-truths. To make sure that they did not claim too much for their scientific models, they divided the field of economics into two branches: a scientific branch, which worked on formal models, and political economy, the branch of economics that addressed policy. Political economy was seen as an art that did not have the backing of science, but instead relied on insights from the models developed in the scientific branch, supplemented by educated common sense, to guide policy prescriptions.
In the early 1900s that two-part division broke down; economists became less modest in their claims for models and more aggressive in applying models directly to policy questions. The two branches were merged, and the result was a tragedy both for the science of economics and for the applied policy branch of economics.
It was a tragedy for the science of economics because it led economists away from developing a wide variety of models that would creatively explore the extraordinarily difficult questions raised by the economy's complexity, questions for which new analytic and computational technology opened up new avenues of investigation. Instead, the profession spent much of its time dotting i's and crossing t's on what was called the Walrasian general equilibrium model, which was more analytically tractable. Rather than viewing the supply/demand model and its macroeconomic counterpart, the Walrasian general equilibrium model, as interesting models relevant for a few limited phenomena, and at best a stepping stone toward a formal understanding of the economy, the profession enshrined both models and acted as if they explained everything. Complexities were assumed away, not because it made sense to assume them away, but for reasons of tractability. The result was that a set of models that would not pass even a perfunctory common sense smell test was studied ad nauseam.
Some approaches working outside this Walrasian general equilibrium framework that I see as promising include adaptive network analysis, agent-based modeling, random graph theory, ultrametrics, combinatorial stochastic processes, cointegrated vector autoregression, and the general study of non-linear dynamic models.
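To give a flavor of one of these alternatives, here is a minimal, hypothetical agent-based model (the rules and names are my own illustration, not drawn from any specific economic paper): agents imitate a random sample of their peers, so small differences can cascade into large aggregate swings, the kind of herding behavior that a single representative-agent model assumes away.

```python
# Hypothetical minimal agent-based model of herding. Each period, every
# agent adopts the majority choice (0 or 1, say "sell" or "buy") of a
# random sample of peers. All names and parameters are illustrative.

import random

def simulate(n_agents=100, steps=30, sample_size=5, seed=42):
    """Run the imitation model; return the fraction choosing 1 each period."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        new_state = []
        for _agent in range(n_agents):
            peers = rng.sample(range(n_agents), sample_size)
            votes = sum(state[j] for j in peers)
            # Adopt the majority choice among the sampled peers.
            new_state.append(1 if votes * 2 > sample_size else 0)
        state = new_state
        history.append(sum(state) / n_agents)
    return history

history = simulate()
# Even from a roughly 50/50 start, imitation tends to push the population
# toward one extreme: aggregate outcomes emerge from local interactions.
```

The design point is that there is no equation for the aggregate; the macro behavior emerges from many interacting strategic units, which is precisely what makes such systems hard to capture in a single tractable model.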
Initially, macroeconomics stayed separate from this broader unitary approach and relied on a set of rough-and-ready models that had little scientific foundation. But in the 1980s, macroeconomics and finance fell into this "single model" approach. As that happened, economists lost sight of the larger lesson that complexity conveys: that models of a complex system can be expected to continually break down. This adoption by macroeconomists of a single-model approach is one of the reasons the economics profession failed to warn society about the financial crisis, and why some parts of the profession assured society that such a crisis could not happen. Because they focused on that single model, economists simply did not study and plan for the inevitable breakdowns one would expect in a complex system; they had become so enamored with their model that they forgot to use it with common sense judgment.
Models and Macroeconomics
Let me be a bit more specific. The dominant model in macroeconomics is the dynamic stochastic general equilibrium (DSGE) model. This is a model that assumes a single, globally rational representative agent with complete knowledge who maximizes over the infinite future. In this model, by definition, there can be no strategic coordination problem, the most likely cause of the recent crisis; such problems are simply assumed away. Yet this model has been the central focus of macroeconomists' research for the last thirty years.
Had the DSGE model been seen as an aid to common sense, it could have been a useful model. When early versions of the model were first developed back in the early 1980s, they served the useful purpose of getting straight some intertemporal issues that earlier macroeconomic models had gotten wrong. But then, for a variety of sociological reasons that I don't have time to go into here, a majority of macroeconomists came to believe that the DSGE model was useful not just as an aid to our understanding, but as the model of the macroeconomy. That doesn't say much for the common sense of rocket economists. As the DSGE model became dominant, important research was left undone on broader non-linear dynamic models of the economy, models that would have been more helpful in understanding how an economy might crash and what government might do when faced with a crash.
Among well-known economists, Robert Solow stands out for having warned about the use of DSGE models for policy (see Solow, in Colander, 2007, pg 235); he called them "rhetorical swindles." Other economists, such as Post Keynesians and economic methodologists, also warned about the use of these models. (For a discussion of alternative approaches, see Colander, ed., 2007.) So alternative approaches were being considered, and concern about the model was aired, but those voices were lost in the enthusiasm most of the macroeconomics community showed for these models.
Similar developments occurred with efficient market finance models, which make assumptions similar to those of DSGE models. When efficient market models were first developed, they were useful; they led to technological advances in risk management and financial markets. But, as happened in macro, the users of these financial models forgot that models provide at best half-truths; they stopped using models with common sense and judgment. The modelers knew that there was uncertainty and risk in these markets that went far beyond the risk assumed in the models. Simplification is the nature of modeling. But simplification means the models cannot be applied directly; they must be used with judgment and common sense, and with a knowledge of the limitations that the simplifications impose. Unfortunately, the warning labels that should have appeared on the models in bold print, stating that they are based on assumptions that do not fit the real world and thus should not be relied on too heavily, were not there. They should have been, which is why in the Dahlem Report we suggested that economic researchers who develop these models be subject to a code of ethics requiring them to warn society when economic models are being used for purposes for which they were not designed.
How did something so stupid happen in economics? It did not happen because economists are stupid; they are very bright. It happened because the incentives for advancement in the academic profession lead researchers to dot the i's and cross the t's of existing models rather than to explore a wide range of alternative models, or to focus their research on interpreting models and seeing that they are used in policy with common sense. Common sense does not advance one very far within the economics profession. The over-reliance on a single model used without judgment is a serious problem built into the institutional structure of the academia that produces economic researchers. That system trains show dogs, when what we need are hunting dogs.
The incorrect training starts in graduate school, where in their core courses students are trained primarily in the analytic techniques useful for developing models, but not in how to use models creatively, or in how to use models with judgment to arrive at policy conclusions. For the most part, policy issues are not even discussed in the entire core macroeconomics course. As students at a top graduate school put it, "Monetary and fiscal policy are not abstract enough to be a question that would be answered in a macro course" and "We never talked about monetary or fiscal policy, although it might have been slipped in as a variable in one particular model." (Colander, 2007, pg 169).
Let me conclude with a brief discussion of two suggestions, which relate to issues under the jurisdiction of this committee, that might decrease the probability of such events happening in the future.
Include a wider range of peers in peer review
The first is a proposal that might help add a common sense check on models. Such a check is needed because, currently, the internal-to-the-subfield nature of peer review allows an almost incestuous mutual reinforcement of researchers' views, with no common sense filter on those views. The proposal is to include a wider range of peers in the reviewing process for NSF grants in the social sciences. For example, physicists, mathematicians, statisticians, and even business and government representatives could serve, along with economists, on reviewing committees for economics proposals. Such a broader peer review process would likely encourage research on a much wider range of models and would also encourage more creative work.
Increase the number of researchers trained to interpret models
The second is a proposal to increase, through research grants, the number of researchers trained in interpreting models rather than in developing them. In a sense, what I am suggesting is an applied science division of the National Science Foundation's social science component. This division would fund work on the usefulness of models, and would be responsible for adding the warning labels that should have been attached to the models.
This applied research would not be highly technical and would involve a quite different set of skills than the standard scientific research would require. It would require researchers who had an intricate consumer's knowledge of theory but not a producer's knowledge. In addition it would require a knowledge of institutions, methodology, previous literature, and a sensibility about how the system works. These are all skills that are currently not taught in graduate economics programs, but they are the skills that underlie judgment and common sense. By providing NSF grants for this work, the NSF would encourage the development of a group of economists who specialized in interpreting models and applying models to the real world. The development of such a group would go a long way toward placing the necessary warning labels on models, and make it less likely that fiascos like a financial crisis would happen again.