Coda - Modeling and Public Policy


1.

In 1989, a relatively young software company released its first hit video game, which dealt with the unlikely topic of urban planning. Players of the game—which was called SimCity—took on the role of a semi-omnipotent mayor: sort of a cross between an all-powerful god, a standard city planner, and a kid playing in a sandbox. The player could set tax rates, construct (or demolish) various structures, set up zoning ordinances, and so on, all while trying to keep the city’s residents happy (and the budget balanced). Even in its first iteration (the success of the original spawned generations of successor games that continue to be produced today), the simulation was startlingly robust: incorrect tax rates would result in bankruptcy for the city (if they were too low) or stagnation in growth (if they were too high). If you failed to maintain an adequate power grid—both by constructing power plants to generate enough electricity in the first place and by carefully managing the power lines to connect all homes and businesses to the grid—then the city would experience brownouts or blackouts, driving down economic progress (and possibly increasing crime rates, if you didn’t also carefully manage the placement and tasking of police forces). Adequate placement (and training) of emergency forces was necessary if your city was to survive the occasional natural disaster—tornadoes, earthquakes, space-monster attacks,[1] &c.

The game, in short, was a remarkably well thought-out and immersive simulation of city planning and management, though of course it had its limitations. As people played the game, they discovered that some of those limitations could be exploited by the clever player: putting coal power plants near the edge of the buildable space, for instance, would cause a significant portion of the pollution to simply drift “off the map,” with no negative impact on the air quality within the simulation. Some of these issues were fixed in later iterations of the game, but not all were: the game, while a convincing (and highly impressive) model of a real city, was still just that—an imperfect model. However, even imperfect models can be incredibly useful tools for exploring the real world, and SimCity is a shining example of that fact. The outward goal of the game—to construct a thriving city—is really just a disguised exercise in model exploration. Those who excel at the game are those who excel at bringing their mental models of the structure of the game-space into the closest alignment with the actual model the designers encoded into the rules of the game.

The programmers behind the Sim series of games have given a tremendous amount of thought to the nature of their simulations; since the first SimCity, the depth and sophistication of the simulations have continued to grow, necessitating a parallel increase in the sophistication of the mechanics underlying the games. In a 2001 interview,[2] lead designer Will Wright described a number of the design considerations that have gone into constructing the different simulations that have made up the series. His description of how the design team viewed the practice of model building is, for our purposes, perhaps the most interesting aspect of the interview:

The types of games we do are simulation based and so there is this really elaborate simulation of some aspect of reality. As a player, a lot of what you’re trying to do is reverse engineer the simulation. You’re trying to solve problems within the system, you’re trying to solve traffic in SimCity, or get somebody in The Sims to get married or whatever. The more accurately you can model that simulation in your head, the better your strategies are going to be going forward. So what we’re trying to [do] as designers is build up these mental models in the player. The computer is just an incremental step, an intermediate model to the model in the player’s head. The player has to be able to bootstrap themselves into understanding that model. You’ve got this elaborate system with thousands of variables, and you can’t just dump it on the user or else they’re totally lost. So we usually try to think in terms of, what’s a simpler metaphor that somebody can approach this with?

This way of looking at models—as metaphors that help us understand and manipulate the behavior of an otherwise intractably complicated system—might be thought of as a technological approach to models. On this view, models are a class of cognitive tools: constructions that work as (to borrow a turn of phrase from Daniel Dennett) tools for thinking[3]. This is not entirely at odds with mainstream contemporary philosophy of science either; van Fraassen, at least, seems to think about model building as an exercise in construction of a particular class of artifacts (where ‘artifact’ can be construed very broadly) that can be manipulated to help us understand and predict the behavior of some other system[4]. Some models are straightforwardly artifacts (consider a model airplane that might be placed in a wind tunnel to explore the aerodynamic properties of a particular design before enough money is committed to build a full-scale prototype), while others are mathematical constructions that are supposed to capture some interesting behavior of the system in question (consider the logistic equation as a model of population growth). The important point for us is that the purpose of model-building is to create something that can be more easily manipulated and studied than the system of interest itself, with the hope that in seeing how the model behaves, we can learn something interesting about the system the model is supposed to represent.
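To make that last example concrete: in its simplest continuous form (the standard textbook formulation, offered here purely as an illustration rather than as anything drawn from the sources discussed above), the logistic model says that a population of size N changes over time according to dN/dt = rN(1 - N/K), where r is the population’s intrinsic growth rate and K is the carrying capacity of its environment. The value of such a construction is precisely the one at issue: we can tune r and K and trace out the resulting trajectories on paper or in a computer far more easily (and far more cheaply) than we could ever run the corresponding experiments on a real population.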

All of this is rather straightforward and uncontroversial (I hope), and noting that simulations like SimCity might work as effective models for actual cities is not terribly interesting—after all, this is precisely the purpose of simulations in general, and observing that the programmers at Maxis have created an effective simulation of the behavior of a real city is just to say that they’ve done their job well. Far more interesting, though, is a point that Wright makes later in the interview, comparing the considerations that go into the construction of models for simulation games like SimCity with those that go into more adversarial strategy games.

In particular, Wright likens SimCity to the ancient board game Go,[5] arguing that both are examples of games that consist in externalizing mental models via the rules of the game. In contrast to SimCity, however, Go is a zero-sum game played between two intelligent opponents, a fact that makes it more interesting in some respects. Wright suggests that Go is best understood as a kind of exercise in competitive model construction: the two players have different internal representations of the state of the game,[6] which slowly come into alignment with each other as the game proceeds. Indeed, except at the very highest level of tournament play, games of Go are rarely formally scored: the game is simply over when both players recognize and agree that one side is victorious. It’s not unusual for novice players to be beaten by a wide margin without recognizing that the game is over—a true beginner’s mental model of the state of play might be so far off that he might not understand his defeat until his more skilled opponent shows him the more accurate model that she is using. A large part of becoming proficient at playing Go consists in learning how to manipulate the relevant mental models of the board, and learning how to manipulate the stones on the board such that your opponent is forced to accept your model.

Of course, disagreement about model construction and use has consequences that range far beyond the outcome of strategy games. In the mid-1990s, the designers behind the Sim series created a project for the Markle Foundation called “SimHealth.” SimHealth worked much like SimCity, but rather than simulating the operation of a city, it simulated the operation of the national healthcare system—hospitals, doctors, nurses, ambulances, &c. Even more interestingly, it exposed the assumptions of the model and opened those up to tinkering: rather than working with a single fixed model and tinkering with initial (and subsequent) conditions (as in SimCity), SimHealth’s “players” could also change the parameters of the model itself, experimenting with how the simulation’s behavior would change if (for example) hospitals could be efficiently run by only a dozen doctors, or if normal citizens only visited the emergency room for life-threatening problems. Wright argued that tools of this type made the process of health care policy debate explicit in a way that simple disagreement did not—that is, they exposed the fact that the real nature of the disagreement was a disagreement about models.

WW: When people disagree over what policy we should be following, the disagreement flows out of a disagreement about their model of the world. The idea was that if people could come to a shared understanding or at least agree toward the model of the world, then they would be much more in agreement about the policy we should take.

CP: So in a way, a system like that could be used to externalize mental models and create a collective model…. you have an externalized model that everyone agrees to abide by.

WW: Yeah, which is exactly the way science works[7].

There’s a fantastically deep point here, one that (it seems to me) has been underemphasized by both philosophers of science and political philosophers: to a very great extent, policy disagreement is model disagreement. When we disagree about how to solve some social problem (or even when we disagree about what counts as a social problem to be solved), our disagreement is—at least in large part—a disagreement about what model to apply to some aspect of the world, how to parameterize that model, and how to use it to guide our interventions[8]. Nowhere is this clearer than when public policy purports to be guided by scientific results. Taking the particular values that we do have as given,[9] a sound public policy that aims to make the world a certain way (e.g. to reduce the heavy metal content of a city’s drinking water) is best informed by careful scientific study of the world—that is, it is best informed by the creation and examination of a good model of the relevant aspects of the world.

One consequence of this is that some of the difficulties of designing good public policy—a practice that we can think of, in this context, as a kind of social engineering—are inherited from difficulties in model building. In our deliberations about which laws to enact, or which policies to reform, we may need to appeal to scientific models to provide some relevant data, either about the way the world is now, or about how it will be after a proposed intervention is enacted. We may need to rely on models to allow us to explore the consequences of some proposed intervention before we try out a new policy in socio vivo; that was the intended application of SimHealth, but the model in question need not be so explicit as a computer simulation. If we disagree about which model to use, what the model implies, or how to tune the model parameters, then it may be difficult (or even impossible) to come to a policy agreement. In many cases, the lack of scientific consensus on a single model to be used (or at least on a relatively small family of models to be used) when working with a particular system is a sign that more work needs to be done: we may not agree, for instance, about whether or not the Standard Model of particle physics is the one we ought to work with in perpetuity, but this disagreement is widely appreciated to be an artifact of some epistemic shortcoming on our part. As we learn more about the world around us, the thought goes, the scientific community will converge on a single model for the behavior of sub-atomic systems.

However, this is not always the case. Suppose we have a pressing public policy decision to make, and that the decision needs to be informed by the best science of the day. Suppose further that we have good reason to think that the sort of singular consensus trajectory that (say) sub-atomic particle models seem to be on is unlikely to appear in this case. Suppose, that is, that we’re facing a policy decision that must be informed by science, but that the science seems to be generating a plethora of indispensable (but distinct) models rather than converging on a single one. If we have good reason to think that this trend is one that is unlikely to disappear with time—or, even more strongly, that it is a trend that is an ineliminable part of the science in question—then we will be forced to confront the problem of how to reform the relationship between science and policy in light of this new kind of science. Wright’s pronouncement that model convergence is “exactly the way science works” might need to be reexamined, and we ignore that possibility at our peril. As we shall see, policies designed to deal with complex systems buck this trend of convergence on a single model, and thus require a novel approach to policy decision-making.

If there is any consensus at all in climate science, it is this: the window for possibly efficacious human intervention is rapidly shrinking, and if we don’t make significant (and effective) policy changes within the next few years, anthropogenic influence on the climate system will take us into uncharted waters, where the best-case scenario—complete uncertainty about what might happen—is still rather unsettling. Critics of contemporary climate science argue that the uncertainty endemic to our “best” current models suggests that we should adopt a wait-and-see approach—even if the climate is warming, some insist[10] that the fact that our current models are scattered, multifarious, and imperfect mandates further work before we decide on how (or if) we should respond.

This position, I think, reflects a mistaken assumption about the trajectory of climate science. The most important practical lesson to be drawn here is this: if we wait for climate scientists to agree on a single model before we try to agree on policy, we are likely to be waiting forever. Climate scientists seem interested in diversifying, not narrowing, the field of available models, and complexity-theoretic considerations show that this approach is on firm conceptual ground. Our policy expectations must shift accordingly. This is not to suggest that we should uncouple our policy decisions from our best current models—quite the opposite. I believe that the central point that Will Wright makes in the quotation from his discussion of SimCity and SimHealth is still sound: disagreement about policy represents disagreement about models. However, the nature of the disagreement here is different from that of the past: in the case of climate science, we have disagreement not about which model to settle on, but about how to sensibly integrate the plurality of models we have. The disagreement, that is, revolves around how to translate a plurality of models into a unified public policy.

My suggestion is: don’t. Let the lessons learned in attempts to model the climate guide our attempts to shape our influence on the climate. Rather than seeking a single, unified, top-down public policy approach (e.g. the imposition of a carbon tax at one rate or another), our policy interventions should be as diverse and multi-level as our models. Those on both sides of the climate policy debate sometimes present the situation as if it were a choice between mitigation—trying to prevent future damage—and adaptation—accepting that damage is done, and changing the structure of human civilization to respond. It seems to me that the lesson to be drawn here is that all of these questions (Which strategy is best? Should we mitigate or adapt?) are as misguided as the question “Which climate model is best?” We should, rather, take our cue from the practice of climate scientists themselves, encouraging innovation generally, across many different levels of spatio-temporal resolution.

By way of a single concrete example, consider the general emphasis (at least at the political level) on funding for alternative energy production (e.g. solar, hydrogen fuel cells). It is easy to see why this is a relevant (and important) road to explore—even if the possible threat of climate change turns out to (so to speak) blow over, fossil fuels will not last forever. However, engineering viable replacements for fossil fuel energy is an expensive, long-term investment. While this work is important, we should not allow ourselves to focus on it single-mindedly—just as important are more short-term interventions which, though possibly less dramatic, have the potential to contribute to an effective multi-level response to a possible threat. For instance, directing resources toward improving the efficiency of current energy use might make a greater impact (at least in the short run). Innovations here can, like EMICs, take the form of highly specialized changes: the current work on piezoelectric pedestrian walkways (which harvest some of the kinetic energy of footsteps striking a sidewalk or hallway floor and store it as electrical energy) is an excellent example[11]. Unfortunately, research programs like this remain largely confined to the sidelines, with the vast majority of public attention (and funding) going to things like alternative energy and the possibilities of carbon taxes. A more appropriate response requires us first to accept the permanent pluralism of climate science models, and then to search for a similarly pluralistic set of policy interventions.

2.

There’s one last point I’d like to make connecting complexity modeling and public policy. In a way, it is the simplest point of the whole dissertation, and it has been lurking in the background of all of the preceding 200-some-odd pages. Indeed, it was perhaps best phrased way back in the first chapter: the world is messy, and science is hard. We’ve examined a number of senses in which that sentence is true, but there’s one sense in particular that’s been illuminated in the course of our discussion here. I want to close with a brief discussion of that sense.

The advent of the loosely related family of concepts, methods, theories, and tools that I’ve been referring to collectively as “complexity science” or “complexity theory” has changed the face of scientific practice in ways that are only beginning to be appreciated. Just as when quantum theory and relativity overthrew the absolute rule of classical physics in the first part of the 20th century, much of what we previously took ourselves to know about the world (and our place in it) is now being shown to be, if not exactly wrong, then at least tremendously impoverished. The view that I’ve associated variously with traditions in reductionism, eliminativism, and mechanism—the view that the world consists in nothing over and above, as Hume put it, “one little thing after another”—is proving increasingly difficult to hold onto in the face of contrary evidence. Novel work in a variety of fields—everything from ecology to network science to immunology to economics to cognitive science—is showing us that many natural systems exhibit behavior that is (to put it charitably) difficult to explain if we focus exclusively on the behavior of constituent parts and ignore more high-level features. We’re learning to think scientifically about topics that, until recently, were usually the province of metaphysicians alone, and we’re learning to integrate those insights into our model building.

While this complexity revolution has changed (and will continue to change) the practice of scientific model building, it must also change the way we talk about science in public, and the way we teach science in schools. The future impact of complexity must be confined neither to esoteric discussions in the philosophy of science, nor even to changes in how we build our scientific models. Rather, it must shape how the general public thinks about the world around them and their place in that world. Moreover, it must shape how the general public evaluates scientific progress, and what they expect from their scientific theories.

I’ve emphasized a number of times here that many of the criticisms of climate science are, to some extent, founded on a failure to appreciate the unique challenges of modeling such a complex system. The scientists building working climate models, of course, by and large appreciate these challenges. The public, however, very clearly does not. The widespread failure to accept the urgency and immediacy of the call to act to avert a climate change disaster is one symptom of this gap in understanding.

This is not just a matter of clear presentation of the data, or of educating people about what climate models say—though these are certainly very important things. Instead, the disconnect between the scientific consensus and public opinion about the reliability and effectiveness of climate models is a symptom of science education and science journalism that have been left behind by scientific progress. The demands for more data, better models, further research, a stronger consensus, and so on would be perfectly sensible if we were dealing with predictions about a less complex system. Science is presented to the public—both in primary/secondary education and in most popular journalistic accounts—as aiming at certainty, analytic understanding, and tidy long-term predictions: precisely the things that complexity theory often tells us we simply cannot have. Is it any wonder, then, that the general public fails to effectively evaluate the reliability of climate predictions and models? Climatology (like economics, another widely mistrusted complex systems science) does great violence to the public perception of what good science looks like. The predictions and methods of these sciences bear little resemblance to the popular paradigm cases of science: Isaac Newton modeling the fall of an apple with a neat set of equations, or Jonas Salk working carefully in a forest of flasks and beakers to develop a vaccine for polio.

If we’re to succeed in shifting public opinion about climate science—and if we’re to avoid engaging in a precisely analogous public fight over the reliability of the next complex systems science breakthrough—then we need to communicate the basics of complexity-based reasoning, and we need to help the public understand that science is a living enterprise. We need to communicate to the average citizen the truth of the maxim from Chapter One: the world is messy and science is hard.



12/07/2010 - 8/05/2014

  1. If the player was feeling malicious (or curious), she could spawn these disasters herself and see how well her police and fire departments dealt with a volcanic eruption, a hurricane, Godzilla on a rampage, or all three at the same time.
  2. Pearce (2001)
  3. Dennett (2000)
  4. See, e.g., van Fraassen (2009)
  5. Go is played on a grid, similar to a chess board (though of varying size). One player has a supply of white stones, while the other has a supply of black stones. Players take turns placing stones at the vertices of the grid (rather than in the squares themselves, as in chess or checkers), with the aim of controlling more of the board by surrounding areas with stones. If any collection of stones is entirely surrounded by stones of the opposite color, the opponent “captures” the stones on the inside, removing them from the board. Despite these simple rules (and in contrast to chess, with its complicated rules and differentiated pieces), incredibly complex patterns emerge in games of Go. While the best human chess players can no longer defeat the best chess computers, the best human Go players still defeat their digital opponents by a significant margin.
  6. It’s important to note that this is not the same as the players having different models of the board. Go (like chess) is a game in which all information is accessible to both players. Players have different functional maps of the board, and their models differ with regard to those functional differences—they might differ with respect to which areas are vulnerable, which formations are stable, which section of an opponent’s territory might still be taken back, and so on.
  7. Ibid., emphasis mine
  8. This is not to suggest that policy can be straightforwardly “read off” of scientific models. Understanding the relevant science, however, is surely a necessary condition (if not a sufficient one) for crafting sound public policy. See Kitcher (2011) for a more detailed discussion of this point. For now, we shall simply take it as a given that understanding scientific models plays a role (if not the only role) in deciding public policy.
  9. I want to avoid becoming mired in debates about the fact/value distinction and related issues. None of what follows rests on any particular theory of value, and the reader is encouraged to substitute his favored theory. Once we’ve identified what we in fact ought to do (whether by some utilitarian calculus, contemplation of the virtues, application of Kant’s maxim, an appeal to evolution, or whatever), then we still have the non-trivial task of figuring out how to do it. Public policy is concerned with at least some of the actual doing.
  10. Again, Idso & Singer (2009) is perhaps a paradigm case here, given the repeated criticism of climate modeling on the grounds that no single model captures all relevant factors. This argument has also been repeated by many free-market-leaning economists. Dr. David Friedman (personal communication), for instance, argued that “even if we were confident that the net effect was more likely to be negative than positive, it doesn't follow that we should act now. It's true that some actions become more difficult the longer we wait. But it's also true that, the longer we wait, the more relevant information we have.” Read charitably (such that it isn’t trivially true), this suggests a tacit belief that climate science will (given enough time) converge on not just more particular information, but a better model, and that the gains in predictive utility in that model will make up for losses in not acting now.
  11. See, for example, Yi et al. (2012)