(Draft. For quotable, published version, see Simulation & Gaming, Vol. 31, No. 1 (March, 2000), pp. 50-73.)
Global Modeling: Origins, Assessment and Alternative Futures
Richard W. Chadwick

University of Hawaii



This essay reviews the origins of key concepts used in most global models. Philosophical preconditions for validity are examined. A framework for critically evaluating existing global models is suggested. A philosophy for global modeling is outlined. Global modeling is a much deeper enterprise than its critics seem to realize. However, since global modeling lacks an academic or institutional home, key issues that arose with the early global models remain essentially untouched. To critique global models or global modeling meaningfully, at least three perspectives are essential: those of science, philosophy, and practical application (praxis). A firm foundation for global modeling requires that its practitioners adopt an academic paradigm, i.e., university research institutes, schools and departments, a professional association, and a professional, peer-reviewed journal. Global modeling is the only methodology capable of helping humanity to self-consciously envision itself and its environment in a time frame long enough and on a scale large enough to provide an effective guide in its transition from a global to an interstellar species.
Keywords: decision-making; global models; instability; international relations; simulation.

Origins of Global Modeling
Despite the proliferation of global modeling and models, most people doing simulation and gaming are in fact unaware of this development. The history of global modeling paradoxically suggests that the same may be true of global modelers themselves in terms of their own political and cultural roots. What are the nature and purposes of global modeling? To my knowledge, past efforts to answer these questions have focused on particular threads of development and particular purposes associated with specific projects rather than on the field as a whole. So the first task is to acquaint this audience with that history, albeit as briefly and as cursorily as space dictates. After this review, we will be in a position to evaluate the field critically and explore the merits of alternative directions for its further development.
A Definition of Global Modeling
What is global modeling? My old professor Donald Campbell used to remark that the best way to traumatize a discussion is to ask people to define their terms! For the sake of offering some focus, let us assume that for a model to be "global" in scope it must be meant to characterize some features of human thought, behavior and the human environment that are held to be typical if not universal across cultures and history. Specifically, it must represent these features in logical and mathematical forms amenable to describing dynamics. It must offer a causal explanation for the interactions modeled. It must be "testable" in the sense that it can be edited through empirical investigation. And it must be intended to be useful in practical application, that is, have variables or parameters that a user of the model can identify as factors amenable to being manipulated to produce desired effects in some aspects of the world being modeled. By contrast, let us consider a very good alternative definition. Brecke (1995) suggests the features of global models to be: geographically global in scope, of long duration (25-50 years), and integrative of diverse sectors (population dynamics, economic dynamics, politics and the environment). I prefer a fuzzier conceptualization. Klein's LINK model, Leontief's WIOM and Onishi's FUGI model are primarily economic and shorter term, for instance, and would seem not to be included in Brecke's definition. The Meadows team's WORLD3 model excludes political dynamics, and its treatment of the economy as a single world system obviates the need to represent trade. Bremer's GLOBUS model excludes environmental considerations. I have purposely left the features of time frame, geographic coverage, and scope of coverage out of the definition in order to accommodate the wide variety of global modeling taking place. Instead I have focused on scientific universality, logical and mathematical construction, explanatory power, testability, and practical, applied usefulness as characteristics of this type of simulation, and left geographic and sectoral scope and time horizon as issues to be addressed rather than as criteria for exclusion.
Let us see how well this rather complicated definition holds up, not only as a descriptive but also as an assessment tool, by reviewing some works commonly labeled "global models." First, however, let us examine some intellectual roots and precursors to see whether "global modeling" has existed, in a recognizable albeit "pioneering" form, for longer than is commonly supposed.

Lewis Fry Richardson (1881-1953) and Arms Race Modeling
The origins of global modeling are commonly dated to the Club of Rome's support of Jay Forrester's (1971) creation of WORLD2. But there is reason to extend this date back decades, at least to the turn of the century. The system dynamics approach to modeling is much older and, as applied to two global systems, war and weather, begins with the work of Lewis Fry Richardson. Richardson was a Quaker who earned a living as a respected meteorologist in the UK and whose equations for atmospheric churning were used until very recently. By 1913, however, his interests had broadened to include an effort to forestall the "Great War," World War I. He had completed a comprehensive statistical and mathematical study of war. The essence of his understanding is formulated in a simple family of equations:
dy/dt = ax - by + c, where

dy/dt: the change in Y's resource allocations for coercion, defense, or war;

y: Y's current allocations;

x: X's current allocations; "X" is one or more opponents or target groups opposing Y, from Y's perspective;

a: "fear," produced by the difference of two factors: conflict minus cooperation;

b: "fatigue," the result of competition for resources given other values; and

c: "ambition," "revenge," or other motivations endogenous to a leadership group, Y.


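To make the dynamics concrete, here is a minimal numerical sketch of a coupled Richardson-type arms race in Python. The parameter values and the simple Euler integration step are illustrative assumptions of mine, not estimates drawn from Richardson's data.

# Richardson arms-race dynamics for two rivals, X and Y, integrated with a simple Euler step.
def richardson(x0, y0, a, b, c, d, g, h, dt=0.1, steps=200):
    """Simulate dx/dt = a*y - b*x + g and dy/dt = c*x - d*y + h."""
    x, y = x0, y0
    history = [(x, y)]
    for _ in range(steps):
        dx = (a * y - b * x + g) * dt  # fear of the rival's arms, minus fatigue, plus ambition
        dy = (c * x - d * y + h) * dt
        x, y = x + dx, y + dy
        history.append((x, y))
    return history

# Hypothetical coefficients in which mutual fear outweighs fatigue, so allocations escalate.
print(richardson(x0=1.0, y0=1.0, a=0.5, b=0.2, c=0.5, d=0.2, g=0.1, h=0.1)[-1])

With fear coefficients larger than fatigue coefficients, the simulated allocations grow without bound; reversing that relation produces a stable equilibrium instead, which is the basic property Richardson used the equations to exhibit.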
Now you might think, as Rapoport (1957, p. 297) did, that sets of such equations, used to indicate the direction and properties of arms races, implied a deterministic philosophy ("…gross determinism of the classical physical type assumed by Richardson," Rapoport, loc. cit.). Equations of the type Richardson used are certainly deterministic in form, and reflect the prevailing philosophical view of Richardson's youth that human society might lend itself well to being modeled by such equations. Richardson himself, however, introduced his philosophy and equations in quite a different way:
Critic: Can you predict the date at which the next war will break out?

Author: No, of course not. The equations are merely a description of what people would do if they did not stop to think. ...they follow their traditions, ...and their instincts…because they have not yet made a sufficiently strenuous intellectual and moral effort to control the situation. The process described by the ensuing equations is not to be thought of as inevitable. It is what would occur if instinct and tradition were allowed to act uncontrolled.

Richardson, Arms and Insecurity, 1960.
We certainly cannot conclude from this that Richardson was philosophically a determinist. His equations are deterministic in form because that was the formalism available to him to represent his ideas, not because he believed in the inevitability of a particular outcome given sufficient information. He did not attempt to quantify "sufficiently strenuous intellectual and moral effort"; rather, he sought to transform the thinking of the leaders of his day to avoid what would otherwise be an inevitable catastrophe. In 1913, he published 300 copies of his book at his own expense and sent it to world leaders with a warning that they were headed toward war and should do something to avoid it.
It took more than 50 years for global modelers to begin adopting equations similar to his, but they still have not adopted his philosophy. For instance, in an early evaluation of the precursor to GLOBUS, Bremer examined some possibilities as to why his SIPER simulation exhibited a tendency to produce arms races:
...three possibilities seem worth considering. First, it may be that we have successfully captured the essence of the decisional calculus decision-makers use in making their assessments of national security needs.... A second possibility is that some fine-tuning is required. That is, the decision processes are correct but some of the parameter values are in error. A third possibility is that the processes in this part of the model are wrong in a fundamental respect. For example, the acquisition of armaments...is largely attributable either to the needs of the military-industrial complex, or to the defense bureaucracy specifically.

Simulated Worlds (1977), 205.
His examination of the SIPER model's behavior is limited to achieving a good fit to real world behavior, to what Richardson referred to as "habit and instinct." The usefulness of the model is as a way to "gain insights into the over-all impact of environmental factors upon the behavior of nations." It is "a means of integrating new knowledge into a coherent and consistent framework, making it possible to give a preliminary assessment of the larger and long-term implications of a new discovery" (p. 208). The "strenuous intellectual and moral effort to control the situation" -- Richardson's raison d'être for modeling was to guide such an effort -- lies entirely outside the paradigm. Bremer's paradigm at the time -- and I do most certainly appreciate this; this is not a criticism -- was that of science. Science ever seeks, through data generation, empirical analysis, theoretical inference, and logical reconstruction, to reduce gaps between models of the possible, generated by theory, and models of the real as made operational through data generation. The limitation of this approach -- widely accepted -- is that it does not and cannot address the philosophical and practical problems that motivate its supporters to fund such efforts.
Wassily Leontief (1906-1999) and the WIOM (World Input-Output Model)
Leontief's 1973 Nobel Prize in economics was for modeling economies in a manner analogous to S-R psychologists' (popularly characterized) "rat experiments." S-R models and I-O (input-output) models are formally identical: Oi = f({bij Oj}, j = 1, ..., n), where Oi is a unit of output i, the Oj are the inputs to the production process, and the bij are the weights in a system of linear, homogeneous, simultaneous equations (analogous, for instance, to a typical multiple regression equation). The matrix of bij coefficients is expected to vary from time to time as new technologies and other factors change the types of inputs and outputs and the relations between them, but not so much or so rapidly as to make any given set of coefficients useless for forecasting purposes. Note also that an output can be a service as well as a good; it is therefore possible to say that the behavior of people, as well as the quantity of goods, is predictable contingent on a given application of inputs.
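As an illustration of the computational core of such models, here is a minimal sketch in Python of the standard open input-output calculation, with a final-demand vector added; the two-sector coefficient matrix and the demand figures are hypothetical numbers of my own, not drawn from Leontief's work.

import numpy as np

# Technical coefficients b[i][j]: input required from sector i per unit of sector j's output.
B = np.array([[0.2, 0.3],
              [0.1, 0.4]])

# Final demand for each sector's output (hypothetical units).
d = np.array([100.0, 50.0])

# Gross output O must satisfy O = B @ O + d, so O = (I - B)^(-1) @ d.
O = np.linalg.solve(np.eye(2) - B, d)
print(O)  # gross outputs required to meet final demand

The same calculation, scaled up to many sectors and regions and re-estimated as the coefficients change, is what sits at the center of the economic global models discussed below.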
This way of thinking, also as formally deterministic as Richardson's, lies at the core of all global models representing national and global economies. For example, Onishi's FUGI (Future of Global Interdependence, financed by the Japanese government), the British SARUM (Systems Analysis Research Unit Model), and of course Leontief's own WIOM (World Input Output Model), announced in 1977 as the UN World Model, all include such I-O models at their core.
In an application to arms production estimates aimed not at warning of war potential or likelihood but at economic performance, Leontief and Duchin (1983) suggest a number of scenarios under which consumer goods per capita could increase or decrease depending on rates of military spending on armaments. Note that such a model could have included Richardson-like equations, but did not. Also interestingly, although this study makes projections to the year 2000, no scenario considers the economically destructive uses to which arms might be put. Nor is the survivability of any of the actors considered (e.g., the possible disintegration of the USSR). Further, all applications assumed the same I-O coefficients for all arms produced in all countries. Leontief justifies this particular assumption as follows:
Considering the significant increase in the overseas transfer of U.S. military technology since the early 1970s and given the paucity of information on military technology in other countries, it was assumed that the same input structures held for all regions. The same technologies were also assumed for future years.

Military Spending (1983), 15.


The limitations of this type of analysis seem clear enough. Data are acknowledged to be incomplete and to some extent incorrect, with consequently poor parameter estimates for the model. Choices of action are therefore based on reasonable but likely incorrect speculation. The accuracy of projections depends in part on policies remaining constant and on the other general ceteris paribus caveats that accompany all such applied research (e.g., the requirement that applied technology remain constant).
Unlike purely scientific research, illustrated by Bremer's early work, this type of modeling gives a clear and central place to policy choices and consequences. Nevertheless, questions can be raised -- but are not -- as to which equations and which parameters represent historically specific and ungeneralizable situations and dynamics, and which are more broadly applicable.
There seems to be a paradigm limitation as regards what is fair to ask of this type of research, that is, to what standards it should be held accountable. Research reports themselves are usually mute on this subject. Results and how they were achieved are simply presented. This style of presentation is characteristic of applied research or praxis. It is meant to achieve a particular, applied purpose. The Leontief and Duchin work was supported by the U.S. Arms Control and Disarmament agency and later published under U.N. auspices. Such work is part of a larger enterprise, namely policy advising. William Webster stated a succinct paradigm for such applied research in the context of describing work in the National Security Council:
Questions are asked: Can we do it? Can we do it lawfully? Are the benefits worth the cost? And subjectively: Is it a reasonable project? Is it consistent with American values?

Peter Maas, Parade Magazine, May 19, 1991, p. 5f, interviewing Judge William Webster, former head of the CIA.


In such a context, refinements of science and philosophy have no priority per se. What is a priority is performance: making a contribution to achieving goals consistent with legal and moral criteria and a high benefit/cost ratio in terms of desired ends such as survival and security, and means to those ends such as power and wealth. Issues such as how rapidly the variables and coefficients of the real world represented by an I-O model are changing are of technical interest only. Issues such as whether habit and instinct will dominate over strenuous moral effort have no priority per se; in the context of strategy and tactics, it is only the expected value of an outcome -- a situation resulting from a policy or action -- that matters. This explains why Leontief and Duchin seemed less concerned with technical refinements of their applied research than with making definite estimates of arms spending and of the effects of reductions in spending levels. In a policy development context, you must discipline yourself to contract deadlines and the limitations of available resources in order to reach an acceptable conclusion.

Harold Guetzkow (1915-), the Inter-Nation Simulation (INS), and the Simulated International Processes (SIP) Project
It is my understanding from many personal dialogs with Harold Guetzkow (he was the chair of my Ph.D. committee in the 1960s) that he was as concerned as Richardson was with preventing war, although he was, and remains, far more circumspect than Richardson. In Guetzkow's case, I surmise it was a concern with the spread of nuclear weapons and the likelihood of nuclear war or nuclear blackmail, or what Herman Kahn (1965) later came to call "spasm" nuclear war. The decision sciences had not developed adequately to study these or similar problems, much less to help teach leaders how to avoid global nuclear war; for this and other reasons Guetzkow embarked on a three-decade effort to promote modeling research that could be of such use. The major product of this effort was the first large scale foreign policy simulation tool, the INS (Inter-Nation Simulation), reported on in his book, Simulation in International Relations (1963), and by his then doctoral student, Richard Brody (1963). Guetzkow's promotion of global modeling, in this case political systems modeling, was motivated neither by a desire to advance science per se (though as a scientist he certainly had that interest) nor by a specific application of already proven science, but by a passion for removing a fear: specifically, the fear of world nuclear annihilation.
While the methodologies employed (mathematics, statistics, data generation, empirical analysis) shared much in common with the other paradigms (science and praxis), their application raised philosophical issues the others did not. In what sense, for instance, is the INS a simulation? A simulation of what? And how would one test this assertion? Guetzkow characterized simulation as a "reduced and simplified form" of some phenomenon. But how would one know whether the reduction and simplification resulted in a product that was not analogous to the real phenomenon in ways too important to ignore? Lacking adequate theory to apply to the problem being addressed, and lacking adequate data to edit or "test" a developing theory of decision making, how could one answer these questions?
It would at first appear that Guetzkow's simulation enterprise was, in a fundamental sense, a philosophical rather than a scientific enterprise. Consider that the goal of science broadly conceived is to reduce dissonance between models of the possible (theory application) and models of the real (data gathering and organization). Consider that the goal of praxis (practical application) is to reduce dissonance between models of what is desirable (derivative of prevalent culture) and of the real as given by data. In a parallel construction then, one can envision the aim of philosophy, again broadly conceived, as to reduce dissonance between culture-generated models of the desirable and theory-generated models of the possible. Guetzkow was trying to find a way to make possible and even likely what was desirable, minimizing the likelihood of nuclear holocaust. But he recognized that there was no theory sufficiently developed to guide him. And there was no way to do the basic research on decision making that seemed so necessary. Snyder, Bruck and Sapin's then contemporary text, Foreign Policy Decision Making (1962), for instance, laid out a research agenda that included hundreds of variables in as yet unknown, unquantified, untested relations with one another.
My understanding is that Guetzkow reasoned that if you could create a decision making environment for people who in some sense shared the culture of real decision makers, you could "black box" the decision process, much as Leontief did with economies, by using an S-R like I-O model. You would have inputs to the decision process resembling inputs to referent system ("real world") decisions, and outputs resembling referent system outputs. In a sense, Guetzkow's simulation was a collective "mental experiment" played out by real people in their heads rather than in Guetzkow's head.
Yet, the very construction of the INS simulation went beyond the "mental experiment." Acculturated human beings -- Naval Petty Officers at the Western Behavioral Sciences Institute in La Jolla, diplomats at Airlie House, high school students from the Chicago area, college students at Northwestern University -- were doing things, using language and implements, similar to the real thing. Thus questions could be raised about "verisimilitude," and standards such as "passing the laugh test" ("face validity," as Hermann said) could be put forth, and no one laughed too loudly or too long. The object was to develop a theory that would tell us nuclear war did not have to occur, and then a practical strategy for preventing it. Terms such as "islands of theory," for which Guetzkow (1950) is today widely quoted, grew up. "Islands of theory" embodied the hope that links between decision making, small group dynamics, and systems theory in international relations would eventually be invented, and that those bridges could ultimately yield the desired explanations as to how and why nuclear holocaust could, and perhaps would, be prevented.
That this was an enterprise that hinged on the success of a new philosophy for empirical investigation was perceived by some. The relations between science, philosophy, and praxis or applied policy research, as paradigms, were not carefully examined in those terms, however. Many articles were written trying to come to grips with some of the questions raised above (e.g., Campbell, Raser and Chadwick (1963), Hermann (1963), Chadwick (1972)). These focused on facets of the enterprise that were too limited in scope, such as the comparison of "simulation data" with real world data. Simulation data were treated as if they were generated from an experiment (Brody (1963)). But whether such data were from a simulated experiment or from an actual experiment was ultimately unclear, because the question of whether the phenomena constituting the simulation were representative of the phenomena theorized about, remained unanswered. We were told simply to wait and see if what was predicted would actually occur (Brody, 1963).
To complicate this philosophical and empirical issue, Guetzkow's INS began to be absorbed by representatives of the very subculture it purported to simulate. It was the inspiration for William Coplin's WPS, the World Politics Simulation used at the U.S. Department of State's Foreign Service Institute when Coplin was at Wayne State University. It was the foundation of Charles Elder's WPS II, used at ICAF, the Industrial College of the Armed Forces. It was modified and extended by Paul Smoker in his design of the IPS, the International Processes Simulation (the IPS embodied a major revision of the INS, and served for simulation "experiments" similar to the INS use). And it inspired a "factbook" effort for a simulation project I participated in at System Development Corporation in 1967-68 under a DARPA contract to Gerald Shure.
These developments are precisely what one would expect if a new philosophy were catching hold. INS-type simulations were entering the education programs of the military, intelligence and diplomatic communities. In a small and fleeting way, perhaps, the culture of the real world was changing in response to the new philosophy of simulation. Although there were serious efforts at grounding the INS equations in empirical analysis and tests (Chadwick (1967, 1969, 1972), Smoker (1973), Bremer (1977), and Cobb and Elder (1981), for instance), this work had virtually no impact. Indeed, the major early conclusions were that the INS equations needed to be thoroughly overhauled, but they were nevertheless used for another decade, until Bremer abandoned his SIPER model as a core model for GLOBUS in the early '80s (see below).
Guetzkow himself ceased work on human-computer simulation in favor of the traditional all-computer modeling methodology that had grown up around him during the decades of the INS. The INS began when only mainframes existed and complex calculation was still very tedious; it ended at about the time of the advent of the PC. Today it is possible to redesign INS-style games that are global and interactive via the Internet. The beginnings of such may be found at the University of Hawaii in my international relations simulation, currently on the Web, used in an educational rather than a research or policy development context. In research and policy development contexts, models of decision-making, some Richardson-like, have replaced the human players. In others, the decision processes are represented primarily by parameters analogous to the keys of a Clavinova, a modern analog of the old player piano: the user decides the variations on the tune (policy), but in the absence of an exogenous decision, a default tune (policy) is there to be played. We will now discuss these newer models.
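Before doing so, here is a minimal sketch in Python of the "default tune" idea just described: a policy parameter governed by a default rule unless the user supplies an exogenous decision. The function name and the three-percent-of-GDP rule are hypothetical illustrations, not taken from any of the models discussed.

def defense_spending(gdp, user_decision=None):
    """Return next-period defense spending.

    An exogenous user decision overrides the model; otherwise a default
    "tune" (here, a hypothetical rule of 3% of GDP) is played.
    """
    if user_decision is not None:
        return user_decision   # exogenous policy choice supplied by the user
    return 0.03 * gdp          # default endogenous rule

print(defense_spending(gdp=1000.0))                      # default rule: 30.0
print(defense_spending(gdp=1000.0, user_decision=25.0))  # user override: 25.0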