Reflections on Collaborative Model Building
by
Murray Turoff

Department of Computer and Information Science
New Jersey Institute of Technology
Newark, NJ 07102
Homepage: http://eies.njit.edu/~turoff/
email: turoff@eies.njit.edu

Short paper prepared for Workshop W9, Strategies for Collaborative Modeling and Simulation, at CSCW 96, Boston, November 16.

Objective

Collaborative Model Building has been a real-world activity for quite some time. In the late sixties and early seventies it was augmented by terminals brought into meeting rooms and by direct decision support software for individual contributors. A considerable history and understanding of this area has resulted from these experiences. The goal of this workshop note is to remind people of the understandings that resulted from those experiences and to aid in the avoidance of the "reinvention of the wheel" which has been a rather common practice in the computer industry.

Methods abound

There is a long history of sophisticated techniques and methods that support the development of various types of models. The real world is a "many-body problem," and modeling it can rarely be done exactly for social and industrial systems. As a result, most modeling methods are approximations to the real world, and those facilitating the collaborative modeling effort should have an ethic of understanding the limitations of the methods that are employed.

In the early days of linear programming we developed input formats which allowed individuals with no mathematical understanding of linear programming to develop models. The result was that many nonlinear situations were modeled, often producing misleading decision support information.

One of the early breakthroughs in the use of computers was Forrester's System Dynamics approach to the development of nonlinear feedback models. Few people are aware that Forrester also developed simple procedures and front-end formats to teach this modeling process to public school children and to facilitate the development of models by classes.

If one remembers that the "Club of Rome Model" was a collaborative effort to develop a predictive world model utilizing System Dynamics, one should realize that a modeling effort, for all its specificity and exactness, can produce results that are quite suspect and even controversial.

One of the earliest truly collaborative efforts was John Warfield's (1976) development of Interpretive Structural Modeling (ISM) as an approach to the group modeling of the relationships among goals, subgoals, and objectives. This relied on the ability of the contributors to estimate whether a relationship (0 or 1) existed between two goals. The resulting graph can then be analyzed to develop a linear hierarchy of clusters of nodes from the original graph. Warfield would bring a terminal to meetings and the computer analysis would be supplied as the group worked. Clearly individuals can answer the simple question of whether two items are related, but when one puts together a resulting graph of hundreds of items it is impossible for the average human to make any sense of it. This is why various forms of modeling and network analysis tools (ISM, multidimensional scaling, cluster analysis methods (Anderberg, 1973), cross impact, decision payoff matrices, etc.) can and do play such a key role in the collaborative process.

However, each method has its specific limitations and criteria that govern its appropriateness to a given situation. The ISM method can be demonstrated to be highly sensitive to the existence or nonexistence of a particular link in the graph. Therefore, if there is any controversy among the group about the existence of a single relationship among the hundreds that can exist, the result can vary tremendously, changing the relative positions of most of the objects in the resulting clusters. This further points out the need for incorporating the ability to do sensitivity analysis on any result, so that the individual estimator or the group can test the range of validity of their results.
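As a rough illustration of the kind of analysis ISM automates, the following minimal sketch (a hypothetical implementation for this note, not Warfield's actual software) computes the reachability matrix from the group's 0/1 relationship judgments and then peels the graph into levels using Warfield's criterion:

```python
def reachability(adj):
    """Transitive closure of the 0/1 judgment matrix via
    Warshall's algorithm; every node reaches itself."""
    n = len(adj)
    r = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def level_partition(adj):
    """Peel off levels: a node belongs to the current level when
    its reachability set (within the remaining nodes) is contained
    in its antecedent set -- i.e., reach == reach & antecedent."""
    n = len(adj)
    r = reachability(adj)
    remaining = set(range(n))
    levels = []
    while remaining:
        level = []
        for i in remaining:
            reach = {j for j in remaining if r[i][j]}
            ante = {j for j in remaining if r[j][i]}
            if reach <= ante:
                level.append(i)
        levels.append(sorted(level))
        remaining -= set(level)
    return levels

# Four goals: 0 -> 1 -> 2 and 3 -> 2.
adj = [[0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 0]]
print(level_partition(adj))  # -> [[2], [1, 3], [0]]
```

The sensitivity problem described above can be seen directly with such a sketch: flip a single entry of `adj` and re-run `level_partition` to observe how much the hierarchy rearranges.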

One area that has always emphasized various collaborative model building processes is the Delphi Method (Linstone and Turoff, 1975). Methods such as Cross Impact Analysis (relating one-time events utilizing subjective probability estimates) evolved to function as an integral part of a Delphi process. Cross Impact Analysis has also been used extensively for collaborative model building in face-to-face meetings. The Delphi Method book reviews three different analytical formulations that may be used to support the analysis of inputs for this type of model (papers by Dalkey, Kane, and Turoff). Cross Impact Analysis is very relevant to such issues as relating decisions to consequences without having to make the linearity assumptions of conversion to dollars inherent in the more constrained decision payoff type matrix models.
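The three formulations reviewed in the book differ in their details, but a minimal, hypothetical sketch in the same spirit can show the kind of consistency checking such a process requires: given the panel's marginal probability estimates for two events, the Fréchet inequalities bound the conditional probabilities that are even mathematically possible, so an estimate outside the bounds signals a judgment worth feeding back to the group:

```python
def conditional_bounds(p_a, p_b):
    """Range of P(A|B) consistent with the marginal estimates
    P(A) = p_a and P(B) = p_b (Frechet inequalities on the
    joint probability P(A and B)). Assumes p_b > 0."""
    lo = max(0.0, p_a + p_b - 1.0) / p_b
    hi = min(p_a, p_b) / p_b
    return lo, hi

# Example: P(A) = .3 and P(B) = .8 constrain P(A|B) to [.125, .375],
# so a respondent's estimate of .6 for P(A|B) is internally inconsistent
# with their own marginals.
print(conditional_bounds(0.3, 0.8))
```

This is only an illustrative consistency check, not one of the specific cross-impact formulations by Dalkey, Kane, or Turoff.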

Many of the modeling approaches may be viewed as special cases of the general area of "structural modeling" (Lendaris, 1980; Geoffrion, 1987). The concept of structural modeling is the formulation of model structures that users can utilize to build general-purpose models without having to learn to program. The tutorial-oriented Lendaris article should be required reading for those interested in understanding the model building process.

Consistency Problems

There are two types of consistency problems inherent in the collaborative model building area. The first is the problem of a single estimator being consistent across the totality of the estimates and judgments needed to build their individual view of the world situation. This means one has to provide the individual with whatever decision support tools are required to aid them in arriving at their own consistent model. Until the individual participant has obtained a level of confidence in their own model, it is rather useless and misleading to incorporate their views into a collaborative model.

The second problem is consistency in the estimates made by the respondents. For example, if two people say a certain event is "very likely," does one mean a .7 probability and the other a .9 probability? In the classical area of the analysis of subjective judgments this is the problem of "scaling," and the collaborative model builder has to be very cognizant of what types of scaling methods (Torgerson, 1958) may be needed to ensure mutual understanding among the collaborators. The more multidisciplinary the group, the more likely this is a major concern. Unfortunately, most scaling methods have evolved for one-time survey applications and not for iterative feedback in group communication processes. Considerable adaptation and tailoring of the methods is still required for this application area.

Even when groups are very homogeneous there are major consistency problems. In a Delphi on the future of the steel industry (paper by Goldstein, in the Delphi Method book) involving about 45 planners in the industry, the respondents were given a flow model developed by three experts. This was a model of about 45 flow links, of which only about 15 are regularly measured. The respondents were asked to fill in the missing 30 estimates for the flows in the industry for the prior year. Instead of doing this as asked, about 25 of the respondents redrew the model because they DID NOT AGREE with what the initial three experts had conceived. Too many computer people have an idealized impression that model building of a real world situation should be a straightforward process of collecting known knowledge in that domain.

Arrow’s Paradox Works for Model Building

The fact that no consistent collective judgment function is mathematically possible does not prevent us from utilizing various voting and subjective estimation functions. However, we do know that one of the causes of misguided results due to Arrow's Paradox is the use of averaging to hide and ignore disagreements. Polarized distributions in votes and estimates, and the lack of adequate scaling or anchoring of estimates, are exactly what cause spurious results. As a result, one has to be particularly concerned with the exposure and treatment of disagreement by collaborative groups (Turoff, 1991).
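A simple numerical illustration of how averaging hides disagreement: two hypothetical panels (the data below is invented for this sketch) return the same mean estimate, and only a dispersion measure reveals that one group is in rough agreement while the other is deeply split:

```python
from statistics import mean, median, pstdev

def summarize(estimates):
    """Report spread alongside central tendency so that a
    polarized distribution is exposed rather than averaged away."""
    return {"mean": mean(estimates),
            "median": median(estimates),
            "spread": pstdev(estimates)}

consensus = [0.5, 0.5, 0.55, 0.45, 0.5]   # mild agreement around .5
polarized = [0.1, 0.1, 0.9, 0.9, 0.5]     # two camps, same mean of .5

print(summarize(consensus))
print(summarize(polarized))
```

Feeding back only the means would report identical "group" estimates for both panels; reporting the distribution is what triggers the discussion of disagreement argued for above.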

This means further that the quantitative aspects of model building must be linked to and supported by a foundation of qualitative observations that serve to explain both inconsistencies and consensus. This is one of the reasons why the integration of hypertext functionality (Rao & Turoff, 1990) into collaborative model building holds a great deal of promise for integrating the qualitative and quantitative aspects of model building.

Observations and Ethics

A great many early Delphis (with paper and pencil) were in fact forms of collaborative model building. These varied from simple trend extrapolation and substitution forecasting models, to conceptual relationship models, and on to the more quantitative network-oriented models. Most of these early efforts relied upon some form of graphics to organize the material associated with the solution process of a complex problem. It is only now that we are approaching the point where most of the individuals we seek to involve will have the hardware to deal with graphics. This means that the history of the Delphi area presents a collection of designs for various applications of collaborative model building.

Those of us who have worked with the Delphi method know that one of the consequences of its use is that one can make some fairly good judgments about the relative expertise of the respondents as a result of the process. This is one of the reasons why those who conduct Delphis need to have strong ethical values about maintaining the anonymity of respondents in the compilation of the results. One may utilize various confidence-weighting or scoring methods to combine the individual judgments into an appropriate group result.

When it comes to model building, the potential for misuse is even more severe. There are rather convincing results in the literature pointing out that one can judge the expertise of an individual by the complexity of the model structure that they express (Hopkins, et al., 1987). It has to be clear to the contributors just how their contributions will be used and the degree of confidentiality that will be available to the participants.

Work on collaborative model building should rest on a strong foundation in areas such as structural modeling and scaling. Furthermore, it should take account of prior experience and design work with collaborative modeling in such areas as Delphi and Nominal Group Techniques.

References

Anderberg, M.R.,
Cluster Analysis for Applications, Academic Press, 1973.
Geoffrion, A.M.,
An Introduction to Structured Modeling, Management Science, (33:5), May 1987, 547-588.
Hopkins, R.H., K.B. Cambell and N.S. Peterson,
Representations of Perceived Relations Among the Properties and Variables of a Complex System, IEEE Transactions on Systems, Man and Cybernetics, (SMC-17:1), January/February 1987, 52-60.
Lendaris, G.,
Structural Modeling: A Tutorial Guide, IEEE Transactions on Systems, Man and Cybernetics, (SMC-10:12), December 1980, 807-840.
Linstone, H. and M. Turoff,
The Delphi Method: Techniques and Applications, Addison-Wesley, 1975.
Rao, U., and M. Turoff,
Hypertext Functionality: A Theoretical Framework, International Journal of Human-Computer Interaction, (2:4), 1990, 333-358.
Torgerson, W.S.
Theory and Methods of Scaling, Wiley, 1958.
Turoff, M.,
Computer Mediated Communication Requirements for Group Support, Organizational Computing, (1:1), 1991, 85-113.
Turoff, Murray, & Starr Roxanne Hiltz,
Computer Based Delphi Processes, in Michael Adler and Erio Ziglio, editors., Gazing Into the Oracle: The Delphi Method and Its Application to Social Policy and Public Health, London, Kingsley Publishers, 1995.
Warfield, John N.,
Societal Systems, John Wiley and Sons, 1976.
