III.B. 2. Delphi Inquiries and Knowledge Utilization




I. Introduction

The development of methods to obtain, refine, and communicate the informed judgments of knowledgeable people is one of the most crucial problems in planning and decisionmaking. The task is particularly challenging in the Michigan Sea Grant Program, which emphasizes a systems approach by a multidisciplinary group of researchers. Some of these researchers are experts in extremely specialized areas, representing a wide range of technical, economic, social, legal, and political disciplines.

From its inception the general goal of the Michigan Sea Grant Program1 has been to provide the common management effort necessary to develop and bring to bear university expertise on short- and long-term resources management problems in the Great Lakes. The major approach of the program has been the development of basic information and predictive models for resolution of resource problems, followed by applications and/or demonstrations of such information and models to appropriate agencies and groups. Over 120 research and faculty personnel from practically every major school or college in the university are presently active in the program. Research and planning groups representing federal, state, and local government agencies, industry, and concerned citizen groups are also part of the problem-solving team.

The Grand Traverse Bay watershed region was selected as the focus of pilot efforts to develop research and planning methodologies that will be applicable in dealing with problems and opportunities of all the Great Lakes, and in particular Lake Michigan. As one mechanism for improving the coordination of the Sea Grant effort at the University of Michigan, it was decided to investigate the potential of the Delphi technique.

The Michigan Sea Grant Delphi inquiries2 were designed to obtain and refine an interdisciplinary group of researchers' judgments about issues and developments that should be considered when planning for intelligent management of the water resources of the Great Lakes.

An important objective of the exercises was to convey the judgments of the researchers to the communities which are to benefit from the research. One approach toward this objective was to include on the panels-on the same basis as the researchers-people who were believed to be influential in the political processes through which regional planning is accomplished. Their knowledge of the issues and the region was beneficial to the deliberations, but more importantly, their participation was judged to be an effective way of communicating information to regional planners and decisionmakers.

Two of the three panels were made up of researchers, who were designated as technicians and behaviorists. The third panel was made up of concerned citizens, who were designated as decisionmakers. In addition to forecasting, the method was used in several other roles involving the quantification of subjective judgments. The exercises were designed to be progressive and cumulative, with an emphasis on an orderly development of informed judgments.

The Delphi inquiries were one of several Michigan Sea Grant projects related to the general task of transmitting new knowledge to people and organizations in a way that results in effective use. Respondents in these exercises-a group with exceptional qualifications-served as the primary resource in evaluating the methodology.

The technical panel was composed of thirty-three individuals whose expertise was primarily in the physical sciences and who were divided about equally between Sea Grant researchers and faculty, graduate students, and others in the School of Engineering. A second panel included Sea Grant researchers who were not selected for the technical panel. Generally their academic backgrounds and interests were oriented more to the behavioral sciences, and for this reason they were labeled behaviorists. They represented a wide range of ages, academic disciplines, and university schools and laboratories. Participants for the third panel were randomly selected from groups of Grand Traverse Bay area residents believed to be influential in the following fields: civics, business, planning, politics, natural resources, government, education.

The names associated with the panels, although somewhat arbitrary, are reasonably consistent with the roles each group would be expected to play in planning the management of regional water resources. The technical panel operated independently of the other two panels and its output was fed into the deliberations of two broader-based panels, which operated independently in the earlier rounds and as a combined panel in the final round. The nature of their participation is summarized in Table 1.

In order to provide continuity, a person's judgments on the previous round were used whenever he or she could not respond on a particular round. Several significant modifications and refinements in the basic Delphi methodology were tested in the Michigan Delphi inquiries. These changes were motivated by the perceived threat of a manipulated consensus, the desire for constrained or conditional judgments, and recognition of desirable aspects of interpersonal methods not obtainable using the Delphi technique exclusively. The concept of informed judgments as contrasted with

Table 1
Participation in Michigan's Sea Grant Delphi Probes
(counts by panel not reproduced)

Technical Panelists
  Contact established with Delphi administrator
  Unavailable after the start of the Delphi probe
  Written comments and evaluations made on at least one round
  Written comments and evaluations made on 3 or more rounds
  Written comments and evaluations on final round
  Written evaluation of methodology or evaluation interview

expert opinion provided the rationale for the inclusion of politicians and concerned citizens on the panels; it also provided an opportunity to exploit an inherent characteristic of the method-to inform during the process of soliciting judgments.
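The continuity rule described earlier (substituting a panelist's previous-round judgments when he cannot respond on a round) can be sketched as follows; the panelist names and date estimates are hypothetical:

```python
def carry_forward(prev_round, curr_round):
    """Fill gaps in the current round with each panelist's
    previous-round judgments, preserving continuity."""
    merged = dict(prev_round)          # start from last round's judgments
    merged.update(curr_round)          # overwrite with any new responses
    return merged

round2 = {"A": 1978, "B": 1980, "C": 1983}
round3 = {"A": 1979, "C": 1982}        # panelist B did not respond
print(carry_forward(round2, round3))   # → {'A': 1979, 'B': 1980, 'C': 1982}
```

Panelist B's 1980 estimate is carried into round three unchanged, so the group summaries always reflect a full panel.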

II. Outline of the Procedures: Social, Political, and Economic Trends

The portion of the Delphi inquiry concerned with social, political, and economic trends was designed to provide respondents on the broader-based panels with some basic reference points in making subsequent judgments regarding future social and technical developments.

The information package for round one presented the trends for eight measures which have commonly been used to indicate the social and economic development of a region. Curves were plotted from 1950 to 1970, taking advantage of the 1970 census and the standardized enumeration procedures of the Bureau of the Census. Panel members were asked to extend the curves through 1990 and to indicate the numerical values for 1980 and 1990 [2].

In the second round, curves representing the medians and interquartile ranges were provided for the panelists, as well as pertinent comments submitted by respondents on the previous round. Panelists were asked to reconsider their estimates, and if any of the new estimates were outside the designated consensus range for the previous round they were asked to support their position briefly.

On this round the graphs of three additional statistical measures were introduced for consideration. A cumulative summary of the group response was provided in the information package for round three to serve as background information for other panel deliberations.

Important Developments and Requisite Technology

The Delphi method has had its greatest application and acceptance as a means of compiling a list of future technical events or developments and collecting subjective judgments regarding them. In the Michigan inquiries social, political, and economic developments were also solicited and evaluated so that panelists would be encouraged to consider all environments in making judgments regarding water quality, waste-water treatment systems, and research priorities.

The initial evaluation matrix for the technical panel did not present a list of potential developments, something which is usually done in order to facilitate participation and generate additional items. It was believed that this unstructured approach would result in a wider range of suggestions; however, the information feedback of the second round did include-in addition to items suggested by respondents-thirteen events that were taken from Delphi exercises conducted at Rand and the Institute for the Future. These events covered areas considered by the researcher to be of interest to the panel and were also good examples of how developments should be specified to avoid ambiguity, particularly with respect to occurrence or nonoccurrence.

The evaluation matrix for the third round provided the respondent with his estimates for the second round and a summary of the group's response. Comments submitted by respondents were also provided, as were the median estimates for technical and economic feasibility if they differed significantly. The matrix was designed so that a panel member could easily determine if his reassessed estimates for a specific development were outside the group's consensus range-arbitrarily identified as the medians of the group's 25 percent and 75 percent estimates. If a respondent's latest estimate was outside the consensus range for the previous round he was asked to support this "extreme" position briefly.
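The consensus test can be sketched as follows. Each respondent supplies a (25%, 50%, 75%) triple of probability dates, and the consensus range runs from the median of the 25 percent dates to the median of the 75 percent dates; the panel data here are hypothetical:

```python
import statistics

def consensus_range(estimates):
    """estimates: list of (p25, p50, p75) date triples, one per respondent.
    The consensus range spans the median 25%-date to the median 75%-date."""
    lo = statistics.median(e[0] for e in estimates)
    hi = statistics.median(e[2] for e in estimates)
    return lo, hi

def outside_consensus(p50, estimates):
    """Flag a respondent whose latest even-chance date falls outside
    the previous round's consensus range."""
    lo, hi = consensus_range(estimates)
    return p50 < lo or p50 > hi

panel = [(1975, 1977, 1980), (1974, 1976, 1979), (1976, 1978, 1982)]
print(consensus_range(panel))           # (1975, 1980)
print(outside_consensus(1985, panel))   # True: asked to defend the position
```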

The evaluation matrix for the fourth round presented a more comprehensive summary of the previous round than had been provided up to this point in the exercises. Statistical summaries were presented not only for all the respondents but also for those who rated their competence relatively high and for those in the latter group who indicated a familiarity with the Grand Traverse Bay area. In addition, the persons arguing for an earlier or later probability date than that indicated as the consensus were identified by a number which correlated to a list of biographical sketches.

On the final round of the technical-panel exercises, respondents were also asked to make specific conditional probability estimates for pairs of events that panel members had suggested were closely related. First they were to consider the effects of the occurrence of the conditioning event and then the effects of the nonoccurrence of the conditioning event (see Fig. 1). One of the objectives of this procedure was to encourage panelists to reexamine their estimates for individual events in the light of the influence and probabilities of related events. Analysis of all individual responses reveals that a relatively high percentage of respondents altered their final estimates for those developments included in the set of events which was subjected to conditional probability assessments. Since this was the third iteration of feedback and reassessment for many of these developments, it is not unreasonable to assume that the change in estimates primarily resulted from the evaluation of relationships among events-relationships which previously had not been fully considered. This assumption is further supported by the fact that these respondents made almost no changes in their estimates of other developments, which were not subjected to the specific routine of estimating conditional probabilities (but were given the benefit of the feedback of all of the other types of information used in these exercises). In view of the fact that the relationships among events were stressed throughout these exercises, any movement in the final estimates as a result of the consideration of specific conditioning effects is believed to be significant.

Developments and Events that Respondents Have Suggested Are Interrelated

D-32 Requirement by the state, calling for tertiary treatment of municipal sewage for Traverse City

                                      Probability of          50% Probability
                                      Occurrence, 1971-80     Date
  Your previous estimates             ____                    ____
  Panel estimates, round 3            75 (50-85)*             1977 (1975-80)
  Those who rated competence >= 3     83 (62-95)              1978 (1975-80)
  Your next estimates for D-32        ____                    ____

D-31 Construction of a spray irrigation system for waste water disposal in the Grand Traverse Bay region

  Your previous estimates             ____                    ____
  Panel estimates, round 3            50 (50-50)              1980 (1980-90)
  Those who rated competence >= 3     50 (15-50)              1980 (1978-80)
  Your next estimates for D-31        ____                    ____

If you were certain that D-32 would occur before 1980, your estimates for D-31 would be:  ____  ____
If you were certain that D-32 would not occur in 1971-1980, your estimates for D-31 would be:  ____  ____

* Interquartile range

Fig. 1. Example of interrelated developments.

An analysis of the estimates of the technical panel showed that some respondents appeared to have considerable difficulty making probability estimates both for a fixed period (1971-80) and for fixed levels of probability (25, 50, and 75 percent). In some cases inconsistent estimates were made (for example, the probability of occurrence during 1971-80 was estimated to be greater than 50 percent, but the year associated with a 50 percent probability was later than 1980).
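A simple screen for the kind of inconsistency described above can be sketched as follows; the decision rule is inferred from the example in the text:

```python
def consistent(p_decade, year_50pct, horizon=1980):
    """Check that a respondent's probability of occurrence during
    1971-80 agrees with his 50% probability date.  year_50pct may be
    None, meaning 'Later' (beyond the last year on the scale)."""
    if p_decade > 50:                  # more likely than not by 1980 ...
        return year_50pct is not None and year_50pct <= horizon
    if p_decade < 50:                  # ... or less likely than not
        return year_50pct is None or year_50pct > horizon
    return True                        # exactly 50% matches the horizon

print(consistent(70, 1984))   # False: the inconsistency described above
print(consistent(70, 1977))   # True
```

Such a check could have been run on each returned matrix to prompt the respondent for a correction before the estimates were aggregated.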

Fixed probabilities of 25, 50, and 75 percent were selected for personal probability assessments by the broader-based panelists for several reasons:

  1. There was strong agreement among the three groups involved in the exercises-technicians, behaviorists, and decisionmakers-on the words and phrases that they associated with the numerical probabilities of 25, 50, and 75 percent [3].
  2. Individual distributions provided the decisionmakers with more information than single probability estimates and were believed to be helpful to the estimator in making assessments that were consistent with his judgment [4].
  3. The 25, 50, and 75 percent levels of probability were ideal for using a betting rationale, that is, systematically dividing the future into equally attractive segments.
  4. It was believed that group medians associated with these fixed probabilities would provide an easily identifiable consensus range.

Since it was likely that many of the decisionmakers would have had little experience with the notion of personal probabilities, a guide for making personal estimates of probability was sent to all members of the broad panels, researchers as well as decisionmakers. The guide presented a systematic method for arriving at the timing estimates for each technical and social development. The assessor was asked to visualize a movable pointer below a sequence of numbers representing years, as in the diagram below. He was asked to move the pointer mentally so as to divide the future into two periods in which the development was equally likely to occur.

1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 Later
                                                           ^
                                                    50% Probability

If the result appeared as it does in the diagram above, 1983 should be entered as the 50 percent probability date. It could also be described as the "1-to-1" odds or "even chance" date. If the pointer came to rest beyond 1990, "Later" would be recorded, and the assessor would go on to consider the next development. The assessor was then instructed to subdivide the two resulting periods in the same way to arrive at the "3-to-1" and "1-to-3" odds dates (the 75 and 25 percent probability dates).
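The pointer procedure amounts to reading quartile dates off the assessor's subjective cumulative distribution. A minimal sketch, assuming the distribution is available as a function giving the cumulative probability of occurrence by each year (the assessor's distribution here is hypothetical):

```python
def quantile_year(cdf, p, first=1971, last=1990):
    """Return the earliest year by which the subjective cumulative
    probability cdf(year) reaches p, or None for 'Later'."""
    for year in range(first, last + 1):
        if cdf(year) >= p:
            return year
    return None                        # pointer rests beyond 1990: 'Later'

# Hypothetical assessor: probability accumulates 5 points a year from 1971.
cdf = lambda year: min(1.0, 0.05 * (year - 1970))

# "1-to-3", even-chance, and "3-to-1" odds dates (25, 50, 75 percent).
print([quantile_year(cdf, p) for p in (0.25, 0.50, 0.75)])   # [1975, 1980, 1985]
```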

Because of the interest in technology transfer and knowledge utilization in Michigan's Delphi inquiries, there was a special interest in the judgment patterns of the technicians and decisionmakers, which were displayed as in Fig. 2. For each round the panel medians (connected by a solid line) and the interquartile ranges (connected by dashed lines) were shown. The rounds were numbered from left to right for the researchers and from right to left for the decisionmakers to facilitate the comparisons. The average judgments of respondents in each group who rated their competence in the area being considered relatively high were indicated by asterisks. For most items, each group's median estimate for the final round was very close to the median estimates of those who considered themselves relatively competent in the subject. Also, the consensus (as measured by the interquartile range) narrowed, and the average estimates of the two groups tended to come closer together. Some of the other patterns, while not ideal from the standpoint of movement toward a narrower consensus, provided a decisionmaker with information as to a course of further inquiry.

Sources of Pollution

A crucial consideration in planning for intelligent management of water resources is the identification of the most important sources of pollution. In making their judgments, panelists were asked to assume a future social and political environment consistent with present trends. However, it was expected that concurrent Delphi inquiries regarding important developments and requisite technology would influence their estimates.

On the first round the technical panel was provided with a list of sources of pollution and specific pollutants thought to be important. Panelists were requested to add other items that they felt would affect a body of water comparable to Grand Traverse Bay in the next twenty years. The collated responses identified seventeen additional sources of pollution and eighteen additional pollutants for the panel to consider. Since there were too many alternatives to present in a matrix designed to encourage the careful consideration of several evaluation factors, the primary objective of round two was to narrow the number of alternatives. The evaluation matrix of round three presented the ten most important sources of pollution as determined by a statistical summary of the estimates made in round two. Panelists were asked to distribute 100 points among the sources of pollution, according to each one's relative importance, for two future periods. The information feedback for the following round provided statistical summaries for Group A, all respondents; Group B, those who rated their competence on sources of pollution relatively high; and Group C, respondents in Group B who were also relatively familiar with the Grand Traverse Bay watershed area. Although Group B differed considerably in size from Group C, the average estimates of the two were remarkably close. This finding might suggest that technical competence is a more important requisite for panel membership than familiarity with a specific region, an idea that could have important implications for interdisciplinary programs such as Sea Grant, in which research methodologies developed for a subregion are to be applied to a larger socioeconomic system.
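The group summaries can be sketched as filtered averages of the 100-point allocations; the respondent names, pollution sources, and point allocations below are hypothetical:

```python
def group_means(responses, members):
    """Average the 100-point allocations of a subset of respondents.
    responses: {name: {source: points}}; members: names in the subgroup."""
    sources = next(iter(responses.values())).keys()
    n = len(members)
    return {s: sum(responses[m][s] for m in members) / n for s in sources}

# Hypothetical allocations over three pollution sources (rows sum to 100).
resp = {
    "r1": {"sewage": 50, "runoff": 30, "industry": 20},
    "r2": {"sewage": 40, "runoff": 40, "industry": 20},
    "r3": {"sewage": 60, "runoff": 20, "industry": 20},
}
group_a = group_means(resp, ["r1", "r2", "r3"])   # all respondents
group_b = group_means(resp, ["r1", "r3"])          # high self-rated competence
print(group_a["sewage"], group_b["sewage"])        # 50.0 55.0
```

Groups B and C in the text are simply successive restrictions of the membership list (high competence, then high competence plus regional familiarity).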

In the broad panel exercises the evaluation matrix for round two was similar to the final matrix used in the technical panel. The evaluation matrix for the following round provided statistical summaries of the estimates of both the technical panel and the broader-based panels (Fig. 3).

A significant difference in the final estimates regarding the relative importance of the effluent from the Traverse City sewage system suggests that a series of estimates conditional on specific social, political, or technical developments could be used to determine the assumptions on which the evaluators based their estimates. The reason for the differences in estimates could also be sought through interviews and other means of communication.

Recommended Waste-Water Treatment and Disposal Systems

Many communities in the Great Lakes basin are confronted with decisions on waste-water treatment and disposal systems that will have important consequences for the future socioeconomic development of their region. This is a highly technical and complex issue, and decisionmakers must intuitively assess the judgments of experts in many specialized areas.

A systematic consideration of the available alternatives and the identification of areas of agreement and disagreement within and between the three general groups involved in these exercises may aid planners from this region as well as those from many other communities in the Great Lakes region facing similar problems and decisions.

Included in the technical panel's round-three information package was an evaluation matrix that listed six alternative waste-water treatment and disposal systems. Panel members were asked to suggest other alternatives and to evaluate each of them in terms of two different starting dates for the construction of the necessary facilities. Variances in the estimates were to be attributed to assumptions about the technology that would be available at the two starting dates. Panel members were instructed to give 100 points to their first choice for each time period and a portion of 100 points to the remaining alternatives according to their value relative to the first choice.
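A small validity check for this scoring scheme can be sketched as follows; the alternative names and point values are hypothetical:

```python
def valid_allocation(scores):
    """The first choice must receive exactly 100 points; every other
    alternative a value from 0 to 100 relative to that first choice."""
    values = scores.values()
    return max(values) == 100 and all(0 <= v <= 100 for v in values)

eval_1985 = {"spray irrigation": 100, "tertiary plant": 80, "lagoon": 35}
print(valid_allocation(eval_1985))   # True
```

Unlike a forced 100-point distribution, this scheme leaves each alternative's score interpretable as a ratio against the respondent's first choice.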

The round-four information package provided panel members with a summary of the estimates made in the third round. The evaluation matrix for that round (Fig. 4) requested two evaluations for the six alternative waste-water treatment and disposal systems for two different starting dates. In the first evaluation the respondents were asked to consider all factors, in particular the technology available at the start of construction; in the second evaluation they were to consider only ten-year operating costs.

The broad panels used the same evaluation matrix as the technical panel in their final round of estimates, and they were also given a summary of the results of the technical panel's evaluation of all factors except for cost estimates. The broad panelists were advised that the technical panel probably emphasized technical factors in making their estimates. They were also told that the recommendations applied to a region similar to the Grand Traverse Bay area and could differ significantly if the technical panel had considered a specific situation.

A comparison of the average estimates of those on the technical panel who rated their competence relatively high with the average estimates of the respondents on the broad panels showed a very close agreement for both planning periods. This agreement was evident when panelists considered all factors and also when they considered ten-year operating costs, although the values assigned to each alternative relative to operating costs varied considerably from the values assigned when all factors were considered.

The judgments of the technical experts are believed to embody risk considerations applied to a general situation, whereas the judgments of the broad panels are thought to be more oriented to the benefits of alternative approaches for their specific region. Cost estimates include operating costs only; the consideration of investment costs and financing methods could be equally important to the decisionmaker.

The waste-water treatment and disposal system issue was undertaken primarily to educate the participants and to explore the problem of gathering a representative group of people and interesting them in the problem. The results could provide important material for gaming techniques and background information for deliberations using a variety of methods of information exchange and analysis.

Regional Opportunities, Problems, and Planning Strategies

A Delphi methodology was used to generate and evaluate suggestions regarding regional opportunities, problems, and planning strategies. In terms of a Delphi methodology the group summaries represent initial individual judgments: the items were suggested on one round and evaluated on a subsequent round, but were not subjected to iterative cycles of reassessment based on statistical feedback. However, many of the assessments were influenced by prior consideration of the following in other phases of the Delphi exercises:

  1. the trends of statistical measures which have traditionally been used to describe social and economic development;
  2. the probabilities and importance associated with potential technical, social, economic, and political developments;
  3. the relative importance of future sources of pollution; and
  4. alternative waste-water treatment and disposal systems.

On the final round a list of suggestions regarding opportunities, problems, and planning strategies was presented to the broader-based panels. Panel members were asked to indicate, on a six-point scale with associated descriptive words as shown in Fig. 5, whether an individual item should be singled out for special consideration by regional planners. In interpreting the group means, a value of 3.5 was viewed as the neutral point; the boundaries for the descriptive phrases are as shown.

    1            2            3            4            5            6
strongly     disagree     disagree      agree        agree       strongly
disagree                  somewhat     somewhat                   agree
       1.5          2.5          3.5          4.5          5.5

Fig. 5. Scale for singling out an individual item for special consideration.
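Mapping a group mean to its descriptive phrase is a lookup against the boundary values; the two intermediate labels "disagree" and "agree" are assumed by symmetry, since the text gives only the endpoints, "disagree somewhat," and "agree somewhat":

```python
import bisect

# Boundaries between the descriptive phrases on the six-point scale;
# 3.5 is the neutral point between "disagree somewhat" and "agree somewhat".
BOUNDS = [1.5, 2.5, 3.5, 4.5, 5.5]
PHRASES = ["strongly disagree", "disagree", "disagree somewhat",
           "agree somewhat", "agree", "strongly agree"]

def describe(mean):
    """Map a group mean on the 1-6 scale to its descriptive phrase."""
    return PHRASES[bisect.bisect_left(BOUNDS, mean)]

print(describe(4.0))   # agree somewhat
print(describe(5.6))   # strongly agree
```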

Although the primary interest in these exercises was to identify areas of disagreement and the underlying reasons for them, a Delphi inquiry provides an accounting of the complete set of items that was considered by the respondents-an important concept when an interdisciplinary team of researchers is involved.

III. Evaluation of the Methodology

There is far from universal agreement on the merits of the Delphi techniques. Rand believes that Delphi marks the beginning of a whole new field of research, which it labels "opinion technology"[5]. However, a paper presented at the joint statistical meetings of the American Statistical Association in August 1971 described the Delphi techniques as the antithesis of scientific forecasting and of questionable practical credibility [6].

According to a recent Wall Street Journal article, the Delphi technique is gaining rather widespread use in technological forecasting and corporate planning, although the same article cautions:

It's easy enough to see the shortcomings of the Delphi procedure; it's much harder to rectify them, as many are struggling to do. Remedial work must be done if the method is to be used in good conscience [7].

The Sea Grant Delphi exercises offered an exceptional opportunity for a critical evaluation of the Delphi techniques in an operational environment. The panelists-the main resource in evaluating the methodology-were interested in the improvement of techniques to integrate the judgments of a multidisciplinary research team and to convey its informed insights to society. Their evaluations were not biased by a strong emotional involvement in the success of the Delphi exercises, as has been true with many of the individual assessments of the method that have been published. In terms of both program budgeting and demands on researchers' time, the Delphi exercises competed with a wide variety of other methods for securing and disseminating information.

The primary instrument in evaluating the effectiveness of the method and its potential in other applications was a formal questionnaire. It was developed almost entirely by the respondents themselves, using the Delphi technique of feeding back collated individual suggestions to generate additional suggestions. This procedure somewhat reduces the vulnerability of the questionnaire to the biases and shortcomings of the investigator. The six-point scale and associated descriptive words shown in Fig. 5 were used to quantify degrees of agreement and disagreement. To supplement the formal questionnaire, over thirty-five interviews with panelists were conducted.

Summaries were made for the three general groups participating in the Sea Grant Delphi exercises: technicians (Group I), behaviorists (Group II), and decisionmakers (Group III). For some issues the summaries for technical panelists under forty years of age and panelists with previous experience with the Delphi method were shown. Using the sample results, tests of significance were made to test the hypothesis that the distributions of the judgments of the Delphi method are homogeneous across the groups (the test procedure was based on the chi-square test statistic) and to test the null hypothesis that the means of the judgments of the populations represented by the groups are identical (based on analysis of variance and the F-test). The results of these tests were used to identify basic differences in the judgments made by different groups, each of which had been formed on the basis of similar backgrounds and experiences. Their evaluations provided evidence that the method is effective not only in its designed role but in two other roles that are important and challenging from a management standpoint: encouraging greater involvement and facilitating communication between researchers and decisionmakers. The evaluations also showed that among the carefully selected samples of people the techniques were more highly regarded among groups which were formed on the basis of broad ranges in training and experience than among technicians-the group most administrators of the techniques have focused on.
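The one-way analysis of variance behind the F-test can be sketched in a few lines; the six-point-scale judgments below are hypothetical, and the descriptive level of significance would still be read from F tables:

```python
def f_statistic(groups):
    """One-way analysis-of-variance F statistic for testing whether
    several groups of scale judgments share a common mean."""
    k = len(groups)                                  # number of groups
    n = sum(len(g) for g in groups)                  # total observations
    grand = sum(sum(g) for g in groups) / n          # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical six-point-scale judgments from three small groups.
technicians = [3, 4, 4, 5, 2]
behaviorists = [5, 4, 5, 5]
decisionmakers = [5, 6, 5, 6]
print(round(f_statistic([technicians, behaviorists, decisionmakers]), 2))  # 5.96
```

A large F indicates that between-group spread dominates within-group spread, i.e., that the groups' mean judgments genuinely differ.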

The reliability of the method was demonstrated by the fact that the performance of the respondents, as measured by group statistical summaries, was similar for the three groups. Respondents from all three groups were generally willing to suggest future developments, sources of pollution, and research priorities; to utilize scaled descriptors to quantify subjective judgments; to accept a statistical aggregation of weights supplied by a group; and to reassess their judgments on the basis of feedback of information supplied by the group.

Some insight into the nature of the difference between judgments based on panelists' experience in the Sea Grant Delphi inquiries and the panelists' conception of an ideal application of the Delphi techniques can be gained by examining a cumulative summary of the evaluation of the effectiveness of the method in three specific roles shown in Table 2.

Table 2
Comparison of Evaluations Based on Experience in the Sea Grant
Exercises with Evaluations Based on the Potential of the Delphi Method

1: strongly disagree.
6: strongly agree.

The greatest difference in judgments based on experience and potential was found in the behaviorists' group, and the least among the decisionmakers. On the basis of the average judgments for all respondents the Sea Grant Delphi exercises corresponded rather closely to the panelists' conception of an ideal treatment, and there is a similar spread in the judgments of the subgroup which had previous experience with the method.

Table 3
Effectiveness of Delphi in Obtaining, Combining, and Displaying
the Opinions of Informed People

1: strongly disagree.
6: strongly agree.

A summary of the evaluation of the effectiveness of the Delphi method in obtaining, combining, and displaying the opinions of informed people is shown in Table 3. It indicates that the technicians, on the average, agreed somewhat that the method was effective compared to alternative methods. However, there was considerable dispersion in their estimates; some respondents strongly disagreed that the method was effective in this role. The behaviorists agreed that it was effective in the role, and the decisionmakers displayed strong agreement. Dispersion in the estimates of the two latter groups was much less than it was in the technicians' estimates. An analysis of variance on these data gives a value of 4.6233 for the F statistic. The probability of obtaining an F value larger than 4.6233 when the groups are identical is .0165. A chi-square analysis gives a value for chi-square of 8.3588. The probability of obtaining a larger value, when in fact the distributions come from a homogeneous population, is .2130-the descriptive level of significance. Combining the judgments of the decisionmakers and the behaviorists resulted in P(chi-square > 9.165) = .0025.

Evaluation of the method's effectiveness in encouraging greater involvement in Sea Grant activities provided similar results. The average judgments of effectiveness were XT = 4.0, XB = 4.7, and XDM = 5.1, and the dispersion in the estimates of the technicians was much greater than for the other two groups. The descriptive levels of significance are .0026 based on the F distribution, and .0099 based on the chi-square distribution.

Average judgments regarding the method's effectiveness in communicating information to regional planners and decisionmakers were X̄T = 4.1, X̄B = 5.4, and X̄DM = 5.3. The judgments of the behaviorists were exceptionally high and may reflect this group's special concern for the psychological and sociological barriers associated with alternative methods. The descriptive level of significance is .0003 based on the F distribution, and .0108 based on the chi-square distribution.

For all three roles there appears to be little difference in the average estimates aggregated according to the respondent's age and the average estimates of all respondents. However, respondents with previous Delphi experience showed substantially higher average estimates of the method's effectiveness in obtaining, combining, and displaying subjective judgments and in encouraging involvement.

Average judgments regarding specific applications that are appropriate for a Delphi methodology are shown in Table 4. The support of specific applications is generally strongest among the decisionmakers and weakest among the technicians. Only the behaviorists and decisionmakers evaluated the method as an aid in decisionmaking, and both groups supported its use in this role.

In the evaluation of positive and negative aspects of the method that had been suggested by respondents, the panels agreed that there should be more emphasis on the idea that an expert should "not feel obligated to express an opinion on every issue." However, the Sea Grant Delphi inquiries stressed the concepts of a systems approach and multidisciplinary teams; it was therefore desired that each respondent consider all items and attempt estimates on those with which he had some familiarity. A self-evaluation index was provided so that a panelist could assess his competence regarding each item. The competence index was a control factor in developing the statistical summaries that were part of the information feedback. This procedure allows an informed person to offer judgments, such as those of relative importance and desirability, that he can make without being an expert in the area, and it gives the administrator additional assurance that panelists considered items outside their specialized areas. In addition, there was some interest in comparing the estimates of experts and nonexperts on specific issues.
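The competence-index control described above can be illustrated with a small sketch. The estimates, the four-point competence scale, and the weighting scheme below are all hypothetical, not the Sea Grant instruments; they show one way a self-rated index can qualify a group summary and support the expert/nonexpert comparison mentioned in the text.

```python
# Each respondent supplies (estimate, self-rated competence), where the
# competence index runs 1 (casually acquainted) .. 4 (expert) -- an assumed
# scale for illustration only.
responses = [
    (0.6, 4), (0.7, 4), (0.5, 3), (0.9, 1), (0.4, 2), (0.8, 1),
]

estimates = [e for e, _ in responses]
simple_mean = sum(estimates) / len(estimates)

# Competence-weighted mean: self-rated experts count for more in the summary.
weighted_mean = sum(e * c for e, c in responses) / sum(c for _, c in responses)

# Experts (competence >= 3) vs. nonexperts, for the comparison noted in the text.
experts    = [e for e, c in responses if c >= 3]
nonexperts = [e for e, c in responses if c < 3]

print(f"all respondents:     {simple_mean:.3f}")
print(f"competence-weighted: {weighted_mean:.3f}")
print(f"experts: {sum(experts)/len(experts):.3f}, "
      f"nonexperts: {sum(nonexperts)/len(nonexperts):.3f}")
```

Reporting both the unweighted and the competence-controlled summaries lets the administrator see at a glance whether the self-rated experts diverge from the rest of the panel on a given item.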

The suggestion that the method "can result in a manipulated and arbitrary consensus" received a neutral judgment from all three groups, perhaps indicating that the respondents felt this danger to be no greater than it would be in alternative techniques for securing group judgments. However, it is this administrator's opinion that the Delphi technique could be a powerful tool for manipulating opinion and policy [8].

Table 4
Suitable Applications for a Delphi Methodology

IV. Summary of Findings and Recommendations

The design and administration of a Delphi exercise in which the concept of a multidisciplinary team and a systems approach is desired can best be handled as a project within a professional research organization. The scope of the exercise is generally determined by the respondents, and, as interesting and unexpected issues are suggested, flexibility is needed in designing evaluation matrices and in determining the composition of the panels. Experts knowledgeable in specialized areas should be available on an ad hoc basis to formulate questions and collate responses in order to minimize redundancy and ambiguity. The demand for their services in the course of a Delphi exercise is very uneven, as is the need for designing, editing, typing, and distribution services. There are significant start-up and learning costs associated with the Delphi technique that can be justified only if the technique will become a routine management tool to be used on a continuing basis. This is particularly true if the benefits of computer processing are to be realized.

The following are some general observations that are consistent with the items suggested and evaluated by respondents in the Sea Grant exercises and with the information gained in personal interviews with panelists.

  • Respondents will be more receptive if the techniques are tailored to specific groups on the basis of their training and experience.
  • The administrator should consistently emphasize the distinction between the characteristics of a Delphi interrogation and those of conventional questionnaires and polls.
  • Panelists, particularly those with technical backgrounds, must be convinced that judgments often have to be made about issues before all facets of the problems have been researched and analyzed to the extent they would like. (For these situations they must be persuaded that their subjective judgments may be a decisionmaker's most valuable source of information.)

There are several procedural recommendations that may be helpful to designers and administrators of future Delphi exercises.

  • Interpersonal techniques, such as interviews and seminars, should be interspersed with the rounds of questionnaires and information feedback.
  • The source of a suggested item should be identified (for example, panel member number and basic biographical information), taking care not to compromise the anonymity of specific inputs.
  • Standardized scaled measures should be available to a respondent so that he can qualify his response to specific questions. Such measures include relative competence in a technical area, familiarity with a geographical region, and confidence in an estimate.
  • If a multidisciplinary approach is desired, respondents should be encouraged to consider all items but to make estimates only on those scaled descriptive phrases with which they feel comfortable. For example, in these exercises it was helpful when respondents indicated their familiarity with a specialized area or the importance of an item even though they did not make probability estimates.
  • The panelists should decide through their suggestions and evaluations what items should be considered. The criteria for retaining an item for further evaluation should be made clear at the outset of the exercise.
  • Personal comments and arguments submitted by respondents should be part of the information feedback.
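Several of the recommendations above (identified sources, scaled qualifiers, and comments included in the feedback) amount to a data structure for a Delphi item. The following is a minimal sketch under assumed field names, not a reconstruction of the actual Sea Grant instruments:

```python
from dataclasses import dataclass, field
from statistics import median

@dataclass
class Estimate:
    value: float     # the panelist's probability estimate
    competence: int  # self-rated competence index, 1 (low) .. 4 (high)

@dataclass
class Item:
    text: str
    source: str                      # e.g., panel member number and background
    estimates: list = field(default_factory=list)
    comments: list = field(default_factory=list)

    def feedback(self, min_competence=1):
        """Group summary fed back on the next round, controlled for competence."""
        vals = [e.value for e in self.estimates if e.competence >= min_competence]
        return {
            "n": len(vals),
            "median": median(vals) if vals else None,
            "comments": list(self.comments),
        }

# Hypothetical item and responses, for illustration only.
item = Item("Region adopts a unified waste-water treatment plan by 1990",
            source="panel member 17, regional planner")
item.estimates += [Estimate(0.4, 4), Estimate(0.7, 2), Estimate(0.5, 3)]
item.comments.append("Depends on state funding levels.")

print(item.feedback())                  # summary over all respondents
print(item.feedback(min_competence=3))  # summary over self-rated experts only
```

Because the source and the comments travel with the item, the feedback package can identify where an item came from without attributing any individual estimate, preserving the anonymity of specific inputs.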

The Delphi inquiries have complemented the Michigan Sea Grant gaming-simulation activities by providing the following types of inputs:

  • Data which can be helpful in describing social, economic, and political forces affecting the region's development during the next twenty years.
  • Regional planning strategies, listed in order of preference for both university researchers and regional planners.
  • Problems and issues which provide the link between the simulated regional area and a set of decision roles which are gamed [9].

Integration of a Delphi methodology with the Michigan Sea Grant gaming-simulation exercises will give them a more dynamic aspect and provide greater motivation for the participants. Some particularly interesting applications would be: cost-benefit analyses similar to those used in the Delphi inquiries to evaluate waste-water treatment and disposal systems; selection of research projects through an evaluation of effectiveness in terms of basic objectives and the risk factors associated with various levels of funding; and the development of alternative scenarios for a region such as the Grand Traverse Bay watershed area.

According to Michael [10], Delphi inquiries and gaming-simulation exercises are techniques for introducing customers, in a nonthreatening way, to a more complex way of thinking and a better way of perceiving their needs in terms of the kind of knowledge we have. Knowledge utilization depends upon discovering the nature of the awareness of the problem among potential customers, both as a set of variables and as a system of interrelationships; getting new knowledge absorbed by individuals and then by the organizations these individuals are in; and developing a capability and inclination to plan rather than employing an ad lib approach.

Concerned citizens were included as panelists in the Delphi inquiries not only for the purpose of informing them but also to accord the other panel members the benefits of the citizens' knowledge of the area, to take into account political and institutional considerations, and to communicate findings in such a way that the acceptance and implementation of policies and actions on which there appeared to be a reasonable consensus would be encouraged. The behavioral sciences provide support for this type of approach in effective communication [11,12].

The Michigan Delphi inquiries have provided some carefully formulated judgments of a multidisciplinary team of researchers and potential users of research data regarding: the importance and effects of technical, social, economic, and political developments; sources of pollution and recommended waste-water treatment and disposal systems; and regional opportunities, problems, and planning strategies. More important, a critical evaluation of the method has shown the potential of a Delphi inquiry for improving the dialogue between researchers and regional problem solvers. It has also provided empirical evidence to support further investigation of several innovations which may bring the methodology closer to Jantsch's idealized concept of a forecasting technique, wherein exploratory and normative components are joined in an iterative cycle in which informed citizens can work with researchers in planning for the future [13].


  1. Murray Turoff, "Delphi and Its Potential Impact on Information Systems," Paper 81, Proceedings of the Fall Joint Computer Conference, Vol. 39. AFIPS Press, Washington, D. C., November 1971.
  2. The techniques and procedures used in this series of interrogations and information feedback are similar to those described in "Some Potential Societal Developments--1970-2000" by Raoul De Brigard and Olaf Helmer, IFF Report R-7, Institute for the Future, Middletown, Conn., April 1970.
  3. In one phase of the Delphi inquiries panelists were asked to assign numerical probabilities to commonly used words or phrases to indicate the likelihood of an event. The Delphi technique of reassessment based on the feedback of a group response was extremely effective in narrowing the dispersion of the estimates. Verbal phrases associated with numerical probabilities were believed to encourage respondents to think about a probability scale in similar terms and might be more appropriate than numerical probabilities in expressing the likelihood of socioeconomic developments.
  4. The Michigan Delphi inquiries provided empirical evidence that the feedback and reassessment techniques which are inherent in the basic Delphi method reduced the number of inconsistencies in personal estimates of probability as the rounds progressed. It also indicated a tendency for a learning "curve" for respondents with respect to the technique itself.
  5. "Forecasters Turn to Group Guesswork," Business Week, March 14, 1970.
  6. Gordon A. Welty, "A Critique of the Delphi Technique" (summary of paper presented at the joint Statistical Meetings of the American Statistical Association, Colorado State University, Fort Collins, Colorado, Aug. 23-26, 1971).
  7. "Futuriasis: Epidemic of the '70s," Wall Street Journal, May 11, 1971.
  8. For a discussion of the dangers associated with a Delphi devoted to policy issues see Turoff's article, above (Chapter III.B.1.). For a discussion of deliberate distortion see "A Critique of Some Long-Range Forecasting Developments," by Gordon Welty (paper presented at the 38th session of the International Statistical Institute, Washington, D. C., August 1971).
  9. The gaming-simulation concept for the Sea Grant Program is presented in "Developing Alternative Management Policies," unpublished report, University of Michigan Sea Grant Office, 1971.
  10. Donald Michael is presently program director, Center for Research on Utilization of Scientific Knowledge, Institute for Social Research, University of Michigan. This is a summary of his unpublished remarks to a site visit team of government officials and academicians in Ann Arbor, Michigan, March 4, 1972.
  11. Douglas McGregor, "The Professional Manager" (New York: Harper & Row, 1967), p. 153: "... My conception of a two-way communication is that it is a process of mutual influence. If the communicator begins with the conviction that his position is right and must prevail, the process is not transactional but coercive."
  12. Peter F. Drucker, "Technology, Management and Society" (New York: McGraw-Hill Book Co., 1970), pp. 22-23: "... They must understand it because they have been through it, rather than accept it because it is being explained to them."
  13. Erich Jantsch, "Technological Forecasting in Perspective," Organization for Economic Cooperation and Development, Paris, 1967.

Other References

Examples of the key instruments used in the Michigan Sea Grant Delphi inquiries can be found in "Substantive Results in the University of Michigan's Sea Grant Delphi Inquiry," by John D. Ludlow, Sea Grant Technical Report No. 23, University of Michigan, Ann Arbor, 1972, and in "Evaluation of Methodology in the University of Michigan's Sea Grant Delphi Inquiry," by John D. Ludlow, Sea Grant Technical Report No. 22, University of Michigan, Ann Arbor, 1972. Complete sets of the information packages can be found in "A Systems Approach to the Utilization of Experts in Technological and Environmental Forecasting," by John D. Ludlow, Ph.D. dissertation for the University of Michigan, 1971. Available through University Microfilms, Inc., Ann Arbor.

1 The term "Sea Grant Program" was derived from the National Sea Grant College and Program Act, whose intent was to involve the nation's academic community in the practical problems and opportunities of the marine environment, including the Great Lakes.

2 The term "Delphi inquiry" was coined by Turoff and refers to the complete Delphi process. He observed that any particular Delphi design can be characterized in terms of the "inquiring systems" specified in Churchman's writings. See reference 1 at the end of this article.








©2002 Murray Turoff and Harold A. Linstone