COMPUTER BASED DELPHI PROCESSES

by

Murray Turoff and Starr Roxanne Hiltz

A version will appear as an INVITED BOOK CHAPTER for Michael Adler and Erio Ziglio, editors, Gazing Into the Oracle: The Delphi Method and Its Application to Social Policy and Public Health, London, Kingsley Publishers (in press).

INTRODUCTION

The name "Delphi" was never a term with which either Olaf Helmer or Norman Dalkey (the founders of the method) were particular happy. Since many of the early Delphi studies focused on utilizing the technique to make forecasts of future occurrences, the name was first applied by some others at Rand as a joke. However, the name stuck. The resulting image of a priestess, sitting on a stool over a crack in the earth, inhaling sulfur fumes, and making vague and jumbled statements that could be interpreted in many different ways, did not exactly inspire confidence in the method.

Utilizing an iterative survey to gather information "sounds" so easy to do that many people have done "one" Delphi, but never a second. Since the name gives no obvious insight into the method, and since the number of unsuccessful Delphi studies probably exceeds the number of successful ones, there has been a long history of diverse definitions and opinions about the method. Some of these misconceptions are expressed in statements such as the following that one finds in the literature:

It is a method for predicting future events.

It is a method for generating a quick consensus by a group.

It is the use of a survey to collect information.

It is the use of anonymity on the part of the participants.

It is the use of voting to reduce the need for long discussions.

It is a method for quantifying human judgement in a group setting.

Some of these statements are sometimes true; a few (e.g. consensus) are actually contrary to the purpose of a Delphi. Delphi is a communication structure aimed at producing detailed critical examination and discussion, not at forcing a quick compromise. Certainly quantification is a property, but only to serve the goal of quickly identifying agreement and disagreement in order to focus attention. It is common, even today, for people to come to a view of the Delphi method that reflects a particular application with which they are familiar. In 1975 Linstone and Turoff proposed a view of the Delphi method that they felt best summarized both the technique and its objective:

"Delphi may be characterized as a method for structuring a group communication process, so that the process is effective in allowing a group of individuals, as a whole, to deal with complex problems." (page 3)

The essence of Delphi is structuring of the group communication process. Given that there had been much earlier work on how to facilitate and structure face-to-face meetings, the other important distinction was that Delphi was commonly applied utilizing a paper and pencil communication process among groups in which the members were dispersed in space and time. Also, Delphis were commonly applied to groups of a size (30 to 100 individuals) that could not function well in a face-to-face environment, even if they could find a time when they all could get together.

Additional opportunities have arisen with the introduction of Computer Mediated Communication Systems (Hiltz and Turoff, 1978; Rice and Associates, 1984; Turoff, 1989; Turoff, 1991). These are computer systems that support group communications in either a synchronous (Group Decision Support Systems, DeSanctis et al., 1987) or an asynchronous manner (Computer Conferencing). Techniques that were developed and refined in the evolution of the Delphi Method (e.g. anonymity, voting) have been incorporated as basic facilities or tools in many of these computer based systems. As a result, any of these systems can be used to carry out some form of a Delphi process or Nominal Group Technique (Delbecq et al., 1975).

The result, however, is not merely confusion due to different names for the same things, but also a basic lack of knowledge among many people working in these areas about what the studies of the Delphi Method revealed concerning how to employ these techniques properly and their impact on the communication process. There seems to be a great deal of "rediscovery" and repeating of earlier misconceptions and difficulties.

Given this situation, the primary objective of this chapter is to review the specific properties and methods employed in the design and execution of Delphi Exercises and to examine how they may best be translated into a computer based environment.



ASYNCHRONOUS INTERACTION

Perhaps the most important and least understood property of the Delphi method is the ability of members of a group to participate in an asynchronous manner. This property of asynchronous interaction has two characteristics:

A person may choose to participate in the group communication process when they feel they want to.

A person may choose to contribute to that aspect of the problem to which they feel best able to contribute.

It does not matter what time of the day or night Delphi participants think of good ideas to include in their response. They can fill out a Delphi survey, or go to a computer terminal to contribute, whenever they feel they have thought of something significant to add on the issues involved. Participants can revise and add to their responses over time, before sending them to the group monitor for dissemination to the others.

A good Delphi survey attempts to tackle the problem from many different perspectives. Sometimes this is referred to as including questions in the Delphi survey which approach the problem both from the "bottom-up" and from the "top-down" perspectives. This allows different individuals in the group to focus on the approach to problem solving with which they feel most comfortable.

In a normal face-to-face group process, and in the environment characterized by face-to-face Group Decision Support Systems, all the members of the group are forced into a lockstep treatment of a problem. When the group is considering the subject of "goals," those who have difficulty dealing with "abstraction" may feel at a disadvantage, because they do not have as much to contribute. Conversely, when focusing on specific solution approaches, those who deal better with "abstraction" may not feel they are contributing.

One of the specific advantages of groups is to allow individuals with differing perspectives and/or differing cognitive abilities to contribute to those parts of a complex problem for which they have both the appropriate knowledge and appropriate problem solving skills. A typical model for a group problem solving process is:

Recognition of the problem

Defining the problem

Changing the representation of the problem

Developing the goals associated with solving the problem

Determining the strategy for generating the possible solutions

Choosing a strategy

Generating the evaluation criteria to be applied to solutions

Evaluating the solution criteria

Generating the solutions

Evaluating the solutions

The literature on cognitive abilities and human problem solving confirms that individuals differ considerably in their ability to deal with different aspects of a problem solving situation (Benbasat and Taylor, 1982; Streitz, 1987). These differences depend upon such psychological dimensions as Abstraction versus No Abstraction, Search versus No Search, Data Driven versus Conceptually Driven, and Deductive versus Inductive cognitive processes.

In most face-to-face approaches, the group is forced as a whole to take a sequential path through a group problem solving process. In the Delphi process, we try to design a communication structure that allows any individual to choose the sequence in which to examine and contribute to the problem solving process. This is the single most important criterion by which we should evaluate the design of a Delphi oriented communication structure. Does it allow the individual to exercise personal judgement about what part of the problem to deal with at any time in the group problem solving process?

It is actually easier to accomplish this using a computer system than it has been with paper and pencil based Delphi studies. The "round" structure and the need to limit the physical size of any paper and pencil survey places severe constraints on the degree to which one can carry out the above approach. Hence, paper and pencil Delphis are usually limited by the "top-down/bottom-up" dichotomy rather than allowing more complete parallel entry to any aspect of the problem. For example, in a single Delphi one might explore on the first round "goals" (a top view) and specific "consequences" (a bottom view). Relating goals to consequences requires developing the relationships inherent in alternative actions and states of nature. These would be put off to a later round. In the computerized environment individuals could be free to tackle any aspect of the problem according to personal preferences.

This particular objective of Delphi design is also characterized by two other practices commonly applied to Delphi studies. First, it should be clear to the respondents that they do not have to respond to every question, but can decide to take a "no judgement" view. Secondly, one usually solicits the respondents' confidence in their judgements, particularly when they are quantified judgements. This allows the respondents to estimate their own degree of expertise on the judgements they are supplying, and it has been found to improve the quality of the estimates made in Delphi exercises (Dalkey, 1970). The fact that contributions can be made anonymously also means a person does not have to feel embarrassment if he or she does not feel able to confidently contribute to a specific aspect of the problem.
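To make this concrete, the sketch below (in Python, with invented data) shows one simple way such self-ratings might be used: quantitative estimates are combined using each respondent's self-rated confidence as a weight, and "no judgement" responses are simply omitted. The particular weighting scheme is an illustrative assumption, not a prescription from the Delphi literature.

    # Minimal sketch: combining quantitative Delphi estimates weighted by each
    # respondent's self-rated confidence (1 = low, 5 = high).  Respondents who
    # selected "no judgement" are omitted.  All data are illustrative.
    estimates = [
        ("respondent_1", 120.0, 4),    # (id, estimate, self-rated confidence)
        ("respondent_2", 150.0, 2),
        ("respondent_3", None, None),  # chose "no judgement"
        ("respondent_4", 135.0, 5),
    ]

    answered = [(value, conf) for _, value, conf in estimates if value is not None]
    weighted_mean = (sum(value * conf for value, conf in answered) /
                     sum(conf for _, conf in answered))
    unweighted_mean = sum(value for value, _ in answered) / len(answered)

    print(f"unweighted mean: {unweighted_mean:.1f}")
    print(f"confidence-weighted mean: {weighted_mean:.1f}")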

This advantage of the Delphi approach comes at an obvious price. With material being supplied in parallel, the need to structure and organize it in a manner that makes sense to the group is a primary requirement (Turoff, 1974, 1991; Hiltz and Turoff, 1985). The need to carefully define the total communication structure and put it into a framework that produces both a group view and a synchronization of the group process is the most difficult part of a good Delphi design. We will treat this in the following sections. In paper and pencil Delphis, this is the effort that must be undertaken by the design team in processing the results of each round and producing a proper summary. In a computer based Delphi process, this has a somewhat different connotation in that the round structure disappears, replaced by a continuous feedback process which may or may not involve human intervention for the processing.

The most significant observation resulting from the above considerations is that most of the attempts to understand the group problem solving process in the computer based environment are still based upon models that were developed from studying face-to-face groups. Thus, what are often thought of as being "ideal" group problem solving structures are based upon the "sequential" treatment of a problem by a group (Turoff, 1991). There has been little work to date to develop models of the group problem solving process that are based upon parallel and asynchronous activities by the individuals within the group. There is a need for a model which integrates the individual problem solving process with the group process. It is only within the context of such a model that we can come to a deeper understanding of the design process that goes beyond the trial and error evolution of the method that has occurred to date.



ANONYMITY

Perhaps the property that most characterizes the Delphi method in the mind of most people is the use of anonymity. Typically, in paper and pencil Delphis there is no identification of who contributed specific material or who made a particular evaluative judgment about it. This property is not one that should be considered a hard and fast rule for all aspects of a Delphi. Moreover, the computer makes possible variations in anonymity not possible in a paper and pencil environment. Before we explore these, we should look at the primary reasons for anonymity:

Individuals should not have to commit themselves to initial expressions of an idea that may not turn out to be suitable.

If an idea turns out to be unsuitable, no one loses face from having been the individual to introduce it.

Persons of high status are reluctant to produce questionable ideas.

Committing one's name to a concept makes it harder to reject it or change one's mind about it.

Votes are more frequently changed when the identity of a given voter is not available to the group.

The consideration of an idea or concept may be biased by who introduced it.

When ideas are introduced within a group where severe conflicts exist in either "interests" or "values," the consideration of an idea may be biased by knowing it is produced by someone with whom the individual agrees or disagrees.

The high social status of an individual contributor may influence others in the group to accept the given concept or idea.

Conversely, lower status individuals may not introduce ideas, for fear that the idea will be rejected outright.

In essence, the objective of anonymity is to allow the introduction and evaluation of ideas and concepts by removing some of the common biases normally occurring in the face-to-face group process. Sometimes the use of anonymity has been carried too far. For example, it is important that the members of a Delphi exercise believe that they are communicating with a peer group. An individual participant must feel that the other members of the group will be able to contribute valuable insight about the problem being examined. This is a primary factor in motivating participation. It is usual to inform the participants about who is actually involved in the group of Delphi respondents. Only when there are strong antagonisms among group members would one consider not doing this.

Delphi panelists are motivated to participate actively only if they feel they will obtain value from the information they receive as a result of the process. This value received needs to be at least equal, in their minds, to the effort expended to contribute information. This is one reason why blanket invitations to participate in a Delphi that do not specify who will be involved and what the feedback will be to the group members often result in very low participation rates.

When one introduces the concept of conducting a Delphi through a Computer Mediated Communication System, there are more options available for handling the process of anonymity. First, one can easily incorporate the use of pen-names (Hiltz, Turoff, and Johnson, 1989). While this does not identify who a person is, it does allow a person to be identified with a set of related contributions. This allows the other members of a group to obtain more understanding of why specific individuals are agreeing or disagreeing with certain concepts. For example, knowing all the arguments a person has made about accepting or rejecting a given position allows people to better tailor what needs to be said to perhaps change an individual's viewpoint. It also allows the expression of more complex individual viewpoints. This coherency is hard to observe or utilize when everything is anonymous.

As a result, it is probably desirable in most computer based Delphis to impose the default use of pen-names rather than anonymity on qualitative type statements made in the discussion. In some cases it is also possible to allow the respondents to choose when they wish to use pen-names and when they wish to use their real names. The more the individuals know one another and have a history as a "social" group, the more likely it is that allowing participants to freely choose among their real names, pen-names, or anonymity for qualitative type information will produce good results.

There have been studies of computer based message systems which have attempted to conclude that the use of anonymity leads to "flaming" and antagonism (Kiesler, Siegel, and McGuire, 1984). Most of these observations have been based upon studying student groups who have no prior history or knowledge about one another. Flaming and disinhibition have not been problems among groups that already have a social history or social structure. When utilizing a computer based system with groups who are not familiar with one another, it may be important to provide a separate conference devoted to socializing among the group members. This would serve the same purpose as coffee breaks serve for co-located groups that work together. In the computer based communication environment, it has been observed that social-emotional exchanges are helpful in facilitating consensus development and eliminating potential misunderstanding (Hiltz, Johnson, and Turoff, 1986).

Anonymity for voting and estimates of subjective quantitative information is probably desirable to maintain in most circumstances. However, it is desirable that the coordinator for a Delphi exercise on the computer system be able to identify people with extreme votes or estimates. A Delphi coordinator should have no vested interest in the outcome and should be in a facilitation role. The facilitator may feel it is desirable to encourage individuals with extreme positions to explain them. Sometimes the observation that one is in a minority position can negatively affect participation unless there is such encouragement.

In some cases it may be desirable to allow voter identification. For example, in the final steps of a budget allocation task, it could be felt that everyone should assume final accountability for the recommended decision. Even in face-to-face committees, committee reports where no identified individuals assume responsibility have sometimes led to a lack of group commitment when it comes to implementing the results. Also, when no one is accountable, one can sometimes get more risky recommendations than would otherwise result. This decision must be based upon the nature of the application and the group. In any case, the identification of a member's voting position should only apply to the final evaluation phase of a group process.



MODERATION AND FACILITATION

In Computer Mediated Communication Systems, aside from simple message systems, there is still a basic need for moderation and facilitation in group oriented communications, just as in face-to-face meetings. However, the nature of leadership in the online environment is different from that in the face-to-face environment.

In the online environment it is much easier to separate the role of process facilitation from that of content leadership. It is also quite easy to develop a number of different leaders for different areas of a problem.

In the paper and pencil Delphi every contribution first goes to the coordinator of the exercise and then is integrated into a single summary provided to all of the participants. Clearly, in the computer based environment, this is not necessary. Whether or not given contributions need to be screened ahead of time is a function of the application and the nature of a particular contribution. Since the individual members can update themselves on what is new before making a contribution, the amount of duplication is minimized in a computer based Delphi.

For example, it may be desirable to hold certain types of contributions until the group is at a point in the deliberations where they are ready to deal with them. Also, information such as voting results should not be provided until a sufficient number of votes about an item have been accumulated. In situations dealing with very strong controversies, it may be necessary to screen and edit the wording of certain contributions to try to minimize emotional biases and tactics such as name calling and insulting remarks.

While a lot of material in an online Delphi can be delivered directly to the group, the specific decisions on this still need to be made by the person or team in control of the Delphi process. In Computer Mediated Communications, the activity level and actions of a conference moderator can be quite critical to the success of an asynchronous conference and specific guidelines for moderators can be found in the literature (Hiltz, 1984).

There have been many Delphis where the material is summarized based upon the breakdown of the respondents into various specialized expert subgroups or differing interests and perspectives. In the computer based environment it becomes possible to consider multiple group structures, in which the respondents are divided into separate Delphi groups, each with its own communication structure. On top of this would be a higher level structure that synthesizes or filters the reduced set of information that needs to pass between the groups. This leads to the possibility of very large populations of respondents engaged in common task objectives. A practical example is multiple industrial standards groups, which must be informed of what is arising from other groups that impacts on their considerations, but do not need to be involved in the details of the subgroup deliberations in other areas.

There are many Delphi applications where respondents actually engage in taking on roles (e.g. Stakeholder Analysis, Linstone, 1984) to deal with certain situations. This requires moderator supervision and direction. Associated with role playing is the employment of gaming situations where there may be groups in competition with one another and communication is regulated by the "game director" (Hsu, 1989). In the area of policy analysis it could be very productive to allow the subgroups that agree about a given resolution to have a private conference where they can discuss, as a subgroup, the best possible responses to the material in the main Delphi. It is also clear that subgroups could be formed dynamically based upon the content of what is taking place.

Multiple group Delphis in a computer environment are a relatively new possibility, and there are no hard and fast rules for setting up communication structures in this area. As group oriented Computer Mediated Communication Systems become more widely used, there will be much opportunity to experiment with structuring communications at both the inter and intra group level.



STRUCTURE

The heart of a Delphi is the structure that relates all the contributions made by the individuals in the group and which produces a group view or perspective. In a computer based Delphi, the structure is one that reflects continuous operation and contributions. This is somewhat different than the paper and pencil mode where the structure must be divided into three or more discrete rounds. As an example, we will describe potential transformations of two simple structures that have often been utilized in paper Delphis, for use in a computerized environment.

The Policy Delphi

The first example is the Policy Delphi (Turoff, 1970). This is an interesting Delphi structure in that its objective is not to produce a consensus, but to expose the strongest pro and con arguments about differing resolutions of a policy issue. It is a form of policy analysis that provides a decision maker the strongest arguments on each side of the issue. Usually one utilizes as respondents individuals who have the strongest opposing views.

The structure of a Policy Delphi is very simple.

Policy Delphi Structure

    TYPE OF ITEM    VOTING SCALES    RELATIONSHIPS
    Resolution      Desirability     Alternatives
                    Feasibility
    Argument        Importance       Pro or con to a given resolution
                    Validity         Opposing to other arguments

In the above structure any respondent in the Delphi is free to add a possible resolution (solution) to the basic policy issue, or to make a pro or con argument about one or more of the listed possible resolutions. He or she can do this at any time. Also, the respondent can vote at any time on the two types of voting scales associated with either of the item types. Individuals may also choose to change their vote on a given item at any time. In this structure the two scales are needed to highlight situations where policy resolutions might be rated in such categories as desirable but infeasible, and arguments may be rated as important but invalid (others might believe it). When making additions of a qualitative nature, participants must also indicate how that addition is related to the existing items.

The computer's role in the above process is to organize everything so that the individual can follow what is going on and obtain a group view (a brief illustrative sketch follows this list):

Provide each member with new items that they have not yet seen.

Tally the votes and make the vote distribution viewable when sufficient votes are accumulated.

Organize a pro list and a con list of arguments about any resolution.

Allow the individual to view lists of arguments according to the results of the different voting scales (e.g. most valid to least valid arguments).

Allow the individual to compare opposing arguments.

Provide status information on how many respondents have dealt with a given resolution or list of arguments.
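As an illustration of how these functions might be supported, the following sketch (in Python, with invented item texts, scales, and a hypothetical vote threshold) stores resolutions and arguments as typed items, links each argument pro or con to a resolution, withholds a vote distribution until a minimum number of votes has accumulated, and lists the pro arguments for a resolution ordered by their mean validity rating. It is a minimal sketch of the bookkeeping involved, not a description of any particular system.

    # Minimal sketch of Policy Delphi bookkeeping.  Item texts, scales and the
    # MIN_VOTES threshold are invented for illustration.
    from collections import Counter

    MIN_VOTES = 5   # assumed threshold before a distribution is shown

    items = {
        1: {"type": "resolution", "text": "Adopt needle-exchange programme"},
        2: {"type": "argument", "text": "Reduces transmission", "link": ("pro", 1)},
        3: {"type": "argument", "text": "May increase drug use", "link": ("con", 1)},
    }

    # votes[item_id][scale] is a list of ratings on an anchored scale, e.g. 1..4
    votes = {
        1: {"desirability": [4, 4, 3, 2, 4, 3], "feasibility": [2, 3, 2, 2, 3]},
        2: {"importance": [4, 4, 3, 4, 4, 3], "validity": [3, 4, 3, 3, 4, 4]},
        3: {"importance": [3, 2, 3, 2, 3, 2], "validity": [2, 2, 3, 2, 1, 2]},
    }

    def distribution(item_id, scale):
        """Return the vote distribution, or None until enough votes are in."""
        ratings = votes[item_id][scale]
        return Counter(ratings) if len(ratings) >= MIN_VOTES else None

    def mean_validity(pair):
        ratings = votes[pair[0]]["validity"]
        return sum(ratings) / len(ratings)

    def arguments_for(resolution_id, side):
        """Pro or con arguments about a resolution, ordered by mean validity."""
        args = [(i, d) for i, d in items.items()
                if d["type"] == "argument" and d.get("link") == (side, resolution_id)]
        return sorted(args, key=mean_validity, reverse=True)

    print(distribution(1, "desirability"))   # Counter({4: 3, 3: 2, 2: 1})
    print(arguments_for(1, "pro"))           # pro arguments, most valid first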

The role of the Delphi Coordinator or human facilitator is minimal in such a well-defined structure. The software powers or special privileges that such an individual needs are:

Being able to freeze a given list when it is felt there are sufficient entries to halt contributions, so as to focus energies on evaluation of the items entered to that point in time.

Being able to edit entries to eliminate or minimize duplications of resolutions or arguments.

Being able to call for final voting on a given item or set of items.

Being able to modify linkages between items when appropriate.

Reviewing data on participation so as to encourage participation via private messages.

It is also possible to develop rules that allow the computer to handle some of the above functions, but with today's technology these functions are still better handled by a human. A group using this structure for the first time should go through a training exercise. The Policy Delphi structure can be designed to be fairly easy to learn and utilize. The use of graphics to support visualization of the structure of the discussion can also be helpful.

The Policy Delphi structure was first implemented in paper and pencil in 1970 and was later implemented in two separate computer versions (Turoff, 1972; Conklin and Begeman, 1987). It should be noted that the structure of relating items in a Policy Delphi may also be viewed as a representation of a specialized or tailored Hypertext system (Conklin, 1987; Nelson, 1965). Most Delphi designs, when translated to a computer environment, do depend upon semantic relationships among items being established and are utilized for browsing and presenting content oriented groupings of the material. A generalized approach to supporting Delphi relationships within a Hypertext environment may be found in the literature (Rao & Turoff, 1991; Turoff, Rao, and Hiltz, 1991).

Most Delphi structures can be considered to be types of items (i.e. nodes) which have various relationships (i.e. links) to one another. Therefore, it is possible to view a specific Delphi as a particular instance of a Hypertext system. Hypertext is the view of text fragments in a computer as the nodes within a graph or web of relationships making up a body of knowledge. Hypertext functionality is therefore useful for the support of automated Delphi processes.
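A minimal sketch of this node-and-link view follows; the node identifiers, texts, and link types are invented for illustration, and browsing is reduced to following typed links out of a node.

    # Minimal sketch of the node/link (hypertext) view of a Delphi structure:
    # text fragments are nodes, and typed, directed links carry the semantic
    # relationships used for browsing.  Names and link types are illustrative.
    nodes = {
        "R1": "Resolution: screen all blood donations",
        "A1": "Argument: cost per infection averted is low",
        "A2": "Argument: false positives carry social costs",
    }
    links = [
        ("A1", "pro", "R1"),
        ("A2", "con", "R1"),
        ("A2", "opposes", "A1"),
    ]

    def browse(node_id, link_type):
        """Follow links of one type out of a node -- the basis of content browsing."""
        return [target for source, kind, target in links
                if source == node_id and kind == link_type]

    print(browse("A2", "opposes"))   # ['A1']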


The Trend Model

This Delphi involves first choosing a specific trend of concern to the group. For example, this might be deaths from AIDS or the amount of life extension expected from a particular treatment. One might include in a single study a set of related trend variables. For the purpose of this explanation we will focus on one trend.

The individual respondents are asked first to make a projection of where they think the time curve will go in the next five years. Then they are asked to list the assumptions they are making and any uncertainties they have. Assumptions are things they think will occur over that time frame and which impact on determining this trend. Uncertainties are things they do not think will occur, but if they did, they would cause changes in estimates of where the trend will go.

Since some people's uncertainties are others' assumptions, these are compiled into a list of "possible" assumptions and every individual is asked to vote on each possible assumption according to validity. To accomplish this validity estimation the group may be provided with an anchored interval scale which varies, for example, from "definitely true" to "definitely false," with a mid-point of "maybe." The resulting list of assumptions is automatically reordered by the group validity judgement. The ones the group agrees on as valid or invalid are set aside, and the subsequent discussion focuses on the assumptions that have an average vote of "maybe". The analysis of the voting has to point out which "maybe" votes result from true uncertainty on the part of the respondents, and which result from wide differences in beliefs between subgroups of respondents.
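The following sketch (Python, with invented assumptions, votes, and classification thresholds) illustrates this kind of processing: each assumption's validity votes on a seven point anchored scale are summarized by their mean and spread, items the group agrees on are set aside, and the remaining "maybe" items are separated into genuine uncertainty versus sub-group disagreement.

    # Minimal sketch (illustrative data and thresholds) of sorting trend-model
    # assumptions by group validity votes on an anchored scale:
    # 1 = definitely false, 4 = maybe, 7 = definitely true.
    from statistics import mean, pstdev

    assumption_votes = {
        "condom education programmes will expand": [6, 6, 7, 5, 6, 6],
        "an effective vaccine within five years":  [2, 1, 2, 2, 1, 3],
        "hospital capacity will keep pace":        [4, 4, 3, 4, 5, 4],
        "mandatory reporting will be adopted":     [7, 1, 6, 2, 7, 1],
    }

    for assumption, votes in assumption_votes.items():
        m, spread = mean(votes), pstdev(votes)
        if m >= 5.5 and spread < 1.5:
            status = "accepted as valid - set aside"
        elif m <= 2.5 and spread < 1.5:
            status = "rejected as invalid - set aside"
        elif spread < 1.5:
            status = "genuine uncertainty - focus of discussion"
        else:
            status = "sub-group disagreement - focus of discussion"
        print(f"{assumption}: mean={m:.1f} sd={spread:.1f} -> {status}")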

Clearly in the computer environment, this process of listing, voting, and discussing the assumptions can take place on a continuous basis. The voting serves to quickly eliminate from the discussion those items on which the group agrees. The remaining uncertain items usually are divided into two types: 1) those which can be influenced (e.g. improvements in knowledge about the proper use of condoms), 2) those that cannot be influenced (e.g. hospital facilities in the short term).

In the final stage, after the list has been completed and evaluated, the participants are asked to revise their earlier estimate of the trend. One could observe that a statistical regression analysis might have produced a similar trend curve. However, the application of such a mathematical technique will not produce the qualitative model that represents the collective judgement of all the experts involved. It is that model which is important to understanding the projection, what actions can be taken to influence changes in the trend, and the variation in the projection of the trend.

There is practically no planning task where the above trend analysis structure is not applicable. In the medical field, for example, one might examine trend curves for the occurrence of certain medical problems and the impact of various treatments. This particular structure has been utilized in a significant number of corporate planning exercises. With graphic capabilities on workstations, it would be quite easy to implement in a computerized version. A similar structure may be applied to qualitative trends made up of a time series of related discrete events. An example would be AIDS cases triggering specific legal rulings and particular ethical dilemmas.

The above two examples were chosen because they are fairly simple and straightforward. However, there are literally dozens of different Delphi structures that have been demonstrated in the paper and pencil environment (Linstone and Turoff, 1975) and are quite transferable to the computer based environment. Many of these require the ability to utilize graphics to view the complexity of relationships among concepts. Others require extended facilities to utilize generalized Hypertext structures. However, one of the most significant potentials for the automation of Delphi is the incorporation of real time analysis aids for the interpretation and presentation of the subjective information produced in a Delphi exercise. This will be treated in a following section.

It should also be clear from the above examples that there are certain fundamental tools that apply across a wide range of Delphi structures. The ability of a group to contribute to building a specific list, to be able to apply specific voting capabilities, and to be able to sort the list by voting results, represents a set of general tool capabilities. This is the approach we have taken in the development of the EIES 2 system (Turoff, 1991) at NJIT to support a wide variety of applications such as Group Decision Support, Delphi Design, Project Management, and Education (Hiltz, 1986, 1990).

EIES 2 is a general purpose Computer Mediated Communication System that provides many features whereby an individual moderating a specific conference can tailor the group process. The moderator of a given conference can create, at any point in the discussion, an "activity" that may be attached to a comment. These activities accomplish different specialized functions such as list collection and voting. However, the interface to all these activities is the same in the sense that the same basic generic commands apply to any activity. For example, one may "Do" the activity to make changes to it or "View" the results of the activity, regardless of what type of activity it is. The conference moderator has the authority to introduce these activities whenever he or she feels they fit within the current discussion. Also, the moderator may choose to allow or not allow the facilities of anonymity for a given activity or conference.

EIES 2 also provides a general notifications capability that can be tailored to notify the participants in a group process whenever any action occurs of which they need to be made aware. For example, a notification may let the members of a Delphi know when the votes on a specific item are sufficient to allow viewing of the resulting distribution. EIES 2 is constructed so that any programs or analysis routines developed in any language within the context of the UNIX operating system or a TCP/IP network can be integrated or made available through the EIES 2 interface. The major facility to allow a Computer Mediated Communication System to enhance Delphi processes is to provide alternative structures in the form of a collection of group support tools. The system must also include the privileges for a facilitator or group leader to decide on the dynamic incorporation of these tools in the group process.



ANALYSIS

A principal contribution to the improvement of the quality of the results in a paper and pencil Delphi study is the analysis that the design and coordination team can perform on the results of each round. This analysis has a number of specific objectives:

Improve the understanding of the participants through analysis of subjective judgements to produce a clear presentation of the range of views and considerations.

Detect hidden disagreements and judgmental biases that should be exposed for further clarification.

Detect missing information or cases of ambiguity in interpretation by different participants.

Allow the examination of very complex situations that can only be summarized by analysis procedures.

Detect patterns of information and of sub-group positions.

Detect critical items that need to be focused upon.

To accomplish the above, there are a host of analysis approaches that come from many different fields. Many of these are amenable to implementation as real time computer based support to a continuous Delphi process conducted via a Computer Mediated Conferencing System. We will briefly address here some of the most significant types of these methods for supporting Delphi applications.


Scaling Methods

Scaling is the science of determining measuring instruments for human judgement. Clearly, one needs to make use of appropriate scaling methods to aid in improving the accuracy of subjective estimation and voting procedures. While most of these methods were originally developed to measure human judgement, they are easily adaptable, in many cases, to providing feedback to a Delphi group on the consequences of the judgements being made by the individuals.

For example, in many cases the appropriate judgement we wish to solicit from an individual is a ranking (i.e. ordinal scale measurement) of individual items. It is comparatively more accurate to ask individuals to rank order items, such as objectives or goals, than to ask for interval or ratio measures. A person can estimate that a particular goal is more important than another one; however, estimating how much more important it is proves far more difficult to do consistently across a group of individuals. A scaling method such as Thurstone's Law of Comparative Judgement (Torgerson, 1958) can transform individual ranking judgements and produce analytically a group result which is an interval scale rather than a rank ordered scale. Providing the group with the results in terms of this interval scale allows the individuals to detect in a much more reliable manner the extent to which certain objectives are clearly distinct from other objectives, and which are considered in closer proximity. Merely providing an averaging of the ranking scale does not contribute this added insight to the group as a whole. Furthermore, standard averaging approaches can lead to inconsistencies in group judgements (i.e. Arrow's Paradox). This can occur when there are disagreements underlying the averaging and when there is a lack of appropriate "anchoring" of the scales.
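A minimal sketch of this kind of transformation is given below (Python, with invented rankings): individual rank orders are converted to paired comparison proportions and then, following the logic of Thurstone's Case V, to normal deviates whose averages yield interval scale values for the goals. The clipping of extreme proportions is a simplifying assumption made so the normal deviates stay finite.

    # Minimal sketch of Thurstone Case V scaling from individual rankings.
    # Goals and rankings are invented for illustration.
    from statistics import NormalDist

    goals = ["cost", "access", "quality"]
    # Each respondent ranks the goals from most (first) to least important.
    rankings = [
        ["access", "quality", "cost"],
        ["access", "cost", "quality"],
        ["quality", "access", "cost"],
        ["access", "quality", "cost"],
        ["quality", "access", "cost"],
    ]

    def proportion_preferred(a, b):
        wins = sum(1 for r in rankings if r.index(a) < r.index(b))
        p = wins / len(rankings)
        return min(max(p, 0.05), 0.95)   # clip so the normal deviate stays finite

    z = NormalDist().inv_cdf
    scale = {a: sum(z(proportion_preferred(a, b)) for b in goals if b != a) / (len(goals) - 1)
             for a in goals}

    for goal, value in sorted(scale.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{goal:8s} {value:+.2f}")   # interval-scale values, not just ranks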

Standard correlation analysis approaches can be utilized to determine if there are subgroups or patterns of agreement and disagreement that exist across different issues or judgements made in the Delphi exercise. Do the people who feel a certain way about an issue feel the same way about another issue? This type of analysis should, in most cases, be provided first to the facilitator, and that person should decide which relationships need to be passed back to the group. In many Delphis, there are identified sub-groups. A Delphi might comprise people from different disciplines. Do the administrators, researchers, lawyers, insurers, and practitioners have differences in viewpoint that are based upon the perspective they take on a new medical treatment? The utility of these insights needs to be evaluated by the facilitator in the context of the application. With groups that work together over a long term, it might be desirable to provide such an analysis in terms of direct feedback without facilitator intervention.
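The sketch below (Python with NumPy, invented data) shows the simplest version of such an analysis: respondents' positions across several issues are correlated with one another so that the facilitator can see whether identifiable sub-groups share a common pattern of agreement.

    # Minimal sketch: correlating respondents' positions across several issues
    # to reveal possible sub-groups.  Data are invented for illustration.
    import numpy as np

    # rows = respondents, columns = votes on issues (e.g. 1..5 agreement scale)
    positions = np.array([
        [5, 4, 1, 2],   # respondent A
        [5, 5, 2, 1],   # respondent B (pattern similar to A)
        [1, 2, 5, 5],   # respondent C (opposite pattern)
    ])

    corr = np.corrcoef(positions)
    print(np.round(corr, 2))
    # A and B correlate strongly and positively, both correlate negatively with C,
    # suggesting two sub-groups whose existence the facilitator may wish to explore.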

Scaling methods span a wide range of techniques, from fairly simple and straightforward to fairly sophisticated. An example of a sophisticated approach is Multi-Dimensional Scaling (Carroll and Wish, 1975). MDS allows subjective estimates of similarity between any two objects to be translated into a relative position in a Euclidean space. It provides, in essence, N dimensional interval scaling of similarity estimates. The number of meaningful dimensions found suggests the number of independent dimensional factors underlying the way both the individuals and the group are viewing the similarity among objects. By looking at the alternative two dimensional projections, it is possible to arrive at an understanding of what the dimensional factors are.

The process by which one would use MDS in a Delphi would be to ask for the similarities and provide back the graphical layouts of the alternative dimensions. The respondents would then be asked to try to determine what these dimensions mean or represent. The result is a very powerful technique for potentially exposing the hidden factors a group is using to make judgements about similarities. The question of similarity is one that can be applied to a very wide range of object types, e.g. goals, products, countries, relationships, jobs, criteria, etc. MDS may also be viewed as a form of Cluster Analysis, and many methods in Cluster Analysis (Anderberg, 1973) can also be usefully applied to analyzing the subjective comparison judgements made by Delphi respondents.
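For illustration, the following sketch implements classical (metric) MDS on a small matrix of invented group dissimilarity judgements; the two dimensional coordinates it produces would then be fed back to the respondents for interpretation of what the dimensions might mean.

    # Minimal sketch of classical (metric) multi-dimensional scaling: averaged
    # dissimilarity judgements are double-centred and eigendecomposed; the
    # leading eigenvectors give coordinates for a two-dimensional plot.
    import numpy as np

    labels = ["treatment A", "treatment B", "treatment C", "treatment D"]
    # Symmetric matrix of averaged dissimilarity judgements (0 = identical).
    D = np.array([
        [0.0, 1.0, 4.0, 4.5],
        [1.0, 0.0, 3.5, 4.0],
        [4.0, 3.5, 0.0, 1.5],
        [4.5, 4.0, 1.5, 0.0],
    ])

    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centred squared distances
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:2]     # two largest dimensions
    coords = eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0))

    for label, (x, y) in zip(labels, coords):
        print(f"{label:12s} {x:+.2f} {y:+.2f}")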

When a group is using voting and estimation structures over a long period, so that they make judgements about a growing number of similar situations, it is possible to consider the introduction of "scoring" methods (Dalkey, 1977) into the Delphi process. Given later feedback upon the accuracy of estimates or the quality or success of a given judgement, it is possible to provide estimators with feedback on their degree of "accuracy" or on possible biases due to factors such as conservatism. At the point where there are individuals utilizing Delphi techniques on a continuous basis, it will be possible to conduct the sorts of investigations needed to develop this particular area as a decision aid.

Designing a Delphi, whether via paper and pencil or on the computer, does include the process of designing a survey. As such, all the guidelines on good survey design and all the analysis methods that have been developed for analyzing survey data are potentially applicable to a Delphi. There is, however, a fundamental difference in objectives, which determines how one employs a given method, and whether it is applicable in a given situation.

Most scaling methods evolved to aid in the assessment of a human judgement with the premise that one is measuring a stable and constant quantity. One's intelligence or personality would not be affected or changed as a part of the measurement process. The goal is to discover biases and inconsistencies and to produce more accurate measurements. In the Delphi process, however, we are interested in informing the respondents about what they are really saying, and how it compares to the group as a whole. We are also interested in promoting changes in viewpoints and the other items we measure, if doing so promotes reaching a superior group view of the situation. We are also interested in detecting and exposing hidden factors or relationships of which the group may not be completely aware. With this in mind, one has to take special care that the use of these analysis methods does not convey a false impression of finalization in a group view.

Related to scaling is the area of Social Choice Theory, which provides alternative methods for the summarization of voting processes (Hogarth, 1977). The use of multiple methods of viewing the summarization of a given voting process can be useful in preventing a group from placing an over emphasis on a single voting result.
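The sketch below (Python, invented ballots) illustrates the point: the same set of ranked ballots is summarized with both a plurality count and a Borda count, and the two rules favour different options, which is exactly why presenting a single summary can mislead a group.

    # Minimal sketch: summarizing the same ranked ballots with two different
    # social choice rules (plurality and Borda count).  Ballots are invented.
    from collections import Counter

    ballots = [
        ["A", "B", "C"], ["A", "B", "C"], ["A", "B", "C"],   # A first for three voters
        ["B", "C", "A"], ["B", "C", "A"],                    # A last for the rest
        ["C", "B", "A"], ["C", "B", "A"],
    ]

    plurality = Counter(ballot[0] for ballot in ballots)

    borda = Counter()
    for ballot in ballots:
        for position, option in enumerate(ballot):
            borda[option] += len(ballot) - 1 - position      # 2 points, 1, 0

    print("plurality:", plurality.most_common())   # A leads on first choices
    print("borda:    ", borda.most_common())       # B leads when full rankings count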

Probably the most important single consideration in the past that has prevented the incorporation of many of the approaches discussed here is the difficulty of educating the respondent in the interpretation of the method when the respondent is involved in only one short term Delphi process. With the potential that Computer Mediated Communications offers for long term continuous use by groups, it is now possible to consider incremental training for individuals to gain an understanding of the more sophisticated methods.

With the appropriate use of scaling methods it becomes possible to establish that individuals will mean the same thing when they use terms like: desirable, very desirable, likely, unlikely, agree, strongly agree, etc. It becomes possible to determine which alternatives are truly similar and which are distinctly different. Scaling methods, in essence, serve the objective of eliminating ambiguity in the judgmental and estimation process of a group.


Structural Modeling

The term Structural Modeling (Lendaris, 1980; Geoffrion, 1987) has come to represent a host of specific methods that have the objective of allowing an individual to express a large set of independent relationships and judgements which the given method utilizes to produce a "whole" model of the "system" being described. In computer terms, these are methods that allow a user to build a model of a situation without having to program or go through the use of experts in modeling and simulation. These methods vary from ones that provide a simple static relationship model (e.g. Interpretive Structural Modeling, Warfield, 1974), to more dynamic probabilistic and time varying models (e.g. Cross Impact, Time Series Regression, etc.). Just about any technique that organizes data into some sort of framework is a candidate for falling under the rubric of Structural Modeling. This includes Decision Trees and Payoff Matrices.

The objective of these approaches is to allow participants, as individuals or as part of a group, to contribute pieces of a complex situation and to be provided with a composite model. For example, in Interpretive Structural Modeling the individual is asked only to make a series of judgements about each pair of components of a model (such as two goals) with respect to whether they are related. The resulting complex network of relations is analyzed to collapse the network into a hierarchy of levels, utilizing the existence of cycles within the network to make that simplification. The result for the individual or group is a set of levels or clusters of the objects which implies a relationship of higher to lower levels. This provides a graphical representation of the binary judgements made about each set of objects, taken two at a time.
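A minimal sketch of this procedure follows (Python, with an invented set of elements and "influences" judgements): the pairwise relation is closed transitively to obtain reachability sets, and a standard level partition then collapses the elements into a hierarchy of levels.

    # Minimal sketch of Interpretive Structural Modeling: pairwise "influences"
    # judgements form a binary relation, its transitive closure gives
    # reachability, and a level partition collapses the elements into levels.
    elements = ["funding", "staff training", "screening coverage", "early detection"]
    relations = {                       # (a, b) means "a influences b" - invented
        ("funding", "staff training"),
        ("funding", "screening coverage"),
        ("staff training", "screening coverage"),
        ("screening coverage", "early detection"),
    }

    # Transitive closure (Warshall), with every element reaching itself.
    reach = {a: {a} | {b for x, b in relations if x == a} for a in elements}
    for k in elements:
        for a in elements:
            if k in reach[a]:
                reach[a] |= reach[k]

    antecedents = {a: {b for b in elements if a in reach[b]} for a in elements}

    remaining, level = set(elements), 1
    while remaining:
        # An element is assigned to the current level when everything it still
        # reaches is also among its antecedents (the usual ISM partition rule).
        top = {a for a in remaining
               if (reach[a] & remaining) <= (antecedents[a] & remaining)}
        print(f"level {level}: {sorted(top)}")
        remaining -= top
        level += 1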

The Cross Impact type model allows individuals to express probabilities of occurrence for a series of events, and conditional probabilities based upon assumptions as to which events will or will not occur. This is used to construct a quasi-causal model that allows participants to then vary the original estimates of individual events and see the consequences on the whole event set.
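The sketch below is a deliberately simplified illustration of this idea (Python, with invented events, probabilities, and an assumed fixed evaluation order): prior probabilities are replaced by conditional ones when an impacting event has occurred, and a Monte Carlo pass shows the consequences for the whole event set. A full cross impact model would combine multiple interacting impacts more carefully than this sketch does.

    # Minimal sketch of a cross-impact exercise.  Events, probabilities, impacts
    # and the evaluation order are invented; only one conditional adjustment is
    # applied per event here, which a full model would generalize.
    import random

    events = ["vaccine licensed", "funding doubled", "incidence falls"]
    prior = {"vaccine licensed": 0.3, "funding doubled": 0.5, "incidence falls": 0.4}
    # conditional[(b, a)] replaces P(b) when event a has already occurred
    conditional = {
        ("incidence falls", "vaccine licensed"): 0.8,
        ("vaccine licensed", "funding doubled"): 0.5,
    }
    order = ["funding doubled", "vaccine licensed", "incidence falls"]  # assumed causal order

    def simulate(runs=10000, seed=1):
        random.seed(seed)
        counts = {e: 0 for e in events}
        for _ in range(runs):
            occurred = set()
            for e in order:
                p = prior[e]
                for cause in occurred:
                    if (e, cause) in conditional:
                        p = conditional[(e, cause)]
                if random.random() < p:
                    occurred.add(e)
            for e in occurred:
                counts[e] += 1
        return {e: counts[e] / runs for e in events}

    print(simulate())   # respondents can now vary an estimate and re-run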

An excellent example of structural modeling to determine the important relationships and impacts on changes in medical care policies may be found in a recent article by Vennix et al. (1990). This particular example is based upon the specification of negative and positive feedback loops. The model was developed through the joint use of a paper and pencil Delphi and follow up face-to-face meetings.

All these techniques may be used in a Delphi process to help a group to develop a collaborative model of a complex situation. This is one area where the merger of the Delphi process and the computer presents a unique opportunity for dealing with situations of unusual complexity. More often than not, the individual experts who can contribute to building a complex model are geographically dispersed, and the effort to derive and improve such models is one that needs to take place over an extended period of time. In other words, improvement of the model has to be based upon feedback from its performance and incremental refinement.

A recent experiment (Hopkins, 1987) produced the very significant finding that it was possible to distinguish the degree of expertise an individual had about a complex situation by the measured richness of the models that were specified by each individual. This finding suggests the possibility of incorporating automated procedures for rating potential quality or inferred confidence in the contributions made by various individuals. This possibility deserves further investigation, as it would obviously provide improved models with a reduced communication load requirement.

Developing all the structural relationships in models of symptoms, tests, diagnosis, and treatment is an obvious area for the application of structural modeling. Appropriate techniques can be utilized on the computer to allow individuals to visualize the structures resulting from their contributed relationships and examine that structure for consistency. At the group level the same methods can be used to examine composite models for consistency and feed back inconsistencies for further refinement. Individuals are good at estimating individual relationships, but they are not always able to maintain consistency in developing complex models. The problem is compounded for group efforts.

A group can improve the nature of a model only by first seeing the results and consequences of the current design. Model building is a long term incremental process. The proper integration of Delphi methods, Computer Mediated Communications, and Structural Modeling methods makes possible effective large scale modeling efforts that are not otherwise feasible at present.



DELPHI, EXPERT SYSTEMS, GDSS, AND COLLABORATIVE SYSTEMS

The concept of an Expert System is to somehow capture the knowledge of a group of experts and store it in a computer for utilization by non-experts. The incorporation of the Delphi method in computer environments makes possible a number of significant refinements of this objective and some fundamental possible changes to the nature of Expert Systems.

The common approach to the development of an expert system is to achieve agreement among all involved experts before the actual coding of the knowledge base is performed. At present this is accomplished by a knowledge engineer or team of knowledge engineers, who must interface with a team of domain experts. Besides being time consuming, the fundamental flaw in this approach is that even within scientific and/or engineering fields, there is incomplete agreement among experts. Furthermore, agreement and disagreement are evolving properties that change dynamically over time. The Delphi method may be viewed as an alternative approach to collecting and synthesizing expert knowledge. In fact, within current terminology, the design of a Delphi is the design of a knowledge base or a structure for putting the collected information together. It has also been an important objective of Delphi design to capture disagreements as well as agreements.

Another potential problem area is that experts concerned with a common problem can be in conflict. For example, design, production, and marketing professionals can have severe conflicts about the properties of a potential new product. Different medical researchers have different views about the most promising directions for research. Some of the problems addressed here have been investigated in the work on "Multi Expert Knowledge Systems" (MKS) (LeClair, 1985, 1989). LeClair's work represents one of the few in-depth approaches to incorporating the knowledge of disagreeing experts into the same system. However, this work still assumes that the final system no longer incorporates the humans, but only their knowledge.

On the other hand, the view that we believe is the most promising is that of "Collaborative Expert Systems," where the experts are provided with a knowledge structure (a Delphi design) that allows them to dynamically contribute their knowledge to the system and to modify and evolve the system over time. Clearly, such a system is one which the experts must want to use for themselves, as well as one that serves as a tool for others who need their knowledge. This is the situation where the experts are both the creators and the users of the resulting expert system.

Without the above form of expert systems, the only feasible systems are those that restrict themselves to well established rules and agreements. In our view, the future of expert systems lies in their ultimate ability to be utilized by working groups of experts as a tool for collecting and assessing their collective knowledge about their work.

The current approach to expert systems through the use of knowledge engineers has been recognized as the chief bottleneck to the creation of these systems (Welbank, 1983; Waterman, 1986) for four main reasons:

Human expertise is usually complex, undocumented, and consists of many different types and levels of knowledge (e.g. causal knowledge, common sense, meta knowledge, etc.)

Different experts may solve the problem differently and therefore may argue or even criticize one another on the method used.

There often exists a communication barrier between the knowledge engineer and the experts. The knowledge engineer is not an expert in the area, and many experts do not understand their own problem solving process. As a result, many details and complications of the reasoning process may be ignored or obscured.

Motivation for the expert is often lacking because the results are often delayed or are not intended to benefit the expert.

Many of these problems can be overcome if one can develop collaborative design systems that focus on allowing a group of experts to develop their own expert system in an evolutionary manner and as a group oriented aid to their own work. The evolving system could also be tapped by non-experts for use. In that mode it would be considered by the experts as an aid to disseminating needed information to a wider circle of users and freeing the time of the experts for more difficult problems.

A collaborative expert system has to deal with at least four types of knowledge:

Deductive reasoning as represented by rule based models.

Inductive and intuitive reasoning representing experience on the part of experts.

Objectives, Goals, and Vested Interests which are viewpoints of experts in given circumstances.

Values and Beliefs which often underlie judgements about viewpoints.

The first two types have been typical of current expert systems. The other two areas have largely been the domain of Group Decision Support Systems, Delphi and Nominal Group Techniques, decision and utility theory, and psychological measurement methods. A collaborative expert system must be able to handle disagreement among the participants in all four of these types of knowledge.


The Deductive Level of Disagreements

At the predicate logic level, experts may disagree about both the predicates to use and the rules that are valid in the real world. A well designed knowledge acquisition and expert environment should permit experts to "speak their mind" and not limit them to a preconceived vocabulary. It is therefore necessary that the accumulation of the vocabulary for specification of the rules be an integral part of the collaborative process.

Even if experts agree on a basic vocabulary, they often disagree about subtle details of a representation. This problem occurs whenever there are several possible reference frames, a situation which is well documented in the literature (e.g. Sondheimer, 1976). Unfortunately, at the current state of the art, two relations with different numbers of arguments are treated by logic programming environments as being two completely different entities.

One approach to this problem is to allow each member of a collaborative group to construct and tailor their own knowledge base and then to superimpose an analysis system for determining various types of agreement and disagreement. There are various weighted voting procedures (Shapley and Grofman, 1985) and scaling methods (Torgerson, 1958) that are promising for analysis of this situation. Weights have been used in some expert systems (e.g. Reboh, 1983). When such information is accumulated over time, there are various "scoring" approaches (Dalkey, 1977) that may also be employed and coupled with "explanation based learning" approaches (Pazzani, 1988). Early work with the Delphi method indicated that even experts in a given area differ in expertise across various sub-domains, and that the greatest improvement in accuracy of estimates was obtained by weighting estimates by this type of difference (Dalkey, 1970).


The Inductive Level of Disagreements

One of the major problems in designing knowledge representations that reflect common sense models of the world is that the world is not a discrete and well specified place. In fact, the world is quite vague and ambiguous. Ambiguity is the key property that most people have to deal with in reaching conclusions and decisions (Daft and Lengel, 1986). Ambiguity results from differences in concepts (e.g. "expensive") among different people and, in this context, from the collaborative process itself. In many cases the problem of ambiguity can be structured as the degree to which an object "more or less belongs" to a class. Fuzzy sets (Zadeh, 1965; Klir, 1988) are a generalization of standard sets that allows for degrees of membership. One approach to this problem is to utilize fuzzy set theory to represent the types of ambiguity that result from intuitive thinking.

The major research issue in this area is to develop methods for accurately combining multiple judgements and resolving disagreements about estimates of degrees of membership in fuzzy set relations (Stephanou and Sage, 1987). In this instance, scaling methods seem particularly appropriate. Humans are good at ranking (object A belongs more than object B) but not good at direct estimates of correlation factors needed for fuzzy set relationships. However, various scaling methods can be used to convert a collaborative set of ranking measures to interval or ratio scales.

Another approach is to incorporate multivalent and fuzzy logic (Dubois and Prade, 1980) into any model framework where the expert group is building the relationships. An example of degrees of truth and the resulting treatment of logical inference from a fuzzy perspective may be found in Baldwin (1981).
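
For illustration, the standard fuzzy operators (conjunction as minimum, disjunction as maximum, negation as complement) show how a conclusion can carry a degree of truth rather than a binary value; the specific degrees below are assumed values, not data from any study:

    # Standard fuzzy logic operators applied to assumed degrees of truth.
    def fuzzy_and(a, b):
        return min(a, b)

    def fuzzy_or(a, b):
        return max(a, b)

    def fuzzy_not(a):
        return 1.0 - a

    expensive = 0.7   # degree to which the group holds the item to be expensive
    reliable  = 0.4   # degree to which the group holds the item to be reliable

    # "worth buying if reliable and not expensive", evaluated fuzzily:
    worth_buying = fuzzy_and(reliable, fuzzy_not(expensive))
    print(round(worth_buying, 2))   # 0.3 -- partially true, preserving the disagreement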

In essence, the problem is recognizing that models intended to capture intuition must also capture the structure of disagreements. A result is no longer true or false, but possibly a little of both. Rather than a group process being dedicated to eliminating disagreement, the objective is to capture it, quantify it, and integrate it into the collective model. There has always been a bias that disagreement has no place in the result of a scientific process. Because of this bias, we can become blind to the forcing of unwarranted consensus. It would be a far more realistic view of the world to recognize the necessity for disagreements and "fuzzy" relationships as a fundamental part of any model meant to reflect the collective intuitions of a group of experts.


Goal and Value Disagreements

This is the area that is typically included in applications of Group Decision Support Systems. While there are certain specific approaches (e.g. Stakeholder analysis) for eliciting this type of information, the current state of the art is largely the use of human facilitators to guide the group process for the treatment of this type of knowledge. The fundamental question of how far one can go in substituting computer facilitation for human facilitation remains very much open. Earlier experiments in this area (Turoff & Hiltz, 1982; Hiltz et. al., 1986, 1987) showed that under some circumstances computer facilitation can degrade the performance of the group.

The approach that seems most promising is to evolve a collaborative expert system that would be used to guide the meta group process. This system would suggest to the group at what points in the activity they should shift the nature of what they are doing. However, such a facility would also be tailorable by the group, so that it could gradually adapt to the preferred group process. Such a system would have to employ "default reasoning" approaches (Post, 1990).
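
Purely as a hypothetical sketch of such a facility (the rules, thresholds, and state variables are invented for illustration, and this is not a full default-reasoning system), the meta-process guide could be a small, group-editable rule set that inspects the state of the discussion and suggests when to shift activities, falling back to a default suggestion when no rule applies:

    # Hypothetical meta-process guide: group-editable rules over the discussion state.
    DEFAULT_RULES = [
        ("enough_options", lambda s: s["options"] >= 10 and not s["voted"],
         "Move from idea generation to a first voting round."),
        ("high_spread", lambda s: s["voted"] and s["vote_spread"] > 0.5,
         "Open a focused discussion on the items with the widest disagreement."),
        ("converged", lambda s: s["voted"] and s["vote_spread"] <= 0.2,
         "Summarize the areas of agreement and draft conclusions."),
    ]

    def suggest_next_step(state, rules=DEFAULT_RULES):
        for name, test, suggestion in rules:
            if test(state):
                return name, suggestion
        return "default", "Continue the current activity."

    state = {"options": 12, "voted": False, "vote_spread": 0.0}
    print(suggest_next_step(state))
    # ('enough_options', 'Move from idea generation to a first voting round.')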

As can be seen, there is no fixed dividing line between such areas as Delphi, Computer Mediated Communications, Group Decision Support Systems, and now Expert Systems. The concept of "collaborative expert systems" is really based upon the foundations established in each of these other areas. Subjective estimation, collaborative judgement formulation, and voting are strongly related fields that also contribute to the potential for design in this domain.



CONCLUSION

Delphi, as a tool, has reached a stage of maturity in that it is used fairly extensively in organizational settings, either in the paper and pencil mode or in combination with face-to-face meetings and Nominal Group Techniques. Since most of these exercises are proprietary in nature, there is not much of this activity reported in the open literature. The one exception is the set of applications in the medical field, which are in fact actively reported and documented (Fink, Kosecoff, Chassin, and Brook, 1984). This is clearly a result of the growing need to formulate collaborative judgements about the complex issues associated with producing guidelines on medical practice and decisions.

Computer Mediated Communications has also seen some very significant applications in the medical field with respect to the formulation of collaborative judgements. One of the most significant reported was the use of leading researchers in viral hepatitis to review the research literature and update guidelines for practitioners (Siegel, 1980). While this exercise was not run in an anonymous mode, it had all the other aspects of structure necessary for a dozen experts to deal with some five thousand documents and reach complete consensus on the resulting guidelines.

Another CMC application that had Delphi-like structuring with anonymity was a group therapy process to aid individuals in the cessation of smoking (Schneider, 1986; Schneider and Tooley, 1986). A general review of CMC applications in the medical field can be found in Lerch (1988).

However, there is yet to be a true merger of Delphi with Computer Mediated Communications. It is only now that the technology is becoming generally available to support the high degree of tailoring necessary to dynamically structure communications within a single conferencing system (Turoff, 1991). Most conferencing systems to date have provided only a single design structure, with very little control available to facilitators and moderators of discussions. Also, the general lack of graphics has placed a considerable limitation on which Delphi techniques could be adapted to the computer environment. The merger of Delphi and Computer Mediated Communications potentially offers far more than the sum of the two methods.

Long before the concept of Expert Systems, it was known that statistical factor models (Dalkey, 1977) applied to a large sample of expert judgements could produce performance that was consistently in the upper quarter of the performance distribution curve. Such models did not suffer from "regression to the mean" and could match the best decisions made by the best experts in the group. Expert Systems really represent the emergence of tools that allow this to be done on a fairly wide scale. However, the results of Expert System approaches, as currently practiced, are never going to be better than those of the best experts.

The merger of the Delphi Method, Computer Mediated Communications and the tools that we have discussed opens the possibility for performance of human groups that exceeds the composite performance curve. We have termed this phenomenon "collective intelligence" (Hiltz and Turoff, 1978). This is the ability of a group to produce a result that is of better quality than any single individual in the group could achieve acting alone. This rarely occurs in face-to-face groups.

A recent experiment in utilizing human judgement in conjunction with the types of models that are used in Expert Systems confirms that this is in fact possible (Blattberg and Hoch, 1990). There has been too much attention in recent years to utilizing computer technology to replace humans and far too little effort devoted to the potential for directly improving the performance of human groups. This can be achieved through integration of computer based methods and the concept of structured communications at the heart of the Delphi Method.
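
A toy version of the kind of combination studied in that experiment (the figures and the equal weighting are illustrative assumptions, not data from the study) simply averages a statistical model's forecast with a human expert's judgement:

    # Illustrative "model plus manager" combination of a forecast and a judgement.
    def combined_forecast(model_forecast, expert_forecast, model_weight=0.5):
        """Weighted combination of a statistical model and human judgement."""
        return model_weight * model_forecast + (1.0 - model_weight) * expert_forecast

    model_says  = 1200   # e.g. units predicted by a database model
    expert_says = 1500   # e.g. a manager's judgemental forecast
    print(combined_forecast(model_says, expert_says))   # 1350.0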




REFERENCES

Anderberg, M.R., Cluster Analysis for Applications, Academic Press, 1973.

Baldwin, J.F., "Fuzzy Logic and Fuzzy Reasoning," in Fuzzy Reasoning and Its Applications, ed. by E.H. Mamdani and B.R. Gaines, Academic Press, 1981.

Benbasat, I. and H.N. Taylor, "Behavioral Aspects of Information Processing for the Design of Management Information Systems," IEEE Transactions on Systems, Man and Cybernetics, (SMC-12:4), July/August 1982, 439-450.

Blattberg, R.C., and S.J. Hoch, "Database Models and Managerial Intuition: 50% Model + 50% Manager," Management Science, (36:8), August 1990, 887-899.

Bui, T. and M. Jarke, "Communications Requirements for Group Decision Support Systems," Proceedings of the Nineteenth Annual Hawaii International Conference on System Sciences, 1986, 515-523.

Conklin, J., "Hypertext: An Introduction and Survey," IEEE Computer, September 1987, 17-41.

Conklin, J., and M.L. Begeman, "gIBIS: A Hypertext Tool for Team Design Deliberation," Proceedings of the Hypertext Conference, ACM Press, 1987, 247-251.

Daft, R.L. and R.H. Lengel, "Organizational Information Requirements, Media Richness and Structural Design," Management Science, (32:5), May 1986, 554-571.

Dalkey, N.C., Group Decision Theory Report to ARPA, UCLA Engineering Report 7749, July 1977.

Dalkey, N.C., "Use of Self-Ratings to Improve Group Estimates," Journal of Technological Forecasting and Social Change, (1:3), March, 1970.

Delbecq, A.L., A.H. VandeVen and D.H. Gustafson, Group Techniques for Program Planning: A Guide to Nominal Group and Delphi Processes, Scott-Foresman & Co., 1975.

DeSanctis, G. and B. Gallupe, "A Foundation for the Study of Group Decision Support Systems," Management Science, (33:5), May 1987, 589-609.

Dubois, D. and H. Prade, Fuzzy Sets and Systems, Academic Press, 1980.

Fink, A., J. Kosecoff, M. Chassin, and R.H. Brook, "Consensus Methods: Characteristics and Guidelines for Use," American Journal of Public Health, (74:9), September 1984, 979-983.

Geoffrion, A.M., "An Introduction to Structured Modeling," Management Science, (33:5), May 1987, 547-588.

Hiltz, S.R., Online Communities: A Case Study of the Office of the Future, Ablex Press, 1984.

Hiltz, S.R., "The Virtual Classroom: Using CMC for University Teaching," Journal of Communications, (36:2), Spring 1986, 95-104.

Hiltz, S.R., "Productivity Enhancement from Computer Mediated Communications," Communications of the ACM, (31:12), December 1988, 1438-1454.

Hiltz, S.R., "Collaborative Learning: The Virtual Classroom Approach," T.H.E. Journal, (17:10), June 1990, 59-65.

Hiltz, S.R., K. Johnson and M. Turoff, "Experiments in Group Decision Making, 1: Communication Process and Outcome in Face-to-Face vs. Computerized Conferences," Human Communication Research, (13:2), Winter 1986, 225-253.

Hiltz, S.R. and M. Turoff, The Network Nation: Human Communication via Computer, Addison-Wesley, 1978.

Hiltz, S.R. and M. Turoff, "Structuring Computer-Mediated Communications to Avoid Information Overload," Communications of the ACM, (28:7), July 1985, 680-689.

Hiltz, S.R., M. Turoff and K. Johnson, "Experiments in Group Decision Making, 3: Disinhibition, Deindividuation, and Group Process in Pen Name and Real Name Computer Conferences," Journal of Decision Support Systems, (5), 1989, 217-232.

Hogarth, R.M., "Methods for Aggregating Opinions," in Decision Making and Change in Human Affairs, ed. by H. Jungermann and G. de Zeeuw, Dordrecht, Netherlands, 1977.

Hopkins, R.H., K.B. Campbell and N.S. Peterson, "Representations of Perceived Relations Among the Properties and Variables of a Complex System," IEEE Transactions on Systems, Man and Cybernetics, (SMC-17:1), January/February 1987, 52-60.

Hsu, E., "Role-Event Gaming-Simulation in Management Education: A Conceptual Framework and Review," Simulation and Games, (20:4), December 1989, 409-438.

Kiesler, S., J. Siegel, and T.W. McGuire, "Social-Psychological Aspects of Computer-Mediated Communication," American Psychologist, (39), 1984, 1123-1134.

Klir, G.J., and T.A. Folger, Fuzzy Sets, Uncertainty, and Information, Prentice Hall, Englewood Cliffs, NJ, 1988.

LeClair, S.R., "Interactive learning: A Multiexpert Paradigm for Acquiring New Knowledge," SIGART Newsletter, (108), 34-44, 1989.

LeClair, S.R., A Multiexpert Knowledge System Architecture for Manufacturing Decision Analysis, Ph.D. Dissertation, Arizona State University, Tempe, Arizona, 1985.

Lehner, P.E., M.A. Probus and M.E. Donnell, "Building Decision Aids: Exploiting the Synergy Between Decision Analysis and Artificial Intelligence," IEEE Transactions on Systems, Man and Cybernetics, (SMC-15:4), July/August 1985, 469-474.

Lendaris, G., "Structural Modeling: A Tutorial Guide," IEEE Transactions on Systems, Man & Cybernetics, (SMC-10:12), December 1980, 807-840.

Lerch, I. A., "Electronic Communications and Collaboration: The Emerging Model for Computer Aided Communications in Science and Medicine," Telematics and Informatics, (5:4), 1988, 397-414.

Linstone, H. and M. Turoff, The Delphi Method: Techniques and Applications, Addison-Wesley, 1975.

Linstone, H., Multiple Perspectives for Decision Making, Elsevier North Holland, 1984.

Lowe, D., "Cooperative Structuring of Information: The Representation of Reasoning and Debate," J. of Man-Machine Studies, (23:1), July, 1985, 97-111.

Merkhofer, M.W., "Quantifying Judgemental Uncertainty: Methodology, Experiences and Insights," IEEE Transactions on Systems, Man, and Cybernetics, (SMC-17:5), Sept./Oct. 1987, 741-752.

Nelson, T., "A File Structure for the Complex, The Changing and the Indeterminate," ACM 20th National Conference Proceedings, 1965, 84-99.

Pazzani, M.J., "Explanation-based learning for knowledge-based systems," in Knowledge Acquisition for Knowledge-Based Systems, eds. B.R. Gaines and J.H. Boose, Academic Press, 1988.

Post, S., and A.P. Sage, "An Overview of Automated Reasoning," IEEE Transactions on Systems, Man, and Cybernetics, (20:1), Jan./Feb., 1990, 202-224.

Rao, U., and M. Turoff, "Hypertext Functionality: A Theoretical Framework," International Journal of Human-Computer Interaction, (2:4), 1990, 333-358.

Reboh, R., "Extracting Useful Advice from Conflicting Expertise," Proceedings of the 8th International Joint Conference on Artificial Intelligence, William Kaufmann Inc., Los Altos, Ca., 145-150, 1983.

Rice, R.E. and Associates, The New Media: Communication, Research, and Technology. Beverly Hills: Sage, 1984.

Riesbeck, C.K., "Knowledge Reorganization and Reasoning Style," in Developments in Expert Systems, ed. by M.J. Coombs, Academic Press, 1984, 159-176.

Rohrbaugh, J., "Improving the quality of group judgement: Social judgment analysis and the Nominal Group Technique," Organizational Behavior and Human Performance, (28), 1981, 272 -288.

Schneider, S. J., and Tooley, J., "Self-Help Computer Conference," Computers and Biomedical Research, (19), 1986, 274-281.

Schneider, S. J., "Trial of an On-Line Behavioral Smoking Cessation Program," Computers in Human Behavior, (2), 1986, 277-296.

Shapley, L. and B. Grofman, "Optimal Weighting of Votes," Public Choice, 1985.

Siegel, E. R., "The Use of Computer Conferencing to Validate and Update NLM's Hepatitis Data Base," in Electronic Communications: Technology and Impact, Henderson, H., and MacNaughton, J., Eds., AAAS Selected Symposium 52, Westview Press, (1980), ISBN: 0-89158-845-0.

Sondheimer, N.K., "Spatial Reference and Natural-language Machine Control," J. of Man-Machine Studies, (8:3), 329-336, 1976.

Stephanou, H. and A.P. Sage, "Perspectives on Imperfect Information Processing," IEEE Transactions on Systems, Man and Cybernetics, Volume SMC-17, Number 5, September/October 1987, 780-798.

Streitz, N.A., "Cognitive compatibility as a central issue in human-computer interaction: Theoretical framework and empirical findings," in Cognitive Engineering in the Design of Human-Computer Interaction and Expert Systems, edited by G. Salvendy, Elsevier, 1987, 75-82.

Torgerson, W.S. Theory and Methods of Scaling, Wiley, 1958.

Turoff, M., "The Anatomy of a Compute Application Innovation: Computer Mediated Communications (CMC)," Journal of Technological Forecasting and Social Change, (36), 1989, 107-122.

Turoff, M., "Computer Mediated Communication Requirements for Group Support," Organizational Computing, (1:1), 1991, 85-113.

Turoff, M., "The Policy Delphi," Journal of Technological Forecasting and Social Change, (2:2), 1970.

Turoff, M., "Delphi Conferencing: Computer Based Conferencing with Anonymity," Journal of Technological Forecasting and Social Change, (3:2), 1972, 159-204.

Turoff, M., "Computerized Conferencing and Real Time Delphis: Unique Communication Forms," Proceedings 2nd International Conference on Computer Communications, 1974, 135-142.

Turoff, M. and S.R. Hiltz, "Computer Support for Group versus Individual Decisions," IEEE Transactions on Communications, (COM-30:1), January 1982, 82-90.

Turoff, M., U. Rao, and S.R. Hiltz, "Collaborative Hypertext and Computer Mediated Communications," Proceedings of the 24th Hawaii International Conference on Systems Science, Volume IV, IEEE Computer Society Press, 1991, 357-366.

Vennix, J.A.M., J.W. Gubbels, D. Post, and H.J. Poppen, "A Structured Approach to Knowledge Elicitation in Conceptual Model Building," System Dynamics Review, (6:2), Summer 1990, 31-45.

Warfield, J.N., "Toward interpretation of complex structural models," IEEE Transactions on Systems, Man and Cybernetics, (SMC-4), 1974, 405-417.

Waterman, D.A., A Guide to Expert Systems, Addison-Wesley, 1986.

Welbank, M., A Review of Knowledge Acquisition Techniques for Expert Systems, British Telecom Research Laboratory Report, 1983.

Zadeh, L.A., "Fuzzy Sets," Information and Control, (8), 338-353, 1965.



BIOGRAPHIES

Starr Roxanne Hiltz and Murray Turoff are Professors of Computer and Information Science at the New Jersey Institute of Technology. Since 1974, they have been jointly involved in the development and evaluation of Computer Mediated Communication Systems and associated investigations of group processes such as Delphi and Group Decision Support. They are the authors of the award-winning book "The Network Nation: Human Communication via Computer."

Dr. Turoff has been a designer of a significant number of successful Delphi studies and is responsible for the development of the Policy Delphi structure. He also developed the first Computerized Conferencing System.

Dr. Hiltz has conducted a number of large scale evaluation studies of CMC systems and is also the principal designer of the Virtual Classroom application of Computer Mediated Communication Systems.

Their current research efforts involve the incorporation and evaluation of group decision aids within CMC systems.


