%\documentstyle[11pt,twoside,informat]{article}
%\setlength{\oddsidemargin}{0pt}
%\setlength{\evensidemargin}{0pt}
%\setlength{\textheight}{8.25in}
%\setlength{\textwidth}{6.5in}
%\input{psfig}
\newcommand{\papername}{Matjaz Book}
\newcommand{\lnstretch}[1]{\renewcommand{\baselinestretch}{#1}\large\normalsize}
\newcommand{\QED}{\rule{4mm}{4mm}}
%\begin{document}
\bibliographystyle{alpha}

\title{AI Progress, Massive Parallelism and Humility}
\author{J. Geller \\
{\it Department of Computer and Information Sciences \\
New Jersey Institute of Technology, Newark, NJ 07102 \\
geller@homer.njit.edu}}
\titleodd{AI Progress, Massive Parallelism\ldots}
\authoreven{J. Geller}
\keywords{massive parallelism, knowledge representation, Connection Machine}
\abstract{In this paper we outline a view of Artificial Intelligence that lies between the extremes of ``everything is just fine in AI'' and ``AI is hopeless; we might as well give up.'' We present our own approach to Artificial Intelligence, which is based on the combination of Knowledge Representation with Massively Parallel hardware. We conclude that Massive Parallelism might be helpful for the development of Artificial Intelligence. However, the large investments necessary for the development of Massive Parallelism itself will require the determined involvement of one or even several cooperating governments.}
\date{}
\maketitle

\section{Introduction}

In contrast to most other fields of science, Artificial Intelligence is still questioned as a discipline in its entirety, even after a 41-year history. The opinions of scholars about AI cover the widest possible range. On one side, many practitioners and researchers feel that AI is developing as well as can be expected. At the other extreme, the opinion is held that AI cannot succeed; the conclusion to be drawn from this is apparently that we might as well stop working on it. The contributors to this book certainly agree that AI is {\it not} developing as well as it should. However, they disagree with each other on what the problem is and how to deal with it.

Let us briefly address the advocates of the extreme attitude that ``the problems of AI can never be solved, so we might as well give up and work on something different.'' We feel that three questions have to be raised if we want to decide whether to terminate AI as a research program. (1) Is the problem of AI important to science and humanity as a whole? (2) Is it still true that scientific investigation in every known field to date has resulted in some advancement? (3) Is there positive proof that the problems of AI {\it cannot} be solved? As long as the answers to these three questions are yes, yes, and no, respectively, we feel justified in pursuing the goals of AI.

Where do we fit in the wide spectrum of opinions about AI? Let us draw a comparison to cancer research. There is wide agreement that, from the point of view of curing cancer, there has been (too) little progress in research. Does anybody therefore advocate that we stop working on cancer research? Of course not. The problem of cancer is too important to humanity. We feel the same way about AI. AI has not made as much progress as expected, nor as much progress as its proponents have promised at different stages of its history.
But the overall problem of AI is much too important for our understanding of ourselves to give up on it. Thus, the solution that we suggest is to expend {\it more} effort on AI while promising {\it less} about it.

In Section 2 we present an argument for why we feel that AI has shown a certain lack of progress. In Section 3 we touch very briefly on the problem of overcommitment in AI. In Section 4 we finally introduce our own approach to Artificial Intelligence research. This approach is based on the combination of Knowledge Representation and Massively Parallel hardware. Our conclusion in Section 5 is that we need a return to ``big-ticket science'' in Artificial Intelligence: governments need to get involved in building the next generation of massively parallel computers for AI applications.

\section{Is there a Lack of Progress in AI?}

In 1995 the first AI book ever, {\it Computers and Thought}, was republished after 32 years \cite{Feigenbaum95}. Looking at this book, one gets the feeling that, let me say this carefully, AI has not advanced as well as other fields of computer science. Below are two quotes, one from IJCAI 95\footnote{The 1995 International Joint Conference on Artificial Intelligence, held in Montreal, Canada.}, the other one from {\it Computers and Thought}. The reader is invited to guess which one is from where.

``Both the FRS and the DBMS server were running on the same workstation, a SPARCstation 10 model 41 with 64 MB of physical memory.'' (Quote 1)

``The IPL-IV system uses about 1200 of the JOHNNIAC's 4096 words of high-speed core memory and about 650 words of the 12,288-word auxiliary drum memory.'' (Quote 2)

The reader will probably think that I am trying to pull his leg. Obviously, Quote 1 is from 1995, and Quote 2 from 1963. However, there is a point that I am trying to make. Computer architecture, processor design, memory technology, and system software have improved between the time of Quote 2 and the time of Quote 1. They have improved so strikingly, so ``screamingly,'' that the lives of all of us have been affected in almost every imaginable aspect.

Now let us try two more quotes:

``However, even if we neglect the problem of selecting which legitimate parsing is correct in a given instance, the problem of discovering any legitimate parsing is itself formidable when we deal with the tens of thousands of rules needed to describe a natural language. A complete set is, in fact, so large that none has yet been devised for any natural language, although some have been under study for thousands of years.'' (Quote 3)

``There are 19\% no-solution cases in the test. This means that 593 background verb-object pairs in the text are not sufficient to support word sense disambiguation of these cases.'' (Quote 4)

The expert in Natural Language will probably recognize with ease which one of these two quotes must be from 1995 and which one is from 1963. For the general population, however, both quotations give the feeling that things have not ``screamingly'' improved over the last 32 years in areas such as natural language understanding.

Let us try one last quote. ``Thus the statement that `George voted for ..... and is opposed to medical care for the aged' makes it more likely that George is opposed to the United Nations, though only slightly so.'' (Quote 5)

My point is not to deny the considerable advancement in all fields of Artificial Intelligence as seen from within the fields themselves.
However, looking at them from outside, from the point of view of their impact on our everyday lives, and comparing this to the phenomenal development of available computing power, we must say that core areas such as natural language understanding and scene/face recognition have not advanced as expected.

The same is true for the most important subarea of Artificial Intelligence, namely Knowledge Representation. If Wittgenstein was right, and ``meaning is use,'' then the meaning of Knowledge Representation is right now ``mostly a subfield of applied logic.'' Anybody who has doubts about that may consult the proceedings of the Knowledge Representation conferences, e.g., the recent one in Boston \cite{Aiello96}. This development is understandable. Many years of ``cute programs'' made it harder and harder to judge the correctness and originality of research. Research in applied logic is easier to judge for its originality: there are original theorems with proofs, so the research results in demonstrably correct publications.

What fields such as knowledge representation need is a return to programming. Libraries of test cases and benchmarks need to be maintained, and competitions organized, as in other AI areas, e.g., machine learning and theorem proving. Any and all published implementation papers must make their source code and their test data available over the Internet. A paper that describes a system that objectively improves on an existing system (or on its own earlier version) by 10\% must be publishable at a high-quality conference, and, if the improvement is based on a conceptual advancement, even in a high-quality journal. Metrics must be developed to compare implementations. Again, this is in no way an original request: software engineers do it all the time, but knowledge representation researchers don't. Below, a separate section will be devoted to our own approach for making progress in AI, namely, the combination of Knowledge Representation with Massive Parallelism.

Our comparisons with 1963 should not be misunderstood. Having a notion of Knowledge Representation is a considerable improvement over not having it. Having a track on knowledge representation at all major AI conferences, and a dedicated KR conference, is certainly an improvement over what was done in 1963. Yes, AI has influenced every other field in computer science, and many other areas outside of computer science, including philosophy, psychology, and linguistics. Yes, my new (cheap) Canon camera's directions for use mention that the camera uses AI (= Artificial Intelligence). There is no doubt about all these great advancements since {\it Computers and Thought}.

But why is there no Microsoft Semantic Network bundled with Windows NT? Why is there no SUN natural language help program delivered with SOLARIS? Why do I still have to swipe a card to get into my office on Sunday, instead of being able to smile into a robot camera? In fact, why do people in AI still need to go to the office on Sunday? Why do I still have to vacuum the house if I don't go to the office on Sunday? Why isn't a household robot doing it for me? All these questions indicate that we have not come far enough in AI yet, and {\it Computers and Thought} is a painful reminder of that fact. Maybe readers will not agree with my prescription(s) for progress, but let us take the (re)publication of this book as a signal that AI researchers need to re-evaluate their own methods and their progress.
Let me conclude this section by solving the above puzzles. Quote 3 and Quote 5 are from {\it Computers and Thought}; Quote 4 is from IJCAI 1995. The most interesting point about Quote 5 is that even the ``popular examples,'' then and now, are from healthcare.

\section{Humility -- a Subjective Interlude}

Learning, the ability to improve one's behavior, is considered central to any form of intelligence. So what can we learn from 41 years of AI history? Maybe what we should learn is humility. The overclaims of AI history are well known to everybody and well documented in other papers in this volume. Did this failure shape the behavior of the AI community? In my opinion, the answer is ``no.'' How often do we hear conference speakers say ``I don't know''? How often do we find conference papers where the authors stress the shortcomings of their own approaches? Not often enough, in my opinion. I think that the field overall would gain credibility if results were presented with a little more humility. As for predictions, maybe AI researchers should stop making predictions altogether.

One event that can and should teach AI researchers humility is having children. It is very humbling to see a child under the age of three process language, solve problems, and negotiate obstacle courses in a way that puts our natural language processing, planning, problem solving, vision, and robotics efforts to shame. Maybe one semester in a child-care facility should become a prerequisite for getting a PhD in AI. (This prerequisite will be waived for parents.)

\section{Massively Parallel Knowledge Representation}

\subsection{What is Massively Parallel Knowledge Representation?}

We have spent the past seven years investigating an approach to AI that combines ideas from Knowledge Representation with the power of Massive Parallelism. We call this Massively Parallel Knowledge Representation (MaPKR)\footnote{This term was initially coined for \cite{GellerDu91}. It is pronounced ``mapcar,'' which is intended to hint at its LISP heritage. Unfortunately, the abbreviation was mangled by a copy editor and came out in the final version of the paper as M\&PKR.} \cite{GellerDu91,Geller93a,Geller94a,Geller94b,Geller97} \cite{Lee93,Lee96a,Lee96b,Lee97,Lee97b}. A small number of other investigators have also produced research that can be described by this label, e.g., \cite{Evett93a,Hendler95,Kitano93a,Waltz90,Shastri89,Stoffel97,Shastri97}.

Having just said what we did about humility, we need to stress that we do not know whether MaPKR will contribute to the solution of AI's big problems. Unlike some of the papers in this volume, we cannot claim that this is a new approach; the foundations of what has been done in MaPKR go back to Fahlman \cite{Fahlman79}. It is simply our belief that this approach still deserves more attention.

We will now contrast Massively Parallel Knowledge Representation, in broad strokes, with other approaches to symbolic knowledge representation (SKR). In SKR, a difficult problem is introduced with the help of a few small examples. Then a solution is developed that is found to work on problems of the same complexity as the initial examples. When the solution is tried out on problems of considerably higher complexity, it usually does not scale up. In Massively Parallel Knowledge Representation, on the other hand, a comparatively simple problem is introduced, also with a few simple examples.
However, the solution is developed from the outset with the awareness that it will have to work for problems of considerably higher complexity than these examples. To achieve this, algorithms are developed for massively parallel hardware. If the implementations of these parallel algorithms show satisfactory runtimes, then the simple problems are replaced by more complicated problems.

Now we will contrast Massively Parallel Knowledge Representation with neural network approaches. Neural networks often have the term ``massively parallel'' in common with MaPKR. However, most neural network approaches use numeric weights that are hard to interpret. This is not the case for MaPKR. Even though our own work uses a numeric representation, as explained below, the numbers are (large) integers that can be easily interpreted at all times \cite{GellerDu91}. Other approaches to MaPKR don't use any auxiliary numeric representations. Some research straddles the borderline between MaPKR and neural networks, especially \cite{Shastri88,Shastri89,Shastri97}. We feel that only research that has been implemented on massively parallel hardware really deserves the label MaPKR; simulators are not acceptable. This also seems to be in contrast with neural network research, where simulators are often used.

\subsection{A Summary of our Research Results}

Some AI researchers have been motivated by trying to understand how people can solve certain problems so quickly, so effortlessly, so reflexively \cite{Shastri90,Shastri93,Shastri97}. AI's considerable interest in inheritance and IS-A hierarchies is rooted in this motivation. One problem that is even simpler than inheritance is transitive closure reasoning. Like inheritance, this is a ubiquitous form of human reasoning that people perform on a daily basis, seemingly without any effort. For example, humans can answer the question ``Are lions larger than pencils?'' without any hesitation. We know that lions are larger than pencils, even though we have never seen a lion together with a pencil.

There are certainly many possible explanations for how humans perform this reasoning step, and we are not trying to find out which one is the right one. What we are trying to do is to suggest a mechanism that gives results similar to those obtained by humans and that works very fast. Transitive closure reasoning seems to qualify for this purpose. We can build a reasoner that ``knows'' that lions are larger than cats, and that also knows that cats are larger than pencils. If we apply transitivity to these two assertions, the reasoner can derive the fact that lions are indeed larger than pencils.

The root of our research on transitivity reasoning was the combination of two (for us) surprising realizations. One realization was that this kind of transitive closure reasoning can actually be performed in constant time. We were made aware of this fact by a paper by Schubert {\it et al.} \cite{Schubert87}. Their solution had two problems: (1) it works only for trees; (2) for large knowledge bases, there is no good update algorithm for Schubert's representation. The second realization was that on a Connection Machine \cite{TMC88} it takes the same time to add the number 1 to one variable as to add the number 1 to each of 4096 variables, as long as every one of these variables sits on its own little processor. (This assumes that the Connection Machine in question has 4k processors. As the largest Connection Machines had 64k processors, this is a very reasonable assumption; we did most of our research on a 32k-processor Connection Machine.)
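The following minimal sketch conveys this data-parallel programming model in modern terms. It is written in Python with NumPy purely as an analogy; it is not Connection Machine code, and on a serial machine the parallelism is of course only simulated:

{\small
\begin{verbatim}
import numpy as np

# One variable per (virtual) processor.  On a 4k-processor machine
# the single data-parallel statement below corresponds to one step
# that updates all 4096 variables at once; NumPy merely simulates
# this on a serial machine.
variables = np.zeros(4096, dtype=np.int64)
variables += 1    # "add 1 to 4096 variables" as one operation
\end{verbatim}
}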
To make this more concrete, we will briefly show an example of how constant-time transitive closure reasoning is possible. Figure 1 shows a typical IS-A hierarchy. Every node in the graph represents a concept of a semantic network, and each node is annotated with a text string describing which concept is represented. Every arrow represents an IS-A (subclass) link; an arrow from Mammal to Animal indicates that every mammal is also an animal. (IS-A links are also used for inheritance; however, this is not relevant here.)

In addition, each node is annotated with a number pair. The number pair encoding next to each node is due to Schubert \cite{Schubert87}. The way the number pairs are generated is simple, but it is not relevant to this paper, and we will omit its description \cite{Schubert87}. The way the number pairs are used is described now. If the question is raised whether a Cheetah is an Animal, it can be answered in two different ways: (1) by following the chain of pointers upward from Cheetah until either Animal or the root (Thing) is found; (2) by comparing the number pair of Cheetah with the number pair of Animal. If and only if the number pair of Cheetah is contained in the number pair of Animal can we say that a Cheetah is indeed an animal. Because [7 7] is contained in [3 9], this is the case, and the result was obtained without any ``pointer chasing.''

\begin{figure*}
\centerline{\psfig{figure=geller.eps}} % Page 14 Yugi Dissertation
\caption{A typical IS-A hierarchy, annotated with Schubert number pairs.}
\end{figure*}
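In code, the containment test amounts to two integer comparisons, independent of the depth of the hierarchy. The following Python sketch is for illustration only; the pair values are taken from Figure 1, and pair generation is omitted here, as in the text \cite{Schubert87}:

{\small
\begin{verbatim}
# Illustrative sketch of IS-A testing by number-pair containment.
pairs = {
    "Animal":  (3, 9),
    "Cheetah": (7, 7),
}

def is_a(sub, sup):
    # sub IS-A sup iff sub's pair is contained in sup's pair:
    # two integer comparisons, i.e., constant time per query.
    (lo1, hi1), (lo2, hi2) = pairs[sub], pairs[sup]
    return lo2 <= lo1 and hi1 <= hi2

print(is_a("Cheetah", "Animal"))  # True, without pointer chasing
\end{verbatim}
}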
Combining the two realizations -- the possible speed of transitive closure reasoning and the power of the Connection Machine -- we found that problem (2) from above, finding an update algorithm, could easily be solved \cite{GellerDu91}. In a relatively short time all necessary algorithms were developed, implemented, tested, and published. Then we went on to attack problem (1). Even though we could draw on a very good idea that had been published shortly before \cite{Agrawal89}, we still had to find a way to parallelize it. We figured that all problems would be solved in a short time, about as much time as it had taken us to implement the tree update and transitivity algorithms \cite{GellerDu91}.

{\bf Five years and one dissertation \cite{Lee97b} later...}

Over the next several years it turned out that in advancing from trees to directed acyclic graphs (DAGs) of IS-A links, completely unexpected phenomena occurred. Just to give a flavor of what happened: we found that during updates, number pairs suddenly disappeared and other number pairs appeared out of nowhere. The PhD student working on these problems was very happy. She had found enough hard problems for a thesis \cite{Lee97b}, and there would even be some problems left over. All in all, this was our lesson in humility.

\subsection{The Demise of Massive Parallelism -- A Chance Missed?}

During our research on MaPKR, the initial CM-1 Connection Machine was replaced by the (better) CM-2 model. After that, the CM-2 was replaced by the CM-5, and this author attended one of the ``unveiling presentations'' of the new machine. To our considerable dismay, the CM-5 was not really massively parallel anymore. It was more like a tightly networked cluster of workstation processors that achieved much of its parallelism by simulating it. The model that we had access to had only 32 real processors. One has to admit that these processors were much more powerful than the tiny CM-2 processors. Still, the central idea of massive parallelism implemented in hardware had been abandoned by Thinking Machines, the makers of the Connection Machine.

The reason for this about-face was market pressure. It was much cheaper to develop the next generation of Connection Machines with off-the-shelf chips, and Thinking Machines was under considerable pressure to cut costs and turn profits. An insider's look at these difficulties appeared in \cite{Waltz97}. With the end of the Cold War and the continued budget crisis, the United States government had lost its willingness and ability to fund big-ticket research such as the Connection Machine. Thinking Machines went ``Chapter 11'' (the US bankruptcy statute) and was later re-created as a software company, losing many of its best and brightest scientists in the process.

While other researchers found it easy to convert their efforts to the coarse-grained parallelism of other computers \cite{Stoffel97}, we feel that this is not the right way to go for our research. We believe that if human-like Artificial Intelligence is ever implemented and becomes a reality, one of the components of the Solution (with a capital S) will turn out to be a form of Massively Parallel Knowledge Representation. Undoubtedly, there will be many other ingredients. Maybe all subfields of current-day AI will have contributed to the Solution. But AI researchers should not make any predictions. We just hope that massively parallel hardware, at least one order of magnitude larger than what we have seen so far, will eventually be developed and made available for Artificial Intelligence research.

\section{Conclusions}

We have presented an example of why we feel that Artificial Intelligence has not advanced as well as other areas of computer science over the last 41 years. While we share this opinion with others, we believe that the problem is not that AI is fundamentally ``unsolvable,'' but that a more concentrated effort is needed to solve it. Specifically, it is our bias that Massively Parallel Knowledge Representation is a useful area for further investigation. We believe that massively parallel computers with at least a million processors will be needed to model ``effortless'' human reasoning.

Obviously, such a large project is beyond the reach of normal channels of financing. Venture capital, bank loans, and development as part of a normal profit-oriented business cycle have failed before \cite{Waltz97} and will certainly fail at a project of this size. Governments will have to step in, and multinational cooperation might be necessary for the development of this kind of computational capability. Further development in other areas of Artificial Intelligence will also be necessary, and the integration of these different technologies will pose a nontrivial problem. With all that, we are not comfortable with ``promising'' success. At a time when government funding for research is often available only if the proposer can ``virtually guarantee'' success, we make it a point that no such guarantee is possible. After all, maybe AI researchers should stop making predictions altogether.\\

{\small
\subsection*{Acknowledgments}

Bill Rapaport initially pointed out to me that the re-publication of {\it Computers and Thought} engenders the feeling of a certain lack of progress in AI. Mike Halper commented on a draft of this paper.
\begin{thebibliography}{{Thi}88}

\bibitem[ABJ89]{Agrawal89} R.~Agrawal, A.~Borgida, and H.~V. Jagadish.
\newblock Efficient management of transitive relationships in large data and knowledge bases.
\newblock In {\em Proceedings of the 1989 ACM SIGMOD International Conference on the Management of Data}, pages 253--262, Portland, OR, 1989.

\bibitem[ADS96]{Aiello96} L.~C. Aiello, J.~Doyle, and S.~Shapiro, editors.
\newblock {\em Proceedings of the Fifth International Conference on Principles of Knowledge Representation and Reasoning (KR~'96)}, San Francisco, CA, 1996. Morgan Kaufmann.

\bibitem[EHA93]{Evett93a} M.~P. Evett, J.~A. Hendler, and W.~A. Andersen.
\newblock Massively parallel support for computationally effective recognition queries.
\newblock In {\em Proceedings of the Eleventh National Conference on Artificial Intelligence}, pages 297--302. MIT Press, Cambridge, MA, 1993.

\bibitem[Fah79]{Fahlman79} S.~E. Fahlman.
\newblock {\em NETL: A System for Representing and Using Real-World Knowledge}.
\newblock MIT Press, Cambridge, MA, 1979.

\bibitem[FF63]{Feigenbaum95} E.~A. Feigenbaum and J.~Feldman, editors.
\newblock {\em Computers and Thought}.
\newblock McGraw-Hill, New York, 1963.
\newblock Republished 1995 by AAAI with MIT Press.

\bibitem[GD91]{GellerDu91} J.~Geller and C.~Y. Du.
\newblock Parallel implementation of a class reasoner.
\newblock {\em Journal of Experimental and Theoretical Artificial Intelligence}, 3:109--127, 1991.

\bibitem[Gel93]{Geller93a} J.~Geller.
\newblock Innovative applications of massive parallelism.
\newblock {\em AAAI 1993 Spring Symposium Series Reports, AI Magazine}, 14(3):36, 1993.

\bibitem[Gel94a]{Geller94a} J.~Geller.
\newblock Advanced update operations in massively parallel knowledge representation.
\newblock In H.~Kitano and J.~Hendler, editors, {\em Massively Parallel Artificial Intelligence}, pages 74--100. AAAI/MIT Press, 1994.

\bibitem[Gel94b]{Geller94b} J.~Geller.
\newblock Inheritance operations in massively parallel knowledge representation.
\newblock In L.~Kanal, V.~Kumar, H.~Kitano, and C.~Suttner, editors, {\em Parallel Processing for Artificial Intelligence}, pages 95--113. Elsevier Science Publishers, Amsterdam, 1994.

\bibitem[GKS97]{Geller97} J.~Geller, H.~Kitano, and C.~B. Suttner, editors.
\newblock {\em Parallel Processing for Artificial Intelligence 3}.
\newblock North-Holland Elsevier, Amsterdam, 1997.

\bibitem[HCL95]{Hendler95} J.~A. Hendler, J.~Carbonell, and D.~Lenat.
\newblock Very large knowledge bases -- architecture vs.\ engineering.
\newblock In {\em Proc. of the 14th Int. Joint Conference on Artificial Intelligence}, pages 2033--2036. Morgan Kaufmann, Montreal, Quebec, 1995.

\bibitem[Kit93]{Kitano93a} H.~Kitano.
\newblock Challenges of massive parallelism.
\newblock In {\em Proc. of the 13th Int. Joint Conference on Artificial Intelligence}, pages 813--834. Morgan Kaufmann, San Mateo, CA, 1993.

\bibitem[Lee97]{Lee97b} Y.~Lee.
\newblock {\em Massively Parallel Reasoning in Transitive Relationship Hierarchies}.
\newblock PhD thesis, CIS Department, New Jersey Institute of Technology, 1997.

\bibitem[LG93]{Lee93} E.~Y. Lee and J.~Geller.
\newblock Representing transitive relationships with parallel node sets.
\newblock In B.~Bhargava, editor, {\em Proceedings of the IEEE Workshop on Advances in Parallel and Distributed Systems}, pages 140--145. IEEE Computer Society Press, Los Alamitos, CA, 1993.
\bibitem[LG96a]{Lee96a} E.~Y. Lee and J.~Geller.
\newblock Constant time inheritance with parallel tree covers.
\newblock In {\em Proceedings of the FLAIRS (Florida AI Research Symposium)}, pages 243--250, Key West, Florida, 1996.

\bibitem[LG96b]{Lee96b} E.~Y. Lee and J.~Geller.
\newblock Parallel transitive reasoning on mixed relational hierarchy.
\newblock In {\em Proceedings of the Conference on Knowledge Representation and Reasoning (KR~'96)}, Cambridge, MA, 1996.

\bibitem[LG97]{Lee97} E.~Y. Lee and J.~Geller.
\newblock Parallel operations on class hierarchies with double strand representation.
\newblock In J.~Geller, H.~Kitano, and C.~Suttner, editors, {\em Parallel Processing for Artificial Intelligence 3}, pages 69--94, Amsterdam, 1997. North-Holland Elsevier.

\bibitem[SA90]{Shastri90} L.~Shastri and V.~Ajjanagadde.
\newblock An optimally efficient limited inference system.
\newblock In {\em Proceedings of AAAI-90}, pages 563--570, Boston, MA, 1990.

\bibitem[Sha88]{Shastri88} L.~Shastri.
\newblock {\em Semantic Networks: An Evidential Formalization and its Connectionist Realization}.
\newblock Morgan Kaufmann Publishers, San Mateo, CA, 1988.

\bibitem[Sha89]{Shastri89} L.~Shastri.
\newblock Default reasoning in semantic networks: A formalization of recognition and inheritance.
\newblock {\em Artificial Intelligence}, 39(3):283--356, 1989.

\bibitem[Sha93]{Shastri93} L.~Shastri.
\newblock A computational model of tractable reasoning -- taking inspiration from cognition.
\newblock In {\em Proc. of the 13th Int. Joint Conference on Artificial Intelligence}, pages 202--207. Morgan Kaufmann, San Mateo, CA, 1993.

\bibitem[SHS97]{Stoffel97} K.~Stoffel, J.~Hendler, and J.~Saltz.
\newblock {PARKA} on {MIMD}-supercomputers.
\newblock In J.~Geller, H.~Kitano, and C.~B. Suttner, editors, {\em Parallel Processing for Artificial Intelligence 3}, pages 95--118. North-Holland Elsevier, New York, NY, 1997.

\bibitem[SM97]{Shastri97} L.~Shastri and D.~R. Mani.
\newblock Massively parallel knowledge representation and reasoning: Taking a cue from the brain.
\newblock In J.~Geller, H.~Kitano, and C.~B. Suttner, editors, {\em Parallel Processing for Artificial Intelligence 3}, pages 3--40. North-Holland Elsevier, New York, NY, 1997.

\bibitem[SPT87]{Schubert87} L.~K. Schubert, M.~A. Papalaskaris, and J.~Taugher.
\newblock Accelerating deductive inference: Special methods for taxonomies, colors and times.
\newblock In N.~Cercone and G.~McCalla, editors, {\em The Knowledge Frontier}, pages 187--220. Springer Verlag, New York, NY, 1987.

\bibitem[{Thi}88]{TMC88} {Thinking Machines Corporation}.
\newblock {\em {*LISP} Reference Manual, Version 5.0}.
\newblock Thinking Machines Corporation, Cambridge, MA, 1988.

\bibitem[Wal90]{Waltz90} D.~L. Waltz.
\newblock Massively parallel {AI}.
\newblock In {\em Proceedings of the Eighth National Conference on Artificial Intelligence}, pages 1117--1122. Morgan Kaufmann, San Mateo, CA, 1990.

\bibitem[Wal97]{Waltz97} D.~L. Waltz.
\newblock {AI} applications of massive parallelism: An experience report.
\newblock In J.~Geller, H.~Kitano, and C.~B. Suttner, editors, {\em Parallel Processing for Artificial Intelligence 3}, pages 327--339. North-Holland Elsevier, New York, NY, 1997.

\end{thebibliography}
}
%\end{document}