CHAPTER 1

INTRODUCTION


This document is the final report of a study on the feasibility of using machine intelligence, including automation and robotics, in future space missions. The 10-week study was conducted during the summer of 1980 by 18 educators from universities throughout the United States who worked with 15 NASA program engineers. The specific study objectives were to identify and analyze several representative missions that would require extensive applications of machine intelligence, and then to identify technologies that must be developed to accomplish these types of missions.

The study was sponsored jointly by NASA, through the Office of Aeronautics and Space Technology (OAST) and the Office of University Affairs, and by the American Society for Engineering Education (ASEE) as part of their continuing program of summer study faculty fellowships. Co-hosts for the study were the NASA-Ames Research Center and the University of Santa Clara, where the study was carried out. Project co-directors were James E. Long of the Jet Propulsion Laboratory and Timothy J. Healy of the University of Santa Clara.

The study was sponsored by NASA because of an increasing realization of the major role that advanced automatic and robotic devices, using machine intelligence, must play in future space missions (fig. 1.1). Such systems will complement human activity in space by accomplishing tasks that people cannot do or that are otherwise too dangerous, too laborious, or too expensive. The opportunity to develop a powerful new merger of human intellect and machine intelligence is a result of the growing capacity of machines to accomplish significant tasks. The study has investigated some of the ways this capacity may be used as well as a number of research and development efforts necessary in the years ahead if the promise of advanced automation is to be fully realized.


1.1 Survey of Artificial Intelligence


Many of the concepts and technologies considered in this study for possible use in future space missions are elements of a diverse field of research known as "artificial intelligence" or simply AI. The term has no universally accepted definition or list of component subdisciplines, but is commonly understood to refer to the study of thinking and perceiving as general information processing functions - the science of intelligence. Although, in the words of one researcher, "It is completely unimportant to the theory of AI who is doing the thinking, man or computer" (Nilsson, 1974), the historical development of the field has followed largely an empirical and engineering approach. In the past few decades, computer systems have been programmed to prove theorems, diagnose diseases, assemble mechanical equipment using a robot hand, play games such as chess and backgammon, solve differential equations, analyze the structure of complex organic molecules from mass-spectrogram data, pilot vehicles across terrain of limited complexity, analyze electronic circuits, understand simple human speech and natural language text, and even write computer programs according to formal specifications - all of which are analogous to human mental activities usually said to require some measure of "intelligence." If a general theory of intelligence eventually emerges from the AI field, it could help guide the design of intelligent machines as well as illuminate various aspects of rational behavior as it occurs in humans and other animals.

AI researchers are the first to admit that the development of a general theory of intelligence remains more a goal for the future than an accomplishment of the present. In the meantime, work is progressing in a number of more limited subdisciplines. The following seven topical research areas include most elements normally considered to be a part of the field.


1.1.1 Planning and Problem Solving


All of artificial intelligence involves aspects of planning and problem solving, a rather generic category. This includes planning and organization in the program development phase as well as the dynamic planning required during an actual mission. Problem solving implies a wide range of tasks including decision making, optimization, dynamic resource allocation, and many other calculations or logical operations that arise throughout a mission.
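In modern terms, the simplest form of such planning is a search over a space of states and actions. The following minimal sketch, not part of the original study, finds a shortest action sequence by breadth-first search; the "grasp"/"stow" actions and the sample-handling states are hypothetical, chosen only to illustrate the idea:

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search over states; returns a shortest action sequence."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, fn in actions:
            nxt = fn(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None  # no plan reaches the goal

# Hypothetical mission fragment: move a sample from the surface into storage.
# State: (sample_location, arm_holding_sample)
actions = [
    ("grasp", lambda s: ("arm", True) if s == ("surface", False) else None),
    ("stow",  lambda s: ("storage", False) if s == ("arm", True) else None),
]
print(plan(("surface", False), ("storage", False), actions))
# -> ['grasp', 'stow']
```

Real mission planners add cost functions, resource constraints, and replanning under uncertainty, but the state-space search at their core has this shape.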


1.1.2 Perception


Perception is the process of obtaining data from one or more sensors, and analyzing or processing these data to facilitate subsequent decisions or actions. One simple example is a visual perception system that views a scene, determines whether or not a specified round object is in the scene, and if so, initiates a signal which causes an automatic arm to move the object out of the scene. Perception may be electromagnetic (visual, infrared, X-ray, microwave), aural, tactile, chemical; the possibilities are virtually unlimited.


Figure 1.1.—Overview of NASA/ASEE 1980 Summer Study on Advanced Automation for Space Missions.


The basic problem in perception is to extract from a large amount of sensed data some feature or characteristic that permits object identification. If viewed scenes can contain only two possible object classes, say, round or square, then the problem of deciding which is present may be relatively simple. But if thousands of characteristics are important in the scene, the task of creating a perceptual model of sufficient richness to permit unambiguous identification may be formidable indeed.
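The round-versus-square decision above can be reduced to a single extracted feature. The sketch below, an illustration rather than anything from the study, uses circularity (4*pi*A/P^2), which is 1.0 for a circle and about 0.785 for a square; the threshold of 0.9 is an assumed value for the example:

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: equals 1.0 for a circle, pi/4 (~0.785) for a square."""
    return 4 * math.pi * area / perimeter ** 2

def classify(area, perimeter, threshold=0.9):
    """Decide between the two permitted object classes from one feature."""
    return "round" if circularity(area, perimeter) > threshold else "square"

r = 2.0  # a circle of radius 2
print(classify(math.pi * r * r, 2 * math.pi * r))  # -> round
s = 3.0  # a square of side 3
print(classify(s * s, 4 * s))                      # -> square
```

With only two classes, one feature and one threshold suffice; a scene with thousands of relevant characteristics demands correspondingly many features and a far richer perceptual model, which is exactly the difficulty the paragraph above describes.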


1.1.3 Natural Languages


One of the most difficult problems in the evolution of the digital computer has been communication between machine and human operator. The operator would like to use an everyday language—a natural language—to gain access to the computer system. But proficiency in communication between human beings, and between machines and people, requires (1) mutual intimate familiarity with contextual understanding, (2) a very large base of data, (3) linguistic inferential capability, and (4) broad utilization of jointly accepted models and symbols. The process, very complex and detailed, demands expensive computer hardware and software to achieve accurate and efficient translation between machine and human languages. Extensive research is now in progress in the AI field to better understand the fundamentals of human language and to improve the quality of communication between man and machine.


1.1.4 Expert Systems


Scientific expertise typically develops in human beings over many years of trial and error in some chosen field. So-called "expert systems" permit such individual expertise to be stored in a computer and made available to others who have not had equivalent experience. Successful programs have been developed in fields as diverse as mineral exploration, mathematical problem solving, and medical diagnosis. To generate such a system, a scientific expert consults with software specialists who ask many questions in the chosen field. Gradually, over a period of many months, the team builds a computer-based interactive dialogue system which, to some extent, makes the expert's experience available to eventual users. The system not only stores scientific expertise but also permits ready access to the knowledge base through a programmed capacity for logic and inference.

Typically, a user interrogates the expert system via a computer terminal, typing in, for example, statements about apparent symptoms in a medical case. The system may then inquire about other conditions or symptoms, request that specific tests be performed, or suggest a preliminary diagnosis, attaching a probability or level of confidence to its conclusion and supplying an explanation upon demand. Thus, user and system interact and gradually converge on an answer to some question, whether the diagnosis of an illness, the location of a mineral deposit, or the solution to a problem in mathematics.
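The core of such a system is a rule base consulted by an inference procedure. The miniature sketch below is only loosely modeled on the certainty-factor style of systems of this era; the rules, findings, and confidence values are invented for illustration and have no medical standing:

```python
# Each rule: (set of required findings, conclusion, confidence in the rule).
# These example rules are fabricated for illustration only.
RULES = [
    ({"fever", "cough"}, "flu", 0.7),
    ({"fever", "rash"}, "measles", 0.8),
]

def diagnose(findings):
    """Return (conclusion, confidence) pairs for every rule whose
    premises are all present, strongest conclusion first."""
    hits = [(c, cf) for premises, c, cf in RULES if premises <= findings]
    return sorted(hits, key=lambda h: -h[1])

for conclusion, cf in diagnose({"fever", "cough"}):
    print(f"{conclusion} (confidence {cf:.0%})")
# -> flu (confidence 70%)
```

A full expert system adds the dialogue loop the text describes: when no rule fires decisively, it asks the user for the missing finding that would best discriminate among the remaining hypotheses, and it can replay the chain of fired rules as its explanation.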


1.1.5 Automation, Teleoperation, and Robotics


Automatic devices are those that operate without direct human control. NASA has used many such machines for years for diverse purposes including antenna deployment, midflight course changes, and re-entry parachute release.

Teleoperation implies a human operator in remote control of a mechanical system. Executive signals are transmitted from controller to device over hard wires if the distance is small, as in the case of a set of master-slave arms in an isolation room (e.g., a "P4" biohazard facility, radioisotope handling, etc.). Or, control signals may travel millions of kilometers over a radio link to a device on or near another planet.

Robotic devices have the capacity to manipulate or control other devices. They may be mobile, able to move to some distant physical location where an action must be taken. Robots can be either automatic or teleoperated.


1.1.6 Distributed Data Management


Large amounts of data are involved in the operation of automatic and robotic devices. This may include control information that specifies the next action to be taken in some sequence of operations, archived data that are being transmitted from one memory bank to another, or sensed or measured data that give the status of a geographical area, the position of an actuator, or the speed of a spacecraft. The field of distributed data management is concerned with ways of organizing such data transmission and distribution so that it is accomplished rapidly, efficiently, and in a manner which best supports overall system operation, and with ways of optimizing cooperation among independent but mutually interacting databases.


1.1.7 Cognition and Learning


In this study, cognition and learning refer to the development of a machine intelligence capable of dealing with new facts, unexpected events, and contradictory information in novel situations. Many potential applications of advanced automation require a level of adaptive flexibility that is unavailable with present technology. Today's automatic computer-controlled machines handle new data by a method or approach programmed into them when they were developed. Tomorrow's more sophisticated tools may need the ability to learn, even understand, in the sense of changing their mode of operation as they encounter new situations.


1.1.8 Research and Development in Artificial Intelligence


At present, a great deal of AI theoretical research (and in some cases practical development) is in progress at institutions in the United States and throughout the world. Much of the early work in the field was accomplished at five major centers: Carnegie-Mellon University, Edinburgh University, MIT, SRI International, and Stanford University. Today, however, the list of active sites is much longer and includes, in the United States alone, such schools as the University of Illinois, the University of Massachusetts, Yale University, the University of Southern California, the University of Texas, and the University of California at Berkeley. Corporations with ongoing work include Bolt Beranek and Newman, General Motors, IBM, Lockheed, Rand, Schlumberger, Texas Instruments, and Xerox PARC. Other institutions in this country have shown increasing interest in the field. International activity is concentrated in Great Britain, Japan, and the Soviet Union, with some work under way in Canada, France, Italy, West Germany, and Sweden.

These research and development programs are necessary for the eventual success of the applications described elsewhere in this report. They are also a part of the environment which has led to NASA's current strong interest in the potential of machine intelligence in space. However, even a vigorous research effort does not necessarily imply an applications development process adaptable to future NASA needs. The technology transfer problem is further aggravated by the relative scarcity of qualified workers in the AI field. NASA may begin to alleviate this manpower crisis by directly supporting artificial intelligence and robotics research in colleges and universities throughout the United States.


1.2 History of NASA Automation Activities


Since its inception in the late 1950s, NASA has been primarily devoted to the acquisition and communication of information about the Earth, the planets, the stars, and the Universe. To this end, it has launched an impressive string of spectacularly successful exploration missions including the manned Mercury, Gemini, and Apollo vehicles and the unmanned Surveyor, Mariner, Pioneer, Viking, and Voyager spacecraft to the Moon and beyond. Numerous Earth orbiting NASA satellites have added to an immense, growing fund of useful knowledge about terrestrial resources, weather and climatic patterns, global cartography, and the oceans. Each mission has made use of some level of automation or machine intelligence.

Mission complexity has increased enormously as instrumentation and scientific objectives have become more sophisticated and have led to new problems. The Mariner 4 mission to Mars in 1965 returned about 10^6 bits of information and was considered a tremendous success. When Viking revisited the planet only a decade later, roughly 10^10 bits were returned, with greatly increased flexibility in data acquisition. Even now, the amount of data made available by NASA missions is more than scientists can easily sift through in times on the order of a decade or less. The situation can only become more intractable as mission sophistication continues to increase, if traditional data acquisition and handling techniques are retained.

A 1978 JPL study suggested that NASA could save from $500 million to $5 billion per annum by the year 2000 if the technology of machine intelligence were vigorously researched, developed, and implemented in future space missions. According to a special NASA Study Group:

"Because of the enormous current and expected advances in machine intelligence and computer science, it seems possible that NASA could achieve orders-of-magnitude improvement in mission effectiveness at reduced cost by the 1990s [and] that the efficiency of NASA activities in bits of information per dollar and in new data-acquisition opportunities would be very high" (Sagan, 1980). Modern computer systems, appropriately programmed, should be capable of extracting relevant useful data and returning only the output desired, thus permitting faster analysis more responsive to user needs.

During the next two decades there is little doubt that NASA will shift its major focus from exploration to an increased emphasis on utilization of the space environment, including public service and industrial activities. Current NASA planning for this eventuality envisions the construction of large orbital energy collection and transmission facilities and of space stations operated either in Earth or lunar orbit or on the surface of the Moon. The first steps toward space industrialization have already been taken by NASA's Skylab astronauts, who in 1973 performed a number of successful materials-processing experiments. Progress will resume when the Space Shuttle delivers the first Spacelab pallet into orbit and this line of experimentation continues.

Economy is perhaps the most important reason why robotic devices and teleoperated machines will play a decisive role in space industrialization. A conservative estimate of the cost of safely maintaining a human crew in orbit, including launch and recovery, is approximately $2 million per year per person (Heer, 1979). Since previous NASA mission data indicate that astronauts can perform only 1 or 2 hr of zero-gravity extravehicular activity (EVA) per day, the cost per astronaut is on the order of $10,000/hr, as compared to about $10-100/hr for ground-based workers. This suggests that in the near term there is a tremendous premium attached to keeping human beings on the ground or in orbital control centers, and sending teleoperated machines or robots (which are expected to require less costly maintenance) into space to physically perform most of the materials-handling jobs required for space industrialization.

In summary, the objective of NASA's space automation program is to enable affordable missions to fully explore and utilize space. The near-term technology emphasis at OAST includes:

  • Increasing operational productivity
  • Reducing cost of energy
  • Reducing cost of information
  • Enabling affordable growth in system scale
  • Enabling more cost-effective high performance missions (planetary, etc.)
  • Reducing cost of space transportation

The growth in capability of onboard machine intelligence will make possible many missions (OAST, 1980) that would be technically or economically infeasible without it. The startling success of the recent Viking and Voyager robot explorers has demonstrated the tremendous potential of spacecraft controllers even when computer memory alone is augmented. Earlier spacecraft computers were limited to carrying out activity sequences entirely predetermined by programmed instructions; the advanced Viking and Voyager machines could be reprogrammed remotely to perform missions wholly different from those originally planned - a flexibility that ultimately yielded more and better data than ever before.


1.2.1 NASA Study Group on Machine Intelligence and Robotics (1977-78)


Recognizing the tremendous potential for advanced automation in future space mission planning and development, and suspecting that NASA might not be utilizing fully the most recent results in modern computer science and robotics research, Stanley Sadin at NASA Headquarters requested Ewald Heer at the Jet Propulsion Laboratory (JPL) to organize the NASA Study Group on Machine Intelligence and Robotics, chaired by Carl Sagan. The Study Group was composed of many leading researchers from almost all major centers in the fields of artificial intelligence, computer science, and autonomous systems in the United States. It included NASA personnel, scientists who had worked on previous NASA missions, and experts in computer science who had little or no prior contact with NASA. The Study Group met as a full working group or as subcommittees between June 1977 and December 1978, devoted about 2500 man-hours to an examination of the influence of current machine intelligence and robotics research on the full range of space agency activities, and recommended ways that these subjects could assist NASA in future missions (Sagan, 1980).

After visiting a number of NASA Centers and facilities over a two-year period, the Study Group reached four major conclusions:

  • NASA is 5 to 15 years behind the leading edge in computer science and technology.
  • Technology decisions are, to a great degree, dictated by specific mission goals, thus powerfully impeding NASA utilization of modern computer science and automation techniques. Unlike its pioneering work in other areas of science and technology, NASA's use of computer science has been conservative and unimaginative.
  • The overall importance of machine intelligence and robotics for NASA has not been widely appreciated within the agency, and NASA has made no serious effort to attract bright, young scientists in these fields.
  • The advances and developments in machine intelligence and robotics needed to make future space missions economical and feasible will not happen without a major long-term commitment and centralized, coordinated support.

The Study Group recommended that NASA should adopt a policy of vigorous and imaginative research in computer science, machine intelligence, and robotics; that NASA should introduce advanced computer science technology into its Earth orbital and planetary missions, and should emphasize research programs with a multimission focus; and that mission objectives should be designed flexibly to take best advantage of existing and likely future technological opportunities.

The Study Group concluded its deliberations by further recommending that (a) the space agency establish a focus for computer science and technology at NASA Headquarters to coordinate R&D activities; (b) computer scientists should be added to the agency advisory structure; (c) a task group should be formed to examine the desirability, feasibility, and general specification of an all-digital, text handling, intelligent communication system for the transfer of information between NASA Centers; and (d) close liaison should be maintained between NASA and the Defense Mapping Agency's (DMA) Pilot Digital Operations Project because of the similarity of interests.


1.2.2 Woods Hole New Directions Workshop (1979)


Soon after the NASA Study Group on Machine Intelligence and Robotics completed its work, the NASA Advisory Council (NAC) convened a New Directions Workshop at Woods Hole in June 1979. The NAC, a senior group of scientists, engineers, sociologists, economists, and authors chaired by William Nierenberg (Director, Scripps Institution of Oceanography), had become concerned that people in the space program "might have lost some of their creative vitality and prophetic vision of the future" (Bekey and Naugle, 1980). Before setting off for Woods Hole, 30 workshop members assembled at NASA Headquarters for briefings on the agency's current program and long-range plans, the projected capabilities of the Space Transportation System, and various interesting concepts that had not yet found their way into formal NASA planning. The Workshop members then divided themselves into eight working groups, one of which, the Telefactors Working Group, was charged with examining possible future applications of very advanced automation technologies in space mission planning and implementation.

The Telefactors Working Group recognized that the cost of traditional space operations, even if transportation becomes relatively inexpensive, makes many proposed large-scale enterprises so expensive that they are not likely to gain approval in any currently foreseeable funding environment. Long delays between large investments and significant returns make such ventures even less attractive financially. The crux of these difficulties is the apparent need to carry fully manufactured machinery and equipment into space in order to generate useful output such as oxygen, water, or solar cells in situ. The Group decided to see if the feasibility of certain large-scale projects could be enhanced by using machines or machine systems able to reproduce themselves from energy and material resources already available in space. Such devices might be able to create a rapidly increasing number of identical self-replicating factories which could then produce the desired finished machinery or products. The theoretical and conceptual framework for self-reproducing automata, pioneered by von Neumann three decades ago, already exists, though it has never been translated into actual engineering designs or technological models.

The difference in output between linear and exponentiating systems can be phenomenal. To demonstrate the power of the self-replication technique in large-scale enterprises, the Telefactors Working Group assumed a sample task involving the manufacture of 10^6 tons of solar cells on the Moon for use in solar power satellites. A goal of 500 GW of generating capacity - to be produced by entirely self-contained machinery, naturally occurring lunar materials, and sunlight for energy - was established. Starting from an initial investment, estimated at $1 billion, that places a single 100-ton payload on the surface of the Moon, a nonreplicating or "linear" system would require 6000 years to make the 10^6 tons of solar cells needed - clearly an impractical project - whereas a self-replicating or "exponentiating" system needs less than 20 years to produce the same 10^6 tons of cells (fig. 1.2; Bekey, 1980).
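The arithmetic behind this comparison is easy to check. In the sketch below, the production rate per seed factory is inferred from the 6000-year linear figure, and a replication (doubling) time of one year is an assumption made for illustration; the Working Group's actual replication schedule is not reproduced here:

```python
GOAL = 1e6          # tons of solar cells required
RATE = GOAL / 6000  # tons/year one seed factory makes (~167, implied by the 6000-year figure)

# Linear system: a single factory working at a fixed rate.
linear_years = GOAL / RATE

# Exponentiating system: the factory population doubles each year
# (assumed doubling time), every factory also producing RATE tons/year,
# so cumulative output after t years is RATE * (2**t - 1).
t = 0
output = 0.0
while output < GOAL:
    t += 1
    output = RATE * (2 ** t - 1)

print(f"linear: {linear_years:.0f} years, exponential: {t} years")
# -> linear: 6000 years, exponential: 13 years
```

Even with generous allowances for slower doubling, the exponential schedule stays comfortably under the 20-year figure cited above, which is why the Group found the approach so striking.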

The Working Group concluded that replicating machine systems offer the tantalizing possibility in the near future that NASA could undertake surprisingly ambitious projects in space exploration and extraterrestrial resource utilization without the need for unreasonable funding requests from either public or private sources. In practice, this approach might not require building totally autonomous self-replicating automata, but only a largely automated system of diverse components that could be integrated into a production system able to grow exponentially to reach any desired goal. Such systems for large-scale space use would necessarily come as the end result of a long R&D process in advanced automation, robotics, and machine intelligence, with developments at each incremental stage finding wide use both on Earth and in space in virtually every sphere of technology.

The Telefactors Working Group, believing that robotics, computer science, and the concept of replicating systems could be of immense importance to the future of the space program, recommended that NASA should proceed with studies to answer fundamental questions and to determine the most appropriate development course to follow.


1.2.3 Pajaro Dunes Symposium on Automation and Future Missions in Space (1980)


Because of the burgeoning interest in machine intelligence and robotics within NASA, the decision was made in September 1979 to fund an automation feasibility study to be conducted the following year as one of the annual joint NASA/ASEE Summer Study programs. To help provide the Summer Study with a set of futuristic goals and possibilities, an interactive symposium was organized by Robert Cannon at the request of Robert Frosch to take place the week before the opening of the summer session. During 15-22 June 1980, 23 scientists, professors, NASA personnel, and science fiction authors gathered at Pajaro Dunes near Monterey, California, to consider two specific questions: (1) What goals involving self-replicating telefactors might NASA possibly pursue during the next 25, 50, or 100 years, and (2) what are the critical machine intelligence and robotics technology areas that need to be developed? (Proceedings of the Pajaro Dunes Workshop, June 1980, unpublished).

A large number of highly imaginative missions were discussed, including automatic preparation of space colonies, an automated meteor defense system for the Earth, terrestrial climate modification and planetary terraforming, space manufacturing and solar power satellites, a geostationary orbiting pinhole camera to permit high-resolution solar imaging, lunar colonies, a Sun Diver probe capable of penetrating and examining the solar photosphere, advanced planetary surface exploration, and so forth. However, Workshop participants selected four missions they regarded as most significant to NASA's future and to the development of advanced automation technology:

Mission I - Very Deep Space Probe - highly automated for Solar System exploration and eventually to be extended to include interstellar missions capable of searching for Earthlike planets elsewhere in the Galaxy

Mission II - Asteroid Resource Retrieval - retrieval of material resources from asteroids, jovian satellites, and the Moon, using mass drivers, nuclear pulse rockets, etc., for propulsion

Mission III - Hazardous Experiment ("Hot Lab") Facility - an unmanned scientific laboratory in geostationary orbit with isolation necessary to safely handle such dangerous substances as toxic chemicals, high explosives, energetic radioisotopes, and genetically engineered biomaterials

Mission IV - Self-Replicating Lunar Factory - an automated unmanned (or nearly so) manufacturing facility consisting of perhaps 100 tons of the proper set of machines, tools, and teleoperated mechanisms to permit both production of useful output and reproduction to make more factories.


Figure 1.2 - Comparison of linear and exponentiating (self-replicating) systems in production capability.


Mission IV appears to have generated the most excitement among Workshop participants, in part because it has not yet been extensively studied by NASA (or elsewhere) and the engineering problems are largely unexplored. A number of important issues were raised and concepts defined, and there was a general consensus that virtually every field of automation technology would require further development for the self-replicating factory to become a reality.

Six important robotics and machine intelligence technology categories were identified as most critical by Workshop participants:

  1. Machine vision capabilities, especially in the areas of depth perception, multispectral analysis, modeling, texture and feature analysis, and human interface
  2. Multisensor integration, including all nonvision sensing such as force, touch, proximity, ranging, acoustics, electromagnetic wave, chemical, etc.
  3. Locomotion technology to be used in exploration, extraction processes and beneficiation, with wheeled, tracked, or legged devices under teleoperated or autonomous control
  4. Manipulators, useful in handling materials both internal and external to the machine, general purpose and special purpose, teleoperated or fully automatic
  5. Reasoning or intelligence, including logical deductions, plausible inference, planning and plan execution, real world modeling, and diagnosis and repair in case of malfunction
  6. Man-machine interface, including teleoperator control, kinesthetic feedback during manipulation or locomotion, computer-enhanced sensor data processing, and supervision of autonomous systems.

1.3 Summer Study on Advanced Automation for Space Missions


Immediately following the conclusion of the Pajaro Dunes Symposium, the present summer study was convened on 23 June 1980 and completed its formal work (roughly 10,000 man-hours) on 29 August 1980. During the first two weeks of the study the group was introduced to the status of work in artificial intelligence by a series of lectures given by scientists from SRI International. A number of NASA program engineers participating in the study reviewed agency interests in relevant mission areas.

Study members then focused their work by selecting four space missions which appeared to have great potential for the use of machine intelligence and high relevance to future NASA program goals. There was no assumption that these specific missions would ever be carried out. The four teams and the missions they chose to examine were:

(a) Terrestrial applications - an intelligent Earth-sensing information system

(b) Space exploration - Titan demonstration of a general-purpose exploratory system

(c) Nonterrestrial utilization of materials - automated space manufacturing facility

(d) Replicating Systems - self-replicating lunar factory and demonstration.

The teams spent the major part of the summer elaborating their missions (summarized below), with particular emphasis on the special role that machine intelligence and robotics technology would play in these missions.

The study has produced three significant outputs, outlined briefly in the remainder of this chapter, as follows: Mission Scenarios, Advanced Automation Technology Assessment, and an Epilogue.

1.3.1 Mission Scenarios


Over the last few years literally hundreds of mission opportunities beyond the 10-year time frame have been developed by the NASA Office of Aeronautics and Space Technology and assembled into a comprehensive Space Systems Technology Model (OAST, 1980). To reduce the problem of automation technology assessment to manageable proportions, the summer study group formed four mission teams, each of which selected a single mission for concentrated attention in order to illustrate fully the potential of advanced automation. The task divisions among the teams guaranteed that all major classes of possible future NASA missions were considered, including public service, space utilization, and interplanetary exploration. A fifth group, the Space Facilities and Operations Team, consisted largely of NASA and industry personnel whose duty it was to ensure that all mission scenarios were technically feasible within the constraints of current or projected NASA launch- and ground-operations support capabilities.

(a) Terrestrial Applications Team. The Terrestrial Applications Team elected to examine a sophisticated, highly intelligent information processing and delivery system for data obtained from Earth-sensing satellites. Such a system can play an immediate and practical role in assisting people to manage local resources, and, in a broader sense, could provide continuous global monitoring that is useful in the management of the individual and collective activities of man. The mission scenario presented in chapter 2 includes basic systems descriptions and hardware requirements, a discussion of "world model" structures, and a suggested developmental timeline.

(b) Space Exploration Team. The Space Exploration Team developed the concept of a general-purpose, interstellar-capable, automated exploratory vehicle that can (1) operate in complex unknown environments with little or no a priori knowledge, (2) adapt system behavior by learning to enhance effectiveness and survivability, (3) independently formulate new scientific hypotheses by a process called abduction, (4) explore with a wide variety of sensory and effector-actuator systems, (5) coordinate distributed functions, and (6) exchange information with Earth via an entirely new form of man-machine interactive system. A demonstration mission to Titan was examined in some detail and is presented in chapter 3, including mission operational stages, hardware specifications, sensing and modeling functions, and machine intelligence and other advanced technology requirements.

(c) Nonterrestrial Utilization of Materials Team. The Nonterrestrial Utilization of Materials Team considered options for a permanent, growing, and highly automated space manufacturing capability based on the utilization of ever-increasing fractions of extraterrestrial materials. The major focus was the initiation and evolutionary growth of a general-purpose Space Manufacturing Facility (SMF) in low Earth orbit. The mission scenarios in chapter 4 include surveys of solar system resources and various manufacturing processes especially applicable to space, a description of several basic industrial "starting kits" capable, eventually, of evolving to complete independence of Earth materials resupply, and discussions of the rationales for and implications of such ventures.

(d) Replicating Systems Concepts Team. The Replicating Systems Concepts Team proposed the design and construction of an automated, multiproduct, remotely controlled or autonomous, and reprogrammable lunar manufacturing facility able to construct duplicates of itself (in addition to productive output) that would be capable of further replication. The team reviewed the extensive theoretical basis for self-reproducing automata and examined the engineering feasibility of replicating systems generally. The mission scenarios presented in chapter 5 include designs that illustrate two distinct approaches - a replication model and a growth model - with representative numerical values for critical subsystem parameters. Possible development and demonstration programs are suggested, the complex issue of closure is discussed, and the many applications and implications of replicating systems are considered at length.
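The distinction between the two approaches can be made concrete with a toy calculation. In a pure growth model a single facility enlarges itself at a roughly constant rate, so capacity grows linearly; in a replication model each facility periodically produces a duplicate that itself replicates, so capacity grows exponentially. The following sketch illustrates this difference only in the abstract - the function names, the one-duplicate-per-period assumption, and all numerical values are hypothetical illustrations, not parameters from the study:

```python
# Idealized comparison of the growth and replication scenarios.
# All names and numbers here are hypothetical, chosen only to show
# linear versus exponential scaling of total capacity.

def growth_model(initial_capacity, added_per_period, periods):
    """One facility enlarges itself at a constant rate: linear growth."""
    return initial_capacity + added_per_period * periods

def replication_model(initial_units, periods):
    """Every facility builds one duplicate each period: units double."""
    return initial_units * 2 ** periods

if __name__ == "__main__":
    # After 10 periods, the growth model has 11x its starting capacity,
    # while the replication model has multiplied its unit count 1024-fold.
    for t in (0, 5, 10):
        print(t, growth_model(100, 100, t), replication_model(1, t))
```

Even under these crude assumptions the exponential curve quickly dominates, which is why the closure question - whether a facility can make every part it needs from local materials - is so central to the replication approach.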

1.3.2 Advanced Automation Technology Assessment


A principal goal of the summer study was to identify advanced automation technology needs for mission capabilities representative of desired NASA programs in the 2000-2010 period. Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance - autonomous "world model" information systems, learning and hypothesis formation, man-machine communication, space manufacturing, teleoperators and robot systems, and computer science and technology.

Technology needs were individually assessed by considering the following sequence of questions in each case:

  1. What is the current state of the relevant technology?
  2. What are the specific technological goals to be achieved?
  3. What developments are required to achieve these goals?

After mission definition was completed (the first seven weeks), summer study personnel reorganized into formal technology assessment teams with assignments based on interest and expertise. During this phase of the study, participants focused their attention on the evaluation of advanced automation technologies required to attain the desired mission capabilities. The results of this activity are presented in chapter 6 of this report.

1.3.3 Epilogue


The purpose of the epilogue (chapter 7) is to present carefully targeted recommendations for the present NASA space program. An evolutionary NASA space program scenario was assumed by the study group, based on relevant planning documents and other information sources. The major premise was that coordinated developmental initiatives would be undertaken by the agency in the next 20 years to establish a basis for an aggressive, multidisciplinary program of space exploration and utilization early in the twenty-first century. The epilogue includes a space facilities and programs overview, specific goals for a growth scenario, and a series of recommended NASA planning options including a consistent space program strategy, technological development priorities, and updates to the OAST Space Systems Technology Model.

1.4 References


Bekey, Ivan; and Naugle, John E.: Just Over the Horizon in Space. Astronautics and Aeronautics, vol. 18, May 1980, pp. 64-76.

Heer, Ewald: Prospects for Robotics in Space. Robotics Age, vol. 1, Winter 1979, pp. 20-28.

Nilsson, Nils J.: Artificial Intelligence. Information Processing 74, vol. 4, Proceedings of IFIP Congress 74, 1974, pp. 778-801.

Office of Aeronautics and Space Technology (OAST): NASA Space Systems Technology Model: Volume III - Opportunity Systems/Programs and Technologies. NASA, Washington, D.C., May 1980.

Sagan, Carl, Chmn.: Machine Intelligence and Robotics: Report of the NASA Study Group. NASA TM-82329.