Title   : NSF 92-109  Research Priorities in Networking and Communications Research (Workshop Report)
Type    : Report
NSF Org : CISE / NCR
Date    : October 1, 1992
File    : nsf92109

RESEARCH PRIORITIES IN NETWORKING AND COMMUNICATIONS

Report to the NSF Division of Networking and Communications Research and Infrastructure by Members of the Workshop Held April 9-11, 1992, Airlie House, Virginia

NATIONAL SCIENCE FOUNDATION
Washington, D. C. 20550

The opinions expressed in this report are those of the workshop panel and do not represent NSF policy.

In accordance with Federal statutes and regulations and National Science Foundation (NSF) policies, no person on grounds of race, color, age, sex, national origin, or disability shall be excluded from participation in, denied the benefits of, or be subject to discrimination under any program or activity receiving financial assistance from the NSF.

The Foundation has TDD (Telephonic Device for the Deaf) capability which enables individuals with hearing impairment to communicate with the Division of Personnel and Management for information relating to NSF programs, employment, or general information. This number is (202) 357-7492.

Facilitation Awards for Scientists and Engineers with Disabilities (FAD) provide funding for special assistance or equipment to enable persons with disabilities (investigators and other staff, including student research assistants) to work on an NSF project. See the FAD program announcement (NSF Publication 91-54), or contact the FAD Coordinator in the Directorate for Education and Human Resources. The telephone number is (202) 357-7456.

The Foundation provides awards for research and education in most fields of science and engineering. The awardee is wholly responsible for the conduct of such research and preparation of the results for publication. The Foundation, therefore, does not assume responsibility for such findings or their interpretation.

The Foundation welcomes proposals on behalf of all qualified scientists and engineers, and strongly encourages women, minorities, and persons with disabilities to compete fully in any of the research and research-related programs described in this document.

Ordering by Electronic Mail or by FAX

If you are a user of electronic mail and have access to either BITNET or Internet, you may order publications electronically. BITNET users should address requests to pubs@nsf. Internet users should send requests to pubs@nsf.gov. In your request, include the NSF publication number and title, number of copies, your name, and a complete mailing address. Printed publications may be ordered by FAX (703/644-4278). Publications should be received within 3 weeks after receipt of request.

This program is described in the Catalog of Federal Domestic Assistance, Number 47.070, Computer and Information Science and Engineering.

PREFACE

The Networking and Communications Research program of the National Science Foundation supported a two-day workshop at Airlie House, Virginia, April 9-11, 1992. This was the second such workshop, following the first by approximately three years. Its focus was to identify major research issues in networking and communications. The workshop produced a vision of telecommunications for the future and a research path to implement that vision. The report defines the context for these priorities by addressing outstanding applications of networking and communications in today's telecommunications and information systems.
It deals with issues in education in networking and communications and identifies and discusses in detail seventeen major research topics in networking and communications that the participants concluded should be pursued over the next several years.

RESEARCH PRIORITIES IN NETWORKING AND COMMUNICATIONS

Report of a Workshop held April 9-11, 1992, Airlie House, Virginia

Sponsor: NSF Division of Networking and Communications Research and Infrastructure

Participants:

   Anthony Acampora, Columbia University
   Thomas M. Cover, Stanford University
   *G. David Forney, Jr., Motorola Codex
   Robert G. Gallager, Massachusetts Institute of Technology
   David J. Goodman, Rutgers University
   *Bruce Hajek, University of Illinois
   Robert E. Kahn, Corporation for National Research Initiatives
   Robert S. Kennedy, Massachusetts Institute of Technology
   H. T. Kung, Carnegie Mellon University
   Simon Lam, University of Texas at Austin
   Shu Lin, University of Hawaii
   Nicholas Maxemchuk, AT&T Bell Laboratories
   Michael B. Pursley, University of Illinois
   Ronald W. Schafer, Georgia Institute of Technology
   Jonathan Turner, Washington University
   William H. Tranter, University of Missouri at Rolla
   Stuart Wecker, Northeastern University
   Jack K. Wolf, University of California, San Diego

   *(co-chairs)

Contents

1. SUMMARY
2. IMPORTANCE OF NETWORKING AND COMMUNICATIONS RESEARCH
3. THE NSF PROGRAM
4. ALIGNMENT WITH NSF INITIATIVES
   HPCC and NREN
   Manufacturing
   Advanced education in networking and communications
5. FUTURE INTEGRATED SYSTEMS FOR DELIVERY OF SERVICES
   Personal communications services and networks
   Low-earth-orbit satellite networks
   Digital HDTV
   Lightwave systems
6. PRIORITIES FOR BASIC RESEARCH
   Coding and coded modulation
   Data compression
   Information theory
   Storage channels
   Modeling and system analysis
   Communications signal processing
   Radio systems and networks
   Mobile network management
   Protocol theory, design, and engineering
   Network interface architectures
   Dynamic network control
   Internetworking
   Lightwave network architectures
   Network security and survivability
   Switching systems
   Fundamental limits of networking
   Networking of applications
7. CONCLUSION

1. SUMMARY

The NSF Division of Networking and Communications Research and Infrastructure was established in 1987, shortly after the creation of the CISE Directorate. In 1989, a workshop was convened to assess the overall program and to suggest directions for networking and communications research. A similar group reconvened in April 1992 to consider the research program in the light of the evolution of technology, to update the suggestions made by the previous workshop, to reconsider the assessment of research directions vis-a-vis research opportunities and national needs, and to evaluate the educational and research base in these areas in the U.S. This document is the report of the April 1992 workshop.

The principal findings of the workshop participants are:

* The fields of networking and communications are developing rapidly, and their importance is continually increasing.
* Major new applications of both communications and networking have emerged in the past three years.
* Significant research progress has occurred.
* Important new research problems have sprung up.
* Funding levels are still well short of the needs of the field relative to its national importance.

2. IMPORTANCE OF NETWORKING AND COMMUNICATIONS RESEARCH

The importance of a state-of-the-art communications infrastructure in a modern information-oriented society has become universally recognized.
The computer and communications industries, together with the semiconductor industry, are the primary drivers of the information revolution, which is the main technological and economic development of our time. The computer and communications industries are of comparable size. Approximately two million persons are currently employed in the communications industry, and its annual gross revenues are in excess of $100 billion.

The advances in communications and networking technology have been dramatic. In just the past three years, the following have occurred:

* The NSFNET backbone has been upgraded to 45 Mb/s links; according to Merit Inc., it now reaches 820,000 host computers in 35 countries and is growing at 11% per month;
* Personal computer networks have become affordable and widely implemented;
* Cellular telephones have become ubiquitous;
* U.S. proposals for HDTV have shifted from analog to digital technology, putting the U.S. in a position to play a leadership role in its development;
* Distributed client-server computing has largely replaced centralized mainframe computing.

Our vision of the future network remains that of a fiber optic backbone with access via fiber, copper, and radio. However, scenarios that were merely dreams three years ago have been developed to the point of wide acceptance. For example:

* The National Research and Education Network (NREN) will allow computers and terminals to be interconnected via links with gigabit (one billion b/s) data rates;
* Supercomputers will be networked to solve problems of `Grand Challenge' scope;
* Personal Communications Systems/Networks (PCS/PCN) will give individuals continuous untethered voice, data, and image access to the global public network and to the NREN via small wireless communicators;
* Television, personal computers, workstations, and facsimile will evolve into multimedia devices interconnected with other systems, data bases and entertainment services worldwide.

3. THE NSF PROGRAM

The Division of Networking and Communications Research (DNCRI) is the primary source of funding for basic research in networking and communications in the U.S. The annual DNCRI budget of approximately $7 million for basic research in networking and communications, plus $1.8 million for gigabit testbeds and other programs, is clearly low relative to the national importance of communications. NSF funding of networking and communications research is less in real terms today than ten years ago, whereas the field has expanded considerably. Furthermore, funding from other traditional sources of support such as the Department of Defense is either also declining, or highly focused on a few projects.

Fewer than 100 principal investigators are being supported, and the competition for grants is fierce. In 1992, the success ratio is expected to be less than 20%, which is below average NSF levels. A significant number of `Must Fund' research proposals are not being supported. Anecdotal evidence indicates that graduate students are choosing not to enter this field, and new Ph.D.s are being discouraged from academic careers because it is so difficult to obtain adequate funding.

Experimental research and training are critical in many of the most vital research areas. The typical grant size, of the order of $70-80,000 annually, makes it almost impossible to support an experimental program, and therefore biases research towards theory.
The low level of funding often precludes use of advanced information technologies such as workstations and local area networks, frequently the very technologies that the research is addressing. System integration is often a key issue, yet because of low funding it cannot be appropriately addressed. A positive development of the past three years is the funding for gigabit testbeds, but more money is needed in the DNCRI research budget for experimental research.

New computer communications equipment is quickly being developed and deployed, increasing the urgency of building the necessary research and knowledge base. In such disparate areas as high-speed network protocols, digital HDTV, digital cellular systems, and 100 Mb/s data transmission over copper wires, there is a very real threat of adoption of outdated and inefficient technologies, potentially hobbling communications for decades.

For these reasons the workshop participants feel strongly that the NSF should not be content with the progress achieved in funding DNCRI research in the past three years, which one participant characterized as progress ``from starvation to austerity.'' The national needs in this area are too great.

4. ALIGNMENT WITH NSF INITIATIVES

Networking and communications research is strongly aligned with the Administration's High Performance Computing and Communications (HPCC) Initiative. Networking and communications research should play a role in supporting the Manufacturing Initiative as well. Finally, basic research in networking and communications is an important element in promoting a vital infrastructure for education. This section of the report explores the demands and opportunities for basic research in networking and communications implied by each of these initiatives.

HPCC and NREN

The High Performance Computing Act authorizes funding for research and development of a National Research and Education Network (NREN) for the U.S. research community, supporting transmission rates above 1 Gb/s. The NREN will greatly strengthen the nation's research infrastructure, and will stimulate the commercial development of broadband networks.

The Administration's vision of the HPCC Initiative is described in the ``Grand Challenges'' Report, a supplement to the President's FY 93 budget. The report was prepared by the Committee on Physical, Mathematical, and Engineering Sciences of the Office of Science and Technology Policy's (OSTP) Federal Coordinating Council for Science, Engineering and Technology (FCCSET). The requested HPCC budget of $800M for FY 93 includes $122.5M for the NREN. The report assigns agency responsibility for gigabit network research to the NSF.

Most of the research areas discussed at the workshop are directly related to the HPCC Initiative, from fundamental research on limits on communication and computation (there are limits imposed by factors other than the finite speed of light), through research into protocols, switching, dynamic network management, and lightwave network architectures. In addition, current research on gigabit testbeds is expected to feed directly into NREN design.

Fortunately, the NREN is being created at a time of rare consensus in industry. Asynchronous transfer mode (ATM) switching technology has been demonstrated at 600 Mb/s, and the framework is in place for a steady improvement in transmission speeds to well above a gigabit per second.
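As a rough illustration of what such link speeds imply for switching and protocol processing, the sketch below works out the cell rate and the per-cell time budget at 600 Mb/s and at 1 Gb/s, assuming the standard 53-byte ATM cell with a 48-byte payload; the figures are back-of-the-envelope, not drawn from any particular system.

    # Back-of-the-envelope ATM arithmetic: cell rate and per-cell time budget,
    # assuming standard 53-byte cells carrying 48 bytes of payload.
    CELL_BITS = 53 * 8
    PAYLOAD_BITS = 48 * 8

    for rate_bps in (600e6, 1e9):
        cells_per_sec = rate_bps / CELL_BITS
        ns_per_cell = 1e9 / cells_per_sec
        payload_bps = cells_per_sec * PAYLOAD_BITS
        print(f"{rate_bps / 1e6:6.0f} Mb/s link: {cells_per_sec / 1e6:4.2f} million cells/s, "
              f"{ns_per_cell:4.0f} ns per cell, {payload_bps / 1e6:4.0f} Mb/s of payload")

A switch port at these speeds has well under a microsecond to handle each cell, which is one reason switching systems and protocol processing appear as research priorities later in this report.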
The computer and communication industries have largely agreed on ATM as the core technology for next generation networks, and are working toward the development of effective systems. However, while it has become clear that the core technology is feasible, there remain substantial challenges that must be met if the NREN is to realize its great potential.

While much progress has recently been made in the area of bandwidth management and congestion control, there is no consensus on the best approach to the problem. Issues revolving around the high-level control of multicast virtual circuits have received very little attention. While a fully implemented NREN will have to support service connections at rates from 1 Kb/s into the gigabit range, and internal link speeds from 150 Mb/s up to perhaps 9.6 Gb/s, we lack the high-level control mechanisms needed to efficiently operate a network with such a heterogeneous mix of data rates. To allow a smooth transition to the gigabit NREN, internetworking protocols and new high-speed gateways are needed to allow traffic from current LANs and the current NSFNET to be redirected onto the NREN.

The integration of the new network technology with applications has only just begun. New software and hardware approaches are needed in workstations if applications are to enjoy the tremendous capabilities of broadband networks. New database and storage system architectures will be required to support multimedia document systems that make extensive use of images and video. Less expensive and more efficient data compression methods are needed for both storage and transmission. More cost-effective switching system designs are needed to support large-scale networks with gigabit terminal connections, and a more thorough understanding of switching system performance is needed to provide quality-of-service guarantees.

The NREN offers a great opportunity for research not only in the traditional specialties but also in systems integration, which is increasingly becoming the determining factor in the success of large systems of all types. It is no longer enough to have a system constructed from all the right pieces; the pieces must fit together into a coherent whole. The issues facing system integrators are at least as challenging as those in the various research specialties and merit more attention from academic researchers. The NREN development offers an ideal opportunity to stimulate such research and presents a challenging set of problems to be solved. The ability to develop advanced communication systems is now mostly concentrated in a few large companies. If the industry is to expand and diversify, universities must develop research and educational programs that will train the next generation of system architects.

Communication and computation are inextricably intertwined. Several of the basic research topics discussed in this report address both the ``Computing'' and the ``Communications'' components of the HPCC Initiative. Communication places limits on computation due to the finite velocity of light and the inability of large computers to be fully connected. Conversely, computation limits communication. Research areas related to the question of the fundamental limits of communication and computation include coding theory, data compression, and information theory.

Another point of contact between the Administration's vision of HPCC and the basic research program of DNCRI is magnetic recording.
To quote from the Grand Challenges report, ``Further advances in magnetic storage recording technology will underlie new technological advances and scientific breakthroughs, from laptop supercomputers to data collection from earth orbiting satellites downloading terabits of information per day...'' Several basic research issues related to magnetic recording technology, many involving coding, coded modulation and signal processing, are detailed in the section on magnetic recording in Section 6 of this report. This is not surprising because storage is simply communication from the past to the future. Many improvements in communication systems transform immediately into improvements in optical and magnetic recording systems.

Manufacturing

The mainstay of the U.S. economy is our manufacturing base, which recently has come under serious challenge from abroad. To compete effectively in this new international competitive environment, it is critical to rethink our basic approach to manufacturing, from how we train students to how the factory floor operates. The most significant capabilities which have yet to be adequately introduced into this field are the use of information technology and its coupling to custom design and flexible manufacturing. This represents a major new opportunity for the country and for the computer and communications communities.

Specifically, there are opportunities to use advanced supercomputers and workstations in cooperative interactive design, involving individuals in different organizations where necessary or desirable; to use computers and flexible robots in planning, scheduling and operations on the factory floor; to access design data bases for historical records, work in progress and relevant rules, codes and procedures; and to couple the factory to supplier networks for ``just-in-time'' manufacturing. If it is possible to optimize computer-based designs in real time, and to use networks to rapidly obtain low-cost prototypes from foundries, dramatic improvements are likely to follow in our manufacturing processes and infrastructure.

Advanced education in networking and communications

Advanced education needs to keep up with rapid advances in technologies. In the fields of networking and communications, technology changes are especially rapid; a major new thrust emerges approximately every two or three years, and new applications develop continuously. It is likely that the pace will only accelerate in the next decade. Advanced education must reflect the changes in the field in a timely way.

In particular, there exists an urgent need for educational and research programs to produce students having a background in experimental research and in systems integration. Experience in these areas is necessary if students are to be productive in an industrial setting. Thus, university research having an experimental component and research that focuses on systems-level issues should be encouraged.

NSF and the universities must enhance the synergism between research and education. While individual research projects serve important purposes in their own right, a quality research program can also enhance the educational process by serving as a catalyst for new courses, new textbooks and other teaching materials, and for curriculum development. Researchers who work at the forefront of technologies and applications should be encouraged to connect their programs closely to educational programs.

NSF has many programs that support education and institutional infrastructure directly.
These include the Educational Infrastructure Program, the Institutional Infrastructure Program, and the Instrumentation and Laboratory Improvement Program. The Engineering Education Coalitions Program, part of the Engineering Directorate, is sponsoring undergraduate curriculum revisions. Individual research projects can be tied to these programs so that timely responses to educational needs in networking and communications are ensured. For example, new course materials, lecture notes and lab workbooks can be developed around research projects, and be made available to a large number of institutions. Research investigators should also be encouraged to include undergraduate students in their ongoing projects. We specifically endorse the REU (Research Experiences for Undergraduates) program as an important tool for accomplishing this goal.

5. FUTURE INTEGRATED SYSTEMS FOR DELIVERY OF SERVICES

Advances in communications and networking technology will impact our society through the delivery of information services via new integrated systems. Four illustrative examples of such systems are described here, indicating the spectacular evolution to come. A vital basic research base is essential for the development of these integrated systems of the future.

Personal communications services and networks

The word ``personal'' entered the lexicon of information technology with the birth of personal computers in 1981. PCs took hold quickly, and are now evolving toward laptops, notebook computers, and even smaller information devices. We want our information to be personal and to be part of ourselves, not part of our home or office. The other major trend in personal computing is networking. Not only do we want to take our information with us, we also want to interact with other information resources. The popularity of cordless and cellular telephones illustrates the same trends in voice communications. Together, these trends in telecommunications and computing support a vision of information of all kinds moving between people and machines everywhere.

To realize this vision, many advances in theory and technology are needed. Successful development of personal communications services will require progress in most of the research topics identified in the workshop, including: bandwidth-efficient communication over wireless channels, data compression, system design and analysis, protocols, and distributed network management and control. Finally, integration of these technologies into an efficient, high-quality system is the ultimate challenge.

Low-earth-orbit satellite networks

Recently several U.S. companies have proposed low-earth-orbit (LEO) satellite networks to provide a service equivalent to a world-wide cellular network. Enough satellites will be in orbit so that one will always be in view; thus subscribers will always be able to reach any point on earth from any other. The low orbit, combined with modern speech compression, will permit reliable voice communication at reasonable transmitted power levels and low link delays. Such networks will also support global data services.

The transmission issues for such systems are reasonably well understood. However, they generate many challenging new problems for research in networking. The constantly changing network topology, limits on communication between satellites, and nonstationary subscriber profile lead to difficult issues in routing, switching, and dynamic distributed network management and control.
No satisfactory research base exists for such problems.

Digital HDTV

Just in the past two years, U.S. proposals for high-definition television (HDTV) have dramatically shifted from analog to digital technology. Advanced video coding techniques can compress HDTV into about 20 Mb/s (million bits per second). This shift raises the prospect of low-cost digital image manipulation and signal processing in consumer-level TV sets, and integration with future multimedia capabilities in personal computers and workstations. The U.S. hopes to leapfrog earlier analog HDTV technology proposed in Europe and Japan.

While the digital video compression technology in the several proposals now before the FCC is sophisticated, and is largely similar in all proposals, there has been no comparable development of technology for digital broadcast transmission in existing 6 MHz TV channels. The proposals use very different modulation, coding, and equalization schemes, mostly quite primitive. It would be a shame not to use the best available transmission techniques in a standard that is likely to be in use for decades. However, the digital HDTV broadcast channel is dominated by impairments quite different from those on other channels, notably co-channel interference from nearby existing (NTSC) stations. Furthermore, HDTV receivers must meet severe cost constraints at fairly high symbol rates. Research on such problems has barely begun, either in industry or in academia.

Lightwave systems

The possible uses of lightwave networks span a plethora of multimedia and multisession applications involving video, voice and data. Exactly what services will be enabled depends on the specific designs of the networks. Development of lightwave testbed networks is now appropriate. Although not necessarily operational in the usual sense of the word, these networks would enable the creation and trial of applications which capitalize on the capabilities of lightwave networks. Their development will involve a major systems integration effort.

Lightwave networks represent the same qualitative change in capability as did the existing telecommunications network when it emerged a century ago. Just as the eventual role of the latter network in our society could not be anticipated until it began to be used, neither can the role to be played by the lightwave networks considered here.

6. PRIORITIES FOR BASIC RESEARCH

A number of research areas in networking and communications were identified by workshop participants as especially important and timely. Many of them will support future development of the integrated systems described above, both by providing theory to guide development, and by strengthening the needed educational base.

Workshop participants support the current policy of funding the most worthy proposals, as judged by the traditional process of peer review by anonymous reviewers and a proposal review panel. This list of topics is therefore not to be interpreted to exclude proposals in other areas. The order of the topics is entirely arbitrary and no assignment of relative priority is implied.

Coding and coded modulation

The future communications network will be digital, and will be based on a fiber-optic backbone with almost unlimited capacity. At the same time, the use of wireless communications will continue to explode, both to provide access to the network for people and computers on the move, and in stand-alone radio networks for specialized applications. Broadcast television will become digital, in HDTV.
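The digital HDTV numbers quoted above already show why coding and modulation research matters for broadcasting. A short worked example follows, assuming roughly 20 Mb/s of compressed video carried in a single 6 MHz broadcast channel; the assumed figures are those cited earlier in this report, and the Shannon bound is the ideal limit, not a practical design point.

    # Spectral efficiency implied by ~20 Mb/s of compressed HDTV in a 6 MHz channel,
    # and the minimum SNR at which that efficiency is possible even in principle.
    import math

    bit_rate = 20e6                      # compressed HDTV bit rate (b/s), from the text
    bandwidth = 6e6                      # broadcast channel bandwidth (Hz)
    efficiency = bit_rate / bandwidth    # bits per second per hertz

    min_snr = 2 ** efficiency - 1        # from C = W * log2(1 + SNR)
    print(f"required efficiency : {efficiency:.2f} b/s/Hz")
    print(f"Shannon-limit SNR   : {10 * math.log10(min_snr):.1f} dB")

Practical modulation and coding need several dB more than this ideal figure, and must deliver it in the presence of NTSC co-channel interference and at consumer-receiver cost, which is exactly the gap the transmission research described above must close.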
More and more bits will be sent down the existing copper wires that go to the individual home or desk. New satellite communication systems will be developed. On channels with less than unlimited capacity, it is well understood that coding is required to achieve the best efficiency at low error rates. Powerful error-correcting codes are now used almost routinely in data communications and storage systems. More recently, the invention of trellis coded modulation has revolutionized communications over bandlimited channels, and is starting to be used in magnetic storage.

As a research field, coding contains both well-explored and newly emerging areas. Important current research areas include:

* A unified structure theory embracing block, convolutional, lattice and trellis codes has begun to emerge. A code is regarded as being generated by a dynamical system, whose state structure determines the structure (trellis diagram) of the code, and therefore also determines the code's decoding complexity. Codes with decomposable structure are of particular interest to simplify decoding. A goal is to construct new codes which are easy to decode and which have good distance properties.
* Suboptimal decoding algorithms that can approach optimal performance with significantly reduced complexity are likely to be the best choice for high-speed, high-performance applications. New hardware and software architectures are needed for high-speed decoding.
* Coding techniques for memoryless channels have been successfully extended to channels with intersymbol interference. New codes and suitable decoding algorithms are needed for other types of channels, such as fading and bursty channels.
* There have been exciting developments recently in Euclidean-space group and ring codes, and in Hamming-space algebraic geometry codes. Our knowledge of these classes of algebraic codes remains far from complete.
* Work in the past two years on developing quantization duals of Euclidean-space coding techniques, and vice versa, has been promising. Further development of such dual techniques should enrich both fields.
* Closer ties between synchronization, equalization, and coding are needed. As codes improve, synchronization and equalization must be maintained in the presence of more severe errors.

Data compression

Data compression (or source coding) is an important application area within communication and computer systems, and at the same time is a fundamental branch of information theory. We include here the compression of text, voice, images, video, etc. We also include both lossless compression, in which the data must be retrieved exactly from the compressed version, and lossy compression, in which some limited distortion may be introduced by the compression.

One might think that the need for compression should disappear with the decreasing cost-per-bit of both communication and storage, but in fact commercial need and interest are growing. One major reason for this is that the cost and time scale of computation (and therefore the cost and time scale of performing data compression) are falling about as rapidly as the cost of storage and communication. Another reason is that communication capacity is likely to remain scarce in wireless communication systems such as broadcast video, cellular radio, personal communication networks, and emergency systems.
A more subtle reason is that for voice and video, the major gains of compression can only be achieved with variable rate transmission, and such variable rates will be natural in broadband ISDN systems. Finally, the example of limited capacity diskettes for personal computers illustrates why compression will remain important for data storage.

Lossless data compression for sources with known statistics is a very mature science, but, despite the elegance of the Lempel-Ziv algorithm, adaptive compression for unknown sources is still not well understood. This problem is closely related to stochastic modeling and to pattern recognition, and could easily provide the key to understanding adaptive lossy compression. This is an important area for basic research.

In many applications where perfect fidelity is not required, it is of interest to remove redundancy in a controlled way through the use of signal modeling and digital signal processing techniques. While research in areas such as speech and image coding is supported mainly by other NSF programs, there are many challenging research problems requiring the blending of lossy techniques with lossless coding and channel coding. Efficient and easily implemented data compression algorithms of this type will be increasingly important in networked multimedia information systems where a wide range of signal types must be compressed/decompressed and then combined for transmission and storage.

A problem that arises in many applications is that of errors or erasures in compressed data. For example, occasional errors occur in most communication systems, and dropped cells are expected to be a major problem in broadband ISDN. Preventing error propagation in the decoding of compressed data is an important problem on which little has been done in the past. The development of coding/decoding algorithms that are insensitive to either occasional errors or dropped cells is an example of lossy compression research that is appropriate for this research program. A closely related problem occurs in data base applications where part of a compressed file must be read without reading the entire file. Adaptive compression is particularly difficult in the presence of occasional errors because the decoder's knowledge of the code itself can be corrupted by errors.

Information theory

The fundamental limits of communication theory are embodied in information theory. In particular, entropy provides an achievable lower bound on data compression, and channel capacity provides an achievable upper bound on data transmission. All data compression schemes and coding/modulation schemes are bounded by these limits. The insights gained from the evolution of this theoretical work now thoroughly permeate the design of point-to-point communication systems. In particular, information theory lies at the heart of the theory and practice of data compression, the theory and practice of channel coding, and the theory and practice of modulation and detection. In addition, information theory has made central contributions to cryptography and public key cryptosystems, to computer science (in many ways including Kolmogorov complexity), to statistics, to pattern recognition, and to statistical mechanics.

Network information theory is one of the major topics of contemporary information theory research. This includes the general question of the capacity region for networks of senders and receivers in the presence of interference and noise.
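As a small concrete instance of such a capacity region, the sketch below evaluates the classical two-user Gaussian multiple-access region, which is bounded by the two single-user capacities and a sum-rate constraint; the power and noise values are purely illustrative.

    # Two-user Gaussian multiple-access channel: the capacity region is
    #   R1 <= C(P1/N), R2 <= C(P2/N), R1 + R2 <= C((P1+P2)/N),
    # with C(x) = 0.5*log2(1+x) in bits per channel use.
    import math

    def C(snr):
        return 0.5 * math.log2(1 + snr)

    P1, P2, N = 10.0, 5.0, 1.0                    # illustrative powers and noise
    print(f"R1 alone        : {C(P1 / N):.2f} bits/use")
    print(f"R2 alone        : {C(P2 / N):.2f} bits/use")
    print(f"sum-rate limit  : {C((P1 + P2) / N):.2f} bits/use")
    # Corner point reached by successive decoding: decode user 2 first (treating
    # user 1 as noise), then decode user 1 with user 2's signal removed.
    print(f"corner (R1, R2) : ({C(P1 / N):.2f}, {C(P2 / (N + P1)):.2f}) bits/use")

Results of this kind are what is meant below by questions that are ``known in principle''; the difficulty lies in extending them to feedback, broadcast, and general network configurations, and in connecting them to the design of real networks.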
Many network-information-theoretic questions have been fully answered. For example, the multiple-access channel capacity region is known in principle, and therefore the tradeoff in rates required to accommodate many users of a shared communication medium is known. On the other hand, the same multiple-access problem with feedback has an unknown capacity region. Similarly, for the broadcast channel, which has one sender and many receivers, there is a nice theoretical characterization of the transmission region, but it is not known whether this is indeed the capacity region.

Although much progress has been made on these fundamental capacity questions, there has been a lag in the impact on the design of actual networks (except for CDMA, as discussed below). One problem is that networking is evolving in a more and more ad hoc fashion due to the pressures of rapid technological breakthroughs. Thus it is of vital importance to support and stimulate work in information theory that has a promise of providing insight and fundamental limits on performance for new kinds of communication systems, particularly those involving networks. With the need for standardization of very broad band networks, it is likely that the network structures that evolve over the next few years will persist far into the future. This adds urgency to the need to provide additional conceptual underpinnings to network design.

Some areas where information theory might provide a cohesive approach to networks are as follows:

* Code division multiple access (CDMA or spread spectrum) for cellular radio systems has evolved rapidly in recent years and is best viewed as a multiple-access channel in the information theoretic sense. Current work on detection, coding, and rate allocation uses insights from this theory, and is starting to guide the theory toward some of the real problems. Much more work is needed on multiuser coding and decoding, and on decoding of one transmission in the presence of interference from others. The current practice of regarding interferers as noise is strictly suboptimal and can be improved.
* Further theoretical work is needed on how to formulate capacity questions for networks with bursty traffic sources. In point-to-point communication, one can ignore variations in source traffic rates, since the only question is whether the maximum rate can be carried. In networks, these questions are critical because the congestion caused by bursty sources of traffic can reduce the available capacity.
* The computational complexity and delay bounds inherent in high-rate communication must be addressed theoretically.

More generally, there is a need for a unified information-theoretic theory of networks. These problems are urgent. Theoretical understanding and design are already dangerously decoupled in the network field, and work to bridge the gap is critically important.

Storage channels

Over the last several years, the principles of communication theory have seen increasing application in the design of high density computer storage systems such as magnetic disks and tapes, magneto-optic disks and optical disks. The world-wide sales of magnetic recording products produced in the United States have been steadily growing, with the 1991 gross sales figure in excess of $60 billion. Although U.S. industry is still the dominant force in this very lucrative market, foreign manufacturers are beginning to exert a strong challenge.
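To make the channel model discussed in this section concrete, the following minimal sketch generates readback samples from a two-level (saturation) write signal passed through a PR4-like (1 - D^2) partial-response channel with additive Gaussian noise; the response and noise level are illustrative stand-ins for a measured channel.

    # Toy saturation-recording readback model: binary (+1/-1) write signal,
    # PR4-like (1 - D^2) partial response, additive Gaussian noise.
    import random

    random.seed(1)
    n, sigma = 20, 0.3                       # illustrative block length and noise level
    x = [random.choice((-1, +1)) for _ in range(n)]

    y = []
    for k in range(n):
        ideal = x[k] - (x[k - 2] if k >= 2 else 0)   # noiseless PR4 target levels
        y.append(ideal + random.gauss(0.0, sigma))

    print("ideal output levels are -2, 0, +2; noisy readback samples:")
    print([round(v, 2) for v in y])

Recovering the written bits from such samples is a sequence-detection problem (a Viterbi detector in the PRML products mentioned below), and the real research challenges begin where this linear model ends: nonlinear transition interactions, signal-dependent noise, and the codes that shape the written sequence.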
The application of the concepts of modern communication theory to high density digital storage systems is still in its infancy, and many extremely challenging research problems exist in this area. Although a number of university researchers recently have begun programs in this area, the breadth and depth of the research problems to be solved could support a larger number of investigators.

To a first approximation, a magnetic recording system can be modeled as a bandlimited linear channel corrupted by additive Gaussian noise. However, the input signals can take on only two different levels (corresponding to the magnetic material being magnetized all in one direction or the other). Communication systems that achieve reliable transmission of many bits per Hertz use multilevel (as opposed to two-level) signaling. How to achieve many bits per Hertz with a two-level input signal is the key question in obtaining higher recording densities.

At high linear densities, the previously described model must be replaced by more complicated nonlinear models and signal-dependent noise. For example, if transitions are placed close together on a thin-film hard disk, the transitions partially annihilate each other, causing the resultant signal level to be lower than what would be predicted by a linear model. Experimental programs are required to make measurements upon which more accurate channel models can be based. Furthermore, when new signal processing algorithms are proposed, they must be tested in the laboratory on real disks and tape systems.

Fairly primitive signal processing systems are used in current storage systems. The result is that these systems are limited to low densities, but they have the advantage of being inexpensive to implement. With the advent of VLSI, very sophisticated signal processing systems can be fabricated inexpensively on a chip. A recently introduced hard disk product utilizes a type of advanced signal processing called PRML (partial response equalization with maximum likelihood sequence estimation). It is to be expected that newer systems will use even more advanced signal processing. There is a need for research on new signal processing techniques that will lead to dramatic improvements in density (both more bits per centimeter along a track, and also more tracks per centimeter).

Almost all present digital storage systems (magnetic, optical, or magneto-optic) utilize two distinct codes. One code is for error detection and correction, and is similar to the error control codes used in communication systems. The other is called a modulation code, and is used to control intersymbol interference and for clock recovery. (In the PRML system, the modulation code is also used to limit the memory of the Viterbi detector.) Coding theorists should be encouraged to develop new and more efficient codes for future storage systems.

Similar research problems exist with optical and magneto-optic systems. Because optical and magneto-optic systems are in an earlier stage of development, this is an opportune time for pursuing research in these areas.

Modeling and system analysis

The development of high-speed workstations, and the resulting widespread availability of substantial computing power, has significantly impacted the manner in which system design and analysis is conducted. In particular, a host of computer-aided techniques for the design, modeling, and analysis of complex communication links and networks are based on simulation methodologies.
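A minimal example of such a simulation, and of why efficiency at low error rates is an issue, is sketched below: a brute-force Monte Carlo estimate of the bit error rate of antipodal (BPSK) signaling in additive white Gaussian noise, checked against the exact Q-function result; the signal-to-noise ratio and trial count are illustrative.

    # Brute-force Monte Carlo estimate of BPSK bit error rate in AWGN,
    # compared with the exact result Pb = Q(sqrt(2*Eb/N0)).
    import math, random

    def Q(x):
        return 0.5 * math.erfc(x / math.sqrt(2))

    random.seed(0)
    ebn0 = 10 ** (6.0 / 10)                  # Eb/N0 of 6 dB
    sigma = math.sqrt(1.0 / (2 * ebn0))      # noise standard deviation, unit-energy bits

    trials, errors = 200000, 0
    for _ in range(trials):
        bit = random.choice((-1.0, +1.0))
        if ((bit + random.gauss(0.0, sigma)) >= 0) != (bit > 0):
            errors += 1

    print(f"simulated Pb = {errors / trials:.2e}")
    print(f"exact Pb     = {Q(math.sqrt(2 * ebn0)):.2e}")
    # At error rates near 1e-9, direct simulation needs on the order of 1e11
    # trials per point, which motivates importance sampling and semi-analytic methods.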
The determination of efficient techniques for simulation of digital communication systems is an important problem, especially at very low symbol error rates. Efficient simulation of wideband systems operating in nonlinear environments and of systems in which signals have bandwidths that vary significantly (as in spread-spectrum systems) are also open problems. Semi-analytic techniques should be extended to more complex system models that include the effects of fading, impulse noise, coding, diversity transmission and synchronization effects. Simulation verification is another significant research area.

Currently available computing power allows the use of the more complex system and channel models that are necessary to accurately represent real-world communication systems. For example, the analysis and design of wireless communication systems has been hampered by our inability to fully understand the behavior of radio waves in complicated channels. The use of computer graphics to describe the environment in and around a building may allow the development of a more realistic channel model that would better incorporate temporal and spatial variations of multipath components. Wideband wireless systems operating in the presence of a large number of other (interfering) systems present unique problems.

Modeling of optical communication systems involves special problems. Quantum and thermal noise models are reasonably well developed for most applications. Phase noise, and the effect of phase noise on modulated signals, is an important issue. The impact of channel nonlinearities on the performance of optical networks, and crosstalk in multiple-user systems, can be important at gigabit speeds.

Communications signal processing

Communications signal processing covers the methodologies and algorithms used for analog and digital signal processing in communications; e.g., for encoding/decoding, modulation/demodulation, or compression/decompression. Many of these algorithms are based on fundamental principles of adaptive filtering and signal estimation. Examples are algorithms for adaptive equalization, echo cancellation, suppression of interference (intentional and unintentional), and decoding.

In the past, the least-mean-squares (LMS) adaptation algorithm has been popular, due to its simplicity and ease of implementation. However, VLSI technology now provides special purpose digital signal processor (DSP) chips and application-specific integrated circuits (ASICs) which can implement much more sophisticated algorithms, such as recursive least squares (RLS) or signal subspace algorithms. New possibilities for solving communications problems therefore arise: for example, new approaches for combining channel characterization and equalization with data transmission.

Many communications signal processing applications occur in terminal equipment, where low cost and power are important. For example, in personal communication networks, high-performance algorithms that can be implemented efficiently with VLSI technology are needed. System architecture, parallel processing, and numerical precision are issues that are typically prominent in such applications.

Modern digital communication systems often require specialized signal processing that is quite different from usual DSP techniques. For example, future mobile radio systems require methods for combined decoding and equalization. Adaptive diversity-combining systems require nonlinear signal processing to combat fading and interference.
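As a concrete instance of the adaptive algorithms mentioned above, the following sketch trains a least-mean-squares (LMS) equalizer against a simple two-tap intersymbol-interference channel using a known training sequence; the channel, filter length, and step size are illustrative choices rather than a recommended design.

    # LMS adaptive equalizer trained on a known symbol sequence over a
    # two-tap ISI channel with a little additive noise.
    import random

    random.seed(2)
    channel = [1.0, 0.5]                     # illustrative ISI channel
    taps, mu, delay = 11, 0.01, 5            # equalizer length, step size, decision delay
    w = [0.0] * taps

    symbols = [random.choice((-1.0, +1.0)) for _ in range(4000)]
    received = [sum(h * symbols[k - j] for j, h in enumerate(channel) if k - j >= 0)
                + random.gauss(0.0, 0.05) for k in range(len(symbols))]

    sq_err = 0.0
    for k in range(taps, len(symbols)):
        x = received[k - taps + 1:k + 1][::-1]           # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, x))         # equalizer output
        e = symbols[k - delay] - y                       # error vs. training symbol
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]   # LMS tap update
        sq_err = 0.99 * sq_err + 0.01 * e * e            # smoothed squared error

    print(f"smoothed squared error after training: {sq_err:.4f}")

An RLS or subspace variant would converge faster at higher arithmetic cost, which is exactly the complexity-versus-performance trade-off that VLSI implementations now make practical to revisit.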
Adaptive interference suppression is likely to be required for wideband communications in the presence of narrowband interference. The design goals and performance criteria in communications applications such as these are distinctly different from those in ordinary DSP applications.

Radio systems and networks

Radio systems and networks are needed to provide voice and data communication capability between mobile terminals and to permit such terminals to have wireless access to wireline or optical fiber backbone networks. Among the commonly mentioned applications are personal communications, wireless office communications, and vehicular communications (e.g., cellular radio). In addition, wireless voice, telemetry, data, and video communications are needed in flexible manufacturing, construction, and mining operations. In many applications, access to planned and existing services (e.g., the public switched telephone network, or the NREN) is desired. Mobile radio systems and networks are also required to provide wireless point-to-point or network connections among mobile terminals, or between such terminals and a control center.

Basic research is required to design and develop wireless systems and networks that can operate efficiently and reliably over radio channels with limited bandwidth, distortion due to time- and frequency-selective fading, and severe radio-frequency interference. Simultaneous access must be provided for a large number of mobile terminals, so multiple-access radio communication techniques are required. In addition, the system may have to coexist in the same frequency band with other types of communication services, so the radio communication links are required to discriminate against radio-frequency interference. Modulation, coding, and channel access methods that can cope with interference from both external and internal sources are required for wireless networks.

Classical frequency-division multiple-access (FDMA) and time-division multiple-access (TDMA) communications, even with efficiently source-coded digital inputs, provide only moderately efficient use of a limited frequency band allocation. For bursty traffic, spread-spectrum techniques and other wideband signaling methods can provide significant improvements in bandwidth efficiency (information throughput per unit bandwidth) over FDMA and TDMA. Such techniques can also provide substantial performance improvements on channels with interference and fading, such as the cellular radio channel. Forward-error-correction coding is essential for frequency-hop spread spectrum and certain hybrid forms of spread spectrum, and it is extremely beneficial for all types of wideband and narrowband radio communication.

Specialized processing techniques need to be developed for radio receivers to enhance the reliability of the communication in the presence of interference and fading. Multipath combining, diversity combining, adaptive equalization, and other types of processing are often required. These systems require physical, link, and network protocols. In particular, for store-and-forward packet radio networks, there is significant interplay between the protocols and the modulation, coding, and receiver processing.

There has been a great deal of previous research on mobile radio systems and networks for military applications. Future work should build on this research base and carry it into the realm of commercial applications, where the requirements and constraints are often quite different from those for military systems.
Commercial mobile radio systems and networks may have to use less bandwidth, handle a significantly larger number of simultaneous transmissions, coexist with other communication services in the same frequency band, and operate in more severe fading environments.

Commercial broadcast radio uses analog modulation, either AM or FM, designed in the era of vacuum tube circuitry. The fidelity of a broadcast radio signal using analog modulation falls well short of the fidelity of the compact disk. Research into new bandwidth-efficient, high-fidelity digital audio broadcast techniques is needed.

Mobile network management

Mobile communication networks bring information to people and machines on the move. These networks need special capabilities to serve untethered terminals that change location frequently. Managing transmission resources and tracking terminal locations are two examples of network management tasks that create unique problems in mobile networks.

Terminals may be linked to the network by means of limited-bandwidth, variable-quality radio or infrared channels. The network must manage these transmission resources dynamically in order to provide the necessary bandwidth and channel quality at the places where they are needed. Resource allocation algorithms must be coupled to efficient communication protocols to ensure that the network delivers to each terminal the highest-quality communications consistent with the terminal's location and with other network activity.

After the network establishes communication with a terminal, it must respond to changes in terminal location. The network must rearrange itself without compromising vital user and network control information. This raises issues of how to detect the need for a rearrangement, how to decide on the new configuration, and how to devise protocols that are sufficiently robust to maintain communications as the network changes.

To deliver information services to a mobile terminal, the network must first determine the location of the terminal. In cellular networks, the terminal notifies the network each time it changes location. This action stimulates several network activities including message exchanges over radio links, messages through the fixed network infrastructure, and database updates. As the geographical density of network terminals grows, network cells decrease in size, and the number of location updates increases rapidly. With existing techniques, there is a danger that this location tracking activity could swamp the network control system.

Research focusing on the kinds of information the network requires to locate and establish communications with a mobile terminal is important. Some information (for example, the user profile) changes infrequently, while other (location-specific) information changes frequently. How should this information be acquired and stored? Where should it reside in the network, and how should it be transferred to where it is needed?

Radio resource management and mobility management are examples from a long list of research issues relevant to mobile information networks. Some other important topics are network security, information privacy, battery power management in terminals, radio transmission technologies, database management, and network operating systems.
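To illustrate why location tracking grows so quickly as cells shrink, the toy simulation below moves a single vehicular user under a simple random-direction mobility model and counts the one-minute steps that land in a new square cell, each of which would trigger a location update; the speed, cell sizes, and mobility model are illustrative only.

    # Toy estimate of location-update rate versus cell size for one mobile user.
    # The user moves at constant speed with slowly drifting direction; each
    # one-minute step that ends in a different square cell counts as one update.
    import math, random

    random.seed(3)

    def updates_per_hour(cell_km, speed_kmh, hours=200):
        x = y = 0.0
        heading = random.uniform(0, 2 * math.pi)
        updates = 0
        for _ in range(hours * 60):
            heading += random.gauss(0.0, 0.5)
            step = speed_kmh / 60.0
            nx, ny = x + step * math.cos(heading), y + step * math.sin(heading)
            if (math.floor(nx / cell_km) != math.floor(x / cell_km) or
                    math.floor(ny / cell_km) != math.floor(y / cell_km)):
                updates += 1
            x, y = nx, ny
        return updates / hours

    for cell_km in (10.0, 1.0, 0.1):
        print(f"{cell_km:5.1f} km cells: ~{updates_per_hour(cell_km, 30.0):5.1f} updates/hour")

Multiplying such per-user rates by millions of terminals makes clear why smarter registration strategies, and careful placement of location databases, are research issues rather than implementation details.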
Protocol theory, design, and engineering

Today's computer network architectures and protocols are inadequate for many of tomorrow's needs, specifically the need to create gigabit per second communication paths using high performance fiber-optic links and switching technology. The design and engineering of high performance networks must be based upon a sound theoretical foundation. We must develop techniques for the design, specification, analysis, performance characterization, implementation, testing, maintenance, and modification of network architectures and communication protocols. Recent advances in the theory of protocols are beginning to provide an understanding of many facets of protocol behavior and interaction. As the demands for more effective network communications increase, research in the theory of protocols must continue so that we will have a strong foundation upon which to base future designs.

A network, or an internetwork, is necessarily the composition of a large number of interacting protocols. The communication service offered by the network is the result of the interaction of its many protocols. The protocols in the structure perform specified functions, interacting with each other through well-defined interfaces. To meet the objectives of network architectures, it is important to understand how these interfaces should be defined, specified, and satisfied. The individual protocols should be designed with desirable qualities for reuse, portability, efficiency, correctness, modification, and maintainability. To build reliable networks, we must be able to prove the correctness not only of individual protocols, but of the entire collection of interacting protocols. A complete understanding of protocols and interfaces is important for the development of a sound theoretical basis for composing protocols in the construction of computer networks. We must understand how to design and manage such complex software structures, in which many protocol components are usually designed, implemented, modified, and maintained by different groups of individuals.

Most protocols are designed to perform multiple functions and provide multiple services. This is done for efficiency but, in many cases, results in very complex protocols. Usually very little can be proved about their correctness and properties. Methods are needed to synthesize multifunction protocols from relatively simple ones that implement individual functions. Specifically, techniques to add new functions to a protocol, without affecting its original functions or correctness, would be extremely useful. Conversely, methods to remove specific functions and unnecessary code from an existing protocol to make it more efficient (lightweight), without affecting the remaining functions, would also let us tailor protocols to specific operating environments. In this way protocols could ``adapt'' to characteristic changes in underlying switching and transmission technology.

The development of such methods will facilitate the structured implementation of protocols, create implementations where the issues of interoperability between different protocol implementations with similar functional components can be understood, and address the issues of portability of protocols across implementation platforms. Formal models are, therefore, needed that provide a clear understanding of the theoretical concepts of protocol refinement, projection and conversion, as well as interface semantics and protocol composition.
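As a small illustration of what formal composition buys, the sketch below models a simplified alternating-bit sender and receiver as communicating state machines over ideal one-slot channels, composes them, and exhaustively searches the joint state space for deadlocks; the machines and the notation are illustrative, not a proposed specification language.

    # Compose two communicating state machines (a simplified alternating-bit
    # sender and receiver over ideal one-slot channels) and search the joint
    # state space for deadlocks: reachable global states with no enabled move.
    from collections import deque

    def moves(state):
        (phase, sbit), rbit, c_sr, c_rs = state
        succ = []
        if phase == "send" and c_sr is None:              # sender emits a data frame
            succ.append((("wait", sbit), rbit, ("data", sbit), c_rs))
        if phase == "wait" and c_rs == ("ack", sbit):     # sender consumes its ack
            succ.append((("send", 1 - sbit), rbit, c_sr, None))
        if c_sr is not None and c_rs is None:             # receiver acknowledges a frame
            bit = c_sr[1]
            succ.append(((phase, sbit), 1 - rbit if bit == rbit else rbit, None, ("ack", bit)))
        return succ

    initial = (("send", 0), 0, None, None)
    seen, frontier, deadlocks = {initial}, deque([initial]), []
    while frontier:
        s = frontier.popleft()
        nexts = moves(s)
        if not nexts:
            deadlocks.append(s)
        for t in nexts:
            if t not in seen:
                seen.add(t)
                frontier.append(t)

    print(f"reachable global states: {len(seen)}; deadlocks found: {len(deadlocks)}")

Real protocol suites have state spaces far too large for this kind of brute-force enumeration, which is precisely why the refinement, projection, and composition concepts called for above matter.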
Research advances are needed in protocol specification notations and semantic models of protocol behavior. Research is also needed to develop, from these notations and models, protocol specification, testing, and verification methods that can be used in protocol design and engineering tools. We envision a future in which network protocols, having been formally verified to work as intended, are cataloged and stored in a library. Their interfaces would be formally specified in such a way that their source codes are portable, reusable and easily modifiable.

New high performance fiber optic channels and communication switches are forcing us to rethink the architecture and structure of computer networks. Perhaps the old layering paradigm is wrong and even incompatible with these new high performance networks. As the price/performance ratio of processors continues to decrease, we see more processor capability in front-end controllers and increasing use of coprocessors. We must rethink the decomposition of functions within computer systems and among their processing elements. Our rethinking must be coupled with our development of the foundations of the theory of protocols and formal models. We must understand the interaction among the protocols and the resultant engineering implementation structures.

We must also encourage innovative network architecture and protocol designs. New models for analyzing the performance characteristics of these architectures and protocols are required. They must have a direct relationship to the performance of the applications themselves. Appropriate metrics for evaluating and comparing protocols should also be studied. Finally, the interfaces offered by operating systems for protocol implementation and the interfaces offered by a network to its applications must be investigated if we are to create the high performance computing environments of the future.

Network interface architectures

Network interfaces are responsible for connecting networks to hosts and other networks, and are critical to the performance and usefulness of networks. The network interface can in fact be viewed as the heart of a network architecture, in that the interface defines the end-user's model of the network. In addition, network interfaces reveal many fundamental and intrinsic system issues in networking, such as interoperability, and are often the most appropriate place to resolve them.

The primary function of host interfaces is to move data between the attached network and the host memory. It is well known that most common transport protocols are not performance bottlenecks. For instance, efficient TCP/IP implementations now take as few as 100 CPU instructions to process each packet. However, in preparing packets for transmission and in acting on received packets, the CPU incurs considerable overhead related to the cache architecture, data copies, page locking/unlocking, context switching, interrupts, etc. The environment in which protocols are executed, rather than the protocols themselves, is where performance improvements are most needed.

Unfortunately, network interfaces have usually been approached in an ad hoc manner, and have not received sufficient attention from the networking research community in general. As a result, host interfaces and network gateways have been performance bottlenecks in almost all high-speed networks. For example, many current workstations already have high-performance I/O buses with sustained bandwidths of several hundred megabits per second.
However, due to deficiencies in workstation network interfaces, only a small fraction of this bandwidth has actually been realized across networks.

Besides achieving high performance, network interfaces should be designed to support network usage models and various control functions. For example, a network interface can be designed to stripe traffic over a collection of uniform low-speed channels; a network interface can be designed to support page-level shared memory programming models; and a network gateway can be designed to propagate congestion information from networks to hosts to support effective congestion control.

Another major and growing concern is high-speed network interfacing with special-purpose systems, such as massively parallel computers, high-resolution printers, high-bandwidth storage systems, and high-performance visualization stations. Networks have made it possible for these systems to be used as shared resources and have encouraged their development. However, a special-purpose system is typically not designed to implement high-speed network interfacing functions. An additional network interface unit is needed to provide the required capabilities, such as high-performance buffering and protocol processing. Interfacing with a special-purpose system presents some major challenges. For example, when interfacing with a parallel processor array, the data distribution from the interface to individual processors on the array is usually done in an ad hoc manner. There is a lack of high-level protocol support for primitives such as distributing an array evenly among multiple processors. Identifying common network I/O requirements for different types of special-purpose systems would be a useful first step in streamlining network interfaces with these systems.

Examples of research topics in network interface architectures include: workstation I/O architectures to support high-speed network interfaces; low-latency (sub-10-microsecond) end-to-end inter-host communication; integration of the architectural development of hosts with that of local, metropolitan, and wide area networks; and new network usage models.

Dynamic network control

Much research is needed to determine how to configure and control the large high-speed networks of the future. In particular, flow control, admission control, and routing algorithms need to be developed. New network control techniques are needed to support new applications with diverse communication requirements, and to exploit new technology for data links and switches. The techniques should be dynamic: adapting the network operating mode in response to changes in network resources or demand. Future networks approaching gigabit transmission speeds are a driving force for much of the research on dynamic network control. However, a largely different set of dynamic network control techniques arises in other contexts, such as in support of ground- or satellite-based mobile information networks, described elsewhere in this report.

New challenges are posed by the increased ratio of propagation delay to bit duration. A coast-to-coast gigabit link contains 15 megabits in transit. Predictive rate control, based on modern automatic control concepts, needs to be developed. At least in the short run, the increase in network speed will also cause a large variation in data-rate requirements. A small number of high-speed sources could generate extremely bursty traffic loads.
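The figure of roughly 15 megabits in transit follows from the bandwidth-delay product. The short calculation below is only an illustration; the path length and propagation speed are assumed round numbers, not figures taken from the report.

    # Illustrative bandwidth-delay calculation (assumed round numbers).

    link_rate_bps  = 1e9     # 1 gigabit per second
    path_length_m  = 3.0e6   # ~3,000 km coast-to-coast fiber path (assumed)
    prop_speed_mps = 2.0e8   # light in silica fiber, roughly two-thirds of c

    one_way_delay_s = path_length_m / prop_speed_mps      # 0.015 s
    bits_in_flight  = link_rate_bps * one_way_delay_s     # 1.5e7 bits

    print(f"one-way delay: {one_way_delay_s*1e3:.0f} ms")
    print(f"bits in transit: {bits_in_flight/1e6:.0f} megabits")

A control loop must therefore commit on the order of this much data before any feedback can possibly arrive, which is one reason predictive, rather than purely reactive, rate control is of interest.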
High-speed applications of a network are likely to pose stringent requirements, including a need for guarantees on end-to-end delay, throughput, support of bursty traffic, and reliability. ``Best-effort'' delivery by networks may not be adequate, so provisions for negotiating service should be considered. Research providing insights and basic control methods should be conducted. Some research may be tied to particular new transmission formats, such as asynchronous transfer mode (ATM). Research is needed in the whole spectrum of switching techniques, from datagram packet switching to circuit switching, including a wide variety of virtual-circuit methods. Cost and complexity considerations may dictate the use of high-speed switches that occasionally drop packets or block circuits in the face of high congestion. Traffic modeling, dynamic traffic control, and network sizing techniques are needed to ensure satisfactory end-to-end performance in the face of possible packet loss.

Research on network control techniques should be driven in large measure by an integrated system viewpoint. First, the control techniques should be designed and ultimately assessed for use in a network, not just for use on a single link or connection. Second, the control techniques should be suitable for existing or anticipated technology and should work in conjunction with a complete set of protocols. Implementation requirements, including communication and computational demands, should be assessed, whether the techniques are distributed or centralized. Because of the difficulty of implementing high-speed networks, it is unreasonable to implement every control technique that is evaluated. However, analyzing implementation requirements, particularly computational demands, without constructing networks frequently yields inaccurate conclusions that overlook important aspects of the problems. Therefore, to the extent possible, analytic work on high-speed networks should be performed in conjunction with physical experiments that verify whether or not the approach is reasonable.

Internetworking

The developments in local area networks (LANs), new high performance wide area networks (WANs), and carrier service offerings have significantly changed the structure of computer networks. Backbone networks no longer consist of simple homogeneous structures, single LANs, or uniform packet-switched WANs. Networks in many organizations have evolved into very complex structures: interconnections of subnetworks, carrier services, network architectures, communication protocols, and computing systems from a diverse set of vendors. The challenge of internetworking is to interconnect these components into an infrastructure in which the information and computing resources within each computing system are accessible to others in a controlled, predictable, and manageable way.

Today's network environments are very complex. New standards are continually in development; hardware and software vendors are offering new high performance products; and carriers are evolving their service offerings. Current network architectures will continue to exist and expand because of their special functional capabilities. IBM's SNA, DEC's DECnet, TCP/IP, ISO standards, X.25, and others will be with us for the foreseeable future. The performance of LANs continues to increase, with reduced component costs as products mature. Vendors have packaged technologies and standards in exciting ways: integrated LANs, bridges, routers, and wiring hubs.
Carriers are now offering high performance services, including SMDS, Frame Relay, and ATM. Technologies and components are linked through internetworking, creating new infrastructures for network communications.

For these internetworked structures to be successful and operate to our expectations, we must develop a theoretical basis for their design. We must understand the characteristics and interface specifications of the paths to be linked and the resultant specification of the composite path. Much of this work shares a common basis with the developments in protocol theory and design: composition, specification, verification, and interface semantics. Depending on the network requirements, paths are either linked directly or linked through intermediate or encapsulating paths. In many of these cases, paths of very different speeds, flow characteristics, and error management are being connected. We must understand how each path can effectively link to and use the flow and congestion management algorithms of the other. We must be able to predict the consequences of linking two incompatible flow management and/or routing algorithms and of interconnecting networks of vastly differing speeds and buffering characteristics.

In many cases protocols must be transformed between interconnected networks. We must understand how interfaces are mapped across the transformation, how the protocol states and messages are mapped, and how correctness of the composite protocols is ensured. We need a basis for verifying this protocol conversion mapping. The failure and recovery modes of the composite path through the linked protocols must be specified and verifiable. Predicting the efficiency of such composite structures requires the development of network performance models.

Since any network must be managed and controlled, we must develop techniques for, and an understanding of, the management and control of composite networks, including issues of resource naming, error detection, network problem diagnosis, and failure recovery. In addition, issues of security and access control must be studied. Techniques for linking individual network security and access control mechanisms into a useful and verifiable composite mechanism need to be developed. Through the development and understanding of protocol and path mappings, conversions, and compositions, we will be able to create composite networks that utilize the best and most appropriate technologies and offer the required services for network applications. The developments in internetworking are crucial to the success of the high performance complex network infrastructures we envision in the next decade.

Lightwave network architectures

Optical fiber has emerged as the medium of choice for point-to-point transmission systems because the low-loss, low-dispersion properties of single-mode silica fiber allow transmission of information at much higher rates and over much greater unrepeated distances than does copper wiring of any form. However, the fundamental architecture of the nationwide telecommunications infrastructure has remained essentially that which evolved during the pre-photonic era: point-to-point transmission systems interconnecting multiplexing/demultiplexing equipment and hierarchical digital switches, with fiber simply displacing earlier technologies as the physical transmission medium.
By contrast, the bandwidth of the potentially addressable optical spectrum in low-loss fiber is thousands of gigahertz, and the goal of research into lightwave networks is the identification of system architectures and supporting device technologies that would enable the sharing of this enormous optical spectrum among a large number of interconnected hosts. As an example, the rigid switching hierarchy might be replaced by access stations which communicate peer-to-peer over the multichannel optical medium. The network could support multimedia traffic through a combination of clear-channel circuit switching, virtual circuit packet switching, and optical datagrams, with each host having access to a large number of simultaneous connections. The bandwidth available to each host should be limited only by the electro-optic constraint: the maximum sustainable rate at which information can be modulated onto or demodulated from the optical medium. The clear-channel service should support point-to-point and broadcast communications in arbitrary transmission formats, and the packet-oriented virtual circuit and datagram services should support any type of multimedia traffic. The network should scale in terms of service area, number of interconnected hosts, available capacity, and capacity per host (which should slowly increase as the electro-optic constraint is relaxed with advancing technology).

Such a lightwave network will consist of a non-processing optical medium spanning a local, metropolitan, or wide-area service region, along with access stations connecting hosts (single or multiplexed) to the medium. Many independent high-speed channels may be operated simultaneously over the medium, each providing connections between selected access stations. In this way, a fundamentally new wide-area multimedia telecommunications infrastructure can be created, which will provide an extraordinary environment in support of high-performance computing and communications and multimedia information services.

We need innovative new network architectures that can unleash the capacity potential of the core network despite speed-limited opto-electronic parts. Such approaches must invoke concurrency, i.e., the ability to simultaneously transport a multitude of messages distinguishable on the basis of their frequency/wavelength (FDMA/WDMA), delay (TDMA), or waveform (CDMA). Furthermore, the strong interdependency between system architectures and the potential capabilities of lightwave devices cannot be overstated: the physics of device technologies sets constraints on permissible system architectures. The architectures to be considered for lightwave networks must take these constraints into account.

Many basic research issues need study, and new analytical methodologies must be developed to permit quantitative performance comparisons among alternative, non-traditional architectures. Traffic control and performance management are expected to be fundamental issues as more network functions are moved to the periphery. Modularity (the ability to add hosts to the network without disrupting existing equipment and connections) must be addressed. Technologies to dynamically reconfigure the network in response to time-varying traffic patterns or to the failure of network nodes or links must be developed, along with techniques to enable self-diagnostics and ``hitless healing.'' Approaches to a ``universal port,'' offering bandwidth upon demand, must be formulated and studied.
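As a purely illustrative sketch of the concurrency idea, and not an architecture endorsed by this report, consider a broadcast-and-select WDMA medium in which each access station transmits on its own fixed wavelength and each receiver tunes to the wavelength of the sender it wishes to hear. The station count and wavelength assignment below are hypothetical.

    # Minimal sketch of broadcast-and-select WDMA concurrency (hypothetical setup):
    # each station owns one transmit wavelength; a receiver selects a sender by
    # tuning to that sender's wavelength, so disjoint sender/receiver pairs can
    # communicate simultaneously with no switching inside the optical medium.

    N = 8  # number of access stations (assumed)
    tx_wavelength = {station: station for station in range(N)}  # one channel each

    def deliver(transmissions, tuning):
        """transmissions: {sender: data}; tuning: {receiver: selected sender}."""
        received = {}
        for receiver, wanted_sender in tuning.items():
            channel = tx_wavelength[wanted_sender]
            # The receiver's optical filter passes only the selected channel.
            if wanted_sender in transmissions:
                received[receiver] = (channel, transmissions[wanted_sender])
        return received

    # Three concurrent, non-interfering connections over the shared fiber:
    print(deliver({0: "video", 3: "file", 5: "voice"}, {1: 0, 2: 3, 7: 5}))

Scaling such a scheme is exactly where the research issues above arise: the number of available wavelengths, the tuning speed of the receivers, and the coordination protocol among stations all constrain the achievable concurrency.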
Support of basic research into lightwave networks will provide the telecommunications industry with the tools to create major new networking capabilities. The potential of lightwave networks for integrated voice, data, image, and video multimedia telecommunications, and for local, metropolitan, and wide-area networks, is not incremental but revolutionary. The superabundance of capacity offered by lightwave networks will enable a wide variety of new bandwidth-intensive applications.

Network security and survivability

Recent attacks on data networks, for example using computer viruses, have highlighted the problem of network security. Meanwhile, networks are being used for functions that require greater security and privacy, such as financial transactions. A great deal of research has been directed toward techniques for encrypting data to ensure secrecy; however, networks remain vulnerable. Too often there is a weak point in the protocols that exchange information or in the operation of the network that enables an intruder to gain access to the information without cracking the code. Research is needed to determine the conditions under which information is and is not accessible.

As more and more hosts are networked together, it is unreasonable to expect that they will all have the same high standards of security. An intruder who has penetrated a network at a weak point may then be able to compromise its seemingly more secure parts. To guarantee that the effects of any security violation will not spread to other parts of the network, research is needed into protocols and network architectures for secure information exchange; in particular, protocols for access control functions such as authentication, authorization, and authority delegation are important (an illustrative sketch of one such function appears at the end of this subsection).

The notions of ``secrecy'' and ``privacy'' are intuitively meaningful. However, networks are extremely complex, and not every means of violating the security of a network is known. Therefore, what it means for a network or protocol to be ``secure'' is still an open question. The consequences of security violations are so severe that we should at least verify that protocols for secure information exchange can survive known techniques of attack. In order to deal with the complexity of networks, methods founded upon well-defined semantic models are needed.

Even in the absence of malicious attacks, networks fail for many reasons. As communication networks grow in size and complexity, they are failing more and more often, because:
* high-rate lightwave links have resulted in fewer links in our networks, which has increased the seriousness of failures;
* the complexity of the software and hardware in the network has increased, resulting in more undetected bugs;
* technology is changing and new equipment is being introduced at an increasing rate, resulting in the interaction of different types of equipment; and
* competitive networks are being interconnected, resulting in differences in maintenance procedures.

Research is required to provide a better quantitative understanding of failure modes, new approaches for failure prevention and containment, and accelerated recovery procedures. Recent disasters involving large telephone centers, caused by the accidental excavation of fiber optic links and by software failures, indicate the importance of this problem area.
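As a purely illustrative sketch of the authentication function mentioned earlier in this subsection, the fragment below shows a challenge-response exchange based on a shared secret and a keyed hash. The primitives and message format are assumptions made only for the illustration, not mechanisms recommended by the workshop; the point is that the secret itself never crosses the network, so a passive eavesdropper observes only a one-time challenge and its keyed digest.

    # Illustrative challenge-response authentication over a shared secret
    # (hypothetical message format; primitives chosen only for this sketch).

    import hmac, hashlib, secrets

    SHARED_KEY = b"per-host secret provisioned out of band"  # assumption

    def server_issue_challenge() -> bytes:
        # A fresh random challenge defeats simple replay of old responses.
        return secrets.token_bytes(16)

    def client_respond(challenge: bytes) -> bytes:
        return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

    def server_verify(challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = server_issue_challenge()
    assert server_verify(challenge, client_respond(challenge))

Even for a scheme this small, the open questions raised above remain: how to state formally what ``secure'' means for the exchange, and how to verify that the protocol survives known techniques of attack when composed with the rest of a network's protocols.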
Switching systems

Switching systems are a central element in modern communication networks, providing the primary vehicle for controlling the inherent diseconomy of scale associated with large networks. Technological changes in recent years have raised many new challenges that affect both the theory and practice of switching system design.

Several major telecommunications vendors are developing ATM switching systems that route virtual circuits of various data rates along fixed paths through a switching fabric. Effective operation of such systems requires an understanding of virtual circuit blocking that we currently lack. While multicast promises to be an important element of broadband networks, we have no effective mechanisms for determining the probability of multicast virtual circuit blocking (a baseline calculation for the simplest point-to-point case appears later in this subsection). This is an area where path selection algorithms can have a profound effect on performance and system cost, yet very little is known about the relative merits of different approaches.

The queueing behavior of multistage networks is only beginning to be understood. Analytical queueing models can accurately predict the capacity of small networks with Bernoulli arrivals, but cannot predict cell loss in most realistic system configurations with acceptable accuracy. Modeling of bursty traffic is completely beyond the capabilities of current methods. The trade-offs between accuracy and computation cost have received little attention. A better understanding of these issues is crucial to providing quality-of-service guarantees in emerging broadband networks.

While a variety of switching system architectures have been proposed for broadband networks, quantitative comparisons that could clarify the relative merits of the various architectures have been lacking. Architectural differences are often hidden by implementation, technology, and performance differences. A better understanding of these issues is needed to allow designers of emerging networks (including the NREN) to make the best possible choices.

The control and operation of switching systems for broadband networks has received limited attention to date. Signaling protocols for networks supporting dynamic multicast virtual circuits are being developed, but there is no adequate understanding of the distributed algorithms needed to set up and maintain such circuits in the presence of rapidly changing needs. Large switching systems are likely to require multiprocessor control architectures to handle virtual circuit set-up, but there has as yet been little work on how to design switch control software that can extract the best possible performance from such systems.

ATM switching systems supporting link speeds of 150 Mb/s and 600 Mb/s have been demonstrated. It appears likely that speeds in the range of 1-10 Gb/s can be achieved with CMOS or BiCMOS technology, although significant challenges in packaging, heat dissipation, reliability, synchronization, and cost may have to be overcome. The design of switching systems that can effectively span the spectrum of data rates from 50 Mb/s to 10 Gb/s poses an important research challenge that must be met to enable a smooth transition to the gigabit NREN. The growing need for nonstop network operation will require the introduction of redundant hardware and fault recovery software that can rapidly identify and isolate faulty components and assist maintenance personnel. Support of terminal data rates above 10 Gb/s will require optical switching.
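For the simplest point-to-point case referred to above, classical teletraffic theory does provide a baseline: the Erlang B formula relates offered load and circuit count to blocking probability on a single link. The sketch below, with assumed numbers, computes it by the standard recursion; the open problems cited in this subsection concern precisely the cases this baseline does not cover, such as multicast virtual circuits and path-dependent blocking through a switching fabric.

    # Erlang B blocking probability via the standard recursion
    # B(0, a) = 1;  B(m, a) = a*B(m-1, a) / (m + a*B(m-1, a)).
    # The circuit count and offered load below are assumed examples.

    def erlang_b(circuits: int, offered_load_erlangs: float) -> float:
        b = 1.0
        for m in range(1, circuits + 1):
            b = (offered_load_erlangs * b) / (m + offered_load_erlangs * b)
        return b

    # e.g. 120 virtual circuit slots on a link offered 100 erlangs of traffic:
    print(f"blocking probability: {erlang_b(120, 100.0):.4f}")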
Recent advances in optical amplifiers make it possible to seriously contemplate large-scale circuit-switched optical networks, but it is not yet clear whether optical technology has the flexibility needed for directly switching a wide diversity of channels.

Fundamental limits of networking

The National Science Foundation should make a concerted effort to support research on issues which pertain to the ultimate limits of networking. Of particular interest is the choice of parameters on which to focus, such as the number of hosts that can effectively use a given network under various well-defined service and performance requirements. What are the performance limitations of various network architectures that rely on position location, waveform selection, and signal processing in place of conventional header information?

It is known that kilobit/second networks allowed us to move from slow teletype-like communications to text-based displays with full screens written in a few seconds at most. In addition, file backup and even program downloading became practical. Megabit/second networks have enabled powerful active servers and iconic user interfaces with interactive context switching between workstations and servers. What are the essential changes that will be enabled as we move towards gigabit/second networks, or to terabit/second networks in the future? Are there any natural limits (other than the technology itself) above which we cannot make effective use of additional network capacity per terminal?

Networking of applications

Many applications that have been developed to execute on individual computers may run considerably faster if their operation can be distributed over multiple machines connected by high-speed networks. In the special case of highly parallel machines, these applications would require decomposition into components that can run in parallel using the normal communication capabilities readily available in the machine (e.g., backplane, file sharing, interrupts, and memory requests). However, many of these applications could benefit from the capabilities of multiple state-of-the-art high performance systems distributed across the same campus or across the country. In this latter case, careful attention must be paid not only to decomposition techniques, but also to communications factors such as latency, bandwidth, reliability, buffering, protocols, distributed debugging, etc. An attendant concern is understanding, measuring, and tuning the performance of the application under different assumptions about decomposition and network communications.

In addition, new applications or techniques never before contemplated, because of a lack of sufficient computing or communications resources, may now become possible to develop with high performance networks. In particular, the achievement of non-linear speedup in serial or concurrent computation is now within reach. Applications may range from basic computations, such as mathematical algorithms which may execute faster in the network environment, to computations in the physical sciences which involve simulation or the solution of numerical equations, to scheduling and planning for complex operations such as those which arise in the manufacturing and assembly of mechanical systems.

7. CONCLUSION

This workshop report describes the enormous importance of networking and communications to many aspects of our society.
The impending rapid development of new integrated systems requires new techniques for the design, evaluation, manufacturing, and deployment of communications and networking equipment. Furthermore, the future supply of a well-prepared corps of engineers and scientists must be ensured.