Title   : NSF 92-36 - MIP Summary of Awards
Type    : Dir of Awards
NSF Org : CISE / MIP
Date    : April 27, 1992
File    : nsf9236

******************************************************************************
This File has been updated 10/31/96 to reflect the proper address of the:

     National Science Foundation
     4201 Wilson Boulevard
     Arlington, VA 22230

For more information call: (703) 306-1234
******************************************************************************

Please NOTE: This electronic version of NSF 92-36 does not contain graphic elements and other typographic elements. For a printed copy of this document, send a request, with your mailing address, to "pubs@nsf.gov" (Internet) or "pubs@NSF" (BITNET). A PostScript version, which duplicates most of the graphics and font changes, is available through STIS.

Preface

The Computer and Information Science and Engineering (CISE) Directorate, under the direction of A. Nico Habermann, Assistant Director, CISE, consists of the following six divisions and offices: the Advanced Scientific Computing (ASC) Division, the Computer and Computation Research (CCR) Division, the Cross-Disciplinary Activities (CDA) Office, the Information, Robotics and Intelligent Systems (IRIS) Division, the Microelectronic Information Processing Systems (MIPS) Division, and the Networking and Communications Research and Infrastructure (NCRI) Division.

The Microelectronic Information Processing Systems (MIPS) Division supports research on novel computing and information processing systems, including signal processing. Emphasis is on experimental research, technology-related research, and particularly the critical link between conceptualization and realization for integrated systems. Technologies include VLSI, ULSI, OPTICAL, OPTO-ELECTRONIC, INTERCONNECTION and other emerging technologies. The focus is on research pertaining to hardware systems and their supporting software, including: experimental research involving these new systems; infrastructures, environments, tools, methodologies and services for rapid systems prototyping; design methodologies and tools; technology-driven and application-driven systems architectures; and fabrication and testing of systems. For signal-processing systems, emphasis is on research on algorithms and architectures relating to these new technologies that have promise for real-time computing.

The purpose of this Summary of Awards for the MIPS Division is to provide the scientific and engineering communities with a summary of those grants awarded in Fiscal Year 1991. This report lists only those projects funded using Fiscal Year 1991 dollars and hence does not list multi-year awards initiated prior to Fiscal Year 1991. Similar areas of research are grouped together for reader convenience. The reader is cautioned, however, not to assume that these categories represent the totality of interests of each program, or the total scope of each grant. Projects may bridge several programs or deal with topics not explicitly mentioned herein. Thus, these categories have been assigned administratively and for the purpose of this report only.

In this document, grantee institutions and principal investigators are identified first. Award identification numbers, award amounts, and award durations are enumerated after the individual project titles. Within each category, the awards are listed alphabetically by state and institution. Readers wishing further information on any particular project described in this report are advised to contact the principal investigators directly.
Bernard Chern
Division Director
Microelectronic Information Processing Systems Division

Table of Contents

Preface
Table of Contents
The MIPS Division
MIPS Directions
Advisory Committee
MIPS Staff
Summary
Design, Tools and Test
     The Program
     Initiatives and Opportunities
     Awards
          Integrated Circuit Theory
          Design Automation and Tools
          Testing
          Simulation
          Other
Microelectronic Systems Architecture
     The Program
     Initiatives and Opportunities
     Awards
          Technology Driven Architecture
          Application Driven Architecture
          Other
Circuits and Signal Processing
     The Program
     Initiatives and Opportunities
     Awards
          Circuits
          Analog Signal Processing
          Array Processing
          Filters (both Linear and Non-linear)
          Image Processing
          Image/Signal Reconstruction
          Multidimensional Signal Processing
          Miscellaneous
Experimental Systems
     The Program
     Initiatives and Opportunities
     Awards
          Graphics and Solid Modelling
          General Purpose Computing
          Application Specific Computing
          Other
Systems Prototyping and Fabrication
     The Program
     Initiatives and Opportunities
     Awards
          Systems Prototyping and Fabrication
          Education
          MOSIS
Index of Presidential Young Investigators
Index of Research Initiation Investigators
Index of Principal Investigators
Index of Institutions


Microelectronic Information Processing Systems

The MIPS Division

The area of Computing Systems, which involves the structure of computers, is central to MIPS today and will be even more so in the future. This is a core area of computer science and engineering and in the 1990's encompasses much more than just hardware. Computing systems deals with computer architecture, hardware implementation, system software (operating systems and compilers), networking, and data storage systems. The advent of gigabit networks, high performance microprocessors and parallel systems is dramatically impacting research on the systems level architecture of high performance computing systems.

The emphasis in MIPS is on REAL SYSTEMS, i.e., systems that are physically realizable. Special weight is placed on design, prototyping, evaluation, and novel use of computing systems and on the tools needed to design and build them. This involves technology driven and application related research, experimental research and theoretical studies. The MIPS programs support research on: high level design (design automation and CAD tools); systems level architecture studies; experimental systems research projects which build and evaluate HARDWARE/SOFTWARE SYSTEMS; signal processing algorithms and systems; knowledge of applications; methodologies, tools and packaging technologies for rapid prototyping at the system level; and infrastructure needed to support MIPS' educational and research activities, e.g. MOSIS.

The Programs

Design, Tools and Test Program

Supports research on design processes for both integrated circuit (IC) chips and systems. Emphasis is on automating the design process in new, existing and mixed technologies. Research areas include theoretical foundations of IC chip and system design, including models, algorithms and methodologies; tools and frameworks for designing IC chips and systems; design synthesis including logic synthesis; simulation of designs; design testing and design validation.

Systems Prototyping and Fabrication Program

Supports research on technologies, tools, and methodologies needed for the prototyping of experimental information processing systems and for Microelectronics Education. Issues that arise in rapid system prototyping are explored, including the use of new packaging techniques such as multichip modules, and such systems issues as interfacing and standards. Support is also provided for new prototyping services. Basic research necessary to model, simulate, measure, automate and improve the microfabrication process is supported. Microelectronics Education support includes workshops, conferences, development of curriculum and courseware materials, and educational support services such as those for FPGA's and fabrication (MOSIS).

Microelectronic Systems Architecture Program

Supports basic research on computing systems and methods for their design. Computing Systems deals with computer architecture, hardware implementation, systems software, networking, and data storage. Research is encouraged on the fundamental aspects of computing systems architectures and scientific design methods that better utilize existing or emerging technologies, support systems software or address important applications whose computational requirements cannot be met by conventional architectures.
The program emphasizes physically realizable systems and, when necessary, limited proof-of-concept prototyping.

Circuits and Signal Processing Program

Supports research on circuit theory and analog and digital signal processing. The emphasis is on modern signal processing, stressing the impact of VLSI, including areas such as: signal representation, filtering, novel algorithms, special-purpose hardware, and real-time computing. Circuit theory research encompasses such activities as nonlinear, discrete-time, analog and hybrid circuits, and analog/digital conversion.

Experimental Systems Program

Supports research experiments that involve building and evaluating information processing and computing systems. These are goal-oriented projects usually undertaken by teams of designers, builders, and users. The building of a system must itself represent a major intellectual effort, and offer advances in our understanding of information systems architecture by addressing significant and timely research questions. The system prototypes being built should be suitable for exploring applications and performance issues.

Basic research in Computer Architecture and Computing Systems is supported by the National Science Foundation primarily through three programs in the CISE Directorate: the Microelectronic Systems Architecture Program and the Experimental Systems Program in the MIPS Division, and the Computer Systems Program in the CCR Division. The relative support through these three programs (shown as a pie chart in the printed version) is approximately:

     Experimental Systems                  (53%)
     Microelectronic Systems Architecture  (30%)
     Computer Systems                      (17%)

MIPS Directions

MIPS' planning takes into account advances in technology and new knowledge, and the need for closer ties of computer science and engineering to real world applications. Greater emphasis is now placed on complete systems, with a broad and coherent research program in new systems architectures, automated design, and design tools to aid in research and development of high performance architectures.

Fiscal Year 1992 represents the formal start of the FCCSET High Performance Computing and Communications (HPCC) Program. The MIPS role in the HPCC Program focuses on the support of:

     * Basic research (hardware & system software) on new high performance computer architectures and computing systems;

     * Development of tools & CAD frameworks for their design, analysis and realization;

     * Algorithm development and computational techniques for "grand challenge" problems in the areas of research supported by MIPS.

The HPCC initiative in MIPS builds on the support of the Application Specific Computing Systems work and represents a major extension of the research to encompass the broader class of general high performance computing systems.

Research on high performance computing systems is responsive to such major drivers as technology, applications and new ideas. Making advances in this area that can be effectively exploited requires that experimental systems be built quickly and cheaply, and that new kinds of design tools be developed and supplied to the research community. These prototype systems can be used to evaluate new computing architectures by subjecting them to real applications, which provide believable tests of novel ideas and performance. Only by constructing prototypes, performing measurements and evaluating performance can we realistically gauge the interaction between a new computing system, its applications, and its users.
Each of the programs in MIPS plays an important role in this initiative. The Microelectronic Systems Architecture Program seeks to provide new architectural ideas and technological innovations by exploring novel architectures capable of high performance. The Experimental Systems Program concentrates on the prototyping of, experimentation with, and evaluation of promising new computer architectures and hardware/software computing systems. The Systems Prototyping and Fabrication Program supports the development of new technologies and tools for rapid systems prototyping of experimental systems, and provides access to these technologies for the research community. This program also supports the development of educational materials and access to these new technologies for educational use in order to provide a new generation of highly qualified computing systems designers and implementors (MOSIS). The Design, Tools and Test Program focuses on high performance computing system design tools and methodologies. The Circuits and Signal Processing Program focuses on a wide range of signal processing problems needing high performance computing, and serves as an application driver for high performance computing research.

MIPS Directions (continued)

With less and less industrial support available for long range exploratory research on novel high performance computing systems, MIPS will play an increasingly important role in supporting such research, with emphasis on research having a wide range of potential applications. The implications for industrial competitiveness are very strong in this area. We see the need to:

1. Work more closely with the applications as we move toward higher performance computing, to understand the computing needs of these applications. The 1992 initiative on High Performance and Application-Specific Computing Systems (ASCS) is a step in this direction.

2. Integrate advanced packaging technology into computing system design and explore the system level tradeoffs arising in the design of high performance computing systems.

3. Develop the necessary infrastructure and human resources in the computing systems area, especially the education of students able to design and build hardware/software systems.

4. Develop new services, tools, and methodologies for universities to utilize new fabrication and device technologies in order to do the rapid system prototyping essential for experimental research on increasingly complex systems.

5. Support geographically distributed, collaborative design of novel computing systems requiring expertise from many areas (e.g., architecture, software, storage technology, I/O, applications, etc.).

6. Develop a new generation of systems level design tools which have increased functionality, are highly automated, and accept high level specifications as inputs.

Other Initiatives & MIPS

Biotechnology

MIPS will support research on algorithms and special purpose chips and computers for such areas as bio-sequence comparison, image processing and work on neural network computing systems, as well as adaptive filtering techniques useful for modeling biological phenomena.

Materials

In this initiative MIPS focuses on modeling and simulation of electronic materials and devices. Exploratory research on the role of new optical materials and devices in advanced computing systems, with emphasis on opto-electronic computing, is supported.
Manufacturing

MIPS supports research on new paradigms, methodologies and Application-Specific Computing Systems for distributed design and manufacturing. This includes CAD tools, new design rules for process-constrained design for manufacturing, and development of "MOSIS"-like centers for implementing distributed design and manufacturing.

Division of Microelectronic Information Processing Systems
Advisory Committee
November 1991

Gaetano Borriello, University of Washington; 206-685-9432; gaetano@cs.washington.edu
Blake E. Cherrington, University of Texas - Dallas; 214-690-2974; cher@utdallas.edu
Douglas W. Clark, Digital Equipment Corporation; 508-264-5555; clark@crl.dec.com
Enrico Clementi, IBM Corporation; 914-385-0413; enrico@YKTVMZ (Bitnet)
Edward S. Davidson, University of Michigan; 313-747-1777; davidson@zip.eecs.umich.edu
Delores M. Etter, University of Colorado; 303-492-7327; etter@boulder.colorado.edu
Henry Fuchs, University of North Carolina; 919-962-1911; fuchs@cs.unc.edu
Mary J. Irwin, Pennsylvania State University; 814-865-1802; mji@cs.psu.edu
Anita K. Jones, University of Virginia; 804-924-7605; jones@virginia.edu
Randy H. Katz, University of California - Berkeley; 415-642-8778; randy@ginger.berkeley.edu
Gershon Kedem, Duke University; 919-660-6555; kedem@cs.duke.edu
H. T. Kung, Carnegie Mellon University; 412-268-2568; htk@n.sp.cs.cmu.edu
Sarah A. Rajala, North Carolina State University; 919-737-5114; sar@ecesar.ncsu.edu
Mark J. T. Smith, Georgia Institute of Technology; 404-894-6291; mjts@eedsp.gatech.edu
Robert F. Sproull, CHAIRMAN, SUN Microsystems, Inc.; 508-671-0353; rsproull@east.sun.com
Gerald J. Sussman, Massachusetts Institute of Technology; 617-253-5874; gjs@altdorf.ai.mit.edu
Earl Swartzlander, Jr., University of Texas - Austin; 512-471-5923; e.swartzlander@compmail.com
Donald W. Tufts, University of Rhode Island; 401-792-5812; tufts@quahog.uri.edu

Division of Microelectronic Information Processing Systems
Liaison Members

John Toole, DARPA/CSTO; 703-696-2264; toole@darpa.mil
Lance Glasser, DARPA/ESTO; 703-696-2213; LGLASSER@VAX.DARPA.MIL

Division of Microelectronic Information Processing Systems
MIPS Staff

Bernard Chern, Division Director; bchern@nsf.gov
John R. Lehmann, Deputy Division Director; jlehmann@nsf.gov

Program                                   Program Director      Net Address
Design, Tools and Test                    Robert B. Grafton     rgrafton@nsf.gov
Microelectronic Systems Architecture      Pen-Chung Yew         pyew@nsf.gov
Circuits and Signal Processing            John H. Cozzens       jcozzens@nsf.gov
Experimental Systems                      Gerald Q. Maguire     gmaguire@nsf.gov
Systems Prototyping and Fabrication       Paul T. Hulina        phulina@nsf.gov

The above E-mail addresses are for the ARPANET, the CSNET and the NSFNET; for BITNET use the form: @nsf

The address and telephone number for all of the above:

     National Science Foundation
     1800 G Street, N.W.
     Washington, D.C. 20550
     (202) 357-7853

MICROELECTRONIC INFORMATION PROCESSING SYSTEMS DIVISION

[graphic elements omitted]

Summary

                                              Number        Dollars
Design, Tools and Test                            64     $3,870,735
     Integrated Circuit Theory                    10       $515,359
     Design Automation and Tools                  29     $1,834,822
     Testing                                      14       $909,662
     Simulation                                    6       $422,732
     Other                                         5       $188,160
Microelectronic Systems Architecture              47     $3,240,634
     Technology-Driven Architecture               23     $1,638,788
     Application-Driven Architecture              22     $1,502,124
     Other                                         2        $10,500
Circuits and Signal Processing                    61     $3,646,312
     Circuits                                      2       $104,866
     Analog Signal Processing                      8       $473,928
     Array Processing                              6       $335,249
     Filters (both Linear and Non-linear)          9       $534,399
     Image Processing                             10       $659,438
     Image/Signal Reconstruction                  12       $617,193
     Multidimensional Signal Processing            4       $301,492
     Miscellaneous                                10       $509,339
Experimental Systems                              27     $5,390,080
     Graphics and Solid Modelling                  5     $1,146,709
     General Purpose Computation                   7     $1,338,373
     Application Specific Computing               12     $2,340,820
     Other                                         3       $472,862
Systems Prototyping and Fabrication               27     $2,360,826
     Systems Prototyping and Fabrication          23     $2,150,590
     Education                                     3       $103,809
     MOSIS                                         1              -

This data includes funds designated for special Foundation initiatives and reserves and may not agree with official NSF records.

Design, Tools and Test

Dr. Robert B. Grafton, Program Director
(202) 357-7853
rgrafton@note.nsf.gov

The Program

The Design, Tools and Test Program supports basic research on the design of both integrated circuit (IC) chips and microelectronic systems. Emphasis is on automating the design process. Areas of research include, but are not limited to:

Theory and foundations - computational models, design styles, and algorithms for design tools.

Synthesis tools - layout, logic, behavioral and high level synthesis, synthesis for performance and testability, synthesis with formal verification, hardware-software co-design.

Validation of designs - numeric, symbolic, and other kinds of simulation as techniques for evaluating designs before manufacture, and identification and evaluation of manufacturing test methods.

DRIVING FORCES: TECHNOLOGY, MANUFACTURING, COMPETITIVENESS

Advances in technology make it possible to design a system on one or a few chips. New materials allow higher operating and switching speeds, making electrical effects more pronounced. Physical interconnect among logic gates is a design factor, with three levels of metal not uncommon. All must now be accounted for in design. Manufacturing costs go up at an exponential rate, so it is necessary to develop design for manufacturability concepts and embed them in tools. Competitiveness requires that the product design cycle be reduced to allow rapid prototyping of hardware designs, and that factors such as cost and operating environment be accounted for.

TOPICS

Theory creates intellectual foundations for the design of ICs and systems. It explores the capabilities and limits of computing in the VLSI medium. The goal is to find fundamentals for design of future ICs and microelectronic systems.

Synthesis research is focused on tools and algorithms for automating the IC and system design processes. This includes design synthesis at all levels, hardware-software co-design, design frameworks, synthesizing testable designs, and tools for formal methods for proving properties of designs.

Testing is viewed as an activity that starts with design verification and continues during the lifetime of the system.
Research includes: design validation, manufacturing test, integrating IC test with system test and diagnosis; and models for detection of realistic failure modes.

Electrical simulation emphasizes speed and efficiency in solving the circuit equations, especially in light of larger designs with more complex equations. Functional simulation may be non-numeric, symbolic, or a mixture of both.

Initiatives and Opportunities

There are many opportunities and special programs available through the DTT Program. The special programs are:

     High Performance Computing and Communications (HPCC)
     Materials Processing
     Biotechnology
     Manufacturing-related Research

DTT Program related research on these special programs falls into the areas of theory, modeling and simulation. Topics which are pertinent to one or more of these include:

* Models of computation which reflect the specific needs of high performance computing, e.g. high density, high speed.

* Algorithms and tools for solving HPCC system design problems.

* Methods for generating tools for creating designs with novel or specialized features that may occur in biotechnology and manufacturing systems.

* High performance system synthesis, including interconnect problems for systems of chips, on-chip data management, design validation, and test.

* Complexity control of 10-100M transistor chip designs.

* Parallelism as a model of computation both in electrical and optical mediums, especially for irregular structures.

* Validation techniques, such as proving correctness of critical parts of a design, timing analysis, electrical and functional simulation, system test techniques, and testing high speed high density chips.

* Models of computation for computing in new electronic and opto-electronic materials which have higher chip densities, smaller feature sizes, higher frequencies, and increased energy density.

* Models for signal propagation in complex devices made of advanced materials in integrated circuit geometries.

* Accurate simulation of material/device electrical properties under high frequency excitation.

* Computation of signal propagation on the same layer or between adjacent layers.

* Methods to generate tools as needed when creating specialized designs for applications in HPCC, biotechnology, or manufacturing systems.

Awards

Integrated Circuit Theory

Stanford University; David Dill; PYI: Automatic Verification of Finite State Concurrent Systems; (MIP-8858807 A03 & A04); $100,000; 12 months.

Research is on three topics in IC design automation. First is verification of finite-state concurrent systems, especially hardware. Both approaches being investigated find and inspect the set of states reachable from a set of initial states. One stores the set of states in a hash table, while the other represents the set of states as a boolean function in the form of a binary decision diagram. Second is asynchronous circuit synthesis, which explores new ways of designing asynchronous circuits. The idea of using explicitly controlled delays to implement state machines is being modified by feeding the outputs of a combinational circuit back to the inputs. Thus one can characterize precisely the class of behaviors that can be implemented. Also, the use of timing constraints in synthesis is being examined. Third, work on automatic synthesis of processes and schedules from logical specifications is being done so that several processes can be derived simultaneously when communication is constrained to a designated set of shared variables.
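To make the first, explicit-state approach concrete, the sketch below shows reachable-state exploration with the visited set kept in a hash-based container, as described above. It is only an illustration, not code from the award: the two-process lock protocol, the function names, and the mutual-exclusion check are invented for this example, and a BDD-based tool would instead represent the state set symbolically as a Boolean function.

```python
from collections import deque

def reachable_states(initial_states, successors):
    """Breadth-first exploration of a finite-state system.

    `initial_states` is an iterable of hashable states; `successors(s)`
    returns the states reachable from `s` in one step.  Visited states
    are stored in a hash-based set, mirroring the hash-table approach
    mentioned above.
    """
    seen = set(initial_states)
    frontier = deque(seen)
    while frontier:
        s = frontier.popleft()
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Hypothetical example: two processes sharing a one-bit lock.
# A state is (pc0, pc1, lock); the property checked is mutual exclusion.
def successors(state):
    pc0, pc1, lock = state
    nxt = []
    for proc in (0, 1):
        pc = (pc0, pc1)[proc]
        if pc == "idle" and lock == 0:      # acquire the lock, enter the critical section
            new_pc, new_lock = "crit", 1
        elif pc == "crit":                  # leave the critical section, release the lock
            new_pc, new_lock = "idle", 0
        else:
            continue
        nxt.append((new_pc, pc1, new_lock) if proc == 0 else (pc0, new_pc, new_lock))
    return nxt

states = reachable_states([("idle", "idle", 0)], successors)
assert all(not (pc0 == "crit" and pc1 == "crit") for pc0, pc1, _ in states)
```

Such explicit enumeration is practical only for small state spaces; the binary decision diagram representation noted in the award exists precisely to push verification to much larger systems.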
University of California - Santa Cruz; Martine Schlag; PYI: Theoretical Computer Science, VLSI Layout; (MIP-8896276 A03 & A04); $70,000; 12 months.

This research is on specification languages and related aspects of VLSI design with a focus on Field Programmable Gate Arrays (FPGA's). Three areas are being investigated. First is algorithms and techniques for mapping designs to FPGA's. Routability is being considered at the logic minimization level. Problems include: methods to specify routing patterns, and paradigms for logic minimization based on communication complexity rather than factoring techniques. Second is to explore the possibility of producing an executable design from the netlist specification provided by Wirec or Wirelisp. Third is to investigate a new paradigm for computation: embedding problem instances into hardware. Theoretical and practical implications of reconfigurable hardware are being investigated. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program.

Northwestern University; Majid Sarrafzadeh; Algorithm Design for VLSI Layout; (MIP-8921540 A01 & A02); $18,000.

The research is in four areas of IC design theory: floor-planning, global routing, wireability theory, and point dominance. In floor-planning, a generalized version of the module placement problem, where the modules can assume any shape, is being developed. For global routing, a generalization of sequential methods to a method which can handle all nets in 2-D (and higher dimension) arrays simultaneously is under investigation. A layout model that generalizes the Lipski-Preparata wireability model to include 2 and 3 layer net routing and includes more detail is being analyzed. Point dominance is a new area which builds on and integrates results in graph theory relating to layout problems. Questions about k-chains and circular k-chains are being addressed. This grant includes support for undergraduate students through the Research Experiences for Undergraduates Program.

University of Illinois; C. L. Liu; Research in Computer-Aided-Design of VLSI Circuits; (MIP-8906932 A01); $9,902.

Research is on physical design and synthesis of IC's, and on reliable IC designs. Theory of VLSI design models and methodologies is being used to understand layout problems for complex IC designs. Research problems being addressed include: multi-layer routing, pin assignment, placement of flexible modules, module orientation, and placement of standard cells. Two approaches to reliable designs are being investigated. The first is to identify minimal sets of redundant elements that are needed to replace defective elements in both homogeneous and non-homogeneous arrays. The minimal sets are called minimum covering sets. Second is an effort to develop a general model for fault coverage problems in reconfigurable chips. The model is capable of capturing a large class of possible relationships between redundant elements and defective elements. This grant includes support for a graduate student to work on a new performance driven placement algorithm.

University of Minnesota - Duluth; Clark Thomborson; Algorithms for VLSI Design; (MIP-9023238); $22,750; 12 months (Joint support with the Computer Systems Architecture Program - Total Grant $57,750).

This research is in four areas of IC design. The theory underlying optimal adder design and global wire routing is well understood, so the focus is on implementation issues. Experimental software is being developed to determine optimal adders.
Attention is being paid to tradeoffs between the smallest area, fastest, and least power adders. The feasibility of using large linear programs to solve the problem of routing wires on VLSI chips and multi-chip modules is being investigated. Theoretical work is proceeding on the problem of finding minimum length tree shaped interconnections (Steiner trees) among a set of terminals on a VLSI chip. The algorithm is anticipated to handle 25 terminals optimally, an improvement over the current best, which is 18. New work is being pursued in the problem of rendering grey-scale images on a monochrome display terminal or laser printer. Algorithms amenable to speedup on parallel or pipelined computers are being sought.

University of Minnesota; David Du; Performance-Driven Layout; (MIP-9007168 A01); $56,689.

This research is on a unified way to consider both timing and geometric constraints during the placement process. The approach is to convert timing constraints to geometric shapes using "defined windows". A window represents a region in which all the modules along a given path can be placed without degrading the circuit performance. Then, based on the window information, a constructive placement process is used to select an unplaced module and to find an appropriate position for the module. Algorithms for the following issues are being studied: path elimination, window and region construction, module selection, module placement and path breaking, and slack distribution.

State University of New York - Stony Brook; Armen H. Zemanian; VLSI-ULSI Parameter Computation; (MIP-8822774 A01); $58,000; 12 months.

This research is on three dimensional modeling methods for predicting performance and characteristics of VLSI-ULSI circuits and devices. The focus is on: 1. efficient computations of capacitances and inductances in three dimensional configurations of on-chip and off-chip interconnection lines; and 2. feasible and accurate computations of threshold voltages in three-dimensional simulations of MOSFETs. The former computations are needed for assessment of delay times in off-chip interconnection lines. The latter computations are needed when modeling pulse propagation through MOSFET circuits. The approach is to use infinite electrical network theory to simplify the computation of coupling between interconnection lines and adjacent devices. This theory leads to efficient numerical methods, including domain contraction techniques, which yield more powerful computation algorithms.

Carnegie-Mellon University; Edmund M. Clarke; Temporal Logic, Hardware Verification, and Parallel Theorem Proving; (CCR-9005992 A02); $54,194; 12 months (Joint support with the Software Engineering Program and the Numeric and Symbolic Computation Program - Total Grant $108,389).

A procedure, called temporal logic model checking, for automatic verification of concurrent programs has been developed. This procedure determines if a collection of finite-state processes satisfies its specification in a propositional temporal logic by methodically searching the global state graph determined by the processes. The first part of this project deals with temporal logic model checking and how it can be extended to verify hardware controllers, cache coherency protocols, and real-time programs. In particular, new techniques (various methods for compositional model checking, alternative state space representations using binary decision diagrams and partial orders, etc.)
will be developed to extend the size of the finite state programs that can be handled using this technique. The second part of this project deals with parallel theorem proving and symbolic computation. A parallel resolution theorem prover called Parthenon has been built. In this project, a number of important algorithms for theorem proving and symbolic computation will be implemented on different types of multiprocessors.

University of Tennessee; Michael Langston and Michael Fellows; Algorithmic and Combinatorial Advances in VLSI Design Theory; (MIP-8919312 A01); $84,979; 12 months.

In the design and manufacturing of VLSI systems, practical problems are often characterized by fixed parameter instances. For example, such a parameter may represent the number of tracks permitted on a chip or the load capacity of a communications link. By fixing such parameters, attention can be focused on the physically realizable nature of the system rather than abstract aspects. Research is concentrated on fixed-parameter algorithmic and combinatorial problems of IC design. Powerful and, in some cases, emergent techniques from the fields of complexity theory, graph theory, well-partial-order theory, and algebraic coding theory are being used in considering such pragmatic issues as circuit layout, arrangement, embedding, and routing. A second thrust is the general problem of organizing and utilizing large collections of processors working in concert. Thus problems of system organization, utilization, mapping, and emulation are being addressed.

University of Virginia; James Cohoon and Jeffrey Salowe; Next Generation Research in Physical Design; (MIP-9107717); $40,345; 12 months.

This research is on the design and analysis of tools for VLSI physical design. The approach is to take an integrated look across the physical design activities of partitioning, floorplanning (placement) and routing. The work has two components, routing and partitioning. In routing, algorithms for generalized multi-layer and multi-net routing are being explored. These algorithms are designed to route on any number of layers, work on a rectilinear, gridless surface with obstacles, account for technology constraints, and permit horizontal, vertical and 45 degree wiring on a layer. In partitioning, a powerful geometric partitioning technique called SHAPE is being used to develop algorithms (including parallel ones) which bridge between various physical design activities, such as floor-planning, Steiner routing, and identification of critical nets.

Design Automation and Tools

Stanford University; Giovanni De Micheli; PYI: Computer-Aided Design (CAD) Algorithms; (MIP-8858806 A04); $62,500; 12 months.

Research is on four topics in logic synthesis and related IC design problems: 1. Technology Mapping - techniques for spectral analysis of logic functions have been initiated. Research is on using this to match functions representing a portion of the network to functions representing library elements. Theoretical aspects of using the spectra, as well as implementing algorithms, are being addressed. 2. Synchronous logic synthesis and optimization - research is on optimizing area and delay in synchronous Boolean networks by using Boolean techniques, and using an iterative improvement scheme, where the network is refined by replacing Boolean expressions by others according to a delay oriented or an area oriented model. 3.
Optimization of control circuits from behavioral descriptions that include timing constraints - the minimal area control implementation is being addressed by describing hardware behavior as a set of sequencing and timing constraints on the operations. Sequential don't care conditions are also considered. 4. Arithmetic circuits - research here includes: evaluating the wave pipelining design method by testing and analyzing a test chip, and an evaluation of Wallace trees for high speed multiplications.

Tanner Research Inc.; John Tanner; SBIR: The Use of Ultra Violet Radiation to Improve Chips Used in Neural Synapses; (ISI-9022386); 18 months (Joint support with the Small Business Innovative Research Program - Total Grant $22,553).

This project investigates the use of ultra-violet light to modify on-chip voltages. Analog quantities in on-chip long term storage can be used to compensate for fabrication variations and to provide variable synapses in neural networks. The voltage on floating gates in a standard CMOS integrated circuit (IC) can be set by shining uniform ultra-violet light on the chip and arranging the circuit geometry so that nearby active areas have the correct voltages. The use of ultra-violet light to set the weights on synapses in a Hopfield style neural network is being investigated. Research tasks include: 1. refine the basic circuit model through experimentation; 2. measure cycle lifetimes on circuits with a faster settling time; 3. investigate electrical methods of controlling charge on floating gates; 4. fabricate, test and refine a pattern recognition chip with floating node storage of synapse weights; and 5. identify and test circuits suitable for distribution as layout libraries.

University of California - Berkeley; Randy Katz; Process and Project Management for VLSI Design Environments; (MIP-9002962 A01); $79,948; 12 months.

This research is on IC circuit and system design frameworks, focusing on process and project management. Tasks include: 1. developing a model of design development, tasks, and history appropriate for the CAD framework; 2. extending the model to a prototype process management framework; 3. creating a prototype implementation of a task manager to enforce the proper sequencing of design activities and sharing of work among project members, and integrating this into the OCT framework; and 4. exploring the use of process specification, version models and history models as a basis for design documentation which promotes reuse of design efforts.

University of California - Berkeley; Ernest Kuh; Research on Layout, Interconnection, and Testing; (MIP-8803711 A04); $125,000; 12 months.

This research covers VLSI physical design problems: macrocell-based custom chip layout, interconnection within and between VLSI chips, and design for testability. Layout problems are considered within a hierarchical automatic building block layout system. The problems are on timing-driven layout and optimum layout of power and ground nets. The latter problem is considered for the case of three metal layers. Theory and algorithms for interconnection problems are being pursued, and two specific problems are being solved. These are: three dimensional module interconnect including estimates of length and space, and global and detailed routing. Also being considered are problems in routing methodology for long wire dominated custom chips and routing for wafer scale integration. Research at the interface between layout and testing is being addressed.
In particular, validation of assumptions made for design for testability and reduction of hard to detect faults through layout are being investigated. Computational geometry is being used to understand the problem of designing repairable arrays.

University of California - Irvine; Daniel Gajski; System Synthesis From Specification Charts; (MIP-8922851 A01); $84,000; 12 months.

Research is on system synthesis. In this model, specifications are expressed in a graphical language called "specification charts". It has sufficient expressive power to describe IC systems in terms of activities initiated and terminated by events. Topics being pursued are: 1. definition of the specification charts language; 2. determining how to translate the language into VHDL; 3. algorithms for partitioning the specifications into system components (e.g. chips) and for quality estimations using system quality measures; 4. incorporation of interface synthesis into system synthesis; and 5. evaluation of the tools being developed.

University of California - Irvine; Fadi Kurdahi; System-Level Partitioning of VLSI Circuits Using Design Evaluators; (MIP-8909677 A01); $8,000.

This research is on automation of the system level partitioning problem. In particular, it addresses the task of partitioning a design so as to minimize the number of VLSI chips, while still satisfying constraints on system performance and chip parameters. The P.I. is generalizing a model for area estimation for standard cell chips to handle other layout design styles. This model plays a central role in developing system level partitioning tools by providing accurate estimates of the design layout areas. Complementary tools which evaluate other aspects and merits of the design, such as power consumption, performance and pin count, are being developed. The tools are being integrated into a spreadsheet-like design aid which will allow the designer to interact with the system. This grant includes support for undergraduate students through the Research Experiences for Undergraduates Program.

University of California - Los Angeles; Jason Cong; RIA: Interconnection Problems for High-Performance VLSI Circuits and Systems; (MIP-9110511); $5,000; 24 months (Joint support with the Systems Prototyping and Fabrication Program - Total Grant $69,980).

The research is on chip-to-chip and on-chip interconnection problems. General formulations and efficient solutions to these problems are being explored. The focus is large chip/system designs with over a million transistors. Algorithms which minimize the interconnection delay and maximize circuit performance are being developed. Topics being addressed are: 1. timing driven global routing with bounded routing costs for both cell-based designs and building-block designs; 2. high-speed clock routing with minimum skew for cell-based and building-block designs; and 3. chip-to-chip interconnection problems for multichip packaging, including multilayer planar subset, multilayer via minimization, and transmission line problems.

University of California - Los Angeles; Andrew Kahng; RIA: New Approaches to Partitioning for Large-Scale VLSI Systems; (MIP-9110696); $69,900; 24 months.

The research is on high-level system design, with a focus on partitioning logic among functional modules. Three new problem formulations and algorithms for performance driven layout are being investigated. These are: 1. numerical methods for finding sparse cuts for logic bi-partitioning, with extensions to multi-way logic partitions; 2.
techniques for rapidly estimating optimal solution values in large scale partition problems. These methods are based on random walks and the theory of simulated annealing, and have extensions to module area estimation for floorplan synthesis; and 3. a theoretical foundation (weighted module packing) for the performance-driven partitioning problem. Provably good algorithms for this problem are being examined.

University of California - Santa Cruz; Pak Chan; RIA: Technology Mapping for Timing Optimization; (MIP-9111607); $70,000; 24 months.

Part of the VLSI synthesis problem is mapping the Boolean equations onto a given set of primitive cells to minimize a given total cost function. The latter can be time (performance) or area or both. This process, called technology mapping, has solutions for Boolean tree networks. Algorithms for this are based on one-dimensional dynamic programming. The problem of doing the technology mapping of Boolean networks, represented as directed acyclic graphs (DAGs), is being explored. The major focus in this work is timing issues. Research tasks are: 1. deriving optimal (in latency) technology mapping algorithms for DAGs with recursive structures, and 2. extending these algorithmic techniques to arbitrary structures. The optimization technique is based on multi-dimensional dynamic programming.

University of California - Santa Cruz; Wayne W-M. Dai; PYI: Computer-Aided Design of VLSI Circuits--Constrained Net Embedding for Multichip Modules; (MIP-9058100 A01); $10,000; 12 months (Joint support with the Systems Prototyping and Fabrication Program - Total Grant $90,000).

This research models the interconnection topology and matrix needed for optimally laying out interconnections in multichip modules. Three topics are being pursued. First is performance driven layout for multi-chip modules (MCM). A performance driven layout system for thin film MCM designs is being developed. Variable width, variable spacing, evenly distributed spacing, and thermal via insertion are used to maintain distortion-free propagation of high-speed signals and to control cross-talk, switching noise, and thermal resistance. Second is early system analysis tools, which allow the designer to bridge architecture and technology issues and evaluate tradeoffs. These tools provide a framework for different levels of analysis and more detailed simulation. Third, an investigation is being made into a multiple bus network for parallel processing which matches the MCM requirements of higher I/O pin count and inter-chip routing density. This design is based on the combinatorics of the balanced incomplete block design, and has good fault tolerant properties that lead to uniform bus load and processor fanout. This grant includes support for two undergraduate students under the Research Experiences for Undergraduates Program.

Weidlinger Associates; Gregory L. Wojcik; SBIR: An Application Specific Computer for Laplace's Equation; (ISI-9061185); 6 months (Joint support with the Small Business Innovation Research Program - Total Grant $49,998).

The proposal is to investigate an application-specific computer architecture to solve a class of computer aided design simulation computations based on the Laplace equation. The goal is to solve typical problems in the class at five times the speed and one hundredth the cost of current supercomputers. The intent is to provide O(10^9) floating point operations per second for the solution of O(10^6) node 3-D elliptic problems. Research being conducted is: 1.
analyze and diagram the arithmetic operations and memory accesses of the iterative Laplace solver; 2. design the corresponding multi-FPU pipelined processor; 3. ensure suitability of an existing memory board design and determine the number of vector memory ports (boards) necessary to sustain the pipeline; 4. program a simulator of the pipeline; and 5. plan wirewrap fabrication of the pipelined processor.

University of Idaho; Phillip Windley; RIA: Integrating Formal Verification with VLSI Design Tools; (MIP-9109618); $69,000; 24 months.

The research is on linking tools for formal verification of IC designs to those used for simulating and fabricating the designs. The principal investigator is working with a verification environment (Hol), a set of design tools based on a hardware description language (Bolt), and a simulation language (Nova). Research being undertaken includes: 1. find a theoretical basis for the translation between the behavioral (verification tool) and structural (design tool) models; 2. develop a formal semantics for Bolt and embed the formal semantics into the Hol system; 3. develop common abstraction mechanisms for Bolt and Nova; 4. investigate parameterized, generic models for circuits to serve as a guide to translation between the verification model and the design tools; and 5. develop software for carrying out the combined design and verification process.

University of Illinois; Larry Jones; Automatic Generation of Incremental Environments for Digital Design; (MIP-9101051); $98,296; 24 months.

Existing incremental switch-level simulators are interpretive in the sense that, upon stimulation of a transistor sub-network, they use a graph representation method to dynamically evaluate its function. In contrast, compiled switch-level simulators hardcode their evaluation procedures into fast boolean operations and exploit the bit parallelism inherent in machine words. This research examines compiled simulation techniques in the context of incremental capture and simulation. A prototype system capable of automatically generating compiled incremental resimulators is being built. This generation system is designed to interface directly with an incremental capture environment (ICE) system and to generate incremental resimulators that will interface directly with ICE. Specific problems being addressed include: 1. using boolean logic equations and/or binary decision diagrams as a basis for incremental switch-level simulation; 2. exploiting the user-defined hierarchy to accelerate incremental circuit analysis; 3. investigating bit parallelism in incremental simulation; 4. automatically generating compiled incremental resimulators; and 5. exploring time and space efficient methods for storing design information and history traces (data compression and data persistence).

University of Illinois; David Knapp; Controller Designer; (MIP-8922015 A01, A02 & A03); $97,111; 12 months.

This research is on transformational synthesis, in which a naive implementation of a control specification is transformed into one that better meets the requirements of the overall design. Two new ideas are being explored. One is to use a search strategy similar to that used by production systems, but which is more efficient because it hides much of the detail of the underlying implementation in the search rules. This gives the flexibility and modularity of the rule based approach while avoiding the performance problems.
The other idea is to build feedback into the system so that decisions about the design of the controller can be evaluated in terms of their impact on the overall design and refined accordingly. Ideas generated in the research are being experimented with in a testbed, a synthesis system that goes from abstract specification to layout or pseudo-layout. An evaluation of integrating the controller designer into a larger synthesis system is being made. This grant includes support for an undergraduate student under the Research Experiences for Undergraduates Program. This grant also includes support for educational activities under the Computer and Information Science and Engineering Directorate Educational Supplement Program.

Indiana University; Steven Johnson and David Winkel; Algebra for Digital Design Derivation; (MIP-8921842 A01); $103,000; 12 months.

This research is on conceptual structures used in IC and digital system design. It explores aspects of digital design in a functional algebra, a theory chosen for its simplicity, its power and its affinity to digital hardware description. Fundamental abstractions of digital engineering are formalized in order to expose principles for integrating methods and tools. Research is on obtaining correct implementations through a sequence of algebraic transformations on a specification. The synthesis algebra is being automated, providing a tool for interactive digital design derivation. Experimentation with the algebraic approach to synthesis is being pursued. Also, various decompositions of design are considered with the object of distilling general principles for the integration of engineering methods and tools.

University of Iowa; Salim Chowdhury; Analysis and Design of Power, Ground and Clock Nets in High Speed Digital Integrated Circuits; (MIP-9003434 A01); $62,234; 12 months.

The focus of this research is to: 1. study the problem of current surges in GaAs and ECL circuits to better understand their impacts on chip reliability; 2. develop automated techniques to analyze power, ground and clock distribution systems in order to detect potential reliability problems; 3. find design rules required for enduring reliability; and 4. incorporate design rules and guidelines into tools for design of high performance circuits. Transmission line models for interconnects in IC's are being explored. Analysis methods based on numerical techniques coupled with transmission line theory are being developed. Optimization techniques are being devised for solving problems of effective utilization of chip area. The study of current surges is being approached using experimental, stochastic and analytic techniques.

University of Massachusetts - Amherst; Maciej Ciesielski and Israel Koren; FSM Decomposition for Area and Performance Optimization: From Function to Layout; (MIP-9013013); $151,500; 24 months.

The goal of this research is the development of an integrated set of tools for the logical and physical design of sequential circuits with reduced area and improved performance. The research is developing analytical techniques for synthesis of sequential circuits by means of functional and physical decompositions so as to minimize the area and maximize the performance (or find a trade-off between the two). Algorithms to decompose a sequential circuit into a set of interacting synchronous finite state machines (FSM's) are being investigated. The approach is to find a set of generalized partitions of machine states, so that each machine state can be implemented as a separate machine.
The machines corresponding to those partitions interact with each other in a more complex fashion than those derived on the basis of the classical theory of closed partitions. For example, in this organization one machine may depend on other machines by using their state bits as primary inputs.

University of Michigan; Karem Sakallah, Trevor Mudge and Edward Davidson; Timing Verification and Optimal Clocking of Latch-Controlled Synchronous Digital Circuits; (MIP-9014058); $37,000; 12 months (Joint support with the Microelectronic Systems Architecture Program - Total Grant $127,660).

This research is focused on the temporal modeling of latch-controlled synchronous digital systems. The use of level-sensitive latches, as opposed to edge-triggered flip-flops, has become quite common in recent years because latches are easily implemented in MOS VLSI, by far the leading technology for building digital systems. A consistent theoretical framework for describing the timing constraints which must be satisfied by such systems for proper operation is being developed. This framework is being used to develop efficient algorithms for: 1. checking adherence to these constraints (timing verification), and 2. maximizing system performance without violating them (optimal clocking). Both of these problems require the solution of large linear programs (LPs). The special structures of these LPs will be utilized to reduce the solution time so that the algorithms can be used in an interactive design environment. The research is defining an appropriate notion of criticality and devising effective ways of identifying and reporting the critical areas in the system. The application of this framework to several regular circuit structures, such as CPU data paths and pipelines, is being examined to obtain analytical expressions for the minimum cycle time. Such closed-form optimality criteria will guide the synthesis of these structures for maximum performance. The practical significance of this framework and associated algorithms is being assessed experimentally on actual industrial VLSI designs.

University of Minnesota; Ramesh Harjani; RIA: Automatic Synthesis of Custom and Semi-Custom Analog Integrated Circuits; (MIP-9110719); $69,756; 24 months.

The research is on synthesis of custom and semi-custom analog integrated circuits (IC's). The focus is on developing macromodeling techniques applicable to higher level analog and mixed analog/digital circuits. These macromodels predict the behavior of a circuit for a set of input performance parameters. The approach is to explore statistical experiment design techniques to generate high quality models efficiently. A set of models and modeling techniques that help in predicting the behavior of circuits synthesized by a hierarchical design tool is being developed. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program.

Princeton University; Andrea S. LaPaugh; FAW: Research in Design of Digital Systems; (MIP-9023542); $50,000; 12 months.

This research is on models and tools for the design of board-level digital systems. Research includes: 1. Developing a formal specification language for board-level systems based on well defined building blocks, where some of the building blocks are very complex, such as microprocessors. In building the language, only a few critical component types, including programmable parts, are being represented rather than representing all possible parts. 2.
Finding simplified models for the board specification and verification processes. The use of hierarchy in the verification process is being explored through specifying the actual behavior of an implementation as well as the desired behavior of the system. Algorithms for timing relationships are being examined also. 3. Algorithms for layout problems, particularly hierarchal compaction of circuit layouts and the interaction of placement and detailed routing, are being investigated. This grant is made under the Faculty Awards for Women Scientists and Engineers Program. Princeton University; Wayne Wolf; RIA: Behavioral Synthesis of Control-Dominated Architectures; (MIP-9009960 A02); $6,000. Computer architectures dominated by control logic are an important class, since many application specific designs are dominated by design of the control logic. This research is developing new optimization algorithms for behavioral synthesis of control dominated machines. Research is being done on: 1. how to partition the behavioral synthesis process into manageable compilation steps, 2. finding optimization algorithms to implement the compilation steps, 3. determining how various optimizations interact, and 4. developing reliable methods to estimate hardware costs to drive optimization. An experimental behavioral synthesis compiler is being built and will be used to experiment with real examples to test the optimization algorithms. This grant includes support for undergraduate students through the Research in Undergraduate Institutions Program. Columbia University; Charles Zukowski; PYI: Very Large Scale Integrated Circuit Theory and Design; (MIP-8658112 A04); $62,500; 12 months. Research is on IC circuit design and analysis for circuits built in high speed technologies, such as mixed CMOS silicon and gallium arsenide (GaAs). Focus is on telecommunications circuits. Projects in circuit design include: 1. testing and analysis of a prototype cylinder packet switch chip, 2. implementing a network interface chip that can achieve data rates of 1 GBit/sec using 2 micron CMOS, 3. investigating the VLSI implementation of a teranet switching node, 4. exploring the limitations of the matched-delay approach by fabricating an advanced prototype of a matched-delay multiplexer, 5. design, test and evaluation of a digital phase locked loop model which can handle slow variations over a wide range of frequencies, and 6. design, test and evaluation of an image coding chip. Circuits analysis projects include: 1. building a working simulator for large digital Bipolar and BiCMOS circuits, which uses dynamic partitioning; large examples are being simulated, and 2. large scale testing of BiCMOS circuits to determine circuit optimization strategies. Cornell University; Miriam Leeser; RIA: MTV: A Multiple Event Timing Verifier; (MIP-9111146); $60,000; 24 months. This research is on verification of circuit timing properties. The approach is to develop a multiple event timing verifier, which operates at gate level, represents multiple transitions on signals, and works on synchronous as well as asynchronous circuits. Algorithms that perform analysis in low order polynomial time and use a data representation that accurately captures the digital behavior of the system are being explored.
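A minimal sketch, in Python, of the kind of data representation hinted at here: each signal carries a list of events with earliest and latest occurrence times rather than a single worst-case arrival time, so multiple transitions per signal can be tracked. The event format and the inverter example are illustrative assumptions only, not the MTV tool itself.

    # Illustrative only: a signal as a list of (earliest, latest, value) events.
    # Propagating through an inverter with a min/max delay shifts each event
    # window and complements the value; multiple transitions are preserved.

    def invert(events, d_min, d_max):
        """Propagate (earliest, latest, value) events through an inverter."""
        return [(lo + d_min, hi + d_max, not value) for (lo, hi, value) in events]

    # A signal that rises somewhere in [0, 1] ns and falls again in [10, 12] ns,
    # driving an inverter whose delay lies between 2 and 5 ns.
    events = [(0, 1, True), (10, 12, False)]
    print(invert(events, d_min=2, d_max=5))   # [(2, 6, False), (12, 17, True)]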
Algorithm designs and related software being developed include functions such as: verifying circuits with non-trivial input timing, allowing multiple transitions on signals, doing functional analysis to eliminate false paths, allowing designers to input and examine data easily, verifying different clocking disciplines, and detecting various timing problems, such as clock rate not met, pulse shortening, hazards and races. Portland State University; Marek Perkowski; RIA: A Multi-Level Logic Optimization Program Which Generates Optimal Mix of AND, OR and EXOR Gates; (MIP-9110772); $60,000; 24 months. There is now potential to synthesize circuits which contain Exclusive OR (EXOR) gates without the size and speed penalties formerly associated with them. Circuits designed with EXOR technology can be efficiently implemented and are inherently testable. Three research topics are being pursued. The first is to develop a comprehensive theory for design in the EXOR technology. This theory forms the basis for development of algorithms for EXOR design. Second is to develop a tool which produces an optimal, multilevel design that is a mixture of EXOR, OR and AND gates for a given function. Hierarchical decomposition is being used to recognize different subfunctions. Algorithms are being developed for decomposition, factorization, and resynthesis. Third is to explore algorithms for optimal design using programmable gate arrays and programmable logic devices. Carnegie-Mellon University; Elizabeth D. Lagnese; Statistical Evaluation of the Relationship Between Structural and Physical Interconnect in Integrated Circuits; (MIP-9109205); $18,000; 12 months. This is a Research Planning Grant for Women Scientists and Engineers. A research program in determining the relationship between structural and physical interconnect in IC design is being planned. Typical area estimates for the structural design of an IC count the number and size of the structural modules, but ignore interconnect, which can use significant chip space. The approach is to design experiments which explore the relationships between register-transfer (RT) measures and area devoted to interconnects. Experiments are being conducted with designs produced by IC synthesis tools and statistical evaluation is being used to determine correlations. The first experiment is to establish relationships between the RT design and the physical design independent of particular cell bindings. The second experiment is to determine if a relationship exists among RT measures and interconnect area, and to characterize that relationship. Carnegie-Mellon University; Rob A. Rutenbar; PYI: Knowledge-Based Synthesis of Analog Integrated Circuits; (MIP-8657369 A05 & A06); $102,808; 12 months. Research is in three areas of analog tool design: 1. system level synthesis, 2. analog CAD tool frameworks, and 3. adding new functionality to analog layout tools. Research in system level synthesis concerns completion of a pipelined A/D converter and using it to assess strategy for system level problems. New work includes using numerically based synthesis strategies in system design tools. Research on design frameworks includes revisiting the entire framework software developed so far and restructuring it as a tool kit of objects for representing and manipulating design objects. Tool functionality research is motivated by the observation that often the cause of poor quality designs is insufficient coordination between the placement and routing tools.
Hence tools for simultaneously doing the placement and routing tasks are being developed. A second task is using asymptotic waveform evaluation to reduce complex extracted circuit networks. This grant provides a supplement for distributing and sharing research software under the Software Capitalization Grants Program. Carnegie-Mellon University; Donald Thomas and Daniel Siewiorek; Digital System Level Synthesis Tools; (MIP-9112930); $222,572; 12 months. This research is on algorithms and techniques for system-level synthesis of digital electronic systems. Inputs to the system are intended to be high level specifications in terms of system behavior. Research builds on two existing design systems: the system architects workbench, and a microcomputer board design system. Research issues being investigated are: partitioning specifications for appropriate style selection, synthesis - including learning and problem solving approaches, internal representation of design information, system level design representation and human interface, and controlled iteration with different types of synthesis tools (design process control). The algorithms are being integrated into a prototype system-level synthesis framework. University of Pittsburgh; Dorothy Setliff; Towards Automatic Synthesis of Behavioral VHDL; (MIP-9109379); $17,857; 12 months. This is a Research Planning Grant for Women Scientists and Engineers. A research program in automating the synthesis of the VHDL language from a very high-level architectural specification is being planned. The approach is to transition state-of-the-art software synthesis technology to CAD design automation. A feasibility study of several high-level architectural specification models, which are implementation independent, is being conducted. Other activities are: defining a set of VHDL software synthesis operators, developing representations of pure behavioral level architectural specifications, and performing feasibility tests of these specifications. University of Utah; Ganesh Gopalakrishnan; Making the Specification Driven Design of Custom Hardware Practical; (MIP-8902558 A01); $82,840; 12 months. This research is on specification driven design of custom hardware. The principal investigator is working with three consultants: Richard Fujimoto of the Georgia Institute of Technology, Graham Birtwistle at Calgary, and Paliath Narendran of GE Corporate Research. The group is developing a uniform theory for describing mixed synchronous and asynchronous IC systems, as well as a theory for deriving implementations from specifications written in a purely asynchronous style. The research is based on the "HOP" family of hardware description languages. The language being investigated is HOP-COP, which is a semantically simple, well characterized language. It specifically addresses description of mixed synchronous and asynchronous designs. Research topics include: 1. identifying a minimal primitive basis for HOP-COP; 2. developing verification techniques based on the language; 3. finding algorithms for IC design tools and developing proof-of-correctness algorithms and tools; and 4. exploring implementations of the research ideas. Testing University of California - Santa Cruz; Frankie J. Ferguson; PYI: Hierarchal Test Pattern Generation for Manufacturing Defects; (MIP-9158491); $25,000; 12 months. This research is on developing testing methodologies that detect more defective ICs than current methods in a cost-effective manner.
Recent evidence using both defect simulation and fabricated ICs supports the thesis that testing manufactured ICs for faults other than single stuck-at faults should detect additional defects and raise the quality level of tested ICs. In the last decade there have been two trends in testing research: one is toward the use of high-level fault models to reduce test generation costs, and the other is toward low-level technology-specific fault models to increase the quality of circuits that have passed the tests. High-level fault models reduce the quality of the resulting tests, and low-level fault models cause testing costs to mushroom. The focus of the research is to integrate these two techniques so that tests can be generated that detect virtually all plausible manufacturing defects without excessive automatic test pattern generation costs. This is being done by developing a method that combines low-level defect analysis and fault modeling with high-level automatic test pattern generation. University of California - Santa Cruz; Tracy Larrabee; PYI: Automatic test pattern generation based on Boolean-satisfiability; (MIP-9158490); $25,000; 12 months. In the past three decades, practical automatic test pattern generation (ATPG) systems have all searched for tests using structural search. The principal investigator has developed an ATPG system (called Nemesis) that generates a test pattern for a given fault by first constructing a formula representing all possible tests for the fault, and then applying a Boolean satisfiability algorithm to the resulting formula. This method separates the formula extraction from the formula satisfaction, thus providing flexibility and generality. It has been shown to be effective. A testing system, based on Nemesis, that will generate tests detecting all realistic manufacturing defects in both combinational and sequential ICs is being developed. University of Southern California; Sarma Sastry; RIA: Stochastic Models in Partitioning for Testability of Digital Circuits; (MIP-9111206); $60,000; 24 months. This research is on developing models for evaluating hierarchal testability of circuits. Three topics are being pursued. First, stochastic models for circuit structure, used for obtaining analytical expressions of controllability and observability, are being explored. New testability metrics and fast testability evaluation of IC's based on the stochastic models are being investigated. Second, a unified framework for expressing hierarchal testability metrics is being established. This allows representation of any testability metric for the purpose of hierarchical testability analysis. Formal methods of composing testability metrics are being developed. Third, efficient algorithms for doing the partitioning for testability are being designed and evaluated. Central to the approach is the development of a discrete hazard function that quantifies for each level in a circuit the difficulty of fault propagation exhibited by the circuit structure. University of Iowa; Irith Pomeranz; RIA: Test Generation for Synchronous Sequential Circuits; (MIP-9109568); $69,697; 24 months. This research is on complete fault coverage (at low cost) for synchronous sequential circuits. A test generation strategy, which alleviates the limitations of existing test generation procedures, is being investigated. The essence of the approach is in the multiple time observation assumption, which removes the requirement that a fault be detected by observation of the circuit's response at a single time unit.
A fault model comprised of single and multiple transition faults, which covers many gate-level stuck-at faults, is being explored. Test generation procedures are aimed at medium sized and large circuits, described as interconnections of submachines. Algorithms for decomposition of such circuits are being investigated. A comprehensive set of tools is being designed especially for the purpose of achieving complete fault coverage. The tools include: test generation, simulation, extraction and decomposition procedures. University of Michigan; John Hayes; Easily Testable VLSI-Based Systems; (MIP-8805517 A02 & A03); $97,610; 12 months. The research exploits hierarchal structure and regularity, inherent in many VLSI designs, to obtain better testability at both chip and system levels. Current investigation is into test generation properties of high level VLSI circuit models. The theory of ambiguity sets is being developed to characterize multi-step and partial transparency of circuits with respect to test package propagation. This propagation theory is being extended from combinational circuits to small sequential circuits. The mathematics of symbolic manipulation of test packages is being further investigated. The aim of microprocessor testing is to integrate control and datapath testing, and to deal with the difficult but important class of faults occurring at system boundaries. Also ways to enhance testability properties such as transparency for both external testing and self-testing are being studied. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program. University of Michigan; Pinaki Mazumder; Design and Fabrication of Electronic Neural Networks for Built-in Self-Repair of VLSI Chips with Embedded Logic Memory Arrays; (MIP-9013092); $115,000; 24 months. This research is using the combinatorial optimization capabilities of neural networks to devise algorithms for solving repair problems in IC chips. Two classes of array repair problems are being examined. The first is repair by replacement of rows and columns (as in memory arrays). The second is repair by replacement of individual cells (as in systolic arrays and iterative logic). Theoretical models of array repair problems are being analyzed and simulated to measure the performance of neural repair algorithms. Electronic neural networks that will execute the neural repair algorithms for memory, systolic and iterative arrays are being designed and fabricated. University of Nebraska - Lincoln; Sharad Seth, Jitender Deogun and Vishwani Agrawal; Design for Testability and Test Generation with Multiple Clocks; (MIP-9015115); $47,871; 12 months. This research is on design for testability (DFT), and associated test generation algorithms for clocked synchronous circuits. The research approach is to partition the flip-flops into two groups, each controllable by its own independent clock line in the test mode. The result is the decomposition of the original state machine into two communicating submachines, with flip-flops grouped so as to simplify the test generation process. Flip-flop partitioning is stated in terms of a graph representation of flip-flop connectivity. A two clock decomposition algorithm is being implemented. In test generation, existing test generators are being adapted to serve as an early proof of the DFT scheme.
Also, a test generation algorithm, based on a two-dimensional generalization of the standard time frame expansion, is being implemented and tested against benchmark circuits. Rutgers University; Michael Bushnell; PYI: Computer-Aided Design of ULSI Circuits; (MIP-9058536 A01); $100,000; 12 months. The research is in two areas: automatic test pattern generation (ATPG) and CAD framework design. In ATPG, circuit models and test pattern generation algorithms for parallel computers are being investigated. This includes fast test pattern generation algorithms for combinational and sequential CMOS transistor circuits, and delay fault models. Also under investigation are built-in self-test circuits for delay fault testing, efficient computation of binary decision diagrams, and finding circuit sub-classes that are easily testable. In the framework design area, problems of integrated CAD frameworks for automatic design consistency and automatic CAD tool control flow are being explored. Topics include: 1. graphical user interfaces that permit the designer to describe easily an automated CAD tool control method to the framework; 2. automatic consistency maintenance in which inconsistent data between CAD files is automatically detected and corrected; and 3. automatic flow control for CAD tools in which the framework plans a series of design steps, attempts them, backs up if design failure occurs, and replans subsequent design steps. InfoLogic Software, Inc.; David Kass; SBIR: Development of Methods to Test a Large, Complex VLSI CAD Software System; (ISI-9061066); 6 months (Joint support with the Small Business Innovative Research Program - Total Grant $49,991). The proposal is to develop a methodology for testing VLSI CAD software tools, and validate it on a portion of the Berkeley VLSI CAD tool set. The validation approach is to adapt known software testing techniques to testing of CAD tool software. Comparison of runs on the test cases is made to the results defined in the specifications. Research being done includes: 1. define the form of the functional specifications; 2. develop test strategies based on the specifications, but recognize that specs are not complete and use some test strategies that are independent of input-output specifications; 3. determine test cases generated from the tool's functional specifications and logical and structural properties; and 4. prototype test the methodology. State University of New York - Buffalo; Sreejit Chakravarty; On Testing and Diagnosis of Bridging and Leakage Faults in CMOS Circuits Using Current Monitors; (MIP-9102509); $86,309; 24 months. This research is on test and diagnostic tools for bridging and leakage faults, and on building a comprehensive fault model for these faults. The approach is to utilize current monitoring techniques, which provide better observability than present response generators. This technique is being used to develop simpler, more efficient, more effective algorithms which will form the basis for a new set of test tools. Algorithms for computing test strings for partial scan, test generators for built-in-self-test, fault simulation, and diagnostic tools are being explored. The impact of these algorithms on circuit design is also being explored through the building of tools based on the algorithms. Lafayette College; Hong Dai; RIA: Testing of Large Analog and Mixed-Signal Circuits; (MIP-9110196); $70,000; 24 months. This research is on testing large scale analog and mixed-signal circuits. There are two tasks being undertaken.
First is to develop a decomposition approach for testing large scale analog and mixed-signal circuits. A time domain decomposition method, which does not break interconnections, is being explored. It is based on taking voltage measurements at partition points and formulating new test equations at these nodes. Thus measurement errors are reduced to a local area and computations are performed in each subcircuit. Second is to establish a test strategy for automatic testing of large scale circuits. A set of pre-test computations to be done off-line during the design stage or before manufacturing test is being developed. Methods for extracting parameters during test are being investigated. A set of post-test computations using the parameters extracted during the test is being developed. Software for doing the tests is being built and benchmark circuits are being tested. Texas A & M University; Fabrizio Lombardi; Testable Approaches and Design for Array Systems; (MIP-9025017); $80,000; 24 months. Three topics in constant testability (or C-testability) for homogeneous structures of combinational and sequential circuits are being investigated. C-testability methods are being extended into new areas including systems. The first topic is cell decomposition where an unrestricted, component level fault model is being analyzed. Questions of dividing a cell into components that can be tested, and of devising a hierarchal organization for the testing process are being investigated. Second is C-testability of sequential arrays using checking and touring methods. Procedures for testing sequential arrays using unique input-output sequences are being examined. Third is concurrent C-testability for Built-in Self-Test (BIST). Structures amenable to array implementation in combinational and sequential cells are being explored to assess latency time and hardware overhead. University of Wisconsin; Charles Kime; Low-Cost Testing Techniques for VLSI Systems; (MIP-9003292 A01); $73,175; 12 months. Testing sequential logic circuits and systems is the research subject. Solutions to this problem usually require costs that are, for many applications, too formidable to allow wide adoption. The goal is to find low cost techniques that reduce overhead of time, area and test data volume while maintaining high fault coverage. The approach is to use the notions of partial scan and partial BIST, which show promise in testing sequential circuits. One research thrust is to develop new measures of testability and new criteria for scan element selection. A second is to systematically avoid elements of existing techniques that contribute to high costs. Effective test algorithms are being developed by using scan elements derived from information about the actual test generator. Substantial empirical evaluation with experimental tools using real circuits is being done. University of Wisconsin; Kewal Saluja; Test Algorithms for Physical Neighborhood Pattern Sensitive Faults in Reconfigurable RAMs; (MIP-9111886); $60,000; 12 months. This research addresses testing large, reconfigurable random access memories for faults due to certain patterns originating in the physical neighborhood of a memory cell. These are pattern sensitive faults (PSF). Current methods do not deal with 5 and 9 cell neighborhoods, and are complex. In this research, necessary and sufficient conditions to test check bits for neighborhood pattern sensitive faults are being developed.
Based on these results, algorithms to test for 5 and 9 cell physical neighborhood PSFs in random access memories (RAMs) are being designed. These algorithms deal with the cases where logical and physical address spaces are not identical, memories are reconfigured, and where built in error detection and correction techniques are employed. The use of these algorithms for built in self test implementation is being explored. Simulation University of Arizona; Olgierd Palusinski; Techniques for Accelerated Simulation of Electronic Circuits and Systems; (MIP-9017037); $49,897; 12 months. The proposed research is on application of spectral techniques in time domain simulation of integrated circuits and systems. The methods have been used to compute transient effects in multiple transmission lines. They are based on the expansion of unknown variables in the Chebyshev series. This assures very compact representation of waveforms, thus reducing computational penalties imposed by data interchange in relaxation methods. Tasks include: 1. Modeling of MOS circuits based on modified nodal analysis, with comparison to other methods; 2. Application of spectral algorithms to bipolar circuits with various model switching strategies; 3. Construction of an efficient spectral algorithm for computation of transients in lossy transmission lines with frequency dependent parameters and nonlinear termination networks; and 4. Using spectral techniques in a relaxation framework. University of Illinois; Resve Saleh; PYI: A Simulation Environment for the Analysis of Transient Faults In VLSI Circuits; (MIP-9057887 A01 & A02); $100,000; 12 months. A structured basis for analytical modeling of transient faults is necessary for the design of reliable, complex IC circuits. In this research, an experimental system and an associated set of tools to aid in the design of reliable circuits are being developed. The simulation environment includes tools for fault specification, fault analysis and visualization. Techniques are being developed to evaluate the susceptibility of a proposed design to transient faults, and to examine the approaches that enhance the design reliability in a cost-effective manner. A transient fault simulation tool, which will allow designers to specify a particular fault and have a simulation performed to analyze the effect of the fault, is being investigated. The approach taken here is to make the tool efficient enough to simulate a number of transient faults simultaneously at the electrical level in one simulation run. Massachusetts Institute of Technology; Jacob K. White; PYI: Simulation of Switching Filter and Phase-Lock Loop Circuits; (MIP-8858764 A03); $75,000; 12 months. This research is on numerical algorithms for simulating high frequency circuits and devices. Topics in circuit simulation being pursued are: 1. algorithms for 3-D capacitance extraction including capacitance of IC interconnect, cross-talk simulation, and inductance extraction; 2. simulation of clocked analog circuits including the control circuits of closed-loop switching converters; and 3. mixed circuit and device simulation. The latter work is based on finding an effective computational way of using waveform relaxation (WR) techniques for device simulation. This requires mixing direct methods for the circuit part of the problem with the WR techniques for the device part. University of North Carolina - Greensboro; Jack V. Briner; RIA: Parallel, Mixed-Level Simulation of Digital Systems; (MIP-9108906); $60,000; 24 months.
This research is on accelerating mixed level simulation using general purpose parallel processors. An existing parallel simulator is being developed to make the code more portable and available for others to use on several parallel machines. Parallel discrete event simulation is being used. Topics addressed are: 1. finding techniques for improving the simulation of large fan-out circuit nodes; and 2. developing graph manipulation algorithms to improve partitioning, to improve signal distribution, and to do asynchronous message passing. Experiments on benchmark designs with the simulator to study the effects of partitioning, component migration, task size, and computational and communication overhead are being conducted. Carnegie-Mellon University; Randal Bryant and Carl-Johan Seger; Symbolic Simulation and Its Application to VLSI System Verification; (MIP-8913667 A01); $112,835; 12 months. This research is on symbolic simulation as a basis for formally verifying complex IC designs. Symbolic simulation differs from classical simulation in that during simulator operation the user can set inputs not only to 0 or 1 but also to Boolean variables. The research exploits existing simulation technology by taking a behavioral approach to circuit verification. In this, the verifier applies logic simulation to compute the circuit's response to a series of stimuli chosen to detect all possible design errors. Research tasks include: 1. selecting a set of simulation patterns which can defeat a malicious adversary attempting to foil the verifier; 2. determining structures and algorithms, based on binary decision diagrams, for manipulating Boolean formulas; and 3. investigating the use of the simulation system as an aid to debugging and design. University of Texas; Lawrence T. Pillage; PYI: CAD Tools for New Circuit Technologies; (MIP-9157363); $25,000; 12 months. This research is on analysis and design of high-speed interconnects for digital systems. In this case the "analog" behavior of the interconnect must be represented and analyzed. Four problems are being considered. 1. An implementation of an efficient asymptotic waveform evaluator is being extended to evaluate distributed element models, including distributed capacitance and inductance coupling. 2. Design rules that permit the reliable construction and operation of high-frequency signal paths are being developed. 3. Efficient methods for handling nonlinear effects from both the driver and the load ends of the line are being investigated, as are approximations which consider best/worst case waveform bounding. 4. Optimization algorithms that minimize delay and skew in the design of clock trees are being evaluated. Algorithms for performance-driven placement and for improving device reliability are being devised. Other University of California - Santa Cruz; Kevin Karplus; Using If-Then-Else DAGs for Multi-level Logic Minimization; (MIP-8903555 A03); $23,935. This research is on converting a Boolean function circuit description into a circuit that implements the functions, meeting time and cost constraints. The approach is to use transformations of if-then-else directed acyclic graphs (DAGs) in multi-level minimization. These are particularly attractive for representing Boolean functions because they can compactly express useful functions, such as arithmetic and parity functions, that require exponentially larger representations in the sum-of-products format. Even fairly crude transformations provide effective minimization.
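To make the if-then-else DAG idea concrete, the following Python sketch (an illustration under assumed names, not the project's actual data structure or algorithms) builds hash-consed ite(cond, then, else) nodes so that identical subgraphs are stored only once; the n-input parity (EXOR) function, whose sum-of-products form grows exponentially, then needs only a linear number of shared nodes.

    # Illustrative only: an if-then-else DAG with structural sharing.
    class IteDag:
        FALSE, TRUE = 0, 1                    # reserved ids for the constants

        def __init__(self):
            self.table = {}                   # (cond, then, else) -> node id

        def ite(self, cond, then, other):
            key = (cond, then, other)
            if key not in self.table:
                self.table[key] = len(self.table) + 2   # ids 0 and 1 are taken
            return self.table[key]

    def parity(dag, variables):
        """Build an ITE DAG for the EXOR (odd parity) of the given variables."""
        odd, even = IteDag.FALSE, IteDag.TRUE
        for v in variables:
            # parity flips when the new variable is true, stays put otherwise
            odd, even = dag.ite(v, even, odd), dag.ite(v, odd, even)
        return odd

    dag = IteDag()
    root = parity(dag, [f"x{i}" for i in range(16)])
    print(len(dag.table), "shared nodes for 16-input parity")   # grows linearly; prints 32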
Research is on algorithms for: factoring, to reduce the complexity of expressions; sharing common sub-expressions to eliminate redundant circuitry; and using don't care information to handle partially specified functions. This grant provides a supplement for distributing and sharing research software under the Software Capitalization Grants Program. University of California - Santa Cruz; Kevin Karplus; 1991 Advanced Research in VLSI Conference, March 26-28, 1991, University of California, Santa Cruz; (MIP-9014762); $2,400; 12 months (Joint support with the Systems Prototyping and Fabrication Program, Microelectronic Systems Architecture Program, the Circuits and Signal Processing Program, and the Experimental Systems Program - Total Grant $12,000). This award supports one conference in a series that has alternated between the east and west coasts for more than a decade. The conference is intended to publicize innovative, interdisciplinary research with a strong VLSI component. This year's focus is on systems integration. The National Science Foundation support allows reduced registration rates for students, university employees, and program committee members, and travel and registration expenses for the invited speakers. Ohio State University; Joanne DeGroat; Digital VLSI Design Laboratory; (MIP-9112130); $63,016; 12 months. This grant is for equipment to establish a digital VLSI design laboratory. The research projects that are making use of the equipment include: VLSI architectures for adaptive filters and adders, a high-level design system, and a concurrent hardware/software specification and design system. University of Pittsburgh; Steven Levitan; Distribution of VLSI Design Software for Education and Research; (MIP-9101656); $48,809; 24 months (Joint support with the Systems Prototyping and Fabrication Program - Total Grant $97,618). CAD design tools, produced in research projects at the University of Pittsburgh, are being developed and enhanced so as to bring them into a state for distribution to general users. These tools are designed to help researchers and educators investigate synthesis of VLSI systems from VHDL descriptions. Specific tools being enhanced are: 1. "Vcomp", a compiler for a subset of the VHDL language, 2. "Vsim", the corresponding simulator, 3. a companion schematic editor written for X11, 4. a companion netlist-to-schematic tool written for X11, and 5. "VF2VHDL", a reverse translator from netlists to VHDL. Tasks include: 1. Generate documentation for the compiler and simulator with tutorials, examples, and classroom aids; 2. Enhance the simulator to provide a graphical waveform display package based on IRSIM/Analyzer from Stanford; 3. Convert the old schematic tool from SunView to X11; 4. Clean up for distribution the netlist to schematic tool for X11; 5. Update the netlist to VHDL tool. Technical support via email will be provided. This grant provides a supplement for distributing and sharing research software under the Software Capitalization Grants Program. University of Washington; Carl Ebeling, Gaetano Borriello, and Lawrence Snyder; Packaging and Distribution of Electronic CAD Software; (MIP-9018224); $50,000; 12 months (Joint support with the Systems Prototyping and Fabrication Program - Total Grant $100,000). Three CAD design tools, produced in research projects at the University of Washington, are being developed and enhanced so as to bring them into a state for distribution to general users. These tools are: 1.
MacTester, an interactive testing and debugging environment built around the Macintosh computer; 2. WireC, a mixed graphical and procedural language for describing hardware systems; and 3. Gemini, a VLSI layout verification program that compares the circuit specification with the circuit layout. This grant provides a supplement for distributing and sharing research software under the Software Capitalization Grants Program. Microelectronic Systems Architecture Dr. Pen-Chung Yew, Program Director (202) 357-7853 pyew@note.nsf.gov The Program The Microelectronic Systems Architecture (MSA) program supports research on innovative design of computer systems at the physical and the system level to achieve high system performance. It encourages studies on the impact of new hardware and software technologies, as well as the impact of new applications and algorithms on computer system architectures. The style of architectural research employed includes theoretical and analytical studies, simulations and limited proof-of-concept prototyping. TECHNOLOGY-DRIVEN ARCHITECTURES The program supports architectural research which explores the capabilities and limitations of current and future hardware and software technologies. The objective in these studies is to better understand and to extend the performance, programmability, applications span, and reliability of microelectronic systems. Typical issues which are addressed include: * Methodologies for system design that map high-level abstractions and system specifications to low-level physical implementations while considering the design tradeoffs of chip area, power consumption, clock rate, packaging, cost and programmability. * Design of general-purpose and special-purpose computers, such as superscalar processors, parallel processors, distributed and real-time systems. The design issues may include cache and high performance memory systems, multi-threading, interconnection strategies, pipelines, networking and I/O systems, co-processor architectures, etc. * Studies of system programmability, architectural support for programming languages and system software, compiler techniques to exploit and to enhance system architectural features. * Studies of both software and hardware strategies to enhance the reliability, availability, and fault tolerance of microelectronic systems. APPLICATION-DRIVEN ARCHITECTURES The program supports the design of special-purpose computers for applications that can better utilize emerging microelectronic technologies in a cost-effective manner. Projects focusing on the design and development of application-driven computing systems must involve innovative architectural research. Primary emphasis is placed on obtaining new architectural and design knowledge. A secondary emphasis is placed on studies that can provide a better understanding of the problem solving methods using microelectronic technologies.
Typical issues which are addressed include: * Requirement specification, analysis, decomposition of problems and mapping of problem subcomponents onto functional building blocks, and analysis of the cost-performance trade-offs; * The design of special-purpose computers and their required software for applications whose requirements, such as performance, memory size and physical size, cannot be met by available general purpose computers (for example, speech processing, graphics, simulation, image processing, signal processing, artificial intelligence and neural networks); Initiatives and Opportunities The Microelectronic Systems Architecture program is actively soliciting proposals, particularly in areas related to the HPCC program and recent FCCSET initiatives such as material processing, biotechnology and manufacturing-related research. Possible research includes: * Innovative application-specific machine architectures that can tackle grand challenge problems, e.g. biotechnology and environmental studies, etc., and recent FCCSET initiatives, such as manufacturing and material research. * Design of innovative microelectronic systems using new device and packaging technologies such as optoelectronics, optical interconnects, VLSI, GaAs, MCM packaging, and analog-digital devices. * Design of high performance memory systems, including I/O, for superscalar and parallel machines. * Performance evaluation of microelectronic systems using a combination of analytical modeling, simulation, benchmarking and measurements on such systems. * Experimental research that deals with building of small-scale, proof-of-concept prototypes, simulation or emulation of new system designs using software or FPGA emulators. Awards Technology Driven Architecture University of California - Davis; Kent Wilken; RIA: Low-Cost Error Detection Techniques Based on Program Behavior; (MIP-9111677); $70,000; 24 months. The objectives of this project are: 1. development of low-cost techniques for concurrent detection of program execution errors caused by processor hardware faults, 2. development of low-cost techniques for tolerating errors caused by transient faults, 3. development of theory that sets optimal bounds on the effectiveness of such techniques, and 4. development of analytical and experimental methods for evaluating the techniques against the theoretical bounds. New techniques are being developed using a behavior-based error-detection paradigm, in which a simple error-detection coprocessor monitors a program for deviations from a compiler-formed abstraction of its behavior. Existing low-cost behavior-based techniques exploit certain redundancies in program codes or in a processor's architecture. New techniques are also being developed by using the methods from information theory, coding theory, and graph theory to identify new redundancies to exploit, and by using experimental data to uncover gaps between existing techniques and theoretical bounds. Concurrent error detection is necessary because transient faults are becoming more frequent as device size decreases, as the number of devices per processor and the number of processors per computer grow, and as more computers are subjected to noisy environments. University of California - Santa Cruz; Anujan Varma; RIA: High-Speed Interconnection Technologies for Digital Systems; (MIP-9111241); $69,990; 24 months. This research focuses on applying high-speed digital communication and switching technologies to computer systems.
Typical applications to be considered include the interconnection of processor subsystems for multiprocessor configurations, and the interconnection of processors to I/O subsystems. These applications require high bandwidth, low latency, and low error rates beyond the capability of current electrical busses and I/O channels. The applicability of various high-speed interconnection technologies and the development of new architectures and protocols suitable in a computer system environment are being explored. Fiber optics as the link technology and high-speed crossbar switches for interconnection are being utilized. The structure of these switches, as well as mechanisms for connection setup, routing, and flow-control across multiple cascaded switches are being studied. In addition, fast connection setup within a switch by the use of multiple controllers is also being investigated. Alternate interconnection technologies such as optical subcarrier multiplexing are also being investigated as low-cost replacements for the switch. Experimental evaluation of the concepts is being performed on a testbed consisting of IBM RS/6000 and PS/2 high-performance workstations equipped with 100 Mbits/s optical link cards. University of Southern California; Keith B. Jenkins; PYI: Optical Computing; (MIP-8858094 A03); $62,500; 12 months. Research is being pursued on: 1. digital optical computing; 2. reconfigurable interconnection networks; 3. neural networks; and 4. parallel computation models and computational complexity. Emphasis is placed on neural networks and parallel computation models. Specifically, the implementation of learning algorithms, number representation including analog, digital, and bipolar numbers, and new optical architectures and architectural issues, such as hardware and time complexity, uniformity, and energy requirements, are being studied. Some of the neural network architectures developed are being demonstrated experimentally. Finally, optical or hybrid optical/electronic architectures based on parallel computation models are being developed. These include an analysis of runtime performance of fundamental algorithms on the new architectures, hardware and device requirements for their physical realization and experimental demonstration of some of the architectures. The focus during this time period is on the study of the impact of more random connections in the optical interconnection networks and on the study of the potential application of incoherent/coherent holographic recording and readout to novel multiplexed, holographic optical elements. University of Southern California; Cauligi Raghavendra; Task Reconfiguration Problems in High-Performance Distributed-Memory Machines; (MIP-9101875); $49,118; 12 months. This research explores the techniques for reconfiguration of task modules on distributed-memory machines when faults occur. The problems include: 1. trade offs between local and global reconfiguration; 2. communications required to support reconfiguration and task computation after successful recovery; 3. fault-tolerant embedding; 4. analysis of the residual structure of networks in the presence of faulty nodes; and 5. extensive simulations of fault patterns and the performance of algorithms developed on a Paralex 10-dimensional hypercube. Georgia Institute of Technology; Umakishore Ramachandran; PYI: Architectural Issues in Parallel and Distributed Computing; (MIP-9058430 A01); $62,500; 12 months.
The goal of this research is to determine the software abstractions appropriate in parallel and distributed environments, and the hardware primitives that result in their efficient implementation. In parallel systems the primary focus is to understand the algorithm-, software-, and hardware-related limitations to achieving perfect speedup. Expected contributions in this area include the design of novel multiprocessor cache protocols with a view to reducing latency as well as network traffic in large shared memory systems; an exposition of the synergy between system software (compiler, runtime, operating system) and the machine architecture for efficiently using the new cache protocols; and a study of the scheduling, synchronization, and memory management issues in parallel systems and hardware support for their efficient implementation. In distributed systems, the research focuses on investigating (through experimentation) the structure of distributed applications and the suitability of the two models of interprocess communication, shared memory and message passing, with respect to criteria such as execution time, number of messages, and system load; identifying hardware support for distributed shared memory mechanisms; exploring mechanisms in the distributed shared memory model for efficiently supporting failure tolerance; and identifying efficient memory management features in each node for supporting failure tolerance. University of Illinois; Prithviraj Banerjee; PYI: Fault Tolerance in Parallel Processor Systems; (MIP-8657563 A04); $62,500; 12 months. This research is directed at two issues related to the design of parallel architectures: 1. I/O issues in parallel processor systems, and 2. communication coprocessors for parallel processor systems. Current parallel processor systems do not provide efficient support of Input/Output (I/O) operations. To achieve faster I/O, multiple disks have to be put together to form an I/O subsystem. Different types of multiple disk organization, namely, data declustering, disk synchronization and combinations of these techniques, are being considered. Extensive performance studies to evaluate the relative merits of the different organizations, on the basis of simulated workloads and traces from scientific computations, are underway. Specific issues in attaching such a parallel I/O system to shared memory and message-passing parallel processors are being explored. A major problem in current generation message passing parallel processor systems such as hypercubes is the tremendous overhead incurred during message transfers between processors, which makes the use of fine-grain parallelism impossible. This research deals with efficient hardware support for message passing in parallel architectures. On the basis of evaluating the communication characteristics of a large number of parallel programs, various interesting characteristics regarding temporal and spatial locality in their message passing behavior have been observed. The design and performance evaluation of a message passing coprocessor and a virtual channel router which can accelerate message communication in message passing parallel processors are being investigated. Massachusetts Institute of Technology; William J. Dally; PYI: Concurrent VLSI Architecture; (MIP-8657531 A06); $62,500; 12 months. New methods for applying VLSI technology to the construction of high performance computing systems are being investigated.
The goal is to make the performance of systems incrementally extensible by adding chips just as the memory capacity of today's systems is extensible. The approach is based on the object model of computation. A computing system is viewed at both the hardware and the software level as a collection of objects that communicate by passing messages. Within this framework, three areas are being explored: 1. develop interconnection network technology to improve the performance of message delivery; 2. design a message-driven processing element that efficiently supports fine-grain concurrent computation; and 3. explore abstractions for concurrency that facilitate the use of the machines being designed by an unsophisticated community. During this next period, the focus is on the evaluation of prototype MDP components, the assembling of a single-board (64-processor) J-Machine system, the revision of the MDP component design as required, and the beginning of the assembly of a 16-board (1024-processor) J-Machine system. Concurrent Smalltalk and Concurrent Aggregates programming systems are being installed on the J-Machine and some small applications are being demonstrated. University of Massachusetts - Amherst; Wayne Burleson; RIA: Designing VLSI Arithmetic Arrays to Satisfy Precision Constraints; (MIP-9108086); $60,000; 24 months. Digital signal processing (DSP) structures introduce arithmetic error due to finite wordlength effects. Different number representations and arithmetic methods have their associated precision. This research provides a unifying framework within which many different design alternatives can be compared. It studies various VLSI architectures which satisfy constraints on arithmetic precision while maximizing performance and minimizing VLSI system cost. It uses a statistical state-space model to evaluate the precision of fixed-point, floating-point, signed-digit, residue, logarithmic and rational number systems and CORDIC arithmetics. Unlike conventional microprocessors, dedicated systems can have a multitude of distinct internal wordlengths. The choice of an adequate wordlength can be defined as an integer optimization problem in which arithmetic precision is a constraint and some measure of system cost, such as VLSI area (A), latency (L), or period (P), is the objective function. A CAD tool will be built to explore the large design space of alternate arithmetics, architectures and wordlengths. University of Michigan; Karem Sakallah, Trevor Mudge, and Edward Davidson; Timing Verification and Optimal Clocking of Latch-Controlled Synchronous Digital Circuits; (MIP-9014058); $90,660; 12 months (Joint support with the Design, Tools and Test Program - Total Grant $127,660). This research is focused on the temporal modeling of latch-controlled synchronous digital systems. The use of level-sensitive latches, as opposed to edge-triggered flip-flops, has become quite common in recent years because latches are easily implemented in MOS VLSI, by far the leading technology for building digital systems. A consistent theoretical framework for describing the timing constraints which must be satisfied by such systems for proper operation is being developed. This framework is being used to develop efficient algorithms for: 1. checking adherence to these constraints (timing verification), and 2. maximizing system performance without violating them (optimal clocking). Both of these problems require the solution of large linear programs (LPs).
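As an illustration only, and not the project's own formulation, a deliberately simplified single-phase sketch of such a linear program can be written as follows (it ignores the multi-phase clocks, level-sensitive time borrowing, and hold constraints that the project's model addresses). Here a_i is the latest data arrival time at latch i within a cycle, Delta_ij the maximum combinational delay from latch i to latch j, S_j the setup time at latch j, and T_c the clock period to be minimized:

\[
\begin{aligned}
\text{minimize}\quad & T_c \\
\text{subject to}\quad & a_i \ge 0 && \text{for every latch } i,\\
& a_j \ge a_i + \Delta_{ij} && \text{for every combinational path from latch } i \text{ to latch } j,\\
& a_j + S_j \le T_c && \text{for every latch } j.
\end{aligned}
\]

Under these assumptions, timing verification amounts to testing feasibility of the same constraint set with T_c held fixed, while optimal clocking minimizes T_c over the feasible region.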
The special structures of these LPs will be utilized to reduce the solution time so that the algorithms can be used in an interactive design environment. The research is defining an appropriate notion of criticality and devising effective ways of identifying and reporting the critical areas in the system. The application of this framework to several regular circuit structures, such as CPU data paths and pipelines, is being examined to obtain analytical expressions for the minimum cycle time. Such closed-form optimality criteria will guide the synthesis of these structures for maximum performance. The practical significance of this framework and associated algorithms is being assessed experimentally on actual industrial VLSI designs. University of Michigan; Kang Shin; Probabilistic System-Level Fault Diagnosis in Multiprocessor/Multicomputer Systems; (MIP-9012549 & A01); $82,088; 12 months. Due to their potential for high reliability and throughput via the multiplicity of components, distributed computing systems are being increasingly used for reliability- and time-critical applications. However, the probability of having one or more component failures in a distributed system increases with the number of components used. Thus, the ability to locate faulty components, isolate them, reconfigure the system, and resume the computation is a key to success in realizing the potential of any distributed system. As part of the global goal of designing fault-tolerant distributed systems, this project is concerned with system-level fault diagnosis for large multiprocessor/multicomputer systems. A thorough survey of the currently available methods for diagnosis of large systems has led to a conclusion that a probabilistic approach is the most promising. A probabilistic diagnosis approach allows for the diagnosis of multiprocessor/multicomputer systems with both intermittent and permanent faults, using high-level tests which have imperfect fault coverage and/or are executed on-line. The first step of this project is to develop optimal solutions to the probabilistic diagnosis problem using single or multiple fault syndromes. The second step is to efficiently implement these solutions as distributed algorithms. This task is non-trivial since the identity of faulty communication links and processing nodes cannot be assumed to be known "a priori". Finally, the probabilistic diagnosis algorithms will be implemented and their performance will be measured on an experimental multicomputer system, called HARTS, which is currently being built at the Real-Time Computing Laboratory (RTCL) at the University of Michigan. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program. Princeton University; Kenneth Steiglitz; Linear Speedup Architectures; (MIP-8912100 A01); $106,221; 12 months. Limiting factors in highly parallel computation that stem from timing, communication, and reliability requirements are being studied. Particular emphasis is placed on large regular structures, which are especially well suited for signal processing and iterative scientific computation, such as solving differential equations and simulating cellular automata. Computers with a million or more processors will be needed in some of these applications and mathematical results are needed to guide designers' decisions on synchronization method, memory organization and use of redundancy.
In the area of timing, work continues in statistical modeling of clock skew in synchronous systems, the analysis of throughput in self-timed arrays, and an ongoing comparison between these two synchronization paradigms. In the area of communication, work continues on bounds based on pebbling arguments. Analogous questions about the reliability of large parallel arrays are addressed. The question of whether or not linear speedup is attainable with fixed reliability is a central problem here, and is the subject of theoretical work. Syracuse University; Alok Choudhary; RIA: Design, Analysis, Simulation, and Evaluation of Multi-Level Caches for Scalable Multiprocessors; (MIP-9110810); $69,936; 24 months. This research studies the role and the performance of multi-level caches in scalable shared memory multiprocessors. The objectives of this project are: 1. evaluate and characterize performance of multi-level caches for multiprocessors as a function of cache parameters, cache coherency protocol, and program characteristics using trace-driven simulations; 2. evaluate cost-performance tradeoffs in selecting various multi-level cache configurations for scalable shared memory architectures using different workload parameters; 3. investigate and develop trace reduction and sampling techniques to speed up simulations to study multi-level cache performance in multiprocessors; 4. develop analytical models to evaluate performance of multi-level caches in scalable architectures; and 5. investigate what minimal set of characteristic metrics must be measured to predict multi-level cache performance over a wide range of cache parameters. Kent State University; Kenneth Batcher; Perfect Shuffle Machines; (MIP-9004127 A01); $80,342; 12 months. Massively parallel architectures have been proposed for large scale problems in associative processing, database management, artificial intelligence, image processing, etc. In a very large machine the cost of the Processor Element interconnections is significant. The perfect shuffle and exchange connections are an attractive alternative to other connection schemes such as two-dimensional meshes, multi-stage networks and hypercubes. This project studies a massively parallel architecture where the processing elements are interconnected with perfect shuffle and exchange networks. Issues being investigated include how to best divide a machine into modules, how to add redundancy, how to control data routing, the architecture of each processing element and how to control each processing element. Ohio State University; Mohan Ahuja; RIA: Designing Concurrent Systems Using Hierarchy of Communication Speeds; (MIP-9111045); $60,000; 24 months. The layered approach to system design has become popular in designing correct systems because of the ease of visualizing the execution as a sequence of processes each resulting from the execution of a layer. This research aims at a lighter-weight approach compared to sequentialization of processes. It defines a notion of communication speed as it is relevant to distributed systems, and a hierarchy of communication speeds as an alternative to sequentialization of processes. It also defines some synchronization abstractions that combine the concept of hierarchy of communication speeds and some channel primitives. These abstractions result in the same ease as sequentialization of processes and are expected to improve the system efficiency in a layered system. They are also useful for designing concurrent programs.
This research has several objectives: 1. Develop a framework of thinking based on speeds of communication. 2. Develop some implementations of these abstractions. 3. Develop a scheme for decentralized control using hierarchical channel primitives. 4. Implement a distributed operating system using hierarchical channel primitives and evaluate its performance. 5. Develop concurrent algorithms that use, and a concurrent programming environment that supports, hierarchical channel primitives. Pennsylvania State University; Tse-Yun Feng; Efficient VLSI Implementation; (MIP-8912455 A02); $94,859; 12 months. This research aims at defining, studying, and simulating a functional processor architecture which provides high performance and efficient VLSI implementations. The architecture under consideration uses functional decomposition of the execute-access mechanisms and hardware implementation of the access mechanisms of data structures. Research is directed towards the design of the access portion of the processor, the development of an appropriate instruction set for the architecture, the specification of the required execute-access intercommunication, the exploration of the use of area-inexpensive queue caches, and the detailed simulation of the architecture. Pennsylvania State University; Tse-Yun Feng and Chitaranjan Das; Evaluation Techniques for Hypercube and MIN-Based Architectures; (MIP-9104485); $80,171; 12 months. Hypercube and multistage interconnection network (MIN)-based architectures are two promising classes of parallel computers that have received considerable attention in recent years. The objective of this research is to develop analytical evaluation techniques for predicting performance, dependability, and performance-related dependability behavior of these two classes of multiprocessors. The performance model is based on a three-level approach - network level, task (job) level, and system level. The network level analysis gives the average communication delay which can be used in finding job completion time at the task level. Using the job completion time, system-level parameters such as throughput and response time can be computed from an appropriate queueing model. Dependability analysis would consider the degradation of the computing elements and communication network for finding task-based reliability or availability. The dependability model is aimed at finding a proper subcube for the execution of a task. The analysis covers both unique-path and multi-path MINs. Performance-related dependability models are developed by associating suitable performance measures with the structure states of a multiprocessor. This project will make available a complete set of tools for analyzing these two classes of multiprocessors that have great promise for different applications. University of South Carolina; M. Sridhar; Task Reconfiguration Problems in High-Performance Distributed-Memory Machines; (MIP-9103086); $29,990; 12 months. This research concerns the exploration of techniques for reconfiguration of task modules on distributed-memory machines when faults occur. The problems include: 1. trade-offs between local and global reconfiguration; 2. communications required to support reconfiguration and task computation after successful recovery; 3. fault-tolerant embedding; 4. analysis of the residual structure of networks in the presence of faulty nodes; and 5. extensive simulations of fault patterns and the performance of algorithms developed on a Paralex 10-dimensional hypercube. 
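Several of the awards above reason about hypercube-connected machines in the presence of faulty nodes. The following small C program is offered only as an illustrative sketch (it is not code from any of the funded projects, and the cube dimension and fault pattern are hypothetical): it counts how many fault-free nodes remain reachable from a given node, a toy version of the residual-structure analysis mentioned above.

/* Illustrative sketch only: breadth-first exploration of the fault-free
 * portion of a d-dimensional hypercube.  Nodes are integers 0..2^d-1;
 * two nodes are neighbors when their labels differ in exactly one bit. */
#include <stdio.h>

#define DIM 4                          /* hypothetical 4-cube, 16 nodes */
#define NODES (1 << DIM)

int reachable(int start, const int faulty[NODES])
{
    int visited[NODES] = {0}, queue[NODES], head = 0, tail = 0, count = 0;

    if (faulty[start]) return 0;
    visited[start] = 1;
    queue[tail++] = start;
    while (head < tail) {
        int v = queue[head++];
        count++;
        for (int bit = 0; bit < DIM; bit++) {
            int w = v ^ (1 << bit);        /* flip one bit: a neighbor */
            if (!visited[w] && !faulty[w]) {
                visited[w] = 1;
                queue[tail++] = w;
            }
        }
    }
    return count;
}

int main(void)
{
    int faulty[NODES] = {0};
    faulty[3] = faulty[10] = 1;            /* assumed fault pattern */
    printf("fault-free nodes reachable from node 0: %d of %d\n",
           reachable(0, faulty), NODES - 2);
    return 0;
}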
Texas A & M University; Laxmi Bhuyan; Design and Analysis of Cache Coherence Protocols for MIN Based Multiprocessors; (MIP-9002353 A01); $85,508; 12 months. This project studies various configurations of shared busses in a Multistage Interconnection Network (MIN) based system in order to maintain cache coherence. These busses can be placed separately or inside the MIN switches. It is also possible to avoid using shared busses by redesigning a MIN switch to handle broadcasting. The design and analysis of these cache coherence protocols are the research objectives of this project. A thorough analysis and comparison of various approaches can guide future multiprocessor system designers in their design process. Multiprocessor organizations are generally defined based on their processor-memory interconnections. MINs form a very suitable interconnection medium for building large scale multiprocessor systems. The efficiency of these systems can be further enhanced by putting cache memories with the processors to reduce the memory access demands. However, such cache memories give rise to inconsistency of shared data due to lack of coordination among the processors while changing their shared cache contents. A hardware solution to this problem is to broadcast the state changes through a shared bus interconnection. Since MINs do not have a shared bus, maintaining cache coherence in MIN based multiprocessors poses a serious challenge for research. University of Texas; Robert S. Boyer; Mechanized Code Proofs Based on a Formal Microprocessor Specification; (MIP-9017499); $34,905; 12 months (Joint support with the Experimental Systems Program - Total Grant $69,809). This project is to build an experimental software system to test the feasibility of mathematically formalizing physically realizable microprocessor architectures and to test the feasibility of mechanically checking the correctness of system software written in microprocessor machine language for such microprocessors. Although in principle such formalization and checking are possible, they are widely suspected to be practically infeasible. However, a few recent experiments do demonstrate that a new approach of directly formalizing a machine architecture as a mathematical function in a mechanized logic offers promise. A substantial fraction of a commonly used processor has been formalized and several machine code programs have been rigorously checked mechanically. The initial prototype of the system will be used (a) to test the feasibility of proofs of compiler correctness and (b) to explore the development of a formalized programming environment for system software, with focus on such issues as formalizing memory management and rigorously demonstrating performance figures. The experiment will involve formalizing, in a mechanized logic, the user instruction set of several commonly-used microprocessors, and will also involve formalizing the semantics of some system traps. The work will explore new areas in formalization, including cache consistency, memory protection, and interrupts. The machine architecture issues of proving correct high level language compilers are also being explored. University of Utah; Erik Brunvand; RIA: Asynchronous Computer Architecture; (MIP-9111793); $70,000; 24 months. Computer architecture is currently dominated by traditional synchronous implementations. 
As techniques for building asynchronous circuits, and self-timed circuits in particular, improve, it becomes more reasonable to ask what effect these circuit techniques might have on the architecture of computers. Clocked synchronous circuits, for example, tend to reflect worst-case behavior because the clock cycle must be long enough for the worst-case path through some component of the system. Asynchronous systems, on the other hand, tend to reflect average-case performance because results from one component are available as soon as the computation completes. Asynchronous systems can also take direct advantage of incremental process or technology improvements and can thus remain productive over longer periods of time. This research studies the ways in which asynchrony can be used to increase performance at an architectural level. A framework is being developed that allows different processor organizations to be considered. The result of this study will be a proposed architecture that uses asynchrony to improve processor performance. Parts of the proposed architecture will be tested by using an automatic translation system to generate circuits from program descriptions. Virginia Polytechnic Institute & State University; F. Gail Gray and Nathaniel J. Davis; Distributed Control in Dynamically Reconfigurable Architecture; (MIP-9019809); $130,000; 24 months. This project proposes an innovative multi-layered array of computing elements in which dynamic reconfiguration of the interconnection structure is possible. Reconfiguration will be performed using distributed control, eliminating the potential performance bottleneck and single point of failure (hardcore) of a centralized control mechanism. An architecture of this type provides an increased level of system performance not typically found in current parallel processing systems. Through the reconfiguration process, the underlying architecture can be optimized to improve algorithm performance, faulty modules can be isolated, and the yield of wafer-scale technology can be improved. The principal investigators are investigating this architecture for more efficient utilization of spares and to provide on-line fault detection and data roll-back features. They are also studying the problem of how to optimize the reconfiguration algorithm for cost effective implementation in the emerging wafer scale integration technology. Finally, they are extending the reconfiguration algorithm to permit its use on alternative parallel processing architectures such as hypercubes and tori. University of Washington; Susan Eggers; PYI: Code Generation for Uniprocessors; (MIP-9058139 A01); $62,500; 12 months. The long range focus of this research is on improving the performance of parallel programs. One thrust uses compiler technology to obtain better multiprocessor cache performance, i.e., to reduce the amount of sharing-related bus traffic. The initial phase of that work is the detection of false sharing and the measurement of its impact on cache miss ratios and bus utilization. Another area of investigation is measuring the amount and type of program level parallelism in nonscientific parallel programs and determining the levels of hardware parallelism that best execute them. University of Wisconsin; Mark D. Hill; PYI: Cache Memory Design; (MIP-8957278 A02); $62,500; 12 months. 
The long range focus of this research is on the performance evaluation and implementation of multiprocessors that are easy to program, provide orders of magnitude more computing power, and can be implemented cost effectively. The principal thrust of this work is on the design of cache memories, with specific tasks on the development of multiprocessor compilers, the implementation and semantics of address translation, the implementation of coherent shared memory in systems with multiple caches, the interaction between cache design and implementation factors, and the future effectiveness of cache hierarchies. In the area of shared-memory models, two current goals are: 1. Examine and characterize what happens when programs have bugs that cause them to violate agreed-upon constraints. Research on data race detection and on weak ordering is being extended so that both operate correctly until the "first" (set of) data race(s). 2. Current weak ordering work ignores semantics of synchronization operations. By incorporating some synchronization semantics, even higher performance memory system implementations can be achieved. A model that allows synchronization semantics to be used is being developed. In the area of design and analysis of secondary caches using very long traces, two current goals are: 1. To determine whether and how page placement algorithms should be modified to improve the performance of large real-indexed caches. 2. To investigate whether short samples of the address traces can be combined to estimate the performance of the full trace in the study of secondary caches. Two problems are how best to compensate for cold-start effects and for non-stationary average miss ratios.
Application Driven Architecture
San Jose State University; Belle Wei; VLSI Architectures and Circuits for Data Compression; (MIP-9019862); $74,733; 24 months. The gap between the huge data requirements of many applications and the limited capabilities of transmission and storage systems dictates the need for high-performance data compression hardware. This research focuses on the study and implementation of nonstationary multi-alphabet source coding algorithms suitable for VLSI implementations, given the current and near-future technology; the implementation of high-throughput circuits for real-time applications; and the determination of an optimal tradeoff between the compression performance and circuit configurations given constrained chip area. The developed algorithms and circuits are expected to offer more cost-effective data compression for communication and storage systems. University of California - Berkeley; John Wawrzynek; PYI: Application-Specific VLSI Architectures; (MIP-8958568 A03); $25,000; 12 months. The focus of this research is on designing application-specific VLSI architectures, specifically for the generation of realistic musical sounds using parallel processing and VLSI technologies. The approach is to develop mathematical models for the motions of physical musical instruments, and to numerically solve these models to produce sounds using "synthesis by simulation." Generating sound by solving the equations of motion of an instrument captures a natural parametrization of the instrument and includes many of the musically important physical characteristics of the sound that are not attainable using other methods. 
The design of efficient and fast application-specific VLSI computing architectures is crucial to the success of this project due to the requirements of massive computation and real-time human interaction. University of California - Irvine; Isaac D. Scherson; A Bit-Parallel, Word-Parallel, Massively Parallel Associative Processor for Scientific Computing; (MIP-9106949); $228,236; 24 months. Classical SIMD associative processors execute arithmetic in a bit-serial word-parallel manner. Current massively parallel machines also perform bit-serial arithmetic in a fine grain computing environment. This bit-serial property is a limiting factor in increasing processing speeds. A simple but powerful new architecture based on the classical associative processor model is proposed here. By distributing logic among slices of storage cells such that a number of bit-planes share a simple logic unit, bit-parallel arithmetic in a massively parallel environment becomes feasible. For m-bit operands, complex operations such as multiplications execute in O(m) cycles as opposed to O(m^2) for bit-serial machines. The simplicity of the architecture enables its implementation using VLSI technology, and hence allows the construction of a word-parallel, bit-parallel, massively parallel (P3) computing system. The main goal of this research is to build an experimental P3 machine which is to be embedded into a heterogeneous supercomputing environment. The need for such an architecture and for an actual working prototype stems from a number of applications which cannot be solved on conventional supercomputers. Space science applications are outlined as this work is also partially supported by NASA. University of California - Los Angeles; Milos Ercegovac and Tomas Lang; Composite Operations Using On-line Arithmetic for Application Specific Parallel Architectures: Algorithms, Design, and Experimental Studies; (MIP-8813340 A02); $102,882; 12 months. The objective of this research is to formulate, design, and evaluate composite arithmetic algorithms using an on-line approach, so that the resulting algorithms are better suited for VLSI and are more adaptable to the requirements of application-specific parallel systems for numeric computations. On-line arithmetic is characterized by: 1. digit-serial operations, 2. the most-significant-digit-first mode in all operations, 3. no carry propagation, and 4. highly modular organization. This research combines theoretical, experimental, and methodological aspects to develop and adapt algorithms to the basic operations and to combine the basic operations into more complex functions. Design modules are being developed in VLSI, which are then incorporated into application-specific systems. The focus is on matrix computations that are pervasive in signal processing applications. University of California - Berkeley; David G. Messerschmitt; Research in High-Speed Dedicated Signal Processing Architectures; (MIP-8903668); $94,636; 12 months. This research aims at studying bottlenecks related to chip inputs/outputs, memory bandwidth, memory size, and data dependencies in special-purpose computers for signal processing applications. The goal is to theoretically quantify these problems and systematically ease them through algorithm design, algorithm transformation, and joint design of architectures and algorithms. 
Research is carried out on identifying the nature of problems in signal processing applications and experimenting with applications related to video compression and continuous-speech recognition. Real-time signal processing systems often involve difficult tradeoffs among performance requirements, cost goals, and VLSI technological limitations. There is a strong need to study dedicated or application-specific architectures that are very efficient for the given application. This research addresses the issues related to the bottlenecks that arise in the implementation of application-specific signal processing systems. It extends previous work, which mainly addressed problems related to throughput. Results obtained allow better tradeoffs in the design of special-purpose signal processing systems. University of Southern California; V. K. Prasanna-Kumar; Parallel Techniques for Image Processing and Vision; (IST-8905243 A01); $45,402; 12 months (Joint support with the Robotics and Machine Intelligence Program - Total Grant $90,804). This research will develop parallel algorithms for machine vision, especially for the interface between image processing and image understanding, with further study of new and existing parallel architectures for efficient execution of these algorithms. Architectures to be studied include fixed-size arrays, reconfigurable meshes, reduced VLSI arrays, and arrays with hypercube connections such as the Connection Machine. Data movement techniques will be designed to support parallel solutions to image computations in mid-level and high-level vision. Specific high-level problems to be studied are motion analysis, image matching, and stereo matching, as well as several discrete relaxation techniques. Neural-net approaches to vision will be supported by design of routing techniques based on preprocessing of the underlying neural graph and by mapping of such structures onto fine-grain parallel machines. A Connection Machine at USC Information Sciences Institute will be used to evaluate data partitioning, data routing, and mapping techniques. University of South Florida; V. K. Jain, D. L. Landis and N. Ranganathan; Reduced Cycle Nonlinear Multi-function Chips; (MIP-9103286); $83,724; 12 months. Ultra-high-speed signal processing on silicon is a key requirement in applications such as computer vision, three-dimensional graphics, and phased array radar. The majority of these advanced algorithms require the use of nonlinear functions such as the reciprocal, the square-root, and trigonometric functions. This project is to research and develop Reduced Cycle Nonlinear Multi-Function Chips that meet these critical needs and to gain new knowledge in this key area. The proposed approach would reduce such nonlinear computations for 32-bit data to two clock cycles, or even a single cycle, at the expense of increased complexity. The approach is also function independent. It offers the potential of providing multi-function capability on a single chip/cell. The functions chosen for the proposed research, based upon a study of signal processing algorithms, are the reciprocal, square-root, sine/cosine, and the arctan functions. It will also study two-argument functions. The specific functions chosen are complex (and thus two-argument) magnitude, phase, reciprocal, and logarithm functions. University of Illinois; Benjamin W. 
Wah; Design of Multiprocessing Systems for Learning Strategies; (MIP-8810584 A01); $53,047; 12 months (Joint support with the Robotics and Machine Intelligence Program and the Knowledge & Database Systems Program - Total Grant $106,093). The goal of this research is to develop a multiprocessing system that automatically learns strategies for dynamic decision problems, both in hardware and software, such that the strategy learned is targeted for a multiprocessing system and is the best among those that can be found within the given time and resource constraints of the learning system and the knowledge provided by the users. The class of problems being explored is called dynamic decision problems. These problems may possess one or more of the following properties: the decisions necessary to solve the problem may be interrelated, the decisions are based on dynamic parameters which may be uncertain, the number of possible strategies is very large, and the effects of a decision may not be immediately available after the decision is made. A dynamic decision problem is solved by a combination of domain knowledge and meta-knowledge. Domain knowledge consists of the definition of the problem and its parameters, its representation, and possibly an algorithm (or class of algorithms) to solve it. Meta-level knowledge is knowledge about selecting alternatives to implement in the algorithm and its representation, or finding new ways of solving the given problem. Domain knowledge can be implemented in the form of a program in a computer, for instance, while meta-knowledge is provided by the designers. This research focuses on the development of methods so that some of the meta-knowledge supplied by designers can be obtained by automatic learning methods. Initially, the focus is on dynamic decision problems for which there are some known solutions, and on learning strategies to solve these problems. The dynamic decision problems addressed include combinatorial search problems and real-time decision problems. The distinguishing features of this research are as follows: First, architectural constraints are being included so that the learning system will find the best strategy given a fixed amount of time and resources, and the computer on which the learning system is implemented. Second, the strategy learned by the learning system also includes consideration of the computer on which it will be implemented. University of Southwestern Louisiana; Weijia Shang; RIA: A Theoretical Framework for Programming and Design of Bit-Level Processor Arrays; (MIP-9110940); $59,236; 24 months. The objective of this research is the development of a theoretical framework on which a software tool can be based to systematically program and design bit-level processor arrays. The resulting software tool should be portable and should fully exploit the parallelism at the bit level and word level, which current system software for bit-level systems does not fully exploit. It can significantly reduce the programming burden on users and improve the current situation, in which only a low percentage of the performance offered by the hardware is utilized because of insufficient system software support. The class of algorithms for which the theoretical framework is developed is frequently used in digital signal processing, image processing and other scientific computations. This project is divided into three steps: 1. expanding the conventional algorithm at word-level into bit-level to explore the parallelism at bit-level; 2. 
analyzing the parallelism and dependence structure of these n-dimensional (often four- or five-dimensional) bit-level algorithms (algorithms with n nested loops); and 3. finding time-optimal mappings onto (k-1)-dimensional (often two-dimensional) bit-level processor arrays without computational conflicts. New Mexico State University; Jaime Ramirez-Angulo; RIA: Real Time Solution of Laplace Equation Using Analog VLSI Circuits; (MIP-9111278); $59,962; 24 months. The objective of the proposed research is the investigation and the implementation of partial differential equation solvers using analog VLSI circuits, specifically the design, fabrication and evaluation of a programmable analog Laplace equation solver. The main advantages of the proposed approach include speed improvement, robustness and fault tolerance. This research demonstrates the feasibility of analog VLSI partial differential equation solvers. It will have an impact on engineering disciplines where a real-time solution of these types of equations is required. State University of New York - Stony Brook; Arie Kaufman; A Three-Dimensional Voxel-Based Graphics System; (MIP-8805130 A05 & A06); $47,708; 12 months (Joint support with the Computer Systems Architecture Program - Total Grant $87,415). Research is being performed on a three-dimensional (3-D) graphics workstation based on the CUBE architecture, which is centered around a cubic frame-buffer of voxels with three processors accessing the cubic memory to input geometric and scanned data, to manipulate, to project, and to render the 3-D images. An inherent 3-D user interface employing a true 3-D input device and the complementary 3-D screen environment is being developed. Skewed memory organization of the cubic frame buffer that provides conflict-free access to beams in arbitrary directions is being simulated. Finally, a viewing architecture that allows direct and faster arbitrary parallel and perspective projections is being simulated. The architecture designed is a versatile 3-D workbench for medical, engineering, biological, geological, and other 3-D visualization applications. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program. North Carolina State University; Dharma Agrawal; Functional VLSI Module-based Multiprocessor Architectures for Real-time Vision; (MIP-8912767 A02); $71,878; 12 months. This project focuses on system integration of VLSI functional modules by creating uniformity in their interface behavior. Such integration is aimed at designing flexible, real-time, low-level vision systems using pipelining of modules that correspond to individual operations. Two tasks are addressed in this research. First, functional modules for benchmarking vision operations are being designed and characterized. Emphasis is placed on maintaining invariance in control and input/output requirements with respect to changes in the parameters of each operation. Second, algorithms and associated software for static mapping of a given sequence of operations on a set of functional modules connected through a network of switching nodes are being developed. These algorithms balance the workload of the modules in order to achieve high throughput. The characterization of individual VLSI modules in terms of operation-specific input/output, buffer sizes, and memory size allows more realistic prediction of system performance. 
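The workload-balancing concern in the preceding award can be illustrated with a short, hypothetical C sketch (this is not code from the project, and the operation costs and groupings are assumptions): the steady-state throughput of a linear pipeline of functional modules is set by its most heavily loaded module, so a balanced assignment of operations to modules raises throughput.

/* Illustrative sketch only: throughput of a linear pipeline of modules,
 * where each module is assigned a group of operations and the slowest
 * (most loaded) module sets the rate at which results emerge. */
#include <stdio.h>

double pipeline_throughput(const double op_time[], const int group[],
                           int n_ops, int n_modules)
{
    double load[16] = {0.0};             /* per-module work; n_modules <= 16 */
    double slowest = 0.0;

    for (int i = 0; i < n_ops; i++)
        load[group[i]] += op_time[i];     /* group[i]: module assigned to op i */
    for (int m = 0; m < n_modules; m++)
        if (load[m] > slowest)
            slowest = load[m];
    return 1.0 / slowest;                 /* results per unit time */
}

int main(void)
{
    double op_time[5] = {2.0, 2.0, 1.0, 1.0, 2.0};   /* hypothetical op costs  */
    int skewed[5]   = {0, 0, 0, 0, 1};               /* module 0 does most work */
    int balanced[5] = {0, 0, 1, 1, 1};               /* 4.0 vs 4.0 units each   */

    printf("skewed mapping:   %.3f results/unit time\n",
           pipeline_throughput(op_time, skewed, 5, 2));
    printf("balanced mapping: %.3f results/unit time\n",
           pipeline_throughput(op_time, balanced, 5, 2));
    return 0;
}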
University of North Carolina - Chapel Hill; Hiroyuki Watanabe; Exploration and Evaluation of Architectures for Fuzzy Logic Processor; (MIP-9103338); $12,954; 12 months. This research is for the design and evaluation of architectures suitable for fuzzy logic applications. It uses a reduced instruction set computer (RISC) as a core processor and adds a fuzzy logic instruction set. It also contains special hardware functional units for fuzzy logic related operations. Simulation and detailed prototype design will be carried out to evaluate various design tradeoffs. Oregon State University; Bella Bose; Balanced Codes; (MIP-9016143); $46,897; 24 months. In a balanced code, each code word contains an equal number of 1's and 0's. These codes find many applications - data transmission in fiber optics, data integrity maintenance in optical discs, fault-tolerant memory design, etc. The objective of this research is to develop design methods for error correcting balanced codes. The aim is to design codes which have high information rate and at the same time have simple encoding/decoding algorithms. Other related topics such as DC-free coset codes, balanced codes over higher alphabets, etc., are also being investigated. Oregon State University; Sayfe Kiaei; RIA: Synthesis and Implementation of Multi-Rate Arrays; (MIP-9011227 A01); $4,687. In this project, previous design methods for systolic arrays are being extended to Multi-Rate Arrays (MRAs) in several directions. First, recent results on the synthesis of MRAs are being developed to obtain a formal procedure for the design automation of MRAs. Secondly, the application of MRAs to DSP and image processing algorithms is being explored. This includes a thorough comparison of MRAs with local broadcast (systolic), bounded-broadcast, and global broadcast arrays in terms of speedup, efficiency, architecture complexity, number of data paths, and VLSI chip area. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program. Carnegie-Mellon University; John Shen; Architecture Synthesis and Retargetability for High-Performance Application-Specific Processors; (MIP-9007678); $70,001; 12 months. A new instruction-level parallel (ILP) architecture called XIMD is proposed. This architecture model can provide better performance and efficiency for a broader range of applications than traditional scalar uniprocessors, including pipelined RISC processors. The XIMD architecture employs multiple functional units and can support the concurrent execution of a variable number of instruction streams. The regularity and simplicity of its hardware implementation make it highly scalable and retargetable to accommodate diverse application code characteristics. Currently, the XIMD architecture model is being developed and evaluated. A functional level simulator for a baseline XIMD architecture is being implemented. Using this simulator, the performance and efficiency of the XIMD can be quantitatively analyzed. A practical prototype XIMD machine is also being designed. Implementation feasibility and achievable performance using currently available parts are being assessed. Compilation methodology and techniques to support the XIMD architecture model are also being investigated. Pennsylvania State University; Mary Irwin and Robert Owens; High Performance, Fine Grained, Application Specific, VLSI Architectures; (MIP-9102500); $79,758; 12 months. 
The primary thrust of the research is the investigation of more general-purpose signal processing architectures which achieve the same level of performance as special-purpose signal processing architectures without an accompanying increase in granularity. Most attempts to expand an architecture to accommodate a more general class of applications result in an increase in granularity. With the increase in granularity either the overall size of the processor increases or, to keep the overall physical size constant, fewer processing cells are utilized. In either case, the end result is a decrease in performance. This project attempts the development of more general-purpose signal processing architectures which avoid this difficulty. They combine the granularity, performance, and physical size of fine-grain processors with the flexibility of more coarse-grain processors. Brown University; Daniel Lopresti; Investigations in Programmable Systolic Architectures; (MIP-9020570); $90,441; 24 months (Joint support with the Experimental Systems Program - Total Grant $180,882). This is a project to continue work on the B-SYS system for biological sequence comparison. This is a linear array of chips, each chip containing 47 small processors that can do fixed point arithmetic and the character comparisons needed for sequence comparison. Previous work has resulted in a 10-chip prototype. In this project, the principal investigator is expanding the prototype, developing software to aid in programming and debugging the array, and studying testability and fault-tolerance issues. The goal is to produce an inexpensive coprocessor with supercomputer performance on this problem. Brown University; Harvey Silverman and Sumit Ghosh; BRAHMA: The Brown Adaptive Hardware Machine Architecture: A New Direction in Computing; (MIP-9021118); $76,408; 12 months. This research focuses on the development of the concept, design, and implementation of a software-support/reconfigurable-hardware platform that accepts an application program, written in C, and automatically generates both the hardware-configuration data and the program to execute on the reconfigured hardware. The initial application programs to be looked at are the kernel algorithms currently used in speech processing, distributed decision-making, computer vision, CAD of digital systems, robotics, and signal processing. New conversion techniques for the expanded set of C functions are being developed. The construction of a limited hardware platform to verify the expected advantages of the software system is being explored. Texas A & M University; Ugur Cilingiroglu; RIA: Charge-Pumping Neural Networks; (MIP-9103424); $20,000; 24 months (Joint support with the Systems Prototyping and Fabrication Program - Total Grant $49,160). The general objective of this project is to fully explore the "Charge-Pumping Network (CPN)" concept, and to transform it into a very densely integrable family of microelectronic neural networks. The CPN, in its generic form, is the simplest interconnected array of MOS gated-diodes. It is capable of performing inner product and thresholding operations in one direction and weighted averaging in the opposite direction, simultaneously over the same synaptic matrix. Yet, no visible feedback path exists in the array. This creates a very rich bidirectional neural functionality in a very compact network. 
The research plan includes the development and refinement of network synthesis procedures, a search for self-learning ability and the entire design/fab/test cycle for implementing five different target architectures. The goal is to extend the knowledge base in neural network synthesis by offering a general procedure for non-negative synaptic matrix design, and to develop a knowledge base in collective multistability through an analysis of this fundamental concept. University of Washington; Robert Darling and Robert Pinter; Sensory Neural Networks for Advanced Photodetectors; (MIP-8822121 A02); $97,891; 12 months. This project focuses on implementing arrays of photodetectors and associated first-level sensory neural networks in monolithic integrated circuit format, for the purposes of applying knowledge of biological vision to engineering applications of advanced photodetectors and of producing a physical model that can be used in the study of multiplicative lateral inhibition in visual systems. The photodetectors and electronic circuitry are fabricated using gallium-arsenide technology to allow for upward compatibility with integrated optoelectronic systems and to allow the constructed detector assemblies to function in harsher operating environments. Additionally, the monolithic integration of an array of photodetectors with a first level neural network offers advantages in size, reliability, cost, and hence commercial applicability. Biological visual systems perform many demanding tasks that are far from the present capabilities of electronic hardware. The success of many biological systems can mostly be attributed to a large degree of pre-processing of the visual image at the most peripheral level of the sensory system. This pre-processing has been described by many models, one of which is multiplicative lateral inhibition, which has the advantage of simple implementation in electronic circuitry and is being studied in this project. Since recognition is studied in a fully parallel context, advanced photonic sensors based upon this developing technology have the potential to approach the spatial recognition performance of biological retinas and to greatly surpass them in temporal performance of feature acquisition. Marquette University; Lee Belfore; RIA: Modeling Faulty Neural Networks; (MIP-9110441); $69,162; 24 months. This research is on modeling faulty neural networks using a fault polynomial representation to facilitate the creation of a Markov model. The initial method has the capability of analyzing neural networks with about twelve neurons. This research extends the technique to allow the analysis of a neural network with several hundred neurons. Other related research objectives include studying different fault models, fault analysis accuracy, related disciplines for analysis techniques, dynamic behavior measurements, other neuron models, learning, and different neural network architectures. Digital computer techniques are used to compute and verify the results. This research allows the faulty behavior of neural networks to be analyzed. It provides a motivation for implementing neural networks with a large number of neurons. It also models some aspects of the dynamic behavior of such neural networks. 
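The kind of faulty behavior studied in the preceding award can be illustrated with a toy C sketch (this is not the fault-polynomial method described above; the weights, input, and fault location are hypothetical): a single stuck-at-zero hidden neuron in a small threshold network is enough to flip the network's output for some inputs.

/* Illustrative toy only: effect of one stuck-at-zero hidden neuron on
 * the output of a tiny two-layer threshold network. */
#include <stdio.h>

#define N_IN 3
#define N_HID 3

static double step(double x) { return x > 0.0 ? 1.0 : 0.0; }

double net_output(const double in[N_IN], int stuck_at_zero /* -1 = fault-free */)
{
    /* hypothetical fixed weights */
    static const double w_hid[N_HID][N_IN] = {
        { 0.5, -0.2,  0.8}, {-0.3,  0.9,  0.1}, { 0.7,  0.4, -0.6} };
    static const double w_out[N_HID] = { 1.0, -0.5, -0.3 };
    double out = 0.0;

    for (int h = 0; h < N_HID; h++) {
        double a = 0.0;
        for (int i = 0; i < N_IN; i++)
            a += w_hid[h][i] * in[i];
        if (h == stuck_at_zero)
            a = 0.0;                       /* injected stuck-at-zero fault */
        out += w_out[h] * step(a);
    }
    return step(out);
}

int main(void)
{
    double in[N_IN] = {1.0, 0.0, 1.0};
    printf("fault-free output:    %.0f\n", net_output(in, -1));
    printf("neuron 0 stuck-at-0:  %.0f\n", net_output(in, 0));
    return 0;
}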
Other
University of California - Santa Cruz; Kevin Karplus; 1991 Advanced Research in VLSI Conference, March 26-28, 1991, University of California, Santa Cruz; (MIP-9014762); $2,400; 12 months (Joint support with the Design, Tools and Test Program, Systems Prototyping and Fabrication Program, the Circuits and Signal Processing Program, and the Experimental Systems Program - Total Grant $12,000). This award supports one conference in a series that has alternated between the east and west coasts for more than a decade. The conference is intended to publicize innovative, interdisciplinary research with a strong VLSI component. This year's focus is on systems integration. The National Science Foundation support allows reduced registration rates for students, university employees, and program committee members, and covers travel and registration expenses for the invited speakers. Pennsylvania State University; Mary Irwin; Group Travel for U.S. Participants in the 10th International Symposium on Computer Arithmetic; June 26-28, 1991, Grenoble, France; (MIP-9110427); $8,100; 6 months. The 10th International Symposium on Computer Arithmetic was held June 26 through June 28, 1991 in Grenoble, France. The Symposium Chairman is Dr. Jean-Michel Muller of LIP-IMAG. This international symposium is the tenth in a series and the third held outside the U.S. Travel funds provided allow U.S. researchers to participate in the symposium, to present research results, and to exchange research ideas with their European colleagues.
Circuits and Signal Processing
Dr. John H. Cozzens, Program Director (202)357-7853 jcozzens@nsf.gov
The Program
The Circuits and Signal Processing (CSP) program supports basic research in the areas of digital signal processing, analog signal processing, and supporting hardware and software systems. This research is typically driven by important applications and emerging technology. CSP is a highly active research area that covers a wide spectrum ranging from theory to VLSI implementations and applications. Often driven by advances in technology, it also serves as a catalyst for new technological innovations. A classification of CSP research based on signal characteristics, applications, and technology is as follows: 1. One-Dimensional Digital Signal Processing (1-D DSP) research is concerned with the representation of 1-D signals (for example, audio and EKG signals) in digital form, and the processing of such signals using digital technology. 2. Multi-Dimensional Digital Signal Processing (M-D DSP) research is directed towards signals which are inherently functions of two or more independent variables, including images and video signals. 3. VLSI Signal Processing research deals with algorithms/architectures that can be mapped with ease onto VLSI circuits. 4. Circuits research is concerned with the better understanding of non-linear and high-frequency circuits. Areas of research include: 1-D DSP: signal compression for reduced data rate with applications to personal communications, signal enhancement and recovery from partial information, signal representation and models, optimization and approximation, algorithm development, and alternate architectures for implementation. M-D DSP: algorithms for M-D signal processing and applications to specific signal categories (image, for example) and multimedia communications. Digital Representation: efficient and effective A/D conversion, compression, decomposition, and modeling of data. 
VLSI SP: analog, digital, mixed analog/digital VLSI for signal processing, and CAD for signal processing architectures. Circuits: emphasis is placed on analysis and design of circuits including neural networks for signal processing applications. In 1-D DSP new approaches to non-linear signal processing, new orthogonal decompositions and representations, and new filter structures and designs are encouraged. M-D DSP is a growing area driven by problems arising in important applications such as HDTV. In VLSI signal processing, analog, mixed analog/digital VLSI, and CAD for analog design are areas of importance, and are all presently supported in part by the CSP program. Since the CSP Program is profoundly affected by technological change, it is important to (a) constantly assess the potential impact of new topic areas, (b) delineate emerging research areas, and (c) target (potential) program initiatives, both in the short term and the long term.
Initiatives and Opportunities
There are many opportunities and special programs that are available through the CSP Program. Several which are most frequently inquired about are:
HIGH PERFORMANCE COMPUTING and COMMUNICATIONS (HPCC)
The Circuits and Signal Processing Program will focus on a wide range of signal processing problems needing high performance computing, and will continue to serve as an application driver for high-performance computing research. A component of the HPCC program - Grand Challenge Applications Groups - will provide funding for multidisciplinary groups of scientists, engineers, and mathematicians to apply emerging high performance computing and communications systems to advance the solution of diverse science and engineering problems. The emphasis will be on support for groups requiring HPCC capabilities, where such focused, cross-disciplinary support is generally unavailable or difficult to obtain. Projects will include design of models, algorithms and software to fully realize the potential of parallel, distributed and heterogeneous computing systems on Grand Challenge Application problems.
BIOTECHNOLOGY
In addition to the biotech-related research areas currently supported by CISE, new opportunities for the CSP Program will include: bio-based or bio-motivated sensing, sensor development, sensor integration; new methods for computer imaging, 3-D recognition, interactive visualization for medical diagnosis; new models and architectures for building "intelligence" (perception, learning, language, speech) in human-computer interfaces. Representative contemporary and novel applications to various areas of 1-D and M-D DSP include:
Array Processing: echo imaging problems that arise in diagnostic medicine; remote sensing; nondestructive testing; (adaptive) noise suppression or cancellation with applications to fetal heartbeat monitors and hearing aid applications.
(Adaptive) Filters: modeling biological phenomena, e.g., whitening filters which model neural noise; ARMA modeling of nonstationary (real) processes, e.g., speech processing.
Image Processing: medical image compression (VQ, subband coding, for archival purposes - X-ray to MRI); restoration of blurred or noise-corrupted images; tomographic imaging of time-varying data, e.g., aperiodic time variations in data resulting from diseased human hearts; 3-D image reconstruction with limited tomographic data.
Spectral Analysis: detection of malignant tumors by multivariate analysis of proton magnetic resonance spectra of (blood) serum. 
MANUFACTURING and MATERIALS
Potential application areas include:
Imaging: remote sensing - radar, acoustic, IR, UV
Microstructure: x-ray crystallography, electron and confocal microscopy; electronic photography
Image Understanding: robotics - inspection and quality control; security; IVHS; agriculture - environmental monitoring
Array Processing: source localization; active noise control
Other: manufacturing processes (digital process control); automotive electronics (IVHS)
Awards
Circuits
University of California - Berkeley; Leon Chua; Nonlinear Networks and Systems; (MIP-8912639 A01); $90,000; 12 months. The thrust of this project is on fundamental and applied research problems in Nonlinear Circuits and Systems. In particular, the research is concentrated in two broad areas. Under the heading, "Nonlinear Modeling and Simulation," applications of Volterra series representations in model validation, adaptive filtering, and in characterizing microwave devices and communications channels are being investigated. Canonical piecewise-linear analysis and higher order circuit elements are employed in order to reduce the computational effort in computer-aided circuit analysis, and to model accurately frequency-dependent effects in high speed integrated circuits. Under the heading, "Nonlinear Dynamical Systems," research is concentrated on the development of global properties of these systems. Analytical tools, software, and instrumentation with the objective of making the investigation of nonlinear systems accessible to the wider electrical engineering community are being developed. Through studies of bifurcation phenomena, non-functional modes and failure boundaries in synchronization and phase-locking systems can be identified. Pennsylvania State University; Nirmal K. Bose; Analysis and Training of Neural Networks Using Voronoi Diagrams and Graph Decomposition; (MIP-9114997); $14,866; 12 months (Joint support with the Robotics and Machine Intelligence Program - Total Grant $29,731). This project focuses on a novel solution to the multilayer neural network training problem - determining the connection weights of the neurons - using a very powerful construct from combinatorial geometry called a Voronoi diagram. The equally important problem of how to partition a neural network into smaller subnetworks, which can then be used either individually or together to approximate the function of the original network, is being addressed.
Analog Signal Processing
MEI Research; Juzer S. Mogri; SBIR: A 4 GHZ Multi-chip System for High Resolution Analog-to-digital Converters; (ISI-9060826); 6 months (Joint support with the Small Business Innovation Research Program - Total Grant $50,000). Gigasample analog-to-digital converters (ADCs) are needed in many fields, from advanced information processing systems to scientific instrumentation used in high-energy physics experiments. As higher speed ADCs become available, signal capture functions which were previously possible only in the analog domain become achievable in the digital domain, where sophisticated (and far more accurate) signal processing functions may be applied. Gallium Arsenide (GaAs) technology promises a tenfold increase in conversion rate in addition to providing overall improvement in areas such as power dissipation, radiation resistance, operability at higher temperature, etc., as compared to silicon technology. However, GaAs ADC development has been plagued with problems for which no near term solution is foreseen. 
The thrust of this research is in the development of a technology driven architecture that draws upon the strengths of both GaAs and Si technologies. In particular, the research makes use of the significant difference in signal bandwidth (rise time) and realizable current gain (or current drive) between the technologies to realize low-power high resolution ADCs. The work involves feasibility research, modeling and prototype building. Stanford University; Robert Gray; Analog to Digital Conversion and Data Compression; (MIP-9014335); $62,589; 12 months. The goal of this research program is to provide exact analyses of oversampled A/D converters, especially a variety of architectures for implementing a method known as sigma-delta modulation. Through the use of old and new techniques from nonlinear and linear systems theory, random processes, and ergodic theory, exact descriptions of the behavior of quantization noise for single-loop and multi-stage (cascaded) sigma-delta modulator architectures for a variety of input signals (both with and without dithering) have been obtained. The thrust of the present project is to extend these results to other architectures, including a sigma-delta loop with leaky integrators and non-unity gain, and modulo and parallel sigma-delta modulators. Other related topics to be examined include stability issues in feedback quantization and the incorporation of subsequent signal processing (such as compression and linear filtering) into the A/D converter. University of California - Berkeley; Paul R. Gray and Robert Meyer; Research in High-Frequency Analog Electronic Circuits for Communication Systems; (MIP-9101525); $100,000; 12 months. This research is in the area of monolithic high-frequency communication circuits and is directed at exploring new ways to use silicon integrated-circuit technology to improve the performance and reduce the cost of communication systems of various kinds, with primary emphasis on data communications. Particular emphasis is placed on circuit techniques applicable to clock recovery in high-speed communications systems, optimization of the sensitivity in fiber-optic communication receivers, and the application of BiCMOS technology to the particular needs of high-speed communications applications. Continuing efforts in this area are likely to produce results which will have significant impact on integrated-circuit design for communication systems. University of California - Los Angeles; Kenneth W. Martin; Analog CMOS Realizations of IIR Adaptive Filters; (MIP-8913164 A02); $130,827; 12 months. Continuous-time analog CMOS circuits are being developed to realize recently developed infinite-impulse-response (IIR) adaptive filters. These structures are all based on notch biquads that have good sensitivity properties and are easy to adapt while guaranteeing that the filter is stable. Previous research has concentrated on the theoretical analysis of these structures, where global convergence was proved and it was shown that the structures have very small biases. This research focuses on the development of the adaptive filters for practical applications. These include FM demodulators, frequency-assisted phase-locked loops and Costas loops, carrier- and clock-extraction circuits, efficient channel banks for spectral analysis, and periodic noise cancellation circuits, to name only a few. 
In many ways, these new circuits represent a generalization of the phase-locked loop to the multi-sinusoidal case. As such, they will be the first ever analog circuits capable of isolating individual sinusoids having unknown frequencies from a multisinusoidal input signal in real-time. University of California - Los Angeles; Gabor Temes; Digitally Corrected Oversampling Data Converters; (MIP-9196199 A01); $74,501; 12 months (Joint support with the Experimental Systems Program - Total Grant $102,415). This project is to develop faster and more accurate analog-to-digital and digital-to-analog data converters. The converters considered are of the interpolating type, which effectively trade conversion speed for accuracy. That is, the use of a multibit (rather than a single-bit) front end in an interpolating converter can lead to a higher resolution for a given speed or a higher conversion speed for a fixed resolution. However, a multibit front end requires an analog component accuracy which cannot be achieved without complicated and expensive trimming and/or randomizing techniques. A novel digital self-calibration and correction technique, which achieves the requisite accuracy of the multibit system without any trimming or randomizing, using only a simple additional digital stage, is being developed in this research. The work involves an architectural study of the novel system, development of the needed circuit blocks, and the design, fabrication and testing of several fully integrated converters based on the novel principle. This new approach should lead to faster and/or more accurate converters than any of the currently available ones. Such converters will lead to further developments in important applications such as digital radio and television, digital audio, ISDN, radar, etc. University of Illinois; Bang-Sup Song; Background Trimming of Data Converters and Filters; (MIP-9013166 A01); $68,511; 12 months. This research involves exploring a background electronic trimming principle to compensate for errors arising from component mismatch and process variations in data converters and filters, and to prove its effectiveness through theoretical simulations and experimental prototyping. The basic idea behind this work is to replace the component trimming procedure usually done in the factory prior to chip packaging by a hidden electronic calibration running in the background. The goal is to improve the performance of inherently fast data conversion and filtering systems by maintaining simple system architectures and moving the operation of sophisticated trimming circuits to the background. As a result, systems of this kind can be faster than systems electronically calibrated in the foreground. This basic research will help to develop a family of high-performance monolithic analog/digital interface circuits with premium speeds not readily available in monolithic form. Top-Vu Technology, Inc.; Tho T. Vu; GaAs Readout and Preprocessing Electronics for Linear One-Dimensional Infrared Focal Plane Array Sensors; (ISI-9022291); 24 months (Joint support with the Small Business Innovation Research Program - Total Grant $123,830). This is a Phase II SBIR project to develop the GaAs readout and preprocessing electronics chip for linear one-dimensional (1-D) and two-dimensional (2-D) infrared focal plane arrays (IR-FPA). Top-Vu Technology will design, layout, fabricate and test a prototype chip using commercial GaAs MESFET technology. 
The expected results will include an Nx1 array readout and multiplexing GaAs chip. The preprocessing electronics will be designed, fabricated and tested. Ohio State University; Mohammed El-Naggar; PYI: VLSI Design of Electronic Circuits; (MIP-8896244 A05); $37,500. The thrust of this project is in the area of continuous-time analog MOS VLSI circuit implementation. It involves design, simulation, fabrication and testing. Simple analog circuits that will form cells of an analog VLSI cell library are being designed. The application of these circuits to mixed analog/digital telecommunication systems and to the implementation of neural networks is being explored.
Array Processing
University of Michigan; Gregory H. Wakefield; PYI: Constrained Spectral Estimation; (MIP-8657884 A04); $62,500; 12 months. Constrained spectral estimation is being considered with respect to Angle-of-Arrival estimation for array signal processing, time-varying spectral analysis for speech enhancement, and the implications that constraints impose on the class of spectral solutions. Methods of spatially structured covariance estimation are being developed for non-uniform arrays in colored and partially coherent noise and signal environments. Dependencies among adjacent covariance matrices are being modelled, and structured estimates proposed for speech enhancement. Analysis of nonlinear operator theory and alternative measures of spectral deviation are being introduced. These efforts contribute to the understanding of the fundamental limits of spectral estimation. University of Minnesota; Kevin M. Buckley; PYI: Digital Signal Processing for Hearing Aids and Source Localization; (MIP-9057071 A01); $57,000; 12 months. In the general area of source localization estimation (SLE), this project: 1. continues SLE algorithm performance evaluation(s), 2. investigates algorithm improvements based on considerations of analytical performance expressions derived during the course of this project, and 3. addresses the combined issue of robust estimation and high resolution in the presence of modeling errors. In the area of acoustical/biomedical digital signal processing, specifically hearing aids, efforts are being directed towards the specification of algorithm constraints and appropriate cost functions, incorporating robustness, and real-time testing both in the laboratory and in the field. Adaptive methods are being examined for use in active noise cancellation of undesired nonstationary noise. University of Minnesota; Mostafa Kaveh; Wideband Sensor Array Signal Processing; (MIP-8813204 A02); $87,476; 12 months. This research is focused in the area of wideband sensor array signal processing. The primary thrust of this phase is on the further development of the Coherent Signal-subspace Method, with emphasis on real-time adaptive structures for wideband array focusing, integration of processors, and the implementation of the adaptive algorithms on an ultrasonic array testbed. The intention is to fully explore the practical utility of the estimators that have been developed within the framework of hardware implementation. Also, theoretical investigation of the statistical properties of the proposed detectors and estimators is being pursued. State University of New York - Buffalo; Mehrdad Soumekh; Synthetic Aperture Echo Imaging; (MIP-9004996 A01); $41,443; 12 months. 
The objectives of this investigation are to develop inversion principles, data acquisition strategies, and information processing algorithms for an echo imaging system that utilizes the motion of a single element transducer (SET) to synthesize the effect of a phased array with a size equal to the path length that the SET traverses. A mobile SET, with a dimension much smaller than a phased array's size, brings flexibility in data acquisition and processing for echo imaging systems. Synthetic aperture echo imaging also opens ways for imaging an object that cannot be studied with phased arrays due to constraints imposed by the object's anatomy. Unlike the dynamic focusing inversion used in conjunction with stationary phased arrays, the proposed inversion produces an image scene by integrating the recorded echoed signals at the available coordinates of the mobile SET, which possesses a wide-beam radiation pattern. The investigation includes modifying an inversion developed for a translational or rotational SET for practical echo imaging problems that arise in diagnostic medicine, remote sensing, and non-destructive testing. The required sampling and processing issues associated with the recorded data, as well as the resolution anticipated in a given synthetic aperture echo imaging problem, are also being studied. The results will be used to develop efficient data collection strategies for two-dimensional and three-dimensional echo imaging systems. Sub-band processing of the available spatial frequency data to reduce the computational time for image reconstruction is included.

Brigham Young University; A. Lee Swindlehurst; RIA: Subspace Fitting Algorithms for State Space System Identification; (MIP-9110112); $60,000; 24 months. The objective of this project is to explore connections between the well-understood problem of emitter localization using an antenna array and the less studied problem of state space system identification. The key to this connection is the concept of subspace fitting, a paradigm for algorithms which find structured models that "best" fit the subspace of observed data. The specific goals of this project are to: 1. derive optimal weightings of the observational subspace to minimize the variance of the identified system parameters due to an impersistently exciting input or marginal observability; 2. show how subspace fitting algorithms can efficiently incorporate prior information about a system's structure to further minimize estimate variance; 3. develop analytical expressions for the variance of the parameter estimates and for the corresponding Cramer-Rao lower bound on the variance, and compare algorithm performance with that obtained by prediction-error methods; and 4. validate algorithm performance using space structure data obtained from General Electric, Inc.

University of Wisconsin; Barry Van Veen; PYI: Detection and Estimation in Low Dimensional Subspaces; (MIP-8958559 A01); $26,830; 12 months. This work concerns the development and evaluation of efficient, high performance signal processing algorithms for signal estimation and detection. Algorithms for processing data collected at arrays of sensors and for analysis of time series are of particular interest. One technique for reducing the complexity and improving the performance of signal processing algorithms is based on mapping data into subspaces prior to processing.
Mapping of data into subspaces is appropriate for almost all signal processing problems, and is especially applicable, if not mandatory, to problems in which large quantities of data must be processed. Reducing the dimension of the data leads to a reduction in computational requirements, since the computational burden of most signal processing algorithms is directly related to data dimension. The performance obtained with the original, full-dimensional data can be retained, or even enhanced, in those aspects that are affected by mapping the data into a subspace. A key issue under study is the design of linear transformations which maximize performance while minimizing subspace dimension. Processing of data mapped into subspaces is currently being explored in adaptive beamforming, adaptive filtering, spectrum estimation, and source location estimation problems, as well as in more general nonlinear signal processing algorithms. Determination of appropriate performance criteria for transformation design and tradeoffs between performance and complexity are under investigation. Statistical analysis and simulation are utilized to analyze the performance of the resulting algorithms.

Filters (both Linear and Non-linear)

Auburn University; Jitendra Tugnait; Higher Order Statistical Signal Processing And Analysis; (MIP-9101457); $36,762; 12 months. This research project is concerned with development and evaluation of algorithms for signal processing and analysis that exploit higher order statistics of signals in addition to, or in lieu of, the usual second order statistics. Whereas the second order statistics of signals are a function of only the magnitude of the underlying system transfer function, higher order statistics depend upon both the magnitude and the phase of the system transfer function. Both time series (only the system output is observed) and system identification (both input and output are observed) formulations are considered. Several aspects of the problem, including inverse and direct modeling, deconvolution, and parameter estimation, are being investigated. Emphasis in this project is on optimization of an appropriately defined fourth cumulant criterion in conjunction with, or in lieu of, the usual second order error criterion, thereby obviating the need for explicit data cumulant matching. Among the applications being investigated are: 1. time delay estimation in unknown spatially uncorrelated Gaussian noise, 2. blind deconvolution with unknown, possibly nonminimum phase, channels, and 3. system identification with noisy inputs.

University of California - Davis; Benjamin Friedlander; Adaptive Channel Equalization Based on High-Order Statistics; (MIP-9017221); $86,325; 12 months. During the past few years, there has been increasing interest in the development of processing techniques based on the high-order statistics of signals. These techniques take into account, in a systematic manner, the non-Gaussian aspects of the signals. Digital communication signals represent a particularly important class of non-Gaussian signals for which high-order processing techniques appear to be well suited. Adaptive channel equalizers, which attempt to correct the distorting effect of the channel to allow higher transmission rates, are an indispensable part of modern digital communication systems. The objective of this project is the development and analysis of high-order processing techniques, focusing on their applications to communications processing.
As a particularly promising application of these techniques, this project presents a novel approach to channel equalization, based on recent theoretical results on parameter estimation from high-order statistics developed by the principal investigator. Other applications of high-order statistics to communications signal processing, including signal analysis, signal detection, and signal design, are being investigated as well.

University of California - Davis; William Gardner; Exploitation of Modulation Properties For Blind Adaptive Signal Processing; (MIP-8812902 A02 & A03); $96,673; 12 months. This research focuses on the signal selective source location problem for communications and telemetry signals in highly corrupted noise and interference environments. Specifically, under the existing grant, new algorithms have been devised that enable the simultaneous use of data from one or more widely separated reception platforms, each with one or more closely spaced sensors. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program, who will work jointly with the principal investigator and graduate students in implementing the new algorithms in software, and in designing and running specific simulation experiments.

University of Colorado - Boulder; Delores Etter; Adaptive IIR Filtering Using a Stochastic Filter; (MIP-9106126); $41,608; 12 months. A stochastic filter consists of a bank of fixed filters with a set of corresponding probabilities. The fixed filters form a basis set of filters for the stochastic filter, and the probabilities determine the specific realization represented by the stochastic filter. This project investigates the use of a stochastic filter to adaptively model an Infinite Impulse Response (IIR) system. Guidelines for selecting the basis set of filters are being developed in order to represent an adaptive IIR filter and to meet both accuracy and convergence speed constraints.

University of Delaware; Gonzalo Arce; Micro Statistics in Signal Decomposition and the Optimal Filtering Problem; (MIP-9020667); $125,777; 24 months. Optimal filtering is a statistical signal processing problem of major engineering importance. Historically, the most frequently used signal processing tools have been linear in nature, but despite rich linear systems theories, many signal processing problems have not been satisfactorily addressed through the use of linear schemes. This research aims at the theoretical development of a large class of non-linear filters which are based on combinations of the observation vector, the sorted observation vector, or, in general, a non-linear transformation of the observation vector. Thus, non-linear filter response characteristics are achieved, while the machinery of linear systems theory remains available for their optimization and design. The optimal solution requires the statistical characterization of the set of decomposed signals (micro-statistics). The filtering problem reduces to a set of filters operating on the decomposed signals (micro-statistic filters), where the output is a weighted sum of the decomposed filtered signals. In this work, a theory is developed for robust micro-statistic filters in which linear operations are executed on the sorted observation vector. For environments with unknown or non-stationary characteristics, adaptive micro-statistic filters are developed, and issues such as convergence, lattice structures, fast algorithms, and complexity are addressed.
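For illustration only, the short sketch below shows the simplest member of this family of filters: an output formed as a weighted sum of the sorted samples in a sliding window (an L-filter). The window length, the weights, and the boundary handling here are arbitrary choices made for the example, not details of the funded project.

    import numpy as np

    def l_filter(x, weights):
        """Weighted order-statistic (L-) filter applied to a 1-D signal.

        Each output sample is a weighted sum of the *sorted* samples in a
        sliding window, i.e., linear machinery applied to order statistics.
        """
        x = np.asarray(x, dtype=float)
        n = len(weights)                     # window length, assumed odd
        half = n // 2
        xp = np.pad(x, half, mode="edge")    # simple boundary handling
        y = np.empty(len(x))
        for i in range(len(x)):
            window = np.sort(xp[i:i + n])    # sorted observation vector
            y[i] = np.dot(weights, window)   # weighted sum of order statistics
        return y

    # A median filter is the special case with all weight on the middle
    # order statistic; uniform weights reduce to a running mean.
    median_weights = np.zeros(5)
    median_weights[2] = 1.0
    noisy = np.sin(np.linspace(0, 6, 200)) + 0.1 * np.random.standard_normal(200)
    smoothed = l_filter(noisy, median_weights)

In the adaptive setting studied in the project, the weights themselves would be adjusted from the data rather than fixed in advance.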
Illinois Institute of Technology; Peter Clarkson and Geoffrey Williamson; The Development of Median and Order Statistic Adaptive Filters; (MIP-9102620); $114,254; 24 months. This project is concerned with the development and analysis of a class of robust adaptive filtering algorithms which can offer effective performance in an environment of impulsive and other non-Gaussian noise. These filters are termed Order Statistic Least Mean Square (OSLMS) adaptive filters. A special case of this class is the Median LMS (MLMS) algorithm, derived from the familiar LMS algorithm by replacing the instantaneous measurements of the gradient of the mean square error performance surface, used by the LMS filter, with the sample median of that quantity.

Perinatronics Medical Systems, Inc.; Thomas H. Frank; The Feasibility of an Adaptive Canceller for Real-Time Signal Extraction; (ISI-9022657 & A01); 24 months (Joint support with the Small Business Innovation Research Program - Total Grant $250,000). This project is an extension of Small Business Innovation Research Phase I work. In that work, the feasibility was shown of methods for characterizing electromyographic (EMG) noise by a set of lattice reflection coefficients, and for canceling EMG noise by utilizing a gradient adaptive lattice (GAL) algorithm. The thrust of the research is to extend these methods to: 1. develop a recursive least squares (LS) lattice adaptive joint process estimator; 2. characterize non-stationary maternal EMG noise from recorded fetal ECG data; 3. develop a real-time implementation of the lattice LS estimator based on AT&T's DSP-32 IC chip; and 4. extend the development of this recursive LS lattice joint process estimator to provide a purely order-update algorithm, in order to obtain long-term stability.

Carnegie-Mellon University; Virginia L. Stonick; PYI: Globally Optimal and Stable Adaptive Filtering Algorithm; (MIP-9157221); $25,000; 12 months. This research addresses the use of numerical optimization methods to develop real-time adaptive filters for estimating, identifying, or predicting time-varying and potentially nonlinear processes. The first phase of this work is devoted to the development, analysis, and simulation of an optimal adaptive infinite impulse response (IIR) filtering algorithm for telecommunications, using homotopy continuation methods to perform the necessary nonlinear optimization. This research will increase our understanding of IIR filter structures in time-varying environments, and will ultimately lead to their more widespread use.

University of Utah; V. John Mathews; Algorithms for Adaptive Nonlinear Filtering; (MIP-8922146 A01); $8,000. This work is concerned with the study of adaptive nonlinear filters and their applications. While the concept of linear filtering has had enormous impact on the development of various techniques for processing stationary and nonstationary signals, there are several applications in which the performance of linear filters is unacceptable. Channel equalization and echo cancellation in high performance communication systems, image processing, characterization of semiconductor devices, and modeling of biological phenomena are some examples of applications in which nonlinear filters have been successfully employed. Possibly because of the computational complexity associated with adaptive nonlinear filters, it is only recently that active research on such systems has begun. The work deals with the following four closely related problems in adaptive nonlinear filtering:
1. development of efficient algorithms for adaptive nonlinear filters using nonlinearities modeled with finite Volterra series expansions; 2. development of numerically stable fast recursive least-squares Volterra filters based on QR-decomposition techniques; 3. study of adaptive nonlinear filtering algorithms for systems where the nonlinearity is modeled using recursive, nonlinear difference equations, with relatively few parameters to adequately represent a large class of nonlinearities; and 4. analytical and empirical performance evaluation of the algorithms developed. These are all challenging and important problems whose solution will result in substantial advances in nonlinear signal processing capabilities. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program.

Image Processing

Stanford University; Robert Gray; Image Compression Using Vector Quantization and Decision Trees; (MIP-9016974); $93,962; 12 months. Research is conducted on variable rate tree-structured vector quantizers for image compression applications. The emphasis is on tree-structured codes designed using extensions and variations of the tree design techniques of Breiman, Friedman, Olshen and Stone, but a variety of other vector quantization structures will be considered in combination with the tree-structured codes. In particular, both finite-state and predictive vector quantizers are being considered for incorporating memory in the coding, and two-dimensional subsampling techniques are being considered as a means of increasing effective vector size, improving prediction accuracy, and providing a natural data structure. The dual use of trees for classification and compression in finite-state vector quantizers is being explored, as is the combined compression of image sequences such as video and of multimodal images of a common object or view, as in multispectral and color imaging. Experiments are being conducted with medical images, computer image data consisting of mixed video, imagery, and graphics, and satellite imagery, especially multispectral.

University of California - Berkeley; Avideh Zakhor; PYI: Signal Interpretation and Representation Using Neural Architectures; (MIP-9057466 A01); $62,500; 12 months. This project deals with signal and image representation, synthesis, and restoration, and issues related to the applicability of multiresolution decompositions to various classes of signals and signal processing algorithms. These include fractal-type signals to which traditional wavelet analysis applies, and those which are best represented by different filters at different resolutions. For the latter class of signals, an adaptive subband coding algorithm is being developed to effect the multiresolution decompositions.

University of Maryland; Nariman Farvardin; PYI: Design and Analysis of Data Compression Schemes for Nonstationary Sources; (MIP-8657311 A04); $69,250; 12 months. This research entails: 1. subjective-based, low bit rate image coding based on the newly developed three-component image model; 2. development of optimal vector-scalar quantizers and multi-stage vector quantizers, both in the absence and presence of channel noise; and 3. study of the relative performance of feed-forward adaptive vector quantization as compared to FSVQ in the absence and presence of channel noise.

Harvard University; Petros Maragos; PYI: Morphological Signal and Image Processing; (MIP-8658150 A04); $51,548; 12 months.
This research is concerned with morphological systems and their use in signal and image processing. In particular, threshold-linear systems (systems that obey a threshold superposition) and their use as analytic tools for further understanding of morphological and related nonlinear filters are being studied. Some of the applications under consideration are: 1. application of morphological filters to fractals, 2. application of morphological filtering to moving image analysis, and 3. application of morphological filtering to character recognition.

Clarkson University; Ping Wong; RIA: Wavelet Transform and Data Compression; (MIP-9108410); $59,378; 24 months. This research is investigating the applicability of the wavelet transform, in particular a recently developed decomposition procedure based on a stochastic wavelet transform and a spectral factorization theorem, in the design of efficient and high quality data compression algorithms for use with wide-sense stationary random processes. In addition, the research also focuses on developing a non-stationary theory for the wavelet transform, and applying it to efficiently encode non-stationary random processes including transient pulse processes, cyclostationary processes, and asymptotically mean stationary processes.

Columbia University; Dimitris Anastassiou; PYI: Digital Image Processing; (MIP-8451499 A09); $37,500. The thrust of this research is in digital image processing and applications. In particular, motion video processing and its application to high definition TV are being studied. Research in motion compensated de-interlacing and hierarchical data compression of video sequences is also being carried out.

Columbia University; Martin Vetterli; Wavelet, Filter Banks and Multiresolution Signal Analysis: New Techniques and Applications; (MIP-9014189 A01); $50,163; 12 months. Techniques for handling non-stationarity in signals and images have been explored in different areas: wavelet analysis in applied mathematics, multi-rate filter bank techniques in digital signal processing, and multi-resolution analysis in computer vision. Recent developments have indicated the connections between these various techniques. A goal of this work is to make a thorough exploration of the connections between these three areas and to present a unified view of existing problems, techniques, and applications. Further, design of new regular wavelets, examination of associated computational structures, and a study of the complexity of the wavelet transform are also being pursued. Finally, design of non-separable multi-dimensional wavelets, new wavelet-based sampling methods, and application to image coding are being attempted.

Rensselaer Polytechnic Institute; Howard Kaufman and John Woods; Motion Compensated Processing of Image Sequences; (MIP-9013247); $114,763; 24 months. The focus of this research is on model-based image sequence estimation and restoration and their implementation using multi-processing. The image estimation is achieved using an extended reduced-order model Kalman filter and estimates of displacement parameters. Simulated annealing type algorithms are being explored for solving the resulting estimation problem. The restoration is achieved by the incorporation of velocity constraints explicitly into the 3-D space-time model. Finally, the algorithms are implemented on massively parallel computers such as the DAP and MASPAR configured as a linear array.

University of Utah; V. John Mathews; Image Compression Using Models of Human Visual System and Subband Vector Quantization; (MIP-9016331); $50,373; 12 months. The focus of this research is on perceptually lossless or very low distortion, low-rate image coders using subband vector quantization and models of the human visual system (HVS). The use of visual models in the image coders attempts to remove the psychophysical redundancies present in the images, in addition to the statistical redundancies that most image coders attempt to remove. Two different strategies that make use of the current knowledge of the HVS are being employed in the data compression systems. The first quantizes the intensity image after processing it with a homomorphic visual model. The idea behind this approach is that the intensity image, when processed with the visual model, will look more like the image that the "eye sees," and therefore minimizing the distortion in the transformed image is appropriate. The second approach is to define a masking function for the image to be quantized. This function defines threshold values for each pixel of the image such that distortions of magnitude smaller than the threshold function will not be perceived by the eye. The approach then is to develop a data compression system such that the quantization produces errors of magnitude smaller than this threshold function. The specific problems being studied are: 1. development of a subband vector quantizer that is equipped with a multichannel, homomorphic model of the HVS; and 2. development of a subband vector quantizer that uses a perceptual masking function. A combination of the two schemes, as well as incorporation of predictive vector quantization and/or optimum pre- and post-processing filters into the subband coder, is being explored. Preliminary work on extending the ideas to color image sequence coding problems is also being carried out.

University of Washington; Eve A. Riskin; RIA: Vector Quantization for Image Compression and Processing; (MIP-9110508); $70,000; 24 months. Image compression is the process of reducing the amount of data required to store or transmit an image. It has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Areas in which image compression and image processing may be combined into a single operation, and also areas in which image compression may lead to improved or simpler image processing operations, are being investigated. Vector quantization algorithms for the compression of halftoned binary images are being developed, and images that are first compressed and then halftoned are being compared with images that are halftoned and then compressed. Vector quantizers that can simultaneously perform both compression and image processing tasks such as halftoning, inverse Fourier transforming, and contrast enhancement are being investigated. This leads to a reduction in computational complexity if the complexity of the compression step is less than that of the image processing operation. Finally, examinations are being done to see if image compression will lead to simpler or improved image processing operations such as image segmentation, target detection, and pre-screening for suspect regions in medical images.
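Several of the projects above build on vector quantization. Purely as a point of reference (the codebook size, block size, and random data below are assumptions made for the example, not details of any funded project), the following sketch shows the basic encode/decode step: each image block is mapped to the index of its nearest codeword, and the decoder reproduces that codeword.

    import numpy as np

    def vq_encode(blocks, codebook):
        """Map each block (row vector) to the index of its nearest codeword."""
        # Squared Euclidean distance between every block and every codeword.
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    def vq_decode(indices, codebook):
        """Reproduce each block by its selected codeword."""
        return codebook[indices]

    # Toy example: 4x4 image blocks flattened to length-16 vectors, coded
    # with a 16-codeword (4 bits per block) codebook chosen at random here;
    # in practice the codebook would be trained, e.g., by a Lloyd/k-means
    # style algorithm, or grown as a tree for tree-structured VQ.
    rng = np.random.default_rng(0)
    blocks = rng.random((100, 16))
    codebook = rng.random((16, 16))
    indices = vq_encode(blocks, codebook)
    reconstruction = vq_decode(indices, codebook)

A tree-structured vector quantizer replaces the full nearest-neighbor search with a sequence of binary decisions down a codeword tree, trading a small loss in distortion for a large reduction in search cost.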
Image/Signal Reconstruction

University of Southern California; Ramalingam Chellappa; Representation and Recovery of Discontinuities in Some Image Processing Problems; (MIP-9100655); $53,565; 12 months (Joint support with the Robotics and Machine Intelligence Program - Total Grant $63,565). This project deals with the general problem of recovery of discontinuities in image processing problems such as image restoration, edge detection, and surface interpolation from sparse depth data. A natural generalization of the basic Geman and Geman line process that handles arbitrary real-valued directions and magnitudes is being used, along with methods such as Besag's pseudo-likelihood method for parameter estimation.

University of Illinois; Yoram Bresler; PYI: Efficient Algorithms for Image Reconstruction; (MIP-9157377); $25,000; 12 months. This project comprises four broad areas: image reconstruction, reconstruction of time-varying distributions, sensor array processing, and visualization of multiparameter data. In the area of image reconstruction, the principal objective is to develop the theory and associated computational algorithms for superresolution image reconstruction from partial and noisy data, using statistical models. For the second area, the goal is to develop optimum signal acquisition schemes, subject to physical or economic constraints, and the associated efficient reconstruction algorithms, for imaging spatial data that is time-varying during the acquisition process. In the area of sensor array processing, several issues are being addressed, including the design of computationally efficient algorithms for the (sub)optimal solution of model fitting problems, wideband source localization, and imaging with sensor arrays. Finally, in the last area, the goal is to address the effective fusion, display, and visualization of multi-parameter spatially related data, such as is acquired in multispectral or multi-modality remote sensing and diagnostic imaging.

University of Illinois; Arun Karalamangala and Douglas Jones; Modeling of Time-Varying Signals; (MIP-9012747 A01); $75,529; 12 months. The objective of this research program is to investigate new approaches to the problem of modeling and tracking a large class of time-varying signals. The class of signals under consideration includes signals relatively concentrated in time-frequency, signals with slowly varying amplitude and frequency, and some signals with rapid variation in spectral content. Both parametric and nonparametric approaches are being developed. Parsimonious models, incorporating parameters that reflect the non-stationarity specific to the class of signals under consideration, are also being studied. These models will provide an accurate and efficient parametrization of signals in these classes. Algorithms are being developed in this project for efficient estimation of the model parameters from noisy observations of the signal. A nonparametric approach utilizing new time-frequency representations, based on determination of optimal signal-dependent kernels, is being developed to overcome several problems which have plagued current time-frequency methods.

Purdue University; Peter Doerschuk; RIA: Multidimensional Bayesian Signal Reconstruction Motivated by X-ray Crystallography; (MIP-9110919); $60,000; 24 months.
At the heart of this project is the very difficult X-ray crystallography reconstruction problem: given the magnitude of the Fourier transform of the electron density of a crystal, reconstruct the electron density and, hence, determine the locations of the atoms of the molecule to be imaged. The major theme is a new statistical approach based on Markov random fields, which promises to permit easier and more effective incorporation of prior information, such as the possible locations of the atoms and the valence structure of the bonds, and a more consistent use of statistical information. Methods developed during the course of this investigation will be useful in other contexts such as pattern recognition and nonconvex optimization.

Michigan State University; John Deller and Majid Nayeri; Applications and Performance Evaluation of Set-Membership-based Signal Processing Algorithms; (MIP-9016734); $135,914; 24 months. This research is centered on the application of complex vector-valued signal set-membership weighted recursive least squares (SM-WRLS) algorithms to problems of current interest, and on understanding the basic operational principles of SM-WRLS with regard to adaptation strategies, data selection criteria (optimal and suboptimal), and computational efficiency. Efficient computational algorithms which can ultimately be implemented using systolic processing are also being investigated. Component tasks include: 1. neural network learning algorithms; 2. speech and image coding; and 3. general developments and performance evaluation.

Washington University; Donald Snyder; Stochastic Inverse Problems for Computational Imaging; (MIP-9101991); $61,465; 12 months (Joint support with the Computational Mathematics Program - Total Grant $91,465). The goal of this research project is to develop improved image restoration algorithms for dealing with stochastic inverse problems, methods for assessing the quality of the images which are produced, and strategies for parallel implementation of the resulting algorithms. The primary focus is on quantum-limited data, which arises in such diverse applications as imaging faint objects in outer space and imaging radioactivity concentrations within human tissue.

Rensselaer Polytechnic Institute; John Woods; 2-D Spectral Distribution Parameter Estimation; (MIP-9120377); $48,990; 12 months. This project exploits recent theoretical findings by the co-PI, Francos, which lead to an improved estimation procedure for the spectral distribution function of a 2-D homogeneous random field (a problem which can also be viewed as estimating a 2-D mixed spectrum). By imposing a total order on the discrete (regular) random field and using a new 2-D Wold-like decomposition for homogeneous random fields, the original random field can be decomposed into a purely-deterministic component, modelled as a sum of 2-D harmonic components with random amplitude and phase, a generalized evanescent component which is characterized by its spectral distribution attributes, and a purely-indeterministic component which is modelled by a 2-D AR process. The purpose of this research is to develop a combined detection/estimation procedure for detecting the presence of the deterministic components - the purely-deterministic and generalized evanescent components - and for estimating the necessary parameters of the components which are deemed present in the original decomposition.
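In loose terms, and leaving out the evanescent component for brevity, a decomposition of this kind writes the field as a sum of 2-D harmonics plus a purely-indeterministic residual. The display below is only an editorial illustration of the general form; the symbols K, C_k, omega_k, nu_k, phi_k, and the AR support S are generic placeholders, not the investigators' notation:

    y(n,m) = \sum_{k=1}^{K} C_k \cos(\omega_k n + \nu_k m + \phi_k) + u(n,m),
    \qquad
    u(n,m) = -\sum_{(i,j)\in S} a_{i,j}\, u(n-i, m-j) + e(n,m),

where the first sum is the harmonic (purely-deterministic) part, u(n,m) is a 2-D AR (purely-indeterministic) field driven by white noise e(n,m), and the combined detection/estimation task is to decide how many harmonics are present and to estimate their parameters together with the AR coefficients a_{i,j}.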
Rochester Institute of Technology; Mysore Raghuveer; RUI: Bispectral Reconstruction of Multidimensional Signals; (MIP-8909701 A01); $9,750. The phenomenon of speckle is commonly seen in imaging systems which use a coherent source of radiation. Examples are ultrasound imaging, radar, sonar, and laser imaging. Speckle has the effect of limiting the resolution of the recorded image. Therefore, in order to uncover image detail, it is necessary to explore ways of reducing the effects of speckle in the digitized image. Preliminary investigation based on the commonly used multiplicative model of speckle has revealed the potential of the bispectrum in reducing speckle because of its phase retention, insensitivity to various types of noise, and invariance with respect to object translation. These properties also endow a bispectral approach with advantages relative to techniques such as direct averaging. The usefulness of any technique is best brought out by experimental verification using images of objects actually illuminated by coherent sources. Studies on speckle removal in digital images have largely concentrated on computer simulations, with very little experimental work. The objectives of this research are: 1. to develop bispectral approaches to reducing speckle in digitized images, and 2. to conduct experimental studies using laser-illuminated objects to assess the effectiveness of different approaches to digital speckle removal. Such studies will be very beneficial to all of the imaging applications mentioned above. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program.

Ohio State University; Lee C. Potter; RIA: Constraint-Based Nonlinear Signal Processing for High Resolution Signal Recovery; (MIP-9111044); $60,000; 24 months. This project investigates the use of set theoretic and optimization techniques for the recovery of signals from incomplete measurement data and partial constraints. The objectives are to provide an abstract framework for fully and robustly exploiting a priori information, to develop computationally attractive algorithms, to assess the fundamental performance limitations in the signal recovery problem, to utilize these performance bounds in the improved design of data acquisition experiments, and to enhance the accessibility of the results through application to inverse problems of practical interest.

Pennsylvania State University; Sergio D. Cabrera; Optimal Signal Recovery From Multiresolution Decompositions; (MIP-9018998); $12,000; 12 months. This research deals with the development of optimal linear reconstruction schemes and algorithms for bandlimited signals from subband and other multiresolution decompositions. It is expected that the main results will be the development and assessment of very flexible reconstruction algorithms which satisfy multiple desirable criteria in the reconstruction process, such as robustness to additive noise and coding errors, as well as efficiency and flexibility in implementation. Signal decomposition choices which perform better will be identified and evaluated. These signal recovery techniques will be improvements and generalizations of existing rigid schemes based on Quadrature Mirror Filter (QMF) banks and Perfect Reconstruction (PR) filter banks, which do not directly take into account the effects of noise or implementation issues.
Incorporation of new developments in multiresolution representations, including wavelet transforms, will be part of this research. The potential applications of these techniques are in systems for efficient and flexible storage or transmission of signals such as speech, audio, images, and image sequences (video). This last case is of major importance today in research for the following systems: high definition television, digital television, visual communication through ISDN, and space/earth transmission of visual scientific information.

Brown University; Stuart Geman, Ulf Grenander, and Donald E. McClure; A Mathematical Framework for Image Analysis; (DMS-8813699 A01); $15,000; 12 months (Joint support with the Computational Mathematics Program, the Robotics and Machine Intelligence Program, and the Statistics and Probability Program - Total Grant $165,000). This research is aimed at the development and application of a mathematical framework for image analysis. The approach is through a Bayesian paradigm. The presumption is that a properly conceived prior distribution on relevant scene attributes can be an effective basis for image processing. Applications are to texture segmentation and classification, boundary detection, computerized tomography, and global image analysis.

Brigham Young University; Brian Jeffs; RIA: Optimally Sparse Restoration of Blurred Star Field Images; (MIP-9110187); $59,980; 24 months. This project addresses the problem of removing blur from, or sharpening, astronomical star field intensity images. A new image restoration algorithm is introduced which recovers image detail using a constrained optimization theoretic approach. Ideal star images may be modeled as a few point sources in a uniform background. It is therefore argued that a direct measure of image sparseness is the appropriate optimization criterion for deconvolving the image blurring function. A criterion based on the lp quasinorm is presented and an algorithm for sparse reconstruction is described. Existing methods (e.g., CLEAN) do not fully utilize the prior information about the correct solution inherent in this sparseness assumption. It is believed that this research will lead to practical algorithms which will outperform existing methods in restoring resolution to astronomical images blurred by atmospheric turbulence, finite aperture size, motion blur, or other optical effects.

Multidimensional Signal Processing

California Institute of Technology; P. Vaidyanathan; New Techniques for Design and Implementation of One-Dimensional and Two-Dimensional Multirate Digital Filter Banks; (MIP-8919196 A01); $89,747; 12 months. This project focuses on the study of multirate digital filter bank systems for application in one-dimensional and two-dimensional signal processing. The basic philosophy behind these systems is that if a discrete-time signal is split into a number of adjacent frequency bands, then the information in each of these bands can be coded separately. This often results in improved efficiency in the storage or transmission of such signals. The improvement is a consequence of the fact that the peculiarities of each frequency band (such as the average energy, the impact on human perception, etc.) can be exploited in judging the precision required for each sub-band. In this way a signal can be compressed even if it is not band-limited, simply by exploiting the fact that some frequency bands are more important than others.
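As a rough illustration of the band-splitting idea only, the fragment below uses a two-band Haar analysis/synthesis pair, chosen purely for its simplicity; the filters actually studied in this project are far more general, and nothing in this sketch is drawn from the funded work. The signal is split into half-rate lowpass and highpass subbands and then reconstructed exactly; in a coder, the two subbands would be quantized with different precision.

    import numpy as np

    def haar_analysis(x):
        """Split x into half-rate lowpass and highpass subbands (Haar pair)."""
        x = np.asarray(x, dtype=float)
        if len(x) % 2:                    # pad to even length before decimating by 2
            x = np.append(x, x[-1])
        low = (x[0::2] + x[1::2]) / np.sqrt(2.0)    # averages: coarse band
        high = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # differences: detail band
        return low, high

    def haar_synthesis(low, high):
        """Reconstruct the signal exactly from its two subbands."""
        x = np.empty(2 * len(low))
        x[0::2] = (low + high) / np.sqrt(2.0)
        x[1::2] = (low - high) / np.sqrt(2.0)
        return x

    signal = np.random.standard_normal(64)
    lo, hi = haar_analysis(signal)
    assert np.allclose(haar_synthesis(lo, hi), signal)   # aliasing cancels exactly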
Efforts are directed towards the design and implementation of the set of filters which will perform the band-splitting and reconstruction. This is a very unconventional filter-design problem because of aliasing and other types of distortion which result from the use of non-ideal band-splitting filters and decimators. Because of these distortions, more sophisticated algebraic tools (such as paraunitariness and losslessness) are required in the design and implementation phase. In addition to its application in a number of engineering problems, the multirate filter bank structure is closely related to fundamentals of signal sampling, reconstruction, and block processing. A thorough understanding of this structure will therefore unify several aspects of interest to the signal-processing scientific community.

University of Michigan; Andrew Yagle; PYI: Fast Algorithms and Inverse Scattering; (MIP-8858082 A02); $25,000; 12 months. This research focuses on several different problems in inverse scattering and signal processing. The emphasis is on deriving new algorithms that solve these problems using much less computation than existing algorithms. The new algorithms developed will be applied to problems such as computer-aided tomography.

Polytechnic University; Unnikrishna Pillai; Nonrational Systems and their Rational Approximations; (MIP-9020501); $126,745; 24 months. The fundamental problem of obtaining the parameters of the assumed model of a physical system from measured samples of its output response to a known input is one that impacts many different and important fields of interest. Naturally, it has a long and continuing history. The purpose of this work is to break fresh ground by utilizing a new, simple deterministic theory founded squarely on well-established passive network concepts. Specifically, the approach is used to achieve two main goals: 1. stable rational minimum-phase transfer functions can be identified without a priori knowledge of either numerator or denominator degrees, and 2. stable rational minimum-phase Pade-like approximations appear to be generated automatically in the nonrational case. Detailed theoretical analysis of the basic ideas and extensive numerical simulation are being carried out.

State University of New York - Stony Brook; Petar Djuric; RIA: Systems and Signals Analysis by Predictive Densities; (MIP-9110628); $60,000; 24 months. Two problems related to the analysis of systems and signals by models are the construction of a set of competing models from observed signals and the selection of the optimal model from the hypothesized set of rival models. In this project, three topics are proposed with the common goal of studying the second problem. Two of them are frequently met in engineering practice (detection of the number of signals in multichannel time series and optimal sequential segmentation of nonstationary signals), and one is theoretical in nature (quasi-predictive densities). All of the problems will be tackled by predictive densities. The Bayesian paradigm and the estimation-validation concept are core principles in this approach. The anticipated results of this project may find applications in communications, speech and image processing, automatic control, mechanical engineering, acoustics, seismology, geology, biomedical engineering, circuit design, aeronautics, and astronautics.

Miscellaneous

Stanford University; John M. Cioffi; PYI: Digital Signal Processing for Communication; (MIP-8657266 A04); $62,500; 12 months.
This research is concerned with digital signal processing techniques for digital communication and digital storage applications. In particular, research is concentrated on combined equalization and coding methods for data transmission channels with severe intersymbol interference, such as high-speed digital subscriber loops and magnetic-disk storage channels. Also, time-varying wideband digital mobile radio channels and adaptive receivers for data transmission over those channels are investigated.

University of California - Berkeley; Edward Lee; PYI: Communications, Signal Processing Applications of Computer Software & Hardware; (MIP-8657523 A04); $62,500; 12 months. This project focuses on design synthesis, models of computation, and signal processing applications. Methods for generating highly optimized code are being developed, with the aim of matching the performance of an expert assembly language programmer. In addition, hardware/software co-design issues are addressed, so that the architecture can be customized to the application. The synchronous dataflow (SDF) model of computation, which was developed through analytical techniques, is too limited for many practical applications. Two methods for augmenting this model are pursued. The first introduces control constructs analogous to those in classical programming languages, such as do-while, for, etc. The second uses a purely data-driven style for control. A "token flow model" has been developed to extend the analytical methods of SDF to more general dataflow graphs. Necessary extensions to this token flow model are made in order to be able to systematically construct annotated schedules and annotated precedence graphs, from which optimized multiprocessor realizations can be synthesized. Non-dataflow models of computation are also pursued, particularly for applications that require highly dynamic real-time control. In order to test the efficacy of the design environment, and in order to be able to use it in the classroom, practical systems are being built using this environment. This includes sophisticated signal processing applications, image processing, and communications. The use of a new software environment, called Ptolemy, is extended from a graduate course in advanced signal processing to a graduate course in digital communications, and possibly to an undergraduate course in signal processing, if hardware resources permit.

University of California - Berkeley; Jan Rabaey; PYI: Architectures and Synthesis for Digital Signal Processing; (MIP-8958578 A02); $62,500; 12 months. Research efforts focus on the development of the HYPER high level synthesis environment. In order to obtain a complete and more functional environment, extensions and improvements are being sought that provide more accurate cost and performance predictions, transform (recursive) structures into alternate forms which achieve maximally fast performance (amongst others), and incorporate an optimization environment for background memory and input/output interfaces.

University of California - Davis; Bernard Levy; Statistical Signal Processing and Estimation for Spatial and Multidimensional Data: Multiscale Analysis of Markov Random Fields, and Efficient Algorithms; (MIP-9015271); $121,215; 36 months (Joint support with the Statistics and Probability Program - Total Grant $131,215). The focus of this research is on estimation and signal processing of multidimensional data. The work involves two main areas of research.
The first deals with the development of multiscale models and estimation methods for Markov random fields (MRFs) and their application to problems in image processing such as segmentation, edge detection, and coding. The goal of this effort is to develop multiresolution techniques for MRFs which extend the wavelet methods introduced recently for the multiscale representation of deterministic signals. The second part of the research focuses on the development of efficient multigrid and domain decomposition methods for the solution of multidimensional estimation problems, and for inverse problems such as impedance imaging. This work is part of a collaborative research effort with a group at MIT led by Professor Alan Willsky.

University of California - Santa Cruz; Kevin Karplus; 1991 Advanced Research in VLSI Conference, March 26-28, 1991, University of California, Santa Cruz; (MIP-9014762); $2,400; 12 months (Joint support with the Design, Tools and Test Program, the Microelectronic Systems Architecture Program, the Systems Prototyping and Fabrication Program, and the Experimental Systems Program - Total Grant $12,000). This award supports one conference in a series that has alternated between the east and west coasts for more than a decade. The conference is intended to publicize innovative, interdisciplinary research with a strong VLSI component. This year's focus is on systems integration. The National Science Foundation support allows reduced registration rates for students, university employees, and program committee members, and covers travel and registration expenses for the invited speakers.

University of Illinois; W. Kenneth Jenkins; Reliable VLSI Systems for Digital Signal Processing; (MIP-9100212); $66,313; 12 months. In this project, number-theoretic techniques are being considered to provide hardware modularity that facilitates high data rates, testability, reliability, and fault tolerance in VLSI design. This research program addresses questions of reliability and fault tolerance at both the integrated circuit and the higher system level. Modular designs of a convolutional back-projection (CBP) digital processor for synthetic aperture radar (SAR) image processing are being studied as vehicles to address both circuit and system level fault tolerance, and to study the interaction between circuit level error checking and system level fault tolerance mechanisms.

Massachusetts Institute of Technology; Alan Willsky; Statistical Signal Processing and Estimation for Spatial and Multidimensional Data: Multiresolution Methods, Efficient Algorithms and Geometric Reconstruction; (MIP-9015281); $40,000; 12 months (Joint support with the Statistics and Probability Program and the Engineering Systems Program - Total Grant $65,000). The overall objective of this research program is the development of new classes of algorithms for statistical signal processing and estimation of spatial and multidimensional data; the program has three major components. The first of these deals with the development of statistical models and methods for multiscale signal analysis and processing in one and several dimensions. Problems that are addressed include the development of a statistical counterpart to the emerging theory of multiresolution signal decompositions and wavelet transforms, the investigation of iterative, multigrid signal processing algorithms, and applications of these methods to topics ranging from inverse reconstruction problems to problems of signal or image segmentation.
The second research area deals with the development of efficient algorithms for processing spatial data. This topic focuses on the exploitation of the structure of noncausal models, such as those described by partial difference equations or Markov random fields, in order to develop extremely efficient and highly parallel algorithms. Specific problems being investigated include the employment of radially inward and outward recursions for multidimensional signal processing, parallel processing structures based on spatial partitioning of multidimensional signal processing, and the development of efficient algorithms for tracking motion and other temporal changes in space-time random fields. The third segment of this research project involves developing statistical methods for estimating or reconstructing geometric features in multidimensional data given uncertain measurements of various quantities, such as the support of a convex object in 2-D or 3-D, or the 2-D silhouette of 3-D objects.

Michigan Technological University; Mohamad Namazi; A New Iterative and Computationally Efficient Frame-to-Frame Motion Estimator; (MIP-8912990 A01); $8,878. Image frames are generated by scanning a scene several times a second. Frame-to-frame motion estimation in the moving part(s) of the scene is of interest. The objective of this work is the development and implementation of a new iterative and computationally efficient algorithm for estimating non-uniform frame-to-frame motion from noisy data. The algorithm is based on the generalized maximum likelihood criterion and is referred to as the GML algorithm. It is implemented using 2-D fast Fourier transform techniques, with Markovian random field models representing the motion vectors. Performance and sensitivity analyses are carried out to test the suitability of the algorithm for real-world applications. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program.

Drexel University; Nihat Bilgutay; Enhanced Detection and Imaging with Frequency Diverse Data; (MIP-8920602 A02); $22,837. The primary goal of this project is the development of signal processing algorithms that reliably detect and image targets embedded in a medium of stationary and randomly distributed scatterers. The major contribution of this work is a detailed study of the relationship between scattering structures and the phase behavior of back-scattered signals with respect to frequency and space. Analytical investigations and computer simulations are being developed for predicting the resultant signal features for noise and target echoes by integrating the deterministic and statistical relationships between the signal wavelength, scatterer size, and scatterer spatial distribution. Experimental investigations are being carried out in parallel with the analytical studies and computer simulations. Data collected from various material samples with known flaws, together with statistical characterizations of the phase and amplitude behavior over frequency and spatial variations, are used to validate theoretical predictions. In addition, design procedures are being examined for developing adaptive nonlinear detectors that utilize frequency-diverse properties of the signals. The results of this study will identify critical signal features for the development of enhanced signal imaging and detection algorithms.

George Mason University; Anna Z. Baraniecki; RIA: High-Speed Parallel Signal Processing Algorithms and Architectures; (MIP-9013277); $60,000; 24 months. This research is directed at efficient high speed signal processing algorithms and architectures, and includes theoretical study as well as conceptual design and simulation. In particular, wavelet transforms and related techniques for generalized harmonic analysis of data at multiple resolutions are considered. The relationship between time-frequency representations, such as the Wigner-Ville distribution, and the wavelet time-scale representations is explored. The time-scale representations are applied to areas such as parameter estimation, spectral estimation, and signal estimation and enhancement. This grant includes support through the Research Opportunities for Women Program.

Experimental Systems

Dr. Gerald Maguire, Program Director (202) 357-7373 gmaguire@nsf.gov

The Program

The Experimental Systems program supports research projects that involve building, evaluating, and experimenting with a computer or information-processing system. These are goal-oriented projects generally undertaken by teams of designers, builders, and users. The building of the system must itself represent a major intellectual effort, and offer advances in our understanding of information systems architecture. A system supported by the Experimental Systems program will usually include both hardware and software components.

Research on information processing systems involves interaction among diverse elements such as hardware architectures, computational models, compilers, operating systems, applications, performance evaluation tools, and user interfaces. Building and evaluating real experimental systems is the only way to understand these interactions in large systems; other techniques, such as simulation and analysis, have only limited uses in understanding the system issues in such a complex environment. Software simulators, for instance, do not provide the computing speed needed for large experiments, nor the performance incentives needed for porting large application systems for experimentation. Without real experimental systems, important areas of information systems architecture cannot advance.

A successful proposal to the Experimental Systems program should demonstrate the feasibility and utility of the project. Feasibility can be shown by describing prior proof-of-concept prototypes or simulation studies that indicate that the proposed system can be built and will meet its design goals. Utility can be shown by demonstrating that building the system will provide substantial advances in computer system architecture, or that the system is inherently useful. Details of the measurement and evaluation procedures that will demonstrate the benefits of the system in an application should be given in the proposal. The system to be built must be novel in some way, and the impact of the novel aspects of the system upon its architecture must be evaluated during the course of the research. To justify construction, the new system must be potentially superior to existing systems in the chosen application area. Ideally, building the system would provide new knowledge of systems architecture, open up new application areas, and/or contribute to our knowledge about system building techniques. An appropriate project might be a system built using a new architecture or technology, which addresses an application in a new way.
An inappropriate project would be one in which the research uses, simply as a platform, a special purpose machine whose design, fabrication, and evaluation are straightforward. The novel aspects of an experimental system may fall into several different areas: the system might feature application of a new technology, a new architecture, or new techniques for performance measurement and evaluation to a computationally stressing problem. Examples of technological innovation are massively parallel analog systems, or applications of superconductivity. Architectural innovations might include new parallel I/O structures, high-bandwidth interconnects, reconfigurable fault-tolerant subsystems, or exploration of the effects of advanced memory hierarchies on system performance. New evaluation techniques might include instrumentation for performance evaluation or debugging. These innovations might be applied to produce CAD engines, large sensor arrays, or signal processing architectures, for example.

To justify support under this program, a proposal should show that system building is necessary for answering significant and timely research questions. The research issues should be such that the best way to address them is to build the proposed system and measure its performance. Building for its own sake is discouraged; analysis and simulation should be performed in sufficient detail before a proposal is sent to the Experimental Systems program. Furthermore, off-the-shelf hardware should be employed in the building stage whenever the research goals do not require custom construction. Potential applicants are encouraged to discuss their research ideas with the program director prior to formal submission.

Initiatives and Opportunities

Due to special initiatives (such as High Performance Computing and Communications (HPCC), Materials Processing, Biotechnology, and Manufacturing-related Research) there are an increasing number of opportunities for the construction and evaluation of experimental systems. Issues which need to be addressed include: use of optoelectronics, both as subsystems and for interconnects; use of micromachinery as components; active power management for low power portable devices; increasing use of analog and mixed analog/digital subsystems; ... . In addition, there is a need for systems which can be used to help solve Grand Challenge Problems (see HPCC booklet). This will require new I/O devices, both to interact with the very large amounts of data and to access and store this data, along with new computation and communication systems. The ability to rapidly construct application-specific systems for use on a specific problem will require greater design reuse, greater sharing of design/parts/knowledge between groups, utilization of fast turnaround fabrication services, use of a large shared knowledge base of parts and manufacturing information, ... . It is anticipated that greater use will be made of high performance hardware simulators to enable users to quickly "implement" entire systems using a combination of FPGAs and existing parts. For many system studies this may be used as the only realization of the system, allowing for rapid construction and sharing of resources, and encouraging reuse of designs and subsystems.

Awards

Graphics and Solid Modelling

Cornell University; Herbert B. Voelcker; Collaborative Project: Research on Parallel Raycasting Engines and CAD/CAE/CAM Applications; (MIP-9007501 A01); $103,995; 12 months (Joint support with the Design and Computer Integrated Engineering Program - Total Grant $197,989). This is part of a joint project with Duke University to continue the successful research on custom hardware for solid modelling. Duke is extending and improving the hardware itself, while Cornell is building software for constructive solid geometry (CSG), integrating the resulting system with applications, and conducting field tests. The initial 8-processor system is being upgraded to 256 processors, to allow parallel processing of more complex objects. At the same time, the numerical precision of the hardware is being increased, and a tag system is included to allow the processors to keep track of software-specified properties of parts of objects. The software and applications are being extended not only to use these hardware changes, but also to include algorithmic improvements. Among the improvements foreseen is an enhanced ability to translate from other representations, such as NURBS (non-uniform rational B-splines) and boundary representations, into the CSG trees that are used by this system.

Cornell University; Herbert B. Voelcker; Research and Education in Mechanical Tolerancing and Dimensional Metrology; (DDM-9115435); $30,000; 12 months (Joint support with the Design and Computer Integrated Engineering Program - Total Grant $100,085). Recent national studies have identified serious deficiencies in mechanical tolerancing and metrology standards and practices, and in the teaching of these in universities, training institutes, and industry. The central problem underlying all of these deficiencies is the lack of proper scientific foundations for mechanical tolerancing. In essence, tolerancing and metrology evolved from shop-floor practice, and are taught and applied as collections of special-case techniques.

Duke University; Gershon Kedem and John L. Ellis; Parallel Raycasting Engines and CAD/CAE/CAM Applications; (MIP-9007711 A02); $245,970; 12 months. This is part of a joint project with Cornell University to continue the successful research on custom hardware for solid modelling. Duke is extending and improving the hardware itself, while Cornell is building software for constructive solid geometry (CSG), integrating the resulting system with applications, and conducting field tests. The initial 8-processor system is being upgraded to 256 processors, to allow parallel processing of more complex objects. At the same time, the numerical precision of the hardware is being increased, and a tag system is included to allow the processors to keep track of software-specified properties of parts of objects.

University of North Carolina - Chapel Hill; Henry Fuchs and John W. Poulton; Supercomputing Power for Interactive Visualization; (MIP-9000894 A02); $541,907; 12 months (Joint support with the Defense Advanced Research Projects Agency - Total Grant $1,121,141). This is a continuation amendment for the project to extend the Pixel-Planes 4 and 5 systems to achieve higher performance in computer graphics. Two systems are being built and will be evaluated: an image-parallel system and an object-parallel system. The image-parallel system will be an Intel i860 parallel computer with additional boards for rendering and frame buffering.
The object-parallel system will be a scan-line compositor chip that can be used in a tree of compositors to merge rendered images from different objects. These two parallel systems will permit real-time rendering of complex scenes with realistic shading of the type needed in medical imaging and computational molecular chemistry.

University of Washington; Lawrence Snyder, Carl Ebeling and Gaetano Borriello; Inquiry Into Chaotic Routing; (MIP-9013274 & A01); $224,837; 12 months.
A chaotic router is a non-minimal, adaptive message router that uses randomization in routing and derouting decisions. The use of randomization simplifies the router sufficiently that it is conjectured that chaotic routing can be competitive with oblivious routing, achieving equivalent or better average-case performance, greatly improved worst-case performance, and fault tolerance. Though the principles of chaotic routing apply to some degree to any topology with multiple paths between nodes, chaotic routers for binary n-cubes are presently under investigation. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program.

General Purpose Computing

University of California - Los Angeles; Gabor Temes; Digitally Corrected Oversampling Data Converters; (MIP-9196199 A01); $27,914; 12 months (Joint support with the Circuits & Signal Processing Program - Total Grant $102,415).
This project is to develop faster and more accurate analog-to-digital and digital-to-analog data converters. The converters considered are of the interpolating type, which effectively trade conversion speed for accuracy. That is, the use of a multibit (rather than a single-bit) front end in an interpolating converter can lead to a higher resolution for a given speed or a higher conversion speed for a fixed resolution. However, a multibit front end requires an analog component accuracy which cannot be achieved without complicated and expensive trimming and/or randomizing techniques. A novel digital self-calibration and correction technique, which achieves the requisite accuracy of the multibit system without any trimming or randomizing, using only a simple additional digital stage, is being developed in this research. The work involves an architectural study of the novel system, development of the circuit blocks needed, and design, fabrication, and testing of several fully integrated converters based on the novel principle. This new approach should lead to faster and/or more accurate converters than any of the currently available ones. Such converters will lead to further developments in important applications such as digital radio and television, digital audio, ISDN, radar, etc.

University of Illinois; David Kuck, P. Michael Farmwald and Alexander V. Veidenbaum; Architectural Studies and Simulation of Large-Scale Multiprocessor Systems; (MIP-8920891 A02); $445,279; 12 months.
This project includes two research efforts related to the Cedar project: enlargement of the Cedar shared memory and architectural studies for new large-scale multiprocessors. The memory enlargement allows realistic benchmarks to be run on the machine, which facilitates the architectural studies. The studies to be carried out include investigations of memory hierarchy organization in multiprocessors, cache design, virtual memory, and new techniques for processor synchronization and instruction execution.
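Purely as an illustration of the kind of trace-driven study such architectural work typically involves (this sketch is not drawn from the Cedar project; the cache parameters and the synthetic address trace are hypothetical), a minimal direct-mapped cache model might look like the following:

    # Minimal sketch (hypothetical parameters): a trace-driven model of a
    # direct-mapped cache, the kind of tool used in memory-hierarchy studies.
    def simulate_direct_mapped_cache(addresses, num_lines=256, line_bytes=32):
        """Return (hits, misses) for a sequence of byte addresses."""
        tags = [None] * num_lines
        hits = misses = 0
        for addr in addresses:
            block = addr // line_bytes      # memory block containing the address
            index = block % num_lines       # cache line the block maps to
            tag = block // num_lines        # high-order bits identifying the block
            if tags[index] == tag:
                hits += 1
            else:
                misses += 1
                tags[index] = tag           # fill the line on a miss
        return hits, misses

    # Example: a synthetic strided address trace
    print(simulate_direct_mapped_cache([i * 64 for i in range(10000)]))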
These topics are being investigated analytically, by simulation, and experimentally on the enlarged Cedar.

Massachusetts Institute of Technology; Anant Agarwal; Automatic Management of Locality in a Scalable Cache-Coherent Multiprocessor: The MIT Alewife Machine; (MIP-9012773); $321,819; 12 months (Joint support with the Computer Systems Architecture Program - Total Grant $371,819).
This project is building a new type of scalable shared-memory multiprocessor, in which uniform access to the memory is supported not by hardware, but primarily by the compiler and system software. Using a slightly modified version of the SPARC chip and the Caltech routers, the principal investigator is building a mesh-connected multiprocessor with a processor-memory pair at each node of the network. Of course, such a machine does not actually have uniform access to all memory nodes by all processors, but compiler and caching technology will be used to make the machine behave as though it did. Simulations show that by placing shared variables close within the network to the tasks that use them, and by using a cache at every network node, all references can be made fast.

Massachusetts Institute of Technology; Anant Agarwal; PYI: Automatic Locality Management in Scalable Multiprocessors; (MIP-9157393); $25,000; 12 months.
Parallel computers can be made both scalable and easily programmable through architectures that exploit and automatically manage communication locality. The goal of this research is to discover and to evaluate techniques for automatic locality management in scalable multiprocessors. As the vehicle for this research, an experimental parallel machine called the Alewife is being implemented. Alewife employs techniques for: 1. communication latency minimization, using scalable coherent caches and software partitioning and placement of programs, and 2. communication latency tolerance, using a new rapid-context-switching processor architecture. Alewife implements a new protocol called "limitless directories" for scalable cache coherence. This scheme uses a combination of hardware and software techniques to realize the performance of a full-map directory with the memory overhead of a limited directory. A rapid-context-switching processor called Sparcle is also being designed. Sparcle can switch in about 10 cycles to another thread when it suffers a cache miss that requires service over the interconnection network.

New York University; Allan Gottlieb; Ultra III: Implementing a Scalable Shared-Memory Multiprocessor; (MIP-8915488 A01); $300,000; 12 months (Joint support with the Computer Systems Architecture Program - Total Grant $350,000).
This is a project to build and evaluate a 16-processor multiprocessor using combining switches in the processor-memory interconnection network. This new processor follows upon the design of the earlier Ultra-II processor, with the bus being replaced by a combining network.

University of Texas; Robert S. Boyer; Mechanized Code Proofs Based on a Formal Microprocessor Specification; (MIP-9017499); $34,904; 12 months (Joint support with the Microelectronic Systems Architecture Program - Total Grant $69,809).
The purpose is to build an experimental software system to test the feasibility of mathematically formalizing physically realizable microprocessor architectures and to test the feasibility of mechanically checking the correctness of system software written in microprocessor machine language for such microprocessors.
The experiment will involve formalizing, in a mechanized logic, the user instruction set of several commonly used microprocessors, and will also involve formalizing the semantics of some system traps. The work will explore new areas in formalization, including cache consistency, memory protection, and interrupts. The machine architecture issues of proving high-level language compilers correct will also be explored.

University of Washington; Theodore H. Kehl; Self-Timed Logic in Multiprocessors; (MIP-9101464); $183,457; 12 months.
This project involves building and measuring two self-timed components inserted into an existing shared-memory multiprocessor computer system. The two components are a multilayered backplane with self-timed arbitration logic and a self-timed memory module. The goals are to demonstrate the ability to increase the number of processors while also doubling memory performance for this system. This project will test the viability of self-tuning systems (the system is self-tuning in that the operating margins are adjusted based on the actual components used in the system).

Application Specific Computing

International Computer Science Institute; Nelson Morgan; Application of Signal Processing CAD to the Digital Realization of Artificial Neural Networks; (MIP-8922354 A02); $156,704; 12 months.
This research continues the project to extend the Lager CAD system with cells and tools for neural network design. The extended system is being used to design several systems of increasing sizes, the largest being a machine to recognize connected speech after being trained for each speaker. The resulting machine is expected to run much faster in the training phase than existing machines.

Stanford University; Michael J. Flynn; Sub-nanosecond Arithmetic; (MIP-8822961 A02); $521,324; 12 months.
This research focuses on the interaction between algorithms, fabrication technology, packaging, and CAD (computer-aided design) tools in a very fast arithmetic processor. The investigators are building a multi-function arithmetic processor that can do floating point arithmetic as well as higher-level functions (trig, log, etc.). The processor is constructed in BiCMOS (bipolar complementary metal oxide semiconductor), and is packaged on an active silicon substrate that contains etched interconnection lines and active devices for configuring the lines. New CAD tools are needed to allow estimation and matching of delays in the logic blocks and the interconnection.

University of Southern California; Dan Moldovan; Research and Development of SNAP: Semantic Network Array Processor; (MIP-9009109 A01); $389,632; 12 months.
This is part of a joint project with Carnegie Mellon University to build a massively parallel processor and use it for natural language translation. The system which is being built is an array of custom processors that are specialized for marker passing in semantic networks. A specific goal of this project is to implement a real-time speech-to-speech dialogue translation system with about 500 words in its vocabulary. This system will necessarily include natural language understanding and generation components. The culmination of this project will be an array of 16,384 semantic network nodes with software implementing the translation system.

Massachusetts Institute of Technology; Tomaso Poggio; Single Chip Supercomputer; (ASC-9109509); $20,000; 24 months (Joint support with the New Technologies Program - Total Grant $38,000).
This research focuses on building vision systems with supercomputer capability into fast, small, low-power, analog integrated circuits. The research utilizes today's general-purpose supercomputers to develop the appropriate algorithms for designing tomorrow's application-specific single-chip supercomputers (analog vision chips). The Connection Machine will be used for algorithm simulation and device design of these self-calibrating, adaptive vision chips.

Massachusetts Institute of Technology; Gerald J. Sussman and Harold Abelson; The Supercomputer Toolkit: Towards a General Theory of Special Computing; (MIP-9001651 A01); $392,655; 12 months.
This research focuses on building cost-effective specialized computers for scientific computation. A set of hardware and software is being developed to allow rapid construction of very specialized computers: computers that are capable of running exactly one program. The computers are built from many data path parts, interconnected to reflect the control flow of the program to be executed. Several of these computers are under construction for different problems, including the n-body problem of predicting planetary motions, some electronic simulation problems, and large-scale particle simulations. Comparison with other computational approaches to these problems will allow an accurate evaluation of this technique.

Massachusetts Institute of Technology; John R. Wyatt; Smart Vision Sensors: Analog VLSI Systems for Integrated Image Acquisition and Early Vision Processing; (MIP-8814612 A03); (Joint support with the Defense Advanced Research Projects Agency - Total Grant $250,000).
This supplement supports the development of analog integrated circuits for low-level machine vision tasks, such as edge detection and the determination of optical flow. This is an interdisciplinary effort that includes research on fabrication processes, devices, circuits, and system integration of analog and digital components. Work under this supplement will support the overall goal of an integrated real-time machine vision system for robotic navigation.

University of North Carolina - Chapel Hill; Raj K. Singh; BioSCAN: A VLSI-Based System for Biosequence Analysis; (MIP-9024585); $150,100; 12 months (Joint support with the Systems Prototyping and Fabrication Program - Total Grant $250,100).
The goal of this project is to construct an attached processor using application-specific integrated circuits. This processor will perform high-speed partial pattern matching of biological sequence data (such as DNA or protein sequences) against entries in a database. This processor will serve as a filter to provide information on partial matches (of segments) at a very high speed. This partial match information is used by the host processor to determine which sequences merit further detailed comparison (i.e., this prefiltering greatly limits the number of sequences which must be subsequently compared when considering insertions and deletions of subsequences). The subsequent full match will be carried out in the host processor.

Carnegie-Mellon University; L. Richard Carley and Takeo Kanade; A Three Dimensional Imaging System Integrating Parallel Analog Signal Processing and IC Sensors; (MIP-8915969 A02, A03 & A04); $181,247; 12 months (Joint support with the Defense Advanced Research Projects Agency - Total Grant $356,247).
The topic of this research is the design and use of a smart sensor system for light-stripe range finding.
A plane of light is swept over a three-dimensional object that is imaged on a two-dimensional array of pixels. (The array of pixels and the plane of light are parallel planes.) At the time that a point on the object is illuminated, the corresponding pixel will receive its maximum light intensity. By recording the times at which pixels receive their maximum light intensities, the three-dimensional structure of the object can be determined. The sensor uses photodiodes integrated on a chip with analog circuitry at each pixel that can determine the time of maximum illumination. This circuitry is augmented with inter-pixel signal processing circuitry to increase the accuracy of the sensor, and with A/D converters and multiplexors to allow communication with a computer. A 28x32 sensor array has been fabricated and is currently being calibrated.

Carnegie-Mellon University; Jon A. Webb; Studying Efficiency and Generality in Parallel Computing Using a Machine-Independent Programming Language; (MIP-8920420 A01); $124,341; 12 months.
This research continues the project to extend, implement, and evaluate a special-purpose programming language for computer vision. The language, called Apply, includes parallel constructs for local vision operators that are compiled differently for different architectures. The extension being developed includes constructs for global control through divide-and-conquer, and is intended to allow parallel programs to be written that can be ported between dissimilar architectures. A compiler for extended Apply is available for each of several different parallel architectures, and compiled code is being compared to handwritten code on each architecture. In addition, a widely used package of vision routines is being rewritten in the extended language and distributed for evaluation within several applications groups.

Carnegie-Mellon University; Masaru Tomita; Research and Development of SNAP: Semantic Network Array Processor; (MIP-9009111 A01); $59,964; 12 months.
This is part of a joint project with the University of Southern California to build a massively parallel processor and use it for natural language translation. The system which is being built is an array of custom processors that are specialized for marker passing in semantic networks. A specific goal of this project is to implement a real-time speech-to-speech dialogue translation system with about 500 words in its vocabulary. This system will necessarily include natural language understanding and generation components. The culmination of this project will be an array of 16,384 semantic network nodes with software implementing the translation system.

Pennsylvania State University; Mary Irwin and Robert M. Owens; The Arithmetic Cube System Prototype; (MIP-8902636 A02); $254,412; 12 months.
The focus of this research is on building a prototype of the Arithmetic Cube, which is a high-speed programmable VLSI processor for solving linear digital signal processing problems. The structure of the cube is based on algorithms for the discrete Fourier transform of small sets of points, which require small numbers of multipliers.

Brown University; Daniel Lopresti; Investigations in Programmable Systolic Architectures; (MIP-9020570); $90,441; 24 months (Joint support with the Microelectronic Systems Architecture Program - Total Grant $180,882).
This is a project to continue work on the B-SYS system for biological sequence comparison.
This is a linear array of chips, each chip containing 47 small processors that can do fixed-point arithmetic and the character comparisons needed for sequence comparison. Previous work has resulted in a 10-chip prototype. During this project, the principal investigator is expanding the prototype, developing software to aid in programming and debugging the array, and studying testability and fault-tolerance issues. The goal is to produce an inexpensive coprocessor with supercomputer performance on this problem.

Other

University of California - Santa Cruz; Kevin Karplus; 1991 Advanced Research in VLSI Conference, March 26-28, 1991, University of California, Santa Cruz; (MIP-9014762); $2,400; 12 months (Joint support with the Design, Tools and Test Program, Microelectronic Systems Architecture Program, the Circuits and Signal Processing Program, and the Systems Prototyping and Fabrication Program - Total Grant $12,000).
This award supports one conference in a series that has alternated between the east and west coasts for more than a decade. The conference is intended to publicize innovative, interdisciplinary research with a strong VLSI component. This year's focus is on systems integration. The National Science Foundation support allows reduced registration rates for students, university employees, and program committee members, and travel and registration expenses for the invited speakers.

University of Utah; Lee A. Hollaar; Implementation and Evaluation of a Parallel Text Searcher for Very Large Databases; (MIP-9023174); $449,902; 12 months.
This project concerns the application of the Utah Retrieval System Architecture to very large databases of full-text documents. This effort involves the development of a medium-scale (4 to 10 GBytes) parallel backend search server using augmented RISC processors as the searching engines. Data will be gathered and analyzed to determine if the existence of a high-speed search server changes the complexity and arrival rate of queries by real users (law students). In addition, the researchers seek to determine a suitable partitioning of functionality such that remote users can be supported by such a searching engine over medium-speed networks (such as ISDN). The researchers will also examine how the system can be reconfigured to deal with disk and searcher failures.

Defense Advanced Research Projects Agency; John C. Toole; NSF/DARPA Agreement for Use of DARPA VLSI Implementation; (MIP-9015601 A01); $20,560; 12 months (Joint support with the Experimental Systems Program, the Directorate for Education and Human Resources and the Directorate for Engineering - Total Grant $470,560).
A 1986 Memorandum of Understanding (MOU) between the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA) established a three-year joint program supporting VLSI (Very Large Scale Integration) fabrication by MOSIS (Metal Oxide Semiconductor Implementation Service) for qualifying universities. This is a three-year continuation to that agreement, commencing October 1, 1989. This continuation of the MOU expands the original program to accelerate critical capabilities for Microsystems Design and Prototyping in U.S. universities. This includes expanding the services and technologies beyond those originally available to universities by MOSIS (e.g., semi-custom and gallium arsenide chip fabrication); stressing VLSI education, especially undergraduate education needed for designing future electronic systems; exploring new fabrication services designed specifically to meet the research and education community's need for cost-effective experimentation with state-of-the-art technologies; and developing the rapid prototyping methodologies, tools, and services needed for complete systems.

Systems Prototyping and Fabrication

Dr. Paul T. Hulina, Program Director
(202) 357-7853
phulina@nsf.gov

The Program

The Systems Prototyping and Fabrication Program (SPF) has three principal interrelated thrusts. The first (prototyping) supports research in rapid system prototyping of experimental information processing systems. The second (fabrication) supports research related to state-of-the-art problems in, and the infrastructures and services needed for, the fabrication of parts for these systems. The third (overlapping) area is assistance to undergraduate microelectronics education. This includes the support and administrative oversight of MOSIS.

SYSTEMS PROTOTYPING

Systems Prototyping deals with the issues involved in the engineering of rapid prototypes of experimental information processing systems. The goal is to develop methodologies and technologies that reduce the time needed to prototype interesting experimental systems. To make this possible, the program element seeks to provide the necessary infrastructure, environments, and services for rapid prototyping. The knowledge, methodologies, and services developed in this program will be useful both to industrial and university research groups interested in building experimental systems for experimentation and evaluation of new ideas and concepts. Research is supported on: systems-level design tools for specification and synthesis of systems (jointly with the Design, Tools and Test program), design frames (at the chip and board level), the interface problem, specifications, formats at the system level, standards, and new intermediate packaging technologies.

MICROELECTRONIC FABRICATION

The Microelectronic Fabrication element supports basic research needed to understand, model, and control the microfabrication process. This involves work on new technology, pattern definition and transfer, modeling and simulation, and process automation (computer-integrated manufacture of integrated systems). The emphasis is on work at the system level as opposed to addressing materials and device physics issues. This program element encourages research proposals from university groups knowledgeable about industrial problems in the systems-fabrication area. Solutions to existing problems might require the development of: architectures for manufacturing, simulation, and real-time control; new disciplines for modelling semiconductor manufacturing equipment and processes; new test structures, sensors, and instrumentation for process monitoring; modelling and simulation at the process, device, and circuit level; and integration of CAD, CAM, and CAT.

EDUCATION

This element consists of two components. The first is MOSIS (MOS Implementation Service), which serves as a broker to the semiconductor foundry industry and plays the major role in fabricating student chips.
Second, this element funds proposals dealing with new technology issues at the undergraduate level, such as: sponsorship of educationally oriented conferences and workshops, funding of innovative technology development that significantly impacts the educational infrastructure as regards system prototyping, distribution of preliminary versions of innovative educational materials (both hardware and software), and encouragement of the upgrading of subject matter, curriculum, laboratories, and faculty.

Initiatives and Opportunities

The Systems Prototyping and Fabrication Program offers opportunities in the development of new technologies necessary for the prototyping of experimental systems, and provides access to these technologies for the research community. This program also supports the development of educational materials and access to these new technologies for educational use in order to provide a new generation of highly qualified system designers and implementors.

The pace of technological innovation is accelerating, and this acceleration offers both industry and academia new technologies and methodologies to exploit. For example, while MOSIS has provided, and will continue to provide, a valuable service to the university education and research community, other methods of implementation must be made available. Prototyping, package design, and design for testing and manufacture must be integrated with, and closely coupled to, systems and circuit design. The needs for higher performance and reliability with smaller size, lower cost, and lower power also dictate the same. This means the universities must adopt more of a systems outlook to educate those designers, and provide them with a more comprehensive design experience. New technologies (mini-fab production lines, Field-Programmable Gate Arrays (FPGA), Multi-Chip Modules (MCM), optical interconnect, micro-sensors, etc.) and new methodologies (fast prototyping, top-down design, powerful CAD tools, design libraries, etc.) have the potential to meet this need if coupled with innovative services and an updated infrastructure.

This program currently supports and is actively soliciting proposals in areas related to High Performance Computing and Communications (HPCC), and recent FCCSET initiatives such as Materials Processing and Manufacturing-related Research. Listed below are some of the key issues related to these initiatives.

* Overcoming the performance limitations due to packaging by integrating new packaging concepts into the design process.
* Reducing the cost and time for fabrication and prototyping with new tools, equipment, and services.
* Exploiting new technologies (field-programmable gate arrays, multichip modules, etc.) and new methodologies in research and education.
* Simplifying and automating the fabrication processes.
* Establishing package and multi-chip module packaging standards.
* Functional and physical partitioning across and within package levels.
* Establishing a better relationship between tool designers and those doing fabrication, packaging, and prototyping, in areas such as requirements, integration, and evaluation of performance.
* Exposing students to the above considerations in a system-level design experience, with practice in the optimized selection among alternatives and exposure to design verification.
* Innovative use of curricular materials, compression of topics, and curriculum updating to introduce these advances into an overcrowded undergraduate curriculum.
Awards

Systems Prototyping and Fabrication

Stanford University; Teresa H.Y. Meng; PYI: Digital Signal Processing Techniques for Image Analysis; (MIP-8957058 A02); $62,500; 12 months.
This research focuses on architectures and applications for digital signal processing (DSP). A specific goal is a machine vision system for optical lithography alignment using DSP techniques. To accomplish this goal, new adaptive filtering techniques for edge and line detection and overlay measurements are being developed. To help with the design of the algorithm, a high-level simulator for evaluating algorithms is being developed to run on a distributed computer system. During this time increment, the focus is on the further evaluation of the performance of the proposed adaptive filtering technique for the alignment task by applying the method to real-time scanning-confocal microscope images of resist overlay features. Two sets of data obtained from industrial partners (critical dimension measurement and lithography) are being used to explore the tracking capability of the adaptive algorithm.

University of California - Berkeley; Paul R. Gray, Donald O. Pederson and Avideh Zakhor; MOS Data Acquisition Circuits; (MIP-8911017 A01); $188,038; 12 months.
This project is investigating MOS data acquisition circuits. The research is directed toward establishing the fundamental factors which limit the performance of MOS analog-digital interface circuits, and toward synthesizing architectures which closely approach those fundamental limits. Specific topics for investigation include self-calibrated pipelined A/D circuits, video CODECs, investigation of fundamental performance limits in sample-and-hold circuits, concurrent analog processing in A/D converters, and mixed-level simulation for A/D interface applications.

University of California - Berkeley; David A. Hodges, Lawrence A. Rowe, and Costas J. Spanos; Computer Integrated Manufacturing: Database Support and Control Applications; (MIP-9014940); $377,580; 24 months.
This research is on systems for instruction in, and control of, computer-integrated manufacturing (CIM) of integrated circuits (ICs). There are two parts. First is the user interface, where the primary thrust is developing graphical user interfaces and multimedia applications to improve manufacturing productivity and training. These include a standard desktop interface for control, scheduling, analysis, and management programs. Also being developed are an engineer's notebook and a hypermedia introduction to semiconductor manufacturing. Second is process control and development, where research is focused on designing and prototyping object-oriented systems to support routine manufacturing applications at the equipment level. Activities supported include: real-time monitoring, statistical process control, fault diagnosis, the efficient development of new recipes, and the economical creation of equipment models. Experiments with real-time monitoring of applications on equipment (processing and analytical) and equipment maintenance are being done. Results are being used to create a photo-lithography workcell prototype.

University of California - Los Angeles; Jason Cong; RIA: Interconnection Problems for High-Performance VLSI Circuits and Systems; (MIP-9110511); $69,980; 24 months (Joint support with the Design, Tools and Test Program - Total Grant $69,980).
The research is on chip-to-chip and on-chip interconnection problems. General formulations and efficient solutions to these problems are being explored.
The focus is large chip/system designs with over a million transistors. Algorithms which minimize the interconnection delay and maximize circuit performance are being developed. Topics being addressed are: 1. timing-driven global routing with bounded routing costs for both cell-based designs and building-block designs; 2. high-speed clock routing with minimum skew for cell-based and building-block designs; and 3. chip-to-chip interconnection problems for multichip packaging, including multilayer planar subset, multilayer via minimization, and transmission line problems.

University of California - Santa Cruz; Wayne W-M. Dai; PYI: Computer-Aided Design of VLSI Circuits--Constrained Net Embedding for Multichip Modules; (MIP-9058100 A01); $80,000; 12 months (Joint support with the Design, Tools and Test Program - Total Grant $90,000).
This research models the interconnection topology and matrix needed for optimally laying out interconnections in multichip modules. Three topics are being pursued. First is performance-driven layout for multichip modules (MCMs). A performance-driven layout system for thin-film MCM designs is being developed. Variable width, variable spacing, evenly distributed spacing, and thermal via insertion are used to maintain distortion-free propagation of high-speed signals and to control cross-talk, switching noise, and thermal resistance. Second is early system analysis tools, which allow the designer to bridge architecture and technology issues and evaluate tradeoffs. These tools provide a framework for different levels of analysis and more detailed simulation. Third, an investigation is being made into a multiple-bus network for parallel processing which matches the MCM requirements of higher I/O pin count and inter-chip routing density. This design is based on the combinatorics of the balanced incomplete block design, and has good fault-tolerant properties that lead to uniform bus load and processor fanout. This grant includes support for two undergraduate students under the Research Experiences for Undergraduates Program.

University of California - Santa Cruz; Wayne W-M. Dai; 1991 Multichip Module Workshop, March 28-29, 1991, Santa Cruz, California; (MIP-9100078); $16,393; 12 months.
This award provided partial funding for a workshop that investigated the technology and applications of the electronic packaging technique called MultiChip Modules (MCMs) that are particularly relevant and of interest to the university community. MCMs represent the next generation in packaging. Currently packaging is a (if not the) major factor limiting system performance, cost reduction, and new applications. However, the technology is capital-intensive, and heretofore has been beyond the reach of most universities. Progress has now opened an opportunity window for university involvement, and this workshop was intended to investigate how to exploit this opportunity.

University of California - Santa Cruz; Kevin Karplus; 1991 Advanced Research in VLSI Conference, March 26-28, 1991, University of California, Santa Cruz; (MIP-9014762); $2,400; 12 months (Joint support with the Design, Tools and Test Program, Microelectronic Systems Architecture Program, the Circuits and Signal Processing Program, and the Experimental Systems Program - Total Grant $12,000).
This award supports one conference in a series that has alternated between the east and west coasts for more than a decade. The conference is intended to publicize innovative, interdisciplinary research with a strong VLSI component.
This year's focus is on systems integration. The National Science Foundation support allows reduced registration rates for students, university employees, and program committee members, and travel and registration expenses for the invited speakers.

University of Colorado - Boulder; Yung-Cheng Lee; Quick Prototyping System for Multichip Modules; (MIP-9106923); $61,365; 12 months.
This research addresses several issues critical to the development of the proposed quick prototyping system for multichip modules (MCMs). These issues are: 1. the detailed design of the workcenter; 2. the experimental demonstration of the accuracy and repeatability of two critical units in the workcenter -- the photoplotter and the pick-and-place robot; 3. the evaluation of the solder joint's electrical resistance; 4. the complete description of the user interface of the workcenter; 5. the electroless plating for single-chip solder bumping; and 6. the explicit comparison to MCC's QTAI and GE's High-Density Interconnect (HDI) alternative methods. Although the system is proposed for quick prototyping, the manufacturing requirements in these issues are being addressed. The requirements for manufacturing are more demanding than those for prototyping.

University of Colorado - Boulder; Yung-Cheng Lee; PYI: Multichip Module Design for Manufacturing; (MIP-9058409 A02); $62,500; 12 months.
The research addresses some key requirements for the design for manufacture of very small supercomputers used in intelligent machines such as portable robots. This multidisciplinary effort is centered on developing a compact rapid prototyping and manufacturing center. Microscale laser lithography, flip-chip soldering, and robot-controlled pick-and-place techniques are being used. Simulation studies validate the design before prototyping. The focus of this research continues in the area of modeling of the self-alignment mechanism in flip-chip soldering, experimental study on the self-alignment mechanism, and yield modeling of flip-chip soldering. In addition, two new studies in the areas of thermal management of 3-D packaged supercompact systems and new flip-chip connection methods are being initiated.

University of Colorado - Boulder; Jon R. Sauer and Robert J. Feuerstein; SGER: Initial Prototype of an Optical Multi-Processor Interconnect; (MIP-9119312); $49,966; 6 months.
This Small Grant for Exploratory Research (SGER) is for the development of the initial prototype of one of the nodes of an optoelectronic interconnection network ultimately capable of reaching terabyte-per-second capacities. Such a network is a crucial component in a future teraflop, distributed processing system. The required capacity is reachable through innovations in architecture, further development of a few devices that have been demonstrated, and careful systems integration. This project describes the first stage in the development of a single-word transmission, multiply-connected network, ultimately capable of delivering and extracting a sustained average rate of a few 10s of Gb/s from many 100s of electronic hosts simultaneously. The electronic hosts would be at multiway low-latency optical switching nodes separated by a few meters to 10s of kilometers.
The techniques to be used are a hot-potato self-routing protocol with no opto-electronic conversion, a minimum-depth and physically flexible topology, optical packet compression by optical wavelength and microwave subcarrier-division multiplexing per word, optical amplification, multiwavelength optical switching, and multitechnology opto-electronic integration and packaging. The principal innovation is to architect and engineer a system that exploits these strengths simultaneously. A large multi-year project will be staged as a series of mutually dependent, increasingly sophisticated system demonstrations and device development efforts. This project constructs the first rudimentary demonstration of the node in a few months with (nearly) off-the-shelf components.

Florida Atlantic University; Craig S. Hartley, Ray Barrett, Bela Szabo, Oren Masory, and Karl K. Stevens; Rapid Prototype MCM Test Engineering System; (MIP-9106538); $50,000; 12 months.
This Small Grant for Exploratory Research (SGER) award is to prepare detailed plans and protocols for university-based, remote-accessed MultiChip Module (MCM) testing services for universities. Current MCM testing is proprietary, product-specific, and without university involvement. This research contributes to the development of standard public domain processes that can be used for prototype-quantity testing of MCMs whose electrical, physical, and test designs are done at universities. Plans provide for testing of bare chips, bare substrates, and assembled modules for various assembly methods and degrees of testing automation.

Georgia Institute of Technology; Timothy J. Drabik; RIA: Optically Interconnected Wafer Scale Synchronous Processor Arrays; (MIP-9110276); $69,960; 24 months.
This research studies the feasibility of using free-space optics to interconnect large, monolithic arrays of electronic processing elements (PEs), fabricated at or near the wafer scale, into topologies that are commonly characterized as having "high wire area". Two long-standing problems are addressed in this project: 1. Conventional approaches to wafer-scale integration must forfeit considerable area to provide defect tolerance in the form of redundant or programmable interconnects and a sufficient density of spare PEs. 2. The poor performance of cross-wafer electrical interconnects and the dominance of wire area in the 2-D layout of highly connected topologies strongly favor mesh-like, essentially two-dimensional, locally connected networks. In this research, electronic PEs are provided with optical inputs for all signals entering or leaving the PE. Light signals emitted perpendicularly from the substrate constitute the input to an imaging system which, by virtue of specifically designed diffractive elements, realizes a mapping from the optical source locations to the detector locations that implements the desired interconnect. Because inter-PE connections are optical, the additional complexity required to support defect tolerance is absorbed into the imaging system instead of the circuitry. Areas of investigation include systems and architectural aspects of optically interconnected parallel machines, optimal design procedures and fabrication technology for diffractive elements, technology for light modulators on silicon, and imaging system engineering.
A prototype system comprising a butterfly machine for computing fast transforms serves as an experimental vehicle for investigation of technological and systems aspects of optically interconnected wafer-scale processor arrays.

University of Hawaii; Michael J.S. Smith; PYI: Computer-Aided Analog Integrated Circuit (IC) Design; (MIP-8957407 A02); $72,500; 12 months.
The research is directed toward the development and application of CAD (computer-aided design) tools for analog/digital design. Applications are to smart sensors, neural networks, ASIC (application-specific integrated circuit) design methodology, and the development of related microelectronics educational materials. This grant includes support for undergraduate students under the Research Experiences for Undergraduates Program.

Massachusetts Institute of Technology; Hae-Seung Lee; PYI: Analog Device and Circuit Design in Integrated Circuits and Sensors; (MIP-8858020 A04); $62,500; 12 months.
The focus of this research is on a next-generation fabrication technology for analog/digital converters and the demonstration of the integrability of the new technology through the implementation of a high-performance analog-digital converter. Research concentrates in two areas: 1. BiCMOS Phase-Locked Loop Designs. Testing of the first silicon containing transmit and receive phase-locked loops includes tests at 250 MHz and a characterization of the master-slave phase-lock technique, verification of the accuracy of the SPICE simulation, and a check of injection locking. Based on the tests and evaluations, the complete system is then designed and implemented in a second chip design. 2. Integrated Capacitive Sensors and Circuits. Integrated capacitive pressure sensors are being fabricated and tested. Sigma-delta oversampling techniques in the readout are being investigated as a possible replacement for the successive approximation readout technique currently in use.

University of Michigan; William P. Birmingham; PYI: Computer-Aided Design Synthesis; (MIP-9057981 A01 & 02); $100,000; 12 months.
This research focuses on the development of a set of tools to rapidly prototype digital systems, taking into account the need to minimize lifetime costs by building testable systems and including consideration of manufacturing issues. The knowledge-based synthesis system MICON serves as a software workbench. MICON takes as input high-level functional specifications for a microprocessor-based system and generates a complete design. More specifically, this research focuses on three major topics: 1. a design-for-testability extension (DFT); 2. a domain-independent version of MICON (DIM); and 3. an interface to high-level (behavioral) synthesis tools. During this period, the research focuses on the completion and enhancement of the DFT module and furthering the DIM research. Included is a supplement to cover the acquisition of two workstations to be used to develop MICON software for the principal investigator's research on a computer module design synthesis tool.

State University of New York - Binghamton; Jiayuan Fang; RIA: Electromagnetic Modeling of Interconnections in High-Speed Digital Integrated Circuits; (MIP-9110203); $69,938; 24 months.
This research is on accurate models of the electrical performance of interconnections on digital circuits. Maintenance of the fidelity of a signal as it propagates through various interconnections is a key factor in proper functioning of integrated circuits (IC's).
For very high-speed IC's, signal propagation has a complicated electromagnetic nature, and accurate models of the complex interconnection structures are difficult to obtain. In the model being developed, transient responses of signal propagation through interconnections are represented by the three-dimensional, full-wave, finite-difference solution of Maxwell's equations in the time domain. This solution is being developed to conform to the interconnection geometries found in multilayer IC's. Attention is being paid to excitation and absorbing boundary conditions to ensure a high degree of numerical accuracy. Various interconnection structures are being analyzed, with special consideration being given to bends of different angles (connections on the same layer) and vias of different geometries (connections between adjacent layers).

North Carolina State University; Michael B. Steer and Paul D. Franzon; Interconnect Models for Computer Aided Design of Multi-GHz Multichip Modules and Integrated Circuits; (MIP-9017054); $258,614; 24 months.
This grant funds the development of electrical models of the interconnections in Multichip Modules (MCMs) that are useful at gigahertz frequencies. At these frequencies the interconnections have a significant, if not dominant, effect. MCMs are the next-generation packaging technique. Better MCM models are needed to exploit the opportunities that MCMs offer for essential enhancements in the performance of both analog and digital electronics. The models are obtained from experimental, analytical, and simulation studies. The outcome produces tables with routines for interpolation that can be embedded in MCM design rules and invoked by separate MCM CAD tools.

University of North Carolina - Chapel Hill; Raj K. Singh; BioSCAN: A VLSI-Based System for Biosequence Analysis; (MIP-9024585); $100,000; 12 months (Joint support with the Experimental Systems Program - Total Grant $250,100).
The goal of this project is to construct an attached processor using application-specific integrated circuits. This processor performs high-speed partial pattern matching of biological sequence data (such as DNA or protein sequences) against entries in a database. This processor serves as a filter to provide information on partial matches (of segments) at a very high speed. This partial match information is used by the host processor to determine which sequences merit further detailed comparison (i.e., this prefiltering greatly limits the number of sequences which must be subsequently compared when considering insertions and deletions of subsequences). The subsequent full match is carried out in the host processor.

University of North Carolina - Charlotte; Bei-Tseng Chu; RUI: Enhancing VLSI Circuit Yields Through Identifying Design Sensitivities; (MIP-9017151); $86,038; 24 months.
This grant funds the development of a software tool for the statistical analysis of integrated circuit failures. The tool is used on existing failure rate data to locate unpredicted combinations of circuit parameters that produce failure. Statistical methods are necessary because of the randomness in failure data and the very large number of parameters which interact in as yet poorly understood ways. These parameter combinations yield insight into the nature of those interactions. This insight is then systematized using artificial intelligence techniques.

University of North Carolina - Charlotte; Dian Zhou; RIA: Layout Design of VLSI Multichip Modules; (MIP-9110450); $60,000; 24 months.
This research is on layout problems in ultra-large-scale integration (ULSI). The nature of the multichip module (MCM) layout problem and the associated design problems are being investigated. Activities include: 1. Exploring MCM layout by characterizing the physical requirements posed by MCM technology. Layout models for the MCM design problem are being built. Physical constraints being included in the model are: functions of layers (signal, power-ground, redistribution), layer ordering, wire width and separation, and grid-gridless geometry. The differences between MCM layout and traditional IC layout are being considered. 2. Finding techniques to decompose a 3-D MCM layout problem into a set of single-layer layout problems. 3. Designing efficient algorithms for the MCM layout problem and incorporating performance requirements into the algorithms. Layer minimization, performance, and global routing are being addressed.

Texas A & M University; Ugur Cilingiroglu; RIA: Charge-Pumping Neural Networks; (MIP-9103424); $29,160; 24 months (Joint support with the Microelectronic Systems Architecture Program - Total Grant $49,160).
The general objective of this project is to fully explore the "Charge-Pumping Network (CPN)" concept, and to transform it into a very densely integrable family of microelectronic neural networks. The CPN, in its generic form, is the simplest interconnected array of MOS gated diodes. It is capable of performing inner-product and thresholding operations in one direction and weighted averaging in the opposite direction in a simultaneous fashion over the same synaptic matrix. Yet, no visible feedback path exists in the array. This creates a very rich bidirectional neural functionality in a very compact network. The research plan includes the development and refinement of network synthesis procedures, a search for self-learning ability, and the entire design/fab/test cycle for implementing five different target architectures. The goal is to extend the knowledge base in neural network synthesis by offering a general procedure for non-negative synaptic matrix design, and to help develop a knowledge base in collective multistability through an analysis of this fundamental concept.

University of Washington; Gaetano Borriello; PYI: CAD for System Integration of Custom Components; (MIP-8858782 A04); $62,500; 12 months.
Automatic integration of hardware components is becoming an increasingly conspicuous bottleneck in completing the design of a microelectronic system. This research deals with some of the issues involved in the automatic synthesis and interconnection of digital circuits. One focus is on the specification and synthesis of glue logic in addition to the components themselves. A second focus is on the partitioning of hardware and software components in a system using embedded microcontrollers. The main aspects of the work include: an internal representation for circuit behavior that allows the specification of complex timing constraints as well as mixing structural and behavioral specifications; synthesis methods for mixed synchronous and asynchronous control logic; timing optimization of sequential logic; and new high-level synthesis approaches that take timing constraints into account.
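Purely as an illustrative aside (this sketch is not drawn from the project; the event names and timing bounds below are hypothetical), minimum/maximum separation constraints of the kind referred to above can be represented and checked against a candidate schedule very simply:

    # Hypothetical example: min/max separation constraints between interface
    # events, checked against a candidate schedule of event times (in ns).
    constraints = [
        # (earlier_event, later_event, min_sep, max_sep); None means unbounded
        ("addr_valid", "strobe_assert", 10, None),   # at least 10 ns of setup
        ("strobe_assert", "data_valid", None, 50),   # data must arrive within 50 ns
    ]

    schedule = {"addr_valid": 0, "strobe_assert": 15, "data_valid": 40}

    def violations(schedule, constraints):
        """Return the constraints that the schedule does not satisfy."""
        bad = []
        for earlier, later, lo, hi in constraints:
            sep = schedule[later] - schedule[earlier]
            if (lo is not None and sep < lo) or (hi is not None and sep > hi):
                bad.append((earlier, later, sep))
        return bad

    print(violations(schedule, constraints))   # an empty list means all constraints hold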
The goal is to develop a synthesis tool that takes as input a concurrent program specification of a circuit, partitions it into appropriate elements (to be implemented in hardware and/or software), and then generates the glue logic to tie the components together as well as interconnect to the environment. The focus for this period is on behavioral specification, behavioral synthesis, and field-programmable gate array architectures.

University of Wisconsin; John F. Beetem; Incremental Placement and Routing for Field-Programmable Gate Arrays; (MIP-9102382); $139,806; 24 months.
Field-programmable gate arrays (FPGAs) have the potential to revolutionize rapid hardware prototyping both in industry and in university research and instruction. To realize this potential, FPGA placement and routing tools must be orders of magnitude faster than those currently available. This project is investigating the use of incremental placement and routing to speed up FPGA design. Design is an iterative process characterized by small changes. By processing design changes as minimally as possible, incremental placement and routing has the potential to be dramatically faster than conventional tools, which must reprocess an entire design from scratch. For placement, a generalized incremental form of the force-directed algorithm with time-varying cost functions is being investigated. Routing is being investigated using incremental penalty-driven iterative improvement and a generalized graph representation of routing resources. The algorithms produced are implemented to demonstrate their efficacy and at the same time provide high-quality FPGA design tools. As a side benefit, the algorithms will also be applicable to conventional IC and PC board placement and routing tasks.

Education

Conference Management Services; Merry Bush and Paul Losleben; 1991 Microelectronic System Education Conference; (CDA-9150264); $5,000; 12 months (Joint support with the Office of Cross-Disciplinary Activities and Faculty Enhancement Program - Total Grant $35,477).
The proposal requests partial funding for a four-day Conference and Exposition on the theme "Microelectronic System Education". One of the objectives of the Conference and Exposition is to stimulate opportunities for leveraging NSF's investment in education through joining industry/government support and discounting of hardware and software products needed for education. The principal investigator has a track record in organizing these conferences.

University of Pittsburgh; Steven Levitan; Distribution of VLSI Design Software for Education and Research; (MIP-9101656); $48,809; 24 months (Joint support with the Design, Tools and Test Program - Total Grant $97,618).
CAD design tools, produced in research projects at the University of Pittsburgh, are being developed and enhanced so as to bring them into a state for distribution to general users. These tools are designed to help researchers and educators investigate synthesis of VLSI systems from VHDL descriptions. Specific tools being enhanced are: 1. "Vcomp", a compiler for a subset of the VHDL language; 2. "Vsim", the corresponding simulator; 3. a companion schematic editor written for X11; 4. a companion netlist-to-schematic tool written for X11; and 5. "VF2VHDL", a reverse translator from netlists to VHDL. Tasks include: 1. Generate documentation for the compiler and simulator with tutorials, examples, and classroom aids; 2.
Education

Conference Management Services; Merry Bush and Paul Losleben; 1991 Microelectronic System Education Conference; (CDA-9150264); $5,000; 12 months (Joint support with the Office of Cross-Disciplinary Activities and Faculty Enhancement Program - Total Grant $35,477).

The proposal requests partial funding for a four-day Conference and Exposition on the theme "Microelectronic System Education". One of the objectives of the Conference and Exposition is to stimulate opportunities for leveraging NSF's investment in education through joint industry/government support and the discounting of hardware and software products needed for education. The principal investigator has a track record in organizing these conferences.

University of Pittsburgh; Steven Levitan; Distribution of VLSI Design Software for Education and Research; (MIP-9101656); $48,809; 24 months (Joint support with the Design, Tools and Test Program - Total Grant $97,618).

CAD design tools produced in research projects at the University of Pittsburgh are being developed and enhanced to bring them into a state suitable for distribution to general users. These tools are designed to help researchers and educators investigate the synthesis of VLSI systems from VHDL descriptions. Specific tools being enhanced are:

1. "Vcomp", a compiler for a subset of the VHDL language;
2. "Vsim", the corresponding simulator;
3. A companion schematic editor written for X11;
4. A companion netlist-to-schematic tool written for X11; and
5. "VF2VHDL", a reverse translator from netlists to VHDL.

Tasks include:

1. Generate documentation for the compiler and simulator, with tutorials, examples, and classroom aids;
2. Enhance the simulator to provide a graphical waveform display package based on IRSIM/Analyzer from Stanford;
3. Convert the old schematic tool from SunView to X11;
4. Clean up the netlist-to-schematic tool for X11 for distribution; and
5. Update the netlist-to-VHDL tool.

Technical support via email will be provided. This grant provides a supplement for distributing and sharing research software under the Software Capitalization Grants Program.

University of Washington; Carl Ebeling, Gaetano Borriello, and Lawrence Snyder; Packaging and Distribution of Electronic CAD Software; (MIP-9018224); $50,000; 12 months (Joint support with the Design, Tools and Test Program - Total Grant $100,000).

Three CAD design tools, produced in research projects at the University of Washington, are being developed and enhanced to bring them into a state suitable for distribution to general users. These tools are:

1. MacTester, an interactive testing and debugging environment built around the Macintosh computer;
2. WireC, a mixed graphical and procedural language for describing hardware systems; and
3. Gemini, a VLSI layout verification program that compares the circuit specification with the circuit layout.

This grant is made under the Software Capitalization Program.

MOSIS

Defense Advanced Research Projects Agency; John C. Toole; NSF/DARPA Agreement for Use of DARPA VLSI Implementation; (MIP-9015601); $462,416; 12 months (Joint support with the Education and Human Resources Directorate - Total Grant $687,400).

A 1986 Memorandum of Understanding (MOU) between the National Science Foundation (NSF) and the Defense Advanced Research Projects Agency (DARPA) established a three-year joint program supporting VLSI (Very Large Scale Integration) fabrication by MOSIS (Metal Oxide Semiconductor Implementation Service) for qualifying universities. This is a three-year continuation of that agreement, commencing October 1, 1989. This continuation of the MOU expands the original program to accelerate critical capabilities for Microsystems Design and Prototyping in U.S. universities. This includes expanding the services and technologies beyond those originally available to universities through MOSIS (e.g., semi-custom and gallium arsenide chip fabrication); stressing VLSI education, especially the undergraduate education needed for designing future electronic systems; and exploring new fabrication services designed specifically to meet the research and education community's need for cost-effective experimentation with state-of-the-art technologies and for the rapid prototyping methodologies, tools, and services needed for complete systems.

Summary of MOSIS Support

Educational Use of MOSIS
     Education and Human Resources Directorate. . . . . . . . . . $225,000
     Engineering Directorate . . . . . . . . . . . . . . . . . . . $225,000
Research Use of MOSIS
     Computer and Information Science and Engineering Directorate
          Experimental Systems Program . . . . . . . . . . . . . . .$20,560

Index of Presidential Young Investigators

Agarwal, Anant. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 Anastassiou, Dimitris . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Banerjee, Prithviraj. . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Birmingham, William P.. . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Borriello, Gaetano. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 Bresler, Yoram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Buckley, Kevin M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Bushnell, Michael . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
14 Cioffi, John M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 Dai, Wayne W-M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7, 64 Dally, William J. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 De Micheli, Giovanni. . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 Dill, David . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Eggers, Susan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 El-Naggar, Mohammed . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Farvardin, Nariman. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 Hill, Mark D. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 Jenkins, Keith B. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Larrabee, Tracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Lee, Edward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 Lee, Hae-Seung. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Lee, Yung-Cheng . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 Maragos, Petros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Meng, Teresa H.Y. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Pillage, Lawrence T.. . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Rabaey, Jan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 Ramachandran, Umakishore. . . . . . . . . . . . . . . . . . . . . . . . . . 22 Rutenbar, Rob A.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Saleh, Resve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Schlag, Martine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Smith, Michael J.S. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Stonick, Virginia L.. . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Van Veen, Barry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 Wakefield, Gregory H. . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Wawrzynek, John . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 White, Jacob K. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Yagle, Andrew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 Zakhor, Avideh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 Zukowski, Charles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Index of Research Initiation Investigators Ahuja, Mohan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Baraniecki, Anna Z. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 Belfore, Lee. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 Briner, Jack V. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Brunvand,Erik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Burleson, Wayne . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Chan, Pak . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 Cilingiroglu, Ugur. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 Cong, Jason . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6, 63 Dai, Hong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Djuric, Petar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 Doerschuk, Peter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Drabik, Timothy J.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Fang, Jiayuan . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . 66 Harjani, Ramesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Jeffs, Brian. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 Kahng, Andrew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 Kiaei, Sayfe. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 Leeser, Miriam. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Perkowski, Marek. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Pomeranz, Irith . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Potter, Lee C.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Riskin, Eve A.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Sastry, Sarma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Shang, Weijia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 Swindlehurst, A. Lee. . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 Varma, Anujan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Wilken, Kent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Wolf, Wayne . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Wong, Ping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Zhou, Dian. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 Index of Principal Investigators A Abelson, Harold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 Agarwal, Anant. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 Agrawal, Dharma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 Agrawal, Vishwani . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Ahuja, Mohan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Anastassiou, Dimitris . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Arce, Gonzalo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 B Banerjee, Prithviraj. . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Baraniecki, Anna Z. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 Barrett, Ray. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Batcher, Kenneth. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 Beetem, John F. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 Belfore, Lee. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 Bhuyan, Laxmi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Bilgutay, Nihat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 Birmingham, William P.. . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Borriello, Gaetano. . . . . . . . . . . . . . . . . . . . . . . 18, 55, 68, 69 Bose, Bella . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 Bose, Nirmal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Boyer, Robert S.. . . . . . . . . . . . . . . . . . . . . . . . . . . . 26, 57 Bresler, Yoram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Briner, Jack V. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Brunvand,Erik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Bryant, Randal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Buckley, Kevin M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Burleson, Wayne . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Bush, Merry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. 68 Bushnell, Michael . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 C Cabrera, Sergio D.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Carley, L. Richard. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 Chakravarty, Sreejit. . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Chan, Pak . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 Chellappa, Ramalingam . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Choudhary, Alok . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 Chowdhury, Salim. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9 Chu, Bei-Tseng. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 Chua, Leon. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Ciesielski, Maciej. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9 Cilingiroglu, Ugur. . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 67 Cioffi, John M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 Clarke, Edmund M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Clarkson, Peter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Cohoon, James . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 Cong, Jason . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6, 63 D Dai, Hong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Dai, Wayne W-M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7, 64 Dally, William J. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Darling, Robert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 Das, Chitaranjan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Davidson, Edward. . . . . . . . . . . . . . . . . . . . . . . . . . . . .9, 23 Davis, Nathaniel J. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 De Micheli, Giovanni. . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 DeGroat, Joanne . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 Deller, John. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Deogun, Jitender. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Dill, David . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Djuric, Petar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 Doerschuk, Peter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Drabik, Timothy J.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Du, David . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 E Ebeling, Carl . . . . . . . . . . . . . . . . . . . . . . . . . . . 18, 55, 69 Eggers, Susan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 El-Naggar, Mohammed . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Ellis, John L.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 Ercegovac, Milos. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 Etter, Delores. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 F Fang, Jiayuan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Farmwald, P. Michael. . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 Farvardin, Nariman. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 Fellows, Michael. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Feng, Tse-Yun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Ferguson, Frankie J.. . . . . . . . . . . . . . . . . . . . . . . 
. . . . . 12 Feuerstein, Robert J. . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Flynn, Michael J. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 Frank, Thomas H.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Franzon, Paul D.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 Friedlander, Benjamin . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 Fuchs, Henry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 G Gajski, Daniel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 Gardner, William. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 Geman, Stuart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Ghosh, Sumit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 Gopalakrishnan, Ganesh. . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Gottlieb, Allan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 Gray, F. Gail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Gray, Paul R. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38, 63 Gray, Robert. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37, 43 Grenander, Ulf. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 H Harjani, Ramesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Hartley, Craig S. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Hayes, John . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Hill, Mark D. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 Hodges, David A.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Hollaar, Lee A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60 I Irwin, Mary . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 33, 59 J Jain, V. K. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Jeffs, Brian. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 Jenkins, Keith B. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Jenkins, W. Kenneth. . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 Johnson, Steven . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 Jones, Douglas. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Jones, Larry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 K Kahng, Andrew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 Kanade, Takeo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 Karalamangala, Arun . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Karplus, Kevin. . . . . . . . . . . . . . . . . . . . . . . 17, 33, 50, 59, 64 Kass, David . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Katz, Randy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 Kaufman, Arie . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 Kaufman, Howard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Kaveh, Mostafa. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 Kedem, Gershon. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 Kehl, Theodore H. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 Kiaei, Sayfe. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 Kime, Charles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Knapp, David. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 Koren, Israel . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . .9 Kuck, David . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 Kuh, Ernest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 Kurdahi, Fadi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 L Lagnese, Elizabeth D. . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Landis, D. L. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Lang, Tomas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 Langston, Michael . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 LaPaugh, Andrea S.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Larrabee, Tracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Lee, Edward . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 Lee, Hae-Seung. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Lee, Yung-Cheng . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64 Leeser, Miriam. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Levitan, Steven . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17, 68 Levy, Bernard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 Liu, C. L.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Lombardi, Fabrizio. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Lopresti, Daniel. . . . . . . . . . . . . . . . . . . . . . . . . . . . 32, 59 Losleben, Paul. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 M Maragos, Petros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Martin, Kenneth W.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 Masory, Oren. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Mathews, V. John. . . . . . . . . . . . . . . . . . . . . . . . . . . . 43, 44 Mazumder, Pinaki. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 McClure, Donald E.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Meng, Teresa H.Y. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Messerschmitt, David G. . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Meyer, Robert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 Mogri, Juzer S. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Moldovan, Dan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 Morgan, Nelson. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 Mudge, Trevor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9, 23 Namazi, Mohamad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 Nayeri, Majid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 O Owens, Robert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31, 59 P Palusinski, Olgierd . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Pederson, Donald O. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Perkowski, Marek. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Pillage, Lawrence T.. . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Pillai, Unnikrishna . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 Pinter, Robert. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 Poggio, Tomaso. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 Pomeranz, Irith . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Potter, Lee C.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Poulton, John W.. . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . 55 Prasanna-Kumar, V. K. . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 R Rabaey, Jan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 Raghavendra, Cauligi. . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 Raghuveer, Mysore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 Ramachandran, Umakishore. . . . . . . . . . . . . . . . . . . . . . . . . . 22 Ramirez-Angulo, Jaime . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 Ranganathan, N. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Riskin, Eve A.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 Rowe, Lawrence A. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Rutenbar, Rob A.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 S Sakallah, Karem . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9, 23 Saleh, Resve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Salowe, Jeffrey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 Saluja, Kewal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Sarrafzadeh, Majid. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Sastry, Sarma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Sauer, Jon R. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Scherson, Isaac D.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 Schlag, Martine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Seger, Carl-Johan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Seth, Sharad. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Setliff, Dorothy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 Shang, Weijia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 Shen, John. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 Shin, Kang. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Siewiorek, Daniel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Silverman, Harvey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 Singh, Raj K. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58, 67 Smith, Michael J.S. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Snyder, Donald. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Snyder, Lawrence. . . . . . . . . . . . . . . . . . . . . . . . . . 18, 55, 69 Song, Bang-Sup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 Soumekh, Mehrdad. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 Spanos, Costas J. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 Sridhar, M. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Steer, Michael B. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 Steiglitz, Kenneth. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 Stevens, Karl K.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Stonick, Virginia L.. . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Sussman, Gerald J.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 Swindlehurst, A. Lee. . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 Szabo, Bela . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 T Tanner, John. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 Temes, Gabor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38, 56 Thomas, Donald. . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . 11 Thomborson, Clark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Tomita, Masaru. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 Toole, John C.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60, 69 Tugnait, Jitendra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 V Vaidyanathan, P.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 Van Veen, Barry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 Varma, Anujan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Veidenbaum, Alexander V.. . . . . . . . . . . . . . . . . . . . . . . . . . 56 Vetterli, Martin. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Voelcker, Herbert B.. . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 Vu, Tho T.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 W Wah, Benjamin W.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Wakefield, Gregory H. . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 Watanabe, Hiroyuki. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 Wawrzynek, John . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 Webb, Jon A.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 Wei, Belle. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 White, Jacob K. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Wilken, Kent. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 Williamson, Geoffrey. . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Willsky, Alan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50 Windley, Phillip. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 Winkel, David . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 Wojcik, Gregory L.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 Wolf, Wayne . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Wong, Ping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Woods, John . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44, 46 Wyatt, John R.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 Y Yagle, Andrew . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 Z Zakhor, Avideh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43, 63 Zemanian, Armen H.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 Zhou, Dian. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 Zukowski, Charles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Index of Institutions Auburn University . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 Brigham Young University. . . . . . . . . . . . . . . . . . . . . . . . 40, 48 Brown University. . . . . . . . . . . . . . . . . . . . . . . . . . 32, 47, 59 California Institute of Technology. . . . . . . . . . . . . . . . . . . . . 48 Carnegie-Mellon University. . . . . . . . . . . . . 4, 11, 16, 31, 42, 58, 59 Clarkson University . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Columbia University . . . . . . . . . . . . . . . . . . . . . . . . . . 10, 44 Conference Management Services. . . . . . . . . . . . . . . . . . . . . . . 68 Cornell University. . . . . . . . . . . . . . . . . . . . . . . . . . . 11, 55 Defense Advanced Research Projects Agency . . . . . . . . . . . . . . . 60, 69 Drexel University . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
51 Duke University . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 Florida Atlantic University . . . . . . . . . . . . . . . . . . . . . . . . 65 George Mason University . . . . . . . . . . . . . . . . . . . . . . . . . . 51 Georgia Institute of Technology . . . . . . . . . . . . . . . . . . . . 22, 65 Harvard University. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44 Illinois Institute of Technology. . . . . . . . . . . . . . . . . . . . . . 42 Indiana University. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 InfoLogic Software, Inc.. . . . . . . . . . . . . . . . . . . . . . . . . . 14 International Computer Science Institute. . . . . . . . . . . . . . . . . . 57 Kent State University . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 Lafayette College . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 Marquette University. . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 Massachusetts Institute of Technology . . . . . . . . . 16, 22, 50, 56, 58, 66 MEI Research. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Michigan State University . . . . . . . . . . . . . . . . . . . . . . . . . 46 Michigan Technological University . . . . . . . . . . . . . . . . . . . . . 51 New Mexico State University . . . . . . . . . . . . . . . . . . . . . . . . 30 New York University . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 North Carolina State University . . . . . . . . . . . . . . . . . . . . 31, 67 Northwestern University . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Ohio State University . . . . . . . . . . . . . . . . . . . . . 17, 25, 39, 47 Oregon State University . . . . . . . . . . . . . . . . . . . . . . . . . . 31 Pennsylvania State University . . . . . . . . . . . . . 25, 31, 33, 37, 47, 59 Perinatronics Medical Systems, Inc. . . . . . . . . . . . . . . . . . . . . 42 Polytechnic University. . . . . . . . . . . . . . . . . . . . . . . . . . . 48 Portland State University . . . . . . . . . . . . . . . . . . . . . . . . . 11 Princeton University. . . . . . . . . . . . . . . . . . . . . . . . . . 10, 24 Purdue University . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Rensselaer Polytechnic Institute. . . . . . . . . . . . . . . . . . . . 44, 46 Rochester Institute of Technology . . . . . . . . . . . . . . . . . . . . . 47 Rutgers University. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 San Jose State University . . . . . . . . . . . . . . . . . . . . . . . . . 28 Stanford University . . . . . . . . . . . . . . . . . 3, 5, 37, 43, 49, 57, 63 State University of New York - Binghamton . . . . . . . . . . . . . . . . . 66 State University of New York - Buffalo. . . . . . . . . . . . . . . . . 14, 40 State University of New York - Stony Brook. . . . . . . . . . . . . 4, 30, 49 Syracuse University . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 Tanner Research Inc.. . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 Texas A & M University. . . . . . . . . . . . . . . . . . . . . 15, 26, 32, 67 Top-Vu Technology, Inc. . . . . . . . . . . . . . . . . . . . . . . . . . . 39 University of Arizona . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 University of California - Berkeley . . . . . . . . . . . . 6, 28, 29, 37, 38 . . . . . . . . . . . . . 43, 49, 50, 63 University of California - Davis. . . . . . . . . . . . . . . . . . 21, 41, 50 University of California - Irvine . . . . . . . . . . . . . . . . . . . 6, 28 University of California - Los Angeles. 
. . . . . . . . . 6, 7, 28, 38, 56, 63 University of California - Santa Cruz . . . . . . . . . . . . 3, 7, 12, 17, 21 . . . . . . . . . . . . 33, 50, 59, 64 University of Colorado - Boulder. . . . . . . . . . . . . . . . . . 42, 64, 65 University of Delaware. . . . . . . . . . . . . . . . . . . . . . . . . . . 42 University of Hawaii. . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 University of Idaho . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 University of Illinois. . . . . . . . . 3, 8, 16, 22, 29, 38, 45, 46, 50, 56 University of Iowa. . . . . . . . . . . . . . . . . . . . . . . . . . . 9, 13 University of Maryland. . . . . . . . . . . . . . . . . . . . . . . . . . . 43 University of Massachusetts - Amherst . . . . . . . . . . . . . . . . . 9, 23 University of Michigan. . . . . . . . . . . . . . . . . 9, 13, 23, 39, 48, 66 University of Minnesota . . . . . . . . . . . . . . . . . . . . 4, 10, 39, 40 University of Minnesota - Duluth. . . . . . . . . . . . . . . . . . . . . . 4 University of Nebraska - Lincoln. . . . . . . . . . . . . . . . . . . . . . 13 University of North Carolina - Chapel Hill. . . . . . . . . . . 31, 55, 58, 67 University of North Carolina - Charlotte. . . . . . . . . . . . . . . . . . 67 University of North Carolina - Greensboro . . . . . . . . . . . . . . . . . 16 University of Pittsburgh. . . . . . . . . . . . . . . . . . . . . . 12, 17, 68 University of South Carolina. . . . . . . . . . . . . . . . . . . . . . . . 25 University of South Florida . . . . . . . . . . . . . . . . . . . . . . . . 29 University of Southern California . . . . . . . . . . . 13, 21, 22, 29, 45, 57 University of Southwestern Louisiana. . . . . . . . . . . . . . . . . . . . 30 University of Tennessee . . . . . . . . . . . . . . . . . . . . . . . . . . 4 University of Texas . . . . . . . . . . . . . . . . . . . . . . . . 16, 26, 57 University of Utah. . . . . . . . . . . . . . . . . . . . . 12, 26, 43, 44, 60 University of Virginia. . . . . . . . . . . . . . . . . . . . . . . . . . . 5 University of Washington. . . . . . . . . . . . 18, 27, 32, 45, 55, 57, 68, 69 University of Wisconsin . . . . . . . . . . . . . . . . . . . . 15, 27, 40, 68 Virginia Polytechnic Institute & State University . . . . . . . . . . . . . 26 Washington University . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 Weidlinger Associates . . . . . . . . . . . . . . . . . . . . . . . . . . . 7