This document has been archived. Title : NSF 95-3 - Third National Conference on Diversity in the Scientific & Technological Workforce Type : Report NSF Org: EHR Date : March 27, 1995 File : nsf953c

STUDENT RESEARCH COMPETITION AWARD WINNING PAPERS

PRECOLLEGE AWARD
Terrence R. Ruffin, Freshman
E. A. LANEY HIGH SCHOOL, WILMINGTON, NORTH CAROLINA
Summer Science Camp Project
Principal Investigator, Joseph M. Kishton

Barrier Island Topography and Vegetation Zones

Abstract: Data were collected by Summer Science Camp students to demonstrate the relationship among the following: distance inland from the high tide line, changes in elevation, and variation in vegetation along a 100 meter transect at Fort Fisher, North Carolina. Five student teams measured changes in elevation using meter-stick boxes and torpedo levels. The sixth team conducted vegetation counts of one square meter areas at five meter intervals along the transect line. A dune field of primary, secondary, and tertiary dunes was found immediately inland from the high tide line. Vegetation in this zone was primarily sea oats and pennywort, with an occasional American beach grass. Yucca, prickly pear, and beach pea were incidental within this zone. A transition zone, dominated by sea ox-eye, preceded the salt marsh, which was dominated by Spartina.

INTRODUCTION
The purpose of this study was to determine if vegetation zones on a barrier island were related to elevation and relative position to the high tide line. Oceanfront vegetation is, by necessity, salt tolerant. Exposure to salt spray is related to height and distance from the high tide line. Some salt-tolerant plants, such as sea oats, are known to thrive in dune fields. They trap windblown sand, which buries them and further stimulates their growth, thereby stabilizing the dune area. Creeping plants may reduce salt exposure by staying low to the ground. Cacti have a waxy cuticle that reduces the dehydrating effect of salt. Salt marsh plants must not only tolerate salt but also withstand the twice-daily flooding and drying out of the tidal cycle.

QUESTION
On a barrier island, is there a measurable relationship among the distance inland from the high tide line, changes in elevation, and types of vegetation?

HYPOTHESIS
Adaptations to salt spray exposure should determine the type and numbers of plants in any particular place on the island. One would expect a dune field dominated by sea oats to be immediately inland from the high tide line. Farther inland from the dune field, a maritime thicket of yaupon, scrub live oak, and wax myrtle might exist on higher elevations. On the sound side of the island, a salt marsh of Spartina should occur.

MATERIALS AND METHODS
Materials for this procedure included five boxes made of meter sticks hinged at the corners, five torpedo levels, 100 meters of stout line (marked at one meter intervals in black and five meter intervals in red), and five clipboards for recording data. Five groups of four students measured changes in elevation along a 100 meter transect. Each team used the meter-stick boxes and torpedo levels to determine changes in elevation at one meter intervals. The total change in elevation was rounded to the nearest twenty-five centimeters for every five meters of distance along the transect so that the data could be presented graphically. A sixth group counted and identified plant species within a one meter square area every fifth meter along the transect.
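Bookkeeping note: one way to see how the per-meter readings become a plotted beach profile is the short Python sketch below. It is a minimal illustration only, assuming the rounding is applied to the cumulative elevation change at each five meter mark; the function name and the sample readings are invented, not the camp's data.

# Minimal sketch of the elevation bookkeeping described above: accumulate the
# one-meter elevation changes measured with the meter-stick boxes and torpedo
# levels, then report the cumulative change, rounded to the nearest 25 cm, at
# every 5 m mark along the 100 m transect.
def profile_every_5m(changes_cm):
    """changes_cm: elevation change (cm) for each 1 m step along the transect."""
    profile = []
    total = 0.0
    for i, change in enumerate(changes_cm, start=1):
        total += change
        if i % 5 == 0:                         # report at 5, 10, 15, ... meters
            rounded = 25 * round(total / 25)   # nearest 25 cm
            profile.append((i, rounded))
    return profile

sample = [30, 40, 20, -10, -5, -20, -15, 10, 25, 15]   # first 10 m, invented values
print(profile_every_5m(sample))   # [(5, 75), (10, 100)]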
RESULTS
The accompanying figures (see printed report) depict the relationship among the occurrence of plant species, elevation, and distance from the high tide line, as well as the number of species in each zone along the transect. Sea oats and pennywort dominated the primary dune area. The large number of pennywort may be deceptive: the plants are smaller than sea oats, so larger numbers of individuals do not necessarily make them the dominant species of the zones they inhabit. American beach grass was evident on the secondary dune. Sea oats and pennywort were also found here. Beach grass is probably less tolerant of salt spray and first occurs only in the protection of the primary dune. On the tertiary dune, yucca plants make an appearance, along with beach pea (a creeping vine). That these were not seen earlier along the transect probably indicates that they also are less salt tolerant than sea oats, pennywort, and even beach grass. No maritime forest was encountered along our transect. A transition zone, marked by prickly pear and sea ox-eye, immediately preceded the salt marsh characterized by Spartina alterniflora (smooth cordgrass).

CONCLUSIONS
The primary dune field contained plants that could tolerate salt spray. The dunes also protected more inland areas from wind and spraying salt. The farther inland one moved along the transect line, the more diverse the plant species. Within the transition and marsh zones, plants must be able to withstand flooding at high tide. Sea ox-eye and prickly pear can withstand an occasional flood tide, while Spartina must withstand twice-daily flooding by the tidal cycle.

ACKNOWLEDGEMENTS
I would like to express my heartfelt appreciation to my parents for all their love and support. I would also like to recognize the help and encouragement of Dr. Kishton and Mr. Mayo, and to acknowledge the hard work of all my fellow researchers at the science camp. It was definitely a group project.

BIBLIOGRAPHY
Daiber, Franklin C., Conservation of Tidal Marshes, 1986, Van Nostrand Reinhold.
Lenihan, John; Fletcher, William, The Marine Environment, 1977, Academic Press.
Peterson, Charles, The Ecology of Intertidal Flats of North Carolina, 1980, U.S. Fish and Wildlife Service.
Rafaelli, D., Intertidal Ecology, 1993, Chapman & Hall.
Wiegert, Richard, Tidal Salt Marshes of the Southeastern Atlantic Coast: A Community Profile, 1991, U.S. Dept. of the Interior, Fish and Wildlife Service.

Figure 1 (see printed report) Barrier Island Beach Profile
Figure 2 (see printed report) Plant Species by Zone

PRECOLLEGE AWARD
Roosevelt Love, Senior
Beaumont High School, St. Louis, Missouri
Comprehensive Regional Centers for Minorities Project
Principal Investigator, Harvest L. Collier

A Study of the St. Louis Metropolitan Metro-Link Light Rail Station Canopies

Abstract: Since its introduction in July 1993, the Metro-Link Light Rail Transit System of St. Louis, Missouri, has attracted an average of 23,000 riders daily. The success of the system is measured against the projected ridership of 17,000 riders per day. While the overall impact of Metro-Link has been environmentally beneficial, environmental ("green") design was not among the criteria that the project architects, Kennedy Associates, used in their original design efforts. The purpose of this project is to redesign the passenger canopies of Metro-Link's outdoor platform stations to include environmental criteria.
It is hypothesized that the canopy redesign will differ greatly in form and in the types of materials used, and that it will consider all environmental elements. The canopy design produced by this study was examined against the original design as well as against the objective, and sometimes subjective, criteria of the newly emerging "Green Architecture." The results show the hypothesis to be true. The conclusion is that if mass transit includes environmental criteria in station canopy design, energy consumption, pollution, and the use of environmentally "unfriendly" materials will be greatly reduced.

BACKGROUND INFORMATION
The primary purpose of mass transportation is to move people from place to place in a safe, convenient, comfortable and efficient manner. The spin-off benefits include private economic development that produces increased employment and a higher standard of living. Some of the benefits are immediate and measurable; others take time to develop and are not readily recognized. Reduced energy consumption and a reduction in traffic are already being noticed. Improved economics are only in their early stages. The new Metro-Link system takes advantage of, builds on, and repeats some of the region's rich history. In the nineteenth century, East St. Louis was the western terminus of rail traffic in this area. The Eads Bridge, which is on the National Register of Historic Places, was built to encourage economic growth in St. Louis. Metro-Link utilizes the bridge's abandoned rail deck, which had gone unused for almost two decades, a rebirth of a structure and purpose that originated 120 years ago. Socially as well as economically, the reuse of the bridge makes East St. Louis less remote and more a part of the metropolitan community, not an isolated satellite lost in the nebula of interstate highway exchanges. The system extends west beneath downtown in the original limestone and brick arcaded tunnel built at the same time as the Eads Bridge. The tunnel, renovated and illuminated, is one of Metro-Link's most extraordinary and beautiful achievements. Metro-Link also reintroduces the romance of rail travel to the 100-year-old Union Station. Union Station was once the busiest rail yard in the United States and was the crossroads for thousands of military personnel during World War Two. Metro-Link has also contributed to a fifteen percent increase in retail business at Union Station. Metro-Link has had a similar impact on St. Louis Centre, a downtown shopping mall in the heart of the St. Louis business district. The system runs under the original Wabash Station at Delmar Boulevard and Des Peres Avenue. The beautifully proportioned neoclassical structure is not part of the system, but it makes a visible connection to the past. The Wabash Station Building is a perfect location for the type of private development that could take advantage of the 23,000 riders who pass it every day. Metro-Link passes Washington University and runs through the University of Missouri at St. Louis on its way to the airport, giving thousands of students the opportunity to access higher education at campuses that serve a significant commuter population. The newest station, which opened at Lambert International Airport, is the temporary terminus of the system. The station, in only two months of operation, is already experiencing a ridership of 2,700 people per day. That traffic, which no longer jams Interstate 70, is made up of business people traveling to and from other cities and conventioneers who are enriching St. Louis' tourism industry.
The quarter-cent sales tax that was passed in early August of 1994 will allow designers to extend the system to every point of the compass and allow even more people to take advantage of a system that could be as extensive as Washington, DC's in as little as 20 years.

REVIEW OF LITERATURE
Because the Metro-Link light rail system is such a current event, a great deal of information is readily available. The sources of information for this project include written design documentation, oral analysis from the project manager for the architectural design, periodicals from both local and national sources, a video, and a design manual. The initial source of information reviewed was the "Design Criteria Manual," which was written in 1986 and was intended to serve as an objective checklist for the designers who began the final design efforts for Metro-Link in 1989. The Design Criteria were essentially design instructions for the engineers and architects who were responsible for the design of electrical, mechanical, plumbing, structural and civil issues in addition to the architectural discipline. The subjects covered by the criteria included pedestrian and vehicular circulation, construction material selection, dimensional requirements, functional requirements, security and lighting standards, landscaping, accessibility for the disabled, and guidelines for addressing historic preservation. The work of the final designers was continually examined against the criteria to assure the design of a safe, durable and cost-effective system. The Construction Documents were the next source of information necessary to review the architects' success with the Design Criteria. The Construction Documents, commonly, but erroneously, referred to as blueprints, contain the actual design for the canopy as dictated by the criteria. All of the components of the canopy are represented in these documents, which are further described in the procedures. The architectural project manager, Robert E. St. John of Kennedy Associates, provided an oral understanding of both the criteria and the drawings. He had originally written the criteria and was responsible for directing his staff's efforts in the design process. Mr. St. John conducted a tour of the alignment, and during this tour it was noted that environmental issues were not the primary considerations in the design. As each aspect of the canopy was discussed, it became clearer that each component of the canopy might have taken a different form if the environment had been a higher priority. Mr. St. John also provided the outline of the design process that was used in the procedures of this report. The initial discussions of environmental impacts on design led to the discovery of "Green Architecture" as practiced by the New York City architects who were responsible for the design of the renovation of the Audubon Building in New York. In the Audubon Building, every material, component and assembly was examined for its environmental properties and was utilized only if it was compatible with environmental goals. Examples include the use of cotton, not petrochemicals, for insulation; cork linoleum, not plastic-based flooring; and formaldehyde-free processed wood products in lieu of their off-gassing counterparts. The understanding of the aforementioned sources, together with a review of current local news articles in the St. Louis Post-Dispatch, forms the basis of the hypothesis and procedures that are part of this study.
STATEMENT OF THE PROBLEM
The public's perception of the Metro-Link success helped pass a sales tax initiative in August 1994 that funded 100 miles of extension with as many as eight additional stations. Some of the success of the system can be attributed to the cost effectiveness of the design solutions. For example, the right-of-way, or the alignment, was donated by the municipalities and railroads. Existing trackwork was recycled for use in the maintenance facility, and bus routes were reconfigured to "feed" the rail system, saving thousands of travel miles and wear on the existing buses. Mass transit has proven itself a significant part of efforts to reduce energy consumption. An overall goal of this redesign project is to recommend that every component and operational feature of a light rail system be designed to support environmental improvements. Thus, the specific design of an improved station canopy should reflect reductions in energy consumption, in pollution, and in the use of materials detrimental to the environment.

PURPOSE
The purpose of this study is to redesign the canopy of the St. Louis Metro-Link Transit System's outdoor, open-platform stations. The redesign will introduce a new design criterion that was not required in the original and recently constructed system: environmental considerations. The priority for the new environmental criterion will rank near the top, together with safety.

HYPOTHESIS
The design of the canopy for the outdoor platform stations of the St. Louis Metro-Link Transit System would differ from the current design if environmental quality were introduced as a significant criterion. It is believed that form, materials and image would all be affected by a change in design approach.

PROCEDURE AND ANALYSIS
Architects are granted licenses by the states to practice their profession when they demonstrate a minimum proficiency in three areas, listed here in descending order. The first area is the ability to follow codes that assure the public of a minimum degree of safety. The second criterion for licensing is the ability to translate the client's goals into usable forms within stated budgetary goals. The third area, which is very subjective, is the ability to provide designs that are aesthetically pleasing. Architectural candidates, through a combination of testing procedures, are tested for their knowledge and ability before they are allowed to practice architecture. For the purpose of this project, environmental concerns will be considered part of an architect's responsibility in the area of public safety. This ranks environmental considerations high in priority among the criteria for design. Architectural training is a four- to six-year college course of study. Through design classes, architects are trained to refine a design process that they will ultimately use when they enter the profession. The design process takes an architect through the following steps:
o Analyze the goals, objectives and program of the client,
o Define the client's goals in terms of an architectural solution,
o Analyze the client's goals in terms of public safety,
o Examine the architectural options available that meet safety requirements and the client's expectations,
o Examine the component parts of selected options for compatibility with the client's goals,
o Compose the component parts into a cohesive design solution, and
o Present the results of the design efforts to the client for approval.
The process used for this study parallels the basic architectural process. This report will assume that a canopy is the agreed-upon solution for passenger protection at the outdoor platform stations as described in the criteria. Canopy is the term used to describe a structure that protects waiting transit riders from the elements. In the St. Louis area the structure should protect passengers from the rain and from the sometimes relentless summer sun. The canopy should also enhance a rider's perception of physical safety by being well lit and having an open feeling. One of the goals for the canopy was for it to be viewed as an icon for the whole transit system, so it has to have a certain commanding presence. The canopy should be large enough to shelter 50 people during a driving rain or from low summer sun angles. (see diagram I)
-----------------------------------------------------------------------------------------------------
(NOTE TO READER: SEE PRINTED REPORT FOR ALL DIAGRAMS.)
-----------------------------------------------------------------------------------------------------
The basic components of the canopy are as follows:
o Vertical structure,
o Horizontal structure, and
o Horizontal coverage. (see diagram II)
The vertical structure is most commonly a set of columns that support the canopy. Bearing walls might be considered as an alternative, but they can obstruct views and reduce a patron's perception of safety. The horizontal structure is supported by the vertical structure and in turn supports the horizontal coverage. The horizontal structure can take the form of beams, girders and joists, or trusswork. The horizontal structure must be substantial enough to support snow and rain loads, wind loads, and the weight of the materials themselves. An architect will ordinarily design the structure, but a structural engineer will be required to perform the necessary calculations to ensure the structure's capacity. The horizontal coverage extends over, and sometimes under, the horizontal structure to protect an area of the platform from the elements. The coverage may be clear, translucent or opaque depending on the criteria for protection. Clear coverage will protect from the rain and will allow for natural illumination for most of the day, but it will not protect from the sun. Translucent canopies have the advantages of the clear alternatives and also offer sun protection. Opaque canopies will protect from the sun and the rain but may mandate artificial illumination early and late in the day. An analysis of the component parts is important at this point to begin to understand the impact of choices on form and materials selection. The only decisions that can be intelligently made at this stage are that columns are safer than walls, that translucent coverage optimizes protection while minimizing the use of artificial lighting, and that the horizontal structure should allow for the benefits of translucent coverage. Historically, four basic types of canopy form have been used. They are:
o "V" shaped or butterfly,
o Flat,
o Gabled, and
o Suspension. (see diagram III)
The butterfly canopy has been used for about a century. This type of structure allows light to penetrate under the canopy, and it directs rainwater away from the platform edge and away from passengers' heads as they board the train. Flat canopies are a very common way to provide protection but do not shed snow well and therefore require relatively more structure.
Gabled structures have been very successful and are represented in Metro-Link's canopy in its modified barrel vault form. Gabled canopies shed snow well, but they shed it right onto the passengers and tracks if not properly designed. Ultimately the selection of a canopy shape should be integrated with the form that the component parts will take and with the desired materials from which to construct it. The following is an outline of the more generic choices for construction materials that might be considered for the component parts of the canopy: (see diagram IV)
o Vertical Structure
wood columns
steel or aluminum pipe or "H" section columns
masonry or stone bearing walls or columns
concrete bearing walls or columns
o Horizontal Structure
wood girders, beams, joists or trusses
plastic girders, beams, joists or trusses
steel or aluminum girders, beams, joists or trusses
concrete, cast in place or precast
o Horizontal Coverage
clear; glass or plastic
translucent; glass, plastic, fiberglass or fabric
opaque; conventional standing seam, asphalt or rubber
Materials should be chosen for their functional and environmental properties. Functional characteristics include durability and low maintenance. Environmental characteristics are defined by the answers to questions about the origins of materials, the amount of energy it takes to manufacture or fabricate the material, and the long-term impact that the material will have on the environment. The aesthetics of a material should also be considered. Issues of color, reflectance and texture can dramatically affect the appearance of a design. Aesthetic qualities, however, could be a natural result of properly selected environmental and functional materials. As materials are being selected for the component parts of the canopy, it is important to consider the source of the material. Does it come from a rain forest? Is it from a recycled source? Does it come from a renewable source? Will obtaining the material be detrimental to its environment? The hypothetically perfect material would come from either a recycled or a renewable source. It should come from a place that can be returned to its natural environmental state. Wood from farmed forests, steel from reclamation, plastics from recycled material, and concrete made from recycled aggregate and reinforcing steel are good examples. It is also important to consider whether or not the materials from a canopy can be reused after it has lived its useful life. Materials should also be selected that require minimal energy to obtain, refine, fabricate and/or manufacture. For example, it takes less energy to fabricate aluminum from recycled cans than it does from raw bauxite ore. The same is true for glass, steel and plastic objects. Natural stone will require less processing than the manufacturing of brick and block. Stone mining, if not properly controlled, however, could lead to the ugly results of strip mining. The energy requirements of the construction process on the site should be considered as well. Is bolting less energy intensive than welding? Is hand assembly more energy efficient than machine assembly? Is the cost incurred in the additional labor required balanced by the capital cost and energy consumed by power tools? The final design of the canopy should respond to these questions. The impact of the canopy at the station on the environment should be minimal. Currently, rainwater shed from the canopies enters the existing storm and sanitary sewers.
It would be better to direct storm water back to the adjacent land to return to the ground water, or to pump it to landscaping, which has its own environmental benefits. The path the rainwater takes is also important. It should not wash over toxic coatings or materials prior to reentering the ground water. The resources required for the maintenance and operation of the canopy should be reduced. Canopies designed with opaque roofs will require artificial illumination in the early mornings and late afternoons; translucent canopies will not. Canopies fitted with solar collection and storage panels could supply electricity to augment the lighting and power required for the operation of fare collection machines, security cameras, public address systems and elevators. (see diagram V) The canopies should be designed for minimum maintenance. The use of water for cleaning the canopy or hosing down the platform should be minimized if not eliminated. These new environmental issues are, for the purpose of this project, now considered part of a revised design criteria. The new criteria are reflected in the revised design of the canopy that follows.

RESULTS
The result of this project is an alternative to the existing canopies. This new design is based on the assumption that environmental considerations are an important part of the Design Criteria. In selecting materials for the vertical structure, concrete had advantages over its alternatives. Concrete composed of recycled aggregate and reinforcing steel requires less energy to produce than new steel. Recycled concrete is also preferable to wood, which is not necessarily a renewable resource. Concrete also has durability and maintenance advantages over wood or steel. The horizontal structure is a combination of aluminum trusses and tension rods. Aluminum is more expensive than steel and wood, but it is probably competitive with cast-in-place concrete. Because of its lighter weight, aluminum is easier to fabricate and support than concrete. Aluminum is more durable than wood, and it requires less maintenance than steel. The horizontal coverage will consist of a combination of translucent fiberglass panels and solar collection and storage panels. The translucent panels allow light penetration and provide shade. The selection of fiberglass, a recycled resource, is preferable to glass, which can shatter, and plastic, which comes from petrochemicals. The solar collection and storage panels would be installed on the sun-facing side of the canopy and would offer supplemental power for station lighting and ancillary systems. (see diagram VI)

CONCLUSIONS
The originally designed canopy was the result of a collaboration of architects, artists, engineers and contractors. It has been enthusiastically received by the public and praised by the media. It is a testament to the success of the design criteria and to the talents of the designers. One purpose of this project was to introduce environmental concerns into the design process.
The resultant design achieves the stated goals and suggests that future extensions to the alignment should incorporate environmental requirements into a revised Design Criteria. A new Design Criteria could affect the canopies, as demonstrated, as well as the alignment, bridges and other structures. The entire metropolitan area would then benefit from improved transportation and an improved environment.

ACKNOWLEDGEMENTS
This participant would like to acknowledge the following people for their dedication and support in the completion of this project: Dr. Edward Haynie, for patiently walking me through the Incubator Scientist Program and for his valuable advice; Mr. Michael Kennedy, for allowing me to take advantage of the resources of his architectural firm, for his guidance, and for introducing me to Mr. Robert St. John; and Mr. St. John, for instructing me on the specifics of the Metro-Link project and the procedures of the architectural design process, which were heady and completely foreign to me. Most of all, I thank God for giving me the determination to see this project through to the best of my ability. Last, but not least, I thank my parents for their patience, encouragement and support. I still have a long way to go.

SELECTED REFERENCES
Fischer, Stanley I. (editor) Moving Millions: An Inside Look at Mass Transit. New York: Harper and Row, 1979.
McCue, George. The Building Art in St. Louis. American Institute of Architects. St. Louis: Knight Publishing Company, 1989.
Tomazinis, Anthony. Productivity, Efficiency, and Quality in Urban Transportation Systems. Lexington, Massachusetts: Lexington Books, 1975.
Weiss, Harvey. Model Buildings and How to Make Them. New York: Crowell Publishing, 1979.

PRECOLLEGE AWARD
Felicia Colon-Barnes, 9th Grade
Holy Names Academy, Seattle, Washington
Summer Science Camp Project
Principal Investigator, Kathleen Sullivan

AERODYNAMICS IN ACTION

Abstract: This science project is about building and flying a simplified model of an airplane wing in a wind tunnel to measure the forces associated with flight. Different wing designs were evaluated with the use of computer simulation. The project involved plotting the desired wing shape on paper, cutting out plywood to match, and then using these wood templates to guide the wire-cutting to create a two-dimensional wing from styrofoam. After testing the wing in the wind tunnel, the experimental and theoretical results were compared. In both cases, more curvature in the wing shape produced a higher lifting force. The project followed the same general steps that airplane manufacturers use to test wing designs: computer analysis and design, model fabrication, and wind tunnel testing.

INTRODUCTION
There are four forces acting on an airplane when it is flying. These four forces are lift, gravity, drag, and thrust, as shown in Figure 1.
---------------------------------------------------------------------------------------------------
(NOTE TO READER: SEE PRINTED REPORT FOR ALL FIGURES.)
----------------------------------------------------------------------------------------------------
Lift is the result of lower pressure on the top of the airfoil and higher pressure on the bottom, which makes the airplane move upward. The "weight" of each air molecule causes pressure. Gravity is the force that makes the plane move downward because of the weight of the plane. Drag is the force that pushes the airplane in the same direction as the wind. More drag therefore causes more fuel to be used.
Thrust is the force caused by the engines of the plane; it makes the airplane move in a forward direction. An important concept in designing aircraft is the shape of the wing, including the camber and angle of attack of the wing cross-section, or airfoil. Camber is how much curve the airfoil has along its middle line, as shown in Figure 2. Symmetric airfoils, therefore, have no camber. The angle of attack is the difference between the direction the wind is coming from and the direction the wing is pointing, as illustrated in Figure 3. The greater the angle of attack, the more lifting force generally results, until the angle is so large that the wing stalls. The problem statement is how to design a wing. What is the best shape? Does camber increase lift? Our hypothesis is that camber creates more lift for a given thickness.

PROCEDURE
First, we entered the airfoil shapes into Excel using the profile information in the first reference. There are two airfoils: NACA 4415 and NACA 2415. The first one has more camber than the second one; see Figure 4 for clarification. We used the program Panda to simulate flying these differently shaped airfoils: within this application, it is possible to set the angle and calculate the lift. Then, we graphed these data in Excel to compare the airfoils' lift theoretically. In order to measure the actual lift the two wings would produce, it was necessary to build them. The first step was to print the airfoil shapes and, using scissors, cut out the figures. We pasted the figures to plywood that was 1/16 inch thick. Then, using a jigsaw, we cut out the wood templates. The edges of the templates needed to be sanded down to the shape of the airfoil to prevent rough edges. Afterwards, we marked and numbered around the edge of the templates. We taped the matching templates to a block of styrofoam after carefully measuring to make sure the two templates were evenly spaced. Then, we used a hot wire to cut the foam, tracing the template's shape. The wire gets hot because electricity passes through it. Because of the size of the apparatus, it was necessary to have two people controlling the wire: one leading, calling out the numbers on the template, and the other following, making sure they are at the same location. After cutting the airfoil shape, we trimmed the trailing edge from the foam using a razor blade and glued on a trailing edge made from balsa wood, making sure the wood was firmly attached. We affixed a black adhesive plastic sheet, called monocote, to the wood of the trailing edge. By means of a hair dryer, the adhesive is heat activated and helps smooth the surface of the airfoils. Then, we tested the airfoils in the wind tunnel. We took measurements of lift at angles of 0, 10, and 20 degrees. Also, we chose to take additional data at 5 degrees and 15 degrees for airfoil 4415. There was a problem with a screw on the balance that interfered with the reading when we were measuring the 20-degree angle. This difficulty was solved by keeping the door of the wind tunnel slightly open so that the screw would not hit the door when the wind was blowing. Due to the non-precision nature of the equipment, we felt that this did not affect the pressure inside the wind tunnel significantly.

RESULTS
The graph in Figure 5 represents the theoretical and experimental results. The airfoil 4415 did indeed produce more lift at most angles.
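(As an aside on the first step of the procedure, the coordinates of airfoils such as NACA 2415 and 4415 can be generated from the standard published NACA four-digit equations, which the report itself does not reproduce. The short Python sketch below is a minimal, assumed illustration of those equations; the helper name naca4 and the simplified treatment of the upper and lower surfaces are illustrative assumptions, not part of the students' procedure.)

# Minimal sketch: camber line and thickness distribution of a NACA four-digit
# airfoil, e.g. "2415" (2% camber) versus "4415" (4% camber).
import numpy as np

def naca4(code, n=100):
    m = int(code[0]) / 100.0      # maximum camber, fraction of chord
    p = int(code[1]) / 10.0       # chordwise location of maximum camber
    t = int(code[2:]) / 100.0     # maximum thickness, fraction of chord
    x = np.linspace(0.0, 1.0, n)
    # standard thickness distribution
    yt = 5*t*(0.2969*np.sqrt(x) - 0.1260*x - 0.3516*x**2
              + 0.2843*x**3 - 0.1015*x**4)
    # camber line: two parabolic arcs meeting at x = p (zero if symmetric)
    yc = np.where(x < p,
                  m/p**2 * (2*p*x - x**2),
                  m/(1-p)**2 * ((1 - 2*p) + 2*p*x - x**2)) if m > 0 else np.zeros_like(x)
    return x, yc, yt   # upper/lower surfaces are roughly yc +/- yt

x, yc2415, yt = naca4("2415")
x, yc4415, _  = naca4("4415")
print("max camber 2415:", yc2415.max(), " 4415:", yc4415.max())  # 4415 is more cambered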
The solid lines show the results from the computer simulation, and the dashed lines show the data from the actual wind tunnel tests. Even though the test results do not match the theoretical results exactly, they do show the same trend. Some of the inaccuracies in the test may have come from setting the angle of attack on the model, which was done by hand, or from the interference of the screw with the tunnel door.

CONCLUSIONS
The results showed that more camber for a given thickness creates more lift. Therefore, if we want to design an airplane wing, we should build it with more camber. However, another consideration that has to be taken into account when designing airplanes is that more camber produces more drag force. Drag causes the airplane to burn more fuel, which can be costly. Also, sometimes the best design is too complicated to build. Every airplane design is a compromise.

ACKNOWLEDGMENT
I would like to thank Edie Lie of the Boeing Company for helping me with my project, as well as Sister Kathleen Sullivan of Seattle University. I would also like to thank my parents, Estrada Colon and Dean Barnes, for their support.

REFERENCES
Abbott, Ira H. and Albert E. Von Doenhoff. Theory of Wing Sections, Dover Publications, Inc., New York City, New York, 1949, pp. 114, 410-413.
Alexander, J. Foam Wings, R/C Modeler Corporation, 1971, pp. 10-19.
Jennings, Terry. How Things Work: Planes, Gliders, Helicopters, Kingfisher Books, New York City, New York, 1992.
Lie, Edie. The Boeing Company, 777 Aerodynamics Engineer.
Pearl, Lizzy. The Story of Flight, Troll Associates, US, 1994.

PRECOLLEGE AWARD
Gabriela Ruvalcaba, 10th Grade
Socorro High School, El Paso, Texas
Comprehensive Regional Centers for Minorities Project
Principal Investigator, Stephen Riter

Drainage Time of a Roughly Cylindrical Container as a Function of Hole Diameter and Initial Water Height

Abstract: This is a controlled experiment that investigates how the time (t) for an approximately cylindrical container to drain depends on the initial water height (h) and the aperture size (d). The independent variables are controlled, i.e., first the water height is held constant, then the diameter of the hole is held constant. Graphical analysis of linear and log-log plots of the experimental data is used to conclude that t(d,h) = 3.45 h^0.75/d^2.
-------------------------------------------------------------------------------------------------
NOTE TO READERS: This electronic version does not contain any figures, photographs or formulas. See printed report.
--------------------------------------------------------------------------------------------------
Introduction
Drainage time is of interest to engineers who design water tanks and other storage bodies. For my project, I performed a controlled experiment to determine the mathematical relationship between the time t for an approximately cylindrical container to empty, the diameter d of a hole in its bottom, and the initial water height h. For example, one diameter could be 2.0 centimeters (cm); another could be 5.0 centimeters. From past experience, one knows qualitatively that the container with the larger diameter hole will drain faster than the container with the smaller diameter hole when h is held constant. However, I wished to determine the quantitative relationship. (Note: The initial height of the water was held constant when the diameter was varied, and the diameter was held constant when the initial water height was varied.)
For my project, I used four identical, approximately cylindrical containers (cups), each with a different size hole in the bottom, water, a stopwatch, and a ruler in order to measure the different diameters and different initial water heights. After I had all the materials, I measured and cut the different diameter holes in the bottom of the four containers. Then I poured the same amount of water into each container. Each container was tested for its draining time. The procedure was repeated three more times, varying the initial water height, for a total of 16 time measurements. Results were recorded in Table 1 (see data section), and the data were analyzed using graphical analysis techniques. RESEARCH Time In addition to research on graphical analysis, which I will explain later, I also read about time and how water is held together. Time is one of the deepest mysteries known to man. The ability to measure time makes our way of life possible. One way of thinking about time is to imagine a world without time. Any change that takes place again and again stands out from other changes. The rising and setting of the sun exemplifies this point. The first people to keep time probably counted these natural repeating events and used them to keep track of events that did not repeat. When man began to count repeating events, he began to measure time. Scientists think of time as a fundamental quantity that can be measured. Other fundamental quantities include length and mass. The noted physicist, Albert Einstein, realized that measurements of these quantities are affected by relative motion, the motion between two objects. Because of his work, time became popularly known as the fourth dimension. Many physicists have considered the possibility that under certain circumstances time might even flow backwards. But experiments have not supported this idea. Some scientists are considering whether time might have more than one dimension. How Water is Held Together Water's unusual properties depend on the forces holding it together. These forces are chemical bonds and hydrogen bonds. Chemical bonds are the forces that hold the two hydrogen atoms and the one oxygen atom together in a water molecule. Each hydrogen atom has one electron whirling in orbit around its nucleus. But each of these atoms has room for two electrons. The oxygen atom has six electrons in its outer orbit, but it has room for eight. Hydrogen bonds are the forces that link water molecules together. Water molecules have a lopsided shape because the two hydrogen atoms bulge from one end of the oxygen atom. The hydrogen end of the water molecule has a positive electric charge. At the opposite end, the molecule has a negative charge. Water molecules link together because opposite charges attract. Graphical Analysis The analysis of data from an experiment is always reduced to finding the relation between just two quantities, so methods used to determine various types of relations will be discussed. If two quantities are related by some regular function, for each value of one quantity, there is a certain value of the other. The outcome is that when a graph is plotted of one quantity against the other, a line is formed. So the first use of a graph is to show if a relation exists, for if it does, the values of one when plotted against the values of the other will appear to fall on a line. In order to define a line, many points are required. 
The number required depends on the shape of the line; but if its shape is unknown, the larger the number of points, the better. Two points define a line only if it is known that the line is straight; otherwise a larger number of points is required.

Linear or first power relations
The general equation of a straight line is of the form y = mx + b, where b is the value of y when x = 0 and is called the y-intercept. If the line goes through the origin, then b = 0. The quantity m is called the slope of the line and is commonly expressed as rise/run. The rise and run are measured in the units indicated on the y- and x-axes, respectively. With this understanding of slope, the values of y must be plotted in the vertical direction, and those of x must be plotted horizontally. If the slope m is zero, then y does not depend on x. Experimental points may not lie exactly on a straight line; we do not expect them to, because experimental data are never exact. The points may suggest a straight line. If so, a straight line should be drawn among the points. Such a line is called the "best fit" straight line. It will probably go above some points and below others, but it is an honest attempt to show the trend of the data. The data should not suggest that a smooth curve would fit the points better than such a straight line. The fact that the experimental points indicate a straight line shows that the trend of the data is linear within the limits of that experimental method. The results of the graph may be compared to the results expected from theoretical analysis. Whether or not your results are sufficiently close to the accepted or to a calculated theoretical value can be determined only if the numerical uncertainty, or error, in the experimental value is calculated.

Power Laws
A very common type of relation is one of the form u = kv^n, where u and v are the variables; the power n to which v is raised can be integral, fractional, positive, or negative, and k represents a constant. This equation includes all of the relations represented by the curves in the accompanying graphs (see printed report), which illustrate various mathematical relations; only the straight line can be identified by inspection. Taking the log of both sides of u = kv^n gives log u = log k + n log v. Now let y = log u, x = log v, and b = log k; the equation becomes y = b + nx, which is the equation of a straight line. The quantity log u can be plotted in the y direction and log v in the x direction. A straight line would then result, the slope of the line being the exponent n and the constant k being the antilog of the y-intercept.

Experimental Procedure
The time for a storage tank, for example a water tower, to drain is of interest to engineers. This experiment investigates the relationship between the time t for the contents of a cup to drain through holes of different diameters when the water level starts at different heights h. The independent variables are height h and diameter d, and the dependent variable is time t. This is a controlled experiment since h was held constant while the diameter d was varied four times; then the diameter d was held constant while the initial height h of the water level was varied four times, as explained in the introduction. The data are shown in Table 1. The time entries are measured in seconds.
Table 1 (SEE PRINTED REPORT)

Theory/Analysis/Results
After recording my data, I graphed time t versus diameter d (see Graph 1).
The relationship appears to be a type of inverse relationship, t ∝ 1/d^n. Now logic implies that t ∝ 1/(area of hole), so t ∝ 1/(πr^2) = 1/(π(d/2)^2) = 1/(πd^2/4); hence t ∝ 1/d^2, and t = k(1/d^2) (where k is a constant of proportionality), so that n = 2. This equation is of the form y = mx + b, which is a straight line with y = t, m = k, x = 1/d^2, and b = 0. A graph of t versus 1/d^2 should therefore be a straight line going through the origin. I did graph t vs. 1/d^2, holding h constant (Graph 2), and did get straight lines through the origin. Thus, I confirmed that n = 2 and t ∝ 1/d^2 (height controlled). Next, I investigated how the initial water height h affected the time t to drain. I plotted time t versus height h, holding d constant (see Graph 3). The shape of Graph 3 suggests that t ∝ h^n with 0 < n < 1 (for n greater than 1 the graph would look parabolic or steeper in shape, and n = 1 would give a straight line). Now, to see if my reasoning is correct, I plotted log t versus log h (see Graph 4), as explained below. Assume that t ∝ h^n, so that t = kh^n, where k is a constant of proportionality. Then t = kh^n implies log t = log k + log h^n = log k + n log h = n log h + log k, which is of the form of a straight line y = mx + b, with y = log t, m = n, x = log h, and b = log k. Thus, if I do get straight lines in Graph 4, I can find the exponent of h, since n is just the slope of the line. When I plotted the t and h data on log-log paper, I did indeed get four straight lines which were parallel to each other (footnote 1). I found the slope to be approximately 0.75, so n = 0.75 = 3/4. Therefore, I found that t(d,h) ∝ h^0.75/d^2, so t(d,h) = k h^0.75/d^2. Now I chose a good point and found the constant k. I plugged d = 3 cm and h = 5 cm into the above equation and set this equal to my experimental drainage time of 1.28 seconds: 1.28 = k(5^0.75)/(3^2), so k = (1.28)(3^2)/(5^0.75) = 3.45. Thus, finally, I found that t(d,h) = 3.45 h^0.75/d^2 for my container.

Discussion of Error
The time entries in Table 1 may be in error by ±0.1 second, since a human hand and eye cannot be trusted to measure to better than a tenth of a second. Additional experimental errors (uncertainties) include the errors in measuring the diameter d and the initial height h. The greatest possible error (g.p.e.) for both d and h can be estimated as half the place value of the last significant digit. Since both d and h are measured to the nearest tenth, the g.p.e. can be estimated as ±0.05 cm. Error in cutting perfectly circular holes was also present. The graphing procedures employed in the lab help take all the above errors into account even when the errors in the measured values are propagated by squaring, multiplication, division, etc. The percent relative error can be calculated using the following equation: percent relative error = [(measured value - calculated value)/measured value] x 100%. For example, for d = 2.0 cm and h = 12.2 cm, the calculated drainage time is t(2.0, 12.2) = 3.45(12.2^0.75)/(2.0^2) = 5.63 seconds, while the measured time is 5.96 seconds. Thus, the percent relative error is [(5.96 - 5.63)/5.96] x 100% = 5.5%. The percent relative error for all values of drainage time is summarized in Table 2.
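As a cross-check on the hand-drawn graphs, the same analysis can be scripted. The following is a minimal Python sketch assuming NumPy is available; the function names are invented for illustration, and the only data used are the numbers quoted in the text above (the full Table 1 appears only in the printed report).

import numpy as np

# Sketch of the graphical analysis: fit n from the slope of log t vs. log h
# (d held constant), then solve for k and compute the percent relative error
# of a predicted drainage time.

def fit_power_law(h, t):
    # log t = n*log h + log k  ->  a straight line in log-log coordinates
    n, log_k = np.polyfit(np.log10(h), np.log10(t), 1)
    return n, 10**log_k

def drain_time(h, d, k=3.45, n=0.75):
    # t(d, h) = k * h^n / d^2, with h and d in cm and t in seconds
    return k * h**n / d**2

def percent_relative_error(measured, calculated):
    return (measured - calculated) / measured * 100.0

# Solving 1.28 = k * 5^0.75 / 3^2 for k reproduces the constant found above:
k = 1.28 * 3**2 / 5**0.75
print(round(k, 2))                                     # ~3.45

# Checking the worked example: d = 2.0 cm, h = 12.2 cm, measured t = 5.96 s
calc = drain_time(12.2, 2.0)
print(round(calc, 2))                                  # ~5.63 s
print(round(percent_relative_error(5.96, calc), 1))    # ~5.5 %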
Table 2 (SEE PRINTED REPORT)

Conclusion
It is possible to find a general mathematical relationship between the time for a cylindrical container to empty, the diameter of a hole in its bottom, and the initial height of water within it. Because the "cylindrical" cups were identical, the data from them could be combined to find a general mathematical relationship. For my experiment, as shown above, I found that t(d,h) = 3.45 h^0.75/d^2, which allows us to predict the time t for my cylindrical container to drain given any arbitrary diameter d and initial water height h. So for any h and d, I can find a value of time t without actually measuring it. One of the goals in science is to reduce empirical data to a mathematical equation, as I have done in this experiment. Graphical analysis is seen to be an extremely powerful tool in determining the relationship between experimental variables. Future investigations could include investigating different sized containers, different shaped containers (rectangular, hemispherical, etc.), or the placement of the hole to see whether the above equation needs to be modified, and if so, to determine the new equation. In addition, other future research could include doing a library search of technical (science and engineering) papers on the topics of drainage time of storage bodies and the theory of water flow through orifices. Reading these papers might provide me with practical applications and new research techniques. In summary, from doing this project I learned about draining. (It does not matter from where water is draining; it could be from a sink, a bottle, etc.) Also, I learned new concepts and techniques of mathematical and graphical analysis. Finally, I learned more about time and the properties of water. All these reasons made this project very worthwhile.

BIBLIOGRAPHY
Bailey, Donna. Energy from Wind and Water. United States of America: Steck-Vaughn Company, 1991.
Beller, Joel. So You Want to Do a Science Project! New York: Arco Publishing, Inc., 1982.
Field Enterprises Educational Corporation. "Water." The World Book Encyclopedia, WXYZ volume 21. 1976, USA, 92-104.
Field Enterprises Educational Corporation. "Time." The World Book Encyclopedia, T volume 19. 1976, USA, 226-229.
Greenberg. Discovery in Physics. (Graphical analysis handout; no other bibliographical information available.)
Macmillan Educational Company. "Water." Merit Students Encyclopedia, WXYZ volume 19. 1987, 326-328.
Morris, William, ed. The American Heritage Dictionary of the English Language. New York: American Heritage Publishing Co., Inc., 1969.
Schaim, Uri Haber, Judson B. Cross, John H. Dodge, James A. Walter. Laboratory Guide, PSSC Physics, Third Edition. Canada: D.C. Heath and Company, 1971.

footnote--------------------------------------------------------------------------------
1/ The fact that the log t vs. log h lines are parallel to one another is very important because the goal of this experiment is to find a general mathematical relation that describes all the data. If the lines were not parallel, they would have different slopes, hence different values of n, and no general mathematical relation would exist. Instead, several separate equations would be needed.
UNDERGRADUATE AWARD
Gisela Rodriguez, Senior
DEPARTMENT OF CHEMISTRY, UNIVERSITY OF PUERTO RICO
Research Careers for Minority Scholars Project
Principal Investigator, Brad Weiner

TRIISOPROPYLSILANOL: A NEW PHASE TRANSFER CATALYST FOR DEHYDROHALOGENATION

Abstract: A new solid-liquid phase transfer catalyst (PTC) has been developed which allows the inexpensive base, potassium hydroxide, to quantitatively convert alkyl halides to alkenes while avoiding both ether and alcohol by-products. Triisopropylsilanol (TIPSOH) is effectively deprotonated at the surface of the base, forming potassium silanoate (KOTIPS). This highly hindered, soluble base effects the deprotonation of the alkyl halides, regenerating TIPSOH, which repeats the cycle. Even primary halides undergo exclusive elimination, an unprecedented result. Because the silanol is more acidic in dipolar aprotic solvents than normal alcohols, an efficient proton transfer occurs between the KOH and the silanol, providing an entirely new and useful catalytic process. The silanol-silanoate system has also been demonstrated to be an efficient alternative to existing methods in the detoxification of mustard gas analogues and environmental pollutants.

INTRODUCTION
Silanols and their corresponding anions (silanoates) are finding increasing use in organic synthesis as organic-soluble equivalents of water and hydroxide ions, as well as in organometallic chemistry as stable, bulky ligands.1,2 Silanols are hydrogen-bond donors and acceptors.3a-c Early studies established that silanols are more acidic than the corresponding carbinols,3a a feature which we felt could be effectively used to develop an entirely new approach to catalytic dehydrohalogenation, namely through the silanol-silanoate system. Through related studies on the directive properties of the triisopropylsilyl group (TIPS),4 we had been impressed with the remarkable resistance of this group to undergo substitution at the silicon center, a feature which results in much greater hydrolytic and chemical stability for organosilanes which contain this ligation compared to traditional, less bulky derivatives. The fact that the isopropyl groups are smaller than phenyl or cyclohexyl while providing effective steric protection around the silicon was an additional attractive feature of the TIPS group, because it renders the silyl derivatives both volatile and easy to analyze spectroscopically compared to these other ligand examples. Moreover, their smaller size was anticipated to increase the water solubility of the silanol compared to larger, more hydrophobic groups. These considerations led us to prepare the new silanol, triisopropylsilanol (TIPSOH, 2), and to investigate its use as a new phase transfer catalyst for dehydrohalogenation reactions.

RESULTS AND DISCUSSION
To initiate this study, we took advantage of the known reluctance of highly hindered silanols to form disiloxanes,5 developing a simple, efficient procedure for the preparation of TIPSOH (99%) from the ammoniacal hydrolysis of TIPSCl (eq 1). This stable silanol was converted to the corresponding alkali metal silanoates (3) directly from the alkali metals.6 These silanoates proved to be pentane-soluble and very hygroscopic.
For analytical purposes, they were silylated with chlorotrimethylsilane (TMSCl) to produce TMSOTIPS cleanly in each case, with this ether being isolated in 92% yield from the potassium salt reaction.7 The reactions of primary and secondary alkyl halides with metal alkoxides have been extensively studied.8 These bases effect the dehydrohalogenation of secondary halides through an E2 process, giving a mixture of alkene products under stoichiometric conditions. Hindered reagents such as KO(t-Bu) in DMSO give the best results,8a,b being superior to the less bulky, but less expensive, KOH. However, none of these systems can be used to dehydrohalogenate primary alkyl halides without substitution (SN2) being a significant competing process. Because we had observed that the bulky TIPS group has the property of impeding reactions at proximate centers,4 we felt that the substitution process would be disfavored and even avoided with the highly hindered base, KOTIPS, thereby allowing it to function exclusively as a base even with these substrates. While 3 would be expensive for stoichiometric applications, its potential value in this regard could be significantly enhanced if it could be generated efficiently in a catalytic cycle employing the inexpensive base, KOH. The relative acidity of 2 was compared to that of t-BuOH by the IR method previously developed for related silanols.3a Through hydrogen bonding to the base diethyl ether, silanols exhibit IR bands which are shifted nearly twice as much as those of the corresponding carbinols, indicative of their greater acidity. This phenomenon was examined with t-BuOH, which gave Δν = 115 cm-1 compared to Δν = 210 cm-1 for 2, consistent with the greater acidity of the latter. The fact that KOTIPS (3), unlike KOH, is highly soluble in non-polar solvents also suggested that, if it is efficiently formed from the deprotonation of 2 with solid KOH, this base could reach the alkyl halide in solvent systems where KOH itself is insoluble. Thus, the TIPSOH-KOTIPS system held promise for providing a method for using the inexpensive base, KOH, to generate a highly hindered, organic-soluble base which is too hindered to undergo competitive substitution even with primary alkyl halides. Employing either DMSO or DMF as solvent, we first established that 2-bromooctane (6a) underwent no reaction with solid KOH at room temperature in 2 d. The catalytic role played by 2 was clearly demonstrated when, under the same conditions in either solvent, the addition of 10% TIPSOH to these mixtures resulted in complete reaction in 5 h, producing 1-octene (41%) and 2-octene (51%, c/t = 1:6) (Scheme 1, Table 1). However, for 1-bromooctane (4a) the choice of the solvent system is critical (eq 2). For example, the dehydrohalogenation of 4a in DMSO (ε25 = 46.6)9 produces only the octyl silyl ether (8, 10%), with 90% of 4a remaining unreacted. However, with 4a (0.66 M) in DMF (ε25 = 36.7)9 a significant improvement in the process was observed, the yield of 5a being 87%, accompanied by 10% of 8. This suggested that the formation of water (ε25 = 78.3)9 from the KOH could play an important role in the outcome of the reaction by raising the dielectric constant of the medium, a change which could favor substitution over elimination. We decided to carry out the reaction with a lower concentration of 4a (0.3 M), a change which increased the yield of 5a to 97% (Table 1) and essentially eliminated this deleterious side-reaction.
This remarkable result is unprecedented for 4a or related derivatives employing any procedure which uses hydroxide or alkoxide bases. Similarly, both 1- and 2-bromoethylbenzene are quantitatively converted to styrene under these conditions (Table 1). This methodology was also applied to the dehydrohalogenation of 1,2-dibromooctane which, while slower, produces 1-octyne (10, 94%) efficiently in 12 h at 25 °C employing 20% TIPSOH as the catalyst (Scheme 2). The process clearly involves step-wise elimination, and all of the possible isomeric vinyl bromide intermediates (11-13) were observed spectroscopically (GC-MS, 13C NMR) as the reaction proceeded. Moreover, this methodology was examined as a new alternative for the disposal of mustard gas (14) by demonstrating that a related analog undergoes clean elimination to provide ethyl vinyl sulfide (16) quantitatively (Scheme 3) [10]. CONCLUSIONS The use of TIPSOH as a phase transfer catalyst in the dehydrohalogenation of haloalkanes circumvents the difficulties previously encountered with alkoxide bases, namely competitive substitution. This is particularly dramatic for 1-haloalkanes, which exhibit no substitution, a result not equaled even with highly hindered bases (e.g., KO(t-Bu)) under stoichiometric conditions. This new methodology also avoids the need for excess base (900%) and large amounts of solvent, and uses the inexpensive base, KOH, to generate the highly hindered KOTIPS from TIPSOH, which functions as a new PTC in an effective catalytic cycle. ACKNOWLEDGEMENTS This work summarizes the results of ongoing research at the University of Puerto Rico, Rio Piedras Campus, to which the author has contributed. The significant experimental contributions of Jaime Vaquer (Ph.D., UPR-RP 1994) and Michael J. Diaz and the research direction of Professor John A. Soderquist (UPR-RP) are gratefully acknowledged. The generous support of this research by the U.S. Department of Energy (DE-FC02-91ER75674) and the NSF-RCMS (HRD-9011964) is also gratefully acknowledged. REFERENCES AND NOTES 1. Sieburth, S. McN.; Mu, W. J. Org. Chem. 1993, 58, 7584. 2. Vaquer, J. Silicon in Hydroboration, Dehydroborylation, Protection and Phase Transfer Process, Ph.D. Dissertation, University of Puerto Rico, 1994. 3. (a) West, R.; Baney, R. H. J. Inorg. Nucl. Chem. 1958, 297. (b) West, R.; Baney, R. J. Am. Chem. Soc. 1959, 81, 6145. (c) Salinger, R. M. J. Organomet. Chem. 1968, 11, 631. 4. (a) Soderquist, J. A.; Colberg, J. C.; Del Valle, L. J. Am. Chem. Soc. 1989, 111, 4873. (b) Soderquist, J. A.; Rivera, I.; Negron, A. J. Org. Chem. 1989, 54, 4051. (c) Soderquist, J. A.; Anderson, C. L.; Miranda, E. I.; Rivera, I.; Kabalka, G. W. Tetrahedron Lett. 1990, 31, 4677. (d) Anderson, C. L.; Soderquist, J. A.; Kabalka, G. W. Tetrahedron Lett. 1992, 33, 6919. (e) Soderquist, J. A.; Miranda, E. I. J. Am. Chem. Soc. 1992, 114, 10078. (f) Soderquist, J. A.; Rane, A. M.; López, C. Tetrahedron Lett. 1993, 34, 1893. (g) Soderquist, J. A.; Miranda, E. I. Tetrahedron Lett. 1993, 34, 4905. 5. Sommer, L. H.; Frye, C. L.; Parker, G. A.; Michael, K. W. J. Am. Chem. Soc. 1963, 85, 3271. 6. Armitage, D. A. in Comprehensive Organometallic Chemistry, Wilkinson, G.; Stone, F. G. A.; Abel, E. W. (Eds.); Pergamon: Oxford, 1982, 2, 1. 7. Satisfactory analytical data could not be obtained for the hygroscopic metal silanoates. However, their quantitative conversion (±5% by GC analysis) to TMSOTIPS with TMSCl was observed in each case (i.e., Li, Na, K). 8. (a) Schlosser, M.; Tarchini, C. Helv. Chim.
Acta 1977, 60, 3060. (b) Schlosser, M.; Tarchini, C.; An, T. D.; Jan, G. Helv. Chim. Acta 1979, 62, 635. (c) Hunig, S.; Oller, M.; Wehner, G. Liebigs Ann. Chem. 1979(rom), 1925. (d) Traynor, S. G.; Kane, B. J.; Coleman, J. B.; Cardenas, C. G. J. Org. Chem. 1980, 45 , 900. (e) D'Incan, E.; Viot, P. Tetrahedron 1984, 40, 3415. (f) Bacciochi, E. Acc. Chem. Res. 1979, 12, 430. (g) Bartsch, R. A.; Zavada, J. Chem. Rev. 1980, 80, 454. 9. Lange, N. A. in Lange Handbook of Chemistry; Dean, J. A. (ed.), McGraw Hill, New York: 1985, pp. 5-68. 10. (a) Afonin, A. V.; Amosova, S. V.; Gostevkaya, V. I.; Gavrilova, G. M. Izv. Akad. Nauk. SSSR Ser. Khim. 1990, 398. (b) Afonin, A. V.; Amosova, S. V.; Gostevkaya, V. I.; Gavrilova, G. M.; Ivanova, N. I.; Vashchenko, A. V. Izv. Akad. Nauk. SSSR Ser. Khim. 1990, 1796. (c) Afonin, A. V.; Khil'ko, M. Ya.; Gavrilova, G. M.; Gostevskaya, V. I. Izv. Akad. Nauk. SSSR Ser. Khim. 1991, 333. (d) Kriudin, L. B.; Shcherbakov, V. V.; Kalabin, G. A. Zh. Org. Khim. 1987, 23, 1830. UNDERGRADUATE AWARD Monica R. Page, Senior DEPARTMENT OF MECHANICAL ENGINEERING, TENNESSEE STATE UNIVERSITY Research Improvement in Minority Institutions Project Principal Investigator, Dr. Lee H. Keel APPLICATION OF INTERVAL MODELING TECHNIQUES TO ROBUST CONTROL OF A SLEWING BEAM WITH LOADS Abstract: This project presents an approach for modeling a slewing beam with parametric variations via an interval model of the transfer function. Here the interval model is a model set whose transfer function parameters are bounded. The algorithm attempts to obtain the models of the slewing beam with various loads by using the finite element algorithm which generates a system model based on the stiffness and mass matrices by following the physical laws. Then, the interval modeling techniques are applied to obtain an interval system of the transfer function. In this project, the interval polynomial techniques recently developed in the robust control community are used to analyze the interval model. Both the open-loop and closed-loop systems of the slewing beam with added loads are used to demonstrate and verify the developed modeling technique. INTRODUCTION Modeling a dynamic system with parameter changes is an important and challenging problem in the fields of structural dynamics, system identification, and robust control. For the space structure operated in the space environment, there is the possibility of added loads to the structure. To maintain structural control, it is necessary to consider this type of system change. In this paper, we address the problem of modeling a flexible beam with added loads. Using a set of differential equations to model a system with parameter changes provides the physical representation. Since each differential equation can be expressed as a transfer function, this system can be modeled as a set of transfer functions. The interval model of the transfer function is used to model the flexible beam with various loads. In the last few years, the interval modeling techniques developed in [1,2] have been used to model dynamic systems with parameter changes and dynamic systems with model uncertainty. The first step of the proposed algorithm is to obtain the models of the slewing beam with various loads by using finite element analysis[3], which generates a system dynamic model based on the stiffness and mass matrices by following physical laws. Then a model reduction technique[4] is applied to obtain a reduced order model with low frequency modes of interest. 
After we obtain the models for various cases, we apply a singular value decomposition technique[1] to obtain an interval model of the transfer function. Both the open-loop and closed-loop slewing beam systems are used to demonstrate and verify the developed approach. For the closed-loop system, a PD[5] feedback controller is designed to suppress the vibration of the slewing beam. FINITE ELEMENT MODELING Finite element analysis has been widely applied to generate the model of a dynamic system in the space and automobile industries due to the availability of digital computers to carry out the numerical aspects of structural dynamics problem solving. When modeling a free-pinned Euler beam, which is the beam component of the flexible beam system with various loads as shown in Figure 1, the vibration of the uniform beam is governed by Euler's beam differential equation[6]

(1) $EI \frac{\partial^4 y}{\partial x^4} + m \frac{\partial^2 y}{\partial t^2} = f$

Figure 1 Flexible beam system with loads

This is a continuous model based on Newton's law, and it represents an infinite degree of freedom (DOF) system without an analytical solution. Using finite element analysis, we can generate an approximate discrete model with finite N-DOF for the dynamic analysis of this Euler beam. The approximate solution can be expressed as

(2) $y(x,t) = \sum_{i=1}^{N} \phi_i(x)\, u_i(t)$

where each $\phi_i(x)$ describes the deflected shape corresponding to the vibration $u_i(t)$. The kinetic energy and potential energy of the beam can be written as[3]

(3) $T = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} m_{ij}\, \dot{u}_i \dot{u}_j$

(4) $V = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} k_{ij}\, u_i u_j$

where $m_{ij}$ and $k_{ij}$ are the elements of the mass and stiffness matrices, respectively. For the Euler-Bernoulli beam, the stiffness and mass matrices of the ith element are

(5) $k_i = \frac{EI}{L^3} \begin{bmatrix} 12 & -6L & -12 & -6L \\ -6L & 4L^2 & 6L & 2L^2 \\ -12 & 6L & 12 & 6L \\ -6L & 2L^2 & 6L & 4L^2 \end{bmatrix}$

(6) $m_i = \frac{\rho A L}{420} \begin{bmatrix} 156 & 22L & 54 & -13L \\ 22L & 4L^2 & 13L & -3L^2 \\ 54 & 13L & 156 & -22L \\ -13L & -3L^2 & -22L & 4L^2 \end{bmatrix}$

where E is Young's modulus, I is the second moment of inertia, L is the length of the ith element, $\rho$ is the mass density, and A is the cross-sectional area. These matrices correspond to the coordinates of the ith element as shown in Figure 2,

(7) $q_i = [\, y_i \;\; \theta_i \;\; y_{i+1} \;\; \theta_{i+1} \,]^T$

Figure 2 Deflection of the ith element

The generalized coordinate vector of the beam with n nodes is

(8) $q = [\, y_1 \;\; \theta_1 \;\; y_2 \;\; \theta_2 \;\; \ldots \;\; y_n \;\; \theta_n \,]^T$

The stiffness matrix $K_0$ (mass matrix $M_0$) of the beam is the summation of the stiffness (mass) contributions from each element. After generating the mass matrix of the beam, the moment of inertia of the shaft component, where the beam is clamped, is added to the mass element $m_{22}$, which corresponds to the angular displacement $\theta_1$ at the first node. Since the beam is clamped at the hub, the first row and column of $M_0$ and $K_0$, which correspond to the coordinate $y_1$, are eliminated to generate the mass matrix M and the stiffness matrix K. Using Lagrange's equation[3], we can obtain the dynamic equation

(9) $M\ddot{q} + Kq = B_q f$

where $B_q$ is the input matrix. This differential equation can be transformed into the state space model

(10) $\dot{x} = Ax + Bf$

(11) $x = \begin{bmatrix} q \\ \dot{q} \end{bmatrix}, \quad A = \begin{bmatrix} O_{2n} & I_{2n} \\ -M^{-1}K & O_{2n} \end{bmatrix}, \quad B = \begin{bmatrix} O \\ M^{-1}B_q \end{bmatrix}$

where $O_{2n}$ and $I_{2n}$ are 2n x 2n zero and identity matrices.
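To make the element-level construction in (5), (6), and (11) concrete, the following Python/NumPy sketch assembles the stiffness and mass matrices for a uniform beam and forms the state-space matrices. It is an illustration only, not part of the original study; the material and geometric values (and the shaft inertia) are placeholders rather than the parameters of Table 1.

    import numpy as np

    E, I, rho, A_c = 69e9, 1.0e-9, 2700.0, 1.0e-4   # placeholder aluminum beam properties
    n_el, L_total = 15, 1.0                          # 15 elements -> 16 nodes, as in the paper
    L = L_total / n_el

    # Element stiffness and mass matrices, eqs. (5) and (6)
    k_e = (E * I / L**3) * np.array([[ 12,   -6*L,   -12,   -6*L  ],
                                     [-6*L,  4*L**2,  6*L,   2*L**2],
                                     [-12,   6*L,     12,    6*L  ],
                                     [-6*L,  2*L**2,  6*L,   4*L**2]])
    m_e = (rho * A_c * L / 420) * np.array([[ 156,   22*L,    54,   -13*L ],
                                            [ 22*L,  4*L**2,  13*L, -3*L**2],
                                            [ 54,    13*L,    156,  -22*L ],
                                            [-13*L, -3*L**2, -22*L,  4*L**2]])

    ndof = 2 * (n_el + 1)                            # coordinates [y1, th1, ..., yn, thn], eq. (8)
    K0 = np.zeros((ndof, ndof))
    M0 = np.zeros((ndof, ndof))
    for e in range(n_el):                            # K0, M0 are sums of the element contributions
        dofs = slice(2 * e, 2 * e + 4)
        K0[dofs, dofs] += k_e
        M0[dofs, dofs] += m_e

    M0[1, 1] += 0.05                                 # shaft inertia added to the theta_1 mass entry (placeholder value)
    keep = np.arange(1, ndof)                        # clamp at the hub: drop the row/column of y1
    K = K0[np.ix_(keep, keep)]
    M = M0[np.ix_(keep, keep)]

    nq = K.shape[0]
    A = np.block([[np.zeros((nq, nq)), np.eye(nq)],  # state matrix of eq. (11)
                  [-np.linalg.solve(M, K), np.zeros((nq, nq))]])
    Bq = np.zeros((nq, 1)); Bq[-2, 0] = 1.0          # hypothetical force input at the tip displacement
    B = np.vstack([np.zeros((nq, 1)), np.linalg.solve(M, Bq)])

Natural frequencies of the assembled model follow from the generalized eigenvalue problem $Kv = \omega^2 Mv$, and added tip loads can be modeled, as in the paper, by incrementing the mass entry associated with $y_n$.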
The displacement output measurement can be written as

(12) $y = Cx$

The transfer function corresponding to the input-output of this state space model is

(13) $g(s) = \frac{y(s)}{f(s)} = C(sI - A)^{-1}B$

After we obtain the state space model of the high order finite element model, a model reduction technique[4] is used to obtain a reduced order model with the modes of interest. In this paper, we consider 21 cases with various added loads, $m_w = (0.01i)W_b$ ($i = 0, 1, 2, \ldots, 20$), at the tip of the beam. Here $W_b$ is the weight of the beam. In the finite element modeling, these added loads are added to the mass element corresponding to the coordinate $y_n$. The results of using finite element analysis to model the flexible beam with various loads will be discussed later. INTERVAL MODELING TECHNIQUE The interval modeling algorithms have recently been developed for modeling the model uncertainty and the parameter changes of a dynamic system. The model structure chosen in this paper is a linear interval system of the transfer function,

(14) $G(s,p) = \left\{ g(s) \;\middle|\; g(s) = \frac{n_0(s) + \sum_{i=1}^{n} \alpha_i n_i(s)}{d_0(s) + \sum_{i=1}^{n} \alpha_i d_i(s)},\; \alpha_i \in [\alpha_i^-, \alpha_i^+] \right\}$

where $n_0(s)$ and $d_0(s)$ are the numerator and denominator of the nominal model. The bounded variables $\alpha_i$ represent the parameter uncertainty part in the directions of the polynomials $n_i(s)$ and $d_i(s)$. The transfer functions of the previous reduced finite element models of the k cases with different loads are expressed as

(15) $g_i(s) = \frac{n_m^i s^m + n_{m-1}^i s^{m-1} + \cdots + n_0^i}{s^m + d_{m-1}^i s^{m-1} + \cdots + d_0^i}, \quad i = 1, 2, \ldots, k$

with the parameter vectors

(16) $p_i = [\, n_0^i \;\ldots\; n_{m-1}^i \;\; n_m^i \;\; d_0^i \;\ldots\; d_{m-1}^i \,]^T, \quad i = 1, 2, \ldots, k$

A judicious choice for the nominal model is the average of all the models. The parameter vector corresponding to the nominal model is

(17) $p_0 = \frac{1}{k} \sum_{i=1}^{k} p_i$

The uncertainty part of the interval model is contributed from the difference between the nominal model $p_0$ and the finite element models $p_i$. The model difference between $p_i$ and $p_0$ is

(18) $\Delta p_i = p_i - p_0$

Then we use the model difference vectors to generate a parameter uncertainty matrix

(19) $\Delta P = [\, \Delta p_1 \;\; \Delta p_2 \;\ldots\; \Delta p_k \,]$

The algorithm in the Appendix[7] is used to process the matrix $\Delta P$ to obtain the polynomials $n_i(s)$, $d_i(s)$ and the parameter bounds $\alpha_i^+$, $\alpha_i^-$. NUMERICAL RESULTS An aluminum slewing beam clamped to a shaft with parameters listed in Table 1 is the test article considered in this project. Table 1 (SEE PRINTED REPORT) Parameters of Slewing Beam System In the finite element modeling, the node number n is chosen to be 16. The finite element model with 16 nodes has 31 modes. Five of these 31 modes are within the 100 Hz frequency range. Table 2 shows the natural frequencies of these five modes for the beam without added loads.

Table 2 Natural Frequencies of Slewing Beam System
Mode  Description   Natural Frequency (Hz)
1     Rigid body     0
2     1st Bending    6.684
3     2nd Bending   21.951
4     3rd Bending   46.828
5     4th Bending   82.491

In this project, we consider the model of the first three modes, the rigid body mode and the first two bending modes, with one input and one displacement output, both of which are located at the tip of the beam and are in the y direction. The model of the first three modes is obtained by using the model reduction technique in [4].
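The averaging, differencing, and SVD steps of Eqs. (17)-(19) and the Appendix can be sketched in a few lines of Python. The fragment below is illustrative only and uses randomly generated parameter vectors in place of the paper's 21 reduced finite element models; the variable names are mine, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)
    k, m = 21, 6                                   # 21 load cases, order-6 models (2m+1 parameters each)
    P = rng.normal(size=(2 * m + 1, k)) * np.logspace(0, 4, 2 * m + 1)[:, None]   # stand-in for p_1 ... p_k

    p0 = P.mean(axis=1, keepdims=True)             # nominal model, eq. (17)
    dP = P - p0                                    # model differences dp_i and matrix dP, eqs. (18)-(19)

    w = dP.std(axis=1)                             # Appendix step 1: weight by each parameter's std. deviation
    W = np.diag(w)
    U, S, Vt = np.linalg.svd(np.linalg.solve(W, dP), full_matrices=False)   # step 2: SVD of W^-1 dP
    Up = W @ U                                     # step 3: basis matrix for dP
    alpha = np.linalg.pinv(Up) @ dP                # step 4: coordinates of each dp_i in that basis

    alpha_plus = alpha.max(axis=1)                 # step 6: interval bounds [alpha-, alpha+]
    alpha_minus = alpha.min(axis=1)

The columns of Up supply the coefficient vectors of the perturbation polynomials $n_i(s)$ and $d_i(s)$ in Eq. (14) (Appendix step 5), and the singular values in S show how strongly the parameter changes concentrate in the leading directions, as reported in Table 3. The same nominal-plus-interval description can then be pushed through the feedback law of Eq. (23) to bound the closed-loop poles.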
The transfer function of the single-input, single-output three mode model for the flexible beam system without added loads is

(20) $\frac{7.216 \times 10^{1} s^4 + 9.560 \times 10^{5} s^2 + 6.624 \times 10^{8}}{s^6 + 2.079 \times 10^{4} s^4 + 3.355 \times 10^{7} s^2}$

The nominal model of the interval model, which is the average of the models of the 21 cases, is

(21) $\frac{3.949 \times 10^{1} s^4 + 5.102 \times 10^{5} s^2 + 3.510 \times 10^{8}}{s^6 + 1.710 \times 10^{4} s^4 + 2.196 \times 10^{7} s^2}$

In Table 3, which shows the results of the uncertainty part of the interval model, the interval length, $\alpha_i^+ - \alpha_i^-$, indicates the parameter uncertainty distributed in the directions of $n_i$ and $d_i$. The parameter change is concentrated in the direction of the first singular vector. The parameter uncertainty distributed in the direction of the first singular vector is about $10^7$ times larger than that of the fifth singular vector. Table 3 (SEE PRINTED REPORT) Results of Interval Model To verify the identified interval model by using the closed-loop system, we first design an optimal PD feedback controller to suppress the vibrations of the first three modes. Figure 3 shows the block diagram of this feedback system. The control design is based on the flexible beam without added loads. The design basically increases the damping of each mode to satisfy the vibration suppression performance while keeping the control force as small as possible. The transfer function of this designed PD controller is

(22) $K(s) = 0.1910\, s + 0.1402$

Figure 3 Block diagram of the feedback control loop

Figure 4 shows the transfer function magnitude plots of the open-loop (solid line) and closed-loop (dashed line) systems for the beam without added loads. The vibration of each mode is significantly suppressed. From the open-loop interval model, we can generate the closed-loop interval system as

(23) $T(s) = \frac{G(s,p)}{1 + G(s,p)K(s)} = \frac{n_0(s) + \sum_{i=1}^{5} \alpha_i n_i(s)}{d_0(s) + K(s)n_0(s) + \sum_{i=1}^{5} \alpha_i \left[ K(s)n_i(s) + d_i(s) \right]}$

Then we apply the edge theorem[8], which was developed to compute the boundaries of the roots of linear interval polynomials, to obtain the boundaries of the poles of this closed-loop interval system. The transfer functions of the closed-loop systems of the previous 21 cases with various loads are

(24) $t_i(s) = \frac{g_i(s)}{1 + g_i(s)K(s)}, \quad i = 1, 2, \ldots, 21$

In Figure 5, '.' represents the boundaries of the poles of the closed-loop interval system and '*' represents the poles of the 21 closed-loop cases. The root clusters precisely represent and cover the poles of the closed-loop systems with various loads. This verifies that the identified interval model precisely represents the cases with various loads. Also the performance of vibration suppression for the cases with various loads can be predicted by using the root clusters of the closed-loop interval system. CONCLUDING REMARKS This paper presents an algorithm to model the flexible beam system with various loads. The finite element analysis is used to generate the models for the system with various loads. The model reduction technique is applied to obtain the reduced order model with the modes of interest. Then the interval modeling technique is used to generate an interval model. The numerical results of the interval model show that the range of the uncertainty parameter of the interval model indicates the uncertainty distributed in the direction of the corresponding polynomials.
Also the results of the closed-loop system verify that the identified interval model precisely represents and covers the original cases with various loads. In the future, we will conduct experiments and compare the experimental results with the results in this paper. ACKNOWLEDGEMENTS The author would like to thank Dr. Michael Busby, Director of the Center of Excellence in Information Systems, and Dr. Jiann-Shiun Lew, co-principal investigator, RIMI grant, and advisor for my project, for their encouragement and advice throughout my work. This research is supported by NSF grant HRD-9252932. REFERENCES 1. Lew, J.-S., Keel, L. H., and Juang, J.-N., "Quantification of Model Error via an Interval Model with Nonparametric Error Bound," Proceedings of the AIAA Guidance, Navigation, and Control Conference, Monterey, CA, August 1993. 2. Lew, J.-S., Link, T., Garcia, E., and Keel, L. H., "Interval Model Identification for Flexible Structures with Uncertain Parameters," Proceedings of the AIAA/ASME Adaptive Structures Forum, Hilton Head, SC, April 21-22, 1994. 3. Craig, R. R., Structural Dynamics: An Introduction to Computer Methods, John Wiley & Sons, Inc., New York, 1981. 4. Gawronski, W. and Juang, J.-N., "Model Reduction for Flexible Structures," Advances in Large Scale Systems Dynamics, edited by C. T. Leondes, Academic Press, Inc., New York, 1990. 5. Phillips, C. L. and Harbor, R. D., Feedback Control Systems, Prentice Hall, Inc., Englewood Cliffs, New Jersey, 1988. 6. Thomson, W. T., Theory of Vibration with Applications, Prentice Hall, Inc., Englewood Cliffs, 1972. 7. Lew, J.-S., "Comparison of Interval Modeling Techniques for Structures with Uncertain Parameters," Fifth International Conference on Adaptive Structures, Sendai, Japan, 1994. 8. Bartlett, A. C., Hollot, C. V., and Lin, H., "Root Location of an Entire Polytope of Polynomials: It Suffices to Check the Edges," Mathematics of Control, Signals, and Systems, Vol. 1, pp. 61-71, 1988. APPENDIX 1. Compute the weighted uncertainty matrix $\Delta P^W$:

$\Delta P^W = W^{-1} \Delta P$

where $W = \mathrm{diag}[\, w_1 \;\; w_2 \;\ldots\; w_{2m+1} \,]$ and $w_j$ is the standard deviation of the jth element of $p_i$. 2. Use SVD to factorize $\Delta P^W$:

$\Delta P^W = U S V^T$

where U is the basis matrix of $\Delta P^W$. 3. Compute the basis matrix for $\Delta P$:

$U_P = W U, \quad U_P = [\, U_1 \;\; U_2 \;\ldots\; U_{2m+1} \,]$

4. Compute the coordinate vector of $\Delta p_i$ corresponding to the basis matrix $U_P$:

$\Delta \alpha_i = U_P^{-1} \Delta p_i$

5. Compute the polynomials $n_i(s)$ and $d_i(s)$:

$n_i(s) = \sum_{j=1}^{m+1} U_i(j)\, s^{j-1}, \quad d_i(s) = \sum_{j=m+2}^{2m+1} U_i(j)\, s^{j-m-2}$

6. Compute the parameter bounds:

$\alpha_i^+ = \max\{ \Delta\alpha_1(i),\ \Delta\alpha_2(i),\ \ldots,\ \Delta\alpha_k(i) \}, \quad \alpha_i^- = \min\{ \Delta\alpha_1(i),\ \Delta\alpha_2(i),\ \ldots,\ \Delta\alpha_k(i) \}$

where $\Delta\alpha_j(i)$ is the ith element of $\Delta\alpha_j$. Figure 4 Transfer functions of open-loop and closed-loop systems without added loads Figure 5 Root clusters of closed-loop interval system (a) Rigid body mode (b) 1st bending mode (c) 2nd bending mode GRADUATE AWARD Thomas Tenorio, M.S. Department of Computer Science, New Mexico State University, Las Cruces, NM Research Improvement in Minority Institutions Project Principal Investigator, Dr. Héctor J. Hernández Creating an Object-Oriented Test Repository Abstract: The Object-Oriented (O-O) paradigm provides a rich set of tools for software development.
This paradigm has influenced the development of databases, languages, graphical user interfaces, along with analysis and design methodologies. A significant portion of software development focuses on existing systems or legacy systems. This paper is a case study on enhancing an existing test repository by using existing O-O technology. The High Energy Laser Site and Test Facility (HELSTF) test repository has a hierarchical database with pointers to data files stored on tape or disk. O-O databases support file storage directly in the database. These databases can also store other complex data such as video or test reports. This paper is a case study on utilizing O-O methodologies and tools to create a prototype repository. An Introduction Section provides a critique of the current HELSTF repository. The Background Section documents the synthesis of sometimes competing methodologies and identifies tools necessary for efficiently implementing this prototype. The Case Study Section highlights preliminary analysis and design issues. A summary of the results of this study is found in the Conclusion section. Introduction This section describes the HELSTF Test Repository as it exists to date (see Figure 1). HELSTF is a laser test facility where performance data are collected on various lasers through an ongoing series of tests. The HELSTF Test Repository consists of the complete set of data files associated with every test conducted at HELSTF along with the database tracking these files. A hierarchical database tracks the data files located on either disk or magnetic tape. This database tracks the set of data files associated with each test. The database also contains some background information on test, camera, and images. Additional background data is found in documents that are inaccessible from the computer system. Signal (sensor output) and image (camera output) data are the two forms of data taken at HELSTF. This paper addresses only camera data with primary focus on the infrared video sequences called images. Computer operators run programs to digitize and decommutate analog data into data files available for analysis. Digital image files contain the infrared video sequences in a format unique to HELSTF. Decommutation is the mapping of data from its encoded multiplexed form into its decoded recorded form. They then use analysis programs to generate hard copy plots for various customers (see Figure 2). Each plot represents one frame of camera data. Only three tests are available on-line due to storage limitations. The remaining data from over 100 tests are off-line in a vault unavailable for interactive analysis and review. Restoring a complete test from tape to disk usually takes about a day. Analysis software available for processing images at HELSTF has two limitations: a command-line interface and static frame processing. The primary limitation of the command-line interface is that the user must memorize cryptic commands to analyze and plot image data. This learning curve usually prohibits visiting analysts from using these programs. The software supports only static processing of individual frames in an image and does not support dynamic interactive manipulation of frames. A user must process each frame one by one to isolate anomalies. Infrared cameras record data at twenty-five frames a second. Processing all these frames is a very time-consuming process. With only static processing there is a possibility of overlooking anomalies easily seen when viewing frames dynamically. 
Problems with the current approach include:
o File-Based Repository
o VMS Specific Data Files
o Textual Database Inaccessible to Analysts
o Majority of Data Archived Off-line
o Analysis Restricted to Tests On-line
o Redundant Decommutation and Digitization
o Emphasis on Batch Processing
o Limitations of Interactive Processing Environment
o Raster-Based Graphics
o Command-Line Interface
o Static Frame Processing
o Limited Background Information
Background The Object-Oriented Programming (OOP) Paradigm models the world into objects. This model is easier to understand because the semantic gap between reality and the model is small (see Figure 3 [6]). Program modifications are simpler because modifications often focus on a single item or object. Rumbaugh identifies the four aspects characteristic of the object-oriented approach [16]: Identity means data are quantized into discrete, distinguishable entities called objects. Classification means that objects with the same data structure (attributes) and behavior (operations) are grouped into a class. Polymorphism means that objects of different classes behave differently under the same operation. Inheritance is the sharing of attributes and behaviors among classes based on a hierarchical relationship. A definition of OOP can now be presented. OOP is a method of implementation in which programs are organized as cooperative collections of objects, each of which represents an instance of some class, and whose classes are all members of a hierarchy of classes united via inheritance relationships [1]. There may be multiple hierarchies of cooperating classes. A goal of this project was to identify mature technology that would be useful in creating an alternate repository. The O-O Paradigm continues to influence the development of GUIs (Graphical User Interfaces), programming languages, databases, and software development methodologies. GUI, language, database, and methodology considerations are found in the following paragraphs. An O-O tool must provide explicit support for the O-O paradigm: objects, classes, hierarchy, and polymorphism. Prototypes for this system were created on Sun workstations. Sun's OpenWindow Environment was chosen because of the level of support it provides for the paradigm. Figure 4 [5] shows the class hierarchy of the OpenWindow XView toolkit. The OpenWindow GUIDE tool [4] was also used to support Rapid Prototype Development [9]. An Object-Oriented Programming Language (OOPL) is a requirement for OOP. Table 1 [1] shows the support C++ provides for the object model. C++ has the following advantages that have made it the most popular OOPL: extension of C for retention of C expertise; commercial support from multiple sources; multiplatform support; nonproprietary; freely available from the Free Software Foundation; support for early error detection; and, of foremost importance, its run-time efficiency. C++ was chosen for this project for these reasons. ObjectStore was selected because it is an Object-Oriented Database Programming Language (OODPL). An OODPL extends an OOPL by adding persistence [2]. The database adds persistence to C++. An OODPL directly addresses the impedance mismatch problem programmers face when using popular Relational Database Management Systems (RDBMS). A programmer must store all data as tables when using an RDBMS. Unfortunately most complex data are not in table form. The impedance mismatch refers to the problem of transforming complex objects into these table-oriented structures.
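The impedance mismatch can be illustrated with a small sketch (mine, not the paper's, and written in Python rather than the project's C++): a nested image object either has to be flattened into relational tables by hand, or, with an object database, can be stored as-is.

    # A nested "image" object of the kind HELSTF stores; field names are illustrative only.
    image = {
        "data_id": "test042-cam1",
        "camera": {"model": "IR-640", "frame_rate_hz": 25.0},
        "frames": [{"index": i, "pixels": b""} for i in range(3)],
    }

    # Relational storage forces the programmer to flatten the object into tables
    # and re-join the rows on every read -- the impedance mismatch.
    images_table = [(image["data_id"], image["camera"]["model"], image["camera"]["frame_rate_hz"])]
    frames_table = [(image["data_id"], f["index"], f["pixels"]) for f in image["frames"]]

    # An object database such as ObjectStore stores the object graph directly, so no
    # mapping code of this kind is needed (conceptual point only; not ObjectStore's API).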
ObjectStore stores any C++ object (complex or simple) directly in the database without any restructuring of the data. This provides an alternative to the popular persistent storage options, namely: an RDBMS or files (see Figure 5 [12]). ObjectStore also provides: client/server operations over a network, concurrency control for shared data, relationship facilities for modeling data, collection management facilities, query support, security, and administrative tools [10]. The developers achieve performance goals by targeting applications that are data-intensive and perform manipulation of complex objects [7]. These applications exhibit the following characteristics: temporal locality, spatial locality, and fine interleaving. ObjectStore's patented Virtual Memory Mapping Architecture (VMMA) can dereference pointers for both persistent and transient objects at equivalent rates [11-14]. A formal methodology is appropriate for initial OOP efforts. This forces a programmer to explicitly consider all aspects of analysis and design while formulating a personal approach. Formal Object-Oriented Analysis and Design (OOAD) methodologies vary in both scope and application. Monarchi and Puhr [8] suggest a formal evaluation scheme for these methodologies. Table 2 shows the personalized approach utilized in this project which was synthesized by selecting references addressing all relevant OOAD issues. The primary reference was Booch [1] with supplemental material from Coad [3] and Rumbaugh [16]. Coad provides guidance for placing classes and attributes and also identifying interface classes. Rumbaugh provides information on identifying base and utility classes. OOAD methodologies often blur analysis and design. Object-Oriented Analysis (OOA) focuses on the creation of semantic objects in the problem domain. Object-Oriented Design (OOD) focuses on the solution domain objects necessary for the implementation of the OOA model (see Figure 6 [8]). Booch OOD is an iterative methodology where one first implements real world semantic objects (OOA) and then creates supporting solution objects (OOD). Prototypes are critical for testing analysis and design assumptions. Iterations continue until all real world objects and supporting solution objects are complete. Booch OOD supports multiple, orthogonal views of the world. Figure 7 [1] shows the Booch model that is the basis for Booch OOD. The Booch OOD process has the following steps: o Identify the classes and objects at a given level of abstraction o Identify the semantics of these classes and objects o Identify the relationships among these classes and objects o Implement these classes and objects Diagrams and templates describe each view of a real world model. The logical view focuses on class and object definitions in static and dynamic situations. Additional templates and diagrams describe dynamic execution. The static physical view concentrates on the placement of code in modules. Process architecture addresses the execution of processes in the dynamic world. Figure 8 [18] shows the road map and deliverables of the Booch method where cloud icons identify diagrams while templates and documents are missing this symbol. Case Study This section is a case study applying one iteration of Booch OOD to the HELSTF Test Repository problem. The purpose of the Requirements Analysis Phase is to establish system requirements. In the Domain Analysis Phase, one models the logical view of the system. In the Design Phase, one maps the logical model to physical structures. 
This case study focuses on the first iteration of each phase. Requirements Analysis A System Function Statement (SFS) identifies system requirements. Revisions to this document continue with each iteration of the Booch process. A Pressman Software Requirement Specification [15] provides an outline for the SFS since Booch specifies no formal structure for this document. Initially this document includes general design goals for the new system as outlined in this section. Figure 9 shows the proposed HELSTF Test Repository. This repository takes advantage of the latest hardware and O-O technology. The proposed system is a visual database environment for browsing test data. The proposed environment has a bit-mapped window-based interface that supports dynamic image viewing. Figures 10 and 11 show sample screens representing this interface. Operations support is limited to a one-time archival of all data onto on-line media. A CD-ROM jukebox provides cost-effective network-based on-line storage. An entire test would be placed on two CD-ROMs. The application would be database-oriented, containing the following complex test data: graphical images, documents, video segments, background information from the current database, and actual data files. All data would be exportable in industry standard file formats for analysis with commercial analysis programs. Dynamic playback of data with a VCR-like interface would support rapid detection of certain anomalies less discernible when viewing frames statically. The software analyst moves on to the Analysis phase only after the customer accepts the proposed environment. The advantages of the new repository are as follows:
o Complex Data Repository
o Industry Standard Data File Export
o Textual Database Information Available to Analysts
o All Data Archived On-line on CD-ROM Jukebox
o Analysis for All Tests
o Decommutation and Digitization Once to CD-ROM
o Emphasis on Interactive Processing
o Enhanced Interactive Processing Environment
o Bit Map Graphics
o Windows Interface
o Dynamic (VCR-like) and Static Frame Processing
o Comprehensive Background Information
Domain Analysis Figure 12 [18] shows the six steps in Domain Analysis where one models the logical view of a system. Figure 13 shows the logical (semantic) class diagram that is the result of applying steps one through four. Each cloud in this diagram represents a class. A class template further specifies each class (see Tables 3 and 4). Each template contains details necessary for implementing a class in any language. The diagrams and templates define the following contains (has-a) relationships: a test has data; an image has frames; an image has a camera; and a signal has a sensor. Templates also specify the attributes of each class. Test attributes include an id, a date, an initiation time, a start time, etc. After defining attributes for each class it is important to define inheritance relationships and superclasses. If several common attributes are found in classes, this may indicate the existence of a superclass. The data class is a superclass that contains attributes common to all data. In Step 5 one defines operations for each class. Table 4 shows the types of operations that a class must provide: creation, removal, attribute definition, and access are common to every class. Class diagrams are important in defining static data while processes define dynamic manipulation of objects. An object-scenario diagram is a representation of how objects interact in an application.
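A compact sketch of the class structure just described may help; the following Python dataclasses are only an illustration of the has-a relationships and the Data superclass (the project itself used C++ class templates, and these attribute and method names are hypothetical, not those of Tables 3 and 4).

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Data:                          # superclass holding attributes common to all test data
        data_id: str = ""
        recorded_on: str = ""

    @dataclass
    class Camera:
        model: str = ""
        frame_rate_hz: float = 25.0

    @dataclass
    class Frame:
        index: int = 0
        pixels: bytes = b""

    @dataclass
    class Image(Data):                   # "an image has frames" and "an image has a camera"
        camera: Optional[Camera] = None
        frames: List[Frame] = field(default_factory=list)

    @dataclass
    class Test:                          # "a test has data"
        test_id: str = ""
        date: str = ""
        data: List[Data] = field(default_factory=list)

        def select_image(self, data_id: str) -> Image:
            # the scenario of Figure 14: open a test, select one of its images, then view frames
            return next(d for d in self.data if isinstance(d, Image) and d.data_id == data_id)

Creation, removal, and attribute access, the operations listed as common to every class, correspond here to the constructors and field access that the dataclasses generate automatically.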
Figure 14 shows how an analyst would interact with this repository: a test would be opened; an image would be selected; frames would be viewed from this image; and finally background data, like the camera class, could also be viewed. This diagram is useful in assessing completeness and validating the design. Up to this point the design is architecture-independent. Mapping of logical classes to specific architectures occurs in the design phase. Design Figure 15 [18] shows the steps and deliverables of the design phase. The final product of this phase will typically be a prototype. Table 5 shows the Prototype Plan for the first iteration. The initial goal is a prototype displaying how this proposed test repository will operate. The prototype will use dummy data from the ObjectStore database until the customer approves the interface. An Architectural Description document describes the computer, operating system, language, database, GUI, and any other architectural information. In the second step the focus is on establishing base/utility objects, application objects, and interface objects. Class diagrams and templates document the classes. The third step involves the actual implementation of all classes in some programming language. The final step is the refinement of the design in response to user feedback. Figure 16 shows the prototype interface for the proposed test repository. Several windows correspond directly to classes shown in Figure 13. The Test window shown in Figure 17 provides access to test objects. Once an image is selected, an Image Control Window provides VCR-like access to the frames in the image (see Figure 18). Manipulation routines for image viewing are exported from Xrastool, a program for manipulating raster images. Figure 19 shows this program interface. A set of background information viewing windows is shown in Figure 20. Various other utility windows are shown in Figure 16. They provide a list of tests, a list of images, error information, and other utility functions. Conclusions An improved HELSTF Repository can be developed by using existing methodologies and tools based on the Object-Oriented Paradigm. The Object-Oriented Paradigm facilitates development by collapsing the semantic gap between "Real World" objects and modeled objects. Object-Oriented Analysis and Design Methodologies like Booch OOD provide a formal approach for efficiently generating object-oriented software. Object-oriented programming tools include languages, databases, Graphical User Interfaces and generators, and CASE software. Object-Oriented Programming is not a silver bullet, and it does not create true software components, but it is a viable technique for developing software. The ongoing influence of the paradigm seems secure with the commercial success of C++. Component software may not be fully realized until comprehensive network models like COM (Microsoft), SOM (IBM), DOE (Sun), or CORBA (Object Management Group) are universally adopted [17]. ObjectStore addresses many issues currently considered by these models, and as a result the product is relatively expensive. ACKNOWLEDGEMENTS This paper was partially supported by NSF grant HRD-9353271. I thank my advisor, Dr. Héctor Hernández, for all the assistance and guidance he gave me in developing this project. I also wish to acknowledge the contributions of Steve Gonzales and other HELSTF personnel who played a critical role in the analysis of this problem. REFERENCES 1. Booch, Grady. Object-Oriented Design with Applications.
The Benjamin/Cumming Publishing Company, Inc., 1991. 2. Cattell, R.G.G. Object Data Management. Addison-Wesley Publishing Company, 1991. 3. Coad, Peter and Edward Yourdon. Object-Oriented Design. Prentice Hall, 1991. 4. Devguide 3.0 User's Guide. Sun Microsystems, 1991. 5. Heller, Dan. Xview Programming Manual. O'Reilly & Associaties, Inc., 1990. 6. Jacobson, Ivar, Magnus Christerson, Patrik Jonsson, and Gunnar Overgaard. Object-Oriented Software Engineering--A Use Case Driven Approach. Addison-Wesley, 1992. 7. Lamb, Charles, Gordon Landis, Jack Orenstein, and Don Weinreb. "The ObjectStore Database System." SIGMOD Record--Communications of the ACM. October 1991, 50-63. 8. Monarchi, David E. and Gretchen I. Puhr. "A Research Typology for Object-Oriented Analysis and Design" Communications of the ACM. September 1992, 35-47. 9. Mullin, Mark. Rapid Prototyping for Object-Oriented Systems. Addison-Wesley, 1990. 10. ObjectStore Administration and Development Tools. Object Design, Inc., October 1992. 11. ObjectStore Reference Manual. Object Design, Inc, October 1992. 12. ObjectStore Technical Overview. Object Design, Inc., Release 2.0, July 1992. 13. ObjectStore Tutorial. Object Design, Inc., October 1992. 14. ObjectStore User Guide. Object Design, Inc., October 1992. 15. Pressman, Roger S. Software Engineering: A Practitioner's Approach. McGraw-Hill, 1987. 16. Rumbaugh, James, Michael Blaha, William Premerlani, Frederick Eddy, and William Lorensen. Object-Oriented Modeling and Design. Prentice Hall, 1991. 17. Udell, Jon. "Componentware." Byte. May 1994, 46-56. 18. White, Iseult. The Booch Method: A Case Study for Rational Rose. Rational 1993. Figure 1 HELSTF Test Repository Figure 2 Sample Plot Figure 3 Semantic Gap Figure 4 OpenWindow XView Class Hierarchy Figure 5 ObjectStore Migration and Utilization Figure 6 OOA and OOD Objects Figure 7 The Models of OOD Figure 8 Road Map of Booch Method Figure 9 Proposed HELSTF Test Repository Figure 10 Sample Screen 1 Figure 11 Sample Screen 2 Figure 12 Road Map of Domain Analysis Figure 13 Class Diagram for Test Data Category Figure 14 Object-Scenario Diagram Figure 15 The Map of Booch Method Design Figure 16 Prototype Screens Figure 17 Test Window Figure 18 Image Control Window Figure 19 Xrastool Application Figure 20 Image Background Windows GRADUATE AWARD Nathaniel A. Whitmal, Ph.D. Candidate Department of Electrical Engineering and Computer Science, Northwestern University NSF Directorate for Engineering Principal Investigator, Janet C. Rutledge Noise Reduction Methods for Speech Enhancement Abstract: Most listeners have difficulty understanding speech in the presence of noise. Hearing-impaired listeners, in particular, face special difficulties that are often exacerbated by interfering noise. Attempts to suppress interfering noise have achieved only marginal success, limited by the unpredictability of the noise and the capabilities of current technology. This paper reviews the application of digital signal processing techniques to the speech enhancement problem. Several speech enhancement methods are discussed. Differences between single-microphone and multiple-microphone methods are examined, with emphasis placed on internal signal representation, real-time implementation, and application issues. Finally, preliminary results are presented for a new parametric single-microphone approach. INTRODUCTION Most listeners (particularly those with hearing impairments) have difficulty understanding speech in the presence of noise. 
Much of this difficulty may be attributed to masking of consonants, which often resemble short-duration bursts of random noise. Numerous signal processing algorithms have been proposed to address this problem. Several of these algorithms have difficulty distinguishing between noise and consonants, and consequently remove both. Furthermore, inaccurate estimates of the noise (which is often assumed to be stationary) can cause some algorithms to create audible artifacts which further mask consonants. The objective of the present study is to develop a new approach capable of accurately distinguishing between speech and noise in portable communication systems. This paper will review the application of signal processing algorithms to the noise reduction issue. Speech enhancement algorithms using both single and multiple-microphone configurations will be reviewed. The capabilities of these methods to improve intelligibility will be discussed, with areas of improvement suggested. Finally, a novel parametric single-microphone approach is proposed, which reduces noise by compressing noisy speech onto a series of wavelet bases. BACKGROUND A. Differences between single-microphone and multi-microphone approaches The differences between single-microphone and multiple-microphone approaches have been motivated primarily by intended applications. Multiple-microphone approaches use the correlation between noise signals from spatially separated inputs to enhance noisy speech. In applications affording use of spatially separate inputs (e.g., noisy cockpits, automobiles, and some industrial environments), this approach has been shown to improve intelligibility [1] [2] [3]. Single-microphone approaches, which rely on statistical models of speech and noise, are more appropriate for compact portable systems (e.g., mobile telephones, digital hearing aids) and other applications for which this spatial information is not available. Several single-microphone techniques showing promise for real-time application are reviewed in the sequel. B. Single-microphone noise reduction methods 1. Spectral subtraction The term "spectral subtraction" is used to describe a number of related techniques ([4], [5], [6]) which estimate the spectrum of clean speech by subtracting estimates of the noise spectrum from the spectrum of the noisy speech (see Figure 1). The noise spectrum, which is assumed to be stationary, is estimated from measurements taken during non-speech intervals. Noise spectrum estimation errors are manifested in the output spectrum as randomly spaced magnitude peaks of short duration. These peaks produce a sound often referred to as "musical noise," which degrades the intelligibility and quality of the output speech. The INTEL method of spectral subtraction [4] has been used successfully in industrial environments to reduce worker fatigue from noise exposure. Another method reported by Boll [6] was shown to increase intelligibility of LPC-coded speech in noisy helicopter cockpits. None of the methods have been shown to increase the intelligibility of uncoded noisy speech for human listeners. 2. Wiener filtering A second approach uses the optimum noise-reduction filter developed by Wiener for stationary random signals corrupted by uncorrelated additive noise [7]. The filter is described by the transfer function

$H_{\mathrm{Wiener}}(e^{j\omega}) = \frac{S_{ss}(e^{j\omega})}{S_{ss}(e^{j\omega}) + S_{nn}(e^{j\omega})}$

where $S_{ss}(e^{j\omega})$ and $S_{nn}(e^{j\omega})$ are the respective discrete-time power spectral densities of the speech and noise signals.
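As a rough, frame-by-frame illustration of these two estimators (not taken from any of the cited systems), the sketch below applies magnitude spectral subtraction or a short-time Wiener-style gain to one windowed frame, with the noise spectrum estimated from speech-free frames.

    import numpy as np

    def enhance_frame(noisy_frame, noise_frames, method="subtract"):
        """Single-frame spectral subtraction / short-time Wiener gain (illustrative sketch)."""
        win = np.hanning(len(noisy_frame))
        Y = np.fft.rfft(noisy_frame * win)
        # Noise power spectrum estimated from non-speech frames (stationarity assumed).
        Pnn = np.mean([np.abs(np.fft.rfft(f * win)) ** 2 for f in noise_frames], axis=0)
        if method == "subtract":
            # Magnitude spectral subtraction; flooring at zero leaves the randomly spaced
            # residual peaks heard as "musical noise".
            mag = np.sqrt(np.maximum(np.abs(Y) ** 2 - Pnn, 0.0))
            S = mag * np.exp(1j * np.angle(Y))          # keep the noisy phase
        else:
            # Short-time Wiener-style gain S_ss / (S_ss + S_nn), with S_ss estimated crudely.
            Pss = np.maximum(np.abs(Y) ** 2 - Pnn, 0.0)
            S = (Pss / (Pss + Pnn + 1e-12)) * Y
        return np.fft.irfft(S, n=len(noisy_frame))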
Precise implementation of the Wiener filter (which is non-causal) requires a priori knowledge of signal and noise parameters. In most practical situations, where these parameters are not precisely known, estimates derived from simple models (like those of spectral subtraction) are used in their place. Alternately, a sub-optimal time-varying approximation may be implemented with short-time spectra. 3. Bayesian parameter estimation More recently developed single-microphone noise reduction approaches have employed two Bayesian parameter estimation methods: maximum a posteriori (MAP) estimation, which maximizes the conditional probability density of the clean speech, and minimum mean-squared error (MMSE) estimation, which provides the expected value of clean speech for the given noisy speech [7]. Lim and Oppenheim [8] used MAP estimation to construct all-pole estimates of noisy speech, which were used iteratively to obtain time-varying Wiener filters. Ephraim [9] later developed a similar system which used hidden Markov models (HMMs) to derive the Wiener filters. Both methods force usage of a single sub-optimal estimator, a drawback circumvented by various forms of MMSE estimation used by other researchers. These estimates of the clean speech parameters $\hat{s}$ were taken as weighted sums of expected parameters, conditioned on the noisy speech y and each of M hypotheses $H_i$, as

$\hat{s} \triangleq E\{s \mid y\} = \sum_{i=1}^{M} E\{s \mid y, H_i\}\, p[H_i \mid y]$

The estimation approaches differ primarily in their emphases on time ([10]) or frequency ([9], [11], [12]) domain parameters. The MMSE approach was also used by Quatieri and McAulay to estimate clean speech with a sinusoidal model [13]. C. Multiple-microphone approaches Many of the techniques reported for reducing environmental noise have employed adaptive noise-canceling (ANC) systems [14]. The ANC system (see Figure 2) receives two input signals: a primary signal (x[m]), consisting of speech in additive noise, and a reference signal ($n_R[m]$), consisting only of a second noise signal correlated to the noise of the primary channel. The reference signal is passed through an adaptive FIR filter (see Figure 3) which estimates the primary channel noise component and subtracts it from the primary signal to produce an estimate of the original speech. This speech estimate is fed back to the adaptive filter, which adjusts its weights to provide a minimum mean-squared-error estimate of the noise. The task of reducing the error in the speech estimate may be alternately viewed as the task of selecting a weight vector minimizing the quadratic cost function

$J(W_m) \triangleq W_m^T R_m W_m - 2 P_m^T W_m + E\{n[m]^2\}$

where $R_m \triangleq E\{N_{R,m} N_{R,m}^T\}$ and $P_m \triangleq E\{n[m] N_{R,m}\}$. One widely used iterative method for weight-vector selection is the LMS algorithm [14], which performs a stochastic gradient descent of the cost function's surface in weight space. Successive weight vectors are found by moving in opposition to the gradient; i.e., setting

$W_{m+1} = W_m + 2\alpha\, \hat{s}[m]\, N_{R,m}$

where $\alpha$, the learning rate of the adaptive system, is less than 1. Convergence is dependent on the choice of $\alpha$, which determines the size of the steps (and speed of adaptation) taken in descent of the error surface. In portable systems with frequent movement, the fast adaptation rate required can result in weight misadjustment, especially at high listening levels where $\hat{s}[m] N_{R,m}$ is large. Practical compromises between misadjustment, adaptation speed and filter length are discussed in [2].
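The ANC structure and the LMS update above can be sketched in a few lines; this fragment is illustrative only (a plain two-microphone canceller, not the cited hearing-aid or beamforming systems), and the tap count and learning rate are arbitrary.

    import numpy as np

    def anc_lms(primary, reference, n_taps=32, alpha=1e-3):
        """Two-input adaptive noise canceling with the LMS update
        W_{m+1} = W_m + 2*alpha*s_hat[m]*N_{R,m}."""
        w = np.zeros(n_taps)
        s_hat = np.zeros(len(primary))
        for m in range(n_taps, len(primary)):
            N_Rm = reference[m - n_taps:m][::-1]   # most recent reference-channel samples
            noise_est = w @ N_Rm                   # adaptive FIR estimate of the primary-channel noise
            s_hat[m] = primary[m] - noise_est      # speech estimate doubles as the error signal
            w = w + 2 * alpha * s_hat[m] * N_Rm    # stochastic gradient step on J(W)
        return s_hat

A larger alpha adapts faster but raises the weight misadjustment discussed above, which is the trade-off faced in portable systems where the geometry changes frequently.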
A second practical limitation concerns the quality of the reference signal. Widrow, et. al., [14] have shown that the maximum attainable output signal-to-noise ratio (SNR) is equal to the noise-to-signal ratio (NSR) at the reference input. In compact systems with short inter-microphone distances, obtaining a reference signal consisting only of noise is very difficult. One proposed solution [15] uses a beamforming array (see Figure 4) to improve rejection of unwanted signals. The beamformer derives its primary and reference signals from the respective sum and difference of signals from two sensors, located equidistant from the target. Signals radiating directly from the target to the sensors produce in-phase information in the primary channel. Off-center signals, having differing amplitude and phase, produce a reference signal which is used to reduce their amplitude. Studies conducted by Peterson [16] and Greenberg and Zurek [17] showed that the benefits of beamforming systems decreased as the direct-to-reverberant energy ratio at the microphones decreased. The presence of reverberation impairs beamformer performance, as reverberant target energy received by the reference channel tends to lower the reference NSR, and reverberant target energy received by the primary channel tends to obscure the true location of the target source. D. Evaluations of Existing Noise-Reduction Algorithms Earlier reviews of the existing literature [18] [19] [20] report that none of the methods mentioned above are capable of providing consistent improvements in intelligibility. In a recent study, Levitt, et. al., [21] evaluated the capabilities of four noise reduction algorithms to improve speech intelligibility in digital hearing aids. The noise reduction algorithms, which included adaptive noise canceling, short-time Wiener filtering (using a priori spectral information), low-frequency spectral subtraction, and sinusoidal modeling [22], were evaluated by both normal-hearing and hearing-impaired listeners. The results of the study indicated that: 1) Adaptive noise canceling provided significant intelligibility gains which decreased in the presence of reverberation and subject head movement. 2) Wiener filtering increased intelligibility for half of the hearing-impaired listeners, and reduced intelligibility for all of the normal-hearing listeners. 3) Spectral subtraction and sinusoidal modeling removed both interfering noise and crucial high-frequency cues, thereby improving SNR (and perceived quality) without improving intelligibility. The first three methods evaluated by Levitt, et. al., attempt to preserve aspects of the speech waveform, while removing features unique to the noise waveform. This waveform representation of signals is used in many speech processing algorithms. THEORY A. A parametric approach to noise reduction A second approach commonly used in speech recognition systems maps portions of the speech waveform into a set of time-varying parameters. The parameters may then be used to synthesize a modified version of the signal (as with the sinusoidal model mentioned above) or used as input to subsequent processors. The parametric model's capability for resynthesis lends itself well toward solving the problems met by the waveform representation, in that: 1) No short-time stationarity assumptions are required: processing may be modified as needed on a frame-by-frame basis. 2) Distortion may be more easily controlled, since the output waveforms are synthesized by the noise-reduction system. 
3) Parameters derived from distorted or noisy data may be input to intelligent signal processing algorithms for purposes of enhancement or restoration. B. The Minimum Description Length (MDL) criterion A new approach proposed by Whitmal and Rutledge [23] uses the Minimum Description Length (or MDL) criterion recently applied by Saito [24] to reduce additive white Gaussian noise in digitized image and geophysical signals. The description length, defined as the length (in bits) of a theoretical binary codeword used to describe both a noisy signal $x \in R^N$ and a model thereof, is expressed as

$L(x, \theta_{m,k}, m, k) = L(m, k) + L(\theta_{m,k} \mid m, k) + L(x \mid \theta_{m,k}, m, k)$

where $\theta_{m,k}$, the model of the signal, is constructed with k members of orthonormal basis m [25]. Given a library containing M varieties of orthonormal bases (i.e., wavelet packets and local trigonometric functions) with minimum information cost [26], Saito's algorithm selects the basis and coefficients providing optimum compression of the signal and rejection of the white noise (which compresses poorly in every basis). Assuming equal probability of basis selection, the approximate minimum description length (AMDL) is given by the k* coefficients in basis m* such that

$\mathrm{AMDL}(k^*, m^*) = \min_{0 \le k < N,\; 1 \le m \le M} \left[ \frac{3k}{2} \log N + \frac{N}{2} \log \left\| \left( I - \rho_N^{(k)} \right) W_m^T x \right\|^2 \right]$

where $W_m^T$ is the transform matrix, and $\rho_N^{(k)}$ a rank-k matrix preserving the largest k coefficients. The algorithm was successfully demonstrated on both geophysical data and digitized images. When applied to speech, the MDL algorithm tends to remove consonants in the presence of noise, and imposes mild distortion on speech (particularly consonants) in the absence of noise. Furthermore, for the short frame lengths appropriate to real-time processing of speech, the additive noise tends to compress efficiently onto a few basis elements. The retained coefficients produce audible artifacts similar to the "musical noise" produced by spectral subtraction. C. An adaptive multi-band MDL criterion Several modifications are proposed to allow use of the MDL algorithm with speech signals. First, a quadrature mirror filter (QMF) bank employing power-symmetric FIR filters [27] is used to split the incoming signal into two bands: a low-frequency band dominated by vowels and nasal consonants, and a high-frequency band dominated by fricative consonants and plosive bursts. The MDL algorithm is then separately applied to the low and high frequency signals. The multi-band approach allows salient features of consonants to be reproduced faithfully, eliminating the distortion produced by the original algorithm in the absence of noise. Moreover, the filter symmetry causes each channel's noise component to be manifested as white noise, obviating the need for computationally intensive inverse-filtering. Additional modifications are motivated by a relationship between changes in AMDL values and changes in the envelope of the speech waveform. Observed AMDL values have a lower bound dependent on the minimum amplitude of the speech signal, and provide a reliable indication of whether the signal is above or below the noise floor. When the signal in the high-frequency band is below the noise floor, a tracking algorithm adaptively disables the band's MDL processing in favor of power spectrum subtraction (using local trigonometric bases), thereby reducing audible artifacts.
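A simplified sketch of the AMDL search may clarify how the basis and coefficient count are chosen. It is not the authors' implementation: it simply scans every candidate k for each set of orthonormal-basis coefficients (e.g., from the wavelet-packet and local trigonometric transforms in the library) and keeps the minimizer of the cost above.

    import numpy as np

    def amdl_select(coeff_sets, N):
        """Pick (basis m*, coefficient count k*) minimizing
        (3k/2)*log N + (N/2)*log ||residual||^2 over a library of coefficient vectors."""
        best_cost, best_m, best_k = np.inf, None, None
        for m, c in enumerate(coeff_sets):                  # each c plays the role of W_m^T x
            energy = np.sort(np.abs(c) ** 2)[::-1]          # coefficient energies, largest first
            resid = energy.sum() - np.concatenate(([0.0], np.cumsum(energy)))
            for k in range(N):                              # resid[k]: energy left after keeping k coefficients
                cost = 1.5 * k * np.log(N) + 0.5 * N * np.log(resid[k] + 1e-30)
                if cost < best_cost:
                    best_cost, best_m, best_k = cost, m, k
        return best_m, best_k, best_cost

In the multi-band scheme described above, a search of this kind would be run separately on the low- and high-frequency channels, with the high-frequency channel falling back to spectrum subtraction whenever the signal drops below the noise floor.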
A running average of spectra derived from discarded coefficients in the local trigonometric basis is used to construct an estimate of the noise. IMPLEMENTATION RESULTS A preliminary comparison of the capabilities of the original and modified MDL approaches was conducted. An utterance of the sentence, "That hose can wash her feet," was sampled at 8 kHz, digitized to 16 bits, and added to each of three white Gaussian noise sequences to produce waveforms with overall SNRs of 0, 5, and 10 dB. Successive frames of the speech signals (256 samples, 50% overlap) were processed by each of three algorithms: original MDL, multi-band MDL, and multi-band MDL using power spectrum subtraction. RMS levels of the /o/ phoneme in "hose" and the closure preceding /t/ in "feet" were used to obtain relative measures of signal-to-noise ratio for each of the three methods. (For sentences with 0, 5, and 10 dB average SNRs, vowel-to-silence SNRs were 8.77, 13.78, and 18.75 dB respectively.) The observed SNR increases are presented below in Table 1. At all noise levels, the proposed algorithm substantially reduces the "musical noise" produced by the original MDL algorithm. This difference is reflected in the higher SNRs of the proposed algorithm. SUMMARY Several methods capable of reducing noise in speech signals have been reviewed. These methods, which are generally classified as single-microphone or multiple-microphone methods, often improve SNR without improving intelligibility. The need for improved intelligibility is particularly strong in portable systems (e.g., mobile telephone systems and digital hearing aids) where single-microphone methods are most appropriate. A novel method for enhancement of noisy speech has been presented. Preliminary results indicate that the new method may be useful in applications requiring a single-microphone noise reduction system for speech. ONGOING WORK Ongoing work is focused on the development of intelligent preprocessing algorithms which apply time-varying, frequency-dependent (TVFD) processing [28] to perceptually significant basis elements. The capability of multi-band MDL with TVFD processing to improve intelligibility in normal-hearing and hearing-impaired listeners will then be tested. The method's ability to provide reference noise estimates for single-microphone adaptive noise cancelling systems is also being investigated. ACKNOWLEDGEMENTS The author wishes to thank Professors Janet Rutledge and Jonathan Cohen for their support and helpful comments. This study was supported in part by the Buehler Center on Aging (McGaw Medical Center, Northwestern University), by National Science Foundation Grant #BCS-9110247, and by a National Science Foundation Graduate Fellowship. Portions of this work were presented at the 16th Annual Conference of the IEEE Engineering in Medicine and Biology Society in November, 1994. REFERENCES 1. Brey, R.H., Robinette, M.S., Chabries, D.M., and Christiansen, R.W., "Improvement in speech intelligibility in noise employing an adaptive filter with normal and hearing-impaired subjects," Jour. Rehab. Res. and Dev., Vol. 24, 1987, pp. 75-86. 2. Chabries, D.M., Christiansen, R.W., Brey, R.H., Robinette, M.S., and Harris, R.W., "Application of adaptive digital signal processing to speech enhancement for the hearing impaired," Jour. Rehab. Res. and Dev., Vol. 24, 1987, pp. 65-74. 3. Schwander, T. and Levitt, H., "Effect of two-microphone noise reduction on speech recognition by normal-hearing listeners," Jour. Rehab. Res. and Dev., Vol. 24, 1987, pp. 87-92.
4. Weiss, M.R., Aschkenasy, B., and Parson, T.W., "Study and development of the INTEL technique for improving speech intelligibility," Report NSC-FR/4023, Nicolet Scientific Corporation, 1974.
5. Schwartz, R., Berouti, M., and Makhoul, J., "Enhancement of speech corrupted by acoustic noise," Proc. ICASSP, 1979, p. 208.
6. Boll, S.F., "Suppression of acoustic noise in speech using spectral subtraction," IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. 27, 1979, pp. 113-120.
7. Van Trees, H.L., Detection, Estimation, and Modulation Theory. John Wiley and Sons, New York, 1968.
8. Lim, J.S., and Oppenheim, A.V., "All-pole modeling of degraded speech," IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. 26, 1978, pp. 197-209.
9. Ephraim, Y., Malah, D., and Juang, B.H., "On the applications of hidden Markov models for enhancing noisy speech," IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. 37, 1989, pp. 1846-1856.
10. McAulay, R.J., and Malpass, M.L., "Speech enhancement using a soft-decision noise suppression filter," IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. 28, 1980, pp. 137-145.
11. Ephraim, Y., and Malah, D., "Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator," IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol. 32, 1984, pp. 1109-1122.
12. Porter, J.E., and Boll, S.F., "Optimal estimators for spectral restoration of noisy speech," Proc. ICASSP, 1984, Vol. 1, p. 18A.2.
13. Quatieri, T.F., and McAulay, R.J., "Noise reduction using a soft-decision sine wave vector quantizer," Proc. ICASSP, 1990, p. 821.
14. Widrow, B., and Stearns, S., Adaptive Signal Processing. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1985.
15. Griffiths, L.J., and Jim, C., "An alternative approach to linearly constrained adaptive beamforming," IEEE Trans. on Antennas and Propagation, Vol. 30, 1982, pp. 29-34.
16. Peterson, P.M., Durlach, N.I., Rabinowitz, W.M., and Zurek, P.M., "Multimicrophone adaptive beamforming for interference reduction in hearing aids," Jour. Rehab. Res. and Dev., Vol. 24, 1987, pp. 103-110.
17. Greenberg, J.E., and Zurek, P.M., "Evaluation of an adaptive beamforming method for hearing aids," Jour. Acoust. Soc. Am., Vol. 91, 1992, pp. 1662-1676.
18. Lim, J.S., Speech Enhancement. Prentice-Hall, Englewood Cliffs, N.J., 1983.
19. Makhoul, J., and McAulay, R., Removal of Noise from Noise-Degraded Speech Signals. National Academy Press, Washington, D.C., 1989.
20. Boll, S.F., "Speech enhancement in the 1980s: noise suppression with pattern matching," Advances in Speech Signal Processing, S. Furui and M.M. Sondhi, eds., Marcel Dekker (New York), 1991, pp. 309-326.
21. Levitt, H., Bakke, M., Kates, J., Neuman, A., Schwander, T., and Weiss, M., "Signal processing for hearing impairment," Scand. Audiol., suppl. 38, 1993, pp. 7-19.
22. Kates, J., "Speech enhancement based on a sinusoidal model," Jour. Speech and Hearing Res., to be published.
23. Whitmal, N., and Rutledge, J., "Noise reduction algorithms for digital hearing aids," Proc. IEEE Conf. on Engineering in Medicine and Biology, 1994.
24. Saito, N., "Simultaneous Noise Suppression and Signal Compression using a Library of Orthonormal Bases and the Minimum Description Length Criterion," in Wavelets in Geophysics, E. Foufoula-Georgiou and P. Kumar, eds., Academic Press, Inc., 1994.
25. Rissanen, J., Stochastic Complexity in Statistical Inquiry, World Scientific, Singapore, 1989.
26. Coifman, R.R., and Wickerhauser, M.V., "Entropy-Based Algorithms for Best Basis Selection," IEEE Trans. Inf. Theory, Vol. 38, 1992, pp. 713-718.
27. Vaidyanathan, P.P., Multirate Systems and Filter Banks. Prentice-Hall, Englewood Cliffs, N.J., 1993.
28. Drake, L.A., Rutledge, J.C., and Cohen, J., "Wavelet analysis in recruitment of loudness compensation," IEEE Trans. Signal Proc., Vol. 41, 1993, pp. 3306-3312.
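ILLUSTRATIVE SKETCHES (added for clarity; not part of the original paper) The following Python sketch illustrates only the general structure of the running-average noise estimate described above: a power spectrum is computed from the residual rebuilt from coefficients the MDL criterion discards, and an exponentially weighted average of these spectra serves as the noise reference for power spectrum subtraction. The class name, smoothing factor, and use of an FFT-based power spectrum are assumptions for illustration, not details taken from the study.

import numpy as np

class RunningNoiseEstimate:
    # Exponentially weighted running average of residual power spectra.
    def __init__(self, frame_len=256, alpha=0.9):
        self.alpha = alpha                      # smoothing factor (assumed value)
        self.estimate = np.zeros(frame_len)     # averaged noise power spectrum

    def update(self, discarded_residual):
        # discarded_residual: the part of a frame rebuilt from coefficients the
        # MDL criterion rejected (i.e., the frame minus the retained signal).
        spectrum = np.abs(np.fft.fft(discarded_residual)) ** 2
        self.estimate = self.alpha * self.estimate + (1.0 - self.alpha) * spectrum
        return self.estimate

def power_spectrum_subtract(noisy_frame, noise_power):
    # Basic power-spectrum subtraction against the running noise estimate.
    spectrum = np.fft.fft(noisy_frame)
    clean_power = np.maximum(np.abs(spectrum) ** 2 - noise_power, 0.0)
    cleaned = np.sqrt(clean_power) * np.exp(1j * np.angle(spectrum))
    return np.real(np.fft.ifft(cleaned))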
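The second sketch shows one plausible way to prepare the test material described in the implementation results: white Gaussian noise is scaled so the mixture has a prescribed overall SNR (0, 5, or 10 dB), the signal is split into 256-sample frames with 50% overlap, and a relative SNR measure is formed from the RMS levels of a vowel and a silent closure. The helper names are hypothetical, and loading of the 8 kHz, 16-bit utterance is assumed to have been done elsewhere.

import numpy as np

def add_noise_at_snr(clean, snr_db, rng):
    # Scale white Gaussian noise so that the overall SNR of the mixture,
    # 10*log10(signal power / noise power), equals snr_db.
    signal_power = np.mean(clean ** 2)
    noise = rng.standard_normal(len(clean))
    scale = np.sqrt(signal_power / (np.mean(noise ** 2) * 10 ** (snr_db / 10.0)))
    return clean + scale * noise

def frame_signal(x, frame_len=256, overlap=0.5):
    # Split x into successive frames; 50% overlap gives a 128-sample hop.
    hop = int(frame_len * (1.0 - overlap))
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def relative_snr_db(vowel_rms, closure_rms):
    # Relative SNR from RMS levels of a vowel segment and a silent closure.
    return 20.0 * np.log10(vowel_rms / closure_rms)

# Usage (assumes `clean` already holds the utterance as a float vector):
# rng = np.random.default_rng(0)
# noisy = {snr: add_noise_at_snr(clean, snr, rng) for snr in (0, 5, 10)}
# frames = {snr: frame_signal(x) for snr, x in noisy.items()}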