PART IV.
Supplementary Materials


Chapter 8
Annotated Bibliography

In selecting books and major articles for this short bibliography, we sought to include those most useful to principal investigators (PIs) and project directors (PDs) who want information relevant to the tasks they will face and that this brief handbook could not cover in depth. Thus, we have not included every book that experts in qualitative research and mixed method evaluation would consider to be of major importance. Instead, we have included primarily reference materials that NSF/EHR grantees should find most useful, many of which are already listed in the references to Chapters 1 through 7.

Some of these publications are heavier on theory; others deal primarily with practice and the specific techniques used in qualitative data collection and analysis. With few exceptions, however, all of the publications selected for this bibliography contain a great deal of technical information and hands-on advice.

Denzin, Norman K., and Lincoln, Yvonna S. (Eds.). (1994). Handbook of Qualitative Research. Thousand Oaks, CA: Sage.

This formidable volume (643 pages set in small type) consists of 36 chapters written by experts on their respective topics, all of whom are passionate advocates of the qualitative method in social and educational research. The volume covers historical and philosophical perspectives, as well as detailed research methods. Extensive coverage is given to data collection and data analysis, and to the "art of interpretation" of findings obtained through qualitative research. Most of the chapters assume that the qualitative researcher functions in an academic setting and uses qualitative methods exclusively; the use of quantitative methods in conjunction with qualitative approaches, and the constraints that apply to evaluation research, are seldom considered. However, two chapters - "Designing Funded Qualitative Research," by Janice M. Morse, and "Qualitative Program Evaluation," by Jennifer C. Greene - contain a great deal of material of interest to PIs and PDs. PIs and PDs will also benefit from consulting other chapters, in particular "Interviewing," by Andrea Fontana and James H. Frey, and "Data Management and Analysis Methods," by A. Michael Huberman and Matthew B. Miles.

The Joint Committee on Standards for Educational Evaluation. (1994). The Program Evaluation Standards: How to Assess Evaluations of Educational Programs, 2nd Ed. Thousand Oaks, CA: Sage.

This new edition of the widely accepted Standards for Educational Evaluation is endorsed by professional associations in the field of education. The volume defines 30 standards for program evaluation, with examples of their application, and incorporates standards for quantitative as well as qualitative evaluation methods. The Standards are categorized into four groups: utility, feasibility, propriety, and accuracy. The Standards are intended to assist legislators, funding agencies, educational administrators, and evaluators. They are not a substitute for texts in technical areas such as research design or data collection and analysis. Instead they provide a framework and guidelines for the practice of responsible and high-quality evaluations. For readers of this handbook, the section on Accuracy Standards, which includes discussions of quantitative and qualitative analysis, justified conclusions, and impartial reporting, is especially useful.

Patton, Michael Quinn. (1990). Qualitative Evaluation and Research Methods, 2nd Ed. Newbury Park, CA: Sage.

This is a well-written book with many practical suggestions, examples, and illustrations. The first part covers, in jargon-free language, the conceptual and theoretical issues in the use of qualitative methods; for practitioners, the second and third parts, dealing with design, data collection, analysis, and interpretation, are especially useful. Patton consistently emphasizes a pragmatic approach: he stresses the need for flexibility, common sense, and the choice of methods best suited to produce the needed information. The last two chapters, "Analysis, Interpretation and Reporting" and "Enhancing the Quality and Credibility of Qualitative Analysis," are especially useful for PIs and PDs of federally funded research. They stress the need for utilization-focused evaluation and the evaluator's responsibility for providing data and interpretations that specific audiences will find credible and persuasive.

Marshall, Catherine, and Rossman, Gretchen B. (1995). Designing Qualitative Research, 2nd Ed. Thousand Oaks, CA: Sage.

This small book (178 pages) does not deal specifically with the performance of evaluations; it is written primarily for graduate students as a practical guide to preparing research proposals based on qualitative methods. However, most of the material presented is relevant and appropriate for project evaluation. In succinct and clear language, the book discusses the main ingredients of a sound research project: framing evaluation questions, designing the research, data collection methods and strategies, and data management and analysis. The chapter on data collection methods is comprehensive and includes some of the less widely used techniques (such as films and videos, unobtrusive measures, and projective techniques) that may be of interest for the evaluation of some projects. There are also useful tables (e.g., identifying the strengths and weaknesses of various methods for specific purposes; managing time and resources), as well as a series of vignettes throughout the text illustrating specific strategies used by qualitative researchers.

Lofland, John, and Lofland, Lyn H. (1995). Analyzing Social Settings: A Guide to Qualitative Observation and Analysis, 3rd Ed. Belmont, CA: Wadsworth.

As the title indicates, this book is designed as a guide to field studies that use participant observation and intensive interviewing as their main data collection techniques. The authors' vast experience and knowledge in these areas result in a thoughtful presentation of both technical topics (such as the best approach to compiling field notes) and nontechnical issues that may be equally important in the conduct of qualitative research. The chapters that discuss gaining access to informants, maintaining access for the duration of the study, and dealing with confidentiality and ethical concerns are especially helpful for PIs and PDs who seek to collect qualitative material. Also useful is Chapter 5, "Logging Data," which deals with all aspects of the interviewing process and includes examples of question formulation, the use of interview guides, and the write-up of data.

Miles, Matthew B., and Huberman, A. Michael. (1994). Qualitative Data Analysis: An Expanded Sourcebook, 2nd Ed. Thousand Oaks, CA: Sage.

Although this book is not specifically oriented to evaluation research, it is an excellent tool for evaluators because, in the authors' words, "this is a book for practicing researchers in all fields whose work involves the struggle with actual qualitative data analysis issues." It has the further advantage that many examples are drawn from the field of education. Because analysis cannot be separated from research design issues, the book takes the reader through the sequence of steps that lay the groundwork for sound analysis, including a detailed discussion of focusing and bounding the collection of data, as well as management issues bearing on analysis. The subsequent discussion of analysis methods is very systematic, relying heavily on data displays, matrices, and examples to arrive at meaningful descriptions, explanations, and the drawing and verifying of conclusions. An appendix covers choice of software for qualitative data analysis. Readers will find this a very comprehensive and useful resource for the performance of qualitative data reduction and analysis.

New Directions for Program Evaluation, Vols. 35, 60, 61. A quarterly publication of the American Evaluation Association, published by Jossey-Bass, Inc., San Francisco, CA.

Almost every issue of this journal contains material of interest to those who want to learn about evaluation, but the three issues described here are especially relevant to the use of qualitative methods in evaluation research. Vol. 35 (Fall 1987), Multiple Methods in Program Evaluation, edited by Melvin M. Mark and R. Lance Shotland, contains several articles discussing the combined use of quantitative and qualitative methods in evaluation designs. Vol. 60 (Winter 1993), Program Evaluation: A Pluralistic Enterprise, edited by Lee Sechrest, includes the article "Critical Multiplism: A Research Strategy and its Attendant Tactics," by William R. Shadish, in which the author provides a clear discussion of the advantages of combining several methods in reaching valid findings. In Vol. 61 (Spring 1994), The Qualitative-Quantitative Debate, edited by Charles S. Reichardt and Sharon F. Rallis, several of the contributors take a historical perspective in discussing the long-standing antagonism between qualitative and quantitative researchers in evaluation. Others look for ways of integrating the two perspectives. The contributions by several experienced nonacademic program and project evaluators (Rossi, Datta, Yin) are especially interesting.

Greene, Jennifer C., Caracelli, Valerie J., and Graham, Wendy F. (1989). "Toward a Conceptual Framework for Mixed-Method Evaluation Designs." Educational Evaluation and Policy Analysis, Vol. 11, No. 3.

This article presents a framework for the design and implementation of mixed method evaluations, based both on the theoretical literature and on a review of 57 mixed method evaluations. The authors identify five purposes for using mixed methods and present the recommended design characteristics for each.

Yin, Robert K. (1989). Case Study Research: Design and Methods. Newbury Park, CA: Sage.

The author's background in experimental psychology may explain the emphasis in this book on the use of rigorous methods in the conduct and analysis of case studies, thus minimizing what many believe is a spurious distinction between quantitative and qualitative studies. While arguing eloquently that case studies are an important tool when an investigator (or evaluator) has little control over events and when the focus is on a contemporary phenomenon within some real-life context, the author insists that case studies be designed and analyzed so as to provide generalizable findings. Although the focus is on design and analysis, data collection and report writing are also covered.

Krueger, Richard A. (1988). Focus Groups: A Practical Guide for Applied Research. Newbury Park, CA: Sage.

Krueger is well known as an expert on focus groups; the bulk of his experience and the examples cited in his book are derived from market research. This is a useful book for the inexperienced evaluator who needs step-by-step advice on selecting focus group participants, the process of conducting focus groups, and analyzing and reporting results. The author writes clearly and avoids social science jargon, while discussing the complex problems that focus group leaders need to be aware of. This book is best used in conjunction with some of the other references cited here, such as the Handbook of Qualitative Research (Ch. 22) and Focus Groups: Theory and Practice.

Stewart, David W., and Shamdasani, Prem N. (1990). Focus Groups: Theory and Practice. Newbury Park, CA: Sage.

This book differs from many others published in recent years, which primarily address techniques for recruiting participants and the actual conduct of focus group sessions. Instead, these authors pay considerable attention to the fact that focus groups are, by definition, an exercise in group dynamics, which must be taken into account when interpreting the results and attempting to draw conclusions that might apply to a larger population. The book also gives adequate coverage to practical issues such as the recruitment of participants, the role of the moderator, and appropriate techniques for data analysis.

Weiss, Robert S. (1994). Learning from Strangers: The Art and Method of Qualitative Interview Studies. New York: The Free Press.

After explaining the different functions of quantitative and qualitative interviews in the conduct of social science research, the author discusses in considerable detail the various steps of the qualitative interview process. Drawing largely on his own extensive experience in planning and carrying out qualitative interview studies, he discusses respondent selection and recruitment; preparing for the interview (including such topics as the pros and cons of taping, the use of interview guides, and interview length); the interviewing relationship; issues in interviewing (including confidentiality and the validity of the information provided by respondents); data analysis; and report writing. Lengthy excerpts from actual interviews illustrate the topics under discussion. This is a clearly written, very useful guide, especially for newcomers to this data collection method.

Wolcott, Harry F. (1994). Transforming Qualitative Data: Description, Analysis and Interpretation. Thousand Oaks, CA: Sage.

This book is written by an anthropologist who has done fieldwork for studies focused on education issues in a variety of cultural settings; his emphasis throughout is "on what one does with data rather than on collecting it." His frank and meticulous description of the ways in which he assembled his data, interacted with informants, and reached new insights based on the gradual accumulation of field experiences makes interesting reading. It also points to the pitfalls in the interpretation of qualitative data, which he sees as the most difficult task for the qualitative researcher.

U.S. General Accounting Office. (1990). Case Study Evaluations. Transfer Paper 10.1.9, issued by the Program Evaluation and Methodology Division. Washington, DC: GAO.

This paper presents an evaluation perspective on case studies, defines them, and determines their appropriateness in terms of the type of evaluation question posed. Unlike the traditional, academic definition of the case study, which calls for long-term participation by the evaluator or researcher in the site to be studied, the GAO sees a wide range of shorter term applications for case study methods in evaluation. These include their use in conjunction with other methods for illustrative and exploratory purposes, as well as for the assessment of program implementation and program effects. Appendix 1 includes a very useful discussion dealing with the adaptation of the case study method for evaluation and the modifications and compromises that evaluators - unlike researchers who adopt traditional field work methods - are required to make.

