PART III.
Designing and Reporting Mixed Method Evaluations


Chapter 5
Overview of the Design Process for Mixed Method Evaluations

One size does not fit all. When it comes to designing an evaluation, experience has shown that the evaluator must keep in mind that the specific questions being addressed, and the audience for the answers, should drive the selection of the evaluation design and the data collection tools.

Chapter 2 of the earlier User-Friendly Handbook for Project Evaluation (National Science Foundation, 1993) deals at length with designing and implementing an evaluation and identifies the steps for carrying one out.

Readers of this volume who are unfamiliar with the overall process are urged to read that chapter. In this chapter, we briefly review the process of designing an evaluation, including the development of evaluation questions, the selection of data collection methodologies, and related technical issues, with special attention to the advantages of mixed method designs. We stress mixed method designs because they frequently provide a more comprehensive and credible set of understandings about a project's accomplishments than studies based on either quantitative or qualitative data alone.

 

Developing Evaluation Questions

The development of evaluation questions consists of several steps: describing the project and the conceptual framework behind it, defining its goals and objectives in measurable terms, identifying the key stakeholders and audiences, and formulating, prioritizing, and selecting among potential evaluation questions.

The process is not an easy one, as experienced evaluators have noted (Patton, 1990).

We have developed a set of tools intended to help navigate these initial steps of evaluation design. These tools are simple forms or matrices that help to organize the information needed to identify and select among evaluation questions. Since the objectives of the formative and summative evaluations are usually different, separate forms need to be completed for each.

Worksheet 1 provides a form for briefly describing the project, the conceptual framework that led to its initiation, and its proposed activities, and for summarizing its salient features. The information on this form will be used in the design effort. A side benefit of completing the form and sharing it among project staff is that it helps ensure a common understanding of the project's basic characteristics: newcomers to a project, and even those who have been with it from the start, sometimes develop divergent ideas about its emphases and goals.

 

WORKSHEET 1:
DESCRIBE THE INTERVENTION

1. State the problem/question to be addressed by the project:

 

2. What intervention(s) are under investigation?

 

3. State the conceptual framework that led to the decision to undertake this intervention and its proposed activities.

 

4. Who are the target group(s)?

 

5. Who are the stakeholders?

 

6. How is the project going to be managed?

 

7. What is the total budget for this project? How are major components budgeted?

 

8. List any other key points/issues.

 

 

Worksheet 2 provides a format for further describing the goals and objectives of the project in measurable terms. This step, essential in developing an evaluation design, can prove surprisingly difficult. A frequent problem is that goals or objectives may initially be stated in such global terms that it is not readily apparent how they might be measured. For example, the statement "improve the education of future mathematics and science educators" needs more refinement before it can be used as the basis for structuring an evaluation.
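
One way to see what "measurable terms" demands is to restate a global goal as objectives with explicit indicators and standards. The sketch below is a minimal, hypothetical illustration in Python; the objectives, indicators, and thresholds are our own invented examples, not content from the worksheet.

```python
# Hypothetical refinement of the global goal "improve the education of
# future mathematics and science educators" into measurable objectives.
# Each objective pairs an indicator (what is measured) with a standard
# (the level that counts as success); all values here are invented.

objectives = [
    {
        "objective": "Increase preservice teachers' use of inquiry-based lessons",
        "indicator": "share of observed lessons rated inquiry-based",
        "standard": 0.50,   # at least 50% of observed lessons
    },
    {
        "objective": "Improve participants' mathematics content knowledge",
        "indicator": "mean gain on a content pre/post test",
        "standard": 10.0,   # at least a 10-point average gain
    },
]

def is_met(observed_value, objective):
    """An objective is measurable when a check like this can be written."""
    return observed_value >= objective["standard"]

print(is_met(0.62, objectives[0]))   # True: 62% of lessons were inquiry-based
```

If no such indicator and standard can be written down, the objective needs further refinement before it can anchor an evaluation.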

Worksheets 3 and 4 assist the evaluator in identifying the key stakeholders in the project and in clarifying what each might want an evaluation to address. Stakeholder involvement has become an important part of evaluation design, as it is now recognized that an evaluation must address the needs of individuals beyond the funding agency and the project director.

Worksheet 5 provides a tool for organizing and selecting among possible evaluation questions. It points to several criteria that should be considered. Who wants to know? Will the information be new or confirmatory? How important is the information to various stakeholders? Are there sufficient resources to collect and analyze the information needed to answer the questions? Can the question be addressed in the time available for the evaluation?
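
To make these screening criteria concrete, the sketch below shows one way Worksheet 5 might be represented in code so that candidate questions can be rated and filtered systematically. It is a minimal Python illustration; the field names, thresholds, and sample questions are our own assumptions, not part of the worksheet.

```python
# Hypothetical sketch: Worksheet 5 as a small data structure, so that
# candidate evaluation questions can be screened against the criteria
# discussed above (who wants to know, importance, resources, time).

from dataclasses import dataclass

@dataclass
class CandidateQuestion:
    text: str
    stakeholders: list          # who wants to know?
    importance: str             # "high" | "medium" | "low"
    needs_new_data: bool        # new information, or confirmatory?
    est_cost: float             # resources required, in dollars
    months_needed: int          # time needed to answer

def prioritize(questions, budget, months_available):
    """Assign H/M/L/Eliminate, roughly following Worksheet 5's criteria."""
    for q in questions:
        if q.est_cost > budget or q.months_needed > months_available:
            yield q.text, "Eliminate"   # not answerable within constraints
        elif q.importance == "high":
            yield q.text, "High"
        elif q.importance == "medium":
            yield q.text, "Medium"
        else:
            yield q.text, "Low"

candidates = [
    CandidateQuestion("Did teacher practice change?", ["funders", "staff"],
                      "high", True, 12000.0, 9),
    CandidateQuestion("Were workshops well attended?", ["staff"],
                      "low", False, 500.0, 1),
]
for text, rating in prioritize(candidates, budget=20000.0, months_available=12):
    print(f"{rating:9s} {text}")
```

The point is not the code itself but the discipline it imposes: every candidate question must be rated on each criterion before it earns a place in the design.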

Once the set of evaluation questions is determined, the next step is selecting how each will be addressed and developing an overall evaluation design. It is at this point that decisions regarding the types and mixture of data collection methodologies, sampling, scheduling of data collection, and data analysis need to be made. These decisions are quite interdependent, and the data collection techniques selected will have important implications for both scheduling and analysis plans.

 

WORKSHEET 2:
DESCRIBE PROJECT GOALS AND OBJECTIVES

1. Briefly describe the purpose of the project.

 

2. State the above in terms of a general goal.

 

3. State the first objective to be evaluated as clearly as you can.

 

4. Can this objective be broken down further? If so, break it down to the smallest unit; it must be clear what, specifically, you hope to see documented or changed.

 

5. Is this objective measurable (can indicators and standards be developed for it)? If not, restate it.

 

6. Formulate one or more questions that will yield information about the extent to which
the objective was addressed.

 

7. Once you have completed the above steps, go back to #3 and write the next objective. Continue with steps 4, 5, and 6.

 

 

WORKSHEET 3:
IDENTIFY KEY STAKEHOLDERS AND AUDIENCES

 

Audience | Spokesperson | Values, Interests, Expectations, etc., That Evaluation Should Address

(Blank rows are provided for entries.)

 

WORKSHEET 4:
STAKEHOLDERS' INTEREST IN
POTENTIAL EVALUATION QUESTIONS

Question | Stakeholder Group(s)

(Blank rows are provided for entries.)

WORKSHEET 5:
PRIORITIZE AND ELIMINATE QUESTIONS

Take each question from Worksheet 4 and apply the criteria below.

Question | Which Stakeholder(s)? | Importance to Stakeholders | New Data Collection? | Resources Required | Timeframe | Priority (H = High, M = Medium, L = Low, E = Eliminate)

(Blank rows are provided; circle H, M, L, or E in the Priority column for each question.)

 

 

Selecting Methods for Gathering the Data:
The Case for Mixed Method Designs

As discussed in Chapter 1, mixed method designs can yield richer, more valid, and more reliable findings than evaluations based on either the qualitative or quantitative method alone. A further advantage is that a mixed method approach is likely to increase the acceptance of findings and conclusions by the diverse groups that have a stake in the evaluation.

When designing a mixed method evaluation, the investigator must consider two factors: which method is best suited to answering each evaluation question, and how the qualitative and quantitative components will be combined and sequenced.

To recapitulate the earlier summary of the main differences between the two methods: qualitative methods provide a better understanding of the context in which the intervention is embedded, while quantitative data are usually needed when a major goal of the evaluation is the generalizability of findings. When the answer to an evaluation question calls for understanding the perceptions and reactions of the target population, a qualitative method (indepth interview, focus group) is most appropriate. If a major evaluation question calls for assessing the behavior of participants or other individuals involved in the intervention, trained observers will provide the most useful data.

In Chapter 1, we also showed some of the many ways in which quantitative and qualitative techniques can be combined to yield more meaningful findings. Specifically, evaluators have successfully combined the two methods to test the validity of results (triangulation), to improve data collection instruments, and to explain findings.

A good design for a mixed method evaluation should include specific plans for collecting and analyzing the data through the combined use of both methods. While it may often be difficult to develop a detailed analysis plan at the outset, having such a plan is very useful when designing data collection instruments and when organizing the narrative data obtained through qualitative methods. Considerable up-front thinking is needed about probable data analysis plans and about strategies for synthesizing the information from various sources. Initial decisions can be made regarding the extent to which qualitative techniques will provide full-blown, stand-alone descriptions versus commentaries or illustrations that give greater meaning to quantitative data. Preliminary strategies for combining information from different data sources need to be formulated, and schedules for initiating the data analysis need to be established. The early findings thus generated should be used to reflect on the evaluation design and to initiate any changes that are warranted.

While in any good evaluation data analysis is to some extent an iterative process, it is important to think things through as much as possible at the outset to avoid being left awash in data, or with data bearing on peripheral questions rather than on those germane to the study's goals and objectives (see Chapter 4; also see Miles and Huberman, 1994, and Greene, Caracelli, and Graham, 1989).
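
As a concrete illustration of one such synthesis step, the hypothetical sketch below checks whether qualitative interview codes and quantitative survey results point in the same direction at each site, flagging divergent sites for follow-up; this is one simple form of triangulation. All site names, scores, and thresholds are invented for illustration.

```python
# Illustrative triangulation check: do hand-coded interview comments and
# survey means agree at each site? Divergent sites merit follow-up.
# Every figure and cutoff in this sketch is invented.

survey_means = {"site_a": 4.2, "site_b": 2.1, "site_c": 3.9}   # 1-5 scale

# Counts of positive vs. negative interview comments, coded by hand.
interview_codes = {
    "site_a": {"positive": 11, "negative": 2},
    "site_b": {"positive": 3, "negative": 9},
    "site_c": {"positive": 4, "negative": 6},
}

for site, mean in survey_means.items():
    codes = interview_codes[site]
    qual_positive = codes["positive"] > codes["negative"]
    quant_positive = mean >= 3.0            # assumed cutoff for "favorable"
    status = "agree" if qual_positive == quant_positive else "DIVERGE - follow up"
    print(f"{site}: survey={mean:.1f}, interviews={codes}, {status}")
```

A divergent site is not a failure of either method; it is exactly the kind of finding that mixed method designs exist to surface and explain.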

 

Other Considerations in Designing
Mixed Method Evaluations

Sampling. Except in the rare cases when a project is very small and affects only a few participants and staff members, it will be necessary to deal with a subset of sites and/or informants for budgetary and managerial reasons. Sampling thus becomes an issue in the use of mixed methods, just as in the use of quantitative methods. However, the sampling approaches differ sharply depending on the method used.

The preferred sampling methods for quantitative studies are those that enable researchers to make generalizations from the sample to the universe, e.g., all project participants, all sites, all parents. Random sampling is the appropriate method for this purpose. Statistically valid generalizations are seldom a goal of qualitative research; rather, the qualitative investigator is primarily interested in locating information-rich cases for study in depth. Purposeful sampling is therefore practiced, and it may take many forms. Instead of studying a random sample of a project's participants, evaluators may choose to concentrate their investigation on the lowest achievers admitted to the program. When selecting classrooms for observation of the implementation of an innovative practice, the evaluator may use deviant-case sampling, choosing one classroom where the innovation was reported "most successfully" implemented and another where major problems have been reported. Depending on the evaluation questions to be answered, many other sampling methods, including maximum variation sampling, critical case sampling, or even typical case sampling, may be appropriate (Patton, 1990). When sampling subjects for indepth interviews, the investigator has considerable flexibility with respect to sample size.
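
The contrast between the two sampling logics can be sketched in a few lines of code. In the hypothetical example below, a simple random sample supports generalization to all participants, while purposeful strategies (lowest achievers, deviant cases) select information-rich cases. The participant records and field names are invented.

```python
# A minimal sketch contrasting random and purposeful sampling.
# The 200 participant records and their pretest scores are fabricated.

import random

participants = [{"id": i, "pretest": random.randint(20, 95)} for i in range(200)]

# Quantitative logic: a simple random sample supports generalization
# from the sample to all 200 participants.
random_sample = random.sample(participants, k=30)

# Qualitative logic: purposeful sampling seeks information-rich cases,
# e.g., the ten lowest achievers admitted to the program.
lowest_achievers = sorted(participants, key=lambda p: p["pretest"])[:10]

# Deviant-case sampling: the extremes at both ends of the distribution.
by_score = sorted(participants, key=lambda p: p["pretest"])
deviant_cases = [by_score[0], by_score[-1]]
```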

In many evaluation studies, the design calls for studying a population at several points in time, e.g., students in the 9th grade and then again in the 12th grade. There are two ways of carrying out such trend studies. In a longitudinal approach, data are collected from the same individuals at designated time intervals; in a cross-sectional approach, a new sample is drawn for each successive data collection. Longitudinal designs, which require collecting information from the same students or teachers at several points in time, are in most cases best, but they are often difficult and expensive to carry out because students move and teachers are reassigned. Furthermore, the loss of respondents who cannot be located, or who decline to cooperate, is often a major problem. Depending on the nature of the evaluation and the size of the population studied, successive cross-sectional designs may yield good results.
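
The difference between the two designs can be illustrated with a small, entirely hypothetical data set: the longitudinal analysis follows the same students across waves (and shrinks with attrition), while the cross-sectional analysis simply compares cohort means from whoever was measured at each wave.

```python
# Sketch of the two trend designs described above, with invented scores.
# Longitudinal: the same students measured in grade 9 and grade 12.
# Cross-sectional: independent samples at each wave.

grade9 = {"s01": 61, "s02": 74, "s03": 58}     # student_id -> score
grade12 = {"s01": 70, "s02": 71, "s04": 66}    # s03 moved away; s04 is new

# Longitudinal analysis uses only students present at both waves,
# so attrition (s03) shrinks the analyzable panel.
panel = {sid: (grade9[sid], grade12[sid]) for sid in grade9 if sid in grade12}
gains = [post - pre for pre, post in panel.values()]
print("mean individual gain:", sum(gains) / len(gains))

# Cross-sectional analysis compares wave means over everyone sampled
# at each wave, trading individual growth data for easier collection.
mean9 = sum(grade9.values()) / len(grade9)
mean12 = sum(grade12.values()) / len(grade12)
print("change in cohort mean:", mean12 - mean9)
```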

Timing, sequencing, frequency of data collection, and cost. The evaluation questions and the analysis plan will largely determine when data should be collected and how often focus groups, interviews, or observations should be scheduled. In mixed method designs, when the findings of qualitative data collection will affect the structuring of quantitative instruments (or vice versa), proper sequencing is crucial. As a general rule, project evaluations are strongest when data are collected at a minimum of two points in time: before an innovation is first introduced, and after it has been in operation for a sizable period.

Throughout the design process, it is essential to keep an eye on the budgetary implications of each decision. As was pointed out in Chapter 1, costs depend not on the choice between qualitative and quantitative methods, but on the number of cases required for analysis and the quality of the data collection. Evaluators must resist the temptation to plan for a more extensive data collection than the budget can support, which may result in lower data quality or the accumulation of raw data that cannot be processed and analyzed.
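
A back-of-the-envelope check like the one sketched below can help keep a data collection plan within budget. Every unit cost and case count in it is an invented example, not a recommendation from this handbook.

```python
# Hypothetical cost check for a data collection plan. The unit costs
# and case counts below are assumed figures for illustration only.

plan = {
    # method: (number_of_cases, cost_per_case_in_dollars)
    "survey": (300, 15),
    "indepth_interview": (20, 250),
    "classroom_observation": (12, 400),
}

budget = 12000
total = sum(n * unit for n, unit in plan.values())
print(f"planned cost: ${total:,} against budget ${budget:,}")
if total > budget:
    # Per the tradeoff rule discussed below: cut cases, not depth.
    print("over budget - reduce the number of cases, not analysis depth")
```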

Tradeoffs in the design of evaluations based on mixed methods. All evaluators find that modifications and tradeoffs become necessary, both during the design phase, when plans are carefully crafted according to experts' recommendations, and later, when fieldwork gets under way. Budget limitations, problems in accessing fieldwork sites and administrative records, and difficulties in recruiting staff with appropriate skills are among the recurring problems that should be anticipated as far as possible during the design phase, but that may also require modifying the design at a later time.

What tradeoffs are least likely to impair the integrity and usefulness of mixed method evaluations if the evaluation plan as designed cannot be fully implemented? A good general rule for dealing with budget problems is to sacrifice the number of cases or the number of questions to be explored (this may mean ignoring the needs of some low-priority stakeholders), but to preserve the depth necessary to fully and rigorously address the issues targeted.

When it comes to design modifications, it is of course essential that the evaluator be closely involved in the decisionmaking. But close contact among the evaluator, the project director, and other project staff is essential throughout the life of the project. In particular, some project directors tend to see the summative evaluation as an add-on, that is, something to be done - perhaps by a contractor - after the project has been completed. But the quality of the evaluation depends on record keeping and data collection during the life of the project, which the evaluator should closely monitor.

In the next chapter, we illustrate some of the issues related to designing an evaluation, using the hypothetical example provided in Chapter 2.

 

References

Greene, J.C., Caracelli, V.J., and Graham, W.F. (1989). Toward a Conceptual Framework for Mixed-Method Evaluation Designs. Educational Evaluation and Policy Analysis, 11(3), 255-274.

Miles, M.B., and Huberman, A.M. (1994). Qualitative Data Analysis: An Expanded Sourcebook (2nd ed.). Thousand Oaks, CA: Sage.

National Science Foundation. (1993). User-Friendly Handbook for Project Evaluation (NSF 93-152). Arlington, VA: Author.

Patton, M.Q. (1990). Qualitative Evaluation and Research Methods (2nd ed.). Newbury Park, CA: Sage.