ATTACHMENT 1

Using an Assessment Process to Measure
Educational Research Project Success

Barbara M. Olds, Ronald L. Miller
Colorado School of Mines

 

INTRODUCTION TO THE PROCESS

          Just as in technical research, conducting high quality scholarship and innovation in educational research requires rigorous assessment and evaluation of project results. Although collecting assessment data and analyzing the results may be more complex in educational research projects, the goal is the same -- to determine as reliably as possible if the stated project objectives have been met. Assessing (collecting and analyzing data) and evaluating (interpreting and reporting the data) the success of a project requires the use of a well-articulated, multi-step process consisting of the following steps [1]: 1) develop research questions, 2) match questions with information-gathering techniques, 3) collect data, 4) analyze data, and 5) provide evaluation information to interested audiences.

          This process is strikingly similar to better known engineering problem-solving and design processes, suggesting that most engineering educational researchers are more familiar with the attributes of a good project evaluation plan than they probably realize. The objective of this paper is to provide an introduction to the process of assessing and evaluating educational research projects for project directors who are not experienced evaluators. Examples of several project evaluation plans will be discussed to illustrate important concepts and to warn against common pitfalls.

THE PROJECT EVALUATION MATRIX

          We have found that the easiest way to begin developing a project evaluation plan is to use the evaluation matrix shown in Table 1 (adapted from [1]). The matrix includes a series of questions designed to guide project directors through the planning process. Questions are posed to help develop each of the following aspects of the plan: the research question(s), the implementation strategy, the evaluation methods, the timeline, and the audience/dissemination strategy.

Each of these aspects should be treated as iterative and fluid as the project progresses. Additional details to help project directors work through the planning process are discussed below.

Table 1 -- Project Evaluation Matrix

Research Question:       What are the project objectives? What questions are you trying to answer?
Implementation Strategy: How will the objectives be met? Which project activities help you meet each objective?
Evaluation Methods:      How will you know the objectives have been met? What measurements will be made? On whom?
Timeline:                When will measurements be made?
Audience/Dissemination:  Who needs to know the results? How can you convince them the objectives were met?

© B.M. Olds and R.L. Miller, 1997
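
          For readers who plan in code, the following minimal sketch (our own illustration, not part of the handbook [1] or this paper) shows one way the five aspects of the matrix could be kept together as a simple data structure, so that each research question stays linked to its strategy, methods, timeline, and audience. All class and field names are illustrative choices.

```python
# A minimal sketch (not from the paper) of the evaluation matrix in Table 1
# as a data structure: each row keeps a research question linked to its
# implementation strategy, evaluation methods, timeline, and audience.
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationRow:
    research_question: str        # What are the project objectives?
    implementation_strategy: str  # How will the objectives be met?
    evaluation_methods: str       # What measurements will be made? On whom?
    timeline: str                 # When will measurements be made?
    audience_dissemination: str   # Who needs to know the results?

@dataclass
class EvaluationPlan:
    project: str
    rows: List[EvaluationRow] = field(default_factory=list)

# Example row drawn from the expert system project in Table 2.
plan = EvaluationPlan(project="Expert system software development")
plan.rows.append(EvaluationRow(
    research_question="Do results from expert system software agree with results from human experts?",
    implementation_strategy="Collect data with both the beta software and traditional expert interviews.",
    evaluation_methods="Statistically compare results; reliable if correlation exceeds 0.8 (n = 20).",
    timeline="Second year of the project, after the beta version is written.",
    audience_dissemination="Human experts and programmers, to improve the software.",
))
```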

          Research Question(s). Developing clear and measurable research question(s) is the key to the success of an evaluation plan. The following steps should be completed to obtain useful research questions [1]:

          Many project directors have good ideas about general project goals, but too few spend the time necessary to articulate clear research questions before they undertake the project. They need to ask such questions as "What are the project objectives?" and "What questions are we trying to answer?" before the project gets underway, not at the end. These questions should be clear and measurable -- in our experience, project directors articulate many noble but unmeasurable objectives. We emphasize the need for clear, specific project objectives and performance measures in research projects. For example, "Does the expert system software work?" is the kind of research question we often see, but what does it mean for software to "work"? A much more measurable question would be, "Do results from the expert system software agree with results from human experts?" Another vague research question might be, "Do students know more about design after completing the new design course?" Again, a term like "know more" is vague and difficult to measure. The research question becomes much more measurable when rephrased as: "After completing the new design course, can students articulate the design process and use it to complete a project?" Both of the main components of this question -- "articulate the design process" and "use it to complete a project" -- are measurable.

          In addition to producing measurable research questions, completing the steps listed above will force those working on the project to agree on the project goals and objectives and will help them clarify vague terms in their own minds. Not everyone would agree with our rephrasing of the example questions, but the project team would at least be forced to agree on what they mean by vague phrases such as "works" and "know."

          To the extent possible, project directors should articulate performance measures for each research question to be evaluated. A performance measure "defines the level of performance required to meet the objective" [2] and indicates the types of data that will be used to provide supportive evidence. For example, "75% of the students who take the pilot course will remain in electrical engineering two semesters later" represents a measurable level of performance in a project designed to improve retention in electrical engineering.
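
          As a hedged illustration of how such a performance measure might be checked, the short sketch below computes a retention rate from hypothetical enrollment records and compares it with the stated 75% target. The record format, field names, and data are assumptions made for the example, not part of the original project.

```python
# A hedged sketch: check the performance measure "75% of the students who
# take the pilot course will remain in electrical engineering two semesters
# later" against enrollment records. The record format is assumed.

def retention_rate(students):
    """Fraction of pilot-course students still enrolled two semesters later."""
    retained = sum(1 for s in students if s["enrolled_two_semesters_later"])
    return retained / len(students)

# Invented records for illustration only.
pilot_students = [
    {"id": 1, "enrolled_two_semesters_later": True},
    {"id": 2, "enrolled_two_semesters_later": True},
    {"id": 3, "enrolled_two_semesters_later": False},
    {"id": 4, "enrolled_two_semesters_later": True},
]

TARGET = 0.75  # the stated performance measure
rate = retention_rate(pilot_students)
print(f"Retention rate: {rate:.0%}; objective {'met' if rate >= TARGET else 'not met'}")
```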

          Implementation Strategy. It is important to make sure that research questions and implementation strategies mesh. For example, important questions such as "How will the objectives be met?" and "Which project activities help to meet each objective?" should be answered as the implementation strategy is developed. We have seen assessment plans, for example, with numerous lofty goals for student achievement between entry and graduation; when pressed, however, the faculty developing these goals sometimes cannot point to any place in the curriculum where the goals can actually be met. If students are to learn the design process, communicate effectively, or gain an understanding of contemporary issues, they must have the opportunity within the curriculum and/or co-curriculum to learn and practice these skills. Just as a researcher carefully plans an experiment, so the project director must carefully plan the implementation of those parts of the project designed to meet specific goals and objectives.

          Evaluation Methods. Once the research questions and the implementation strategy have been developed, the general methodological approach(es) to the problem should be selected. The basic questions here are "How will you know the objectives have been met?", "What measurements will be made?", and "On whom?" The methods selected depend on many factors, including the time and money available, but several rules of thumb apply:

Many assessment measures are possible and available to the interested project director, including standardized exams, questionnaires, surveys, focus groups, ethnographic studies, data analysis, protocol analysis, etc. Resources on all of these techniques are widely available (see [1] and [2] for annotated bibliographies). Once the methods are selected, the data need to be collected and analyzed. During this part of the process, the needs of the respondents should be considered and disruption kept to a minimum. Data collectors must be trained and should be unbiased.

          Timeline. The key question here is "When will the measurements be made?" Once the data are gathered and prepared for analysis, the initial analysis should be performed based on the stated performance measures included in the evaluation plan. Once initial results have been obtained, other analyses can be conducted if appropriate. Finally, findings should be integrated and synthesized into a coherent evaluation of the project.

          Audience/Dissemination. Here the key questions are "Who needs to know the results?" and "How can you convince them the objectives were met?" From the beginning of the project the stakeholders should be identified and their needs analyzed. Different audiences clearly have different agendas and will need different information presented in different ways to be convinced that the project was a success. Evaluation results should be customized to meet the needs of various audiences and delivered in time to be useful. For example, a funding agency such as NSF might be interested in the effect of a newly developed course on aggregate student learning, while college administrators will be interested in the cost/benefit analysis of the course, and parents will be interested in how well their own child learns in the course. All are valid issues and the evaluation plan must address each audience’s unique concerns.

EVALUATION PLAN EXAMPLES

          Several generic examples of project evaluation may serve to illustrate the points that we have made. First, we use the example of a piece of expert system software being developed with help from a federal grant. As shown in Table 2, the research question here is: "Do results from expert system software agree with results from human experts?" The implementation strategy is to collect data using both the beta version of the expert system software and traditional interviews conducted by human experts. The evaluation method in this case is a statistical comparison of quantitative results where the software is deemed reliable if the correlation coefficient exceeds 0.8 for a sample of 20 students (here we see a very specific performance measure to define software "reliability"). Measurements will be completed during the second year of the project after the beta version of the software has been written. The statistical results will be used by human experts and programmers to improve the software and results will be disseminated to educators and human experts in the field who will be interested in the possibility of using an expert system to emulate a more costly and time-consuming data-gathering method.
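
          The sketch below illustrates, under stated assumptions, the kind of calculation this evaluation method implies: pair each student's score from the expert system with the score assigned by human experts and compute a Pearson correlation coefficient, applying the 0.8 threshold from Table 2. The scores shown are invented for illustration; a real evaluation would use the actual data from the 20-student sample.

```python
# Illustrative only: compare scores produced by the expert system with scores
# assigned by human experts for the same students, then apply the 0.8
# reliability threshold from Table 2. The scores below are invented.
from statistics import correlation  # Pearson's r; available in Python 3.10+

software_scores = [72, 85, 60, 90, 78, 66, 88, 74, 81, 59,
                   93, 70, 84, 77, 62, 89, 75, 68, 80, 71]
expert_scores   = [70, 83, 64, 92, 75, 68, 85, 72, 80, 61,
                   90, 73, 86, 74, 65, 87, 78, 66, 82, 69]  # one pair per student (n = 20)

r = correlation(software_scores, expert_scores)
print(f"Pearson r = {r:.2f}; software deemed reliable: {r > 0.8}")
```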

          Another common type of research question would be similar to this one (Table 3): "Do students in the retention project remain enrolled in college at a higher rate than their peers?" The implementation strategy would involve whatever interventions had been developed in the project to improve retention. The evaluation method would involve collecting and statistically comparing enrollment data for students in the project with enrollment data for their non-participating peers. The timeline might be to collect data at the end of each semester of the project and results may be used by the project directors to improve the project and to convince administrators to institutionalize the project curriculum.
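
          One plausible way to carry out the statistical comparison described above is a two-proportion z-test on end-of-semester enrollment counts for participants and their peers, sketched below with invented counts. This is only an illustration of the comparison, not the project's prescribed analysis; a real study would also consider confounding factors and would likely use an established statistics package.

```python
# Illustrative sketch of the comparison: a two-proportion z-test on retention
# counts for project participants versus non-participating peers. Counts are
# invented; a real evaluation would use the institution's enrollment data.
from math import sqrt, erf

def two_proportion_z(retained_a, n_a, retained_b, n_b):
    """Return (z, one-sided p) for H1: group A retains at a higher rate than group B."""
    p_a, p_b = retained_a / n_a, retained_b / n_b
    pooled = (retained_a + retained_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # upper-tail normal probability
    return z, p_value

z, p = two_proportion_z(retained_a=86, n_a=100, retained_b=74, n_b=100)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
```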

          A final example, from a curriculum development project, asks the question (Table 4): "Are students more technically competent after completing a series of subject modules?" The implementation strategy would involve developing appropriate subject modules and using them in class. Evaluating the effect of the modules could be accomplished by asking faculty to assess both the technical competence of each student on an absolute scale and the growth demonstrated by each student while using the module. Students would also be asked to self-assess their level of competence and amount of growth after studying each module. Faculty assessment of each student would occur at the end of each module, while students would be asked to self-assess at both the beginning and end of each module. Assessment results would be provided to each student for formative feedback and to faculty for grading purposes.
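
          A minimal sketch of how the student self-assessments described above might be summarized is given below: students rate themselves at the beginning and end of each module, and the per-student gain provides a simple growth measure for formative feedback. The 1-5 scale, field names, and data are assumptions made for the example, not part of the paper.

```python
# A minimal sketch of summarizing the pre/post self-assessments: each student
# rates their competence on an assumed 1-5 scale at the start and end of a
# module, and the difference is reported as growth. Data are invented.

self_assessments = {
    "student_1": {"pre": 2, "post": 4},
    "student_2": {"pre": 3, "post": 4},
    "student_3": {"pre": 1, "post": 3},
}

gains = {name: s["post"] - s["pre"] for name, s in self_assessments.items()}
average_gain = sum(gains.values()) / len(gains)

for name, gain in gains.items():
    print(f"{name}: self-assessed gain of {gain} point(s)")  # formative feedback
print(f"Average self-assessed growth for the module: {average_gain:.1f} points")
```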

REFERENCES CITED

1) Stevens, F., F. Lawrenz, and L. Sharp, User-Friendly Handbook for Project Evaluation: Science, Mathematics, Engineering, and Technology Education, J. Frechtling, ed., National Science Foundation document #93-152, 1996.

2) Rogers, G.M., and J.K. Sando, Stepping Ahead: An Assessment Plan Development Guide, Rose-Hulman Institute of Technology, Terre Haute, Indiana, 1996.

Table 2 -- Evaluation Plan for Expert System Software Development Project

Research Question:       Do results from expert system software agree with results from human experts?
Implementation Strategy: Collect data using both the beta version of the software and traditional interview methods.
Evaluation Methods:      Statistically compare results -- software is deemed reliable if the correlation coefficient exceeds 0.8 for a sample of 20 students.
Timeline:                Measurements will be completed during the second year of the project, after the beta version of the software is written.
Audience/Dissemination:  Statistical results will be used by human experts and programmers to improve the software.

© B.M. Olds and R.L. Miller, 1997

Table 3 -- Evaluation Plan for Retention Project

Research Question:       Do students in the retention project remain enrolled in college at a higher rate than their peers?
Implementation Strategy: Develop and implement interventions designed to improve retention in the targeted population.
Evaluation Methods:      Collect and statistically compare enrollment data for students in the project with enrollment data for their peers.
Timeline:                Collect data at the end of each semester of the project.
Audience/Dissemination:  Results will be used by project directors to improve the project and to convince administrators to institutionalize the project curriculum.

© B.M. Olds and R.L. Miller, 1997

Table 4 -- Evaluation Plan for Curriculum Development Project

Research Question:       Are students more technically competent after completing a series of subject modules?
Implementation Strategy: Develop and use the new subject modules.
Evaluation Methods:      Faculty will assess the technical competence (absolute scale) and growth of each student in each subject area. Students will self-assess their level of competence and amount of growth during each module.
Timeline:                Faculty will assess technical competence at the end of each module and amount of growth during the module. Students will self-assess at the beginning and end of each module.
Audience/Dissemination:  Results will be provided to each student after each module (formative feedback) and to faculty (summative assessment).

© B.M. Olds and R.L. Miller, 1997