CASE STUDIES WORKSHOP ON EVALUATION OF ENGINEERING EDUCATION PROJECTS

Chair:

Barbara Olds
Professor
Div. of Liberal Arts & International Studies
Colorado School of Mines

Facilitator:

Ron Miller
Associate Professor
Department of Chemical Engineering
Colorado School of Mines

Chairperson Barbara Olds opened the session by describing the objectives of the workshop.

A paper on this topic, authored by Olds and facilitator Ron Miller, appears as Attachment 1 following this workshop summary.

Small groups of participants were asked to list the steps they thought essential to evaluating a project. Lists compiled by the morning and afternoon sessions of the workshop are consolidated as follows.

  1. Determine why an evaluation is needed. Identify sponsor requirements.
  2. Set goals (different from objectives: goals are closer to statements of vision than measurable targets) and compare them with the university's mission statement to ensure they are compatible.
  3. Identify stakeholders.
  4. Determine the metrics (e.g., decide what to measure and how to measure it).
  5. Consider categories of measurements (qualitative versus quantitative).
  6. Determine the types of evaluation needed (summative versus formative).
  7. Identify the evaluators (ideally, not all should be stakeholders; some should be in a position to be more objective).
  8. Ensure funding for evaluation.
  9. Develop the evaluation(s).
  10. Conduct a formative evaluation (to gather feedback).
  11. Conduct a summative evaluation.
  12. Define decisions.
  13. Analyze data.
  14. Identify those who need to know the results of the evaluation.
  15. Provide feedback and adjust the project accordingly.
  16. Disseminate the results.
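
The steps above can double as a planning checklist. The sketch below (Python) shows one way a project team might track which steps have been addressed; the step names paraphrase the list above, and everything else, including the sample project, is illustrative rather than taken from the workshop.

```python
# Minimal sketch of the workshop's evaluation-planning steps kept as a checklist.
# Step names paraphrase the consolidated list above; the rest is illustrative.
from dataclasses import dataclass, field


@dataclass
class EvaluationPlan:
    project: str
    completed: set = field(default_factory=set)

    STEPS = (
        "purpose_and_sponsor_requirements",
        "goals_vs_mission",
        "stakeholders",
        "metrics",
        "qualitative_vs_quantitative",
        "summative_vs_formative",
        "evaluators",
        "funding",
        "instrument_development",
        "formative_evaluation",
        "summative_evaluation",
        "decisions",
        "data_analysis",
        "audiences",
        "feedback_and_adjustment",
        "dissemination",
    )

    def mark_done(self, step: str) -> None:
        if step not in self.STEPS:
            raise ValueError(f"unknown step: {step}")
        self.completed.add(step)

    def remaining(self):
        return [s for s in self.STEPS if s not in self.completed]


plan = EvaluationPlan("freshman design course")  # hypothetical project
plan.mark_done("purpose_and_sponsor_requirements")
print(plan.remaining()[:3])
```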

Ron Miller asked whether participants had thought of approaching the evaluation process as a design problem. He used a diagram of a generic design process (Figure 1) to show that many design steps are similar or identical to steps in the evaluation process. One participant commented that the community involved in a design problem is generally not as complex as the community involved in an education project -- a fact that complicates the design of an evaluation that meets everyone's needs. Miller observed that the point is to show participants that they probably have a wealth of experience that is relevant to developing and conducting an evaluation.

Participants discussed ways of ensuring a valid evaluation, such as using a control group. Although most agreed that an evaluation need not be as rigorous as research, some said that people challenging a project want to know which groups are being compared, and there are seldom enough data concerning the status quo to provide a valid contrast with the project. One participant also noted that the Hawthorne effect -- in which new projects yield better (or worse) results than established ones, or people being observed perform better (or worse) than they would otherwise -- might cause an evaluation to yield results that are not valid.
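
One way to combine a comparison group with pre- and post-measures, and thereby discount some of the novelty effects raised above, is a simple difference-in-differences calculation. The sketch below is a minimal illustration with hypothetical scores, not a method endorsed by the workshop.

```python
# Minimal sketch of a pre/post comparison with a control group, one common way
# to guard against novelty (Hawthorne-type) effects; all numbers are hypothetical.
def mean(xs):
    return sum(xs) / len(xs)

pre_project, post_project = [62, 58, 71, 66], [74, 70, 80, 77]
pre_control, post_control = [60, 63, 69, 65], [64, 66, 72, 68]

project_gain = mean(post_project) - mean(pre_project)
control_gain = mean(post_control) - mean(pre_control)

# The difference-in-differences estimate credits the project only with the
# gain beyond what the comparison group also achieved.
print(f"project gain {project_gain:.1f}, control gain {control_gain:.1f}, "
      f"difference {project_gain - control_gain:.1f}")
```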

Olds and Miller distributed the National Science Foundation's User-Friendly Handbook for Project Evaluation: Science, Mathematics, Engineering and Technology Education (NSF 93-152) and reviewed the evaluation process as described in Chapter Two of the handbook, emphasizing its main points.

In discussing the percentage of a project's budget to allocate to evaluation, participants noted that the complexity of the multi-institution Engineering Education Coalitions can push evaluations of large coalition projects beyond the handbook-recommended 10 percent.
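
For scale, the arithmetic is straightforward. In the sketch below, only the 10 percent figure comes from the handbook guideline discussed above; the award amount and the larger coalition share are hypothetical.

```python
# Back-of-the-envelope evaluation budgeting. The 10 percent figure is the
# handbook guideline cited above; the dollar amount and the larger coalition
# share are hypothetical.
project_budget = 2_500_000    # total award, dollars (hypothetical)
evaluation_share = 0.10       # handbook guideline
coalition_share = 0.15        # hypothetical larger share a multi-campus coalition might need

print(f"guideline: ${project_budget * evaluation_share:,.0f}")
print(f"coalition estimate: ${project_budget * coalition_share:,.0f}")
```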

Some participants expressed continuing discomfort with qualitative evaluation methods and with the difficulty of determining whether or not a project is working. Miller said that there are good qualitative evaluation methods (for example, ethnographic studies or focus groups) that, when professionally applied, yield valid information concerning students' thoughts and behaviors, just as there are bad quantitative methods that yield meaningless numbers.

One participant complained that the evaluation methods, concepts, and tools being presented here and in the Handbook are not new. He and many of his colleagues feel that it is time to develop new evaluation methods.

A participant noted that, in his experience, industrial partners tend to dislike evaluations; they are satisfied with observing the results themselves and therefore are unwilling to participate in a structured evaluation.

Discussion emphasized the importance of asking questions whose purpose is clear and that can be answered meaningfully, to ensure that both evaluators and stakeholders understand what the evaluation is measuring. Participants also noted the importance of understanding stakeholders' objectives for the project, to ensure that the evaluation will measure achievement of those objectives. Some observed that faculty often don't know how to set measurable objectives.

Olds and Miller offered several examples of good vs. bad evaluation questions, noting that good questions and authentic assessments have a reference for comparison, are detailed, yield measurable responses, and are performance- or outcome-based, while bad questions may seem "fuzzy" and produce responses that are not readily measurable.

Participants then developed several research questions for evaluating their own projects, using a matrix developed by Olds and Miller. Out of these, each small group selected one or two and tried to identify methods of gathering information to answer the questions. The workshop as a whole reviewed the questions and methods identified by the small groups. The following are the questions and methods discussed by both sessions.

Question 1:

This question concerned a course aimed at teaching skills relevant to working in industry. The research question addressed was, "Can students completing this course present a detailed project proposal that would be accepted by management or sponsors?" The group first identified the components of a successful proposal.

The following methods were proposed for gathering information to answer the question:

1)     Judging by an industrial panel that awards one or more scholarships

2)     Evaluation by faculty colleagues

3)     Challenges by other students

4)     Scoring against a checklist (a scoring sketch appears after this list)

5)     Comparison of the final proposal with a proposal drafted at the beginning of the course

6)     Comparison of proposals drafted by students in the class with proposals drafted by a control group (with the comparison being made by an industrial panel)

7)     Requiring a proposal to earn a positive evaluation by two out of the following three: industry, faculty, and other students (one problem noted with this method is that even if two out of three do not approve the proposal, the student is nonetheless learning, and that should be noted by the evaluation)

8)     Conducting a qualitative evaluation: Do students mention the course in job interviews?

The group noted that the school may want to use a different combination of evaluators in a formative evaluation than in a summative evaluation.
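
As an illustration of how methods 4 and 7 might be combined, the sketch below scores a proposal against a checklist for each reviewer group and then applies the two-of-three approval rule. The checklist items, rating scale, and passing threshold are illustrative assumptions, not workshop decisions.

```python
# Minimal sketch of checklist scoring (method 4) combined with the two-of-three
# approval rule (method 7). Checklist items and threshold are illustrative.
CHECKLIST = ("problem statement", "technical approach", "schedule",
             "budget", "risk discussion")

def checklist_score(ratings):
    """ratings maps each checklist item to a 0-4 rating from one reviewer group."""
    return sum(ratings[item] for item in CHECKLIST) / (4 * len(CHECKLIST))

def approved(ratings_by_group, passing=0.70):
    """Method 7: the proposal passes if at least two of the three reviewer
    groups (industry, faculty, students) score it at or above the threshold."""
    passes = sum(checklist_score(r) >= passing for r in ratings_by_group.values())
    return passes >= 2

ratings_by_group = {   # hypothetical ratings
    "industry": {"problem statement": 3, "technical approach": 3, "schedule": 2,
                 "budget": 3, "risk discussion": 2},
    "faculty":  {"problem statement": 4, "technical approach": 3, "schedule": 3,
                 "budget": 3, "risk discussion": 3},
    "students": {"problem statement": 3, "technical approach": 2, "schedule": 2,
                 "budget": 2, "risk discussion": 2},
}
print(approved(ratings_by_group))
```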

Question 2:

This question concerned a course designed to give students a hands-on, participatory introduction to the study of engineering. The question examined was: Is a hands-on laboratory environment a friendlier entry to engineering for women and minorities than a lecture format? The following methods were proposed to answer the question:

1)     Making the hands-on course optional and comparing students who take the lab course with those who take only the lecture course to see which group shows a higher rate of retention (a retention-comparison sketch follows this list).

2)     Using any of several instruments available to measure the learning "climate," from sources including Cornell University and the Women in Engineering Program Advocates Network (WEPAN).

3)     Using focused interviews as an evaluation technique.
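
For method 1, one simple analysis is a two-proportion comparison of retention rates. The sketch below uses hypothetical counts; in practice, self-selection into the optional lab course would also need to be addressed.

```python
# Minimal sketch of comparing one-year retention between students who took the
# hands-on lab course and those who took only the lecture course (method 1).
# Counts are hypothetical.
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# retained / enrolled in each group (hypothetical)
z, p = two_proportion_z(successes_a=84, n_a=100, successes_b=72, n_b=100)
print(f"z = {z:.2f}, p = {p:.3f}")
```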

Question 3:

With regard to a project aimed at shifting students from being passive learners to being active learners, the following question was presented: Does the project create self-learners, as compared (a) to peers who have not been in the project and (b) to the learning styles of project students before and after their participation?

Discussion centered on whether this goal can be measured as stated and on whether a project can, in fact, create self-learners.

Olds observed that many of the questions presented illustrate the need to define measurable objectives carefully.

Question 4:

What effect does the pattern of group study have on grades? The group proposed the following methods to gather information to answer this question:

1)     Collect student self-reports
2)     Track and compare the grades of students who participate in group study with the grades of students who do not.
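
For method 2, a minimal sketch of the comparison follows, assuming SciPy is available; the grades are hypothetical, Welch's t-test is used so equal variances are not assumed, and any difference would still be confounded by self-selection into study groups.

```python
# Minimal sketch of method 2: compare course grades (4.0 scale, hypothetical)
# of students who joined study groups with those who did not.
from scipy import stats

group_study = [3.4, 3.1, 3.7, 2.9, 3.5, 3.2, 3.6]
no_group    = [2.8, 3.0, 3.3, 2.6, 3.1, 2.7, 3.2]

# Welch's t-test (equal_var=False) does not assume equal variances.
t, p = stats.ttest_ind(group_study, no_group, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```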

Question 5:

Can each faculty member identify three or four basic cognitive concepts and use them as he or she develops and implements a course? Group members proposed the following methods to gather information to answer this question:

1)     Collect faculty self-reports
2)     Conduct classroom observations
3)     Collect student reports
4)     Conduct observations before and after the courses.

Question 6:

Has the curriculum changed (for example, to meet modern needs)? The information-gathering methods proposed were:
1)     Compare old and new catalogs
2)     Compare old and new requirements of particular courses.
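
A minimal sketch of method 1, comparing the required courses listed in the old and new catalogs as simple sets (the course codes are hypothetical):

```python
# Minimal sketch of method 1: compare required courses in the old and new
# catalogs. Course codes are hypothetical.
old_catalog = {"ENGR 101", "MATH 111", "MATH 112", "PHYS 200", "CHEM 121", "DRAW 105"}
new_catalog = {"ENGR 101", "MATH 111", "MATH 112", "PHYS 200", "CHEM 121",
               "CS 110", "ENGR 150"}

print("dropped:  ", sorted(old_catalog - new_catalog))
print("added:    ", sorted(new_catalog - old_catalog))
print("unchanged:", len(old_catalog & new_catalog), "courses")
```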

Question 7:

Do students in a new Foundation Coalition course understand mechanics (physics) concepts better than students in the traditional course?

Question 8:

Have the students in the new Foundation Coalition course improved their scores on the Force Concept Inventory (to a statistically significant degree) compared with students in the traditional course?

In discussing questions 7 and 8, participants noted that if Foundation Coalition scores are better, one might ask whether the Coalition course is teaching to that test. Participants also said that the department should conduct a qualitative evaluation to determine whether students can effectively transfer and apply the knowledge they gain in the Coalition course.
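
One widely used way to frame the Question 8 comparison is the normalized gain, (post - pre) / (max - pre), computed per student and averaged by section; a significance test on the per-student gains would then address the "statistically significant" part of the question. The sketch below uses hypothetical percentage scores.

```python
# Minimal sketch of comparing Force Concept Inventory results via the
# normalized gain, (post - pre) / (max - pre), per student. Scores are
# hypothetical percentages.
from statistics import mean

def normalized_gain(pre, post, max_score=100.0):
    return (post - pre) / (max_score - pre)

coalition   = [(45, 75), (50, 72), (38, 66), (55, 80)]   # (pre, post) pairs
traditional = [(47, 60), (52, 63), (40, 52), (54, 64)]

g_coalition   = mean(normalized_gain(pre, post) for pre, post in coalition)
g_traditional = mean(normalized_gain(pre, post) for pre, post in traditional)
print(f"mean gain: coalition {g_coalition:.2f}, traditional {g_traditional:.2f}")
```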

Question 9:

With respect to evaluation of a Research Experience for Undergraduates (REU) project in which students address 80 to 90 steps in the process of developing semiconductor chips, the question asked was, How did students participate in the processing steps? The following methods of gathering data were proposed to answer the question:

1)     Measure time spent in the lab
2)     Review the logs students keep of their lab activities
3)     Have mentors observe student participation
4)     Conduct peer reviews
5)     Track student participation in daily group meetings.
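
Because the proposed methods draw on several data sources, a small participation record per student can keep them together for analysis. The sketch below is illustrative only; the field names and values are hypothetical.

```python
# Minimal sketch of pulling the proposed data sources (lab time, logbooks,
# meeting attendance) into one participation record per student.
from collections import defaultdict

records = defaultdict(lambda: {"lab_hours": 0.0, "log_entries": 0, "meetings": 0})

def add_lab_session(student, hours):
    records[student]["lab_hours"] += hours

def add_log_entry(student):
    records[student]["log_entries"] += 1

def add_meeting(student):
    records[student]["meetings"] += 1

add_lab_session("student_a", 3.5)
add_log_entry("student_a")
add_meeting("student_a")
print(dict(records["student_a"]))
```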


SUMMARY

The workshop chair noted that the process of developing a sound evaluation, including identifying goals, selecting methods, and planning the implementation, is often as involved as the product, the evaluation itself. Olds and Miller distributed examples of good evaluation questions and methods laid out on a matrix whose columns list the project objectives, how the project will meet its goals, how evaluators will measure whether the project is achieving its goals, when the particular evaluations will be conducted, and issues of information dissemination, such as who will receive the information and how to persuade them that the project objectives are being met. This matrix appears as Table 1 in the attached paper by Olds and Miller (Attachment 1).
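
As an illustration only (Table 1 in Attachment 1 remains the authoritative version), one row of such a matrix might be represented as follows; the column names paraphrase the summary above and the sample entries are hypothetical.

```python
# Minimal sketch of one row of the evaluation matrix described above.
# Column names paraphrase the summary; entries are hypothetical.
matrix_row = {
    "objective": "students can write an industry-ready project proposal",
    "strategy": "semester-long team project with industrial mentors",
    "measure": "proposal scored against a checklist by an industrial panel",
    "timing": "formative review at mid-semester, summative review at the end",
    "dissemination": "report to department faculty and the industrial advisory board",
}

for column, entry in matrix_row.items():
    print(f"{column:>13}: {entry}")
```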

Olds and Miller reviewed the evaluation resources available, including the NSF handbook and the sources listed in its bibliography; campus resources, such as evaluation experts, college of education faculty, and institutional researchers; and NSF staff. Olds cautioned that those planning an evaluation must be sensitive to the fact that evaluation is often perceived as threatening, especially if the information will be published or shared with peers. One way to ease this perception is to let the individuals involved see their own results, and then the anonymized aggregate results of the complete evaluation, before anything is published or shared. It is important that those conducting an evaluation work to establish trust.