University of Wisconsin
Evaluation is a tool available to us for making our education programs more effective and efficient, as well as a way to show others the value and role of education as a part of broader water quality protection programs. To evaluate a program, one systematically collects information about how the program operates and the effects it may be having on the actions of target audiences. There is a broad array of methods, procedures and models to choose from to accomplish the task of evaluation. Although evaluation holds great promise for strengthening programs, it can also be a very frustrating process that wastes time and scarce resources.
Why Evaluate?
Building water education programs is a daunting task. Almost any water issue involves a mix of ecological, physical and chemical variables, as well as diverse social, economic and ethical issues. Most water education programs need to be very purposeful and targeted because they are part of broader programs aimed at serving specific public policy goals.
Evaluation involves gathering evidence about a program and judging this information against measures of success or performance established for the program. Evaluation is like looking at a road map--you often know where you want or need to be, so you set your goals on getting there. Along the way you are watching for signs, physical changes along the roadside, and you may even set a time for when certain things should occur. Evaluation is the process you use to interpret the information about where you are and how far you are from your destination or goal. Evaluating educational programs is not much different. You set your program goals and objectives and use evaluation to determine if you have reached them and, if not, why not.
To evaluate an educational program, begin with a four-question checklist.
Do you have a good understanding of the program that you want to evaluate?
It helps to start out with a basic review of a program's overall purpose, its objectives, the topics or issues addressed by the program and the program's target audience. This will help in making some basic decisions about the focus of the evaluation. For example, if a program has the goal of raising citizen awareness of a specific problem, the evaluator's task is to evaluate changes in problem awareness. To do this requires asking the target audience both before and after an educational program to define local water quality problems. If the program's goal is for people to take action, the focus needs to be on how people have changed their behavior as a result of the program. If an educational program aims to improve manure management, the focus of evaluation efforts should be the extent to which specific management practices such as manure crediting and spreader calibration are being used by farmers before and after the programming efforts.
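As a concrete illustration, here is a minimal sketch (in Python, using made-up placeholder responses rather than real survey data) of how such a before-and-after awareness comparison might be tallied:

    # Hypothetical pre/post awareness tally: each entry records whether a
    # respondent could name a local water quality problem. Placeholder data only.
    pre_responses  = [True, False, False, True, False, False, True, False]
    post_responses = [True, True, False, True, True, False, True, True]

    pre_aware = sum(pre_responses) / len(pre_responses)
    post_aware = sum(post_responses) / len(post_responses)
    print(f"aware before: {pre_aware:.0%}, aware after: {post_aware:.0%}, "
          f"change: {post_aware - pre_aware:+.0%}")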
What is the purpose of the evaluation?
An evaluation effort can have one or more specific purposes. It is important that the evaluation strategy used flows directly from those purposes. Is information needed to help refine program elements to meet specific audience needs? Or is evidence needed that people changed their behavior? Or is the purpose to show accountability of the program? These are all different and valid reasons for conducting an evaluation. An evaluation looking for behavior change might define several potential behavioral changes and then assess the degree to which each occurred. An evaluation focused on accountability might follow a cost-benefit approach.
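For the accountability case, a minimal cost-per-outcome sketch (in Python, with hypothetical figures, since the source gives no program data) might look like this:

    # Hypothetical accountability arithmetic: cost per best management
    # practice adopted. All figures are placeholders, not program data.
    program_cost = 48_000.00      # total educational program cost, in dollars
    practices_adopted = 60        # new practices attributed to the program

    cost_per_adoption = program_cost / practices_adopted
    print(f"cost per practice adopted: ${cost_per_adoption:,.2f}")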
Who has a stake in the evaluation?
In order to make the final results of the evaluation useful, it is important to understand who holds a stake in the program and its evaluation. For example, the agencies or organizations that provide funding for a program may be interested in knowing the numbers and types of educational materials produced or the number of best management practices implemented. Program directors will have different needs. They might be more interested in how citizen advisory committees were organized or how information was delivered to specific audiences. The stakeholders' specific needs also determine how evaluation findings are reported: it is important to understand which issues they want the most information on, and how much data, and at what level of complexity, best suits their needs.
What evaluation methods are most appropriate?
Based on the answers to the above questions, one can begin to choose from the broad array of evaluation methods available. This is a good time to seek advice from people with program evaluation experience. In addition to the topics discussed above, there are a number of other important considerations in choice of methods. It is useful to ask questions such as: What level of funding, staff or volunteer resources are available? Does the value of involving volunteers in conducting the evaluation outweigh a modest decline in data reliability? Are experienced people available to help with the evaluation method to be employed? Asking questions and seeking broad input into evaluation efforts is the best insurance that time and resources will be successfully utilized. It is also useful to monitor and amend the evaluation strategy as it unfolds. Finally, if evaluation is a fairly new topic, start with a modest evaluation program and build on experience.
Different Types of Evaluations for Education Programs
Different types of evaluation serve different purposes. For example, one purpose might be to learn how a water education program was conducted, while another might be to understand the impact, or what happened as a result of the educational program. To choose among evaluation tools and strategies, it helps to have a general understanding of the purposes for evaluation. For water-related educational programs there are three commonly used categories of evaluation.
1. Formative evaluation - also called developmental evaluation. These evaluations are aimed at providing information for program planning, improvement, modification and management. The evaluation often focuses on identifying audience needs and/or issues, problems, behaviors, etc. that a water resource program should address.
2. Impact evaluation - also called summative or effectiveness evaluation. These evaluations are aimed at determining program results and effects, especially for the purposes of making major decisions about program continuation, expansion, redirection and funding. The evaluation often focuses on what happened that would not have occurred if the educational program had not been implemented. Such evaluation usually requires a pre- and post-test design that compares the circumstances before the program was implemented with a future point in time after the program ended (a minimal sketch of such a two-point comparison appears after this list). This traditional approach can be modified by collecting data at multiple points in time, and then using the information to improve educational program approaches, topics and teaching methods during program implementation.
3. Program monitoring - The kinds of activities involved in these evaluations vary widely, from periodic checks of compliance with policy to routine tracking of services delivered and counts of clients served. These evaluations most often include post-workshop and post-field day questionnaires, and participant surveys that focus on who attended and how they felt about the educational program.
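As referenced above, here is a minimal sketch (in Python) of a pre/post impact comparison, assuming hypothetical counts of farmers using a practice before and after a program; the two-proportion z-test shown is one common way to judge whether an observed change is larger than sampling noise:

    # Pre/post impact sketch: compares the share of farmers using a practice
    # before and after a program. All counts are hypothetical placeholders.
    from math import sqrt, erf

    def two_proportion_z(success_pre, n_pre, success_post, n_post):
        p1, p2 = success_pre / n_pre, success_post / n_post
        pooled = (success_pre + success_post) / (n_pre + n_post)
        se = sqrt(pooled * (1 - pooled) * (1 / n_pre + 1 / n_post))
        z = (p2 - p1) / se
        # two-sided p-value from the standard normal CDF
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return p1, p2, z, p_value

    pre_users, pre_n = 42, 150    # e.g., farmers crediting manure before the program
    post_users, post_n = 68, 140  # e.g., farmers crediting manure after the program
    before, after, z, p = two_proportion_z(pre_users, pre_n, post_users, post_n)
    print(f"adoption: {before:.0%} -> {after:.0%}, z = {z:.2f}, p = {p:.3f}")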
Often these categories of evaluation are not used singly; two or more might be used to evaluate any one particular educational program. However, when time and funding are limited, one may choose to focus efforts by relying more on one approach than another.
Before choosing one over another, remember that each of these approaches has a different purpose. During the process of designing an educational program, formative evaluation techniques are helpful because they focus on identifying audience characteristics that are important in tailoring the program to the target audiences. In some instances, formative evaluation can provide valuable data that can serve as a baseline for a future survey. Used in this way, formative and impact (summative) evaluation are combined: conditions before the program began are compared with those at a future point in time. If funding is limited, the fastest and least expensive evaluation techniques are program monitoring. However, its results are often limited and may not fully explain the impact of educational programming.
Keep in mind that there is no single set of procedures for evaluation. The best advice is gained from experience. An evaluator will want to select the technique or combination of techniques that are appropriate to a given situation. The effective evaluator often brings together a collection of methods and approaches to fit the program being evaluated.
Evaluation: Matching a Method to Your Madness!
Getting started on the right track is essential to a good evaluation. A fairly common problem in evaluating a program is immediately jumping to a method, such as assuming that a survey will meet all needs. Choosing an appropriate evaluation method involves figuring out what one wants to measure and what one wants to do with the information collected. For example, if one asks questions that have a range of potential responses, or if the questions require detailed qualification, a method like an interview that allows respondents to elaborate will likely be needed. Likewise, if one is looking for specific information, or measuring widespread occurrence of something in a target population, a survey may be appropriate.
Before deciding what is right for an evaluation, keep in mind that even the most common evaluation methods have their strengths and limitations. After gaining a general sense of the different types of evaluation methods, try seeking advice from those who have used some of these techniques.
Aiming for Results: Planning How To Use Evaluation(s)!
Before setting out to evaluate a program, try writing down some evaluation goals and objectives. This important step will clarify the purpose of the evaluation and help communicate the evaluator's intentions to those involved in the project, including bosses, landowners and agency staff. Careful thought should go into deciding what questions need to be answered and how to get that information with integrity and without bias. A good principle to follow is that bad or inaccurate data is worse than no data at all, because people will make decisions based on the wrong information. Defining goals also helps in looking toward the future, forecasting problems, needs and resources; during the course of an evaluation, these forecasts can be compared with incoming data. More generally, goal-setting builds staff commitment to action and a feeling of being part of a team, because people are involved with the educator in determining what the program should achieve. These goals can be modified throughout the life of the project as evaluations add changing perspectives and new information.
Planning and evaluation should focus on: (1) what information is needed (knowledge, skills, attitudes and/or behaviors); (2) how the information will be collected (survey, meeting, focus group, interviews); (3) who will collect the information (project staff or an external professional); (4) in what time frame the information will be collected (weeks, months, or a time-one/time-two comparison); and (5) how results will be communicated (reports, newsletters, news releases, memos, personal discussions).
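To make the five points concrete, here is a minimal sketch (in Python) of an evaluation plan recorded as a plain data structure; the field names and example values are illustrative assumptions, not a prescribed format:

    # Illustrative evaluation-plan record covering the five planning questions.
    from dataclasses import dataclass

    @dataclass
    class EvaluationPlan:
        information_needed: list   # knowledge, skills, attitudes, behaviors
        collection_method: str     # survey, meeting, focus group, interviews
        collector: str             # project staff or an external professional
        time_frame: str            # weeks, months, time-one/time-two comparison
        reporting: list            # reports, newsletters, news releases, memos

    plan = EvaluationPlan(
        information_needed=["behavior: manure crediting", "attitude: soil testing"],
        collection_method="mail survey",
        collector="project staff",
        time_frame="baseline now; follow-up in 24 months",
        reporting=["report to funding agency", "newsletter summary for landowners"],
    )
    print(plan)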
Choosing the right approach is not an either/or decision. Clearly, a watershed project needs to be able to describe the impacts it has had on the lives of those in the watershed. However, successful programs also need feedback loops that can help staff determine what is working. The trick is not to become overwhelmed by the evaluation techniques, but rather to choose the right evaluation tools to meet the original intent of the evaluation process. When assessing educational programs, keep in mind that many others have faced the same evaluation issues. Here are some evaluation tools that should prevent reinventing the wheel.
The Landowner Assessment Project
The University of Wisconsin (UW) Extension conducts several active evaluation projects, all with help from the "Landowner Assessment and Program Evaluation Project." One of the most popular evaluation efforts has been the Farm Practices Inventory (FPI) survey. This standardized survey approach is used in selected watersheds each year to help target audiences and specify objectives for educational programming. The FPI survey records the extent to which farm management practices such as nutrient application, manure and legume crediting and soil testing are used by farmers. When the FPI survey is administered at the beginning of a watershed project, it is used to help plan future educational programs. When used at a second point in time, after implementation for example, the FPI survey can identify changes in farm practices, especially the adoption of nutrient management strategies that protect water quality. Along with the FPI survey, the Landowner Assessment and Program Evaluation Project has a set of similar, standardized surveys for urban, lake shore and rural nonfarm residents. Each year, based on county requests, UW-Extension and the Wisconsin Department of Natural Resources' Nonpoint Section select a small number of watersheds where the Landowner Assessment survey is used. For information about the Landowner Assessment and Program Evaluation Project, contact the Environmental Resources Center, UW-Madison, 608-262-1016.
Workshop Questionnaires and Field Day Surveys
It is often a good idea, when time and money are dedicated to organizing a watershed event, to ask a few simple questions about who came and why. These short and informal evaluations are often referred to as participant questionnaires. The goal is to record the demographics of those who attended the field day or workshop, how far they traveled to get there, and where they heard about the event prior to coming. Results from a field day or workshop questionnaire should help in planning future events by letting organizers know who the event attracted and what form publicity should take to reach similar audiences. There are many different versions of field day and workshop questionnaires; area educators and county extension faculty are good sources of examples.
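As an illustration, here is a minimal sketch (in Python, with made-up questionnaire entries; the field names and values are assumptions, not a standard form) of tallying such participant questionnaires:

    # Tally hypothetical participant questionnaires: where attendees heard
    # about a field day and how far they traveled. Placeholder data only.
    from collections import Counter

    responses = [
        {"heard_from": "newsletter", "miles_traveled": 6},
        {"heard_from": "neighbor",   "miles_traveled": 14},
        {"heard_from": "newsletter", "miles_traveled": 3},
        {"heard_from": "newspaper",  "miles_traveled": 22},
    ]

    sources = Counter(r["heard_from"] for r in responses)
    avg_miles = sum(r["miles_traveled"] for r in responses) / len(responses)
    for source, count in sources.most_common():
        print(f"{source}: {count} of {len(responses)} attendees")
    print(f"average distance traveled: {avg_miles:.1f} miles")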
For new watershed projects, especially those in their planning or early sign-up phase, the watershed newsletter is a regular source of information for landowners. UW-Extension has developed a standard telephone survey that measures the degree to which watershed residents use the newsletter as a regular source of information about the project. Results are used to help improve local newsletters by focusing stories on local interests and reader styles. For information about newsletter evaluation, contact Bruce Webendorfer at the Environmental Resources Center, UW-Madison, 608-262-1369.
Miscellaneous Evaluation Strategies
The University of Wisconsin Environmental Resources Center (ERC, UW-Madison) tracks many different evaluation projects, techniques and methodologies. In considering evaluation it can be helpful to see what other projects have done. Other projects have used surveys, focus groups, interviews and case studies. While some needs are very specific, someone else has probably addressed similar issues, and a quick call or a little research into past projects may prevent reinventing the wheel--saving time and resources. For information about program evaluation, contact the Environmental Resources Center, UW-Madison, 608-262-1916.
Good books on program evaluation:
Herman, Joan L., Lynn Lyons Morris and Carol Taylor Fitz-Gibbon. 1987. Evaluator's Handbook. Newbury Park, California: Sage Publications.
King, Jean A., Lynn Lyons Morris and Carol Taylor Fitz-Gibbon. 1987. How to Assess Program Implementation. Newbury Park, California: Sage Publications.
Mohr, Lawrence B. 1995. Impact Analysis for Program Evaluation, Second Edition. Thousand Oaks, California: Sage Publications.
Morris, Lynn Lyons, Carol Taylor Fitz-Gibbon and Marie E. Freeman. 1987. How to Communicate Evaluation Findings. Newbury Park, California: Sage Publications.
Patton, M.Q. 1982. Practical Evaluation. Newbury Park, California: Sage Publications.
Stecher, Brian M., and W. Alan Davis. 1987. How to Focus an Evaluation. Newbury Park, California: Sage Publications.