Chair: Desmond Jolly, University of California, Davis
Moderator: Susan Smalley, Michigan State University, East Lansing
Challenges in Documenting Program Impacts in Measurable Terms
Desmond A. Jolly
University of California
Evaluation should not be simply a requirement imposed by funding agencies, whether public or private. Evaluation is a management tool. It is an essential component of intelligent management: a coordinated, coherent system of decisions designed to maximize the effectiveness of programmatic efforts, given a set of internal and external constraints.
We face a number of challenges in making evaluation an integral, essential and useful component of program efforts. Evaluation requires resources, and in a context of resource scarcity it is sometimes perceived as a diversion from the main purpose of a project: service delivery to clientele. The ultimate outcomes of interventions, in terms of their measurable impacts on clientele performance, often manifest themselves after a considerable lag. This is particularly true when our interventions are targeted to clientele enmeshed in a constellation of economic, institutional and cultural constraints. This is true not only for low-income clients, but for highly capitalized, profit-maximizing firms as well. Consider the rate of adoption of practices loosely identified as "sustainable," designed to improve the long-run productivity of agricultural systems through better management of soils, water and pests.
A fundamental challenge to the adoption of evaluation as an integral part of our programs is the strong presumption in agricultural and environmental education that knowledge of plants, animals, soils, insects, viruses, nematodes and the like is necessary and sufficient to promote agricultural development and environmental protection. Only grudgingly and belatedly have we come to include people and their cultural and social systems in our frameworks of study and attention. Hence, most agriculturalists feel ill-equipped to design programs intelligently in the context of cultural and social systems. How can we deal with and effectively overcome these challenges? I suggest we begin at the beginning.
If behavioral change is our ultimate objective, and it almost always is, we are typically attempting to decrease or increase the occurrence of certain practices. We do this by increasing knowledge of their potential costs and benefits.
However, we often perceive the delivery of knowledge as the output of our programs rather than an input into a change process. Few of us wish to fail, or even to be perceived as doing an average job. Hence, there is a psychological bias against evaluating how users perceive the benefits of a given intervention such as a workshop, field day, research paper or demonstration project. If we perceive evaluation, however, as part of the process of product development, of continually improving the product to meet market demand, we might change our attitude toward evaluation as an integral component of program or project development. If we see it as a separate add-on, it will continue to receive short shrift in terms of our time and attention. It needs to become an integral part of program design, coequal with problem specification and intervention methodologies.
The use of evaluation requires not only skills and knowledge of methodologies, but also orientation. Both can be addressed through in-service workshops. The objective is not necessarily to transform every researcher or extension agent into an expert evaluator, but to make them conversant with the approaches and techniques of evaluation and, as importantly, with the value of evaluation. These workshops should engage participants in as much "nitty-gritty" as possible through a hands-on approach. It may take more than a one-shot effort to create the level of interest that will change the organizational culture regarding evaluation. It is of little use to insist on evaluations if the organizational culture militates against them. Simply mandating evaluation will go only so far and, without broad acceptance, is likely to engender minimalism: a pro forma evaluation done to satisfy administrators or funding agencies. Participation in workshops on evaluation has to be positioned not as an administrative mandate but as a key input into professional development. Merits, promotions and other indicators of professional development may benefit significantly from improved evaluation skills.
Most projects, other than a narrowly conceived laboratory or field research project, can benefit from an interdisciplinary or multidisciplinary approach. Even basic research, to the extent that it could influence applied research and ultimately affect the set of choices that users face, can benefit from an understanding of the environment within which it may find application, the potential private and social costs of the innovation, and the potential opportunities and constraints facing decision-makers. This is, of course, an ideal paradigm for a priori research design.
For intervention approaches that aim more directly to move clients toward changed practices, knowledge of who the target population is, their relevant attitudes, and the constraints and opportunities they face is the key to designing appropriate methodologies and products that can achieve mutually beneficial objectives. Professionals trained in one discipline are unlikely to be able to encompass all the relevant dimensions of a situational analysis. Likewise, their choices of methodologies and products may be constrained by their training and experience. Including a wider span of knowledge and experience loosens those constraints and widens the window of opportunity for realistic interventions with better chances of positive outcomes.
The level of specificity with which we can articulate the problem, the methodologies designed to address it, and the expected outcomes will affect our ability to carry out ongoing, as well as periodic, evaluations. A good situational analysis may require extensive research to establish the scope and content of the problem. This research would reveal who the target clients are, their economic situations, their technological knowledge, their attitudes, beliefs and practices, the constraints they face in regard to their practices, and the systems they employ in their households, farms or business operations. Even this exploratory research, when it involves surveys of clients, must be informed by some knowledge of the cultural context in which it is applied.
Once the situation has been carefully described and analyzed, expected outcomes need to be specified in measurable terms, keeping in mind the constraints and limitations alluded to earlier. Outcomes should be projected with as much realism as possible, based on the constraints and opportunities facing the project. What is the realistic level of resources that can be allocated to the project, and what are their opportunity costs? What are the constraints and opportunities facing the clientele? Given the set of constraints and opportunities they and you face, how many can be reached with the new knowledge and, of those, what proportion can you realistically expect to adopt the new knowledge and practice within given time intervals? The choice of methodologies needs to be appropriate to the cultural, economic and logistical circumstances of the clientele group.
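To make the arithmetic of such a projection concrete, here is a minimal sketch; every figure in it is an invented assumption for illustration, not data from any actual project. The point is not the particular numbers but that each parameter forces an explicit, checkable assumption about reach and adoption.

```python
# Hypothetical ex-ante projection of adoption. All figures below are
# invented assumptions for illustration, not data from an actual project.

target_population = 2000      # members of the target clientele group
reachable_share = 0.60        # share the delivery system can realistically reach
adoption_rate_year1 = 0.15    # expected adoption among those reached, year one
adoption_rate_year3 = 0.35    # expected cumulative adoption by year three

reached = target_population * reachable_share
print(f"Reached with the new knowledge: {reached:.0f}")
print(f"Projected adopters, year one:   {reached * adoption_rate_year1:.0f}")
print(f"Projected adopters, year three: {reached * adoption_rate_year3:.0f}")
```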
At this point in the project design process, an information system must be developed to provide ongoing data on performance, to identify problem areas in order to solve them in a timely fashion, and to develop the database for the periodic evaluations, whether mid-term or terminal.
A schematic of the components of a management system to guide project performance might include the following (a minimal sketch of a recording structure for such a system follows the list):
Specify project objectives as a foundation for developing a detailed implementation strategy.
Develop a list of activities and delivery systems and determine required inputs and outputs.
Prepare realistic plans of work in light of resource availability, including staffing.
Allocate responsibilities appropriately among collaborators and staffs.
Develop recording systems to monitor physical and financial performance.
Establish measurable performance indicators based on feasibility, costs, and capacities.
Establish a system to supervise and monitor the performance of individuals and units involved in the project.
Monitor the project environment to keep track of evolving developments that may enhance or inhibit performance.
Provide periodic reports to interested agencies and institutions.
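As a hedged illustration of the recording systems mentioned above, the following minimal sketch defines one possible record for tracking physical and financial performance. The field names, figures and methods are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ActivityRecord:
    """One monitoring record for a project activity in a reporting period."""
    activity: str          # e.g., "irrigation workshop"
    responsible_unit: str  # collaborator or staff unit assigned the task
    planned_outputs: int   # outputs planned for the period
    actual_outputs: int    # outputs actually delivered
    budgeted_cost: float   # planned expenditure for the period
    actual_cost: float     # recorded expenditure for the period
    period_end: date       # close of the reporting period

    def output_ratio(self) -> float:
        """Physical performance: actual outputs relative to plan."""
        return self.actual_outputs / self.planned_outputs

    def cost_variance(self) -> float:
        """Financial performance: overspend (+) or underspend (-)."""
        return self.actual_cost - self.budgeted_cost

# Example record with invented figures.
record = ActivityRecord("on-farm demonstration", "county extension office",
                        planned_outputs=8, actual_outputs=6,
                        budgeted_cost=4000.0, actual_cost=4350.0,
                        period_end=date(2000, 6, 30))
print(f"{record.output_ratio():.0%} of planned outputs; "
      f"cost variance {record.cost_variance():+.0f}")
```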
The World Bank, in its guidelines on project evaluation, categorizes the project sequence as comprising inputs, outputs, effects, and impacts. Inputs would include infrastructure and extension services, as well as such obvious inputs as improved seeds, fertilizers and chemicals. Outputs would be the physical changes in productivity that result from the employment of these inputs. Effects are the agronomic benefits that derive from these changes. Impacts are the changes in living standards and the quality of life of beneficiaries. We need to include social impacts, such as improvements in resource management that enhance sustainability.
Measuring Beneficiary Outcomes
The experience of beneficiaries with respect to project services is one measure of project impacts. Appropriate indicators for measuring beneficiary impacts may include the following (a minimal tabulation of the proportion-based indicators is sketched after the list):
Proportion of the target population that is aware of the project's services or inputs.
Proportion of the target population that has access to particular project services or inputs.
Proportion of the target population that received the project's message, service or input.
Proportion of the target population that received the message, service or input and understood its purpose.
Proportion of this group that perceived the message, service or input as potentially helpful.
Proportion of the exposed population that adopted at least some elements of the project's recommendations for the first time.
Proportion of the adopting population that practiced the new recommendations in subsequent periods.
Proportion of the adopting population that continues the practices after the special efforts of the program terminate.
Scaled index of levels of satisfaction with the project.
Reasons given by nonusers and nonadopters for not adopting the recommendations.
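The proportion-based indicators above amount to an adoption funnel that can be tabulated directly from survey counts. The sketch below illustrates the computation; the counts are invented for the example and would in practice come from the project's monitoring surveys.

```python
# Minimal tabulation of the proportion-based beneficiary indicators.
# Survey counts are invented for illustration only.

target_population = 1000
counts = {
    "aware of the project's services":       620,
    "have access to services/inputs":        540,
    "received the message/input":            450,
    "understood its purpose":                390,
    "perceived it as potentially helpful":   310,
    "adopted at least some elements":        220,
    "practiced in subsequent periods":       160,
    "continued after program efforts ended": 120,
}

for indicator, count in counts.items():
    print(f"{indicator:40s} {count / target_population:6.1%}")
```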
Put another way, both the ongoing monitoring and information system that documents project performance and the periodic evaluations should seek to ascertain:
The extent to which the target clientele understand the available services;
The extent to which those services are seen as meeting the needs of those who understand them;
The extent to which those services are tried by those who understand and perceive them as relevant;
The degree to which those who tried the services continue using them.
Ultimately, we want to find out who has access to the project services and inputs, how they react to these inputs, and how these inputs affect their behavior and performance.
Summary and Conclusion
Evaluation and impact assessments are not yet a comfortable part of our institutional cultures. Attitudes and skills militate against their incorporation into our programs and projects. But even apart from the requirements of funding agencies, evaluations and impact assessments can be invaluable tools to help move us to higher levels of performance and excellence.
I have suggested more emphasis on in-service training, the use of multidisciplinary teams in research and outreach, and some basic guidelines for focusing on the usefulness of programs to intended beneficiaries. Change can be expected to be incremental and cumulative. But clearly, for those of our programs involved in public intervention, impact assessment is a methodology whose time has come.