Abstract
This step considers the types of evidence that are needed, the challenges of gathering evidence and how to develop indicators.
- Reflection on project objectives, M&E purpose, the project pathway and the evaluation questions that come out of these processes
- Consideration of the types of evidence and collection methods, as well as resource implications
- Assessment of the pros and cons of indicators and other types of evidence
- Design of a cost-effective and locally appropriate approach to gathering evidence
Theory
Types of evidence
Two types of evidence often mentioned in M&E are quantitative (measurable or quantifiable) and qualitative (assessing quality). Both are important in answering evaluation questions. Using the PRPR Project outlined in Step 3 as an example, quantitative data on the numbers of farmers attending training sessions would be useful, but would still need to be supported by qualitative data (e.g. from interviews) to understand if, and how, the farmers actually used this knowledge on their farms. Indicators that are easy to measure are attractive, but often need additional qualitative information to help in understanding the story behind the figures.
Guidance
Quantitative evidence is good for tracking activities and assessing whether the implementation of adaptation options is on track to deliver the planned outputs. It is also useful for establishing a baseline against which future changes can be assessed. Having common quantitative indicators is useful for comparing progress of similar activities in different locations. Quantitative indicators include, e.g.:
- Adoption of adaptation options
- Change in gross-margins for adopters
- Measures of water pollution, etc.
- The severity of climate hazards
Qualitative evidence is good for identifying what influences the adaptive capacity or resilience of small coffee farmers. Qualitative indicators include, e.g.:
- Willingness and capacity to invest in improvements to natural resources
- Attitudes towards household spending
- Reasons for seasonal variation in access to alternative sources of income generation
- Openness to innovation and adoption of improved livelihood practices
- Community capacity for organizing collective action
Common challenges
As time and resources are often in short supply, it is good to openly discuss the challenges that arise when collecting data. This way, you can be clear about the implications of your choices in the design of your evaluation (adapted from PMERL):
- Using existing data versus obtaining new data: There will always be a balance between what you would like to monitor and evaluate and what is actually possible given the time and resources available. Using data that already exist and are easily accessible makes sense, especially when resources are limited, but these data may not be the most relevant and may oversimplify or even distract from the overall aim of building resilience. For example, existing data might give you an average measure of soil moisture content at the field level, but when looking at plant scale impacts, this is no substitute for smaller-scale moisture measurements where the sampling takes into account changes in soil type and field topography.
- Identifying locally appropriate indicators versus externally determined indicators: Good adaptation is locally specific and M&E systems need to be tailored to local conditions. However, engaging farmers and other stakeholders in the development of these indicators can be time-consuming, which makes externally determined measures seem more attractive. External measures may also be easier to compare with other areas. One way to address this is to interpret externally derived indicators to ensure they are relevant at the local level. For example, an externally determined indicator might be access to accurate climate data. This could be locally interpreted by asking if there are good links to meteorological stations and research organizations.
- Building capacity to do M&E versus using external experts: M&E can be used as an opportunity to empower coffee farmers and other stakeholders to learn systematically from their experiences. To build long-term capacity, farmers need to not only participate in adaptation processes, but also to design and manage these processes themselves. This requires a greater investment of time, as this type of change does not happen quickly. Outsiders can perhaps do the work more quickly, but it would build far less capacity for adaptation in the long term. It would be ideal if local farmers could be trained to do the evaluation themselves, which would build their capacity to identify the severity of climate risks, identify assumptions about what activities would build local resilience, and develop a plan to gather evidence in testing these assumptions.
- Evaluating the success of planned activities versus learning from the unanticipated consequences of the work: The evidence needed for M&E is often a mixture of easily measurable data relating to the achievement of activities and more qualitative ‘stories of change’ that can reveal things that were not anticipated in the beginning. These stories of change are important, as they help to challenge assumptions about what supports good practice and what gets in the way.
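The soil-moisture example above can be made concrete with a short calculation. The following is a minimal sketch with purely hypothetical readings and soil-type names (none of these figures come from the source book); it shows how a single field-level average can mask the variation that plant-scale sampling, stratified by soil type, would reveal:

```python
# Sketch: why a field-level average can hide plant-scale variation.
# Mock soil-moisture readings (% volumetric), grouped by soil type.
# All values and group names are hypothetical, for illustration only.

readings = {
    "sandy_ridge": [11.2, 10.8, 12.1],
    "loam_mid":    [21.5, 22.0, 20.9],
    "clay_hollow": [31.0, 29.8, 30.4],
}

# Field-level average: one number for the whole field.
all_values = [v for vals in readings.values() for v in vals]
field_mean = sum(all_values) / len(all_values)
print(f"Field-level mean: {field_mean:.1f}%")

# Stratified means: one number per soil type, revealing the spread
# that the single field-level figure conceals.
for soil, vals in readings.items():
    mean = sum(vals) / len(vals)
    print(f"{soil:12s} mean: {mean:.1f}%")
```

With these mock numbers, the field mean sits near the middle while the sandy ridge is far drier and the clay hollow far wetter, which is exactly the plant-scale picture that existing aggregate data would miss.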
Guidance
When selecting methods for gathering evidence, consider the following criteria:
- Validity: Do the people who are using the information believe the method is valid (e.g. are they able to assess the desired indicator with enough accuracy)?
- Reliability: Will the method work when needed?
- Relevance: Does the method produce the information required?
- Sensitivity: Is it able to pick up data variations sufficiently?
- Cost effectiveness: Is it producing useful information at a relatively low cost?
- Time: Is it likely to avoid delay between information collection, analysis and use?
Developing and choosing indicators
An indicator provides specific information on the state or condition of something. In M&E, it is often about providing information about change (e.g. have farmers become more resilient?). Indicators are an important part of understanding change processes and exploring what adaptation measures work or do not work, in what context and why.
There is no single set of indicators that will work for all adaptation implementation processes. Indicators must be chosen in relation to the adaptation activities that were planned and the context in which these activities were implemented. By developing indicators as part of the project pathway, you will ensure that they relate to your objectives.
Assuming you developed a project pathway in Step 3, this task will help you to gather the information in order to understand how the pathway works in practice. If you have not developed a project pathway, this task will still help you gather the evidence you need for your M&E activities, but it is recommended to take a look at Step 3 first.
Indicators are present in both monitoring and evaluation processes, but not all indicators will be used for both. For example, it may be too expensive and logistically demanding to track the attitudes of farmers throughout the implementation of adaptation options (as part of your monitoring), but you may wish to do so as part of a mid-term evaluation. Similarly, your evaluation may not require monthly data from demo plots, but will instead use summary data on how the demo plots performed overall. Monitoring progress relies on selecting indicators that are capable of representing change. These indicators should link to your efforts to implement and validate adaptation options (e.g. make use of observational data from test plots, see Table 19).
Types of indicators
There are two basic types of indicators for M&E and most processes are likely to be a mix of both:
Outcome indicators demonstrate that a particular outcome has been achieved (e.g. a reduction in disease-related economic losses amongst smallholder farmers). Outcome indicators are very useful, but can often be difficult to use in assessing adaptation activities, as there are often long time lags between the implementation of the adaptation option and the outcome being achieved (e.g. if there is no rust outbreak in an area, how can we know if losses are reduced as a result of the project?).
Process indicators measure progress towards the achievement of an outcome (e.g. the number of farmers now using coffee rust prevention measures or the number of farmers who have been trained). These indicators are valuable in understanding whether resilience is increasing, even if the resilience has not yet been tested by a climate-related event.
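To make the distinction concrete, here is a minimal sketch computing one process indicator and one outcome indicator from mock farmer survey records. The field names and figures are hypothetical, not actual PRPR data:

```python
# Sketch: a process indicator and an outcome indicator from mock survey
# records. All names and values are hypothetical, for illustration only.

# One record per surveyed farmer: training status, current practice, and
# disease-related losses (USD/season) at baseline and now.
farmers = [
    {"trained": True,  "uses_rust_prevention": True,  "loss_baseline": 420, "loss_now": 180},
    {"trained": True,  "uses_rust_prevention": False, "loss_baseline": 380, "loss_now": 350},
    {"trained": False, "uses_rust_prevention": False, "loss_baseline": 400, "loss_now": 410},
    {"trained": True,  "uses_rust_prevention": True,  "loss_baseline": 500, "loss_now": 220},
]

# Process indicator: share of trained farmers now using rust prevention.
# This can be measured immediately, before any rust outbreak tests resilience.
trained = [f for f in farmers if f["trained"]]
process_indicator = sum(f["uses_rust_prevention"] for f in trained) / len(trained)

# Outcome indicator: mean change in disease-related losses. This only
# becomes meaningful once enough time (and a hazard event) has passed.
outcome_indicator = sum(f["loss_now"] - f["loss_baseline"] for f in farmers) / len(farmers)

print(f"Adoption among trained farmers: {process_indicator:.0%}")
print(f"Mean change in losses: {outcome_indicator:+.0f} USD/season")
```

The process indicator is available as soon as training ends, whereas the outcome indicator depends on baseline data and a long enough observation window, which is the time-lag problem described above.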
To choose which evidence to collect (or which indicators to measure), look at the evaluation questions you developed in the previous section and consider for each what type of evidence or indicator is most suitable, given the resources and capacities available. Record the key outputs from this task in the M&E plan template, Step 5, Task A.
Practical Guidance
Objective
To design a realistic and cost-effective plan for gathering evidence that will help answer the evaluation questions.
Expected outputs
- A plan for gathering evidence, as well as a list of which methods and tools you plan to use to answer your evaluation questions.
- Completion of part C of your evaluation plan.
Required time
Variable depending on the methods you choose, the depth of the information needed and how many people are engaged.
Procedure
Questions to consider
Starting with the identified evaluation questions, consider:
- Is there a mix of outcome and process indicators?
- Is there a mix of qualitative and quantitative indicators?
- Reflect on the common challenges described in Step 5 of the source book. Which, if any, are of concern, and what are the implications? For example:
- Will you be using existing data or obtaining new data?
- Are the indicators you plan to use locally appropriate or externally determined?
- Does the process of gathering evidence focus on building local capacity to carry out M&E, or on using external experts?
- Are you interested in evaluating the success of planned activities or learning from the unanticipated consequences of the work – or both?
I. Define which evidence and indicators to use
Most evaluation questions require that you combine different types of evidence in order to get as complete a picture as possible of what has happened. Look at one of your evaluation questions and brainstorm types of evidence that might be used to answer it.
II. Compare different types of evidence
This exercise helps you to compare the pros and cons of different types of evidence.
- From the group brainstorm in Part I, review the advantages and disadvantages of each type of evidence, as well as anything surprising or strange about them. Be sure to do this in a way that is cost-effective. Use the following PRPR case as an example:
Example evaluation question: How effective were the rust management activities in reducing the severity of rust outbreak?
A brainstorm for the Chiquimula, Guatemala PRPR example might result in the following:
- Personal observations from farmer interviews
- A percentage of rust incidences
- The severity of rust outbreaks
- The average annual income data for smallholder coffee farmers in the area
- A farmer focus group ranking of different rust management techniques
For each type of evidence, consider the advantages, disadvantages and interesting aspects. Remember to look at evidence in terms of how useful it is for answering the evaluation question. Consider how representative it is, how easy it is to access or collect and whether it is accurate and up-to-date.
The Chiquimula, Guatemala PRPR example might result in the following (see Table 41):
- Once you have assessed the evidence, you can compare the good and bad points of each piece to create a more detailed understanding of what exactly you should be gathering.
- Repeat this process for each evaluation question.
- Remember that different people will have different perspectives on what is an advantage, a disadvantage or an interesting aspect. Therefore, it may be a good idea to repeat this with different groups (women, smallholder farmers, farmers with larger farms or more diverse enterprises, cooperatives, etc.) to see if any differences emerge.
III. Create a realistic and cost-effective plan for gathering evidence
- Once you have considered the pros and cons of different types of evidence, prepare a plan that outlines the evidence and indicators you would like to collect, as well as the methods you intend to use to gather this evidence.
- Think about any assumptions, resource requirements or limitations of these methods. This can be done using the following template (Table 42).
Table 41: Example of assessment of different types of evidence
| Evidence | Advantage | Disadvantage | Notes |
| --- | --- | --- | --- |
| Personal observations from farmer interviews | | | |
| Percentage of rust incidences | | | |
| Severity of rust outbreaks | | | |
| Average annual income data for smallholder farmers in the area | | | |
| Farmer focus group ranking of different rust management techniques | | | |
Table 42: Example of plan for gathering evidence from PRPR case in Chiquimula, Guatemala
| Evaluation question | Possible method | Assumptions or conditions for this method to be viable | Resources needed to implement this method | Limitations of this method |
| --- | --- | --- | --- | --- |
| Did what you achieved match what you expected? (Were three FFS set up and 75 producers trained to prevent rust attack?) | Completing Table 45 below with the people that implemented the adaptation process | | | If some activities have not been implemented, it may be seen as a 'failure' that should be covered up or not discussed, rather than an opportunity to learn |
| Were the planned activities undertaken in an efficient, affordable, appropriate and timely way? | Group discussion with implementers about what is understood by the words 'efficient', 'affordable', 'appropriate' and 'timely': describe the characteristics of each word in relation to coffee production and assess how well the implementation activities matched them | | | Quality of data depends on the discussion, the level of participation and how shared understanding is developed. Without careful facilitation, some groups or individuals may dominate and outputs may be biased |
| Were inputs sufficient for carrying out the planned activities? (Were there sufficient extension staff for the FFS? Were sufficient funds available for setting up the nursery and distributing seedlings?) | Collection of quantitative data about inputs (e.g. fertilizers, human resources, irrigation, pesticide sprays); comparison of expected costs against actual costs of inputs; discussion about likely reasons for any differences between anticipated and actual inputs and their costs, as well as the implications for the project. How might things be done differently next time? | | | Assessing sufficiency is quite subjective, making it important to discuss what 'sufficient' means in your context before trying to answer this question |
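The expected-versus-actual cost comparison suggested for the inputs question can be sketched in a few lines. The line items and figures below are hypothetical placeholders, not PRPR budget data:

```python
# Sketch: comparing expected against actual input costs, as suggested for
# the "Were inputs sufficient?" question. All line items and figures are
# hypothetical, for illustration only.

expected = {"seedlings": 1200, "fertilizer": 800, "extension_staff": 2000}
actual   = {"seedlings": 1350, "fertilizer": 640, "extension_staff": 2600}

# Per-item difference, absolute and relative, as a starting point for the
# group discussion about why anticipated and actual inputs diverged.
for item in expected:
    diff = actual[item] - expected[item]
    pct = diff / expected[item]
    print(f"{item:16s} expected {expected[item]:>5d}  actual {actual[item]:>5d}  "
          f"difference {diff:+5d} ({pct:+.0%})")

# Overall over- or under-spend across all inputs.
overspend = sum(actual.values()) - sum(expected.values())
print(f"Total over/under spend: {overspend:+d}")
```

The numbers only frame the conversation; as the table notes, judging whether inputs were 'sufficient' remains a subjective discussion that the figures inform rather than settle.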