Projects, even with good planning, adequate organisational machinery and a sufficient flow of resources, cannot automatically achieve the desired results. There must be some warning mechanism that can alert the organisation to its possible successes and failures from time to time. Constant watching not only prevents wastage of scarce resources but also ensures speedy execution of the project. Thus monitoring and evaluation enable a continuing critique of project implementation.

First, what is M&E?
M&E has become something of a coined term to describe the process by which organizations generate evidence about their results. It has been around for some time but has become increasingly popular over the last decade, driven by the need to measure progress against the Millennium Development Goals (MDGs).

Organizations focused on strengthening their M&E activities, but soon realized that monitoring and evaluation cannot be separated from project planning. Monitoring is conducted based on indicators set out in planning documents; therefore, if the project plan is wrong, it is likely that we will measure the wrong things. Among the most common mistakes in planning is that organizations don’t take the time to understand the problem well enough before they design a solution, or they focus on output-based indicators instead of outcomes or impacts. That is, they focus on ticking the boxes for activities they are responsible for doing, like delivering things. But they may not measure what people do with the things they receive, or the effects those things have on their lives, which are usually the outcomes we would like to measure.

Take, for example, an economic development project where the implementing agency reports the number of, say, beehives and the number of workshops on beehive management delivered to project participants, but not the changes in income that the participants may experience from selling honey and other derived products. That’s focusing on activities and not on outcomes. Could it be that the beehives are just giving people more work without any additional benefit?

Evaluation is often lumped together with monitoring and thought to be the sum of monitoring data over time. It is more than that. Evaluation is about collecting data (which may or may not be related to monitoring data) systematically, to answer questions and make judgments about a project. It should take a broader look and challenge the assumptions that underpin the project’s theory of change (a diagram and statement that explains why and how we think an intervention will work). But evaluation is less common than monitoring, and that broader perspective is sometimes ignored. Overall, effective planning is critical to good M&E.

In addition to planning, the use of data is also very important; for what is the point of collecting data and transforming it into information if it is never used? Therefore, M&E has an implicit goal of making sure the evidence it generates triggers learning, and changes in behaviour and decisions that benefit the disadvantaged groups the project aims to serve.

All of this is to say that M&E is more than simply “monitoring” and “evaluation.” It is about project planning, defining results frameworks and indicators, choosing the best data collection methods and approaches, collecting, cleaning, analyzing, visualizing and sharing data and, perhaps most importantly, making sure results are used. Below are some examples of how we are working to integrate Akvo tools and M&E.

It is worth noting that the concepts described above are helping us write Akvo’s approach to generating evidence – a document that outlines what we think is important in the process of M&E and which can inform what we do moving forward. Perhaps we’ll make the concepts that are part of the term “M&E” explicit. Maybe we’ll develop a better name for it.