Results-based Monitoring and Evaluation as a powerful tool for managing development - Part I: "M"<>"E"


Monitoring and Evaluation, back to back

In some countries, Monitoring and Evaluation (M&E) is still perceived as attached to the implementation phase only. As a result, monitoring and evaluation remain somewhat distant from each other, and people do not grasp the interconnection between them. We all know (or should know by now) that using resources efficiently to generate outputs does not necessarily make people better off, and that benefits are intrinsically connected to outcomes. For instance, a weak programme design that fails to grasp what the real problem is will most probably lead to poor results, even when accompanied by strong implementation.

However, in some countries monitoring focuses only on inputs, activities and outputs (implementation), while evaluation reflects mostly on impacts (which obviously relate to outcomes and goals). Besides this lack of alignment between the two functions, evaluation suffers from working in a vacuum. Indeed, how can evaluation establish attribution and causality when monitoring is not even able to measure and document success?

How did this happen?

First of all, international technical support has contributed to separating these two concepts even further by enrolling counterparts in trainings where performance indicators are wrongly equated with output indicators, and where only one type of evaluation is presented (impact evaluation), carried out only at the end of implementation.

Furthermore, monitoring is usually done "domestically" (as it should be), while impact evaluation is often outsourced. This separation of roles, responsibilities and technical capabilities, together with the fact that evaluation does not necessarily use the findings documented by the monitoring process as a basis for its work, leads to two completely detached approaches.
