Authors: Evans, Mary Kay Gugerty

For several decades, the aid effectiveness movement has called for more robust, informed, and independent impact evaluation of aid activities, but the prevalence and adoption of these practices remain unclear. This article seeks to understand the current state of impact evaluation practice in the development field by examining standard-setting documents intended to guide the behavior of entities involved in development assistance. We treat these standards as representations of institutional logics that encode current norms, practices, and expectations for these actors, and we examine the extent to which impact evaluation norms and practices are enshrined within them. To do so, we analyze guidance from a diverse set of 42 standards to better understand how evaluation is conceptualized and what standards are being articulated. We find both convergence and divergence in the institutional logics employed and in how evaluation norms and practices are incorporated into standards. We see convergence in the adoption of a normative, process-oriented logic that appears across many entities in the widely articulated commitment to practices such as information sharing, participation, and listening. We find more divergence in the adoption of a results-oriented logic that implies a commitment to impact evaluation. These distinct logics give rise to two discrete discourses: an “evaluation generalist” discourse that conceptualizes evaluation in broad terms, and an “impact-centric” discourse that articulates a more comprehensive set of principles emphasizing causal attribution. We suggest that structural characteristics and positionality in the aid system may help explain the adoption of different institutional logics and associated evaluation practices.