Learning is never an easy task, but, boy, is it worth it. One of the best aspects of the American Evaluation Association annual conference is actually what precedes it — the preconference workshops. More than 60(!) workshops are being offered this year. It is a great opportunity to hear some of our field’s luminaries, thinkers, theorists, practitioners, and innovators share what they know and love doing. It’s also a chance to ‘stay close to the ground’ and learn about the very real concerns and challenges practitioners are experiencing.
I just finished Tom Chapel’s (Chief Evaluation Officer, Centers for Disease Control and Prevention) 2-day workshop on “Logic Models for Program Evaluation and Planning”. In this blog post, I share some of the more salient insights gathered from his session. Rarely can someone distill evaluation issues so clearly from a practitioner perspective and teach them so succinctly. He draws on great case examples; they are rich, sufficiently complex, yet simple enough to carry great educational value. Kudos to Tom.
My interest in this is two-fold. I am interested in the practical aspects of logic modeling. I am also interested, on a theoretical level, in how he argues for its role in evaluation practice. So, in no particular order, here are nine key insights from the session. Some are basic and obvious, while others are deceptively simple.
Some foundational ideas:
1) At the most basic level, a logic model is concerned with the relationship between activities and outcomes. It follows the logic: if we do this, then we can expect that to occur.
2) Program outcomes (more precisely, a series of outcomes) drive at a “need”, i.e. the social problem that the program aspires to change.
3) A logic model is aspirational in nature. It captures the intentions of a program. It is not a representation of truth or how the program actually is (that’s the role of evaluation).
4) Constructing a logic model often exposes gaps in logic (e.g. how do we get from this step to that step?). Bringing clarity to a logic model often requires clarification from stakeholders (drawing on practical wisdom) or empirical evidence (drawing on the substantive knowledge underlying the field). It also sets up the case for collecting certain evidence in the evaluation, if it proves meaningful to do so. (A toy sketch of this gap-spotting idea follows this list.)
5) And in talking with program folks about their conceptions of a program, differing logic about why and how the program works is often exposed. These differing views are not trivial matters, because they influence the evaluation design and the resulting value judgments we make as evaluators.
6) And indeed, explicating that logic can surface assumptions about how change is expected to occur, the sequencing of activities through which change is expected to occur, and the chain of outcomes through which change progresses towards ameliorating the social problem. Some of these assumptions are so critical that, unless attended to, they could lead to failure in the program (e.g. community readiness to engage with potentially taboo topics, cultural norms, necessary relationships between service agencies, etc.).
7) Employing logic modeling thus avoids the business of engaging in black-box evaluation (a causal-attribution orientation), which can be of limited value in most program situations. I like the way Tom puts it: increasingly, evaluators are in the improving business, not just the proving business. Logic modeling permits you to open the black box and look at how change is expected to flow from action and, more importantly, where potential pitfalls might lie.
But here’s the real take-away.
8) These kinds of observations generated from logic modeling could be raised not only at the evaluation stage, but also during planning and implementation. These “process use” insights (an idea usually attributed to Michael Patton) could prove tremendously useful even at these early stages.
9) Indeed, problems with the program logic are especially damaging when raised only at the end. Imagine telling the funder at year 5 that there is little evidence that the money made any real impact on the problem it set out to address. Early identification of where the pitfalls might lie, and the negotiations that ensue, can be valuable to the program.
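To make the if-then structure (point 1) and the gap-spotting (point 4) concrete, here is a minimal, hypothetical sketch in Python. Nothing below comes from Tom’s session; the program, the links, and the gaps helper are my own illustration of a logic model reduced to a chain of if-then statements.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    """One if-then step in a logic model: if `activity`, then `outcome`."""
    activity: str
    outcome: str

@dataclass
class LogicModel:
    """A logic model reduced to a chain of if-then links."""
    links: list[Link] = field(default_factory=list)

    def gaps(self) -> list[tuple[str, str]]:
        """Return adjacent links whose logic doesn't connect, i.e. the
        places where we would have to ask stakeholders 'how do we get
        from this step to that step?'."""
        return [
            (a.outcome, b.activity)
            for a, b in zip(self.links, self.links[1:])
            if a.outcome != b.activity
        ]

# Hypothetical health-education program.
model = LogicModel([
    Link("deliver workshops", "knowledge increases"),
    Link("knowledge increases", "behaviour changes"),
    Link("community norms shift", "problem is reduced"),  # logic gap before this link
])

for tail, head in model.gaps():
    print(f"Gap in the logic: how do we get from '{tail}' to '{head}'?")
```

A real logic model is, of course, a graph rather than a single chain (inputs, activities, outputs, and short-, intermediate-, and long-term outcomes), but even this toy version shows why making the links explicit forces the “how do we get from here to there?” conversation early, rather than at year 5.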
The Design Argument inherent in using Logic Modeling for Planning
First, what Tom is essentially suggesting here is that attention paid to the program logic is worthwhile for evaluators and program staff at any point during the program life cycle.
Where these conversations stand to make a real, meaningful contribution is before the “program is let out of the barn”. This is important because the intentions inherent in a program’s underlying logic give rise to, and govern, the emergence of certain program behaviour and activities (in much the same way that DNA or language syntax gives rise to complex behaviour). The logic defines both what IS and what IS NOT within the program, doesn’t it?
So, if we accept the premise that a program can be an object of design (i.e. that we can indeed design a program), then we could argue that the program logic constitutes a major aspect of that design. And because we can evaluate a design itself, as we can with any designed object, evaluating the program design becomes a plausible focus within program evaluation.