All posts by Chi Yan Lam

About Chi Yan Lam

Chi Yan Lam, PhD(c), CE, is a program evaluator, researcher, and educator. He works closely with social innovators and public sector leaders across Canada to bring analysis and strategy to bear on program development, evaluation, and decision-making.

Calling Canadian and International Bloggers/Tweeters/Podcasters Attending CES

Most of you already know that the call for proposals is out for the Canadian Evaluation Society 2014 annual meeting. Do you know of any Canadian or international evaluation bloggers or Twitter users?

Brian Hoessler (of Strong Roots Consulting) and I are organizing a session to be presented at CES, discussing and showcasing Canadians’ use of social media to the global evaluation community. We’re looking for collaborators who’d be interested in sharing their experiences and co-developing a session on blogging, Twitter, podcasting, or other social media activities. The tentative foci are to 1) take stock of the different purposes for which evaluators use social media and 2) profile the different ways evaluators are leveraging social media to connect across sectors and boundaries.

Most recently, Chris Lysy, Ann Emery, Sheila Robinson, and Susan Kistler presented a Think Tank session at AEA13 that was a hit: http://www.slideshare.net/InnoNet_Eval/evaluation-blogging-27544071

Indeed, we welcome collaboration and partnership with our American and international colleagues, as it offers us the chance to compare and contrast the Canadian context with the broader international landscape.

So… might you, or anyone you know, be interested in this? Hit me up on Twitter @chiyanlam or via the Contact Me page. Let’s talk!


Highlights from Michael Quinn Patton’s #eval13 talk on the ‘State of Developmental Evaluation’

Michael Patton gave a great talk today at AEA13 on the State of Developmental Evaluation. Here are some highlights.

1. The ‘Doors’ to Discovering Developmental Evaluation.

Patton observed that developmental evaluators and clients typically arrive at DE through one of four doors: engagement in innovation, the pursuit of systems change, grappling with complexity, or working within unstable, changing contexts.

Driving this ‘search for the alternative’ is evaluation users’ desire for a compatible evaluation framework.

2. DE is becoming a bona fide approach.

AEA13 features more than 30 sessions on developmental evaluation.

The Australasian Evaluation Society recently awarded their Best Policy and Evaluation Award to a crew of developmental evaluators.

(The CES awarded its best student essay prize to an empirical study on the capacity of DE for developing innovative programs.)

3. DE is best enabled by clients who are willing to explore and experiment.

4. DE is methods-agnostic, and in fact, defies prescription.

Patton emphasized the importance of operating from the principles of DE, applying and adapting them when conducting DE. (Another way of looking at this is to frame DE as engaging in inquiry… this might actually make a nice blog post.)

Some observations…

Participants raised some great questions during the Q&A session. Part of the confusion, it seems to me, lies in the more subtle aspects of how and why developmental evaluation might be more appropriate or useful in some contexts. This confusion arises because of how necessarily responsive developmental evaluation is by design. The on-ramp for someone who hasn’t done DE, but wants to do it, can be difficult. So I wonder if there might be a place for a clearinghouse of sorts for frequently asked questions, i.e. the sort often asked by newcomers.

3 Quick Survival Tips for Attending AEA13 #eval13

Here are three quick ‘survival tips’ if you are attending AEA 13 in Washington.

1) Free wifi is provided in the lobby of the Washington Hilton, but there is no free wifi in the conference areas, nor in guest rooms if you are staying at the hotel. Alternatively, the Office Depot across the street offers free wifi.

2) Buy a jug or two of water from the nearest pharmacy or convenience store. If you are staying at the Washington Hilton, the nearest pharmacy is a Rite Aid, about 350 ft (a one-minute walk) from the hotel.

3) There are many reasonable lunch and dinner options along Connecticut Avenue if you go south. Aim for the Dupont Circle area.

Bonus: There are two Starbucks within close walking distance if you go south on Connecticut. They have cheap sandwiches for around $5, making them possible lunch options if you are on a (grad student) budget.

Key Takeaways from Tom Chapel’s AEA13 Workshop: Logic Models for Program Evaluation and Planning

Learning is never an easy task, but, boy, is it worth it. One of the best aspects of the American Evaluation Association annual conference is actually what precedes it — the preconference workshops. More than 60(!) workshops are being offered this year. It is a great opportunity to hear some of our field’s luminaries, thinkers, theorists, practitioners, and innovators share what they know and love doing. It’s also a chance to ‘stay close to the ground’ and learn about the very real concerns and challenges practitioners are experiencing.

I just finished Tom Chapel’s (Chief Evaluation Officer, Centers for Disease Control and Prevention) 2-day workshop on “Logic Models for Program Evaluation and Planning”. In this blog post, I share some of the more salient insights gathered from his session. Rarely can one abstract evaluation issues so clearly from a practitioner perspective and teach them so succinctly. He draws on great case examples; they are rich, sufficiently complex, yet simple enough to carry great educational value. Kudos to Tom.

My interest in this is two-fold. I am interested in the practical aspects of logic modeling, and, on a theoretical level, in how he argues for its role in evaluation practice. So, in no particular order, here are nine key insights from the session. Some are basic and obvious, while others are deceptively simple.

Some foundational ideas:

1) At the most basic level, a logic model is concerned with the relationship between activities and outcomes. It follows the logic: if we do this, then we can expect that to occur.

2)   Program outcomes—more appropriately, a series of outcomes—drive at a “need”, i.e. the social problem that the program aspires to change.

3)   A logic model is aspirational in nature. It captures the intentions of a program. It is not a representation of truth or how the program actually is (that’s the role of evaluation).

4) Constructing a logic model often exposes gaps in logic (e.g. how do we get from this step to that step…??). Bringing clarity to a logic model often requires clarification from stakeholders (drawing on practical wisdom) or empirical evidence (drawing on substantive knowledge underlying the field). It also sets up the case for collecting certain evidence, if it proves meaningful in an evaluation to do so.

5) And in talking with program folks about their conceptions of a program, differing logics about why and how the program works are often exposed. These differing views are not trivial matters, because they influence the evaluation design and the resulting value judgments we make as evaluators.

6) And indeed, explicating that logic can surface assumptions about how change is expected to occur, the sequencing of activities through which change is expected to unfold, and the chain of outcomes through which change progresses towards ameliorating the social problem. Some of these assumptions are so critical that, unless attended to, they could lead to critical failure in the program (e.g. community readiness to engage in potentially taboo topics, cultural norms, necessary relationships between service agencies, etc.).

7) Employing logic modeling thus avoids the business of black-box evaluation (a causal-attribution orientation), which can be of limited value in most program situations. I like the way Tom puts it: increasingly, evaluations are in the improving business, not just the proving business. Logic modeling permits you to open the black box and look at how change is expected to flow from action, and, more importantly, where potential pitfalls might lie.

But here’s the real take-away.

8) These kinds of observations generated from logic modeling can be raised not only at the evaluation stage, but also during planning and implementation. These ‘process use’ insights (an idea usually attributed to Michael Patton) could prove tremendously useful even at these early stages.

9) Indeed, problems with the program logic are especially problematic when raised at the end. Imagine telling the funder at year 5 that there is little evidence the money made any real impact on the problem it set out to address. Early identification of where problems could lie, and the negotiations that ensue, can be valuable to the program.

The Design Argument inherent in using Logic Modelling for Planning

First, what Tom is essentially suggesting here is that attention paid to the program logic is worthwhile for evaluators and program staff at any point during the program life cycle.

Where these conversations stand to make a real, meaningful contribution is before the “program is let out of the barn”. This is important because the intentions inherent in the logic underlying a program give rise to, govern, and promote the emergence of certain program behaviours and activities (in much the same way that DNA or language syntax gives rise to complex behaviour). The logic defines both what IS and IS NOT within the program, doesn’t it?

So, if we accept the premise that a program can be an object of design (i.e. that we can indeed design a program), then we could argue that the program logic constitutes a major aspect of that design. And because we can evaluate the design itself, as we can with any designed object, evaluating the program design becomes a plausible focus within program evaluation.

Spotlight on Productivity: 5 Productivity Tricks for Researchers/Evaluators/Graduate Students

This is the sixth and final post of the Spotlight on Productivity series, in which I examine productivity challenges associated with academic/knowledge work and take stock of current thinking and tools to help us get things done.

5. Mise en place

Everything Ready (via Flickr, wickenden, http://www.flickr.com/photos/wickenden/3629186048/)

Mise en place is French for ‘put in place’. It describes the practice among chefs of preparing all the necessary ingredients in advance of service: everything is prepared for use, organized, and within reach. Applied to productivity, it means gaining as much clarity as possible around the nature of the problem you’re solving and the tasks that need to be performed, and having the necessary pieces ready to execute a task. Execution is not the time to fumble around getting things ready. Because knowledge work is often emergent, take preparation as far as you can.

4. Workflow

Adobe Photoshop Lightroom presents a workflow-based solution to photographers. (via Flickr, devar, http://www.flickr.com/photos/59874422@N00/253450773)

Professional photographers rely on a well-rehearsed workflow to maximize productivity. (After all, any time not spent behind a camera is time not making money.) A workflow refers to the general sequence of tasks that need to be performed for any project. Associated with each step of a workflow are inputs, processing, and outputs.

For research projects, chances are you need to: 1) define the scope and context of a study, 2) design the study, 3) apply for ethics clearance, 4) collect data, 5) analyze data, 6) interpret data, 7) write up the data, and 8) disseminate the findings. That constitutes a generalized workflow for researching/evaluating. Practicing and adhering to a workflow means less thinking and planning. The GTD workflow I wrote about here is another example.
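If it helps to make the inputs-processing-outputs idea concrete, here is a minimal, illustrative Python sketch. The stage names mirror the list above; the Stage structure and the particular inputs and outputs are just one hypothetical way of writing a workflow down, not a prescribed tool:

```python
from typing import NamedTuple

class Stage(NamedTuple):
    """One step in a workflow: what it consumes and what it produces."""
    name: str
    inputs: list[str]
    outputs: list[str]

# A generalized research/evaluation workflow, one Stage per step.
# The inputs/outputs here are illustrative assumptions, not canonical.
WORKFLOW = [
    Stage("Define scope and context", ["stakeholder questions"], ["study scope"]),
    Stage("Design the study", ["study scope"], ["design and instruments"]),
    Stage("Apply for ethics clearance", ["design and instruments"], ["clearance"]),
    Stage("Collect data", ["clearance"], ["raw data"]),
    Stage("Analyze data", ["raw data"], ["analyses"]),
    Stage("Interpret data", ["analyses"], ["findings"]),
    Stage("Write up the data", ["findings"], ["draft report"]),
    Stage("Disseminate the findings", ["draft report"], ["publications, briefs"]),
]

# Print each step with what flows in and out of it.
for i, stage in enumerate(WORKFLOW, start=1):
    print(f"{i}. {stage.name}: {', '.join(stage.inputs)} -> {', '.join(stage.outputs)}")
```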

3.  Define your top 3 tasks to complete for each day.


Identify and limit your day to completing only 3 tasks. Do them when you are mentally charged and refreshed (i.e. soon after you wake up).

2. Pomodoro

A pomodoro kitchen timer (photo credit: Wikipedia)

Pomodoro is a timing technique for maximizing productivity. Pomodoro is Italian for tomato, and the technique takes its name from those tomato-shaped manual kitchen timers. To use the pomodoro technique, simply work in bursts of 25 minutes, each followed by a 5-minute break. Each 30-minute burst constitutes a pomodoro. During each pomodoro, avoid any distraction and work ONLY on your task. Pomodoro aficionados would tell you to do 4 pomodoros, totalling 2 hours, and then take a longer break.
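If you’d rather not hunt down a kitchen timer, the cycle is simple enough to script yourself. Here’s a minimal Python sketch: the 25- and 5-minute durations and the set of four come from the technique as described above, while the function names, the 20-minute long break, and the terminal bell are my own illustrative choices:

```python
import time

WORK_MINUTES = 25        # one pomodoro of focused work
BREAK_MINUTES = 5        # short break between pomodoros
LONG_BREAK_MINUTES = 20  # longer break after a full set (duration is my pick)
POMODOROS_PER_SET = 4

def countdown(minutes: int, label: str) -> None:
    """Announce a phase, wait it out, then ring the terminal bell."""
    print(f"{label}: {minutes} minutes. Go.")
    time.sleep(minutes * 60)
    print(f"{label} done.\a")

def pomodoro_set() -> None:
    """Run four pomodoros with short breaks, then take one long break."""
    for n in range(1, POMODOROS_PER_SET + 1):
        countdown(WORK_MINUTES, f"Pomodoro {n}")
        if n < POMODOROS_PER_SET:
            countdown(BREAK_MINUTES, "Short break")
    countdown(LONG_BREAK_MINUTES, "Long break")

if __name__ == "__main__":
    pomodoro_set()
```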

1. Apply OHIO — only handle it once — to your e-mails.

For each piece of correspondence, only handle it once: act on it immediately, then file it or delete it. Apply David Allen’s GTD workflow. (via FastCompany, http://www.fastcompany.com/3004136/11-productivity-hacks-super-productive-people#2)

There you have it. I hope you found this series helpful in enhancing your productivity!

Spotlight on Productivity – Day-Planning using David Seah’s Emergent Task Planner

This is the fifth post in the Spotlight on Productivity series, in which I examine productivity challenges associated with academic/knowledge work and take stock of current thinking and tools to help us get things done.

Being Productive = Staying Focused

One of the most important realizations about being productive is that it requires maintaining razor-sharp focus on doing only a few big things a day. The brain, like a muscle, does tire out. That’s why it makes sense to start the day off with cognitively demanding tasks, when you are fresh and recharged, and leave technical tasks towards the end of the day.

But meetings and errands do get in the way of producing. It takes conscious effort to prioritize tasks and arrange to do them during “down time”. It’s also helpful to create time-blocks that you purposefully set aside for certain important tasks, like writing a paper or doing literature searches.

In the last post, I introduced David Seah’s Task Progress Tracker (TPT) for project-task tracking. In this post, I introduce his Emergent Task Planner for day-planning. It has several built-in features that work well with knowledge work.

What is the Emergent Task Planner?

In David’s words, the ETP is designed around three ideas:

  • Focus – A small set of important tasks is more likely to get done.
  • Assessment – Estimating and tracking task time helps you allocate your time more effectively.
  • Time Visualization – There are only so many hours in the day. By showing you the time you have left, you can see whether your planning is realistic or not.

How to Use It

ETP Instructions from David Seah (via http://davidseah.com/blog/node/the-emergent-task-planner/)

1. Write in the date and hours of the day at the top and left side of the form with your favourite pen.

2. Write in three tasks you want to do, more if you are feeling optimistic!

3. Block out the time to do them in the day grid on the left.

4. Keep notes of interruptions and unplanned tasks as necessary.

5. Review at the end of the day, and prioritize what’s left for tomorrow.

Why use the ETP

The ETP is excellent for tracking how much time is spent on each task. Since adopting it, I find that I am more conscious of how I plan to spend my time and how I actually spend it. It allows me to do a post-game analysis each day to fine-tune my productivity. I now feel more in control of my time and of my day.

Like the TPT, the ETP is free to download and print in B/W and Colour. The ETP also comes in several different sizes (US Letter/US Half-size 2-Up; A4; A5).

Give it a try and let me know how it goes!