Social Innovation

Key Journals on Evaluation

The following is a list of journals that focus on evaluation theory and its application in various contexts and settings. These titles are recommended for readers looking to develop a grounding in evaluation theory and methodology.

American Journal of Evaluation
Each issue of the American Journal of Evaluation (AJE) explores decisions and challenges related to conceptualizing, designing, and conducting evaluations. Four times a year, it offers original, peer-reviewed articles about the methods, theory, ethics, politics, and practice of evaluation.

Canadian Journal of Program Evaluation
The Canadian Journal of Program Evaluation seeks to promote the theory and practice of program evaluation. To this end CJPE publishes full-length articles on all aspects of the theory and practice of evaluation and shorter Evaluation Practice Notes which share practical knowledge, experiences and lessons learned. A third section of the journal, Addressing Challenges in Evaluation Practice, presents real-life cases written by evaluation practitioners. The journal has a particular interest in articles reporting original empirical research on evaluation. CJPE is a completely bilingual journal, publishing in both English and French. Readership includes academics, practitioners and policymakers. CJPE attracts authors and readers internationally.

New Directions for Evaluation
New Directions for Evaluation, a quarterly sourcebook, is one of two official journals of the American Evaluation Association. The journal publishes works on all aspects of evaluation, with an emphasis on presenting timely and thoughtful reflections on leading-edge issues of evaluation and the organizational, cultural, and societal context within which evaluations occur. 

Evaluation
Launched in 1995, Evaluation publishes fully refereed papers and aims to advance the theory, methodology, and practice of evaluation. The journal favours articles that bridge theory and practice, whether through generalizable and exemplary cases or through theoretical and methodological innovation.

Journal of MultiDisciplinary Evaluation 
The Journal of MultiDisciplinary Evaluation is a free, online journal published by the Evaluation Center at Western Michigan University. The journal focuses on news and thinking of the profession and discipline of evaluation.

Evaluation and Program Planning
Evaluation and Program Planning is based on the principle that the techniques and methods of evaluation and planning transcend the boundaries of specific fields and that relevant contributions to these areas come from people representing many different positions, intellectual traditions, and interests. The primary goals of the journal are to assist evaluators and planners to improve the practice of their professions, to develop their skills, and to improve their knowledge base.

Practical Assessment, Research & Evaluation
Practical Assessment, Research & Evaluation is a peer-reviewed online journal whose purpose is to provide free access to articles that can have a positive impact on assessment, research, evaluation, and teaching practice.

Calling Canadian and International Bloggers, Tweeters, and Podcasters Attending CES

Most of you already know that the call for proposals is out for the Canadian Evaluation Society 2014 annual meeting. Do you know of any Canadian or international evaluation bloggers or Twitter users?

Brian Hoessler (of Strong Roots Consulting) and I are organizing a session to be presented at CES to discuss and showcase Canadians’ use of social media to the global evaluation community. We’re looking for collaborators who’d be interested in sharing their experiences and co-developing a session on blogging, Twitter, podcasting, or other social media activities. The tentative foci are to 1) take stock of the different purposes for which evaluators use social media, and 2) profile the different ways evaluators are leveraging social media to connect across sectors and boundaries.

Most recently Chris Lysy, Ann Emery, Sheila Robinson, and Susan Kistler presented a Think Tank session at AEA13 that was a hit.

Indeed, we welcome collaboration and partnerships with our American and international colleagues, as it offers us the chance to compare and contrast the Canadian context with the broader international landscape.

So… might you or anyone you know be interested in this? Hit me up on Twitter @chiyanlam or via the Contact Me page. Let’s talk!

Spotlight on Productivity – Day-Planning using David Seah’s Emergent Task Planner

This is the fifth post in the Spotlight on Productivity series, in which I examine productivity challenges associated with academic/knowledge work and take stock of current thinking and tools to help us get things done.

Being Productive = Staying Focused

One of the most important realizations about being productive is that it requires maintaining razor-sharp focus on doing only a few big things a day. The brain, like a muscle, does tire out. That’s why it makes sense to start the day with cognitively demanding tasks, when you are fresh and recharged, and leave routine technical tasks for the end of the day.

But meetings and errands do get in the way of producing. It takes conscious effort to prioritize tasks and arrange to do them during “down time”. It’s also helpful to create time-blocks that you purposefully set aside for certain important tasks, like writing a paper or doing literature searches.

In the last post, I introduced David Seah’s tool for project-task tracking. In this post, I introduce David Seah’s Emergent Task Planner (ETP) for day-planning. It has several built-in features that work well for knowledge work.

What is the Emergent Task Planner?

In David’s words, the ETP is designed around three ideas:

  • Focus – A small set of important tasks is more likely to get done.
  • Assessment – Estimating and tracking task time helps you allocate your time more effectively.
  • Time Visualization – There are only so many hours in the day. By showing you the time you have left, you can see whether your planning is realistic or not.


How to Use It

ETP Instructions (via David Seah)

1. Write in the date and hours of the day at the top and left side of the form with your favourite pen.

2. Write in three tasks you want to do, or more if you are feeling optimistic!

3. Block out the time to do them in the day grid on the left.

4. Keep notes of interruptions and unplanned tasks as necessary.

5. Review at the end of the day, and prioritize what’s left for tomorrow.
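The planner itself is paper-based, but its three organizing ideas (focus, assessment, time visualization) translate naturally into a digital sketch. Below is a minimal, hypothetical Python analogue, not part of Seah’s tool: each task carries an estimated and an actual duration, `remaining_minutes` gives the time-visualization view, and `review` supports the end-of-day assessment. The 8-hour workday constant is an assumption.

```python
from dataclasses import dataclass

WORKDAY_MINUTES = 8 * 60  # assumed 8-hour day

@dataclass
class Task:
    name: str
    estimated_min: int
    actual_min: int = 0

def remaining_minutes(tasks):
    """Time Visualization: minutes left after planned tasks."""
    return WORKDAY_MINUTES - sum(t.estimated_min for t in tasks)

def review(tasks):
    """End-of-day Assessment: positive = task ran over estimate."""
    return {t.name: t.actual_min - t.estimated_min for t in tasks}

tasks = [Task("Write paper draft", 120), Task("Literature search", 90)]
print(remaining_minutes(tasks))  # 270 minutes still unplanned
tasks[0].actual_min = 150
tasks[1].actual_min = 60
print(review(tasks))  # {'Write paper draft': 30, 'Literature search': -30}
```

Comparing estimates against actuals at the end of the day is what makes the next day’s estimates sharper.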

Why Use the ETP

The ETP is excellent for tracking how much time is spent on each task. Since adopting it, I find that I am more conscious of how I intend to spend my time and how I actually spent it. It allows me to do a post-game analysis each day to fine-tune my productivity. I now feel more in control of my time and of my day.

Like the TPT, the ETP is free to download and print in B/W and Colour. The ETP also comes in several different sizes (US Letter/US Half-size 2-Up; A4; A5).

Give it a try and let me know how it goes!

5x52 Social Innovation

Evaluation Lessons from the Stanford $5 Challenge


Yesterday, I introduced the Stanford $5 Challenge. Today, I look at what evaluators doing design / developmental evaluation work could learn from this.

If you have $5 in seed funding and only 2 hours to make it happen, what would you do to make the most money?

This is known as the Stanford $5 Challenge. Tina Seelig asks this of her students at Stanford University enrolled in the Stanford Technology Ventures Program. Most students, she explains, would put the money toward a lottery ticket or gamble it away in Las Vegas. These students assume that $5 is too little money to do much with, and that engaging in a high-risk/high-reward activity is the way to net the most profit.

Surprisingly, the teams that made the most money kept their $5. Instead, they reframed the problem, challenged assumptions, and looked for opportunities beyond the initial framing of the problem. Focusing on the $5 seed money framed the problem too tightly.

So, what could design-informed evaluators learn from this?

There are two questions we must raise in working with any innovative program at any phase of our engagement:

  • Does the program serve a real and significant (i.e., meaningful) need?
  • Is the program design optimal for effecting the intended change?

Raising the question of whether the program serves a real and significant need is analogous to asking whether there is a market for a product or service in the business world. A program may be mounted in response to needs perceived by the implementers (e.g., government, funders), but not perceived by the program’s targeted recipients. For instance, universities may feel the need to introduce educational programming for students living in residence out of a sense of social purpose, but the program may be deemed ineffective and flawed because students see little reason to be ‘educated’ in their living spaces. Some might view such an intervention as an intrusion on their down time, while others might actually resent the university’s attempts. In other words, our job is to raise the question of whether the program serves a real and significant need from the perspectives of the program recipients.

However, raising such a question with program recipients can sometimes be problematic. Recipients may well perceive that a program is unwarranted when in fact they could benefit from participation (and in some cases, they should participate in spite of feeling no particular need for it). Those of us who have worked with children know this: few children would volunteer to sit patiently and practice the piano, or sign up for swimming lessons of their own free will. What good parents do is expose their children to these opportunities, build their confidence, and help them persist despite initial resistance. Why? Because they know that some activities are good for the kids in the end. In other words, concluding that there are no extant needs in a program situation can be equally dangerous, as the $5 challenge illustrated: the students who focused too narrowly on the problem saw $5 as too little money to do anything meaningful and subsequently gave up. Evaluators can help their clients by raising questions and questioning assumptions. One way to do so is to problematize the situation to promote discourse: “Is it really the case that… ” On to the second question.

The question of whether the program design is optimal for effecting the intended change is about the linkage between the theory of change and the theory of action within a particular program. We saw in the $5 challenge that the teams who made the most money thought outside the box and turned to different ways of making money. In program evaluation, we can ask the following questions of the theory of change: Is the way we currently conceptualize change appropriate? Might there be other ways to effect change? What blinders might we have on? Where else can we learn more and think differently about this program? If this is where we want to end up (and these are the kinds of changes we want to see in the program recipients), how else could we facilitate them?

Assuming that we are satisfied with the theory of change, we can begin to consider the theory of action, i.e., how a program marshals its resources to operationalize its theory of change. Ask yourself: might there be other ways of achieving the same intended change, given how we think change could be realized? This is a challenge that the business world is especially adept at tackling, thanks to competition. Take the example of a fuelling station franchise. While the business model of turning a product (gasoline or diesel) into profit is essentially the same across companies, each company’s theory of action differs: where it places its refuelling stations, its loyalty programs, its pricing, and its other convenience offerings (e.g., coffee, car washes). These different operations (activities) influence purchasing decisions, and so companies develop strategies in hopes of gaining a competitive advantage over one another. In the social space, these will inevitably be questions of the comparative sort, and will have to be answered empirically.

  • To sum up, the take-home lesson here is to think hard about whether the program model fits the program context. When it does, we have a viable program that serves a real need (and therefore stands to make a real difference in people’s lives).
  • When we are designing program models, i.e., when we are trying to come up with a program, the focus is on optimizing the program model to fit the program context.

The Stanford $5 Challenge

If you have $5 in seed funding and only 2 hours to make it happen, what would you do to make the most money?

Many dollar banknotes. (Photo credit: Wikipedia)

This is known as the Stanford $5 Challenge. Tina Seelig asks this of her students enrolled in the Stanford Technology Ventures Program at Stanford University.

Most students, she explains, would put the money toward a lottery ticket or gamble it away in Las Vegas. These students assume that $5 is too little money to do much with, and that engaging in a high-risk/high-reward activity is the way to net the most profit.

Surprisingly, the teams that made the most money kept their $5. Instead, they reframed the problem and challenged assumptions; focusing on the $5 seed money framed the problem too tightly. Seelig tells of students who looked for opportunities around them. One team set up a free bike tire pressure check-up service outside the Stanford Student Union and charged a few dollars to re-inflate tires. Stanford students were appreciative of the service, so much so that the team generated a higher profit when it switched to a by-donation model. Another team secured reservations at local restaurants and sold them to diners for a profit. The team that made the most money did something even more inventive: it sold its three-minute slot, in which teams were to present their strategy to classmates, to the very companies that wanted to recruit the program’s graduates.

Lessons learned: The take-away here is learning to think innovatively and creatively. Identify what the perceived problem is. Identify what assumptions are at play that frame the initial problem formulation. Then, question those assumptions. Finally, reframe the problem.

You can watch Tina Seelig talk about the $5 Challenge here. Tomorrow, I’ll explore the implications of this challenge for developmental evaluators and design-minded evaluators.

CES Toronto 2013 Presentation: Insights on Using Developmental Evaluation for Innovating: A Case Study on the Co-Creation of an Innovative Program

I’m delighted to be speaking on this topic tomorrow at the CES Toronto 2013 conference: Tuesday, June 11, from 10:15-11:45 AM, Main Mezzanine, Confederation 6.

Title:  Insights on Using Developmental Evaluation for Innovating: A Case Study on the Co-Creation of an Innovative Program


Developmental evaluation (DE) supports social innovation and program development by guiding program adaptation to emergent and dynamic social realities (Patton, 2011; Preskill & Beer, 2012). This presentation examines a case study of the preformative development of an innovative educational program. It contributes to research on DE by examining the capacity and contribution of DE for innovating. The case provides evidence supporting the utility of DE in developing innovative programs, but challenges our current understanding of DE on two fronts: 1) the necessary move from data-based to data-informed decision-making within a context of innovating, and 2) the use of DE for program co-creation in response to the demands of social innovation. Analysis reveals the pervasiveness of uncertainty throughout development and how the rendering of evaluative data helps to propel development forward. DE enabled a nonlinear, co-evolutionary development process centering on six foci of development (definition, delineation, collaboration, prototyping, illumination, and evaluation) that characterize the innovation process.