
5 things I’m resolving to do post-conference #eval15.

I don’t know about you, but after every conference, I go into hermit mode. This often means that I fail to follow up with dear colleagues whom I meet only once a year, or to act on important ideas and learning. But no more slacking.

Here are five things I’m resolving to do post-conference #eval15.

1. Follow up with contacts via e-mail.

2. Upload my slides to TIG libraries.

3. Consolidate and review my conference notes. Follow up on new resources.

4. Declare that I am a part of a global eval community.

5. Heed the call to embrace and report on failure in evaluation, and to learn from failure. To that end, I’m committing to writing up a less-than-successful developmental evaluation.

So, who’s with me?


PD-TIG #EVAL15 Panel: Chen, Gargani, Stead, & Norman on Program Design: Evaluation’s New Frontier?

I’m really looking forward to #EVAL15 because this will be the first year that the conference features a track in program design. Here’s a look at the full agenda.

I am especially looking forward to the PD-TIG-sponsored panel, “Program Design: Evaluation’s New Frontier?”. The session will feature Dr. Huey Chen, Dr. John Gargani, Brenda Stead, and Dr. Cam Norman.

The panelists have been asked to consider what program design could mean in the context of evaluation theory and practice, with the goal of arriving at an initial articulation of the concept.
Here’s the abstract: Notions of design have entered the mainstream in both public and private sectors. Underpinning this shift is an emerging realization that the once-professionalized approaches and mindsets designers employ to solve complex problems may be applied to other contexts. Bridging evaluation with design holds potential to reconceptualize both the theories and practices of evaluation, and as a consequence, enhance evaluation influence. This panel of expert evaluators draws on their theoretical and practical experiences to explore what ‘program design’ could mean for evaluators and evaluation practice.
Without giving too much of the plan away, the speakers will be responding to the following prompts:
  • How have you come to ‘program design’? What do you mean by it?
  • What potential, if any, do you see in program design for enhancing evaluation? What hazards do you see in evaluators engaging in program design?
  • Are there any dangers in evaluators assuming the role of a program designer? Is there not a risk of cooptation?
  • What competencies or skills do you see as critical to doing PD work? How might newcomers go about learning these skills?
  • If there is potential in program design, what might be the next step toward growing or legitimizing its practice? What should we strive to understand better? What might this body of knowledge comprise?
It promises to be an exciting panel. The session is scheduled for November 12, 2015 (3:00–4:30 PM) in “Field”. Come for the panel and stay for the business meeting, which will be short!
See you there!

Haggerty and Doyle on 57 ways to screw up in grad school.

Who hasn’t screwed up in grad school? Been there, done that.

Professors Kevin Haggerty (Professor of Sociology and Criminology, University of Alberta) and Aaron Doyle (Associate Professor in the Department of Sociology and Anthropology, Carleton University) recently published a book on the many ways one could screw up in grad school.

“The book, written by two former graduate directors, covers the rookie mistakes made by new graduate students and delivers a how-to guide that sets would-be PhDs on the right track and off the path to failure—which these days includes a completion rate of only 50 percent. The authors have a bang-up website, and the book has recently been profiled by Inside Higher Ed, Science, and CBS News’s MoneyWatch.”

In the book, they identify 57 ways one can “screw up” (reproduced below from the book’s table of contents).

And in Times Higher Education, Haggerty and Doyle shared 10 of them.

I may be too far along to change course. But for many of you, this book may be just what you need.


An Introduction to Screwing Up
Who Are We?
Gendered Pronouns
Thesis vs. Dissertation

Starting Out
1. Do Not Think about Why You Are Applying
2. Ignore the Market
3. Stay at the Same University
4. Follow the Money Blindly
5. Do an Unfunded PhD
6. Do an Interdisciplinary PhD
7. Believe Advertised Completion Times
8. Ignore the Information the University Provides You
9. Expect the Money to Take Care of Itself

10. Go it Alone and Stay Quiet
11. Choose the Coolest Supervisor
12. Have Co-Supervisors
13. Do Not Clarify Your Supervisor’s (or Your Own) Expectations
14. Avoid Your Supervisor and Committee
15. Stay in a Bad Relationship
16. Expect People to Hold Your Hand

Managing Your Program
17. Concentrate Only on Your Thesis
18. Expect to Write the Perfect Comprehensive Exam
19. Select a Topic Entirely for Strategic Reasons
20. Do Not Teach, or Teach a Ton of Courses
21. Do Not Seek Teaching Instruction
22. Move Away from the University Before Finishing Your Degree
23. Postpone Those Tedious Approval Processes
24. Organize Everything Only in Your Head
25. Do Not Attend Conferences, or Attend Droves of Conferences

Your Work and Social Life
26. Concentrate Solely on School
27. Expect Friends and Family to Understand
28. Socialize Only With Your Cliques
29. Get a Job!

30. Write Only Your PhD Thesis
31. Postpone Publishing
32. Cover Everything
33. Do Not Position Yourself
34. Write Only to Deadlines
35. Abuse Your Audience

Your Attitude and Actions
36. Expect to be Judged Only on Your Work
37. Have a Thin Skin
38. Be Inconsiderate
39. Become “That” Student
40. Never Compromise
41. Gossip
42. Say Whatever Pops Into Your Head on Social Media

Delicate Matters
43. Assume That the University Is More Inclusive Than Other Institutions
44. Rush into a Legal Battle
45. Get Romantically Involved with Faculty
46. Cheat and Plagiarize

Am I Done Yet? On Finishing
47. Skip Job Talks
48. Expect to Land a Job in a Specific University
49. Expect People to Hire You to Teach Your Thesis
50. Turn Down Opportunities to Participate in Job Searches
51. Neglect Other People’s Theses
52. Get an Unknown External Examiner
53. Do Not Understand the Endgame
54. Be Blasé about Your Defense
55. Do Not Plan for Your Job Interview
56. Persevere at All Costs
57. Consider a Non-Academic Career a Form of Failure

Final Thoughts
Appendix: A Sketch of Grad School
The Thesis
The Program
Your Department
The People



On launching the Program Design Topical Interest Group

I recently wrote on AEA365 about our motivation for starting a Topical Interest Group (TIG) on Program Design.

Our interest in organizing the PD-TIG grew out of a casual conversation. We (Karen Widmer, Terence Fitzgerald, and I) realized that we each held responsibilities for program design in our respective practices. We were inspired by the potential for infusing evaluative thinking and evidence into program development; in doing so, evaluators might further contribute to clients’ goals of developing robust, impactful programs. However, even among ourselves, we had differing perspectives on what this might look like in practice. As a group, we were inspired by Gargani and Donaldson’s work on program design, Patton’s work on developmental evaluation, and, more generally, writing on theory-driven evaluation. We said to ourselves: Wouldn’t it be great if we could get together with others who might share our passion and curiosity about program design?

It’s been over two years since that initial conversation, and it has taken a lot of behind-the-scenes work to get the TIG up and running.

I am most excited by the idea that the TIG can engage the broader evaluation community on program design more effectively than any of us could individually.

New this year at the annual AEA conference is a program design track. The track schedule can be found on the PD-TIG website.

In an upcoming post, I’ll profile an all-star panel session being organized during the PD-TIG Business Meeting. It features Dr. Huey Chen, Dr. John Gargani, Brenda Stead, and Dr. Cam Norman as panelists.

Until then, onwards!


A Developmental Reflection and a New Beginning.

Well, it’s been a while. My last concerted effort at blogging was in 2013, when I launched the 52×5 project. I set out to blog once every workday for an entire year. I sputtered out fairly quickly. Projects got in the way; writing projects took precedence. In retrospect, my goal was too ambitious. Evaluating this undertaking in a summative sense would suggest it was an utter failure: neither its objectives nor its goals were met.

Evaluating my efforts developmentally might lead one to render a different judgment. To do that, we would have to look at the reasons for undertaking the project and its activities in the first place. At the time, I knew I had several writing projects coming online, and so I wanted to develop myself as a writer. I saw blogging as a means of stretching my writing muscles and warming up for those projects; setting out to write 52×5 posts was only a means to an end. And at that, I’ve had some demonstrable success.

Between then and now, I’ve been busy preparing manuscripts for publication. I also went through the credentialing process with the Canadian Evaluation Society and was awarded the Credentialed Evaluator designation. All the while, I finished three major evaluation projects—two of which were developmental evaluations.

Moving forward, I hope you will find a renewed energy on this blog. There’s much that I hope to share on evaluation theory, principles, and practices—especially those that operate at the intersections of design, evaluation, and social innovation. And I can’t wait to get started. Only this time I won’t commit to as ambitious a publishing schedule as I had before.

Please say hi. It’s been a while!



Merit/Worth/Significance Explained in Plain Language

I recently received an e-mail from a fellow doctoral student asking me to explain Scriven’s notion of merit/worth/significance. One part of her dissertation involves determining the value of test-preparation training (e.g., MCAT/GMAT/LSAT prep courses) for language learners. One of her committee members suggested that she use M/W/S as a framework for tackling this aspect of her work. So I wrote back to her and said: why don’t we Skype and talk about this?

I’ve been thinking about this problem ever since. As an evaluator, I am reminded that one of the basic purposes of evaluation is the determination of the merit/worth/significance of something. We typically refer to whatever we are evaluating (the ‘something’) as the evaluand. This classical definition of evaluation constitutes part of what Scriven (1991) refers to as the logic of evaluation, in an entry by the same name in the Evaluation Thesaurus. The logic of evaluation is a seminal contribution to the field because it gets at the core of what makes evaluation unique compared to, say, research: evaluation allows us to make evaluative claims. The distinctions among merit, worth, and significance, and their application in evaluation, are important, but accessible writing on the topic is hard to find. Perhaps M/W/S is so obvious to everyone else but me :). Hopefully not.

So… what’s merit, worth, and significance?

Merit, worth, and significance can be easily explained by reference to evaluating an apple. Say you’re at a grocery store, deciding whether to buy an apple.


Merit has to do with the intrinsic properties, characteristics, or attributes of an evaluand. When buying an apple, most people would prefer one that is not rotten, is sweet to the taste, and is not otherwise damaged or deformed. That’s typically what people look for if the apple is to be eaten on its own. But what if you were buying the apple to make an apple pie? Then you may wish to buy an apple that is not sweet but tart. So, as we can see, what we value as desirable attributes of an object depends on contextual factors.

Here is another example. A car has merit if it is reliable (i.e., does not break down while you’re driving down the highway; it is predictable), is safe (i.e., has adequate safety features and operates as intended), and is powerful relative to its intended application (say, a commuter car vs. a pick-up truck hauling construction material). Now, you may say a car has merit only if it has integrated air conditioning or a stereo system. A design-conscious person may insist that a car be visually appealing. Increasingly, drivers want good fuel economy. Different people may hold different views of what constitutes merit. In other words, an evaluand may be evaluated against different dimensions of quality, i.e., criteria. Part of what makes evaluation fun is surfacing the criteria one might use to evaluate an evaluand. What’s ‘good’ to you is not necessarily ‘good’ to me. That’s why there are so many kinds of cars out there.

In a program evaluation, we typically think of a program as having merit if (1) it does what it sets out to do, i.e., achieves its intended outcomes, and (2) it makes a meaningful difference as a consequence of its operation.


Now, worth is a trickier concept. In everyday parlance, we might say that an apple (assuming it is ‘good’) is worth something; that ‘something’ is typically expressed as a monetary value (e.g., this apple is worth $2.00; that car is worth $24,999). So worth is the value of an evaluand expressed as an equivalence to something else. We may say that this activity is worth ‘my time’. Whereas merit can be difficult to measure, worth is usually expressed in some more easily measurable unit.

Another way to think about worth is in a comparative situation. Let’s say you’re evaluating two instances of the same program: Program Breakfast-for-All at Site A and Site B. While both may have merit, the worth of the program at Site A may differ from its worth at Site B, depending on its impact on constituents. The worth of two comparable but different programs may also differ if one is cheaper to run (so that one is worth more than the other).
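To make that concrete with purely hypothetical numbers: suppose Breakfast-for-All costs $50,000 a year at each site, but Site A serves 500 children while Site B serves 200. Site A then delivers breakfasts at $100 per child per year versus $250 at Site B. Judged on cost per beneficiary, the program is worth more at Site A, even if the two sites are equally meritorious in how well they operate.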

Finally, significance.

Significance is the fuzziest of the three. It refers to the values and meanings that one ascribes to an evaluand. Typically, one can learn about the significance of something by asking questions such as: What makes this evaluand special? What meaning does it hold for particular individuals?

Ask any young bride about her diamond ring. While it may not feature a big diamond (so the ring is of limited worth), it probably holds great significance. Or consider a young college graduate driving a high-mileage car that is nearing the end of its service life. We might speculate that the car has limited merit (the transmission is wonky and the body is rusting, though the car is still roadworthy) and, as a result, is of limited worth to anybody. But to the college graduate it may hold great significance, for his or her livelihood depends on it to get to work every day.

Notice that significance often has little to do with merit. Indeed, a program may be shown to have limited impact on a community, yet hold great significance for its symbolic value. We may say, “It matters! Even if only to a few.” As another example, a program may be shown to be inefficacious, but if it is the only program of its kind serving a specific need for a vulnerable population, that is significant to know, isn’t it?

So what?

Knowing M/W/S well not only enables us to unpack what others mean by ‘good’, but also helps us raise questions about quality when, say, designing an interview guide or constructing survey questions.

Question for you: Is this how you understand merit/worth/significance? Might you have other powerful ways of explaining M/W/S to others? Comment below. Thanks for reading!

PS: For all you educators out there, is a grade an indication of merit, worth, or significance, or any/all of the three?