Chi Yan Lam, PhD(c), CE, is a program evaluator, researcher, and educator. He works closely with social innovators and public sector leaders across Canada to bring analysis and strategy to bear on program development, evaluation, and decision-making.
This course will guide students in conducting systematic evaluative inquiry in support of data-informed program decision-making. Students will examine the multiple purposes of program evaluation, applying the principles, methods, and logic of evaluation to the needs of targeted program personnel and decision makers.
The overall learning objectives for GDPI 802 are:
To develop an understanding of theoretical frameworks for inquiring into social programs, their organizational structures, learning activities, and outputs
To develop an understanding of how to inquire into social programs and develop program evaluation plans
To engage in professional dialogues about program inquiry and evaluation within a community of evaluation practice
To articulate complex understandings of, and responses to, dilemmas in program evaluation theory and practice
To apply understandings about program evaluation theory to a specific context of evaluation practice
“Policy evaluation refers to the systematic investigation and determination of value of a policy—and can take place in various sectors, including education. Policy evaluators apply evaluation methodologies and employ social scientific research methodologies to answer evaluation questions in support of policy making, policy development, and policy decision making. Policy evaluation is rarely undertaken for its own sake but mostly conducted in connection to the policy cycle. Because of the inherent political nature of policies, policy evaluation is increasingly undertaken by policy actors—both governmental and nongovernmental parties—with interests in shaping the political agenda. Hence, awareness of both the technical and political dimensions of policy evaluation is important to its understanding and execution.”
“The consumer-oriented approach to evaluation is the evaluation orientation advocated by evaluation expert and philosopher Michael Scriven. The approach stems from the belief that evaluation ought to serve the consumer, that is, the ultimate end user of the particular object under evaluation, the evaluand—be it a program, a curriculum, a policy, a product, or a service. This entry first discusses the history and the key aspects of the consumer-oriented evaluation approach, including the centrality of the consumer, the goal of the evaluation, and the role of the evaluation and the evaluator. It then looks at the techniques used in consumer-oriented evaluation, the checklist developed by Scriven for this evaluation approach, and the advantages and challenges of the approach.
The consumer-oriented evaluation approach arose in the 1960s in reaction to the then-prevailing stances that saw evaluation as an exercise in value-free measurement of whether program goals were achieved. The consumer-oriented evaluation approach reminds evaluators, and those who commission and use evaluation, that an evaluation ought to produce a determination about the merit, worth, and/or significance of the evaluand and that the basis of evaluation ought to be referenced to the needs of consumers.”
“The term inputs refers to the resources made available to a program, policy, or curriculum to enable its operation. More precisely, inputs provide the antecedent conditions from which some programmatic activities are to occur and, as a consequence, achieve some predetermined objectives. Put simply, inputs are what get invested to do the work.
Inputs are important to make explicit because they play a limiting function in the implementation of a program, policy, or curriculum. For instance, the reach of a program is dependent on its inputs, such as the funding allocated to the program, the size of the venue in which the program is delivered, or the availability of program staff with expertise in the area. Without sufficient input, the efficacy and/or the effectiveness of a program may suffer. Yet, the opposite is not necessarily true. Overinvesting in a program, policy, or curriculum does not necessarily yield greater or better outcomes, if processes are unable to take advantage of abundant inputs. Hence, an accurate accounting of inputs is important to understanding the effects of a program, policy, or curriculum.”
The Program Design Method series profiles design methods and exercises from professional design communities that program designers and evaluators can use to work more creatively and innovatively with clients to develop more impactful programs. The first in this series is User Persona.
From evaluating programs we know that effective programs are those that respond to some genuine need(s) or perceived problem(s) for some targeted users in some purposeful ways over a period of time, which can lead to meaningful change. Program developers typically have a good sense of what meaningful results they wish to promote with the program. And program evaluators can help program developers to clarify, measure, and interpret what meaningful, measurable results might be. Where there can be more ambiguity is in uncovering what constitutes purposeful ways of working with those targeted users to respond to their genuine needs.
Take, for instance, a hospital’s initiative to promote hand sanitation (the desired behaviour change) to reduce iatrogenic, hospital-acquired infections (impact). What might be effective means toward that end, given a constrained budget? Would installing hand sanitizer dispensers at doorways suffice? Should the hospital, instead, equip its staff with mini sanitizer bottles to be worn on the body? Or should the initiative focus on producing public health educational messaging to remind its health care staff of the potential risk and harm to patients that improper hand sanitation can cause? Or should it instead highlight the benefits and protection of handwashing for the clinicians? Should the program target both health care staff and visitors?
More often than not, we do know what can be done to remedy a problem or address a user need; we just don’t know which of many plausible solutions would prove effective for a particular situation. Deciding on which course of action to take depends a great deal on how we construe the problem and who the end users are. This is important because the more specifically we understand our users, the more targeted and tailored a program intervention can be, and the higher the likelihood that we can make a real contribution.
So, how can program designers design programs with users in mind? One design method is called User Persona.
What is User Persona?
A User Persona is a composite character constructed for the purpose of program design to represent a significant user group. A user persona is typically presented as a profile, detailing the constructed user’s background, histories, needs, aspirations, and values, among other attributes.
It helps if you can see some examples. Here’s one.
The substance of a User Persona should be empirically grounded, i.e., the designer should go out there and meet with people who could shed light on the program’s target users; the process of creating and refining a user persona is not fiction-writing.
A good User Persona gives its reader a good sense of the profiled user. This means that the information presented in a User Persona has to be highly specific and contextual. It is unlike the kind of information that can typically be learned from survey research (e.g., demographics), which tends to consist of summaries of particular measures. The most useful User Personas are often constructed from the extreme cases, the outliers, and the fringes. This is because even users who are alike in their demographics can differ on important dimensions. In a recent developmental evaluation, the distinction between technology-welcoming and tech-averse was helpful in guiding program decisions even for Millennial teachers.
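To make the attributes of a persona concrete, they can be sketched as a simple structured profile. This is only an illustration: the field names and the example values below are hypothetical, and real personas are narrative documents grounded in field data, not code.

```python
from dataclasses import dataclass, field

# A minimal sketch of a User Persona as a structured profile.
# Field names and example values are hypothetical; they only
# illustrate the kinds of attributes a persona typically captures.

@dataclass
class UserPersona:
    name: str        # fictional name for the composite character
    background: str  # occupation, context, relevant history
    needs: list = field(default_factory=list)        # genuine needs the program should address
    aspirations: list = field(default_factory=list)  # what the user hopes to achieve
    values: list = field(default_factory=list)       # beliefs that shape behaviour
    traits: list = field(default_factory=list)       # distinguishing dimensions, e.g. "tech-averse"

# A hypothetical persona for the hand-sanitation initiative described above
nurse = UserPersona(
    name="Dana",
    background="ER nurse, 12 years of experience, works rotating shifts",
    needs=["sanitize hands quickly between patients"],
    aspirations=["keep patients safe without slowing down care"],
    values=["patient safety", "efficiency"],
    traits=["moves constantly between rooms"],
)

print(f"{nurse.name}: {nurse.needs[0]}")
```

The point of the structure is not the code itself but the discipline it imposes: every field prompts the designer to go and learn something specific about real users.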
User Persona is an important program design method for at least two reasons:
The process of creating and refining a user persona brings program designers closer to their users. In fact, designers are encouraged to develop an empathic understanding, i.e., first-hand experience of the particular needs and situations users find themselves in. The old adage rings true here: to walk a mile in someone else’s shoes. Such experiential understanding may be the source of some important insights and further inform program directions.
A User Persona provides a reference point for evaluating how (and how well) a program is expected to work for particular users.
When to use/do it?
The User Persona method can be initiated during all phases of program development. It is especially helpful during initial planning or as a part of ongoing development:
Initial Planning: User Personas allow us to make projections about how the studied user might access, benefit from, and participate in the program. Insights generated from the exercise would inform decisions to be made about the program.
Ongoing Development: User Personas allow us to evaluate how the studied user would access, benefit from, and participate in the existing program.
In either case, User Personas provide a means for clients and program designers to engage in evaluative discussion. For a particular user, consider the following:
How might he/she experience the program?
How might his/her experience differ from the assumed theory of change?
What implication might this person’s needs/values/assumptions/characteristics have on realizing program outcomes?
Might this person experience any barriers or ‘hiccups’ as they participate in the program?
What could be meaningful adaptations/modifications for this user (and others like him/her)?
Okay, so how do I create a User Persona?
User Personas should be constructed from empirical data, i.e., they should not be works of fiction. And you should get as close as you can to targeted users. After all, it is not helpful to design a program for hypothetical users.
Christof Zürn of Creative Companion has constructed an excellent poster to illustrate the kinds of information that can be helpful to capture and represent in a User Persona.
Such information can be learned through interviews, observation, focus groups, and even surveys. Usability.gov has a great section on which questions to ask during persona development here.
It is helpful, as you can imagine, to work with a number of different User Personas during program design.
User Persona is a user-centered method in program design. It allows program evaluators and developers to interrogate a program from a particular user’s perspective. The User Persona should tell a story about the user’s history, needs, values, and aspirations, the substance of which should be grounded empirically.
Are you curious about program design? Have you any particular questions about its methods and methodologies that you’d like us to write about? Drop me a note below or find me on Twitter @chiyanlam, where I curate tweets on evaluation, design, social innovation, and creativity. My thanks to Brian @StrongRoots_SK for inspiring this post.
AEA 2015 was a special one for me, for it was the first time the concept of program design got traction. I presented with fellow design-minded evaluators in two sessions.
In the first one, I reported on my experience of embedding design principles into a developmental evaluation. The presentation was entitled, Lessons-learned from embedding design into a developmental evaluation: The significance of power, ownership, and organizational culture. And, here’s the abstract:
Recent attempts at developmental evaluation (DE) are incorporating human-centered design (HCD) principles (Dorst, 2011; IDEO, n.d.) to facilitate program development. HCD promotes a design-oriented stance toward program development and articulates a set of values that focuses the evaluation beyond those ideals expressed by stakeholders. Embedding design into DE promises to offer a more powerful means to promoting program development beyond either approach alone. Yet, embedding design into DE introduces additional challenges. Drawing on a case study into a design-informed DE, this panelist discusses the tensions and challenges that arose as one developmental evaluator attempted to introduce design into a DE. Insights from the case study point to the importance of:
– Attending to power dynamics that could stifle or promote design integration; and,
– Evaluator sensitivity to the deep attachment program developers had to program decisions.
These findings allude to the significance of organizational culture in enabling a design-informed DE.
In the second presentation, Chithra Adams (@ChithraAdams), John Nash (@jnash), Beth Rous (@bethrous), and I discussed how principles of human-centered design could be applied to the development of programs.
Specifically, we introduced two design exercises–Journey Mapping and User Archetyping–as means of bringing human-centered design principles into program design and evaluation.
In an upcoming post, we’ll take a deep dive into these design exercises and examine their application to program design.