My first opportunity to seriously consider the intersections between evaluation and design came in a class on writing. The instructor, a poet herself, had us develop our craft as mature writers would. She introduced us to how writers think about writing and how they approach the task of writing. The formulaic, straight-through, write-once approach learned and honed in grade school gave way to a more organic approach, a practice I continue to this day.
The approach goes something like this: start with flow-writing, an uninterrupted 10-minute brain-dump session, to get thoughts onto paper. (This is the creative phase of the writing process.) Then return to the writing and edit ruthlessly, focusing on clarity and precision of language. Finally, copyedit the writing after all the heavy lifting is done. (See Peter Elbow's work, Writing Without Teachers; video clip included below.)
What struck me about this approach to writing was how it mirrors the developmental approach advocated in developmental evaluation. Both approaches focus on promoting purposeful, intentional changes to the object of development, be it a piece of writing or a program. The kind of change desired, however, is not incremental; it is change in form and function. In program evaluation, we understand this to be change to the program model.
As with writing, there has been a strong emphasis on, and reliance on, a linear approach to developing programs (needs assessment –> program planning –> program implementation –> program evaluation). It would seem, though, that this linear approach has limited utility and is appropriate only under a narrow set of strict conditions. More on this thought in the future.
A young program evaluator was cast into the deep waters of innovation and discovered the potential of developmental evaluation and design.
One afternoon during the winter term of my first year in graduate school, my advisor asked if I had a second to spare. Standing in a darkened, narrow corridor, in one of those serendipitous moments, she asked if I could help her re-think the undergraduate training of preservice teachers. For five years, she and a colleague had been working on developing a course in classroom assessment that was mandatory for all 700 students enrolled each year. Despite their efforts, a different quality of learning remained out of reach. Their instruction was constrained by limited financial resources, a restrictive lecture-style format, and a meagre allotment of seven hours of instruction. They wanted to experiment with alternative ways of structuring learning in classroom assessment. I was intrigued by the problem and by the prospect of making a small contribution to a practical problem. I agreed. As was typical of conversations with my advisor, I left with a sense of intellectual bewilderment and stimulation.
My curiosity about using evaluation as a vehicle for problem-solving and social change thus began. What I had originally thought would be a simple project involving some literature review, an analysis of our particular program context, and maybe some coaching sessions blossomed into a full-fledged developmental evaluation project. A few months after that first interaction, I found myself helping to pilot a blended learning mini-course that integrated microblogging to connect teacher candidates, who were by then on field placement, with their peers and with faculty mentors. Just as they were grappling with doing assessment in their respective classrooms, the instructional team had the opportunity to guide and inject new thinking about assessment. Assessment suddenly sprang to life, becoming a practical and situated professional practice.
What was remarkable about that project was the role that evaluation, specifically developmental evaluation, played. Developmental evaluation not only provided a means of anchoring our decision-making in the best available data; it also helped us continually develop the program through incremental learning, itself a developmental process. It also occurred to us that incorporating some form of evaluation as a way of reality-testing was the prudent and responsible way to pilot an unknown, untested way of teaching; DE offered that.
Emerging from this substantial experience, I subsequently went back, analyzed the project through a researcher's lens, and wrote it up as a case study. The project proved not only intellectually gratifying but also viscerally draining. Along the way I experienced anxiety; I felt lost and stuck at times. In the write-up, I explained that these emotions stemmed from the uncertainty associated with innovating. We often see and talk about evaluation as if it were a systematic, clinical procedure, and far too often we neglect the emotional component of leading and participating in evaluations.
In trying to unpack the evaluation for the case study, I contended with the notion that what I had participated in was not, in fact, an evaluation. Indeed, the project (the evaluation) and the evaluand (the program) lacked many of the hallmarks of a program evaluation: clear, specific, and measurable goals; an operational program; or program participants. But a closer look at some of the processes and activities told a different story. What had transpired was indeed an evaluative exercise, and evaluative thinking played a prominent part in moving the project forward.
But there was something more…
the way we systematically explored options and made decisions about what might have been appropriate for use in our particular context… the aim to ‘do different’… and the permission to be vision-driven and participant-driven… oh, and the way we went from ‘nothing to something’… how we embraced uncertainty, how I led and coached my clients and participants… and how we created the space to think creatively and innovatively…
If anyone is going to be attending, I’d love to get together. I will be presenting the results of my project on Developmental Evaluation and its application to social innovation, which won the CES/CESEF SEEK award, over two sessions: the first on Tuesday, June 11, 15:45 – 17:15 EST; the second to be confirmed.
I’ll be tweeting about the conference at @chiyanlam under #cestoronto2013 and posting lessons learned after the conference. See you there and online!