Spotlight on Productivity: A discussion of productivity challenges among grad students, researchers, evaluators, and academics

Graduate students, researchers, evaluators, and academics (“knowledge workers”) encounter productivity challenges unlike those of other fields. Productivity strategies that work for most people often do not transfer well to us. One primary reason is that knowledge workers are expected to contribute original knowledge. Lots of it. Boosting productivity means learning how to do more of it in the least amount of time possible. By gaining greater clarity into the nature of knowledge work and the principles of productivity, and insight into your own productivity habits, you can expect to produce more at high quality, reclaim lost time, and keep stress down.

Knowledge work requires one to digest a high volume of information and, increasingly, to collaborate closely. Complicating matters is that knowledge work is time-bound: a few months for a project, two years for a master’s, four years for a PhD, five years until tenure review for assistant professors… The pressure to produce is real.

At the most basic level, engaging in knowledge work requires one to do three things: reading + thinking + writing.

Productivity matters because performance hinges on successfully executing tasks, in both quantity and quality. Grad students are increasingly expected to publish in today’s competitive climate. Tenure decisions are based on producing a substantial body of work. And of course, for starving grad students, pay depends on productivity.

Doing more in less time, while maintaining high quality, is not enough. Knowledge work also requires a high degree of creativity and integrative thinking. Being able to free the mind to think in novel ways is as important as being able to play by the rules.

Having finished four years of graduate studies while working as a researcher and evaluator, what have I learned about the nature and demands of productivity?

1. Project work

I’ve learned that much of knowledge work is project-based. Projects have definite start dates, end dates, and key deliverables. Some projects are low-stakes (e.g. class assignments), some are medium-stakes (e.g. RA work), and some are high-stakes (e.g. scholarships/grants). It’s important to be able to deliver quality work in all situations.

To be high-functioning, we must be able to juggle multiple projects at once. This requires high-level planning to keep track of project statuses. This means being able to switch between projects. This means recognizing the rhythm of when a project tends to get busy and when it tends to slow down. This means recognizing project bottlenecks.

2. Substantial time horizon

Many of our projects span significant stretches of time. And projects are often put in holding patterns, like planes queuing up to land. Developing a tolerance for waiting is key. Know when to follow up on a project to give it that little nudge to move things along.

3. Thinking is hard work. 

Thinking is a crucial and obligatory part of our work, but rarely do we give it enough attention. It occurred to me some time ago that there are different kinds of tasks, and each requires a different level of engagement.

  • Repetitive tasks require little cognitive effort but demand high accuracy. Tasks like data entry, processing e-mails, and searching for literature are of this type.
  • Intellectual tasks require some creativity. They usually require some integration or synthesis of information. Most acts of writing are of this type. Qualitative coding is another example.
  • Finally, creative tasks are those that require a high level of creativity, a high level of integrative thinking, and high engagement with the task. Theming qualitative data, planning writing pieces, and synthesizing literature are examples of creative tasks.

Now, why do we care about these distinctions? We care because creative tasks are those that matter most in knowledge work, but they are also the most draining. It’s really hard to sustain creative bouts. We have about two hours of golden productivity each day; allocate that time to creative tasks!

4. Balancing obligations

Complicating productivity are the obligations that get in the way: groceries, car troubles, e-mails, administrative paperwork, etc… Balancing obligations requires one to be mindful about what really counts and what can wait. There’s increasing pushback against checking e-mail first thing in the morning. Intentionally create time for the tasks that really matter. For grad students and academics, that usually means one thing: publications.

5. Tracking success

Finally, I’ve found tracking productivity successes to be one of the most important, but least obvious, ways of boosting productivity. Tracking successes requires us to clarify what success looks like. When we begin to track success, we can begin to assess how we actually spent our time, how much work got done, and how to better optimize our work. At the simplest level, checking off to-do lists is one way to track success. But what about higher-level success tracking? Productivity tools are really good at planning, but not at evaluating successes. One of the tools that I’ll be introducing tackles this problem.
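To make the idea of higher-level success tracking concrete, here is a toy sketch in Python (this is not one of the tools I’ll be introducing; the task names, types, and minute counts are invented for illustration). It logs completed tasks and totals the time spent per task type, using the repetitive/intellectual/creative distinction from earlier:

```python
from collections import defaultdict

def summarize(log):
    """Total up minutes spent per task type across a log of
    (task, task_type, minutes) entries."""
    totals = defaultdict(int)
    for _task, task_type, minutes in log:
        totals[task_type] += minutes
    return dict(totals)

# A hypothetical week's worth of completed tasks.
week = [
    ("data entry",             "repetitive",   90),
    ("draft findings section", "intellectual", 120),
    ("theme interview data",   "creative",     60),
    ("theme interview data",   "creative",     45),
]

print(summarize(week))
# {'repetitive': 90, 'intellectual': 120, 'creative': 105}
```

Even a log this simple begins to answer the success-tracking questions: how time was actually spent, and whether the golden creative hours actually went to creative tasks.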

Now that I have laid the foundation for how I understand productivity, in the posts that follow I’ll look at how we can tackle each of these challenges. I’ll be introducing some tools and concepts that have transformed how I work.

How are you finding this article? What productivity challenges are you experiencing? Does any of this resonate with you? Share below. I would love to hear from you. 


Spotlight on Productivity

Theming qualitative data.

It’s been a week since I posted on this blog. During this time I have made significant progress on several projects: I analyzed data, wrote up findings, and planted seeds for new projects. Needless to say, I haven’t had the time and space to think about evaluation and design! Since I have committed to posting once every workday (my 5×52 project), I’m going to be doing a bit of catch-up in the next few days. Before returning to evaluation, let’s turn to the topic of productivity.

The next series of posts will feature productivity tools that work well for those of us leading the researcher/evaluator lifestyle.

(List will be updated with links when content becomes available.)

Post 1) Productivity challenges among grad students, researchers, and evaluators

Update. Post 1.5) Getting Things Done: Mindset and Approaches

Post 2) Project Dashboard: Kanban Style

Post 3) How to: Track your time and progress using Task Progress Tracker

Post 4) Day Planning: Emergent Time Planning

Post 5) 5 principles to jumpstarting productivity

Evaluation Lessons from the Stanford $5 Challenge


Yesterday, I introduced the Stanford $5 Challenge. Today, I look at what evaluators doing design/developmental evaluation work could learn from it.

If you had $5 in seed funding and only two hours to make it happen, what would you do to make the most money?

This is known as the Stanford $5 Challenge. Tina Seelig poses it to her students at Stanford University enrolled in the Stanford Technology Ventures Program. Most students, she explains, would put the money toward a lottery ticket or gamble it away in Las Vegas. These students assume that $5 is too little money to do much with, and that engaging in a high-risk/high-reward activity is the way to net the most profit.

Surprisingly, the teams that made the most money kept their $5. Instead, they reframed the problem, challenged assumptions, and looked for opportunities beyond the initial framing. Focusing on the $5 seed money framed the problem too tightly.

So, what could design-informed evaluators learn from this?

There are two questions we must raise when working with any innovative program, at any phase of our engagement:

  • Does the program serve a real and significant (i.e. meaningful) need?
  • Is the program design optimal for effecting the intended change?

Raising the question of whether the program serves a real and significant need is analogous to asking whether there is a market for a product or service in the business world. A program may be mounted in response to needs perceived by the implementers (e.g. government, funders), but not by the program’s targeted recipients. For instance, universities may feel the need to introduce educational programming for students living in residence out of a sense of social purpose, but the program may be deemed ineffective and flawed because students see little reason to be ‘educated’ in their living spaces. Some might view such an intervention as an intrusion on their downtime, while others might actually resent the University’s attempts. In other words, our job is to raise the question of whether the program serves some real and significant need from the perspective of the program recipients.

However, raising such a question of program recipients can sometimes be problematic. Recipients may well perceive that a program is unwarranted when in fact they could benefit from participation (and in some cases, they should participate in the program in spite of feeling no particular need for it). Those of us who have worked with children know this: few children would volunteer to sit patiently and practice the piano, or sign up for swimming lessons of their own accord. What good parents do is expose their children to these opportunities, build their confidence, and help them persist despite initial resistance. Why? Because they know that some activities are good for the kids in the end. In other words, wrongly concluding that there are no extant needs in a program situation can be equally dangerous, as the $5 challenge illustrated: the students who focused too narrowly on the problem saw $5 as too little money to do anything meaningful and gave up. Evaluators can help their clients by raising questions and questioning assumptions. One way to do this is to problematize the situation to promote discourse: “Is it really the case that… ” On to the second question.

The question of whether the program design is optimal for effecting the intended change is about the linkage between the theory of change and the theory of action within a particular program. We saw in the $5 challenge that the teams who made the most money thought outside the box and turned to different ways of making money. In program evaluation, we can ask the following questions of the theory of change: Is the way we currently conceptualize change appropriate? Might there be other ways to effect change? What blinders might we have on? Where else can we learn more and think differently about this program? If this is where we want to end up (and these are the kinds of changes we want to see in the program recipients), how else could we facilitate these changes?

Assuming that we are satisfied with the theory of change, we can begin to consider the theory of action, i.e. how a program marshals its resources to operationalize its theory of change. Ask yourself: might there be other ways of achieving the same intended change, given how we think change could be realized? This is a challenge that the business world is especially adept at tackling, thanks to competition. Take the example of a fueling-station franchise. While the business model of turning a product (gasoline or diesel) into profit is essentially the same across companies, each theory of action differs in where companies place their refueling stations, their loyalty programs, the pricing of products, and other convenience items (e.g. coffee, car washes). These different operations (activities) influence the purchasing decision, and so companies develop strategies in hopes of gaining a competitive advantage over one another. In the social space, these will inevitably be questions of the comparative sort, and will have to be answered empirically.

  • To sum up, the take-home lesson here is to think hard about whether the program model fits the program context. When it does, we have a viable program that serves a real need (and therefore stands to make a real difference in people’s lives).
  • When we are designing program models, i.e. when we are trying to come up with a program, the focus is on optimizing the program model to fit the program context.

The Stanford $5 Challenge

If you had $5 in seed funding and only two hours to make it happen, what would you do to make the most money?


This is known as the Stanford $5 Challenge. Tina Seelig asks this of her students enrolled in the Stanford Technology Ventures Program at Stanford University.

Most students, she explains, would put the money toward buying a lottery ticket or gamble it away in Las Vegas. These students assume that $5 is too little money to do much with, and that engaging in a high-risk/high-reward activity is the way to net the most profit.

Surprisingly, the teams that made the most money kept their $5. Instead, they reframed the problem and challenged assumptions. Focusing on the $5 seed money framed the problem too tightly. Seelig tells of students who looked for opportunities around them. One team set up a free bike tire-pressure check-up service outside the Stanford Student Union and charged a few dollars to re-inflate tires. Stanford students were appreciative of the service, so much so that the team generated a higher profit when it switched to a by-donation model. Another team secured reservations at local restaurants and sold them to diners for a profit. The team that made the most money did something even more inventive: it sold its three-minute slot, in which teams were to present their strategies to classmates, to the very companies that wanted to recruit the program’s graduates.

Lessons learned: the take-away here is learning to think innovatively and creatively. Identify what the perceived problem is. Identify what assumptions frame the initial problem formulation. Then question those assumptions. Finally, reframe the problem.

You can watch Tina Seelig talk about the $5 Challenge here. Tomorrow, I’ll explore the implications of this challenge for developmental evaluators and design-minded evaluators.

The Design-Informed Program Evaluation Manifesto

the mastery within.

Consider the lyrical lines
from a Verdi bel canto aria,
or the highly evolved
ab
strac
tion
from Picasso’s bull.

Consider the artful sentence,
or a poet’s communication through
white space.

Consider:
the injustice of genocide
exposed
on a photograph;

the peaceful, pulsating
indicator light
on an Apple computer;

the transformation of an apprentice
under the tutelage of a master.

the catharsis from a Shakespearian tragedy
(or, say, a modern-day Stephen Sondheim Sweeney Todd);

to inspire, to catalyze, to set in motion.

rhythm, sound, harmony, syntax,
colour, shade, composition.
This gestalt generates creative tension.

the ingenious thinking that lies within springs to life.

To the trained eyes,
simplicity may reveal complexity
and from chaos reveal order;
the resolution is one of beauty and wholeness.
To the untrained eyes,
the sophistication remains,
though unnoticed;

An experience is nevertheless shaped,

experienced, and inspired by the

thinking.

To design is to render our intentions into the active voice.

On Writing and Program Development

My first opportunity to seriously consider the intersections between evaluation and design came in a class on writing. The instructor, a poet herself, had us develop our craft as mature writers do. She introduced us to how writers think about writing and how they approach the task of writing. The formulaic, write-once, straight-through approach learned and honed in grade school made way for a more organic approach, a practice I continue to this day.

The approach goes something like this: start with flow-writing, an uninterrupted 10-minute brain-dump session, to pen thoughts onto paper. (This is the creative phase of the writing process.) Then return to the writing and edit ruthlessly, focusing on clarity and precision of language. Finally, copyedit the piece after all the heavy lifting is done. (See Peter Elbow’s Writing Without Teachers; video clip included below.)

What struck me about this approach to writing was how it mirrors the developmental approach advocated in developmental evaluation. Both focus on promoting purposeful, intentional changes to the object of development, be it a piece of writing or a program. The kind of change desired, however, is not incremental change, but change in form and function. In program evaluation, we understand this as change to the program model.

Like writing, program development has long emphasized and relied on a linear approach (needs assessment → program planning → program implementation → program evaluation). It would seem, though, that this linear approach has limited utility and is appropriate only under a few strict conditions. More on this thought in the future.