Developmental Evaluation: Questions and Qualities

Posted on August 1, 2014
By Cameron Norman

Developmental evaluation, a form of real-time evaluation focused on innovation and complexity, is gaining interest and attention among funders, program developers, and social innovators. Yet its popularity is revealing fundamental misunderstandings and misuse of the term that, if left unquestioned, may threaten the advancement of this important approach as a tool to support innovation and resilience.

It's quite likely you've been hearing a lot about developmental evaluation, an emerging approach to evaluation that is suited for programs operating in highly complex, dynamic conditions.

Developmental evaluation (DE) is an exciting advancement in evaluative and program design thinking because it links those two activities together, creating an ongoing, real-time conversation about innovation that facilitates strategic learning about what programs do and how they can evolve wisely. Because it is rooted in traditional program evaluation theory and methods as well as complexity science, it takes a realist approach to evaluation, making it a fit for the thorny, complex, real-world situations that many programs inhabit.

I am seeing and hearing this term throughout the non-profit, government, and civic engagement worlds, and I ought to be excited to see DE brought up so often, yet I am often not. Why? Is it that I have some Charlie Brown syndrome, or is it something else?

Building a better brand for developmental evaluation?

Alas, with rare exception, when I hear someone speak about the developmental evaluation they are involved in, I too often fail to hear any of the indicator terms one would expect to accompany stories about such an evaluation. These include terms like:

  • Program adaptation
  • Complexity concepts like emergence, attractors, self-organization, and boundaries
  • Strategic learning (or continuous learning and LOTS of learning)
  • Surprise!
  • Co-development and design
  • Dialogue
  • System dynamics
  • Flexibility

Alas, DE is starting to follow a well-worn path laid by terms like systems thinking, which grows less useful every day as it comes to refer to any mode of thought that focuses on the bigger context of a program (the system (?) -- whatever that is, it never seems to be elaborated on), even when there is none of the structure, discipline, method, or focus one would expect from true systems thinking. In other words, it's thinking about a system without the effort of real systems thinking. Still, people see themselves as systems thinkers as a result.

The term DE is following this pattern and is being used more frequently in a manner that some might call cavalier, but that I suspect reflects aspiration rather than reality.

That aspiration is about wanting to be seen (by oneself and others) as innovative, adaptive, and participative, and as a true learning organization. I don't think anyone is lying; rather, I think they've been caught up in using terms that haven't been given enough thought.

DE has the potential to support all of the aforementioned things (innovation, learning, participation, adaptation, and more), but accomplishing them requires an enormous amount of commitment. It is not for the faint of heart, the rigid and inflexible, the traditionalists, or those who have little tolerance for risk.

Rolling up the sleeves to do DE

Doing DE requires that you set up a system for collecting, sharing, making sense of, and designing with data. It means being willing to -- and competent enough to know how to -- adapt your evaluation design and your programs themselves in measured, appropriate ways. This is tough stuff and well worth the effort, but tough nonetheless.

DE is about discipline, not precision. Too often I see quests to get a beautiful, elegant design to fit the 'social messes' that are the programs under evaluation, only to do what Russell Ackoff calls "the wrong things, righter": applying a standard, rigid method to a slippery, complex problem, or treating DE as something that can be tacked on to the end of a project.

It's so much more than that, which is maybe why I get a bit concerned with the language. 

Much ado about something

Why does this fuss about the way people use the term DE matter? Is this not some academic rant based on a sense of 'preciousness' of a term? Who cares what we call it?

It matters because the programs that use and can benefit from DE matter. If it were just about gathering some loose data, slapping it together, calling it an evaluation, and knowing that nothing will ever be done with it (or maybe some dumb things will be done with it), then perhaps it would be OK (actually, that's not OK either -- but let's pretend for the sake of the point). When real program decisions are made, jobs are kept or lost, communities are strengthened or weakened, and the energy and creative talents of those involved are put to the test because of evaluation and its products, the details matter a great deal.

If DE promises a means to critically, mindfully, and thoroughly support learning and innovation, then it needs to keep that promise. But that promise can only be kept if what we call DE is not something else.

That 'something else' might be a form of utilization-focused evaluation, or maybe participatory evaluation or it might simply be a traditional evaluation model dressed up with words like 'complexity' and 'innovation' that have no real meaning in the way they are used.

(When was the last time you heard someone openly question what was meant by terms like innovation or complexity? They are used so loosely that we forget they are actually quite sophisticated concepts in practice. We take such terms as given, take them for granted, and make enormous assumptions about what they mean that are not always supported.)

There is nothing wrong with any of these methods when they are appropriate, but too often I see mismatches between the problem and the evaluative thinking and practice tools used to address it. DE is new, sexy, and a sure sign of innovation to some, which is why it is often picked as the term/method/tool to use.

Yet it's like saying "I need a 3-D printer" when what you actually need is a wrench to fix the pipe under your sink, simply because the printer is the latest tool innovation and wrenches are "last year's" tool. It makes no sense. Yet it's done all the time.

Qualities and qualifications

There is something alluring about the mysterious. Innovation, design, and systems thinking all have elements of mystery to them, which allows for obfuscation, confusion, and well-intentioned errors in judgement, depending on who and what is being discussed in relation to those terms.

We are now seeing developmental evaluators who have almost no grasp of complexity or service design and who have completed just a single course in program evaluation. I'm seeing traditional organizations recruit and hire for developmental evaluation without making any adjustments to their expectations, modes of operating, or timelines, yet still expecting results that could only come from DE. It's as I've written before, and as Winston Churchill once said:

I am always ready to learn, but I don't always like being taught

Many programs are not even primed to learn, let alone be taught. Why? Because that is hard stuff and something we need to invest more than words in.

What to offer and what to ask for

So what should someone look for in DE and those who practice it? What are some questions those seeking DE support should ask of themselves?

Of evaluators

  • What familiarity and experience do you have with complexity theory and science? What is your understanding of these domains?
  • What experience do you have with service design and design thinking?
  • What kind of evaluation methods and approaches have you used in the past? Are you comfortable with mixed-methods?
  • What is your understanding of the concepts of knowledge integration and sensemaking? And how have you supported others in using these concepts in your career?
  • What is your education, experience and professional qualifications in evaluation?
  • Do you have skills in group facilitation?
  • How open and willing are you to support learning, adapt, and change your own practice and evaluation designs to suit emerging patterns from the DE?

Of programs

  • Are you (we) prepared to alter our normal course of operations in support of the learning process that might emerge from a DE?
  • How comfortable are we with uncertainty? Unpredictability? Risk?
  • Are the timelines and boundaries we place on the DE flexible and negotiable?
  • What kind of experience do we have with truly learning, and are we prepared to create a culture around the evaluation that is open to learning? (This means tolerance of ambiguity, failure, surprise, and new perspectives.)
  • Do we have practices in place that allow us to be mindful and aware of what is going on regularly (as opposed to every six months to a year)?
  • How willing are we to work with the developmental evaluator to learn, adapt and design our programs?
  • Are our funders/partners/sponsors/stakeholders willing to come with us on our journey?

Of both evaluators and program stakeholders

  • Are we willing to be open about our fears, concerns, ideas and aspirations with ourselves and each other?
  • Are we willing to work through data that is potentially ambiguous, contradictory, confusing, time-sensitive, context-sensitive and incomplete in capturing the entire system?
  • Are we willing/able to bring others into the journey as we go?

DE is not a magic bullet, but it can be a very powerful ally for programs operating in domains of high complexity that require innovation to adapt, thrive, and build resilience. It is an important job and a formidable challenge, with great potential benefits for those willing to dive into it competently. It is for these reasons that it is worth doing and doing well.

Getting there means taking seriously DE and the demands it puts on us, the requirements for all involved, and the need to be clear in our language, lest we let the not-good-enough be the enemy of the great.

Topics:
Evaluating Community Impact, Community Innovation

By Cameron Norman

Cameron Norman is the Principal of CENSE Research and Design in Toronto, Ontario. He works with health and human services organizations and networks to learn about what they do and to design and re-design their offerings to better support their goals and aspirations.
