Over the last twenty years, people interested in building strong communities have been making an important shift. Eager to “move the needle” on quality-of-life issues, they are experimenting with new ways to create mutually reinforcing, community-wide strategies that yield big changes, rather than hoping that the individual efforts of organizations and services will add up to more than the sum of their parts. This new approach to community change requires a different approach to evaluation.
Conventional evaluation techniques typically focus on discrete programs and services and are carried out by external experts. Evaluation practices borrowed from the private sector are narrowly concerned with operational performance and return on investment for a single organization rather than with outcomes for the entire community. Neither is suited to the scale and complexity of community impact, and both tend to approach evaluation as a mechanical exercise in accountability rather than a process of community learning. After many iterations of trying to “do the old stuff on steroids,” the field of change-makers is self-correcting. We are experimenting with new ways of measuring change, exploring who is responsible for outcomes, developing methods that can keep up with the fast-moving pace of community change activities, testing alternative approaches for getting change-makers involved in the assessment process itself, and using the results to drive new thinking, better strategies, and deeper impact.
Rhetoric about "data-based decision making" and "evidence-based" practice can give the impression that one simply looks at evaluation results and a straightforward decision follows. Erase that image from your mind. That is seldom, if ever, the case.
This article describes five simple rules, distilled from 50 years of trial and error, that can help participants, funders, and evaluators of Collective Impact initiatives track their progress and make sense of the results.
This resource identifies five practical shared measurement challenges that, if not handled well, can weaken a group's ability to evaluate and manage its Collective Impact effort.
This paper will help readers prepare the terrain for community-engaged evaluation work, explore transparency, learning, and consent as integrated parts of evaluation practice, and engage communities successfully in evaluation.
By: Pamela Teitelbaum
Cellphilming is a participatory video technique for creating learning experiences, collecting data, and sharing stories in your community. Applied in an evaluation context, it can encourage greater participation of all community members in telling the stories, both challenging and successful, that result from community change initiatives. This can facilitate dialogue on policy and program changes within the community.
By: Mark Cabaj
The User Profile Worksheet and User Profile Card explore the primary users’ questions, the kinds of decisions they will make with the answers to those questions, their preferences for how the assessment should be conducted, and when they need evaluation feedback.
By: Mark Cabaj
This paper was written by Liz Weaver (Tamarack) and Max Hardy (Collaboration for Impact) about the opportunities and challenges they face in supporting collective impact and community engagement efforts in Canada and Australia.
Jean-Marie Chapeau is the Consulting Director, Evaluating Impact, at the Tamarack Institute. Deeply convinced that local and collaborative approaches are an essential lever for solving complex social issues, Jean-Marie has been involved in community development and social change for 25 years.