M&E matters: Our emerging approach to monitoring and evaluating digital transformation
Through conversations, research and some trial and error, I’ve been exploring how Code for Canada — and the civic tech ecosystem as a whole — can address the challenges of monitoring and evaluating our work. We are actively trying to create better monitoring and evaluation (M&E) systems for ourselves and hope that sharing our learning as we go will provide tools for others implementing digital transformation projects.
This is the third blog post in a series intended to openly track our progress toward creating an M&E model for civic tech and digital government. In the first post, I shared learning from my exploration of existing M&E practices in the field. I proposed that although a lot of people are thinking about how to ‘prove and improve’ their work, monitoring and evaluation in the civic tech ecosystem is often limited to counting metrics and struggles to capture the nuance and messiness of behaviour change and capacity building. This is a pressing concern for us at Code for Canada; our programs use things like product development, usability testing, and collaborative policy or process development as the means to equip governments and communities with the skills they need to better leverage digital technology.
In the second blog post, I outlined some of the challenges I heard when speaking with people about the state of M&E in civic tech. The challenges highlighted both how unique the field is and the need for an approach that can adapt to the characteristics of working at the nexus of technology and user-centred service delivery.
In response to the need for better M&E and the inherent challenges facing practitioners, we’ve been developing a set of principles to guide our approach. As always, I am sharing them here as a way to elicit feedback.
- Monitor and evaluate what we can: We acknowledge that we do not have the organizational capacity to do all of the monitoring, evaluation and research that we’d like to. Making deliberate decisions to monitor and evaluate what we can at a particular time allows us to do it well, rather than trying to do everything in a superficial way.
- Acknowledge limitations: We will be open about the decisions we are making, which aspects of our theories of change we are not actively monitoring and evaluating, and why. These decisions will be guided by resources, capacity, and organizational priorities.
- Use existing processes: We will incorporate and build upon existing processes that our partners and clients use. If a government partner is collecting metrics or monitoring key performance indicators, we will use that data as a part of a comprehensive approach to monitoring and evaluation. Our approach will adapt with our stakeholders.
- Be transparent about our approach: We will clearly outline the approach we took to data collection, analysis and learning in our program and organizational reporting. This includes highlighting the assumptions we’re making, and where we have used existing and external data to support our claims. We believe that evaluations should be candid about their limitations and caveats, because that gives the audience room to push back.
- Invite others to challenge our perspective: We will communicate the influential characteristics of the context we are operating within and how we as an organization have responded. This includes our overall approach to monitoring and evaluation as well as our decisions on using existing processes, adapting to limitations and monitoring and evaluating what we can, as stated above. We believe that inviting diverse perspectives that are external to our operations will strengthen our ability to tell a compelling evidence-based story of our work.
These principles, and our response to them, have led us to adapt and test a theory-based evaluation approach called contribution analysis (CA). CA requires evaluators to contemplate and articulate their context and the assumptions they hold about how to make the change(s) they want. It helps us answer questions about whether a change has occurred, whether the program has contributed to the desired change and what other influencing factors exist. Put simply, CA enables us to show how a program or activity contributed to an outcome, even if that activity wasn’t the sole cause of the change we’re evaluating.
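To make the pieces of a contribution story more concrete, here is a minimal sketch of how an evaluator might record them as structured data: the outcome, the evidence that a change occurred, the program’s plausible contributions, other influencing factors, and the assumptions being made. The class, field names and example content are hypothetical illustrations for this post, not part of our actual tooling or of any contribution analysis standard.

```python
# A minimal, illustrative sketch of recording a contribution story as data.
# All names and example content are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ContributionStory:
    outcome: str                                                    # the change we want to see
    evidence_of_change: List[str] = field(default_factory=list)     # did the change occur?
    program_contributions: List[str] = field(default_factory=list)  # how our activities plausibly contributed
    other_influences: List[str] = field(default_factory=list)       # external factors that also shaped the outcome
    assumptions: List[str] = field(default_factory=list)            # conditions we believe must hold

    def summary(self) -> str:
        """Return a plain-text contribution story for reporting."""
        lines = [f"Outcome: {self.outcome}"]
        lines += [f"  Evidence: {e}" for e in self.evidence_of_change]
        lines += [f"  Contribution: {c}" for c in self.program_contributions]
        lines += [f"  Other influence: {o}" for o in self.other_influences]
        lines += [f"  Assumption: {a}" for a in self.assumptions]
        return "\n".join(lines)


# Hypothetical example in the spirit of the outcomes described above.
story = ContributionStory(
    outcome="Government partner adopts usability testing in its service design process",
    evidence_of_change=["Partner ran three usability tests independently after the engagement"],
    program_contributions=["Team paired with staff on product development and testing"],
    other_influences=["A concurrent internal digital strategy mandated user research"],
    assumptions=["Staff have ongoing access to users and time to test"],
)
print(story.summary())
```

Even in this toy form, the structure makes the core of the approach visible: the program’s contribution sits alongside other influences and explicit assumptions, rather than being treated as the sole cause of the outcome.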
Interested in learning more about contribution analysis and how it could apply to your work? Merlin Chatwin from Code for Canada and Laura Nelson-Hamilton from the Ontario Digital Service will be leading a workshop on monitoring and evaluation at the upcoming Code for Canada Summit on March 11, 2020 in Toronto.
Contribution analysis has been used in the public sector and in international development initiatives, but it has not yet been applied to civic tech initiatives, which often combine digital product development with capacity building and behaviour change. We want to see whether contribution analysis is an effective way for us to learn about our work, apply those learnings and provide better service in the future. We think that more and better monitoring and evaluation will help strengthen the evidence base for civic tech and digital transformation around the world, and we hope to demonstrate the applicability of one tool to support our peers.
Above all, Code for Canada’s approach to monitoring and evaluation is pragmatic. We believe the context you’re operating in should drive decisions about which methods are most appropriate. While contribution analysis is an overarching framework, the methods we use to answer the specific evaluation questions within each program will vary based on context.
We also believe that M&E should be adaptable and iterative. A robust theory of change and a project’s M&E plan should be developed while you are constructing the program. This ensures that you’re surfacing your assumptions and identifying the conditions within your context that are necessary to effect the change you want to see. This type of evaluative thinking, taking a disciplined approach to inquiry and reflective practice, is key to an organization’s growth and adaptation.
To put this approach into practice and test it beyond Code for Canada’s programs, we are partnering with the team at the Ontario Digital Service (ODS). We’ll be collaborating to evaluate three different approaches to digital transformation: a product-led capacity-building approach, a product development approach with capacity building as a by-product, and a targeted capacity-building approach. Beyond the in-depth analysis of these three approaches, we want to demonstrate the applicability of contribution analysis to the civic tech ecosystem and show how effective it can be for stakeholders inside and outside of government.
If you’re interested in applying a theory-based evaluation approach to your work and your context, send me a message at merlin@codefor.ca and we can chat. I look forward to seeing this approach develop, and to hopefully seeing it proved and improved by the incredible community of civic tech practitioners in Canada.