Monitoring and evaluation for civic tech: Part 2

Code for Canada's Merlin Chatwin (right) and Laura Chang at the 2019 Code for Canada Showcase in Edmonton.

Merlin Chatwin is Code for Canada's Monitoring and Evaluation Lead. In this series, he explores how we might reimagine and improve the way we measure the impact of civic tech projects. You can read Part 1 here.

Over the last few months, I've had fascinating conversations with academics at restaurants in Toronto and Edmonton, visited and video-conferenced with government folks across Canada working on digital service and public engagement teams, and shared coffee with advocates looking to help civic tech get to the next level in their communities.

As you can imagine, the conversations were far-reaching, but they always came back to the challenges we need to address in creating and sustaining the social change we all desire from civic tech work. As part of my role with Code for Canada, I've been having these conversations while simultaneously doing a literature review on monitoring and evaluation in civic tech.

From the beginning, I encountered two distinct challenges:

  1. Very few people in the civic tech ecosystem are afforded the time and space to specifically focus on monitoring and evaluation; and
  2. Globally, there is a limited amount of literature on evaluating what 'good' civic tech looks like and how it can be measured.

We'll be publishing an article later in the year that proposes a way forward for civic tech evaluation in Canada, but in the meantime, I wanted to share some of what I've been hearing as a way to catalyze more conversations.

I've taken some insights from the conversations and early readings on civic tech M&E and synthesized them into a set of 'polarity' tensions. The way I'm thinking about these (aligned with writing on polarity management) is that neither pole of a tension is wrong or inherently bad, but as a sector, we need to figure out how to continuously adapt where we sit in the midst of each tension.

Here are some of the polarities, in no particular order:

'Getting sh*t done' vs. Monitoring and Evaluation

One of the most common themes I've seen, whether in conversation or in the literature, is that monitoring and evaluation work isn't seen as a priority. It's either not happening at all, or it's happening off the side of someone's desk. People talk about M&E as something that's "time-consuming" or that takes resources away from efforts to "get shit done."

When time is dedicated to M&E, it's often because it's imposed as a requirement by governments or funders. Unfortunately, this has turned what should be a tool for leadership, learning and improvement into an exercise in checking boxes. As a result, this kind of M&E too often focuses on some pretty low-hanging fruit (website hits, clicks, or other digital interactions) and shies away from more substantive evaluation.

Product development vs. relationship evolution

Something the people I spoke with consistently brought up is that technology is a means to an end. Much of what civic tech is trying to achieve through product and platform creation is a fundamental change in the relationship between government and the public. This can look different based on context, but ultimately it's about harnessing the intelligence of the collective, providing access to important information, and bringing people back into the decision-making processes that impact them.

At the risk of stating the obvious, this is hard to measure. That doesn't mean we shouldn't. Monitoring and evaluating subtle changes in behaviour and relationships isn't as appealing as proving an app saved the world, but it's this gritty work at the human level that makes the real and sustained difference.

"It's entirely possible that a failed product can still lead to a positive change in the relationship between government and the public."

I heard consistently that improvements in the way governments and the public collaborate will ensure technology is applied in ways that actually address complex civic challenges. This doesn't mean that we don't evaluate the products, but that we don't end there. After all, it's entirely possible that a failed product can still lead to a positive change in the relationship between government and the public.

Stating ambitious goals vs. risk of accountability

Another theme that came up in conversations, time and time again, is that folks working in civic tech and digital government are averse to (or even afraid of) stating ambitious goals. It's easy to see why. Community organizations or social enterprise start-ups don't want lofty goals to be tied to their funding in case they don't meet them. Public servants face a similar struggle; failure to meet goals can be seen as a failure of public trust, or worse, as a "waste of taxpayer money."

Ultimately, this hinders their ability to work in the open, iterate, adapt, and learn from failure. Conversations about 'learning from failure' are getting louder, but there is still pressure to 'prove' success rather than demonstrate learning and growth. When the standard is that civic tech initiatives must prove they achieved a stated goal, it's not surprising that governments or community organizations avoid putting ambitious goals on the record.

This is not to say that accountability isn't important, but it seems a shift in how we think about success and working in the open is necessary. How can the civic tech ecosystem use M&E to learn, adapt, and grow, while doing a better job of managing expectations along the way?

Contribution causal claims vs. attribution causal claims ("We helped move this forward" vs. "we did this")

A theme of many of the conversations (often because I asked this question directly) was how to manage the need to 'prove' an initiative was successful. Governments and funders increasingly favour 'Impact Evaluation'; they want to prove, through the use of control groups, that a given initiative was the sole cause of a given impact.

"Requiring organizations to 'prove' that their intervention is the sole reason for a positive impact on a beneficiary group is counter to the culture of civic tech."

In my conversations, two key issues with this type of evaluation came up. First, community organizations understand that the complexity inherent in civic tech work limits their ability to make causal claims of attribution (saying "we did this without any help"). Community organizing, civic participation, and related work are complex and should be done collaboratively, with multiple interventions aimed at bringing about the necessary change. There's a common refrain in civic tech: "build with, not for." Requiring organizations to 'prove' that their intervention is the sole reason for a positive impact on a beneficiary group is counter to the culture of civic tech.

Second, these types of evaluations are often beyond the human and financial resource capacity of many civic tech organizations. There's no money for civic tech start-ups to create control groups and conduct experiments or quasi-experiments to prove causation. People are interested in how to conduct rigorous M&E that is contextually appropriate and within the existing resources of civic tech initiatives.

These tensions are a result of people trying to do good work and answer tough questions. Naming and addressing M&E challenges in the field is necessary for civic tech to move to the next level of difference-making.

The current lack of M&E is a sign of sector disempowerment; the civic tech ecosystem is not empowered to design an intervention, implement it, and have it achieve different results than originally intended. Improved M&E is part of a broader culture change, one that empowers people to be ambitious, do their best work, and learn how to do it better.

As always: if you're working on similar things, I'd love to connect. Send me a message at merlin@codefor.ca and we can chat!