Here’s the single question we think about most with our climate conversations project: how do we know if the training and the conversations are really having an impact?
We know we’re not holding this question alone. Many orgs are using conversations as part of their campaigning toolkit (Greenpeace, The Wildlife Trusts and City of Sanctuary to name a few) so how do we support each other and share our learning? Here’s where we’re up to…
Let’s start with what we think are the four key dimensions of political impact – and how we might go about measuring it in each case.
1. What’s the impact of the 6-week Challenge on trainees themselves?
At first glance, it might seem odd to start by looking at the impact of our training on trainees themselves: surely they’re already on board with the need for ambitious climate action, and the impact we’re really interested in is on the people they talk to?
Not so fast. As we’ve mentioned in previous blogs, one of the stats that made us want to try out this training in the first place is the striking finding from More In Common that the UK has the least welcoming climate movement of any country in Europe. That’s seriously bad news if we want to be reaching beyond our base to engage with so-called ‘Persuadables’ on the issue (and we really do) – so the first and most basic test of our approach is whether it helps trainees build bridges to those people, through taking an engaged, listening approach.
The good news: this aspect of impact is reasonably easy to measure, through questionnaires and interviews with our trainees. More good news: the feedback from trainees is really positive, with 85% of Workshop participants and 95% of Challenge participants reporting that they feel more able and confident to have climate conversations. So far so good.
2. Who are our trainees and who are they having conversations with?
Next up: who are we actually reaching with this project and the conversations that it generates?
Most climate activists in the UK hail from the political ‘segment’ known as Progressive Activists (‘PAs’ for short; for an explanation of the seven different segments, click here). But if most of our trainees are PAs, and the people they’re talking to are also PAs, then we end up just talking amongst ourselves rather than reaching the ‘Persuadables’ we mentioned a moment ago – who are found among different segments, like Established Liberals (ELs), Loyal Nationals (LNs), or Civic Pragmatists (CPs).
Here, though, things get a bit harder to measure. It’s simply not possible – even for those steeped in voter segmentation, let alone the vast majority who aren’t – to size someone up at a glance and reliably guess which segment they belong to. The people who designed the seven segments use a lengthy questionnaire to determine which segment someone belongs to, after all!
And while we can obviously put that questionnaire to our trainees, it’s a lot harder to figure out how to apply it to the people they talk to: it’s not like our trainees can run a long segmentation questionnaire past someone during an informal chat at the school gate or in the post office queue.
3. What is the impact of the conversation on the person that our trainee speaks to?
Third, there’s the question of how conversations affect the people whom our trainees talk to. Does chatting about climate make them think about the issue more, place a higher priority on it, change their minds about the issue in some way?
Here too, though, the methodological challenges are steep. It would be great if our trainees could put a set of ‘before-and-after’ questions to people at the start and end of each conversation (“how important would you rate climate action on a scale of 1-10?” … “and how about now that we’ve had this chat?”) – but again, this would feel deeply weird in a chat in the queue at Sainsbury’s, to the extent that it would seriously undermine the whole aim of having genuine, organic, open conversations.
Similarly, we’d love to be able to get contact details from the people our trainees talk to, so as to run longitudinal studies on the impact of a conversation after, say, 1, 3 or 12 months. But again, trying to get to this level of precision in measuring the means risks undermining the end of the approach, by turning a conversation into something more like a market research survey.
4. What is the wider political impact of conversations?
Finally and most important of all, how will we know whether these conversations – at any scale – are having any real impact on society and politics?
Many campaigning organisations have long relied – partly driven by funders – on theories of change and campaigning tactics that draw a neat line between a given action (be that petitions, marches, letters or whatever) and political impact.
But while these kinds of actions are highly measurable, they don’t actually tell us anything about whether attitudes and values are changing more broadly – and it’s here, rather than petition signatures, where we have most work to do.
Climate Outreach, the go-to experts on this whole area, have called for a much greater focus on the social science of how change happens. They point out that in the UK (and elsewhere), “a social climate silence – an informal silent agreement not to talk about climate change – has prevented active discussion of the topic” and helped to enable political polarisation. Our core hypothesis with this project is that conversations – supported by an online and offline comms package that shares the project and its impact directly with decision-makers and the wider public – can make a key difference here.
And of course, measuring values and attitudes is possible, either nationally or in specific places, for instance through opinion polling and focus groups. But even where values and attitudes are shifting, it’s hard to attribute that directly to particular conversations or projects – when it could equally be, say, a spate of weird weather, or climate being in the national news, or controversy around a local issue.
So where does this leave us?
There are real challenges with securing precise measurements of the impact of this work on the people our trainees talk to and on wider politics – either because the very act of trying to gather data turns organic conversations into a totally different kind of interaction, or because it’s hard to attribute causality to one intervention when real life is just a lot more complex than that.
And this comes with the territory we’re working on. The blunt truth is that the stuff we can easily quantify is the stuff we’ve been doing for years – and it hasn’t got us where we need to go. We have to try different approaches, and accept that this will involve taking risks, getting stuff wrong, and adapting as we go.
And just because it’s hard to quantify impact to a tenth of a percent does not mean that it’s impossible to say whether the approach is working; it’s just that we have to be smart about how we go about it. While it may not be possible to run a laboratory-style Randomised Controlled Trial for every chat across the garden fence, we think there is an exciting learning approach to be taken: accurately documenting lessons learned at every stage, inviting peer review, and sharing transparently to allow for iterative – and fast – learning.
And there’s also huge scope to take an ecosystem approach to this learning, given – as we mentioned at the beginning – how many other organisations are currently trialling conversations as part of their change-making toolbox. So we are very keen to work with these and other organisations and experts to bring together a small community of practice: a space to continue to develop our thinking in this area and to allow for shared evaluation and learning. (And needless to say, please let us know if you’d be interested in finding out more or getting involved!)