How do we evaluate our impact?

Steve Walker's picture

Evaluation is often seen as a necessary evil: a requirement for financial support. It can certainly feel like that. When we’re trying to understand how to use novel technologies, though, evaluation can be an important part of the learning process. After all, it’s our opportunity to understand technologies ‘in the wild’; and our ‘wild’ is often very different from that of the business or government settings from which many technologies originate. How, then, can we organise evaluation as a learning process, in this case, about technology?

As Andy Dearden argued in the introduction to a PRADSA hotseat a few weeks ago, ‘Value in social action … is delivered not by technology alone, but by networks of people working collectively … supported by and communicating through technology’. This raises two difficult questions for evaluation:

  1. what do we mean by ‘value’? The objectives of social action are not usually reducible to an economic bottom line, and I suspect most of us would try to resist doing so. Value will frequently be defined locally, and defined in multiple ways.
  2. how can we disentangle outcomes due to a technology from those which are attributable to other factors (for example, pedagogic innovation in a learning project)?

If we aim to use evaluations to learn from each other’s experiences, then we need ways of answering these questions. Otherwise, we will limit the wider usefulness of our evaluations. As Clodagh Miskelly and I have evaluated projects in very different contexts, we thought it would be useful to explore the lessons and tensions together, and with the wider social action community.

Steve Walker's bio

Steve is a Senior Lecturer at Innovation North, Leeds Metropolitan University, where he is also a researcher in the Centre for Social Innovation. He has worked with ICT and social action organisations as both a technology practitioner and a researcher since the mid-1980s. He has led the evaluation of a series of large-scale projects developing the use of ICT for learning in European trade union education, led by the ETUC (see Dialog On Project Evaluation Report[1] and TRACE Project Evaluation Report[2]). Steve was a co-investigator in the Technology and Social Action project, and is a co-investigator in the PRADSA project. Steve is making a rather feeble attempt at blogging at http://stevewalker.wordpress.com


[1] http://stevewalker.files.wordpress.com/2007/12/doevaluationreport.pdf

[2] http://stevewalker.files.wordpress.com/2008/01/trace-evaluation-report-final.pdf

Clodagh Miskelly's bio

Clodagh Miskelly is an independent researcher and evaluator and works in participatory media facilitation and training. She was previously a research associate at the Community Information Systems Centre at the University of the West of England where she also completed her PhD. Her current roles include Network and Knowledge Co-ordinator for Creative Exchange (http://www.creativexchange.org) and content producer for the award-winning ‘L8R’ (http://www.l8r.uk.net) for Hi8us South (having previously spent 4 years as the evaluator of L8r).

Clodagh has a background in the evaluation of community, voluntary and public sector projects, specialising in projects involving participatory media in the education and youth sectors. These include L8R, an online participatory drama resource used in schools and youth projects across England, and the pilot of the Loop (Mental Health Media), a web-based resource using digital stories produced by young people to inform other young people about issues relating to mental health. Her work with Creative Exchange also involves managing evaluation processes on projects concerned with the role of arts and culture in development. Most of Clodagh’s work involves projects that struggle to demonstrate their value in quantitative terms, and where value and learning can be difficult to evidence using ‘traditional evaluation’ methods. She uses qualitative and interpretive approaches to evaluation which place emphasis on surfacing the unexpected and unintended outcomes of a project, as well as considering how well it might have met its initial aims, and which prioritise the views and needs of participants (or ‘beneficiaries’). She favours approaches which emphasise generating learning from a project and its participants in order to inform further development, rather than declaring success or failure against initial project plans and objectives.

Leonie Ramondt's picture

Hi both, my experience of evaluation is that it has the potential to provide really useful feedback for a project's team, though wading through the reams of conversational data to analyse it can be a huge task. Any tips?

Clodagh Miskelly's picture

How can we organise evaluation as a learning process about technology and social action?

Steve raised three big questions in his introduction:

  • How can we organise evaluation as a learning process, in this case, about technology?
  • What do we mean by ‘value’ in evaluation?
  • How can we disentangle outcomes due to a technology from those which are attributable to other factors (for example, pedagogic innovation in a learning project)?

Leonie has also raised the usefulness of evaluation for project development, so I thought I might talk a little about my perspective on evaluation as a learning process.

As Steve says, evaluation is often seen as a requirement for financial support. However, it’s important not to dismiss this. Many organisations that are really committed to building evaluation into their work in order to learn and develop their projects still have to meet funders’ requirements in order to survive. A second reality is that they have little in the way of resources for evaluation.

This can be a difficult balance for both the organisation and the evaluator, and one that involves multiple understandings of both value and values. (I’ll come back to that in another post.)

However, I’m wary of focusing evaluation and learning in this context on the technology. While many of the projects I’ve evaluated have had new uses of technology, or new technologies, at their heart, the point of the projects has been some form of social change. To my mind, this should be the focus of the learning process. Technology is considered insofar as it hinders or enables those goals (or both, depending on the context).

What’s more, I think that treating the evaluation as an evaluation of technology can actually disguise how that technology is being used.

For example, a couple of years ago I was evaluating an online drama project used in UK schools and youth projects to support young people in addressing life-skills choices. The project involved a mix of DVD-based episodes (traditional tech?) and activities, discussion and so on through a website (the newer uses of technology). At times there wasn’t a great deal going on on the website, given the number of schools using the resource. (This may not be a great surprise to anyone who has grappled with ICT facilities in UK schools and youth projects.) A focus on the use of the website suggested that the project wasn’t really reaching young people. However, when I spent time in schools and youth projects I saw a completely different picture. Most of the places I visited didn’t have sufficient internet access to allow young people to spend time online, so they adapted the resource to suit their context.

One youth project had over 60 kids showing up to see the episodes. They couldn’t accommodate them on their small number of computers, but they did have rich group discussions with kids who previously wouldn’t even show up to the project. They didn’t have the resources to use the new aspects of the technology, so they stuck to the DVD, but this was still supporting young people in considering their life choices.

Another project elected a few people to do research on the website and then used that research to create their own dramas. They knew the issues and discussions on the site in great detail, but their involvement was invisible if you focused on their technology use rather than on their opportunities to address life skills.

Andy Dearden's picture

Is a part of the problem about what we are trying to evaluate, for what reason, and for whose benefit?

The link between a technological intervention and the (social) outcomes that motivate social action will always be indirect, mediated by many other factors. Disentangling the effects that our intervention as technologists has had is going to be complex. This will be more difficult where the technology is used to support 'back-office' activities, as opposed to interventions where technology plays a key part in the social intervention itself.

So let's think about the readers of our evaluations.

If we are accounting purely to funders to demonstrate that the funds were spent in a 'reasonable' and 'responsible' manner, then we will go for one form of evaluation, and detailed qualitative approaches are likely to be useful. But we still need to consider what the opportunity costs are of having done this project with IT instead of other (non-IT centric) projects.

If we are trying to improve our own performance as practitioners in future projects, then we may need to develop some general frameworks and comparative frameworks, as well as being open to the specifics of individual projects. We can compare case studies and try to identify 'critical success factors' (not that I like working with CSFs) in our own practice. Ideas like design patterns may be a relevant format.

If we are trying to create general lessons that can justify major investments (e.g. proposing that the voluntary sector increase its average annual IT spend) then we will be required to be even more systematic. In this situation, is it necessary to develop a range of quantitative instruments to supplement the qualitative? If so, what should those instruments look like?

Hannah Beardon's picture

This conversation is provoking a response in me, but I am not sure what it is... On the one hand, I agree with Andy that you need to be clear what you want to do with an evaluation. On the other hand, I am not sure that evaluations should be undertaken with a set purpose/objective in mind... so you have to be careful. Evaluations are always subjective and partial, and a lot is based on impressions and hunches. Especially when we are dealing with social change, which is not linear or easily quantifiable... and ripples out in ways you could not ever foresee, or fully capture.

I mean, I recently facilitated a reflection with various stakeholders in a long-term project in Colombia, and had to choose the story I wanted to tell, based, I suppose, on what I thought would be the most relevant and interesting for those managing this type of project (which is where Andy's comment comes in). I could have done about three different PhDs based on the leads I got as to how social change translates into reality there, in different contexts and different dynamics. For example, how different types of relationships form between those rising up as agents of social change and more traditional leaders (collaboration, competition etc.) in areas run under indigenous councils and areas under mainstream government... So I had to limit what I asked and heard while there, to get a fuller (but still very partial!) picture, and then limit what I told as well... partly for space and time issues, readability etc. and to get some points well heard. This process was a lot more honest and open about the role of an evaluator/facilitator, and therefore these choices and subjectivity issues were aired and shared. However, it was still a real challenge because I was asked for more objective information about scale, for example, which I felt gave a different impression to the story I was trying to tell.

Ramble ramble... but perhaps my point is that it is important to recognise that the evaluation process is very limited, but also very important for informing certain future decisions and learning, and that a lot of decisions need to be made during the evaluation process which perhaps should be made in a transparent and open way.

Clodagh Miskelly's picture

Hannah, your description of the evaluator as a sort of distiller is one that's very familiar to me.

(my metaphor is going to stumble now, as I don't know enough about distillation...)

I think it's often a bit like being a bottleneck or some kind of filter, and that can have the benefit of filtering out a nice clear story that gets certain important things across, or it can equally be a process of obscuring.

It's definitely partial and subjective, and I suppose the way to highlight what is filtered out, and how, is to identify all the different filtering stages: some of which might be required by those who asked for the evaluation, and others which are habits, ad hoc compromises and politically or ethically informed decisions by the evaluator. Or maybe... a bias towards certain kinds of methods, and a lack of feedback or data on a certain aspect, might also shape what comes through...

Anyway, in an ideal world... I think acknowledging and making plain how the evaluator comes to tell a particular story should sit alongside as open and full an evaluation process as possible. By which I mean that, rather than tailoring an evaluation to a tight set of objectives or a particular purpose, it should be a process where a picture, or lots of different pictures, are progressively built up and open, wide-ranging reflections are shared, which can then be used as a body of evidence (I can't think of a less stuffy phrase right now) from which an evaluator (or others involved in the project) can draw out different stories/evidence to address different interests (such as Andy's different suggestions above).

Of course this kind of approach has the danger of being a bit of a lucky dip.

Apart from (and related to) the partiality of the stories we tell and the stories people ask us to tell, there are plenty of tensions in this.

One issue is scale: open approaches to evaluation are manageable at community level... but what about the level at which Steve works, with international organisations?

A key issue is the reality of funding requirements, which are often at odds with what different participants in a project see as its value.

I've recently been party to a whole series of discussions around evaluating/making the case for/evidencing the role of culture and/or arts in social change, mainly in relation to HIV prevention (and using technologies, but those of puppetry, dance, etc.). In one workshop, a large group of participatory arts and media practitioners shared their different approaches to documenting their work, from which they were able to evaluate if and how their work was making any difference in the communities in which they work. These practitioners were critical of their own practices and discussed the limitations. They also had countless examples of the successful work they had achieved within communities and groups to support discussion, address taboo subjects, share information and so on.

However, most of this work, and the documenting and evidencing of it, was about working closely with small numbers of people within communities, about subtle shifts in attitude, and about having the information to make informed choices. The funding they were receiving, and therefore the objectives they were required to meet, came from an entirely different worldview. In some cases this was based on a pyramid model of the numbers of people they 'reached' (for example, 200 people watched a video), and it often measured success in terms of the spreading of a particular set of behaviours (abstinence, be faithful, wear a condom) which were often neither realistic nor appropriate in the context.

The experience of this group (and my own experience in other contexts) was that the stories they could tell about their work, and the documentation/evidence they could produce, could never relate to the stories their funders wanted to hear or the form in which they wanted to hear them.

This is, I think, a common experience, and to illustrate it I often use this perhaps overly simplified example from sex and relationships education in the UK. Some funders are very focused on information and on reducing teenage pregnancy. They want to know how many young women learned how to use a condom. A measure of success might be that condoms were demonstrated in 200 classrooms. But knowing what a condom is doesn't mean that a young woman will be assertive enough, or sober enough, to get a man to wear one. Similarly, another longer-term measure of success is a reduction in teenage pregnancies in a particular area. The assumption is that all teenage pregnancy is a bad thing and a mistake, and that to a large degree it can be reduced through sex and relationships education. However, a major study with young mothers in the UK recorded that many of these young women saw having a baby as a constructive life choice which was better than many of the other opportunities that seemed to be available. So perhaps it wasn't about knowing how to use a condom; it was about aspiration, economic and social opportunity, and other forms of learning and education. Perhaps the problem lies not in having babies but in not having any support.

This is where some of the tricky issues come in for the evaluator, and where it really depends who is evaluating. I see and interpret the evidence in the above example in a different way to other evaluators, because I have a certain view and experience of learning and social change. I can write the story that the funders asked for, but I can also tell them my version of the story too. And in that respect, when I work as an evaluator I have a certain degree of power and responsibility.

.... and I've drifted very far from the focus of design for social action... so I'll stop here and hope that maybe Steve can do some distilling and bring us back on track ...

Ann Light's picture

Continuing on distillation:

I'm struck by how evaluation by participants often starts in narrative, based in reflection and the experience of change, while external evaluators looking for impact tend to see it in terms of metrics (because metrics are nice to have to win arguments, but it's all in the wording anyway), and the best evaluators are grappling with a third path.

If a project is being evaluated for stand-alone success, then stories of change are the highest form of outcome, but I guess a lot of money in all our worlds is given for pilots and research programmes, both of which have not only to demonstrate success, but also scalability, sustainability or repeatability, which requires a further stage of distillation. So, not only will evaluation differ depending on 'what we are trying to evaluate, for what reason, and for whose benefit' and who by, but in my experience there are two classes: 'did it succeed in doing what it meant to and is now finished?', which can be coded as metrics though not necessarily helpfully, and 'what are the sustainable/transferable lessons so that we can do more/better?', which isn't a metric question though it may not be terribly narrative either. So there is a 'what' and a 'how' question to answer. Indeed, part of the 'what' question may be whether one has adequately worked out the 'how'. Is this what others have found too?

This also relates to the matter of whether it is the effect or the actual practice that needs to be scaled up, which depends on understanding the significance of the original context(s) and how it compares with further contexts. This is something of a new research project in itself, but perhaps it comes within the evaluation brief. Steve W has been talking to us about 'underlying mechanisms', and that seems very pertinent.

-- Ann

Hannah Beardon's picture

What is the question? Evaluation is such a difficult thing to pinpoint: the process of distillation, the angle and interests of the evaluator, and the experiences or ideas which can make sense of the information and perspectives given... as well as the motives and needs of the commissioners/project management and other stakeholders. There is also a lot of people telling you what they think you want to hear, etc...

I suppose what I, and it seems Clodagh and Ann, are wanting is for the different actors in an evaluation to talk honestly and openly about these sorts of things throughout the evaluation process, from selecting the evaluator to using the report. If we recognise that the process is entirely subjective (who is the filter, what they are filtering or not, and what they think important or useful to relay, for a start) and will throw up some interesting lessons and insights, but not inform on everything, then how can people commissioning evaluations prioritise what they want to get out of it, and how do those priorities translate to action points/selection criteria/ToRs etc.?
