Participatory methods in mixed methods research – a methodological treasure

The IDS GrOW project (2015-2017) on unpaid care work and women's economic empowerment, currently in its data analysis phase, is taking place in India, Nepal, Rwanda and Tanzania, led by BRAC, IDS and ISST. Since its inception, the team has mixed three strands of research methods – qualitative, quantitative and participatory. This contrasts with the more common options of randomised control trials or qualitative-quantitative mixed methods. So what added value do participatory methods bring to mixed methods research?

This has been a professional challenge for me. While I was used to applying participatory methods (calendars, maps, matrices…) in smaller research projects and in purely participatory research, this was the first time I was doing so in a three-year, mixed-methods, four-country comparative study. Participatory methods had to 'fit' within a broader family of methods, pass the test of standardisation and comparability, and still give their best by keeping their participatory essence.

Valuing participatory methods – a one-day slot in all workshops
Three of the five days in the inception, methodological and analysis workshops were dedicated to qualitative, quantitative and participatory methods. Having a full day for participatory methods sent a clear message: they were not an appendix to qualitative methods. Having this space for manoeuvre in mixed methods research was ground-breaking for me. IDS GrOW Principal Investigator Deepta Chopra was key in championing the idea and thinking out of the box. The research team – quantitative and qualitative leads and country research teams – also embraced the unknown in mixing three strands of methods.

In those early days, we needed to be clear about what the extra workload of participatory tools would add to an already bulky basket of methods (a survey, case studies, key informant interviews and secondary literature). Two complementarities with qualitative and quantitative methods seemed clear from the start – the 'group data' and 'action data' nature of participatory tools. We drew on Robert Chambers' (2007) four-quadrant framework. On one axis, the 'type of data' can be numbers or words, facts or opinions, wide or deep – the territory usually divided between quantitative and qualitative data. On the other axis, the 'collection of data' can be individual or group-based (whatever the type of data), and purely extractive or further used for direct local benefit. Our starting point was that participatory methods are group-based and that we should aim to use them to trigger local actions derived from group data, beyond mere 'data take-away'.
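
To make the two axes concrete, here is a minimal sketch in Python of how a method could be positioned on the quadrants. It is purely illustrative – the method names and labels are mine, not a project codebook:

```python
from dataclasses import dataclass

@dataclass
class MethodProfile:
    """Where a research method sits on the two axes described above."""
    name: str
    data_type: str   # 'numbers', 'words' or 'both'
    collection: str  # 'individual' or 'group'
    use: str         # 'extractive' or 'action-oriented'

# Illustrative placements only; a real method can span several cells.
methods = [
    MethodProfile("household survey", "numbers", "individual", "extractive"),
    MethodProfile("case study interview", "words", "individual", "extractive"),
    MethodProfile("care calendar", "both", "group", "action-oriented"),
]

# The distinctive quadrant for participatory tools: group-based and action-oriented.
for m in methods:
    if m.collection == "group" and m.use == "action-oriented":
        print(f"{m.name}: group data + action data")
```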

Standardising participatory methods without losing their open essence
To allow comparability, all four countries used the same tools (we chose 9 out of 15 possible tools, including a body map, a calendar, a matrix, a public service map and a role play). Training and pilot protocols also had to be consistent. We closed down some of the variables in the tools, while others were left open for local groups to define. This was a conscious trade-off made by the team between comparability and specificity.
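
As an illustration of that trade-off, a tool specification might record which variables are closed and which stay open. This is a rough sketch with hypothetical variable names, not our actual protocols:

```python
# Each tool spec closes some variables for comparability across countries
# and leaves others open for local groups to define.
TOOL_SPECS = {
    "care_calendar": {
        "fixed": ["time_blocks", "probe_questions"],
        "locally_defined": ["activity_categories"],
    },
    "public_service_map": {
        "fixed": ["services_to_probe"],
        "locally_defined": ["landmarks", "symbols"],
    },
}

def is_comparable(spec: dict) -> bool:
    """A tool supports cross-country comparison only if some variables are closed."""
    return bool(spec["fixed"])

assert all(is_comparable(spec) for spec in TOOL_SPECS.values())
```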

Setting participatory role protocols – the facilitator, the co-facilitator and the note taker
While a solid literature exists on the conduct of 'enumerators' in quantitative research and 'interviewers' in qualitative research, we found less on participatory method roles. The protocols we found for 'facilitators' mostly related to workshop contexts, not necessarily to research. We thus agreed on a set of three research roles that, we thought, would fulfil minimal standards of rigour in participatory methods. The facilitator would lead discussions and prevent diversion from the content and research questions. The co-facilitator would support the facilitator by watching power relations between group members and between the group and the facilitator – e.g. overly talkative participants. The note taker would record group (dis)agreements, misunderstandings and potential action points mentioned by the group during the discussion.
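
For readers who like checklists, the three roles could be captured as a simple session check – again a hypothetical sketch, not a tool we used in the field:

```python
ROLES = {
    "facilitator": "leads discussion; keeps to content and research questions",
    "co_facilitator": "watches power relations within the group and with the facilitator",
    "note_taker": "records (dis)agreements, misunderstandings and action points",
}

def missing_roles(assignments: dict) -> list:
    """Return the roles still unassigned for a planned session."""
    return [role for role in ROLES if role not in assignments]

session = {"facilitator": "researcher_1", "note_taker": "researcher_2"}
print(missing_roles(session))  # ['co_facilitator'] -> session not yet ready
```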

Coding controversy
Participatory methods gather 'group data', so it was important to add a code stating whether a particular set of data had been 'controversial' or 'agreed' within a given group. We thus created a 'disagreement' code. While data from interviews and questionnaires give us a general picture of (dis)agreements once aggregated, part of the aggregation in participatory methods happens 'live' during data collection, as we are working with groups. Potential clashes between people over an assertion therefore also happen 'live'. This is probably where the treasure of group data lies – in witnessing controversial data on the spot and being able to explore that energy further.
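
In practice, this meant a group-generated data segment carried the disagreement code alongside its thematic codes. A minimal sketch, with field names that are assumptions for illustration rather than our actual coding frame:

```python
segment = {
    "tool": "care_matrix",
    "site": "pilot_site_01",
    "text": "Some members counted cooking as work; others did not.",
    "codes": ["unpaid_care", "definitions_of_work"],
    "group_status": "controversial",  # 'agreed' or 'controversial'
}

# During analysis, live disagreements can be pulled out and explored further.
if segment["group_status"] == "controversial":
    print("Explore further:", segment["text"])
```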

Coding ‘action data’
During the pilot in India, a care matrix with mothers in a local crèche revealed that they all worked as domestic workers but were not affiliated to a union. A worker from the local partner, the crèche, noted she would put them in touch with SEWA (the Self-Employed Women's Association). Small as it seems, the move could mean a lot to the women. That is 'action data' (there is usually a question asking 'what can be done?' at the end of each participatory tool). Action data can add validity to findings or 'content data', in that those findings 'stretch' to be useful for short-term local actions, independently of later, broader processes such as policy recommendations. We thus set about collecting and coding these action points, giving them value not only as 'actions' but as 'data' in the research process.
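
A sketch of what an 'action data' record could look like, including the prompting caution raised by country research teams (all field names hypothetical):

```python
action_point = {
    "tool": "care_matrix",
    "site": "pilot_creche",
    "action": "put domestic workers in touch with SEWA",
    "owner": "creche worker (local partner)",
    "prompted_by_researchers": False,  # the caution country teams raised
    "status": "proposed",  # proposed -> triggered -> followed_up
}

# Treating the action point as data: it is coded and stored like any other
# qualitative record, and its status can be updated at follow-up.
action_point["status"] = "triggered"
print(action_point["action"], "->", action_point["status"])
```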

The role of local partners proved indispensable for action data
Without strong local partners or leaders, action points lose their action validity: processes are not triggered and the potential success of the mini-actions is not followed up.
 
A different interaction between advocacy and research
Commonly, international and national advocacy requires all data to have been aggregated beforehand. Thus, national advocacy partners (ActionAid, ECCE Alliance and Oxfam in IDS GrOW) often come into play once research partners (BRAC, IDS and ISST) are done with the analysis. Yet the local-level sequence can happen more organically, starting an ongoing collaboration between local advocacy partners, national advocacy partners and national research partners from the data collection phase onwards. That is, action and research can overlap in time locally – e.g. a local advocacy partner following up action points derived from discussions, or a national advocacy partner reflecting strategically on the collection of action points in a zone. For instance, ECCE Alliance runs capacity-building workshops concurrently with the research, together with local-level advocacy and research facilitation partners.

This is work in progress, and we welcome suggestions from others involved in integrating participatory methods into mixed methods research, and on the value this adds. Please leave a comment below and/or email us. Kas Sempere is the participation lead in IDS GrOW. Contact her at k.sempere@ids.ac.uk. With thanks to Shraddha Chigateri (ISST) and Deepta Chopra (IDS) for comments on the blog.

[1] See Chambers, R. (2007) 'The Quiet Revolution of Participation and Numbers', IDS Working Paper 296, section 'Beyond conventional qual-quant complementarities', page 10, www.ids.ac.uk/files/Wp296.pdf

[2] While traditional qualitative focus group discussions converge with participatory methods in that they gather group data, they have not traditionally been used to gather numeric data from groups, nor necessarily to work directly on local action points.

[3] See, for instance, Chambers, R. (2002) Participatory Workshops: A Sourcebook of 21 Sets of Ideas and Activities http://community.eldis.org/?233@@.598f9f60!enclosure=.598f9f5d&ad=1

[4] We were only gathering qualitative data, not numerical data, with the participatory tools. Thus the coding of participatory data followed that of qualitative data (for participatory numbers, it would have followed quantitative data entry processes).

[5] As a word of caution, country research teams warned that some action points could have been prompted by the external research teams – e.g. the role of the state in providing crèches coming up as a result of asking about the state.