In the large international projects where many qualitative researchers work, generating qualitative Big Data, data sharing represents the status quo. This is rarely acknowledged, even though the ethical implications are considerable and span both process and product. I argue that big-team qualitative researchers can strengthen claims to rigor in analysis (the product) by drawing on a growing body of knowledge about how to do credible secondary analysis. Since this necessitates a full account of how the research and the analysis are done (the process), I consider the structural disincentives for providing such an account. Debates around credibility and rigor are not new to qualitative research in international development, but they intensify when new actors such as program evaluators and quantitative researchers use qualitative methods on a large scale. In this context, I examine the utility of guidelines used by these actors to ensure the quality of qualitative research. I ask whether these offer pragmatic suggestions to improve its quality, recognizing the common and hierarchized separation between the generation and interpretation of data, or conversely, whether they set impossible standards and overlook the differences between, and respective strengths of, qualitative and quantitative research.
- secondary qualitative data analysis
- computer-assisted data analysis