Coding and Analysis
My coding process followed that of thematic analysis in that both semantic codes, which hold surface and explicit meanings, and latent codes, which hold implicit and underlying meanings, were used (Braun & Clarke, 2021). Most of the codes captured one idea or facet, with potentially multiple codes attached to one statement in the transcript, artifact, note, memo, or social media element (Braun & Clarke, 2021). Each interview was coded within one week of completion to ensure I had a clear memory of the event. All interviews were re-coded in Phase Three of the research in order to bring codes, memos, and notes together at the same time, with the intention of seeing how themes would emerge.
Rereading Braun and Clarke (2022), I was reminded that themes do not emerge, they are generated from the data. Thus, themes are constructed from the codes “like multi-faceted crystals – they have a core, an ‘essence’, which is evident through different facets, each presenting a different rendering of the ‘essence’” (Braun & Clarke, 2021, p. 208). It was through this insight that I realized I needed to revisit the crystallization methodology as a way through the messiness of the codes to crystallize the core findings generated from the data entanglements in which I was mired.
As previously mentioned in the data gathering section, the transcript texts from the interviews were imported into the WordArt word cloud generator. I recursively reviewed and revisited the word cloud images and curated them into a collection, thus providing a quick way to glance at differences or commonalities occurring among the participants’ lived experiences. In this way I engaged in the crystallization of understanding, since recordings and digital artifacts “offer lively and intriguing options for making, assembling, and becoming qualitative data” (Ellingson & Sotirin, 2020, p. 33, emphasis in original).
I returned to the concept mapping tool Draw.IO to bring ideas and conceptions into focus. I looked for examples of lived experiences of MDL in OEPr as evidenced in the interview transcripts, observational notes, and word cloud images. I exported and revised the codebooks from the coding of each interview done in NVivo. These provided a record of the evolution of my coding skills and the changes in the data set as each interview was coded, but also became data moments worth gathering. I addressed changes in my growing confidence level as an emergent issue, since I was coding differently over time. At the beginning of Phase Three I reviewed and re-coded the interviews in NVivo, as well as the memos and notes documented in the interview transcripts.
Vagle (2018) suggested a whole-part-whole sequence for data analysis that I followed for each interview. This included: (1) a holistic reading of the full text to become “attuned to the whole material-gathering event” (p. 110); (2) a line-by-line reading while note taking, adding marginalia, and journaling; (3) writing follow-up questions; (4) a subsequent line-by-line reading to examine meanings and extract excerpts, thus creating a new data moment from these gathered texts; (5) a third line-by-line reading focusing on analytical thoughts; and (6) additional readings as needed to reveal and name the emergent patterns, themes, and meaningful units across and amongst the participants’ collective data (Vagle, 2018). Within this process, I applied multimodal media making and creative constructions to enhance the potential of opening new lines of meaning and understanding, of seeing what frames my seeing (Lather, 1993). Even the patterns that I detected from the memos, notes, and visualizations were subject to categorization and coding (Saldaña, 2016). In my subsequent deep readings I generated code memos and themes reflective of the participants’ “routines, rituals, rules, roles, and relationships” (Saldaña & Omasta, 2018, p. 15).
As mentioned, following this period of active coding and review, I paused to take time to look at the whole data set gathered for trends and themes. Themes were elusive in the volume of data gatherings I examined, so I drafted a preliminary sketchnote to pull ideas together (see Figure 17). This was followed by an early version of a concept map where codes and connections were explored (see Figure 18). A coding chart description was also created to consolidate an understanding of each of the codes from the data, including an applicable example from the data gathered (see Appendix H).
Although the exact coding techniques and strategies were generated from the data and the research design, I was aware of essential skills and attributes that supported my coding process. Saldaña (2016) identified personal attributes that qualitative researchers should possess – organization, perseverance, the ability to deal with ambiguity, flexibility, creativity, ethical rigor, and an extensive vocabulary. These supported the cognitive skills of “induction, deduction, abduction, retroduction, synthesis, evaluation, and logical and critical thinking” (Saldaña, 2016, p. 338) required of qualitative researchers. Despite the extensive moments of ambiguity and uncertainty, it was my knowledge of these personal attributes and cognitive skills, in relation to my own skills and abilities with research and MDL in OEPr, that gave me some measure of confidence in my coding process.