Research Phases and Timeline
Phase One included the preparatory work of seeking research ethics board (REB) approval, preparing the informed consent forms, drafting the interview protocol, developing a draft interview schedule, and searching the internet for potential participants. During this phase I conducted one interview with a teacher educator outside the Canadian teacher education context who was familiar to me. As a novice researcher, I used this pilot interview to reflect on the interview process and prompts, and to make adjustments to the interview protocol as part of the REB submission. This first phase ended once REB approval was received (see Appendix A).
Phase Two included a sequence of initial contacts over five months. I aimed to schedule these at least one week apart in order to manage the data-gathering and data-engagement process I had planned. Throughout this phase I maintained both an electronic spreadsheet and a research notebook to track my progress and ensure I followed a consistent sequence with each participant. An introductory email was sent to each participant (see Appendix B-1). Once the TEd agreed to participate, I conducted a web search for information that might be relevant to this research (e.g., publications, course-related information, and social media posts). I recorded this information in a Word document version of my research journal, along with any notes on insights into MDL connections or thoughts for possible inclusion in the interview.
After the initial agreement to participate, I sent out the informed consent information (see Appendix C) along with a video link as a way of introducing myself to the participant and providing information about the research. The interview was then scheduled for a mutually convenient time and the signed informed consent form was collected. I also sent a copy of the interview protocol (see Appendix D), not with an expectation that participants would prepare prior to meeting, but to provide a guide to our conversation. After the first few interviews were completed, I changed the process slightly by sending an electronic calendar invitation that included the Zoom link, so participants could see the event in their preferred calendar software.
Immediately prior to meeting each participant, I reviewed my research journal notes to ensure I was fully prepared for the conversation, and the interview was then conducted. At the end of the interview, participants were asked to prepare a digital artifact using a technology of their choice (text, image, graphic, audio, video) reflecting their MDL and OEPr lived experiences. As suggested by Ellingson and Sotirin (2020), this “participatory data engagement requires exceptional openness to change, to uncertainty and ambiguity, and to attending carefully to how different forms of knowledge emerge” (p. 95).
After the interview ended, the recording was saved to my laptop. The audio file from the Zoom recording was uploaded to Otter.ai and transcribed, usually within one hour of the upload. After downloading the transcription from Otter.ai, I reviewed the document while listening to and watching the recorded interview. This supported making any necessary edits and observational notes. In this way, I re-encountered the data within an agentic and dynamic state (Ellingson & Sotirin, 2020). Although the recordings and transcripts did not materially change (Ellingson & Sotirin, 2020), my engagement with these data shifted to a different moment in time, thus altering my views in subtle and sometimes dramatic ways. Once reviewed, the transcript was saved before I redacted identifying information such as names and geographic references. This redacted version of the transcript was then inserted into the Word Art software. The rendered word cloud image was downloaded as a Portable Network Graphics (PNG) file and stored on my computer. I also created a short screen-cast video of some of the interactive word clouds, which allowed me to detect words I had not noticed in the first viewing.
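The redaction and word-cloud steps above could also be approximated programmatically. The sketch below is a minimal illustration, not the procedure actually used (the study relied on manual redaction and the Word Art software); the identifier list, placeholder text, and stopword set are hypothetical examples.

```python
import re
from collections import Counter

# Hypothetical identifying terms to redact -- illustrative only.
IDENTIFIERS = ["Jordan Smith", "Maple University", "Toronto"]

def redact(text, identifiers=IDENTIFIERS):
    """Replace each identifying string with a neutral placeholder."""
    for term in identifiers:
        text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

def word_frequencies(text, stopwords=frozenset({"the", "a", "and", "of", "to"})):
    """Count content words; these counts could feed a word-cloud renderer."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stopwords and w != "redacted")

# Usage with a fabricated one-line "transcript":
transcript = "Jordan Smith teaches media literacy at Maple University in Toronto."
clean = redact(transcript)
freqs = word_frequencies(clean)
```

Here `freqs` holds the relative word weights that a word-cloud tool would render visually, with identifying terms already removed.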
In the post-interview email sent to each participant (see Appendix B-2), I included links to the transcript, the audio recording, and the PNG of the word cloud image for review and comment (see this curated collection of word cloud images). In this email I reminded the TEds of the second part of their participation: the creation of a digital artifact representative of their lived experiences with MDL in their OEPr. To provoke their thinking, I provided links to media and digital literacy frameworks that could be referenced for this artifact production. A soft due date was set for two weeks post-interview. I also included a digital gift card for a national bookstore chain to recognize their gift of time to this project.
When I examined the artifacts, I delved more deeply into the TEds' lived experiences with MDL within OEPr. This was an opportunity to “focus on analysis and creative representations of participants’ experiences, with consideration of the researcher in a secondary role” (Ellingson, 2009, p. 23). The participants created artifacts in a variety of formats: infographics, a sketch-note, a blog post, a video recording, an interactive story created using Twine, and audio recordings. These digital artifacts revealed a representation of MDL and OEPr in action as a process of becoming. This part of the second phase was a way of “leading to a co-authored understanding of the experience being discussed between the participant and the researcher” (Ranse et al., 2020, p. 6). As mentioned, a spreadsheet and research journal chart were maintained throughout this phase to confirm completion of each task, track progress, and ensure I reached projected timeline benchmarks.
Phase Three included work done after the interview phase was fully complete. During this phase I blocked one week to review all the interview video recordings while reading the transcripts, modelling the whole-part-whole process in P-IP methodology. This allowed me to note connections among and between participants’ stories, as I began to notice trends and commonalities. Immediately following this week-long review, I took time to revisit the coding already completed in NVivo for each transcript (see Table 2) and then created updated coding charts. I revisited the word art collections from the transcripts and created an overarching word art from all the keywords generated by the Otter.ai software. As I conducted a third review of the transcripts, I further redacted the documents to ensure confidentiality, and added notes and memos as marginalia.
The time came to generate unifying codes to discern the overarching research story. I reviewed the codebook within NVivo, combining codes to reduce the list, and provided detailed descriptions (see Table 3 in Appendix H). Once this was completed, I created a graphic rendering of early and emergent ideas (see Figure 17) and a preliminary concept map (see Figure 18) as I attempted to bring ideas and conceptions together. I shared these digital artifacts with critical friends in my PLN. After receiving feedback, I paused my immersion in the data. During this pause I immersed myself in reading and rereading the literature, while also attending and viewing webinars on coding and generating themes. Phase Three ended with a renewed plan for revising themes and organizing quotes for the writing of the findings section of the dissertation.