This case study describes a research project employing a sequential explanatory mixed-methods design to investigate teachers' motivations, expectations, values, and teaching practices resulting from their participation in an innovative Professional Development program in the United States. The Professional Development program, called Research Experience for Teachers, is based on a cognitive apprenticeship model that supports teachers' active learning in an authentic context and uses inquiry-based learning strategies.
Specific methodological problems arose in the course of the research. The case study describes particular challenges in carrying out a study designed around the Professional Development program's features, as well as issues related to study design, measures, and the collection of data capturing participants' retrospective accounts of their Professional Development experiences.
By the end of this case, students should be able to
- Understand the importance of the Professional Development (PD) features in driving the research questions and the research design
- Assess the challenges related to validity and reliability when designing study measures (e.g., a survey, an interview protocol)
- Address the issues related to balancing quantitative and qualitative data collection in an explanatory mixed-methods design
- Critically examine issues related to data collection measuring participants’ retrospective views about their PD attendance
The research project described in this case was carried out for 1 year and tracked teachers who participated in a Professional Development (PD) program called Research Experience for Teachers (RET). The research project was housed at the National High Magnetic Field Laboratory (MagLab) at Florida State University in the United States. The MagLab is the largest and highest powered magnet laboratory in the world, hosting approximately 300 scientists from various fields.
I was in my fourth year as a PhD student when I was hired as a graduate research assistant at the MagLab to carry out a research study about teachers' experiences in a RET program. As a research assistant, I was given a lot of flexibility in designing and carrying out the study, which was very exciting for me. I had the necessary resources and support from the director of the project, but I was given the task of thinking through a study design, implementing it, and leading (as first author) the research article describing the study findings. Such an opportunity is uncommon for a graduate student. However, I had done a lot of reading in my research area and had gained some research experience up to that point, and this was valuable knowledge I could rely on. The research project that I designed (with the help of co-authors) was funded by the National Science Foundation (NSF) and investigated teachers' motivations, expectancies, values, and instructional changes due to their involvement in a RET program. The RET programs are hosted annually by Florida State University during the summer and are designed to promote teachers' active learning using inquiry-based strategies in order to support quality science teaching at all grade levels. The RET programs are based on a cognitive apprenticeship model (Dixon & Wilke, 2007), which promotes learning in collaborative settings. In a cognitive apprenticeship model, teachers collaborate with a mentor scientist and work together on a project in an authentic environment, such as a science laboratory in a research center (Pop, Dixon, & Grove, 2010). This type of PD program provides long-term involvement (typically 6 weeks) and gives teachers the opportunity to learn science from a mentor scientist in a laboratory setting.
In the research project in which I was involved, the process of selecting teachers for the RET program starts about a year in advance. An invitation to attend the program, containing details about the duration, content, and purpose of the RET program, is advertised via the main channels of professional communication (i.e., the university website, professional organizations' websites, educational conferences). Teachers of all grade levels (i.e., elementary, middle, and high school) across the United States apply for the RET program during the application window, and a committee from the university selects the participants among the applicants based on a set of established criteria. Once selected, the teachers are in the program for 6 weeks during the summer. Each teacher is paired up with another teacher, and then both are paired with a scientist. Traditional RET programs place one or two teachers in a laboratory working with a scientist in this cognitive apprenticeship model to provide two layers of support in their learning: (1) support from a peer teacher and (2) support from a more advanced peer, the scientist. The RET program where this study took place usually hosted 10-17 teachers collectively at the national research laboratory, where 5-10 scientists were regularly assigned as mentors for these teachers (one or two teachers per mentor scientist). At the end of the PD program, the RET participants were required to present their work, usually a research product, in a formal public presentation. Teachers' final products reflected a variety of interests and individual research experiences in physics, chemistry, geochemistry, and optical microscopy.
The purpose of the research study was to track all teachers who attended a RET program at the MagLab anytime during the 7-year period investigated (from 1999 to 2006). A total of 90 teachers from all grade levels (i.e., elementary, middle, and high school) participated in the RET program during the 7 years examined in the research study. The RET participants were teachers from different schools around the United States and varied in their demographics.
The RET program has been very successful throughout the years at the MagLab and quickly became very popular among teachers (teachers who attended the program would usually encourage others in their school to attend). However, the research study that I designed to capture teachers' past experiences in the RET program faced a few methodological challenges. The main issues can be summarized as follows:
Because the pool of past RET attendees was small to begin with, the sampling of the research study was constrained. Of the 90 teachers who attended the RET program from 1999 to 2006, I was able to find and contact 73. An invitation to participate in the study and complete an online survey was sent to all 73 teachers. Of these, 67 completed the online survey (a response rate of 91.8%), and a small number (8.2%, n = 6) chose not to take the survey. Such sampling issues (i.e., a small sample size) can affect the validity and generalizability of the quantitative results.
At the time this research was carried out, no published studies had investigated teachers' motivations, expectations, and changes in teaching practices due to RET attendance. Moreover, no studies to that date had compared elementary teachers' views of the RET with those of middle and high school teachers. Thus, when I developed the study, I had to design my own instruments, given the lack of existing measures in the area. The study survey comprised a demographic questionnaire and three additional questionnaires (i.e., Motivation to Attend the RET, Expectations about the RET, and Changes to Teaching Practices) in which participants were asked to rate each item on a 4-point Likert scale (1 = Strongly disagree to 4 = Strongly agree) to indicate their reasons for attending the RET program, their expectations about the program, and changes to their teaching practices due to RET program involvement. To ensure validity, the content of the survey was analyzed by several teachers and educational researchers, and the survey items were piloted on a small sample (about 10 teachers) before full implementation in the study. However, since the study sample size was relatively small (N = 67), a validation study was not possible. A larger sample size is necessary for validation studies (e.g., over 300 participants for both exploratory and confirmatory factor analysis). Descriptive results were reported from this research study, with a caution to readers about the generalizability of findings.
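For a newly designed Likert-scale questionnaire like the ones described above, internal consistency is commonly summarized with Cronbach's alpha. The sketch below, in Python with invented pilot data (the case itself reports no code, and these item responses are hypothetical), shows how alpha can be computed from a respondents-by-items matrix:

```python
# Illustrative sketch only: Cronbach's alpha for a Likert questionnaire.
# The data are invented pilot responses, not the study's actual data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of ratings."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 6 respondents x 4 items on a 1-4 scale
pilot = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 3],
    [2, 2, 1, 2],
    [4, 3, 4, 4],
    [1, 2, 2, 1],
    [3, 4, 3, 3],
])
alpha = cronbach_alpha(pilot)
print(f"Cronbach's alpha: {alpha:.2f}")
```

Values of roughly .70 and above are conventionally read as acceptable internal consistency, though such thresholds are debated and, as the case notes, reliability alone is no substitute for a full validation study.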
The study explored participants' views about the RET program, and both measures, the survey and the interview questions, thus elicited participants' retrospective accounts of the program. For some participants, their RET experiences were more recent than for others, depending on the year they attended the program. Thus, their views about the program (i.e., motivations for attending, expectations, values, and the types of changes they made to classroom practices) could be altered by memory faults.
The research study employed a sequential explanatory mixed-methods design (Creswell & Plano Clark, 2007; Creswell, Plano Clark, Guttmann, & Hanson, 2003) and was conducted in two phases. Both quantitative data (a survey in Phase 1 of the study) and qualitative data (interviews in Phase 2 of the study) were collected (see Appendix A). An explanatory mixed-methods design consists of collecting quantitative data first and then follow-up qualitative data in order to help explain, or elaborate on, the quantitative results (Creswell et al., 2003). The sequential explanatory design is characterized by the “collection and analysis of quantitative data followed by the collection and analysis of qualitative data” (Tashakkori & Teddlie, 2003, p. 672). The two strands, quantitative and qualitative, are then integrated during the final phase of interpreting the study findings, where they complement each other in explaining the overall results.
The research questions addressed by the study were as follows: (1) Who attends the RET program? This research question aimed at describing the population that attends RET programs in terms of gender, age, teaching experience, background, the types of PD prior to attending the RET, and the frequency of PD involvement per year. (2) In what ways do elementary education teachers differ from the middle and high school teachers with respect to their motivations and expectations about the program? (3) In what ways do elementary teachers differ from the middle and high school teachers with respect to implementing changes to their teaching practices due to the RET program attendance?
In the first phase, online surveys were administered to all participants (N = 67) about their RET motivations, expectations, and changes to teaching practices. In the second phase of the study, telephone interviews were conducted with selected participants (N = 12), four teachers from each grade level (i.e., elementary, middle, and high school). Interviews were transcribed verbatim, and pseudonyms were assigned to each teacher to protect anonymity.
The online survey was administered in Qualtrics, and the collected data were exported to Excel. The Excel file was then converted to an SPSS file, and the data were analyzed in SPSS. Quantitative analysis consisted of descriptive statistics (i.e., counts, frequencies, means, standard deviations), comparative analyses (i.e., analysis of variance [ANOVA], t-tests), and correlations.
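The same analyses can be illustrated in open-source tools. The study used SPSS; the Python sketch below uses simulated Likert-style scores and hypothetical group sizes purely to show the shape of a descriptive-plus-comparative analysis like the one described above:

```python
# Hypothetical sketch of a Phase 1-style analysis: descriptive statistics
# and a between-group comparison. Data are simulated, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated 4-point Likert composite scores for two teacher groups
# (group sizes here are invented for illustration)
elementary = rng.integers(2, 5, size=33).astype(float)
middle_high = rng.integers(1, 4, size=32).astype(float)

# Descriptive statistics: counts, means, standard deviations
for name, group in [("elementary", elementary), ("middle/high", middle_high)]:
    print(f"{name}: n={len(group)}, M={group.mean():.2f}, SD={group.std(ddof=1):.2f}")

# Independent-samples t-test for two groups; with more than two groups,
# stats.f_oneway would give the one-way ANOVA the study also reports.
t, p = stats.ttest_ind(elementary, middle_high)
print(f"t = {t:.2f}, p = {p:.4f}")
```

With the very small cell sizes a study like this yields, checking the assumptions behind these tests (normality, equal variances) matters as much as the tests themselves.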
The transcribed interviews were the main qualitative data for the analysis. Two coders developed a coding scheme, which was gradually tested and revised. Once the coders reached complete agreement on the coding scheme, all interviews were coded. The main themes from the interviews described participants' experiences in the RET program, such as their motivations for attendance, expectations, values associated with RET program attendance, and changes they subsequently made to their teaching practices.
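Intercoder agreement of the kind described above is often quantified while a coding scheme is being refined; Cohen's kappa is one common index. The sketch below uses invented codes and excerpts (the case reports that the coders reached complete agreement; it does not report a kappa value):

```python
# Illustrative sketch only: Cohen's kappa between two coders who labeled
# the same set of interview excerpts. Codes and labels are invented.
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders' labels on the same excerpts."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement from each coder's marginal code frequencies
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(codes_a) | set(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in labels) / n**2
    return (observed - expected) / (1 - expected)

coder1 = ["motivation", "expectation", "value", "change", "motivation", "change"]
coder2 = ["motivation", "expectation", "value", "change", "expectation", "change"]
print(f"kappa = {cohens_kappa(coder1, coder2):.2f}")
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why it is usually preferred over a simple match rate when coders iterate toward a final scheme.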
To provide a quick overview, I summarize the key study findings below.
The first research question asked about the demographic characteristics of the RET participants. Our study findings indicated that the majority of the RET participants were elementary education teachers (49.3%), while middle and high school teachers combined made up 47.8%. Of all participants, females were the majority (77.6%) compared to males (22.4%). Participants' ages ranged from 19 to over 46 years (those over 46 years old constituted 37.3% of participants; 21% were between 26 and 30 years old, 15% were between 36 and 40 years old, and 3% were between 19 and 25 years old). Also, participants' teaching experience ranged from 0 to over 30 years; the majority of participants had 0-10 years of teaching experience (49%), and 24% had 11-20 years of teaching experience.
The second research question aimed at describing in what ways elementary education teachers differ from middle and high school teachers regarding their motivations and expectations about the RET program. The study findings showed that overall there were no significant differences between the two groups of teachers in their motivations. Both quantitative and qualitative findings showed that participants generally were driven by intrinsic motivators (i.e., learning new ideas, professional growth, helping their students succeed academically). However, significant differences were found with respect to teacher expectations: middle and high school teachers' expectations were related to professional networking, whereas elementary education teachers' expectations were related to a better understanding of how to infuse an inquiry-based teaching approach into their instruction.
The third research question aimed at describing in what ways elementary education teachers differ from middle and high school teachers regarding their instructional changes due to RET program attendance. Study findings revealed significant differences between the two groups; elementary teachers reported (both quantitatively and qualitatively) that they made more instructional changes than middle and high school teachers after attending the RET program. Such changes included using a more student-centered instructional approach, more hands-on activities, more collaborative activities, and more applied science in their teaching.
Besides learning about the study findings, which answered the research questions, I also learned a lot about conducting a study. These lessons were ‘reminder notes’ for my future studies, and I’d like to share some of my thoughts with other researchers (or soon to be researchers) about the process of designing and implementing a mixed-methods study. Here are some practical suggestions:
- Learn about your study context. Knowing your study context helps you design and carry out the research project. If your study is designed around a PD program, as mine was, make sure you understand the components, structure, and goals of the PD program. If you have a chance to participate in the PD program yourself, or to observe the trainers delivering the PD, this is a great advantage both in designing your research study and in understanding your findings. Before designing my research study, I had the opportunity to observe how the RET program was delivered for one summer session (6 weeks). During this time, I observed a few workshop sessions delivered by the PD trainers, got to know the teachers attending the program, talked to trainers and scientists, and asked questions about the program. This helped me better understand the program's features and the practical implications of my study findings for the PD developers as well as for the teachers.
- Know your area of research. As I mentioned previously, before embarking on this study I had done a lot of reading in my area of research (i.e., teaching motivation, expectations, values, changes), and I had a few prior research experiences in this area. However, my knowledge of RET programs and of similar PD programs employing a cognitive apprenticeship approach to learning was limited. So, I gathered all the information I could about different RET programs before even starting to design the study. I read the NSF reports on RET programs, learned about the different types of RET programs, and reviewed the existing research with RET participants up until that point, as well as studies of similar PD programs.
Reading as much as you can in your area of research before you start designing your study can help tremendously. Be knowledgeable about the seminal articles as well as the most recent research findings in your field, and about the practical implications of such research. Knowledge in your area of research will help you design your study. You might find similar research designs that you can draw from or, even more importantly, validated instruments that you can use in your research design. Using validated instruments is key to establishing the trustworthiness of your findings.
- Use validated instruments. Since the motivational constructs of interest (i.e., motivations, expectations, values, and changes) had not been investigated quantitatively in the context of a RET program, I had limited sources to draw from in designing my study. Thus, I designed my own instruments (surveys and interview protocols) with the help of co-authors and piloted them on a small sample of teachers. Ideally, a valid, existing instrument is the best option when designing a new study. However, because a validation study was not possible with our sample size, the newly designed instruments were used in the study. Fortunately, the reliability scores for all instruments designed in our study were good. Still, my strongest advice to anyone designing a quantitative or mixed-methods study is to use a valid instrument, already tested before your study and published in a research journal. A previously validated measure typically comes with reliability evidence as well. Using instruments lacking validity evidence will raise questions about the validity of your study results.
- What to do when you don’t have validated measures available. Again, using valid, reliable instruments, already published in major research outlets, is the best option. However, there are cases when you design a study as a pioneer in an area (as I was with my study of the RET participants). In that case, if a validated measure is not available for your study, here are my suggestions:
- The best alternative is to adapt existing validated instruments to your study. If a similar study exists that used validated instruments you can adapt, your adapted instrument already starts from a stronger validity base. Adaptation can be done by changing a few items in the survey, rewording certain items if necessary, and adding or removing survey items.
- If you don’t have similar studies to draw from and cannot adapt instruments, then design your own measures. If you are able to conduct a validation study for the newly designed instrument, that would be great! Make sure you have a large sample size and meet the requirements for validation studies. If, however, you are not able to carry out a validation study prior to using your newly designed survey, then make every effort to ensure validity in your instruments. You can do this by developing the survey items in a team; by asking for feedback on items from other educational researchers, graduate students, or teachers; and by piloting the survey items before full implementation in your study. By doing so, you can strengthen evidence of construct and content validity and increase the chances of good internal reliability.
- Ask for help in designing your interview protocol. When I designed my interview questions, I asked for feedback from my PhD peers (with experience in educational research) and from a few teachers. They helped me revise some of the questions, add a few new ones, and review the protocol overall to see whether it captured everything I needed.
To ensure the dependability, or trustworthiness, of your qualitative results, you can follow principles similar to those for designing a survey measure. In designing your interview protocol, you can draw on previous research: similar studies may have used a protocol you could adapt, or questions you could borrow for your study. You can also ask for help from your peers or from people similar to your population of study (i.e., teachers, parents, students); they can review the questions, provide feedback, and even help develop new questions.
- Practice your interview questions beforehand. Once you have your interview protocol ready and you are happy with it, it’s always a good idea to practice your questions. By practicing, you can hear yourself asking the questions, make sense of what you are asking, and make changes if needed. My first interview in this study was probably the worst. I had not practiced the interview questions beforehand, and listening to the recording afterward, I knew something was not right. Some of the questions were way too long, and I was practically asking multiple questions at the same time. The participant asked me a few times to repeat the question, and I spent a considerable amount of time chunking the question, reading it over and over to the interviewee until it made sense to her. This could have been avoided, and I could have used my time better during the interview. I also had a long protocol and a long interview session; halfway through the interview, the participant was already tired and asked whether she could do the interview at some other time. Luckily, I was able to gently persuade her to finish, and we completed the interview in good spirits after all.
- Practice writing mixed-methods results. Writing up the article once I had analyzed the data brought another set of challenges for me. Designing your study brings one set of challenges; carrying out the study and collecting data can bring another (and joy, of course), but writing it all up could be the final one. The challenge of writing mixed-methods findings is probably described in any research methods book. For me, the difficulty consisted of finding the balance between describing the quantitative results and the qualitative results in an effective way. After several drafts and versions of my article, I was able to write more effectively by describing the findings for each research question: I first described the quantitative results for each research question, followed by the qualitative findings that augmented them. I read various studies using a mixed-methods approach, and these served as models in conceptualizing the study findings. So, reading similar research studies (using a mixed-methods design) and using them as models in your writing can be a time-saver! Also, practice writing the mixed-methods results; this requires constant rereading of your results and discussion sections, editing, and making sure you don’t miss the important message you would like to send to your readers about your study’s relevance (i.e., why is your study important?).
The study built around the RET program’s features aimed at capturing participants’ motivations for attending the PD, their expectations and values about the program, and, most importantly, the changes they made to their teaching practices after attending the RET program. Because there were no other similar studies to draw from, I enthusiastically set up my study to capture as much as I could from the participants, not being entirely aware of the challenges posed by the study sample and by collecting quality data. However, the importance of the study was a driving force for me; teachers’ beliefs and views about the program may influence the ways PD programs are designed and delivered. What teachers believe they can do with new ideas, and how much they value the knowledge gained in a PD program, may indicate the extent to which they make changes in their classrooms. Study findings indicated that the RET program was indeed a catalyst for future PD involvement for all study teachers and contributed to their professional growth. Also, all participants noted the value of the authentic (i.e., immersion-type) experiences, peer collaborations, and deep learning they experienced in the RET program.
My own learning experience from implementing the study was mostly related to finding the balance in collecting both quantitative and qualitative data and in describing the quantitative and qualitative results effectively. Another issue that I wrestled with was related to participants’ retrospective accounts. Reports of people’s beliefs, motivations, and values about their experiences are in fact retrospective accounts. McCorkle (1991) noted that even reasonable, coherent, and captivating accounts are limited by problems of memory and reporting. These reports may be based not on recall but on other cognitive processes, such as knowledge construction. However, despite the limitations posed by retrospective accounts, they still prove valuable in studying individual beliefs and the process of personally making sense of past experiences.
- In the mixed-methods study described here, I employed an explanatory design. Discuss some of the strengths and challenges in using a mixed-methods explanatory design.
- Given the nature of my study (built around the RET program), the sample size was relatively small (N = 67). Discuss other feasible study designs that could be used to investigate teachers’ motivations, expectations, values about their Professional Development (PD) experiences, and changes they made to their teaching after attending a PD program.
- In this study, the data sources consisted of participants’ survey responses and follow-up interviews. What other data sources would you use to ensure triangulation?
- Capturing retrospective accounts of participants can pose challenges for the trustworthiness of findings. What kinds of recommendations do you have for anyone designing a study in which participants’ retrospective accounts are investigated?
Research Experience for Teachers (RET) at MagLab https://nationalmaglab.org/education/teachers/professional-development/research-experiences-for-teachers
Note: From Creswell, Plano Clark, Guttmann, and Hanson (2003).