Using Computer Screen Recordings and Think Aloud Protocols to Study Students’ Cognitive Strategies While Working Online

Abstract

Today, much of students’ researching and writing takes place on computers and the Internet. Many traditional forms of data (e.g., hard-copy writing artifacts) may no longer be sufficient to capture students’ writing activity. This case study shares experiences from a recent study that used screen- and video-recording software (Camtasia Studio 6.0) to research students’ computer- and Internet-based writing. Particular strengths of this method include the ability to capture disappearing or changing data, such as text that is later deleted; the ability to see the correspondence between think-aloud utterances and online activity; and increased engagement and interest on the part of participants and audience members. Limitations and challenges include the fact that hard-copy documents are not easily captured in the recording, the need to balance the volume of data against a workable unit of analysis, and the usual technical challenges that emerge when one works with technology.

Learning Outcomes

By the end of this case, students should be able to

  • Describe how computer-screen recordings can be used to capture data about students’ processes and strategies for working online/on computers
  • Analyze the strengths and weaknesses of this approach
  • Consider how this approach might be used to address other research questions in related areas

Project Overview and Context

Many writing researchers – particularly those in cognitive psychology – are interested in phenomena such as writers’ thoughts, strategies, processes, skills, and behaviors. Data sources supporting such research include think-aloud protocols (e.g., Hayes & Flower, 1980), written artifacts such as notes and plans (e.g., Risemberg, 1996), markings on source documents (e.g., Spivey, 1997), journals (e.g., Segev-Miller, 2007), and observations (e.g., O’Hara, Taylor, Newman, & Sellen, 2002), among others. Much of the existing research has examined students’ use and creation of hard-copy materials (e.g., Kirkpatrick & Klein, 2009).

Research on students’ reading and writing in an online/computer-based context is just emerging (e.g., see Coiro & Dobler, 2007; Haller, 2010; Kiili et al., 2008; Kirkpatrick, 2012; Kirkpatrick & Klein, 2016; Kuiper & Volman, 2008; Mateos, Martín, Villalón, & Luna, 2008; Rouet, 2006; Strømsø, Bråten, & Britt, 2011; Zheng, 2013). At times, researchers will need to use new methods to capture the complexity of such online and on-screen behavior.

The purpose of this research was to look comprehensively at the processes and strategies students use to research and write from the Internet. Data included recordings of students’ computer screens while they worked, think-aloud protocols, written artifacts, and post-writing interviews. This case study focuses primarily on the computer-screen recordings.

Research Practicalities

This study was conducted as the dissertation component of a PhD program in Education, with a specialization in Educational Psychology and Special Education. Data collection took place over several weeks, analysis over several months, and the write-up of the analysis over several more months.

Research Design

The purpose of this study was to identify good strategies for researching on the Internet and writing an essay based on what was read. The participants were 9 very high-achieving grade 12 students (~18 years old), recruited from a high-achieving secondary school in south-western Ontario. The English Department Head at the school was asked to nominate 10 students who would be good potential candidates and to give them letters of information and consent. The students who contacted the researchers to indicate their interest and who completed the forms were invited to participate.

Method in Action

Student participants were asked to write a persuasive essay about what Canada’s policy on the testing of cosmetic products on animals should be. Students were asked to think aloud as they researched and wrote (for discussions of think-aloud protocols, see Ericsson & Simon, 1993, 1998; Smagorinsky, 1998). The program Camtasia Studio (TechSmith Corporation, 2009) was used to record students’ computer screens, students’ think-aloud protocols, and students’ faces. That is, before students began working, the recorder was turned on; from that point on, everything that happened on the screen was recorded. Think-aloud protocols were captured through the same recording program, and a webcam recorded participants’ faces as they worked.

When the recorded file is played back, the screen as the participant saw and used it appears on the researcher’s screen and the think-aloud protocol plays back in synchronized time. For example, if watching a recording of a student doing a search, the researcher would see the student open a web browser, go to a search engine, and type in keywords; at the same time, the researcher would hear the student say, “I’m just going to Google this.” The researcher would also see the participant’s face in a small toggled window (Figure 1).

Figure 1. Screen shot of a recording file being played back.


Students’ writing strategies, and the overall processes in which those strategies were embedded, were identified from the recordings. The other data collected as a part of this project (written artifacts, interview responses) were also analyzed, but are not the focus of discussion here.

Practical Lessons Learned

Benefit 1: Disappearing or Changing Data Are Recorded

One of the difficulties with doing research in an online/electronic environment is that things happen very quickly. For example, a student can run a search, skim through the results, and choose a source in a matter of moments. It would be nearly impossible for a researcher to accurately record (e.g., in field notes) all the relevant details of those moments (e.g., which sources were returned by the search). The screen recorder captures all of these details, such as the search terms used, the results returned, and the results selected (clicked on). The recording can be played back slowly or multiple times, so that the researcher can fully understand the student’s entire process.

A second difficulty of doing research in an online environment is that the environment changes constantly. For example, when searching on Google, Bing, Yahoo, or another search engine, different terms will be suggested and different results will be returned each time one searches. Thus, even if a researcher were able to accurately record a participant’s search terms, there is no guarantee that the researcher could replicate the search as the participant experienced it, because the returned results could change before the researcher had the opportunity to repeat the search (Figure 2).

Figure 2. The same search can return different results. The search on the left was the participant’s; the search on the right was mine, done at a later time.

In addition to searches, websites themselves also change frequently. For example, about a year after this dissertation was completed, 4 of the 22 (18%) sites we provided to students no longer worked at all; the sites students found themselves likely also changed. Even when sites remained active, their content, layout, and internal links often changed. Thus, as with searches, a researcher cannot simply retrieve the same site a participant used and see the site as the participant did (Figure 3).

Figure 3. The same website can change in terms of content, layout, and links. This is the site of the Canadian Federation of Humane Societies, as seen by a participant (left) and me (right).
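
As an aside, this kind of link rot is easy to audit automatically. Below is a minimal Python sketch, offered as an illustration rather than as part of the original study’s procedure; the file name urls.txt is a hypothetical stand-in for a list of study URLs.

    import requests  # third-party library: pip install requests

    def check_urls(urls):
        """Return (url, status) pairs; status is an HTTP code or an exception name."""
        results = []
        for url in urls:
            try:
                # HEAD keeps the request light; some servers reject HEAD, so fall back to GET.
                response = requests.head(url, allow_redirects=True, timeout=10)
                if response.status_code >= 400:
                    response = requests.get(url, timeout=10)
                results.append((url, response.status_code))
            except requests.RequestException as exc:
                results.append((url, type(exc).__name__))
        return results

    if __name__ == "__main__":
        with open("urls.txt") as f:  # hypothetical file, one URL per line
            urls = [line.strip() for line in f if line.strip()]
        for url, status in check_urls(urls):
            print(status, url)

Anything other than a 200-series status (or any exception) flags a link worth re-checking by hand.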

A third difficulty of such research is that participants’ writing processes unfold quickly, and electronic written artifacts do not always reveal the ways in which they were created. For example, as students plan, they often create written notes or outlines. These are sometimes written out of the order in which they appear (e.g., a student might add information to the top of the outline late in the outlining process). A screen recording allows the researcher to track the outlining process. Moreover, revisions to the outline (e.g., structural changes, deletions) can also be captured and analyzed.

As students write their electronic texts, they also write out of the order in which the text appears (e.g., write the introduction last), use mediating tools (e.g., bold or color a section to which they want to return; use temporary sub-headings), and make revisions (e.g., move or delete sections of text). Screen recorders allow these processes to be captured in full. Again, it is unlikely that a researcher could capture these changes quickly enough in hard copy to keep pace with a student (Figure 4).

Figure 4. Participants often wrote text out of the order in which it appears (example at left). Participants often moved or deleted large sections of text (example at right).

In sum, using a screen recorder allows the researcher to capture data about the process of researching and writing in a way that would not be possible in hard copy. Disappearing and changing data are preserved exactly, so that during analysis they can be revisited exactly as participants experienced them.

Benefit 2: Correspondence Between Students’ Thinking and Behavior Is Understood

Think-aloud protocols are used to access participant cognition: what is happening in the participant’s mind during the completion of the task (Ericsson & Simon, 1993). In the case of online researching and writing, the relevant behavior, documents, tools, and products exist largely online and/or on the computer screen. To understand participants’ think-aloud protocols (e.g., “this seems really biased”), it is necessary to understand precisely what prompted each utterance. Because the audio and video data play back in sync, and can be re-watched slowly and numerous times, it is possible to establish the correspondence between thoughts and behaviors, even those that occur briefly or change quickly. Again, it would be difficult to capture this correspondence using field notes or an observation protocol, especially over a prolonged period of time.

Benefit 3: Interest and Engagement

The use of the screen recorder can generate considerable interest and engagement on the part of participants; our participants were curious about how the recordings would be used, and many wanted to see their file replayed for a few minutes when they finished. It is worth noting that participants acclimatized to having the recording running while they worked; the recorder is very discreet, and participants did not appear overly cognizant of being recorded.

The use of audio and video data can also enhance the quality of presentations and publications. Using recorded clips can increase an audience’s interest in the work, and can give the audience a real sense of the work that was done. Every time we presented some of this work, people were interested in the programs used and many had ideas about how they could use such programs in their own work (Figure 5).

Figure 5. Slides from conference presentations (Kirkpatrick & Klein, 2014, at left; Kirkpatrick, 2013, at right). The blue links go to video files or writing artifacts.
Challenge 1: Hard Copy Artifacts Are Not (Easily) Recorded

An obvious challenge of the electronic screen recorder is that it does not (easily) capture anything participants do in hard copy. In the Kirkpatrick and Klein (2016) study on which this case is based, 4 of the 9 participants used hard-copy documents (one printed sources and wrote a hard-copy outline, two wrote both hard-copy and electronic notes/outlines, and one wrote hard-copy notes and a hard-copy first draft). It is difficult for a researcher to record field notes quickly enough to match think-aloud data with hard-copy artifacts. Even if this were possible, the hard-copy artifacts would still have to be integrated into the analysis at some point. It is certainly possible to do this, but it requires some thought (Figure 6).

Figure 6. Sample hard copy materials from 2 different participants.

One possible solution is to use an external webcam to integrate video data of non-computer activity into the recording. Another possible solution is not to allow participants to create hard-copy artifacts.

Challenge 2: Volume of Data/Unit of Analysis

Participants in this study wrote a 1- to 2-page persuasive essay. In post-writing interviews, participants indicated that one difference between the research task and regular school tasks was the length of time on task (the research task was shorter). That is, the research task was somewhat truncated, and yet working with only 9 participants on a relatively short task generated approximately 27 hr of video data. The data were watched numerous times just to get a sense of them and to generate a preliminary set of codes; they were then watched several more times to refine the codes and do the actual coding. A second rater watched a sub-set of the data to establish inter-rater reliability. Thus, working with this type of data is very labor intensive.
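
The case does not report which agreement statistic was used. As one hedged illustration only, Cohen’s kappa for a binary used/not-used coding scheme could be computed from two raters’ segment-level judgments roughly as follows (a sketch under that assumption, not the study’s actual procedure):

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two equal-length lists of 0/1 (not used/used) judgments."""
        assert len(rater_a) == len(rater_b) and rater_a
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement, from each rater's marginal proportion of "used" codes.
        # Assumes expected < 1 (raters are not in perfect chance agreement).
        p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
        expected = p_a * p_b + (1 - p_a) * (1 - p_b)
        return (observed - expected) / (1 - expected)

    # Example: agreement on whether "keyword search" occurred in each of 8 segments.
    print(cohens_kappa([1, 1, 0, 0, 1, 0, 1, 1],
                       [1, 1, 0, 1, 1, 0, 1, 1]))  # -> ~0.71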

A corresponding challenge is determining a reasonable unit of analysis for such a substantial set of data. A reasonable unit could be a few-second occurrence (e.g., a keyword search on the Internet); another could be the entire 3-hr process (e.g., how did students research, plan, draft, and revise, and how did they move between these processes?). Our solution was to divide the data into 5-min segments and, for each segment, to mark each code as used or not used. See Figure 7 for an example.

There were over 20 codes and up to 36 segments (180 min/5 min). This analysis allowed us to see how strategy use differed across time and by participant (e.g., participants started with researching strategies and followed with planning strategies; some participants used rhetorical terms in their searches and some did not). However, it did not allow for a fine-grained analysis of strategy use (e.g., how many searches a participant did in a given minute).
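
To make the segmentation step concrete, here is a minimal Python sketch of how timestamped strategy observations could be collapsed into a binary code-by-segment matrix. The event log and code names are invented for illustration; they are not the study’s data.

    SEGMENT_LENGTH = 5 * 60                       # 5-min segments, in seconds
    N_SEGMENTS = (180 * 60) // SEGMENT_LENGTH     # up to 180 min -> 36 segments

    # Hypothetical event log: (seconds into the session, strategy code).
    events = [(42, "keyword_search"), (95, "skim_results"),
              (610, "keyword_search"), (2750, "outline_revision")]

    # matrix[code][segment] is 1 if the strategy occurred in that segment.
    matrix = {code: [0] * N_SEGMENTS for code in {c for _, c in events}}
    for seconds, code in events:
        matrix[code][seconds // SEGMENT_LENGTH] = 1

    for code, row in sorted(matrix.items()):
        print(code, row[:10])  # first 10 segments shown, for brevity

Each participant then contributes one such matrix, and matrices can be compared across participants or summed across segments to show when in the session each strategy appeared.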

Challenge 3: Technical Elements

The biggest challenge of this project was figuring out how to code the data. In addition to the issue of the unit of analysis, discussed above, decisions had to be made about whether to code electronically or in hard copy. At the time this dissertation was done, qualitative coding software did allow for the coding of video files; that is, a researcher could watch a video and “tag” moments with codes. However, it was not possible to select a code and then have all the instances of that code “cue up” to be scrolled through. Thus, coding might be relatively straightforward, but later analysis might not be. At the time, the ability to “cue up” the clips that had been coded with a given code was in development; it is likely that software programs now exist that allow for this.
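
The “cue up” feature described here amounts to an inverted index from codes to timestamps. A minimal sketch of that idea in Python follows; the class name, codes, and times are all invented for illustration, not drawn from any particular software package.

    from collections import defaultdict

    class VideoCodebook:
        """Tag moments in a recording with codes, then retrieve them by code."""

        def __init__(self):
            self._index = defaultdict(list)  # code -> list of timestamps (s)

        def tag(self, timestamp, code):
            """Record that `code` was observed at `timestamp` seconds."""
            self._index[code].append(timestamp)

        def cue_up(self, code):
            """Return all timestamps for a code, in playback order."""
            return sorted(self._index[code])

    book = VideoCodebook()
    book.tag(42, "keyword_search")
    book.tag(610, "keyword_search")
    book.tag(95, "skim_results")
    print(book.cue_up("keyword_search"))  # [42, 610]

A player that can seek to a timestamp could then step through every tagged instance of a code, which is exactly the scrolling-through capability we lacked.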

Our choice was to print a large coding table for each participant and code manually while watching the videos (see Figure 7). Although this was somewhat cumbersome, it was relatively simple and straightforward in terms of mechanics (e.g., it did not require learning coding software), and it was convenient to lay the coding sheets out beside one another to make comparisons.

Figure 7. Full coding sheet (at right) and typed version of part of a coding sheet (above).

Another possibility might be to transcribe the think-aloud protocols and code them using qualitative software, but then one would face the challenge of matching the think-aloud protocol to online behavior.
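
One way to ease that matching problem is to timestamp both streams and join them by time. The following is a hedged sketch, assuming (hypothetically) that the transcription tool exports utterance start times and that on-screen events have been logged with times as well:

    import bisect

    # Hypothetical timestamped logs (seconds into the session).
    utterances = [(40, "I'm just going to Google this"),
                  (93, "this seems really biased")]
    screen_events = [(38, "opened browser"), (41, "typed search terms"),
                     (90, "opened advocacy site")]

    event_times = [t for t, _ in screen_events]
    for time, text in utterances:
        # Find the most recent screen event at or before each utterance.
        i = bisect.bisect_right(event_times, time) - 1
        context = screen_events[i][1] if i >= 0 else "(no prior event)"
        print(f'{time}s "{text}" <- during/after: {context}')

This recovers, in transcript form, some of the correspondence that the synchronized recording provides automatically.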

Another technical challenge is that the audio quality of the think-aloud protocol has to be very good to be easily understood later. It is worthwhile to do several test recordings in different locations to be sure the device that is used can capture adequate audio. In our work, we decided to use a USB microphone headset to capture the think-aloud protocols.

Finally, learning a new piece of software takes time. It is strongly recommended that any researcher take the time to experiment with the software being used well before the research begins.

Conclusions

The use of a screen recorder enabled research that simply would not have been possible otherwise. The recorder’s ability to capture disappearing and changing data, to allow thoughts to be matched with behavior, and to engage participants made the research project a success. The fact that the method was relatively new, and that clips could be shared, engaged audience members with the research and increased its appeal. The difficulties of working with a screen recorder, including capturing off-screen activities, analyzing a large volume of data, and addressing technical issues, were all manageable and did not detract from the research process.

Exercises and Discussion Questions

  • Think in terms of validity and reliability. What are some strengths and weaknesses of the method discussed?
  • How else could analysis have been done? You might think in terms of the unit of analysis, for example.
  • How do you think the participants in this study affected the research process? Would all participant groups be equally well suited to this design?
  • What are some other screen recording programs? What are the strengths and weaknesses of those programs? How do they compare to one another (e.g., in terms of cost, system requirements, compatibility with different devices, usability, presentation options, and so on)?
  • To what other areas of research could this screen-recording method be applied?
  • How could screen recorders be used in a teaching context?

Further Reading

Bereiter, C., & Scardamalia, M. (1987). The psychology of written composition. Hillsdale, NJ: Lawrence Erlbaum.
Britt, M. A., Rouet, J. F., & Perfetti, C. A. (1996). Using hypertext to study and reason about historical evidence. In J. F. Rouet, J. J. Levonen, A. Dillon, & R. J. Spiro (Eds.), Hypertext and cognition (pp. 43–72). Mahwah, NJ: Lawrence Erlbaum.
Chi, M. T. H. (1997). Quantifying qualitative analyses of verbal data: A practical guide. Journal of the Learning Sciences, 6, 271–315. https://doi.org/10.1207/s15327809jls0603_1
Eveland, W. P., Jr., & Dunwoody, S. (2000). Examining information processing on the World Wide Web using think aloud protocols. Media Psychology, 2, 219–244. https://doi.org/10.1207/S1532785XMEP0203_2
Mateos, M., Martín, E., Villalón, M., & Luna, M. (2008). Reading and writing to learn in secondary education: Online processing activity and written products in summarizing and synthesizing tasks. Reading and Writing: An Interdisciplinary Journal, 21, 675–697. https://doi.org/10.1007/s11145-007-9086-6

Web Resources

The New Literacies Research Lab. http://newliteracies.uconn.edu/

Camtasia Studio screen-recording program. https://www.techsmith.com/video-editor.html

Screencastify screen-recording program. https://www.screencastify.com/

References

Coiro, J., & Dobler, E. (2007). Exploring the online reading comprehension strategies used by sixth-grade skilled readers to search for and locate information on the Internet. Reading Research Quarterly, 42, 214–257.
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data (Rev. ed.). Cambridge, MA: Bradford Books.
Ericsson, K. A., & Simon, H. A. (1998). How to study thinking in everyday life: Contrasting think-aloud protocols with descriptions and explanations of thinking. Mind, Culture and Activity, 5, 178–186.
Haller, C. R. (2010). Toward rhetorical source use: Three student journeys. Writing Program Administration, 34, 33.
Hayes, J. R., & Flower, L. S. (1980). Identifying the organization of writing processes. In L. W. Gregg & E. R. Steinberg (Eds.), Cognitive processes in writing: An interdisciplinary approach (pp. 3–30). Hillsdale, NJ: Lawrence Erlbaum.
Kirkpatrick, L. C. (2012). Students’ strategies for writing arguments from online sources of information (Doctoral dissertation). Western University’s Electronic Thesis and Dissertation Repository. Paper 583.
Kirkpatrick, L. C. (2013, June). Students’ processes and strategies for writing arguments from online sources of information. Paper presented at the annual conference of the Canadian Society for the Study of Education [Dunlop award winner presentation].
Kirkpatrick, L. C. (2014, February). New technology for researching writing. Paper presented at the tri-annual Writing Research Across Borders conference of the International Society for the Advancement of Writing Research, Paris, France.
Kirkpatrick, L. C., & Klein, P. D. (2009). Planning text structure as a way to improve students’ writing from sources in the compare-contrast genre. Learning and Instruction, 19, 309–321. https://doi.org/10.1016/j.learninstruc.2008.06.001
O’Hara, K. P., Taylor, A., Newman, W., & Sellen, A. J. (2002). Understanding the materiality of writing from multiple sources. International Journal of Human-Computer Studies, 56, 269–305. https://doi.org/10.1006/ijhc.2001.0525
Risemberg, R. (1996). Reading to write: Self-regulated learning strategies when writing essays from sources. Reading Research and Instruction, 35, 365–383. Retrieved from http://www.tandf.co.uk/journals/titles/19388071.asp
Rouet, J. F. (2006). The skills of document use: From text comprehension to web-based learning. Mahwah, NJ: Lawrence Erlbaum.
Segev-Miller, R. (2007). Cognitive processes in discourse synthesis: The case of intertextual processing strategies. In G. Rijlaarsdam (Series Ed.), & M. Torrance, L. van Waes, & D. Galbraith (Vol. Eds.), Writing and cognition: Research and applications (Studies in Writing, Vol. 20, pp. 231–250). Amsterdam, The Netherlands: Elsevier.
Smagorinsky, P. (1998). Thinking and speech and protocol analysis. Mind, Culture and Activity, 5, 157–177.
Spivey, N. N. (1997). The constructivist metaphor: Reading, writing and the making of meaning. San Diego, CA: Academic Press.
Strømsø, H. I., Bråten, I., & Britt, M. A. (2011). Do students’ beliefs about knowledge and knowing predict their judgment of texts’ trustworthiness? Educational Psychology, 31, 177–206. https://doi.org/10.1080/01443410.2010.538039
TechSmith Corporation. (2009). Camtasia Studio (Version 6.0.3) [Computer software]. Okemos, MI: Author.
Zheng, J. (2013). 18 minutes and 11 seconds online: Exploring the cognitive processes of 12 good writers writing on the internet (Doctoral dissertation). Available from ProQuest Dissertations and Theses database. (UMI No. 3559852)