This case study describes a multiple-method dissertation study of distance learning drawing classes. Many higher education institutions are resistant to offering studio art classes online, owing to traditions that date back to Renaissance times, when the apprentice and the master physically shared the same studio. The pedagogy for studio art classes promotes collaboration among students, their peers, and the professor. The research question in this study centered on the student experience of taking an online drawing class, with particular attention to virtual collaborative critiques. I chose grounded theory interviews to understand the student experience, followed by a content analysis of the critiques. The grounded theory data were then compared and contrasted with the results of the content analysis. The study revealed many insights about both learning to draw online and the process of collaboration, especially the students' need for visual presentations and the presence of a student culture. The comparison of the emergent themes of the dimensional analysis with the a priori codes of the content analysis model required a unique approach. With no existing models of such triangulation to reference, an abstract approach to analysis was taken, honoring the participants, the processes, and the visual nature of learning to draw.
By the end of the case study, you should
- Have an increased understanding of the unique strengths and challenges inherent in both a grounded theory dimensional analysis and a content analysis to examine a phenomenon
- Understand how to choose methods appropriate for examining a phenomenon that has few existing references in the literature
- Be able to choose a method or methods that honor the perspective of the participants, the subject area under investigation, and the nature of the research question(s)
- Understand the challenges of triangulating diverse methods in a study that examines an emerging field
Drawing upon my 30 years of experience as an art educator and distance learning coordinator at a community college, I designed and taught my first online drawing course in 2011. Aware of the unique features and challenges that impacted the student experience, I elected to design a dissertation study on distance learning drawing classes. In an on-campus class, the professor provides motivation and historical examples to the students. He or she usually demonstrates the drawing technique and gives personalized attention to students as they make marks on their paper, sometimes even correcting their drawing. Sharing the same physical space, each student can also observe the progress and outcomes of their classmates. Students participate in group discussions and critiques, observing and evaluating the work of their peers and providing feedback. In the corresponding class delivered online, the instructor provides text-based instruction or videos and communicates with the students on the discussion board or through email. Students take photographs of their drawings and post them on the discussion board. Critiques and group discussions are mediated through the asynchronous discussion board, or in a synchronous live chat.
Despite the proliferation, popularity, and exponential growth of distance learning, a serious gap exists in the literature dedicated to visual or studio art classes delivered through distance learning. In fact, it is very rare to find a drawing class delivered online. Courses and entire degrees on digital design and graphic design are offered online, as well as art history, art appreciation, art criticism, film criticism, and art education, on both the undergraduate and graduate levels. I was interested in studying the student experience of taking a studio-based drawing course, especially how critiques might clarify the definition of online collaboration. As illustrated in Figure 1, I was able to gather a large body of literature dedicated to distance learning, art education, and virtual culture, which provided a framework in which to formulate my research questions.
A review of the distance learning literature provided a thorough investigation into the nature of collaboration and interactivity and their effect on the co-construction of knowledge, but it rarely privileged the student voice. Collaboration is commonly mediated on the discussion board provided by most learning management systems; however, the literature does not address whether some subject areas, like the experiential drawing class, are more or less adaptable to substituting the affordances of the discussion board for the synergy of face-to-face instruction.
A clear definition of collaboration has not been established, but factors contributing to collaboration such as engagement, interactivity, the divergence of time and place, social presence, and knowledge sharing behavior have been confirmed in the literature. Marc Prensky coined the phrase digital natives to describe how millennial students optimally learn through collaborative social networks.
The art education literature describes how drawing is learned through making marks on paper, skills of visual perception, understanding aesthetics, and engaging in critiques. The critique is born from a long tradition of the aesthetic in art education, a collaborative process in which students share, evaluate, and interpret each other's work. Several models offer a structure to help instructors mediate a critique. Edmund Feldman's model has four distinct stages: description, analysis, interpretation, and evaluation. George Geahigan explains that approaches include oral and written work and acknowledges that the synergy in such group discussion results in collaborative learning. Examples of studies on the effects of virtual critiques included in traditional face-to-face classrooms indicate that collaborative discussion helps students to visually discriminate, facilitate deep learning, make meaning of the creative process, and connect with peers.
Not only is there a paucity of literature describing distance learning studio art classes, but the existing literature does not articulate the social processes, nature, and dimensions used to measure how students learn to draw through this interactive process. As Leonard Schatzman expresses it in his definition of qualitative grounded theory, I wanted to find out ‘what all is happening here?’ Translated into terms appropriate to this study, I wanted to know how the students described their experience and then examine their actual interactions. From my own experience teaching drawing online, I knew that the conversations on the discussion board would reveal the nature of the students' interactions. In order to fully examine these issues, I needed not only to interview the students but also to find an instrument to effectively analyze the collaboration at its source: the virtual critiques and conversations on the discussion board. Students from two institutions in the United States participated in the study, a Midwestern community college and a 4-year university in a Middle Atlantic state. The 13 interviews provided insights into the students' need to experience a visual practice through examples and demonstrations, and into the presence of a strong virtual culture that mediated their participation. The content analysis included hundreds of transcripts from 46 students, which validated and located evidence of collaboration in virtual discussions. Three types of discussion board were examined: asynchronous critiques, synchronous live-chat critiques, and weekly discussions on an art-related topic chosen by the instructor. An asynchronous discussion is defined as a conversation that takes place in non-concurrent time, whereas synchronous refers to an online conversation that takes place in real time.
The research for the dissertation took place between March and August of 2012. Both classes had concluded within a year of the study. The professors teaching the two classes provided their rosters and the discussion board transcripts. All students in both classes were invited by email to participate in the study. Participation was voluntary, and the interviews took place over the telephone. Consistent with the grounded theory method, I asked open-ended questions, beginning with ‘What was it like to take an online drawing class?’ The interview data were then coded in NVivo software. The discussion board transcripts were chosen for analysis based on participants' referrals: when a student offered a specific reference to an online conversation, that discussion board transcript was chosen to be analyzed. The content analysis model, designed by France Henri in 1992, measures the frequency and quality of collaborative exchanges in online discussion forums. A dimensional analysis of the grounded theory interviews yielded dimensions that were compared and contrasted with the quantified codes and categories from the content analysis, and the results were interpreted in a graphic model. Utilizing the two methods strengthened the study by demonstrating common constructs and validating the findings of both methods.
Grounded theory is a qualitative, emergent method that generates theory from interview data. Its philosophical foundations include George Herbert Mead's work on pragmatism and Herbert Blumer's work on symbolic interactionism, both of which honor multiple interpretations of reality. This method was chosen for its ability to capture the experience from the unique perspective of the student in a distance learning drawing class. In their hour-long interviews, the students were quite candid and able to describe their experiences with great clarity, including being able to recall the details of their interactions on the discussion board. They described their challenges learning to draw in isolation and through the text-based learning management system. They discussed their experience in critiques candidly, many preferring the live chat, and all indicating that it was helpful but at the same time subject to the interpretation of others' postings.
The interviews were coded in NVivo by a team of three coders using the constant comparative method. Individual codes were consolidated into conceptual categories until primary dimensions emerged and eventually became saturated. A primary dimension can best be described as one of the largest and most significant concepts to emerge from the coding process. A dimensional analysis was used to organize the dimensions into a conceptual framework. The dimensional analysis was designed by Leonard Schatzman to emulate the way humans naturally make sense of their environment and as a structure to analyze the conceptualized dimensions. An explanatory matrix is used to map out the data, in this case based on the model by Susan Kools and colleagues that compares the context, conditions, processes, and consequences. A core dimension surfaces as the strongest concept, one that serves to contextualize the relationships among the previously described conditions, processes, and consequences of the primary dimensions.
A content analysis is a generic name for a variety of methods used to quantify, clarify, or classify text-based data. This method was chosen for its ability to analyze the text-based conversations between students as they critiqued each other's work on the discussion board. After careful research, I chose a content analysis model designed by France Henri, because it was a good fit for my research question and it was the basis of most other models used to analyze discussion boards. It measures five categories of collaboration using 25 different codes. Eleven discussion board sessions were analyzed: five asynchronous critiques, four weekly discussion topics, and two live chats. The message unit analyzed was defined by the paragraph breaks of the author. Student postings demonstrated varying levels of length, depth of understanding, and comfort with the medium. As the course progressed and the students got to know one another, lively conversation was observed, especially in the forums dedicated to discussion topics. In the asynchronous critique forums, students offered helpful but polite suggestions to each other. The live chats were the most candid and social. The same team of coders analyzed the transcripts in NVivo, following three training sessions that revealed different communication styles across the three types of discussion board. These observed differences proved to be instrumental in the final analysis.
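The unit-of-analysis rule described above (a message unit defined by the author's own paragraph breaks) can be sketched in code. This is a hypothetical illustration rather than the study's actual tooling, and the sample posting is invented:

```python
# Sketch: segmenting one discussion board posting into message units,
# where a unit boundary is a paragraph break left by the author.
import re

def segment_message_units(posting: str) -> list[str]:
    """Split a posting into message units at blank-line paragraph breaks."""
    units = re.split(r"\n\s*\n", posting.strip())
    return [u.strip() for u in units if u.strip()]

# Invented example posting: a compliment, then a two-line suggestion.
posting = (
    "I really like how you handled the shading on the left side.\n\n"
    "One suggestion: the vanishing point seems slightly off.\n"
    "Maybe re-check the perspective lines?"
)
units = segment_message_units(posting)
# Yields 2 units: the single blank line marks the only paragraph break.
```

Each resulting unit would then receive one or more codes from the analysis model.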
Many mixed method studies employ sequential data collection, utilizing both quantitative and qualitative methods. Usually, one method of collection informs the other, or one phase of data collection carries more weight in the analysis stage. In this study, the interview data informed the choice of discussion board transcripts. I was interested in comparing and contrasting the results of the two methods of analysis to find a balance between the students' own voices and the actual text-based conversations that took place on the discussion board. Together, the emergent approach of dimensional analysis and the a priori taxonomy of content analysis made it possible to better understand and define the construct of collaboration. For example, when a student described what it felt like to participate in an online critique, I was able to find the specific discussion board transcript they referred to and code the exchange according to the content analysis model. In their interviews, the students described the live chat as more interesting and spontaneous than its asynchronous counterpart, and this was demonstrated in the results of the content analysis.
Because the findings were not actually combined, one of the dissertation committee members was concerned that the approach did not meet the definition of a mixed method design. It was suggested that I define the study as multiple method. Regardless of the term used, I found that comparing the students' personal descriptions of the critique experience with the frequency counts of the codes from the content analysis resulted in a deeper, richer understanding of the collaborative process in distance learning drawing classes.
The categories of the content analysis were averaged by discussion board type, expressed as percentages, and presented as descriptive statistics. The explanatory matrix of the dimensional analysis resulted in 12 primary dimensions, which explained not one but two distinct core dimensions. Because of this complexity, I renamed the core dimensions core domains to give emphasis to each core. This served to validate the individual primary dimensions, each of which was situated in its own set of conditions, processes, and consequences. The term domain was adapted from a colleague's grounded theory dissertation. First, I compared the constructs of the primary dimensions and categories, and then I explored where the two might converge or conflict. The comparison revealed aligning themes. An analysis of the relationship between the two helped locate and define collaboration across the three different discussion board types. One emerging theme from the dimensional analysis, ‘sharing and comparing’, demonstrated a construct similar to the ‘interactivity’ code from the content analysis. The frequency counts of the cognitive skills processing code clarified the difference between the students' experience in the two types of discussion board: synchronous and asynchronous. No conflicts were found in the final analysis, which helped me define the final triangulation as a convergence. The results of the convergence were depicted in graphical illustrations in order to provide a model for the theoretical propositions.
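The descriptive statistics described above, category frequencies tallied per discussion board type and expressed as percentages, can be sketched as follows. The board types echo the study, but the coded units, code labels, and counts are invented for illustration:

```python
# Sketch: turning coded message units into per-board category percentages.
from collections import Counter

# Hypothetical coded units: (discussion board type, category label).
coded_units = [
    ("asynchronous critique", "cognitive: surface processing"),
    ("asynchronous critique", "interactivity"),
    ("asynchronous critique", "cognitive: surface processing"),
    ("live chat", "social"),
    ("live chat", "social"),
    ("live chat", "interactivity"),
]

def category_percentages(units):
    """Return {board_type: {category: % of that board's coded units}}."""
    by_board = {}
    for board, category in units:
        by_board.setdefault(board, Counter())[category] += 1
    return {
        board: {cat: round(100 * n / sum(counts.values()), 1)
                for cat, n in counts.items()}
        for board, counts in by_board.items()
    }

results = category_percentages(coded_units)
# e.g. results["live chat"]["social"] -> 66.7 (2 of 3 coded units)
```

Percentages of this kind, compared across the three board types, are what the bullet list of significant results below summarizes.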
The results of the dimensional analysis revealed two core domains, each of which was developed from primary dimensions. The first core domain was visual learning: the experience of learning to draw. The students described the need for more visual directions and technique demonstrations. Although the learning management system is largely text based, learning to draw employs the visual senses. This domain had five primary dimensions, illustrated in Table 1.
The second core domain was tacit assumptions: the experience of virtual culture. This domain demonstrated the power that culture had on virtual communication. The primary dimensions offered evidence that not only did a virtual culture exist, but it influenced the nature of collaboration. The second domain had seven primary dimensions, illustrated in Table 2.
I created graphical models to display these results, designed to honor the work of the students. The graphics describing the dimensional analysis were modeled after student descriptions of still life and landscape drawings completed in the courses. The graphic shown in Figure 2 is a smaller sample extracted from the final composite illustration of the core domain, ‘Tacit Assumptions: the experience of virtual culture’.
The content analysis located and quantified student collaboration within the specific discussion boards. The results were expressed as descriptive statistics. The asynchronous critique was a weekly peer review of student drawings on the discussion board: students took digital photos of their work and posted them, followed by small group discussions. The weekly discussion topic addressed such issues as the students' use of charcoal or asked students to research and report on an artist of their choice. The live chat was a synchronous live critique that supported images, video, voice, and a digital whiteboard. The significant results are listed below:
- Asynchronous Critique
- highest scoring category: cognitive skills surface processing
- highest scoring code: judgment defined as appreciation, criticism or evaluation
- Weekly Discussion Topic
- highest scoring category: interactivity
- highest scoring category across discussion boards: in-depth processing
- Live Chat
- highest scoring category: cognitive skills surface processing
- highest scoring code: clarification
- highest scoring category across discussion boards: social
To summarize, the asynchronous critique provided time for reflection, whereas the live chat moved quickly, as seen by comparing the in-depth processing category. Students engaged in deeper levels of cognitive skills when they were not critiquing each other's work. The live chat scored highest in the social category, as compared to the weekly discussion board, which scored the lowest. The results of the content analysis were mapped in concentric circles superimposed on a portrait, one of the drawing assignments described by the students. The graphic shown in Figure 3 is a smaller sample extracted from the final composite illustration of the content analysis results.
In the triangulation phase, it was found that ‘talking back and forth’ shared a construct with the social category of the content analysis. As explained previously, ‘sharing and comparing’ shared a construct with interactivity. Examining the relationship between the categories, I found that investing time explained the difference between the live chat and the asynchronous critique. Cognitive skills processing was higher in the asynchronous critique, but interactivity was higher in the live chat. Students wait for responses in between other life activities and have time to reflect before responding in the asynchronous critique; in the live chat, interactivity occurs in real time, encouraging social chatter. The primary dimension of ‘being nice’ explains the relationship between the weekly discussion topic and the asynchronous critique: criticism of personal drawings in the asynchronous critique invoked the students' tacit code of ‘being nice’. In-depth processing and interactivity were higher in the weekly discussion topic, where the discussion focused on technique or an artist. The graphic shown in Figure 4 illustrates the convergence of the data demonstrating ‘collaboration’.
As in any study, there were a few challenges related to the research method. The content analysis model had been written 20 years earlier to accommodate early discussion boards that used single-threaded formats displayed in chronological order. The team of coders also observed that the three discussion board types appeared to have different ‘personalities’. After careful debate, I authored a training manual defining the use of the categories and codes for each discussion board type, which helped improve inter-rater reliability.
A content analysis can be qualitative or quantitative. A scholarly discussion with members of the dissertation committee centered on whether or not this model could be classified as quantitative. Marshall Scott Poole, Joseph P. Folger, and Dean E. Hewes define a quantified content analysis as one that includes a frequency count of words, follows a predetermined coding scheme, is coded by those trained in restricted observer mode, and includes tests for inter-rater reliability. A statistical or regression analysis is optional. In the end, the members agreed that France Henri's model conformed to the definition of a quantitative analysis.
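The inter-rater reliability criterion named above can be illustrated with Cohen's kappa, a standard chance-corrected agreement statistic for two coders (this study used three coders, a case that statistics such as Fleiss' kappa generalize to). The code labels in this sketch are invented; the study's actual codes come from Henri's model:

```python
# Sketch: Cohen's kappa between two coders who assigned one code
# from the coding scheme to each of the same six message units.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' label sequences."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if coders labeled units independently at random
    # according to their own marginal code frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["judgment", "clarification", "judgment", "social", "judgment", "social"]
b = ["judgment", "clarification", "social",   "social", "judgment", "social"]
kappa = cohens_kappa(a, b)
# Coders agree on 5 of 6 units; kappa = 17/23, roughly 0.74.
```

A kappa near 1 indicates agreement well beyond chance; values in the 0.6-0.8 range are conventionally read as substantial agreement.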
The most challenging issue in this study was determining the best way to combine or triangulate the results of the two phases. My initial intention was to compare and contrast the results of the analyses, but at the onset of the study this was only a vague, unformed concept. The nature of the inquiry allowed the data to converge in a logical, organic manner. The interviews allowed the perspective of the student to emerge through the dimensional analysis, while the codes and categories of the content analysis model provided ‘proof’ of the social phenomenon of collaboration. The results of both methods served to elucidate the term collaboration by allowing it to be defined in both qualitative and quantitative terms.
The results of the two methods supported one another while locating evidence of collaboration within the context of the two sets of data. The synergy of the two methods enabled me to preserve the voice of the student participants while confirming the phenomenon of collaboration in the text of the discussion boards.
The final analysis led to the proposal of three theoretical propositions about collaboration in a distance learning drawing class: (1) students preferred images and video over text-based directions, (2) a strong virtual culture exists among students which impacts collaboration, and (3) students appreciated the balance between synchronous and asynchronous discussion protocols (Table 3).
A visual model was created to depict how the findings from both methods fit together and to illustrate the term ‘In Situ Vision’, which was incorporated in the title of the dissertation (Figure 5).
This study examined the student experience of collaboration in an online drawing class. The two methods individually revealed a rich landscape that described collaboration and the co-construction of knowledge through collaborative critiques. Researchers may want to consider using either grounded theory or a content analysis alone to obtain more focused results. Grounded theory is an emergent methodology that may yield different results with different populations. A more traditional quantitative survey could be designed from the results of a grounded theory dimensional analysis for a different mixed method design. Many other content analysis models exist, designed to analyze transcripts from contemporary discussion board formats. Another future direction might be to compare the student experience of collaboration in another arts-based class, such as music or creative writing.
- In this dissertation study, I chose grounded theory to capture the student's voice in the research. What other methods could be used to honor the student perspective?
- Students volunteered to be interviewed for this study. How does this method of sampling impact the validity of the results?
- What other methods of triangulation could be used to combine the results in this study?
- In this essay, I stated that the use of the mixed methods strengthened the results. Do you agree or disagree with this assertion, and why?
- It had been suggested by one of the dissertation committee members that I name this design a multiple-method instead of a mixed-methods study. What is the difference between these definitions? What would you name it?