Conducting Mixed-Methodological Dating Violence Research: Integrating Quantitative Survey and Qualitative Data

Abstract

Dating violence is a major public health concern, with high prevalence on college campuses. Although traditional quantitative methodology has helped researchers understand important risk factors, correlates, and predictors of dating violence and its consequences, mixed-methodological approaches—used in this chapter to refer to both content analysis of open-ended questions and quantitative survey methods—can add much to our understanding of dating violence by examining contextual factors through the victims' own narratives. The current chapter defines mixed-methods research and explains the use of open-ended questions and content analysis in combination with quantitative methods utilizing two case examples.

Learning Outcomes

After reading this case study, students should

  • Have a better understanding of how to conduct mixed-methodological research, including a basic understanding of content analysis and of how to combine quantitative and qualitative data
  • Have a better understanding of the reasons for using mixed-methodological and qualitative approaches to understanding phenomena such as dating violence
  • Be able to appreciate the contribution of mixed-methodological designs beyond traditional quantitative approaches

Researching Dating Violence

Dating violence, defined as aggression toward a current or former dating partner, is a major public health problem in the United States that leads to many negative outcomes for victims, such as mental and physical health problems (see Lewis & Fremouw, 2001, for a review of negative outcomes associated with dating violence). As a result of the high rates of dating violence and negative outcomes associated with experiences of dating violence among adolescents and young adults, there has been an increase in research over the past few decades on dating violence in order to inform primary prevention efforts (i.e. prevent dating violence from happening at all) and intervention and advocacy efforts (i.e. assist victims of dating violence in recovery).

We have been conducting research on dating violence since we were undergraduate students, and we continue to conduct this research today in our positions as a faculty member (Katie Edwards) and a graduate student (Christina Dardis). Both of our research programs broadly focus on risk and protective factors for dating violence perpetration, as well as on the disclosure (i.e. telling someone about the abuse), leaving (i.e. terminating an abusive relationship), and recovery (e.g. healing psychologically and physically) processes associated with dating violence victimization. We have found it beneficial to use both quantitative and qualitative methodologies, referred to as mixed-methods or mixed-methodological approaches, in researching all of the above topics.

Mixed-Methodological Research

Similarities and Differences between Qualitative and Quantitative Research

Much of the research that is found in scientific journals and textbooks on dating violence and other subjects is based on quantitative research, which uses surveys and tests with limited response options to answer predetermined questions. For example, a researcher may ask a participant who experienced dating violence to ‘indicate how sad the experience made you feel’ and ‘how angry did the experience make you feel?’ with response options from 0 (not at all) to 7 (extremely). This would be an example of a quantitative survey question where the researcher has predetermined the categories of interest, such as sad and angry, and responses, such as a 0–7 scale.

Qualitative methodologies, in contrast, involve the use of observation, interviews, or open-ended questions that allow for a more inductive approach; that is, the meaning comes from the data without predetermined categories of responses. For example, a researcher may ask a participant to respond verbally or in writing to the question ‘How did the experience make you feel?’, and the participant would speak or write, in his or her own words, about how the experience made him or her feel, without any predetermined categorizations provided by the researcher. In other words, the researcher would not ask the participant how sad or how angry they felt but, rather, how they felt more generally, in order to elicit any possible response. A strength of quantitative approaches, on the other hand, is that researchers can easily quantify phenomena across many individuals, allowing responses to be compared across studies and samples. For example, whereas one study might find the average sadness score reported by victims of dating violence to be a 5 on a scale from 0 to 7, another study might find the average sadness score to be a 4. Because both studies used similar quantitative measures, we can more easily compare findings across them.

Qualitative approaches permit a deeper and richer understanding of a phenomenon from the experiences or point of view of the individuals in the study. For example, in a qualitative study in which victims of dating violence are asked how the experience made them feel, the researcher would be able to obtain rich detail about the specific words they use to describe how they felt, and how they came to feel that way and so forth. However, with qualitative research, it is difficult to compare findings across studies because participants' experiences are not usually quantified as in quantitative research. For example, a theme that emerges from qualitative research may be that young women who experience dating violence go through a process of experiencing varying emotions, whereas a quantitative study may report the percentages of different emotions experienced. Given that in the qualitative study the findings are in words and in the quantitative study the findings are in numbers, comparing these across different studies can be a bit challenging at times.

The Utility of Integrating Qualitative and Quantitative Research

Given that both qualitative and quantitative methodologies have their advantages, there has been a growing focus on integrating these methodologies to create mixed-methods research, also referred to as mixed-methodological research. Mixed-methods approaches are especially relevant for studying complex behaviors like dating violence that are influenced by a number of factors, such as an individual's accepting attitudes toward dating violence and more broadly community norms toward dating violence and responses, such as legal and social consequences for dating violence perpetration.

Quantitative surveys allow for the documentation of the prevalence of dating violence (how much of it happens) and how dating violence is related to other factors, such as gender and attitudes toward dating violence. On the other hand, qualitative data help us understand more about what happened (e.g. who hit whom first, and what happened that led to the abusive incident), what the experience was like (e.g. how was it experienced in the words of the victim), and various processes that occur in the aftermath of an abusive incident (e.g. recovery, disclosure, help-seeking). Combining quantitative and qualitative methodologies into one study allows for a more complete understanding of dating violence than either methodology alone.

Various Ways to Conduct Mixed-Methodological Research

Mixed-methodological dating violence research can be conducted in a number of ways. Generally, there is a quantitative survey component along with a qualitative component. The qualitative component may involve interviews (either individual or group interviews), collection of daily diary narrative data (participants write each day about an assigned topic), or written responses to open-ended questions generally included alongside a quantitative survey. Professor Edwards has conducted mixed-methodological studies that included more in-depth qualitative methods such as interviews using grounded theory (i.e. focus on building theory about a phenomenon based on observation and immersion in interview data) and phenomenological (i.e. focus on people's subjective experiences and interpretations of their experiences and the world) approaches. These more in-depth interviews have focused on topics such as

  • rural youth's perceptions of helping in situations of dating violence;
  • rural and urban high school students' perceptions of factors that facilitate or hinder helping friends in situations of dating violence;
  • the process of leaving an abusive dating relationship among first-semester college women;
  • lesbian, gay, and bisexual young adults' experiences with dating violence and their perspectives on how social and community factors promote or hinder their recovery.

The richest and most in-depth qualitative data come from interviews that allow for probing and follow-up questions (e.g. ‘tell me more about that’, ‘how did that make you feel’, ‘could you give me an example?’). In addition, some participants with histories of dating violence may prefer to share their experiences in the format of an interview, although some participants prefer writing about their experiences on paper and not sharing them directly with a researcher. In-depth qualitative methods, like individual and group interviews, are often time-consuming in terms of data collection (time to conduct all of the interviews) and management (time to transcribe all of the recorded interviews) and are costly (often money to pay participants or research assistants to conduct the interviews is required). Institutional review boards (IRBs) often require individuals with advanced graduate and/or clinical degrees to conduct such interviews with survivors of dating violence. For all of these reasons, it can be challenging, if not impossible, for student-initiated research projects on dating violence to include in-depth qualitative methodologies.

Thus, integrating open-ended questions into surveys can also provide important, cost-effective, contextual information to supplement closed-ended, quantitative survey questions. We have conducted mixed-methodological studies that included participants' written responses to open-ended questions, which is the focus of this case study.

Overview of Mixed-Methods Dating Violence Research

In our research on dating violence, we often include several established surveys to assess the variables we are most interested in, as well as a number of open-ended questions to which participants are asked to respond in writing. Closed-ended and open-ended survey questions generally assess the same phenomenon but in different ways. Regarding the open-ended questions, participants are generally asked to respond in detail in order to gain as rich an understanding of their experiences as possible. Open-ended questions are constructed to minimize predetermined responses and to allow participants to answer in their own words and in any way they choose. For example, in a study in which we were interested in understanding how participating in dating violence research affected participants, we asked young women to describe

How and why this study affected your thoughts and/or feelings about your partner and your relationship. If the study did not affect your thoughts and/or feelings about your partner or your relationship, please explain why you think this is the case. (Edwards, Sylaska, & Gidycz, 2013)

Notice that we did not provide any predetermined categories or suggestions of specific responses.

Transcribing Written Data

After constructing the survey and collecting the data, written survey responses can be typed into a qualitative data analysis program such as NVivo, or into more readily available software such as Microsoft Excel or Word. Depending on the length of the written responses, we oftentimes include them as a column in our data set (e.g. in SPSS) alongside the quantitative survey variables. We most commonly analyze participants' written responses using content analysis, the most common method for analyzing text documents in the social sciences.
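
For readers who want to see what this might look like in practice, below is a minimal sketch in Python (using pandas) of a data set in which typed open-ended responses sit in a column alongside quantitative survey variables. The column names and values are hypothetical and are not drawn from our actual studies.

```python
# A minimal sketch (not the authors' actual data set) of storing typed
# open-ended responses alongside quantitative survey variables.
# Column names and values are hypothetical.
import pandas as pd

survey = pd.DataFrame({
    "participant_id": [101, 102, 103],
    "sadness_0_to_7": [5, 2, 7],          # closed-ended item
    "anger_0_to_7": [4, 1, 6],            # closed-ended item
    "investment_open_ended": [            # typed written response
        "I put a lot of time and energy into this relationship.",
        "We share an apartment and most of our friends.",
        "Sometimes I feel like I neglect my friendships.",
    ],
})

# The same file can then feed both quantitative analyses and content coding.
survey.to_csv("mixed_methods_survey.csv", index=False)
print(survey.head())
```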

Content Coding Written Data

In Table 1, we use examples from a study in which both abused and non-abused women were asked what had caused them to be invested (i.e. the number and magnitude of resources that are tied to a relationship) in their relationships (Dardis, Kelley, Edwards, & Gidycz, 2013). The steps involved in content analysis are as follows (a brief coding sketch in Python follows the list):

Table 1. Examples of steps in content coding procedures.
  • First, coders read all participants' responses repeatedly in order to obtain the gestalt, or whole picture, of the data.
  • Second, words and phrases that address the question asked are highlighted in an effort to identify and categorize all aspects of participants' responses to the question(s). In this case, the initial codes included spending time talking and energy spent getting to know each other, sharing possessions, and having mutual friends as factors that had led to investment in their relationships.
  • Third, similarities and differences in the responses are noted, leading to the emergence of categories of similar responses. In this phase, we found that many participants mentioned time spent together and mutual friends as factors that had led to their investment.
  • Fourth, after initially coding all participants' responses by noting the presence or absence of a particular code (e.g. time spent together) in each participant's response, the coders examine all data within a particular code, and some codes are combined, whereas others are split into subcategories (Step 4 in Table 1). In this case, because participants often combined the terms time and energy (e.g. ‘I put a lot of time and energy into this relationship’), these codes appeared to represent a single construct (‘effort’) and were combined.
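
The coding itself is a human judgment made while reading each response; software only records those judgments. Below is a minimal sketch in Python of how the presence or absence of each code (Step 4) might be recorded and tallied. The codes and data are hypothetical, loosely modeled on the investment example, and do not reproduce Table 1.

```python
# A minimal sketch of recording content-coding decisions.
# The coding decision is made by a human reader; this only stores the
# presence/absence of each code and tallies how often each code appears.
# Codes and data are hypothetical.
import pandas as pd

codes = ["effort", "mutual_friends", "shared_possessions"]

# One row per participant: the coder marks 1 if the code is present, else 0.
coding = pd.DataFrame([
    {"participant_id": 101, "effort": 1, "mutual_friends": 1, "shared_possessions": 0},
    {"participant_id": 102, "effort": 0, "mutual_friends": 1, "shared_possessions": 1},
    {"participant_id": 103, "effort": 1, "mutual_friends": 0, "shared_possessions": 0},
])

# Percentage of responses in which each code appeared.
print(coding[codes].mean().mul(100).round(1))
```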

The relationships among the categories that come from the content analysis often result in themes. There is often overlap in content and thematic analytic approaches. Thus, whereas most of our research using this method focuses primarily on the codes created from the data, the categories are often used to explore themes across categories.

When the coders disagree, the discrepancy is discussed until mutual agreement is reached. In this case, the two individuals coding the data had to spend time discussing their interpretations of the data. Although they ultimately agreed on the codes, peer debriefing can also be used; this involves bringing in someone else, such as an expert, to review the codes and responses as a third rater. Credibility, or the validity of the interpretation of the data, is established through this process of peer debriefing, as well as through prolonged engagement (spending sufficient time with each of the participants' written responses) and deviant case analysis (examining elements of the data that appear to contradict explanations emerging from the coding categories).

Writing Up Content Analysis

In writing up content analyses of written responses, we often provide readers with a sense of how frequently a written response is reflective of a specific coding category. For example, using the example of abused and non-abused women's perceptions of investment (Dardis et al., 2013), we found that 39% of abused women and 14% of non-abused women discussed negative qualities about their partners when discussing their level of relationship investment. This finding came from the ‘negative qualities’ code that emerged as one of five codes regarding women's perceptions of investment in their relationship. It is also important to provide the reader with example quotes for each of your coding categories. In the case of the ‘negative qualities’ code, a few examples from our publication included the following (where NA = not abused and A = abused):

  • ‘Sometimes I think [I am] too [invested] only because I don't think he is’ (NA)
  • ‘We used to live 1 hour away, now we live 4’ (NA)
  • ‘My parents' views on the relationship, controlling and trust’ (A)
  • ‘Sometimes I feel like I neglect my friendships’ (A)
  • ‘Our relationship is purely physical … I don't want to get hurt by him again’ (A)
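
Once presence/absence coding is complete, group comparisons such as the percentages reported above (39% of abused vs. 14% of non-abused women) can be tabulated directly from the coding data. The following is a minimal Python sketch of that tabulation; the data are invented for illustration and do not reproduce the published figures.

```python
# A minimal sketch of tabulating code frequencies by group, in the spirit of
# the 'negative qualities' percentages reported above. Data are invented.
import pandas as pd

coded = pd.DataFrame({
    "group": ["abused", "abused", "abused", "non_abused", "non_abused"],
    "negative_qualities": [1, 0, 1, 0, 0],   # 1 = code present in the response
})

# Percentage of women in each group whose response contained the code.
pct = coded.groupby("group")["negative_qualities"].mean().mul(100).round(1)
print(pct)
```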

In the presentation of our findings, the results from both quantitative and qualitative data analyses are discussed and integrated, and triangulation (cross validation and consistency in findings across methods) as well as discrepancies in results are discussed. Additional examples of our use of content analytic procedures are described in the next section.

A Mixed-Methodological Study of Young Women's Disclosure of Dating Violence

Research Objectives

In Edwards, Dardis, and Gidycz (2012), we were interested in better understanding disclosure of dating violence experiences among young women, a topic in which there remains a dearth of research. We attempted to answer the following questions utilizing quantitative data: To whom do women disclose dating violence? What are the correlates of women's disclosure of dating violence? Utilizing qualitative data, we attempted to answer the following questions: For women who disclosed, who was the most helpful and why? Who was the least helpful and why? For women who did not disclose, what were their reasons for not doing so?

Methods

Participants included 44 women, obtained from a larger screening sample, who reported at least one incident of sexual, physical, or psychological abuse in their current heterosexual relationship, as measured by the Conflict Tactics Scale–Revised (Straus, Hamby, Boney-McCoy, & Sugarman, 1996). The Conflict Tactics Scale–Revised is the most commonly used quantitative measure of dating and intimate partner violence (IPV); an example item is ‘my partner pushed or shoved me’, and participants report how frequently this has happened ranging from ‘never’ to ‘more than 20 times’.

Following the Conflict Tactics Scale–Revised, women were asked to review their answers on the measure, pick the most severe/upsetting experience that they endorsed, and answer the remaining questions about this experience. Specific questions (with response options ranging from ‘Not at all’ to ‘Very much’) assessed self-blame (‘How much do you feel responsible for what happened?’), partner blame (‘How responsible is your partner for what happened?’), and how stressful the experience was for participants (‘How stressful was this event for you?’). Additionally, participants were asked, ‘Did you think about ending the relationship after this experience?’ with response options ‘Yes’ or ‘No’. A final question assessed whom participants told about the experience, with the following response options (and instructions to circle all that apply): did not tell anyone, male friend(s), female friend(s), sister(s), brother(s), mother, father, counselor/therapist, medical doctor, law enforcement, priest/minister, or other.

After these closed-ended questions, women completed three open-ended questions about the most helpful (‘If you told anyone about the experience, which of the sources did you find were most helpful and what made them most helpful?’) and least helpful (‘In a few sentences, please explain which sources you found least helpful and what made them least helpful?’) responses to their disclosure, as well as reasons for nondisclosure (‘If you did not tell anyone about the experience, please explain why you decided not to tell anyone about the experience’).

Quantitative Results

Using the quantitative data, we conducted three t-tests, which are used when there is an independent variable that is categorical (a grouping variable) and a dependent variable that is continuous (measured on a scale). In this study, we compared individuals who had disclosed the abuse with those who had not (Yes/No, independent grouping variable) on the amount of stress and partner blame they experienced (continuous dependent variables). Results from the t-tests suggested that, compared with non-disclosers, disclosers reported higher levels of stress associated with the experience and higher levels of partner blame.
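
As a rough illustration, the kind of independent-samples t-test described above can be run with a few lines of Python using scipy. The scores below are invented and do not correspond to the study's actual data or results.

```python
# A minimal sketch of an independent-samples t-test like those described
# above (disclosers vs. non-disclosers on stress). Scores are hypothetical.
from scipy import stats

stress_disclosers = [6, 5, 7, 6, 4, 7, 5]      # ratings on a 0-7 style scale
stress_nondisclosers = [3, 4, 2, 5, 3, 4]

t_stat, p_value = stats.ttest_ind(stress_disclosers, stress_nondisclosers)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```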

In addition, we conducted a chi-square test, which compares two categorical or grouping variables. In this case, we compared women who did and did not disclose IPV (Yes/No, grouping variable) with whether or not they were thinking about ending the relationship (Yes/No, grouping variable). Results from the chi-square test suggested that disclosers were more likely to think about ending the relationship than non-disclosers.
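
Similarly, a chi-square test of independence on a 2 × 2 table (disclosure by thoughts of ending the relationship) can be computed as follows; the counts are invented for illustration only.

```python
# A minimal sketch of the chi-square test of independence described above:
# disclosure (yes/no) by thinking about ending the relationship (yes/no).
# The 2x2 counts are hypothetical.
from scipy import stats

#                thought about ending: yes, no
contingency = [[18, 6],    # disclosed
               [5, 15]]    # did not disclose

chi2, p_value, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```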

Qualitative Results

Participants' written responses to open-ended questions underwent an independent content analysis by Professor Edwards and Ms Dardis using the steps of content analysis described in the previous section. Briefly, we both read all of the responses multiple times and identified all categories of responses to each of the questions. From there, we went through the responses and noted the presence or absence of each coding category.

As an example of how codes can change, we had first coded two separate categories: ‘saw both sides of the argument’ and ‘normalized the experience’. But following feedback from journal reviewers that these categories were difficult to distinguish, we went back through the written responses and realized that these categories should be refined. Instead of ‘normalized the experience’, we realized that the proper code should be ‘provided rationalization for the partner's behavior’ (i.e. women minimized the abuse or tried to justify the abuse). Instead of ‘saw both sides’, we realized that this category was better represented by the label ‘provided a neutral perspective’. We relabeled these codes and then went back through all of the qualitative data to recode all of the responses on these two codes.

The most common reasons that disclosers stated people were helpful were that confidants offered good advice (36%), provided the opportunity to vent/talk about it (28%), and provided comfort and other emotional support (20%). Additional helpful responses were that confidants related to the experience (16%), provided rationalization for the partners' behavior (12%), and provided a neutral perspective (8%).

Reasons that disclosers stated people were unhelpful were because others told them to break up with their partner (33%), provided bad advice (27%), did not understand (27%), and joked about the experience (20%).

The most common reason for nondisclosure was that the incident was ‘no big deal’ (80%). Additional reasons included concerns that no one would understand (10%) and concerns about anticipated confidants' reactions (20%). In the published paper, we provided readers with quotes to demonstrate each of the codes (i.e. most helpful, least helpful, and nondisclosure).

Reflection on What We Learned

Whereas the quantitative data provided information regarding the rates and correlates of disclosure, the qualitative data provided us with an understanding, in women's own words, of the types of responses they found most and least helpful. We also learned from the qualitative data that a response that one woman found helpful (e.g. being told to leave) could be considered unhelpful by another woman, which shows the variability in women's perceptions of helpfulness, something that would have been more difficult to capture with purely quantitative methods. Furthermore, in subsequent studies, we have used women's qualitative responses from this study to modify quantitative survey measures of social reactions to disclosure of dating violence, which is consistent with broader trends in research in which qualitative findings are used to guide measure development and future quantitative studies. Taken together, the combination of quantitative and qualitative data in this study provided a more detailed picture of women's disclosure of dating violence than either method alone would have provided and offered important information that has been used in subsequent studies on this topic.

Additional Considerations for Mixed-Methods Dating Violence Research

Talking about dating violence can be very hard, particularly for individuals who have experienced dating violence. Answering open-ended questions anonymously may be more comfortable for some participants than face-to-face interviews. However, it is important to consider the ethical implications of this research. For example, because we do not conduct face-to-face interviews, we are not necessarily able to recognize and intervene when a participant experiences distress. Therefore, it is important for us to understand what impact our surveys might have on individuals who participate in dating violence research.

Professor Edwards has conducted a number of studies to examine participants' reactions to answering both quantitative survey and open-ended questions about dating violence. This research consistently finds that very few participants, generally less than 5%, report being upset by participating in dating violence research. Even among participants who report being initially upset, participants are usually only a little upset and for a short period of time. In addition, most participants who are initially upset after participating in dating violence research also report personal benefits, such as contributing to science, gaining a deeper understanding of their experiences, and catharsis.

In addition to considering the effect of dating violence research on participants, researchers should recognize that, because of the high level of detail about traumatic experiences obtained from written or spoken narratives of dating violence, conducting qualitative research can be emotionally exhausting. Conducting this type of research can lead to what is called ‘vicarious traumatization’, in which researchers experience reactions to the trauma (e.g. nightmares, emotional distress) similar to those experienced by the victims themselves. To prevent vicarious traumatization, trauma researchers should engage in self-care, including

  • eating and sleeping well;
  • getting exercise;
  • engaging in hobbies and pleasant activities;
  • coping with humor;
  • journaling about experiences, thoughts, and feelings;
  • alternating working on the qualitative data collection and analysis with less emotionally charged work;
  • seeking and providing mentoring to process emotional reactions;
  • monitoring lab-mates' and colleagues' emotional distress;
  • educating friends and family about trauma-related issues;
  • engaging in social justice and advocacy work aimed at preventing dating violence and other forms of gendered violence in our society.

Final Thoughts

Given the high rates of dating violence and the negative consequences associated with it, there has been a growing focus among social scientists on better understanding this major public health concern. One of the most promising methods for doing so is the integration of quantitative and qualitative methodologies. In this case study, we described one of many types of mixed-methodological research approaches: integrating standardized survey instruments with open-ended survey questions to which participants provide written responses that are subsequently content-analyzed and integrated with the quantitative study findings. We have found this type of methodology informative in our own understanding of dating violence. Moreover, we believe that this and other types of mixed-methods research hold the most promise for informing dating violence prevention, intervention, and advocacy efforts in US society while giving voice to survivors so that their stories remain at the core of this body of research.

Exercises and Discussion Questions

  • What is content analysis, and what are the steps to engaging in content analysis?
  • Despite many qualitative options for research, why might we choose to use content analysis of open-ended questions rather than interviews or other in-depth methods?
  • We found that the mixed-methodological approach added more to our understanding of phenomena than traditional quantitative methods alone. Why was this so, and what were some examples from our research studies?
  • Although we felt that adding open-ended questions helped aid our understanding of phenomena, what are some potential limitations to using open-ended questions or qualitative content analysis in research and how might they be addressed?

Further Reading

Bauer, M. W. (2000). Classic content analysis: A review. In M. W. Bauer & G. Gaskell (Eds.), Qualitative researching with text, image and sound: A practical handbook (pp. 131–151). London, England: SAGE.
Campbell, R. (2002). Emotionally involved: The impact of researching rape. New York, NY: Routledge.
Creswell, J. W., & Plano Clark, V. (2010). Designing and conducting mixed methods research. Thousand Oaks, CA: SAGE.
Gabriel, K. T., & Edwards, K. M. (2007). Supporting a trauma research team in an academic setting: Recommendations from graduate students. Trauma Psychology, 2, 19–21.
Hsieh, H., & Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15, 1277–1288. doi: http://dx.doi.org/10.1177/1049732305276687
Joffe, H., & Yardley, L. (2004). Content and thematic analysis. In D. Marks & L. Yardley (Eds.), Research methods for clinical and health psychology (pp. 56–68). London, England: SAGE.
Krippendorff, K. (2004). Content analysis: An introduction to its methodology (2nd ed.). Thousand Oaks, CA: SAGE.
Patton, M. Q. (2002). Qualitative research and evaluation methods. Thousand Oaks, CA: SAGE.
Reeves, P. M., & Orpinas, P. (2012). Dating norms and dating violence among ninth graders in Northeast Georgia: Reports from student surveys and focus groups. Journal of Interpersonal Violence, 27, 1677–1698. doi: http://dx.doi.org/10.1177/0886260511430386
Testa, M., Livingston, J. A., & VanZile-Tamsen, C. (2011). Advancing the study of violence against women using mixed methods: Integrating qualitative methods into a quantitative research program. Violence Against Women, 17, 236–250. doi: http://dx.doi.org/10.1177/1077801210397744

References

Dardis, C. M., Kelley, E. L., Edwards, K. M., & Gidycz, C. A. (2013). A mixed-methodological examination of investment model variables among abused and nonabused college women. Journal of American College Health, 61, 36–43. doi: http://dx.doi.org/10.1080/07448481.2012.750609
Edwards, K. M., Dardis, C. M., & Gidycz, C. A. (2012). College women's disclosure of partner abuse to peers: A mixed methodological study. Feminism & Psychology, 22, 507–517. doi: http://dx.doi.org/10.1177/0959353511422280
Lewis, S. F., & Fremouw, W. (2001). Dating violence: A critical review of the literature. Clinical Psychology Review, 21, 105–127. doi: http://dx.doi.org/10.1016/S0272-7358(99)00042-2
Straus, M. A., Hamby, S. L., Boney-McCoy, S., & Sugarman, D. B. (1996). The revised Conflict Tactics Scales (CTS2): Development and preliminary psychometric data. Journal of Family Issues, 17, 283–316. doi: http://dx.doi.org/10.1177/019251396017003001