Cyberbullying in Higher Education: To Survey or Not to Survey, That Is the Question I Ask of Thee

Abstract

Bullying has extended beyond the schoolyard into online forums in the form of cyberbullying. Cyberbullying is a growing concern because of its effects on victims. Current studies focus on grades K-12, ages 4 to 17; however, cyberbullying has entered the world of higher education. The focus of this case study was to review and analyze our approach in a past study that identified the existence of cyberbullying in higher education, specifically revealing the existence of students bullying instructors and determining its impact. A total of 346 online instructors from the undergraduate, graduate, and doctoral programs at the school of management at a large online university were surveyed. Of the respondents, 33.8% said they had been cyberbullied by students, 4.4% were unsure, and 61.8% said they had never been cyberbullied by students. Over 60% of the participants did not know what resources were available, or felt that no resources were available, to help them should they encounter cyberbullying by students in the online classroom. Results indicated concerns about reporting cyberbullying, ranging from fear of not receiving further teaching opportunities to concerns that addressing it would decrease student retention. This case study reviews our approach, examines the strengths and weaknesses of the study, and considers how it could be improved.

Learning Outcomes

By the end of the case, students should be able to

  • Determine whether the chosen research design for this study is the most appropriate one
  • Understand the importance of creating an objective survey design
  • Discuss the pros and cons of using a survey design

Project Overview and Context

The Internet has given birth to a new type of bullying called cyberbullying. Bullies can hide behind the computer screen and email, text, or post messages that contain hurtful words that are often rude and highly defamatory (Llewellyn, 2008). Cyberbullying is not limited to grade school, middle school, or high school; it also appears in postsecondary education (Englander, Mills, & McCoy, 2009). Past studies focus primarily on the different types of bullying in the K-12 grades; very few studies have examined the existence of cyberbullying in postsecondary education, and even fewer have looked at students actually cyberbullying instructors. In 2010, we three researchers became colleagues at a large online university. Our research team consisted of a Program Director in the College of Management and Technology and two adjunct (part-time) faculty members in the same college. Each of us had extensive teaching experience both on ground and online. We also had various research interests, but one we had in common was online instruction. We had many discussions about various facets of online teaching and learned that each of us had experienced being bullied online by students. At our university's twice-yearly professional development conferences, we discovered that many other faculty had experienced cyberbullying by students as well.

We became particularly interested in exploring whether student cyberbullying occurred in the online classroom and, if it did, to what extent and how it affected instructor performance and morale. Online education has expanded rapidly in the past 10 to 15 years, yet our literature review found that very few studies examined the existence of cyberbullying in online postsecondary education, and only a minimal number looked at student cyberbullying of instructors in the online classroom.

We were interested in identifying and examining themes around four research questions:

  • What are the experiences that college faculty in online settings have with cyberbullying from students?
  • If they have experienced cyberbullying from students, how have they handled the situation?
  • If an instructor does not do anything about the problem, why not?
  • How should cyberbullying in online education settings be addressed?

Because we lacked significant supporting research on the topic, we wanted empirical evidence from instructors. For this reason, we felt it was important to gather data from a pool of online instructors. We chose to create and implement an online survey because it allowed us to capture data and information quickly. We believed a survey would provide a useful baseline of data that could be used by scholars, practitioners, and others interested in this topic. Surveys are not costly and allow for quick data collection (Sue & Ritter, 2012).

Originally, we identified a qualitative method because we did not have much supporting research to guide us, and narratives seemed best suited to answer our four research questions. Qualitative designs identify themes in data that are not necessarily in a numerical format and use those themes to answer the research questions (Creswell, 2007). However, as we moved forward, we realized that a mixed-methods study would work better than a purely quantitative or qualitative study. A quantitative study relies on obtaining quantifiable data; for example, a quantitative instrument might ask yes or no questions or ask for a specific short answer. Qualitative data rely more on narratives from subjects that can lead to discovering themes. A mixed-methods approach utilizes both quantitative and qualitative data. The mixed-methods approach would allow us to obtain quantifiable data on whether online instructors have been bullied by students, as well as qualitative data to help us answer the four research questions above. Furthermore, we could then identify themes that would point us to future research needs on the topic.

We designed our survey around our four research questions and allowed for narrative responses from participants to further help us identify themes. Demographics were included in the survey so that we could further examine and interpret our findings. For example, we asked for years of online teaching experience to see whether that experience was helpful in dealing with cyberbullying; in other words, were more experienced online teachers better able to cope with student cyberbullying?

Research Design

Once we decided upon conducting a mixed-methods study, we needed to identify who our subjects would be, how many we needed, what needed to be included in the survey, how we would gather and interpret data, and what direction future research should take.

We were fortunate to have access to a large pool of online faculty at our university in the College of Management and Technology. This was a convenience sample and included 346 potential participants. We knew the participants taught at all levels (undergraduate, masters, doctorate) and that they would be both full-time and part-time instructors, male and female, with different levels of experience in teaching online. We included demographic questions so we would be able to identify any related trends. For example, do full-time faculty recognize the existence of cyberbullying any better than part-time faculty? Does age or gender play a role in how well instructors handle cyberbullying?

We felt that with this large number of potential subjects, we would gather plenty of data even if we had a low response rate. Identifying themes around cyberbullying could lead to more in-depth studies in the future. Participants had the choice of whether to participate in the survey. We also needed to decide whether the survey should be anonymous or confidential: if anonymous, no one would know who said what; if confidential, the researchers would know but would not share that information with anyone. We chose an anonymous survey because we felt it would elicit more honest and open responses. Participants completed the survey on SurveyGizmo.com, and anonymity was ensured. The survey was voluntary, so there was no pressure to respond to it.

The survey primarily utilized multiple-choice questions for participants to respond to regarding their experiences with cyberbullying. In addition, participants were given the opportunity to elaborate on their answers by providing narrative examples. There were 17 questions. The first five sought demographic information that might be useful in future research, including gender, type of faculty (part-time or full-time), years of online experience, years of on ground experience, and age range. The final 12 questions assessed their cyberbullying experience (see Appendix 1).

Once we had designed our survey, we needed to examine variables that might affect our study. The following section addresses that.

Research Practicalities

We navigated four practicalities during research planning and the research process. First, the participants were selected through convenience sampling because we had access to a large pool of online faculty. The second practicality was secondary data, which were not included in the research process because little prior research was available on the topic of cyberbullying of online instructors. By gathering qualitative data from participants, we were able to identify themes around cyberbullying that could lead to more in-depth studies in the future.

Individuals who chose to participate in the multiple-choice survey were also given the opportunity to elaborate on answers by providing narrative examples if desired. The third practicality we considered was whether to use a gatekeeper. A gatekeeper was not chosen because the data were collected through a survey on SurveyGizmo.com; all data gathered were anonymous, so we did not feel that a gatekeeper was needed to manage the data collected. Of the 346 online instructors surveyed, 68 surveys (20%) were returned. Of those respondents, 58.8% were male, 41.2% were female, 83.9% were part-time faculty, and 14.7% were full-time faculty.

The fourth practicality was the need to ensure ethical conduct. Ethical practices followed the guidelines set forth by the American Psychological Association. We readily discussed each of our roles in the research process and were clear in determining each role as we reviewed literature, created the survey, distributed the survey, collected the results, and analyzed trends within the data. We followed informed consent rules and ensured confidentiality in the survey. The survey was also voluntary, and no names were included in the data collection process, ensuring anonymity of all respondents. In addition, The Belmont Report (1979) was reviewed and applied to ensure we adhered to human participant research regulations.
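As a quick arithmetic check on the figures above, the reported 20% response rate is simply the number of returned surveys divided by the number of instructors invited, rounded to the nearest whole percentage point:

    response rate = 68 / 346 ≈ 0.197, or about 20%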

SurveyGizmo provided us with the results from the survey. The data were easily accessed through SurveyGizmo's application and sorted by question. We were able to filter and sort the data from various questions and compare the results. We used the data to compare answers to various questions based on the participants' demographics and to see the similarities and differences. The comments participants made were compiled by question so we could note the responses and later review them for insight.
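To illustrate the kind of sorting, filtering, and demographic comparison described above, the following is a minimal sketch (in Python with pandas) of how such an analysis could be reproduced outside SurveyGizmo. It is not our actual analysis workflow; it assumes the responses were first exported to a CSV file, and the file name and column names ("faculty_type", "cyberbullied") are hypothetical placeholders.

# Minimal sketch: compare survey answers across demographic subgroups.
# Assumes an exported CSV; the file and column names below are illustrative only.
import pandas as pd

responses = pd.read_csv("cyberbullying_survey_export.csv")

# Overall distribution of answers to the key question (Yes / No / Not Sure),
# expressed as percentages of respondents.
print(responses["cyberbullied"].value_counts(normalize=True) * 100)

# Cross-tabulate the key question against a demographic variable, such as
# part-time ("Contributing") versus full-time ("Core") faculty.
comparison = pd.crosstab(
    responses["faculty_type"],
    responses["cyberbullied"],
    normalize="index",  # row percentages within each faculty type
) * 100
print(comparison.round(1))

The same cross-tabulation could be repeated for gender, age range, or years of online teaching experience.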

Method in Action

There has been very little research to date on the existence and implications of cyberbullying in online higher education settings. Most research found during the literature review covered the K-12 sector, which was a limitation; however, the limited research also supported the need for the study. We were interested in examining whether student cyberbullying of instructors in higher education occurred in the online classroom and, if so, what effect it had on instructor performance and morale. The overarching research questions were established to gather data related to the lived experiences of online college faculty (see the research questions noted in the project overview). After conducting our survey, we realized that the short-answer and yes/no questions in our survey were actually quantitative, so our method turned out to be mixed rather than purely qualitative. This awareness did not alter our results, interpretations, or conclusions.

The actual survey included a total of 17 multiple-choice questions for participants to respond to regarding their experiences with cyberbullying. In addition, participants were given the opportunity to elaborate on their answers by providing narrative examples, which supports a qualitative research method. The first five questions were quantitative in nature, as they sought demographic information that might be useful in future research, including gender, type of faculty (part-time or full-time), years of online experience, years of on ground experience, and age range. See Appendix 1 for the complete survey.

Although this demographic information was interesting and may support future comparisons across the data, additional opportunities for respondents to elaborate on their lived experiences might have contributed more richly to identifying trends among respondents. In addition, even though this convenience sample allowed us to survey all faculty members in the College of Management and Technology, the study was limited because it was conducted within only one college. It is not known how faculty members in other colleges within this university would have responded to the survey questions. The published research noted limitations such as surveying only one college and gathering feedback from a small percentage of faculty.

Practical Lessons Learned

Owing to time constraints, we did not conduct a pilot study. Hindsight tells us it may have been valuable to do so. Creswell (2012) notes that piloting “helps determine that individuals in the sample are capable of completing the survey and can understand the questions” (p. 390). Feedback from the pilot study may have contributed to different questions, design, or at the very least, confirmation that we were on the right track. Also, this would have allowed us to get feedback from our participants and determine whether an online survey was the most appropriate design tool to use.

We realized early in our research that a mixed-methods survey needed to be carefully worded. Designing objective survey questions can be challenging because we not only wanted to gain specific data but also wanted to ensure objectivity. It is very easy to word questions so that respondents answer the way you hope they will, which we obviously wanted to avoid. For example, our assumption that most instructors had encountered some form of bullying caused our first set of questions to lean toward eliciting negative answers. We realized that these questions did not allow us to objectively capture an observer's point of view because we had assumed respondents had been cyberbullied. Hence, we had to consistently re-focus on our research questions to ensure our survey questions were objectively worded. Our first lesson learned was to ensure objectivity as much as possible. We had to rephrase the questions many times before we felt they were objective, and we also realized that, however much we revised, survey questions can perhaps never be 100% objective.

We felt it was very important to provide a definition of cyberbullying because bullying can be defined in many ways, and not everyone has the same definition. Our hope was that a shared meaning would allow the respondents to answer the questions using that definition as a reference. Our second lesson learned was the importance of having respondents work from the same definition so they could answer from that common point.

A high survey response rate is very important in research “so that researchers can have confidence in generalizing the results to the population under study” (Creswell, 2012, p. 390). The higher the rate, the more valid the data. It can be frustrating to get a low response rate because it raises the question of how reliable the data are. Our response rate was 20%, which is average. We sent out the survey a couple of times and asked directors to remind their faculty to take it. Even a response rate of 20% leaves one questioning the data: it made us wonder whether only those who had been bullied replied, or whether instructors were concerned that the survey was not really anonymous. Our third lesson learned was the realization that not everyone was going to answer the survey, that this was out of our control, and that all we could do was remind faculty to respond by a certain time and then see what happened.

We felt the mixed survey approach was the best way to obtain data from a large number of online instructors residing across the United States. However, we did contemplate other methods, such as phone interviews. Because the faculty were so geographically spread out, we quickly decided upon the survey method. The fourth lesson learned was to examine all the possible ways of obtaining our data and to select the best approach for the study.

Our population may have been too limited. It would have been helpful to have data from other colleges at the university, but because of the permissions required, we focused only on the College of Management and Technology. Therefore, we could not generalize our results and say they were reflective of the whole university. It would have been interesting to gather data from several colleges and do a comparison. The question that arose was the following: Is our study too limited to represent the whole university? Looking back, it may have been worth taking more time to obtain permissions from other colleges so we could say the data were reflective of the whole university. So, our fifth lesson learned was the importance of thinking through the respondent population and ensuring the study can be representative of the whole population rather than some of it.

Conclusion

The mixed-methods survey tool has many strengths and weaknesses. Upon reflection, we now realize the importance of evaluating a chosen research method thoroughly before implementing it. Since this study, we have worked together on several other studies and now begin our research by fully evaluating the effectiveness of many tools before selecting the appropriate one. We do a “what went well and what went wrong” evaluation after each research project; this reflective process is a vital part of improving our own skills for the next study. It is important to take time to craft objective questions and review them several times to ensure that bias does not exist.

In addition, it is important to understand who our research participants are. For example, one of us had the experience of creating a survey for an organization designed to identify the level of frontline employee satisfaction. Although the HR department and the executive board reviewed and signed off on the survey and said it was clear and objective, it was found afterward that a majority of the participants did not understand many of the questions; a large percentage of them spoke English as a second language. We discovered that we should have had someone familiar with the language and cultures of our sample review our questions and adjust them so they would be understandable to participants. Also, having the survey translated into the primary language of participants would have made the results more meaningful.

Finally, we learned the importance of surveying larger, and several, populations so that the collected data clearly represent the views of the faculty. Often, a larger sample size is necessary for generalization.

Every research project we undertake in the future will be based on what worked as well as what didn’t work. The beauty of that is that even if something doesn’t work, we can look at it and improve it for the future. If we are willing to do that, then any setback has served a useful purpose. It is easy to get discouraged when something doesn’t work, but the perspective of learning from it and building on it makes it all worth it going forward.

Appendix 1

Cyberbullying Survey
Demographic Information

Please check one of the following:

  • Are you
    • Male
    • Female
  • Are you a
    • Contributing Faculty
    • Core Faculty
  • Total years teaching online:
    • 0–5
    • 6–10
    • 11–15
    • 16–20
    • 21+
  • Total years teaching on ground:
    • 0–5
    • 6–10
    • 11–15
    • 16–20
    • 21+
  • Select the category in which your year of birth appears:
    • 1925–1945
    • 1946–1964
    • 1965–1981
    • After 1981
Survey Questions
  • Based on the National Crime Prevention Council (2010) definition of cyberbullying (“the use of the Internet, cell phones, or other devices to send or post text or images intended to hurt or embarrass another person.”), have you ever been cyberbullied by a student?
    • Yes
    • No
    • Not Sure

    If yes, describe an experience and explain how the situation was handled:

    __________________________________________________________

  • Who addressed the situation?
    • Myself
    • My program director
    • Other
    • No one
  • Do you feel the person who addressed the situation handled it effectively?
    • Yes
    • No
    • Somewhat

    If yes, why? If no, how would you like to have seen it handled?

    __________________________________________________________

  • How many times in your online teaching career have you been cyberbullied by a student?
    • 1
    • 2–5
    • 6–10
    • more than 10
  • Do you think students cyberbullying instructors is occurring? (Rate from 1 = Not at all to 5 = Very much so)
    • 1 (Not at all)
    • 2
    • 3
    • 4
    • 5 (Very much so)
  • Do you feel there are resources available to help instructors properly handle a cyberbullying situation?
    • Yes
    • No
    • Don’t know

    If yes, what resources are available?

    __________________________________________________________

  • What resources do you think need to be in place to handle a cyberbully?

    __________________________________________________________

  • Have you ever been cyberbullied and not taken action?
    • Yes
    • No

    If yes, please explain why no action was taken: __________________________________________________________

    What are barriers to reporting cyberbullying to the appropriate authorities? __________________________________________________________

  • Have you known of other online faculty who may have been cyberbullied?
    • Yes
    • No

    If yes, please explain the circumstance as you remember it:

    __________________________________________________________

  • Other comments relating to cyberbullying of online faculty:

    __________________________________________________________

Exercises and Discussion Questions

  • The researchers selected a mixed-methods survey design. What other types of methods may have been suitable for this study? Explain.
  • What may have been the implications for this study if a definition of cyberbullying had not been provided? Could the outcomes have been different?
  • How might the research team have captured a larger pool of respondents?

Further Reading

Brannen, J. (2005). Mixing methods: The entry of qualitative and quantitative approaches into the research process. International Journal of Social Research Methodology, 8, 173–184. doi:10.1080/13645570500154642
Creswell, J. W. (2007). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Pearson Education.
Jick, T. (1979). Mixing qualitative and quantitative methods: Triangulation in action. Administrative Science Quarterly, 24, 602–611. doi:10.2307/2392366
Sackett, D. L., & Wennberg, J. E. (1997). Choosing the best research design for each question. BMJ: British Medical Journal, 315, 1636.
Smith, G., Minor, M., & Brashen, H. (2014). Cyberbullying in higher education: Implications and solutions. Journal of Educational Research and Practice, 4, 50–60. doi:10.5590/JERAP.2014.04.1
Yin, R. K. (2003). Applications of case study research (Applied social research methods series, 3rd ed.). Thousand Oaks, CA: SAGE.

References

Creswell, J. W. (2007). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Upper Saddle River, NJ: Pearson Education.
Englander, E., Mills, E., & McCoy, M. (2009). Cyberbullying and information exposure: User-generated content in post-secondary education. International Journal of Contemporary Sociology, 46, 213–230.
Llewellyn, C. (2008). Cyberbullying of teachers: A growing problem for schools? Teaching Times. Retrieved from https://www.teachingtimes.com/articles/cyber-bullying-teachers.htm
Minor, M., Smith, G., & Brashen, H. (2013). Cyberbullying in higher education. Journal of Educational Research and Practice, 3, 15–29. doi:10.5590/JERAP.2013.03.1.02
National Crime Prevention Council. (2010). What is cyberbullying? Retrieved from https://www.ncpc.org/resources/cyberbullying/what-is-cyberbullying/
Sue, V. M., & Ritter, L. A. (2012). Conducting online surveys (2nd ed.). London, England: SAGE.