Epistemology and Ethnography in Health Systems Research

Abstract

When researchers decide on the most appropriate research method and data collection procedures, they have to consider not only their research questions but also their assumptions about what constitutes evidence. Making explicit the underlying epistemology at the start of a research project can help researchers identify the most apt method and can assist in illuminating the limitations of proposed data collection methods. In this case study, we describe ethnographic research that was conducted to investigate the organization of care for suicide attempters in a hospital. The case study is used to illustrate how epistemology determines method and how epistemology is made visible through data collection procedures. The case study also illustrates how ethnographic research can be used in health systems research.

Learning Outcomes

By the end of this case, students should be able to

  • Describe how researchers move from formulating research questions to deciding on the methodology and data collection procedures
  • Explain how epistemology influences the choice of research method and data collection procedures
  • Differentiate between validity, reliability, and trustworthiness
  • Discuss how research findings are limited by the underlying epistemology and data collection procedures

Project Overview and Context

Most research begins with identifying a research problem and framing research questions. But this initial step quickly gives way to difficult decisions about what research method to use, what kinds of data to collect, and how to organize and make sense of the data to yield meaningful answers to the research questions. In making these methodological decisions, a researcher adopts a particular stance toward knowledge and makes certain assumptions about what constitutes evidence, truth, and facts. Epistemology is the study of the origin, nature, methods, and limits of human knowledge; it is thus concerned with how knowledge is created and evaluated and with the assumptions that underlie the research process. In this case study, we describe research undertaken by the first author (J.B.) to investigate the organization of care for suicide attempters in a South African hospital. The case narrative is used to explore some of the difficult epistemological decisions researchers have to make about data collection procedures when choosing an appropriate methodology to answer their research questions. The narrative of the research process is presented below as a first-person account as it was experienced by J.B.

Background to the Research Problem

Suicide is a serious public health problem; globally, 800,000 people die by suicide each year (World Health Organization [WHO], 2014), with the number predicted to rise to 1.53 million by 2020 (Bertolote & Fleischmann, 2009). It is estimated that as many as 75% of suicides occur in low- and middle-income countries (WHO, 2014). For every completed suicide, there are an estimated 20 suicide attempts (WHO, 2014). Individuals who attempt suicide are more likely than the general population to re-engage in self-injurious behavior and are 20 to 30 times more likely to die by suicide (Hawton, Zahl, & Weatherall, 2003; Kapur et al., 2005; Owens, Horrocks, & House, 2002). Hospital presentations following suicide attempts are thus an important opportunity to provide targeted interventions to reduce the risk of future incidents of suicidal behavior in this well-delineated group of patients who are at risk of eventual death by suicide (Arensman, Corcoran, & Fitzgerald, 2011).

Very little research has been conducted in low- and middle-income countries to investigate the organization of care for suicide attempters in hospitals where medical resources are scarce. It was within this context that colleagues from the Psychology Department at Stellenbosch University and the Department of Psychiatry and Mental Health at the University of Cape Town set out to investigate the procedures and practices in place to respond to suicide attempters in a large urban public hospital in South Africa. The researchers were interested not only in documenting the quality of care received by suicide attempters but also in assessing what opportunities there might be for brief hospital-based interventions to reduce the risk of future suicidal behavior among this group of patients.

Moving from Research Questions to Research Method

We started by clearly stating our research questions. These were as follows:

  • What procedures and practices are in place to respond to suicide attempters in the hospital?
  • What is the quality of care received by suicide attempters?
  • What opportunities are there for hospital-based interventions to reduce the risk of repetition of suicidal behavior among suicide attempters?

Although we were clear about our research questions, we were initially not certain which research method would best allow us to answer them. We knew we wanted to produce what the anthropologist Geertz (1973) has called a “thick description” (i.e., a rich, detailed, nuanced description) of the medical care received by suicide attempters in a South African hospital and to identify opportunities for intervention, but how would we achieve this and what data would we need to collect to answer our research questions? In trying to decide what kinds of data to collect, we considered what different types of data might be available and how we might collect and analyze them. This led us to ask ourselves the following questions, drawing on our knowledge of different research methods, different kinds of data, different ways of collecting data, and the limitations of each:

  • Should we try to generate hypotheses that could be empirically tested?
  • Should we devise standards by which to define and quantify the “quality” of care received by patients in the hospital?
  • What could we learn by interviewing medical staff about their practices?
  • What could we learn by talking to suicide attempters and asking them about their experience of being admitted to the hospital?
  • If we did use interviews, what would we ask and how could we be sure that hospital staff would accurately describe the procedures or that patients would be objective enough to give accurate accounts of what happened to them?
  • Would informal conversations yield better data than structured interviews?
  • If we did use interviews, how would we decide whom to interview?
  • Would it be more objective and reliable if we asked medical staff and patients to fill out a standardized questionnaire?
  • Should we look for a checklist or measuring instrument to quantify quality of care?
  • What could we learn by collecting video footage of how suicide attempters are treated?
  • Would putting a camera in the hospital change the practices of the medical staff?
  • Would it be possible for us to observe practices in the hospital firsthand and form our own account of how things are done?
  • Would it be helpful to spend time in the hospital, watching and making notes of what we observed?
  • If we did decide to observe practices ourselves, what would we focus on and how would we record our observations in an objective way?
  • How would we organize and make sense of the data we collected?

These were not easy questions to answer. Although we had identified our research questions, it was not immediately apparent what kinds of data we needed to collect and how these could be collected to generate valid, reliable, or trustworthy findings. In research, a distinction is made between validity, reliability, and trustworthiness. Data are said to be valid if they correspond accurately to the real world—in other words, if the data accurately reflect or measure the concept or construct. To decide whether data are valid, you can ask yourself, “Am I measuring what I think I am measuring?” Data are said to be reliable if repeated measures yield the same results. To determine whether data are reliable, you can ask yourself, “If two people measured this concept at different times, would they obtain the same measurements?” Trustworthiness refers to the extent to which the findings that follow from the data can be applied to other contexts (in qualitative research, this is sometimes called transferability). To determine the trustworthiness of research findings, you can ask yourself, “Do these findings apply only to this setting or can they be generalized to other settings?”
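
To make the idea of reliability concrete, consider inter-rater agreement. The sketch below (written in Python purely for illustration; it was not part of the study, and the coder labels and ratings are hypothetical) computes Cohen's kappa, a chance-corrected index of agreement between two researchers who independently code the same observations.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Chance-corrected agreement between two raters who coded the same items."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        categories = set(freq_a) | set(freq_b)
        expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
        return (observed - expected) / (1 - expected)

    # Hypothetical example: two researchers independently rate ten case notes
    # as "adequate" or "inadequate" descriptions of the care a patient received.
    coder_1 = ["adequate", "adequate", "inadequate", "adequate", "inadequate",
               "adequate", "adequate", "inadequate", "adequate", "adequate"]
    coder_2 = ["adequate", "inadequate", "inadequate", "adequate", "inadequate",
               "adequate", "adequate", "adequate", "adequate", "adequate"]
    print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")  # about 0.47 for these hypothetical ratings

A value near 1 suggests the two coders applied the categories consistently; a value near 0 suggests agreement no better than chance.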

It is not always easy for researchers to decide what data will enable them to answer their research questions. It is often equally difficult to determine what kinds of data will be valid, reliable, and/or trustworthy enough to generate meaningful answers to research questions. For example, in deciding whether to interview hospital staff, we had to consider whether these medical personnel would be honest with us in describing how they respond to suicide attempters. Do people always do what they say and say what they do? In considering whether to use video footage, we had to consider the ethics of putting cameras in a hospital and how we would obtain consent for this. When we considered observing medical staff providing care to suicide attempters, we had to think about how our presence in the hospital emergency rooms and wards might change medical staff’s practices and how staff might consciously or unconsciously perform for us rather than do what they always do. In considering the merits of interviewing suicide attempters, we had to consider whether patients’ psychological state might influence their subjective experience of care and what they chose to tell us about their experience. As we considered whether to use a psychometric instrument to measure “quality of care,” we had to think about how something as subjective as the experience of care might be objectively quantified. These are not only practical considerations; they are also epistemological issues because they require researchers to make explicit their beliefs about what constitutes valid data and what is considered evidence. Epistemology determines method, but similarly epistemology is made visible through method (Carter & Little, 2007).

Data Collection and Identifying Assumptions

It is easy to embark on research without realizing that there are a whole host of assumptions that are made about knowledge when choices are made to collect particular kinds of data. For example, if we decided to use only qualitative data generated by interviewing hospital staff, we would be making the following assumptions:

  • That people are always aware of what they do;
  • That people are able to accurately recall and describe their practices;
  • That what people tell us is an accurate reflection of reality.

Sometimes, it is helpful to make these underlying assumptions explicit at the outset of a research project because doing so can illuminate the limitations in the proposed data collection methods and help to ensure that the data collected will indeed generate answers to the research questions. The problem is compounded by the fact that researchers don’t always know what will be illuminated by the data they plan to collect; how can researchers know for sure at the outset what they will discover in their data? It is one thing to have a research hypothesis, and it is quite another thing to have decided at the start of a research project what findings the data will yield. In this context, it is helpful to keep in mind that “(m)ethod is constrained by and makes visible methodological and epistemic choices” (Carter & Little, 2007, p. 1316). This means that the data we choose to collect and the methods we choose to use to analyze these data reflect our assumptions about knowledge and evidence. But it is also true that the assumptions we make about knowledge and evidence (i.e., our epistemological stance) limit and restrict our choices about what kinds of data to collect and the methods we decide to utilize to analyze and make sense of our data.

To help us solve the problem of what kinds of data to collect, we needed to be very clear about the focus of our research. We went back to the research questions and tried to identify the unit of analysis. In research, the “unit of analysis” is the who or what that is being examined and investigated. Typically, in social science health research, the unit of analysis might be individuals, particular groups, social organizations, or social artifacts. We realized that our research focus was the organization of care for suicide attempters within the hospital and that the unit of analysis would be the institutional culture. This helped us to settle on an ethnographic approach as an apt research method.

Ethnographic Research

Ethnographic research is used to study culture and it can be loosely defined as the study of people in their own environment. The purpose of ethnographic research is to describe and understand how things are done in a particular setting and why they are done in this way. Ethnographers use a range of data collection methods, including participant observation, face-to-face interviewing, and the collection of cultural artifacts, such as documents, reports, and photographs. Ethnographic studies can be very useful in health psychology and health systems research as they have the potential to illuminate how the practices of people in particular settings (such as hospitals and clinics) are profoundly influenced by an array of psychological, social, economic, historic, and political factors. Ethnographic studies can help us to understand how cultural context shapes human behavior.

More specifically, we elected to conduct a critical organizational ethnography. Our study was an organizational ethnography because we were locating our study within a particular organization (the hospital). Our study was a critical ethnography because we were seeking to explore current practices with a view to identifying how the organization of care might be disrupted to improve the quality of care received by suicide attempters and hence advance the agenda of suicide prevention.

By adopting a critical ethnographic methodology, we located our study within the epistemology of constructionism. Constructionism rejects the idea that there is some ultimate objective reality that can be known or discovered through research. Instead, it contends that the world, and hence reality, is locally and iteratively constructed by people through their interactions and personal accounts of their experiences. By assuming the epistemological stance of constructionism, we made the following assumptions about knowledge and knowledge creation:

  • That there is no single objective knowable reality;
  • That there are multiple perspectives and experiences;
  • That knowledge is an expression of the language, values, and beliefs of particular groups and communities;
  • That knowledge is contextual and cannot be separated from the context in which it is created;
  • That people inhabit different socially constituted “realities,” and therefore, what is considered as truth may vary quite dramatically across cultures, time, and place and even for the same person in different contexts.

Within these assumptions, we explicitly acknowledged our own subjectivity as researchers and accepted that our research would produce only a partial account of practices in the hospital and that this account could, at best, be only more or less useful. We accepted that our research would not meet standards of generalizability.

Data Collection Procedures

We realized that to produce a thick and trustworthy description of the institutional culture and the organization of care for suicide attempters within the hospital, we would need to have multiple sources of data which could be used to triangulate our findings. We decided to observe carefully how suicide attempters are treated in the hospital and then talk to hospital staff (doctors, nurses, psychologists, and social workers) about their practices. We decided to collect and analyze hospital documents, official protocols, and policies pertaining to the care of suicide attempters. We also elected to talk to the suicide attempters about their experiences of receiving care in the hospital and ask them about their support needs. We decided to keep field notes of our observations and to use semi-structured interviews with hospital staff and patients. We reasoned that semi-structured interviews would allow us to keep the focus on the organization of care but would also leave sufficient flexibility in the interview for participants to tell us about things we had not anticipated. We also thought that perhaps the interviews would afford us the opportunity to share and check our observations and understandings with participants as a way of validating what we had witnessed.

Once we had obtained the necessary ethical approval and institutional permission to conduct the study, we started to collect our data. Annemi Nel and I (J.B.) collected the data for this ethnographic study via observations in the hospital over an 8-month period. During this time, we attended ward rounds in the hospital, observed day-to-day practices in wards, and maintained a register of all self-harm patients treated in the hospital. In addition, we tracked suicide attempters through the hospital to document which wards they were admitted to, how long their stay in hospital was, and what interventions they received. Detailed field notes were kept of observations. Annemi and I regularly shared and discussed our field notes as a way of making sense of and validating our observations through triangulation. Hospital policies and protocols relating to treatment of self-harm patients were reviewed, and 37 medical personnel were interviewed about their knowledge, attitudes, and practices regarding the treatment of suicide attempters. In-depth semi-structured interviews were conducted with 80 suicide attempters admitted to the hospital. During these interviews, patients were asked to describe their experience of being in hospital and share their support needs and ideas about what they needed to reduce the risk of future self-harm. This process generated a huge amount of qualitative data; we had pages and pages of field notes and hours and hours of transcribed interviews. We also had the register of all suicide attempters (n = 230) admitted to the hospital during the study period, and we knew how long they had been in hospital and what medical care they had received.
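
Alongside the qualitative material, the register lent itself to simple descriptive summaries. The sketch below (a hypothetical illustration in Python; the field names and values are invented and are not the study's actual data) shows one way such a register might be tallied, for example admissions by ward, median length of stay, and the proportion of patients receiving a psychiatric referral.

    from collections import Counter
    from statistics import median

    # Hypothetical register entries; the real register recorded ward admissions,
    # length of stay, and interventions for 230 patients.
    register = [
        {"ward": "emergency", "days_in_hospital": 1, "interventions": ["medical stabilisation"]},
        {"ward": "medical", "days_in_hospital": 3, "interventions": ["medical stabilisation", "psychiatric referral"]},
        {"ward": "medical", "days_in_hospital": 2, "interventions": ["medical stabilisation"]},
        {"ward": "psychiatric", "days_in_hospital": 7, "interventions": ["psychiatric assessment"]},
    ]

    admissions_by_ward = Counter(entry["ward"] for entry in register)
    median_stay = median(entry["days_in_hospital"] for entry in register)
    referral_rate = sum("psychiatric referral" in e["interventions"] for e in register) / len(register)

    print(admissions_by_ward)
    print(f"Median stay: {median_stay} days; psychiatric referral rate: {referral_rate:.0%}")

Summaries of this kind complement, rather than replace, the thick description generated from field notes and interviews.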

Initially, it was difficult to gain access to the hospital and secure interviews. Medical staff were suspicious of our motives and did not trust us because we were outsiders. Medical staff were also busy and often did not have time to be interviewed for our study. It took time and patience to gain access and to form relationships with hospital staff so that they would engage with us and allow us to witness day-to-day practices. We also had difficulty keeping track of the suicide attempters and making contact with them before discharge, because the study was conducted in a busy hospital where patients are discharged quickly owing to bed pressure and the large numbers of patients presenting at the hospital. We also faced the problem of what to do with contradictions in our data; on occasion, what we observed was not the same as what was reported by patients or medical staff. The biggest difficulty we encountered was how to manage, keep track of, and make sense of the vast amount of data we collected over the 8-month period; the longer we stayed in the system collecting data, the more contradictions we encountered and the more difficult it became to create an accurate and coherent account of what we found. We also encountered difficulties in deciding how to represent and present our data, what story to tell about our work, and what examples to use to illustrate it.

Discussion

What did our data tell us? The data we collected allowed us to identify how the organization of care within the hospital results in a lost opportunity for suicide prevention and reflects macro-level structural and sociocultural factors which shape the delivery of psychological care within the country’s health care system (Bantjes et al., 2016). The data also allowed us to draw attention to the need to reconsider current practices within the hospital. To this extent, the data were helpful in providing some answers to the research questions. Despite the utility of our data, there were things that the data could not tell us. For example, they could not tell us how generalizable our findings were or to what extent they reflected the experiences of all suicide attempters admitted to the hospital. Our data were collected at a particular time, in a very particular context, with very particular data collection methods. It is possible that research done at other times, in other institutions, using other methods might reveal different findings. As such, it was debatable whether we had produced reliable, valid, or trustworthy findings. The data did not tell us how many patients had bad experiences of care or whether the care they did receive had any impact on future suicidal behavior. The data did not tell us what needed to be done next or how we should go about disrupting practices in the hospital to facilitate a reorganization of care. Similarly, the data did not help us identify what would be a more appropriate way to organize care for suicide attempters. Of course, no data can do everything; all data have limitations. Nonetheless, it is important for researchers to make explicit the limitations of their data and to guard against making exaggerated claims which are not supported by the data they have collected. To recognize the limitations of their data, researchers need to be cognizant of the epistemology underlying their research methodology and data collection procedures.

Conclusion

In hindsight, was an ethnographic research method with multiple sources of data collected over a protracted period the most appropriate method for this study? Clearly, the methodology and data collection procedures that were used to study the organization of care for suicide attempters allowed the researchers to investigate and describe practices in the hospital. To this extent, they were a useful first step in exploring questions about the quality of care received by patients and identifying the need to disrupt current practices in the hospital. The method did, however, have limitations, and other data would have illuminated other, perhaps more helpful, findings.

Exercises and Discussion Questions

  • What is the value of ethnographic research and what kinds of research problems might be suitable for this methodology?
  • Use examples to explain the difference between validity, reliability, and trustworthiness.
  • Why is it important for researchers to make explicit the epistemological starting points of their research?
  • Use an example to illustrate how epistemology is made visible through research method and data collection procedures.

Further Reading

Aamodt, A. M. (1991). Ethnography and epistemology: Generating nursing knowledge. In J. Morse (Ed.), Qualitative nursing research: A contemporary dialogue (pp. 40–53). Thousand Oaks, CA: SAGE.
Becker, H. S. (1996). The epistemology of qualitative research. In R. A. Shweder, A. Colby, & R. Jessor (Eds.), Ethnography and human development: Context and meaning in social inquiry (pp. 53–71). Chicago, IL: University of Chicago Press.
Carter, S. M., & Little, M. (2007). Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research. Qualitative Health Research, 17, 1316–1328.

References

Arensman, E., Corcoran, P., & Fitzgerald, A. P. (2011). Deliberate self-harm: Extent of the problem and prediction of repetition. In R. C. O’Connor, S. Platt, & J. Gordon (Eds.), International handbook of suicide prevention: Research, policy and practice (pp. 119–132). Hoboken, NJ: Wiley.
Bantjes, J., Nel, A., Louw, K. A., Frenkel, L., Benjamin, E., & Lewis, I. (2016). “This place is making me more depressed”: The organisation of care for suicide attempters in a South African hospital. Journal of Health Psychology. Advance online publication. doi:http://dx.doi.org/10.1177/1359105316628744.
Bertolote, J. M., & Fleischmann, A. (2009). A global perspective on the magnitude of suicide mortality. In D. Wasserman & C. Wasserman (Eds.), Oxford textbook of suicidology and suicide prevention: A global perspective (pp. 91–98). Oxford, UK: Oxford University Press.
Carter, S. M., & Little, M. (2007). Justifying knowledge, justifying method, taking action: Epistemologies, methodologies, and methods in qualitative research. Qualitative Health Research, 17, 1316–1328.
Geertz, C. (1973). The interpretation of cultures: Selected essays (vol. 5019). New York, NY: Basic Books.
Hawton, K., Zahl, D., & Weatherall, R. (2003). Suicide following deliberate self-harm: Long-term follow-up of patients who presented to a general hospital. The British Journal of Psychiatry, 182, 537–542.
Kapur, N., Cooper, J., Rodway, C., Kelly, J., Guthrie, E., & Mackway-Jones, K. (2005). Predicting the risk of repetition after self harm: Cohort study. British Medical Journal, 330, 394–395.
Owens, D., Horrocks, J., & House, A. (2002). Fatal and non-fatal repetition of self-harm: Systematic review. The British Journal of Psychiatry, 181, 193–199.
Platt, S., Bille-Brahe, U., Kerkhof, A., Schmidtke, A., Bjerke, T., Crepet, P., … Sampaio Faria, J. (1992). Parasuicide in Europe: The WHO/EURO multicentre study on parasuicide. I. Introduction and preliminary analysis for 1989. Acta Psychiatrica Scandinavica, 85, 97–104.
World Health Organization. (2012). Suicide topical overview. Retrieved from http://www.who.int/features/qa/24/en/index.html