Mental Health Consumer Voice Through an Ethical Filter—A Mixed-Methods Case Study in a Rural Setting

Abstract

The focus of this research was to evaluate the impact of a specialized inpatient mental health service in rural South Australia. A mixed-methods design included the collection of qualitative data via phone and face-to-face interviews with consumers, carers, non-government community partners, and the interprofessional health workforce affiliated with the service. An online survey derived from the interview data was developed, offering consumers an anonymous feedback option and including a standardized measure of health status. Quantitative data included de-identified records of mental health inpatient admissions, transfers, inpatient days, diagnosis, and rurality. This research methodology case study summarizes the ethics approval process, which included Aboriginal Health Research Ethics Committee approval and two additional university Human Research Ethics Committees (HRECs), and considers the methodological implications of the need to protect patient and stakeholder privacy while also attempting to minimize selection bias. The complexities of contract research arrangements and how these may jeopardize collaborative research partnerships will be discussed, as will the conflicting priorities of university and government in regard to the dissemination of research findings, timeframes, and research rigor. This research case study has a rural context; however, the experience and conclusions are generalizable.

Learning Outcomes

By the end of this case, students should be able to

  • Know how to achieve research ethics approval from multiple ethics committees representing Indigenous people, public health, and university perspectives
  • Know how to balance the academic rigor expected of the evaluators with the service needs of the commissioner of the evaluation
  • Understand the competing demands of consumer and consumer interest groups, funders of a health service, and the authors’ desire to demonstrate what the new service has achieved
  • Understand interprofessional teamwork competencies and their relevance to a community-based research team
  • Consider the advantages and challenges of including stakeholders and research team members with varied perspectives and priorities

Specialist Mental Health Inpatient Care in Rural Communities

In 2014, Country Health South Australia (CHSA) opened Integrated Mental Health Inpatient Units (IMHIUs) in two rural general hospitals. These units were open facilities, each consisting of six fully self-contained bedrooms and spacious multipurpose living/recreational areas. Specialist acute care at the IMHIUs was designed to be voluntary but could include a period under a non-voluntary Inpatient Treatment Order as set out in the Mental Health Act 2009 (Government of South Australia, 2009). The IMHIUs were set up to complement other CHSA community mental health services and smaller country hospitals, which provide low-complexity mental health admissions for voluntary consumers; they were not intended to replace all consumer transfers to metropolitan hospitals. The IMHIUs were a response to an identified need for specialized community-based mental health care in rural communities.

The IMHIUs aimed to meet acute care needs of country mental health service consumers, strengthen existing community-based services, and enable greater equity of access for rural South Australians. A key benefit of the new facilities was to reduce the need to travel to Adelaide, allowing consumers to stay closer to home and to their local support networks while receiving specialist, inpatient mental health care.

This case study describes the methodological considerations of a health service evaluation conducted by a Psychology academic, a medical (rural general practitioner [GP]) academic, and an Aboriginal researcher working on her own country (Barngarla) and with experience in youth mental health/homelessness and family violence issues. All researchers were employed by The University of Adelaide Rural Clinical School and brought a mixture of psychology, health service delivery, and Aboriginal knowledge to the evaluation. The evaluation was in response to an Expression of Interest (EoI) announced in November 2013 by CHSA, which is the only rural division of the South Australian State Government. The proposed scope of work aligned with the EoI “scope and deliverables” and is reproduced below. The evaluation applied health evaluation principles fundamental to a new service, in this case the IMHIUs, namely:

  • Process—describing the establishment and initial 12 months of function;
  • Impact—particularly consumer, carer, staff, and management experience of the new service delivery model;
  • Outcome—some of which was described by consumers and which is apparent in changes in separation data from the previous 3 years. (Hawe, Degeling, Hall, & Brierley, 1990)

The Model of Care (MOC) was evaluated by the following:

  • Individual consumer and carer interviews and, subsequently, an online questionnaire, offered to all consumers admitted to IMHIU beds and their carers.
  • A review of CHSA planning documents and the implementation of the initiative, which informed the MOC expectations.
  • A cost analysis utilizing International Classification of Diseases, 10th Revision (ICD-10) discharge data, de-identified by CHSA prior to release to the evaluators.

System Impact was evaluated by the following:

  • Individual and small-group interviews with CHSA Community Mental Health staff, focusing upon the process of admission and recovery and rehabilitation outcomes.
  • Interviews with stakeholder community services, hospital emergency departments (EDs), and inpatient services about early trends in the frequency of consumer hospitalization and transfer to metropolitan mental health services.

The evaluation method was designed to become incorporated into routine processes of CHSA health services, including the Kessler Psychological Distress Scale (K10; Slade, 2001) and EuroQol (Devlin & Krabbe, 2013). The intention was to implement progressive education of CHSA staff once initial testing had confirmed the functioning of the evaluation process. This tested method would then become available for implementation in future IMHIUs.
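The K10 referred to above is a 10-item self-report measure, with each item rated from 1 (“none of the time”) to 5 (“all of the time”) and summed to a total between 10 and 50. As a minimal sketch of how such scoring might be automated within routine evaluation processes, the Python function below sums the items and attaches a distress band; the cut-offs shown follow one commonly used Australian convention and are illustrative only, not a clinical protocol.

```python
# Minimal scoring sketch for the K10, assuming the standard 10 items,
# each self-rated 1 ("none of the time") to 5 ("all of the time").
# The distress bands below follow one commonly used Australian
# convention and are illustrative only, not a clinical protocol.

def score_k10(responses: list[int]) -> tuple[int, str]:
    """Sum the 10 item ratings (total range 10-50) and attach a band."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("K10 expects exactly 10 responses rated 1-5")
    total = sum(responses)
    if total <= 15:
        band = "low distress"
    elif total <= 21:
        band = "moderate distress"
    elif total <= 29:
        band = "high distress"
    else:
        band = "very high distress"
    return total, band

# Example: a consumer rating most items "a little of the time"
print(score_k10([2, 2, 3, 2, 2, 1, 2, 3, 2, 2]))  # (21, 'moderate distress')
```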

Evaluation Versus Research

Research is the systematic investigation into and study of materials and sources to establish facts and reach new conclusions (Punch, 2013). Research differs from evaluation, which is usually focused on improving a specific program (Fain, 2005). Research is commonly described in terms of design, namely, qualitative, quantitative, or mixed methods. The purpose of each of these research designs is different, but generally research seeks to generate new theories or test existing theories that can be generalized to other populations and settings. Evaluation might be seen as looking backward for the purposes of choosing future pathways. Research questions can complement evaluation efforts, but may also involve multiple sources and seek to be generalizable and extend beyond the foci of a service evaluation.

Evaluation is commonly a required activity for government to justify funding allocations and address quality service standards (Department of Treasury and Finance, 2014). This case study involved a research team accustomed not only to designing research projects that test theory but also to the operational functions that an evaluation is required to serve. Ideally, an evaluation is designed at the same time a service is designed, to allow sufficient time to consider methods of information collection and ensure that service objectives are measurable or at least can be assessed. In the case of evaluating changes in attitudes or behavior, it is ideal to obtain baseline data, collected prior to or at least from the onset of a new service. Appointing an evaluation team through competitive tender after a service has commenced adds to the risk of losing valuable information, and time pressures result in compromises in methodology and practice.

Research Practicalities

We set out to evaluate one service model by examining two case studies (two rural hospital-based IMHIUs), which at the time were the only examples in operation in South Australia. Case study research is an in-depth exploration from multiple perspectives of the complexity and uniqueness of a particular project, policy, institution, program, or system in a real-life context (Simons, 2009). Stake (1995) describes case study as not a methodological choice but rather a choice of what is to be studied. A case study comprises two elements: (1) a subject of the case study and (2) an object of the study (Thomas, 2011). In this instance, the object is the model of IMHIU care and the subject is the financial and social impact on rural South Australians living with mental illness.

Mixed-methods research is more than the sum of its qualitative and quantitative elements: the strengths of one approach can compensate for the limitations of the other (Heyvaert, Hannes, Maes, & Onghena, 2013). In this case, for example, we expected that numeric data obtained from CHSA would inform us about whether financial and health system resources were altered through the introduction of the IMHIUs. Interviews with stakeholders provided the opportunity to explore these findings in greater detail and obtain personal experiences to define costs and benefits. Similarly, interviews also provided us with scenarios and checklists that we were able to incorporate into the online survey. There was constant cross-checking to see whether our findings, derived from qualitative and quantitative methods, were in alignment or generated new lines of enquiry. Creswell (2013) provides more information about mixed methods in a popular research textbook, now in its fourth edition.

Method in Action

In-depth semi-structured interviews were conducted by E.R. with 33 participants: 11 consumers post-discharge, 10 carers, six IMHIU staff classified as clinical (nursing, occupational therapy), and six non-clinical staff (support workers, managers). Half of the consumers and carers and two-thirds of the IMHIU staff were female. It should be noted that the majority of the interviews in this case study were conducted by an experienced interviewer (E.R.) who had more than 20 years of experience working and communicating with people with diverse abilities and social backgrounds. Some interviews were conducted by an Aboriginal researcher, and this was determined at the time of referral to participate in the evaluation. If the participant indicated that they were Aboriginal or that they preferred to talk with an Aboriginal researcher, they were automatically referred to our Aboriginal team member. Cultural knowledge is a precious resource that enriches a research team’s ability to ask important questions and understand people’s stories. Novice researchers are advised that involving two interviewers to share the interview role can provide alternate exploratory styles and potentially uncover alternate themes. Using a single interviewer can simplify the group dynamic by avoiding the imbalance created by two interviewers and one participant. In the case of home visits or interviews conducted away from public spaces, two interviewers should jointly conduct the interview to maximize researcher and participant safety.

Consumers, prior to discharge from the IMHIUs, were invited by an IMHIU mental health nurse to participate in an evaluation interview and also asked whether they approved of their nominated carer being contacted. Nurses excluded consumers who they thought might be distressed by participation. Semi-structured questions were asked under eight topics:

  • Perceptions of the new service;
  • Admission processes;
  • Inpatient care;
  • Discharge planning;
  • Location and community engagement;
  • Facilities;
  • Involvement in decision-making;
  • Personal experiences and suggestions for service improvement.

Interviews were audio-recorded, transcribed, and coded using NVivo 10 software (QSR International, 2012). The researcher (E.R.) undertook thematic analysis (Braun, Clarke, & Terry, 2014), which involved reading and re-reading typed interview responses, word by word, and then undertaking memo-writing to begin formulating general impressions about participant responses. Memos and responses were further scrutinized before finalizing the dominant themes and the participant quotes chosen to demonstrate the ideas within each theme. One person conducted the interviews and analyses. It is increasingly common to involve at least one other coder in the analysis process, and this is a recommendation for future researchers embarking upon thematic analysis.
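For readers without access to NVivo, the fragment below is a minimal, hypothetical illustration of one mechanical step in this process: grouping researcher-coded interview excerpts under candidate themes and counting them. The codes and quotes are invented for illustration; the actual analysis was conducted in NVivo 10.

```python
# Hypothetical illustration of one mechanical step in thematic analysis:
# grouping researcher-coded interview excerpts under candidate themes.
# The actual analysis was conducted in NVivo 10; the codes and quotes
# below are invented for illustration.
from collections import defaultdict

# (participant_id, assigned_code, excerpt) tuples produced by manual coding
coded_excerpts = [
    ("C01", "closer_to_home", "I could stay near my family this time."),
    ("C02", "admission_process", "Getting admitted took longer than I hoped."),
    ("C03", "closer_to_home", "My carer could visit every day."),
    ("K01", "communication", "Staff were friendly, but decisions were made without me."),
]

themes = defaultdict(list)
for participant, code, excerpt in coded_excerpts:
    themes[code].append(f"{participant}: {excerpt}")

# Report candidate themes, most frequently coded first
for code, quotes in sorted(themes.items(), key=lambda kv: -len(kv[1])):
    print(f"{code} ({len(quotes)} excerpts)")
    for quote in quotes:
        print(f"  - {quote}")
```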

Paradigm of Inquiry

A critical perspective posits that a reality exists that is influenced by social, political, economic, cultural, ethnic, and gender factors; that researcher and participant are interactively linked; and that values exist and influence interpretation and therefore findings. As a research team employed by a rural clinical school, we approached this evaluation from a critical standpoint and challenged the service to demonstrate what was working, what was not working, and why (Greene & Caracelli, 1997).

The research team were appointed by CHSA to independently evaluate the IMHIUs. We were contracted to provide a valid external evaluation and our expert and objective assessment, and not to be subservient to either the Reference Group or the Steering Group. Our research team ethos from the onset was to work in partnership and include the Reference Group committee and its individual members as important stakeholders who provided contextual, cultural, and process-relevant information. We found ourselves having to adapt the way we communicated, as peers or colleagues, to engage staff and managers in a non-judgmental manner to encourage openness and reduce fear of the “evaluators.” At times, it appeared as if this “friendly” and “can-do” manner was on the cusp of being exploited and possibly misinterpreted, as if we might be swayed toward reporting only CHSA perspectives. Fortunately, our research team included a university professor (J.N.) who was also a rural medical practitioner, well regarded by CHSA. J.N. set up private meetings between the research team and the Chair of the Reference Group, and this allowed us to express concerns that CHSA might try to control the evaluation and compromise the evaluators’ independence. This relationship-building exercise and open communication resulted in the Chair’s support, which helped to mediate potential conflict and avoid a “them-us” stand-off.

This mixed-methods evaluation involved the collection of qualitative and quantitative data. Service use data were obtained from CHSA and analyzed to provide a description of the number of IMHIU service users and the residential locations of consumers. Realist evaluation principles (Pawson, 2006) were incorporated in the qualitative evaluation design. Multiple data sources and methods, including ethnographic observations, semi-structured interviews, online surveys, and document reviews, were employed over a 9-month period. Data collection strategies were both pragmatic and reflexive, allowing the evaluation team to provide ongoing formative feedback to influence the development of the services. Consistent with realist evaluation methods, an interpretive analysis involving consideration of context–mechanism–outcome relationships (Pawson, 2006) was used to better understand the nature of the new IMHIU MOC, what factors and processes promote positive outcomes for key stakeholders, and what opportunities exist that have relevance to the local context and mental health policy environment. Realist evaluation methods have previously been used to evaluate public health services (Greenhalgh et al., 2009). The evaluation involved sub-group analyses (carers, consumers, staff, referring clinicians, community-based support services), with the purpose of identifying specific mechanisms and impacts of relevance to these groups (Pawson & Tilley, 1997).
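To illustrate the quantitative strand, the sketch below shows how de-identified admission records of the kind supplied by CHSA might be summarized by unit and rurality. The column names and values are invented assumptions, not the actual CHSA data schema.

```python
# Sketch of summarizing de-identified admission records of the kind
# supplied by CHSA. The column names and values are invented and do
# not reflect the actual CHSA data schema.
import pandas as pd

admissions = pd.DataFrame({
    "unit": ["IMHIU-A", "IMHIU-A", "IMHIU-B", "IMHIU-B", "IMHIU-B"],
    "rurality": ["outer regional", "remote", "outer regional",
                 "outer regional", "remote"],
    "inpatient_days": [6, 11, 4, 9, 14],
    "transferred_to_metro": [False, True, False, False, False],
})

# Service users, average length of stay, and metro transfers by sub-group
summary = admissions.groupby(["unit", "rurality"]).agg(
    consumers=("inpatient_days", "size"),
    mean_stay_days=("inpatient_days", "mean"),
    metro_transfers=("transferred_to_metro", "sum"),
)
print(summary)
```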

We designed and collected information that we thought might reveal what was working and what elements of the service needed adjustment. This line of enquiry, in the realist evaluation tradition, produced an escalating workload: the more thoroughly we investigated the impact and operational elements of the service, the further the work extended beyond what the original contract covered.

Triangulation describes the practice of collecting data from different sources to investigate whether findings converge. As the name suggests, three sources of data are compared to see whether different people and different types of data point to the same picture. Triangulation can occur with qualitative, quantitative, and mixed methods. For example, interviews with consumers, practitioners, and managers about the morale of a service may generate contrasting perspectives. A mixed-methods version of triangulation might be the collection of interview data, hospital stay data, and a measure of psychological distress to see whether a rural inpatient service promotes recovery more quickly than metropolitan inpatient services. As no evaluation method existed in pre-existing mental health services in South Australia, we drafted an interview guide and discussed interview comments iteratively to refine it and include emerging themes, and this formed the basis of the online quantitative survey. Triangulation was used to determine whether the IMHIUs had a positive impact on the local communities in which the services were located, as well as to identify strategies for improvement.
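The toy sketch below illustrates the triangulation logic in its simplest form: recording the direction of the finding from each of three sources and checking for convergence. The sources and directions shown are invented for illustration.

```python
# Toy illustration of triangulation logic: record the direction of the
# finding from each source and check for convergence. The sources and
# directions are invented for illustration.
findings = {
    "consumer interviews": "positive",   # e.g., themes of staying close to home
    "hospital stay data": "positive",    # e.g., fewer metro transfers
    "distress measure change": "mixed",  # e.g., improved for most, not all
}

directions = set(findings.values())
if directions == {"positive"}:
    print("Sources converge: consistent evidence of positive local impact.")
else:
    divergent = [src for src, d in findings.items() if d != "positive"]
    print("Sources diverge; new line of enquiry needed on:", divergent)
```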

Understanding Context and Culture

There is a long-standing debate about whether researchers and evaluators should keep a distance from the service that they are investigating. Realist evaluation and action research challenge this perspective and suggest advantages in a researcher spending time to relate to key stakeholders and better understand the culture within which the service operates (Pawson, Greenhalgh, Harvey, & Walshe, 2005). We took the role of observers: we attended staff meetings, spent time watching inpatient diversional therapy activities, and observed shift handovers and admission practices. Taking time to get to know staff and appreciate the day-to-day operations and culture of the two services being evaluated was useful when interpreting data. An onsite presence may also have helped the researchers build relationships with nursing staff, which assisted with the research process, including recruitment.

Recruitment of Consumers and Carers

Privacy legislation and research ethical standards can limit the sampling and recruitment methods available to evaluators and researchers. In this case, the need to reduce bias and include participants who fully represent the population of IMHIU consumers was tempered by the need to protect confidentiality and avoid overburdening someone with a known mental illness. The recruitment planning debate was enriched by the voiced perspectives of service providers, who advocated for the consumer and their duty of care (Department of Health, n.d.), alongside those of researchers, who sought academic rigor and reliable findings.

After negotiation, the agreed recruitment method involved IMHIU nursing staff, at the time of a patient’s discharge, inviting the consumer to participate in the evaluation. This process involved a standard verbal script as well as provision of a hard-copy information sheet. The treating nurse had the role of determining whether the consumer was well enough to participate in a research interview without being distressed. This is clearly flawed, as it essentially gave the nursing staff, whose services were in part being evaluated, the authority to exclude consumers from an opportunity to voice their positive or negative comments about the service. To counteract the selection bias, posters about the evaluation were displayed in the IMHIUs inviting individual referral directly to the research team or to the online survey. In the survey, consumers had an opportunity to provide their own contact details for follow-up communication by the research team.

In an effort to promote the constructive advantages of evaluation, the research team met with all staff to answer questions and discuss the aims and timeframes. J.N. and E.R. attended the two participating IMHIU sites at various times to maximize the opportunity for staff from different shifts to better understand the service improvement motives of the evaluation. It was at this time that the independence of the evaluators appeared most important. Although the research team was in partnership with CHSA, it was able to provide anonymity and was separate from the employment hierarchy. We hoped to impart the importance of recruitment and diversity of thought, and to empower nursing staff to recruit generously without fear of personal retribution in the event of a consumer providing negative feedback.

Meeting personally with nursing and managerial staff at the onset of the evaluation allowed us to identify the individuals most interested in the evaluation process and the merits of research. Champions were identified and regular contact was maintained to ensure staff were routinely reminded of the need to recruit and of the ethics processes for doing so. As a result of the evaluation, an orientation pack was developed that included a link to the consumer and carer feedback survey.

Aboriginal Community

The health of rural Aboriginal people is significantly worse than that of the general population, and prominent conditions in Aboriginal communities include mental illness, substance misuse, and grief and loss consequent upon their poor health. The two rural communities with an IMHIU each have an Aboriginal health service, and these health services were engaged to contribute to this evaluation. This evaluation contains carer, consumer, non-governmental organization (NGO) service provider, and CHSA staff perspectives. Evaluation of Aboriginal people’s experience was sought, and this engagement included Aboriginal Health Research Ethics Committee (AHREC) approval and the inclusion of Ms. Emma Richards, an Aboriginal researcher, as part of the evaluation team. Ms. Richards communicated with Aboriginal health service providers in the two regions and conducted interviews with consenting participants. This networking role involved the promotion of the evaluation and an invitation for people interested in participating to make contact with Ms. Richards. Cultural safety was a primary consideration, and although participants were asked, there was deliberately no pressure for any person to disclose their Aboriginality.

Ethics Protocols

Approval for the research was obtained from the AHREC and the SA Health Human Research Ethics Committee. Site-Specific Assessment approval was also required from SA Health; this is separate from the ethics requirement and confirms that the data we required were available from SA Health records.

The rights of people to consent to participation in research and the obligation to avoid harm or distress are of supreme importance. In Australia, the National Health and Medical Research Council governs research ethics and mechanisms that ensure university researchers consider all aspects and potential consequences of research and evaluations (National Health and Medical Research Council, Australian Research Council, & Australian Vice-Chancellors Committee, 2007).

It is now a worldwide practice for research to be reviewed by ethics committees. All research involving university staff and students must be reported to a university ethics committee, and researchers must also seek feedback and endorsement from committees affiliated with stakeholders involved in the research project. In South Australia, proposals to conduct health-related research involving Aboriginal people or communities need to be submitted to the AHREC even if approval has been or will be obtained from the researcher’s institution (Aboriginal Health Council of South Australia, 2017). A widely accepted standard is for any research related to Aboriginal people or issues to include at least one person who identifies as Aboriginal and has genuine connections with the relevant communities. Our team included Emma Richards, a Barngarla woman who has published work related to communication with Aboriginal people (Richards, Newbury, & Newbury, 2016). Feedback from the AHREC was consistent with the ethos of equity and access for all. Amendments included the incorporation of an opportunity for evaluation participants to meet with an Aboriginal researcher, clearly stated in words and in a recruitment flow chart. Greater detail was also required regarding an algorithm to determine when and how an interview would be suspended and/or abandoned in the event of participant distress.

The AHREC challenged us to consider the language and comprehension needs of people with low literacy. It is uncommon for researchers to assess the literacy skills of participants, and assumptions are made that challenge the principles of informed consent and inclusion of participants with varied perspectives (Montalvo & Larson, 2014). People with psychiatric illness may have impaired comprehension due to their illness or their prescribed medication (Revheim et al., 2014). University-educated health practitioners and researchers can underestimate the challenges of people with English as a second language or low literacy to engage in shared decision-making (Muscat et al., 2016).

Aboriginal people are among the most marginalized groups in Australian society, with the poorest health (Department of the Prime Minister and Cabinet, 2017) and low literacy levels (Johnson et al., 2014). AHREC feedback re-iterated the need for clear communication, and as a result, any uncommon words found in consent forms and surveys were replaced with plain language to ensure usability and comprehension. Some words were replaced with, or accompanied by, icons to simplify the information collection process and improve the likelihood of valid responses.

Steering Group and Reference Group Functions

It is common for government to establish both a Steering Group and a Reference Group to oversee a project. The Steering Group provides governance, and the Reference Group reports to the Steering Group and provides expertise to ensure objectives are met (Country Health SA Local Health Network, 2016a, 2016b). In this case study, the Reference Group included members external to the government, unlike the Steering Committee, whose members were all government employees. Our research team attended the monthly Reference Group meetings, and these provided a mechanism for researchers, government, and NGO service providers to share ideas and communicate interim findings. This formative evaluation (Dehar, Casswell, & Duignan, 1993) resulted in modifications to practices. For example, some staff reported concern about security measures and the de-escalation of aggressive events, and this resulted in a staff and management review of policy and investigation of simulation training.

A strength of collaborative interprofessional practice (IPP) is that diverse experience and opinions can produce better decisions and consumer outcomes (D’Amour, Ferrada-Videla, San Martin Rodriguez, & Beaulieu, 2005). The flip side is that a project that involves multiple stakeholders also has multiple agendas and perspectives about priorities, methods, and interpretations, which can produce more debate and delay progress. The Reference Group included people who had been involved and invested professionally in the design and implementation of the IMHIUs. The research team regularly grappled with how to present feedback and proposals in a constructive way and avoid alienating or offending. IPP competencies are increasingly included in university curricula, and they have relevance to the functions of a community-based research team. Communication and effective conflict resolution skills are most valuable when involved in collaborative research.

IPP competencies provide a framework for teamwork and are well described by Thistlethwaite et al. (2014); they include the following:

  • Values and ethics. Transparent discussion of shared values and priorities, identification of discrepancies and differences between team members and stakeholders, and acknowledgement of cultural knowledge and commitment to personal safety. Example: some people associate mental health with stigma and shame. Our team needed to acknowledge this and ensure that we protected the privacy and wellbeing of all participants and stakeholders.
  • Role clarity. The research process includes many different tasks, and a team is assembled to complete these. The allocation of roles and responsibilities, and the documentation of these, are important. A research team should provide supervision and encourage people to ask for help and to work on new tasks with a more experienced practitioner. Meet regularly with the team to review a project plan that includes milestone activities, dates, who is responsible for each task, and what each team member will do to support completion.
  • Respectful communication. The way we talk, write, and behave around other people provides the setting for either constructive or destructive communication. People from different disciplines can have unique terminology and approaches to problem solving. Similarly, participants and people of the general public will have their own language and communication style, and research teams must adapt to these. In the era of email and text messaging, it is important to remember that personal communication is interactive, and meeting face-to-face and talking with a person are important when building relationships.
  • Conflict resolution. The bringing together of a diverse group of stakeholders provides potential for a well-considered piece of research. Diverse teams also provide potential for conflict. Efforts to understand the priorities of each team member and common goals can help a team to acknowledge disputes and seek compromise. A respected, calm leader is a great asset when avoiding and managing conflict.
  • Reflective practice. Professional codes of conduct require us to reflect on our performance and constantly identify skill strengths and deficits. Reflection may involve journaling, supervision, mentoring, and regular awareness-raising activities that highlight how and why you are part of a team. The more we know, the more we become aware of the many things that we do not know.

Comparative Analyses of Concepts and Practices

When conducting and analyzing semi-structured interviews, we explored differences in the opinions and experiences of stakeholder groups. The approach we took was to interview people from each of the stakeholder groups (consumers, health practitioners, managers, NGO partners) in waves, analyze the interviews, and then let this analysis inform the next wave of interviews. We identified issues that were of concern to each group and then incorporated questions about these in future interviews with all stakeholders. One finding was that the concept of “communication” was different for carers and staff. For some, communication meant being actively involved in decision-making; some placed higher value on relationship building and the feeling of being greeted warmly and valued; and for others, it meant making sure that case-notes were kept up-to-date. One influential finding was that some staff regarded a friendly greeting and regular reporting of decisions to carers as good communication, whereas some carers expected communication to include shared decision-making and involvement in care decisions before they were made.
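A simple way to make such comparisons visible is to cross-tabulate how often each stakeholder group raised each meaning of a concept. The sketch below does this for the “communication” example; the counts are invented for illustration.

```python
# Hypothetical cross-tabulation of how often each stakeholder group
# raised each meaning of "communication" (counts are invented).
import pandas as pd

mentions = pd.DataFrame({
    "group": ["carer", "carer", "carer", "staff", "staff", "staff"],
    "meaning": ["shared decision-making", "warm greeting",
                "shared decision-making", "warm greeting",
                "up-to-date case-notes", "warm greeting"],
})

# Rows: stakeholder group; columns: meaning attributed to "communication"
print(pd.crosstab(mentions["group"], mentions["meaning"]))
```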

Consumer Satisfaction and Health Outcomes

Mental health services that are person-centered and promote integration and a continuum of care implement practices that make every person’s journey inclusive, informative, simple, affordable, and non-threatening. Evaluating whether a service is achieving this is complex. Consumer satisfaction measured via survey is a commonly used method; however, other factors need to be considered (Davis et al., 2008). Satisfaction alone cannot encompass the range of feelings, values, and experiences a person has when health care is provided (Manley & McCormack, 2008). Values underlying the care relationship have been identified as equality (experience-based knowledge being as valuable as professional-based knowledge), partnership (accountable, committed, and active partners in care), and interdependence (mutual respect, trust, and acceptance) (Edvardsson, Fetherstonhaugh, Nay, & Gibson, 2010; Petroz, Kennedy, Webster, & Nowak, 2011). Therefore, patients’ experiences of the interactional and interpersonal elements of care are key issues in quality. We included survey and interview methods to explore consumer and carer satisfaction at a deeper level than would have been achieved by survey alone.

Power, Collaboration, and Contract Research

The research team, who had been appointed in a competitive merit-based process, entered the evaluation arena with some authority. We approached the evaluation with a participatory action “headset,” as facilitators of a process (Coughlan & Coghlan, 2002) with recognized expertise and perceived objectivity. We had not been involved in the design, setup, or management of the pilot services, whereas many of the other Reference and Steering Group members had been.

The ideological underpinnings of this work draw upon critical realism (Archer, Bhaskar, Collier, Lawson, & Norrie, 2013), realist evaluation, also known as evaluation science (Pawson, 2013), and pragmatism (Morgan, 2014). Critical realism endorses the use of a variety of quantitative and qualitative research methods and prioritizes the need to consider methodological approach, ontology (beliefs about the nature of reality), and epistemology (beliefs about the nature of knowledge) (Tashakkori & Teddlie, 2010; Zachariadis, Scott, & Barrett, 2012).

Qualitative researchers are expected to be reflective and acknowledge their relationship with the research topic and any personal factors that may influence how a project is designed and written up (Tong, Sainsbury, & Craig, 2007). As we proceeded with the evaluation and commenced preparing the final report, it became clearer that this work had a social justice agenda and aligned with the pragmatism paradigm (Hesse-Biber, 2012; Morgan, 2007). Like critical realism, pragmatism endorses mixed methods; however, it is less concerned with ontology or epistemology and instead elevates the importance of axiology: beliefs about the role of values and ethics in conducting research, and how these might influence the research.

Action research is a method that can embrace critical realist premises, as it is concerned not only with program evaluation but also with producing change (Houston, 2010). Action research is characterized as research in action, rather than research about action, and as a repeating cyclical process of planning, data gathering, evaluation, and action (Coughlan & Coghlan, 2002). The common expectation of evaluation is to consider what has happened in a service and how this compares with pre-existing objectives. This evaluation, however, incorporated features of Action Research (AR), including regular, cyclical consultation with the Reference Group, which had some influence on future practice and the next wave of evaluation actions. Some actions were adopted mid-way through the evaluation; for example, consumers reported difficulty finding and operating a computer with Internet access. This impaired their ability to complete service evaluations, and as a result, a recommendation was made to install a password-protected computer in the IMHIU that nursing staff and consumers could access, and for psychosocial workers to introduce discharged consumers to free community-based computers in local libraries. These issues were rectified through the cyclical data collection–Reference Group discussion–evaluation–planning–action process.

The cyclical nature of action research was possibly not well suited to contract evaluation of a project with a fixed budget and relatively short timeframe. This highlights the practical limitations of contract evaluation and research when compared with scholarly research. Research commissioned by policy-makers embodies a tension between academic values, such as focus, rigor, accuracy, and comprehensiveness, and service values, such as timescale, fitness for purpose, value for money, and credibility with decision makers (Greenhalgh et al., 2005). Increasingly, academic researchers engage in work that informs policy and requires alignment between university, industry, and government requirements.

Contract Research Dilemma

Community–research collaborations, including evaluations, are increasingly recognized as a means to produce evidence that can impact policy and health outcomes (Dent et al., 2016; Israel, Schulz, Parker, & Becker, 2001). We posit that the nature of partnerships is altered when one stakeholder, in this case the government, is paying another, the research team. The relationship becomes one of customer and service provider as opposed to research partners. It is not disputed that researchers need funding to complete quality work (Greenhalgh, Jackson, Shaw, & Janamian, 2016); however, when the funding is assigned to a specific task, that is, an evaluation, the expectation and partnership balance may be altered. In practice, our experience is that contract research funding often inadequately covers all research team costs, and an offset for this is the realized expectation of access to data and publication outcomes. We did not feel pressure to alter our findings; however, we faced dilemmas that challenged our need to publish and disseminate our work for academic and public interest. We uncovered a mismatch of motives for research similar to that reported by Bergman, Delevan, Miake-Lye, Rubenstein, and Ganz (2017). As academic researchers, our desired outcomes are to adhere to and document methodological rigor and to publish widely, while service providers and government, at times, seek private reports that may or may not be released. Efforts to rigorously evaluate and research an issue are undermined if the findings are not the property of the research team but instead owned by the government. An open discussion about outputs and the expectations of all stakeholders at the onset of, and throughout, a research project is recommended.

Practical Lessons Learned

  • Researchers should be involved from the onset of service implementation to have input into the evaluation design. The collection of information at baseline, before an intervention service is implemented, allows for a clearer picture of impact. In this case, one author (J.N.) prepared the contract research tender after the IMHIUs had commenced and before recruiting the other research team members. The ethics approval process required modifications, but the research design remained essentially as per the contract.
  • Reference groups provide a routine avenue for researcher–community collaboration and a mechanism for evidence-informed action. Stakeholders and researchers must function as interprofessional teams. The IPP competencies of conflict resolution, communication, role clarification, and reflective practice are essential to understand and practice.
  • Ethics reviews are imperative and provide valuable feedback, and time must be allowed in project timeframes for this process. University researchers are bound to obtain ethics approval. Ethics committees are key stakeholders, and our research team had to communicate their feedback, which was obligatory, in a constructive, non-apologetic way. Our research team had to mediate and address the differing priorities of CHSA and two ethics committees while also staying true to the research rigor and publication productivity requirements expected by any university. Ethics protocols must be well considered and, where possible, co-authored by all stakeholders. Time must be set aside to collaboratively address ethics committee feedback as a standard practice.
  • Recruitment and sampling are more than a numbers game. Involving consumers with histories of mental illness in service evaluation is a way of providing a voice; however, privacy and beneficence need to be considered. The dissemination of findings to all participants and stakeholders is easily under-achieved, especially when projects take longer than planned and employment contracts expire. Sufficient time needs to be allocated to ensuring the research team communicates research outcomes in a way that is meaningful to a diverse audience.
  • The literacy and comprehension of some people are low, and researchers must adapt materials to be inclusive. We would consider including a teacher or communication specialist to review our research materials.
  • Contracts and partnership agreements should clearly detail research publication expectations and the projected outcomes for all stakeholders.

Conclusion

This case study presents some methodological issues relevant to a partially funded service evaluation conducted by a university research team. Mixed methods were used to avoid reliance upon one type of knowledge. In the nature of Action Research, opportunities to communicate with stakeholders were factored into the project, and this mechanism, along with comprehensive ethics reviews, aided in the planning and implementation of the project. The three key elements of the research—(a) consumer and carer experience, (b) consumer and staff satisfaction, and (c) economic and operational performance data—were triangulated (Hesse-Biber, 2012) to investigate whether the lives of rural people with mental illness had improved through the establishment of rurally located IMHIUs.

Evaluation and research are a means to provide marginalized people with a voice, and great effort must be given to ensuring language barriers and low literacy are considered at every stage of the research process. Dissemination of findings is one area that we found challenging: universities require publications in peer-reviewed scholarly journals, whereas our partner, or in this case study our client, required a report for ministerial consideration that may or may not be released for public scrutiny.

Exercises and Discussion Questions

  • What recruitment methods are likely to maximize mental health consumer involvement but still protect the privacy and wellbeing of those who wish to remain unknown or may be unsettled by an invitation to participate in research?
  • How do we give a voice to, and involve in research, people with low literacy or limited English comprehension?
  • What are the ethical and practical dilemmas for researchers who are paid by a service to evaluate consumer satisfaction and service outcomes?
  • What are mixed methods, and why do pragmatism and critical realism align with them?

Further Reading

Braun, V., Clarke, V., & Terry, G. (2014). Thematic analysis. In P. Rohleder & A. Lyons (Eds.), Qualitative research in clinical and health psychology (pp. 95–114). Basingstoke, UK: Palgrave Macmillan.
Dent, E., Hoon, E., Kitson, A., Karnon, J., Newbury, J., Harvey, G., … Beilby, J. (2016). Translating a health service intervention into a rural setting: Lessons learned. BMC Health Services Research, 16(1), 62. doi:10.1186/s12913-016-1302-0
Montalvo, W., & Larson, E. (2014). Participant comprehension of research for which they volunteer: A systematic review. Journal of Nursing Scholarship, 46, 423–431.
Morgan, D. L. (2014). Pragmatism as a paradigm for social research. Qualitative Inquiry, 20, 1045–1053.

Web Resources

Aboriginal communities and Research: https://www.nhmrc.gov.au/health-ethics/ethical-issues-and-further-resources/ethical-guidelines-research-involving-aboriginal-

Evaluation resources: http://www.betterevaluation.org/en

Community-based primary health research resources: http://www.phcris.org.au/

References

Aboriginal Health Council of South Australia. (2017). Research overview. Retrieved from http://ahcsa.org.au/research-overview/ethical-review-ahrec/
Archer, M., Bhaskar, R., Collier, A., Lawson, T., & Norrie, A. (2013). Critical realism: Essential readings. New York, NY: Routledge.
Bergman, A. A., Delevan, D. M., Miake-Lye, I. M., Rubenstein, L. V., & Ganz, D. A. (2017). Partnering to improve care: The case of the Veterans’ Health Administration’s Quality Enhancement Research Initiative. Journal of Health Services Research & Policy. Advance online publication. doi:10.1177/1355819617693871
Braun, V., Clarke, V., & Terry, G. (2014). Thematic analysis. In P. Rohleder & A. Lyons (Eds.), Qualitative research in clinical and health psychology (pp. 95–114). Basingstoke, UK: Palgrave Macmillan.
Coughlan, P., & Coghlan, D. (2002). Action research for operations management. International Journal of Operations & Production Management, 22, 220–240.
Country Health SA Local Health Network. (2016a). Community mental health model of care reference group. Adelaide: SA Health, Government of South Australia.
Country Health SA Local Health Network. (2016b). Community mental health model of Care Steering Committee Terms of Reference. Adelaide: Government of South Australia.
Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: SAGE.
D’Amour, D., Ferrada-Videla, M., San Martin Rodriguez, L., & Beaulieu, M.-D. (2005). The conceptual basis for interprofessional collaboration: Core concepts and theoretical frameworks. Journal of Interprofessional Care, 19(Suppl. 1), 116–131.
Dehar, M.-A., Casswell, S., & Duignan, P. (1993). Formative and process evaluation of health promotion and disease prevention programs. Evaluation Review, 17, 204–220.
Dent, E., Hoon, E., Kitson, A., Karnon, J., Newbury, J., Harvey, G., … Beilby, J. (2016). Translating a health service intervention into a rural setting: Lessons learned. BMC Health Services Research, 16(1), 62. doi:10.1186/s12913-016-1302-0
Department of Health. (n.d.). Standard 1. Rights and responsibilities. Retrieved from http://www.health.gov.au/internet/publications/publishing.nsf/Content/mental-pubs-i-nongov-toc~mental-pubs-i-nongov-st1
Department of the Prime Minister and Cabinet. (2017). Closing the gap—The Prime Minister’s report. Retrieved from http://closingthegap.pmc.gov.au/sites/default/files/ctg-report-2017.pdf
Department of Treasury and Finance. (2014). Guidelines for the evaluation of public sector initiatives, Part A: Overview. Retrieved from http://www.treasury.sa.gov.au/__data/assets/pdf_file/0004/1768/ti17-guidelines-part-a.pdf
Devlin, N. J., & Krabbe, P. (2013). The development of new research methods for the valuation of EQ-5D-5L. European Journal of Health Economics, 14(Suppl. 1), S1–S3.
Fain, J. A. (2005). Is there a difference between evaluation and research? The Diabetes Educator, 31, 150–155.
Government of South Australia. (2009). Mental health act. Retrieved from https://www.legislation.sa.gov.au/LZ/C/A/Mental Health Act 2009.aspx
Greene, J. C., & Caracelli, V. J. (1997). Defining and describing the paradigm issue in mixed-method evaluation. New Directions for Evaluation, 1997(74), 5–17. doi:10.1002/ev.1068
Greenhalgh, T., Jackson, C., Shaw, S., & Janamian, T. (2016). Achieving research impact through co-creation in community-based health services: Literature review and case study. The Milbank Quarterly, 94, 392–429.
Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., Kyriakidou, O., & Peacock, R. (2005). Storylines of research in diffusion of innovation: A meta-narrative approach to systematic review. Social Science & Medicine, 61, 417–430. doi:10.1016/j.socscimed.2004.12.001
Hawe, P., Degeling, D., Hall, J., & Brierley, A. (1990). Evaluating health promotion: A health worker’s guide. Sydney, New South Wales, Australia: MacLennan & Petty.
Hesse-Biber, S. (2012). Feminist approaches to triangulation: Uncovering subjugated knowledge and fostering social change in mixed methods research. Journal of Mixed Methods Research, 6, 137–146.
Heyvaert, M., Hannes, K., Maes, B., & Onghena, P. (2013). Critical appraisal of mixed methods studies. Journal of Mixed Methods Research, 7, 302–327. doi:10.1177/1558689813479449
Houston, S. (2010). Prising open the black box: Critical realism, action research and social work. Qualitative Social Work, 9, 73–91.
Israel, B. A., Schulz, A. J., Parker, E. A., & Becker, A. B. (2001). Community-based participatory research: Policy recommendations for promoting a partnership approach in health research. Education for Health, 14, 182–197.
Johnson, G., Dempster, N., McKenzie, L., Klieve, H., Fluckiger, B., Lovett, S., … Webster, A. A. (2014). Principals as literacy leaders with Indigenous communities: Leadership for learning to read—‘Both ways’. Kingston: Australian Primary Principals Association.
Montalvo, W., & Larson, E. (2014). Participant comprehension of research for which they volunteer: A systematic review. Journal of Nursing Scholarship, 46, 423–431.
Morgan, D. L. (2007). Paradigms lost and pragmatism regained. Journal of Mixed Methods Research, 1, 48–76. doi:10.1177/2345678906292462
Morgan, D. L. (2014). Pragmatism as a paradigm for social research. Qualitative Inquiry, 20, 1045–1053.
Muscat, D. M., Shepherd, H. L., Morony, S., Smith, S. K., Dhillon, H. M., Trevena, L., … McCaffery, K. (2016). Can adults with low literacy understand shared decision making questions? A qualitative investigation. Patient Education and Counseling, 99, 1796–1802. doi:10.1016/j.pec.2016.05.008
National Health and Medical Research Council, Australian Research Council, & Australian Vice-Chancellors Committee. (2007). National statement on ethical conduct in human research (updated May 2015). Retrieved from https://www.nhmrc.gov.au/_files_nhmrc/publications/attachments/e72_national_statement_may_2015_150514_a.pdf
Pawson, R. (2013). The science of evaluation: A realist manifesto. Thousand Oaks, CA: SAGE.
Pawson, R., Greenhalgh, T., Harvey, G., & Walshe, K. (2005). Realist review—A new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy, 10(Suppl. 1), 21–34.
Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches. Thousand Oaks, CA: SAGE.
Revheim, N., Corcoran, C. M., Dias, E., Hellmann, E., Martinez, A., Butler, P. D., … Javitt, D. C. (2014). Reading deficits in schizophrenia and individuals at high clinical risk: Relationship to sensory function, course of illness, and psychosocial outcome. American Journal of Psychiatry, 171, 949–959.
Richards, E., Newbury, W., & Newbury, J. (2016). How to communicate effectively with Aboriginal youth. Australian Doctor. Retrieved from https://www.australiandoctor.com.au/clinical/therapy-update/the-ripple-effect
Simons, H. (2009). Case study research in practice. Thousand Oaks, CA: SAGE.
Slade, A. G. (2001). Interpreting scores on the Kessler Psychological Distress Scale (K10). Australian and New Zealand Journal of Public Health, 25, 494–497.
Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: SAGE.
Tashakkori, A., & Teddlie, C. (2010). SAGE handbook of mixed methods in social & behavioral research. Thousand Oaks, CA: SAGE.
Thistlethwaite, J. E., Forman, D., Matthews, L. R., Rogers, G. D., Steketee, C., & Yassine, T. (2014). Competencies and frameworks in interprofessional education: A comparative analysis. Academic Medicine, 89, 869–875.
Thomas, G. (2011). A typology for the case study in social science following a review of definition, discourse, and structure. Qualitative Inquiry, 17, 511–521.
Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19, 349–357.
Zachariadis, M., Scott, S., & Barrett, M. (2012). Methodological implications of critical realism for mixed-methods research. MIS Quarterly, 37, 855–879.