The purpose of this case study is to provide a summary of our experiences as we attempted to address the health community’s need for effective community health worker training programs. The evaluation models at the time of our study were hindered by insufficient data and methodological challenges that made identifying effective community interventions burdensome. Our approach differed in that we sought to primarily focus on the relevance, comprehensiveness, and quality of data we needed to appropriately assess community health worker training programs rather than focus on the training components themselves. Our approach included three stages: First, we conducted a concept synthesis to identify the common community health worker core competencies; then, we developed a theoretically based measure to assess the competencies; and last, we tested the psychometric properties of our new instrument. This case study will discuss the challenges we encountered and the reasoning behind decisions that were made during the course of the project.
By the end of this case, students should be able to
- Have a stronger understanding of how to approach the evaluation of existing training programs
- Understand under which conditions conducting a concept synthesis is appropriate
- Be familiar with the value of community-based participatory research and the effort involved in this approach
- Know when using a retrospective pre-test/post-test can be useful
- Understand the meaning of psychometric properties and the importance of testing for such properties in newly developed instruments
Our work began in response to the observation that in areas with limited health funding and resources (e.g., rural areas of Mississippi), where health care providers were in short supply, communities relied heavily on the development of lay health workers to combat health disparities, and community participatory approaches became an ever-increasing focus as the means of solving health problems. These community health workers (CHWs) were trained by local agencies to provide designated health services and information, primarily to underserved populations. The CHWs also acted as valuable liaisons between community members and the formal health care delivery system. As the demand for and numbers of CHWs grew, and as CHWs expanded their reach to multiple health disparity conditions and more complex capacities, the need for researchers to further evaluate the effectiveness of the training programs became more evident.
At the time of our study, researchers’ evaluations of such training programs were general and lacked theoretically based models and psychometrically tested instruments. Although most studies had evaluated the impact of CHW-based interventions in specific conditions (e.g., Mukherjee & Eustache, 2007; Thompson, Horton, & Flores, 2007), little was known regarding the experience of training from the perspective of those trained. In addition, studies were inconsistent in which core competencies were used in the trainings and how those competencies were defined, a deficiency likely stemming from the inconsistency in descriptions of CHWs themselves. For example, more than 30 names have been used synonymously for CHWs (e.g., community health advisor [CHA]) in the literature, and CHWs are often defined by their program role.
Our challenge was to develop a better evaluation model for health-related training programs. To do this, we approached the study with the intent of surveying the existing literature, which resulted in us conducting a concept synthesis to identify the common set of CHW core competencies. Next, we found that we had to generate our own conceptually and psychometrically sound measure of those core competencies to appropriately assess the effect of the CHW training program on CHWs’ perceptions of competency attainment, knowledge acquisition, and training experience. Last, to ensure integrity of our new instrument, we needed to test its psychometric properties or the instrument’s ability to measure complex psychological phenomena (e.g., caring, guidance) that are often inferred because they are not directly observed. Our efforts ultimately culminated in an instrument that could be used as a consistent measurement or evaluation tool of CHW training to contribute to the best possible outcomes from the training and for the communities served.
Core competencies were first addressed by researchers in the landmark National Community Health Advisor Study (1998). This multimethod national study of 150 community health advisor programs identified ways to strengthen services delivered by CHWs throughout the United States. The study’s results were intended to serve as guidelines for CHW programs, policy makers, and health care providers. The study also provided recommendations for CHW programs to adopt and refine the identified CHW roles and competencies. The researchers thought that a CHW’s possession of these roles and quality core competencies would increase the likelihood of successful CHW role performance. Unfortunately, the definitions, terms, and assessment of the competencies lacked consistency in the subsequent literature. We also found that training evaluations rarely used theoretical models or sound instruments and offered minimal information on the CHWs’ experience of training. This realization was problematic, as we had initially thought that a common understanding of the CHW core competencies existed and that all we needed was to find the most appropriate instrument to measure training effectiveness.
We decided that by conducting a concept synthesis, we could fill this gap in the literature by focusing our efforts on gathering information about CHW competencies and their measurement. Concept synthesis, defined as a creative process that generates new ideas by examining data, has been deemed useful when (a) there is little or no concept development, (b) concept development is present but with no impact on theory or practice, and (c) observations of phenomena are available but not yet classified (Walker & Avant, 2010). In addition, we approached the concept synthesis using the hybrid model of theory development (Rogers & Knafl, 2000). This model is a literary approach to concept synthesis, involving systematic literature searches and reviews, that may lead researchers to previously undiscovered concepts. The hybrid model was appropriate for our study because it combines theoretical and empirical elements in three phases—theoretical, fieldwork, and analytical—so that we could review and categorize relevant literature, incorporate reality-based field experiences to further delineate the emerging concepts, and analytically compare these findings for discrepancies to develop the final concepts.
In the theoretical phase, we became saturated in the available CHW and relevant associated literature by conducting electronic searches spanning nearly four decades (1970-2008) using Medical Literature Analysis and Retrieval System Online (MEDLINE), Cumulative Index to Nursing and Allied Health Literature (CINAHL), EBSCOhost, Public MEDLINE (PubMed), OVID, Education Resource Information Center (ERIC), and Google Scholar, as well as a manual search of the bibliographies of articles retrieved. We theoretically defined CHWs as persons indigenous to the community who possess unique strengths, shared responsibility, and specific knowledge regarding their community’s problems and possible solutions to deliver health information, monitoring, and health care access. Key terms we searched included CHW training, CHW evaluation, and CHW core competencies, along with synonymous terms for CHWs, including CHAs, lay health advisors/workers, promotoras, and peer health advisors, among others, and our search expanded into the fields of nursing, business, and social work.
Our initial search yielded over a thousand different articles. We cut this pool roughly in half by screening the abstracts to determine each article’s relevance to our main focus of CHW competencies. We then reduced the pool further by excluding conceptual and opinion literature, which resulted in approximately 250 potential articles that we reviewed more extensively. We fully evaluated these remaining articles to determine whether they were conceptual or theoretical in nature or were empirical evaluations of CHW training. We noted studies that had specified research questions and clearly delineated methods and measures focused on CHW competencies or skills. These studies also focused on activities that CHWs engaged in as exemplars of core competencies, knowledge, or attitudes. By the end of this multiple filtering process, we had narrowed our search to 40 studies relevant to defining core competencies and 17 studies relevant to evaluating CHW training programs. We developed our working definitions of emergent concepts through a process of (a) selecting a concept, (b) reviewing and summarizing the literature, (c) dealing with its meaning and measurement, and (d) choosing a working definition. Once we had reached saturation with the literature, we classified the information into flexible categories and examined these categories to see whether they could be combined (see Rogers & Knafl, 2000; Walker & Avant, 2010).
The second phase, fieldwork, overlapped with the theoretical phase. We coupled saturation in the literature with exposure to the CHW role by making CHW contacts, completing CHW training, attending national CHW conferences, and socializing with CHWs. CHWs most often emerge from community-based participatory research (CBPR) projects; therefore, the CBPR approach was an appropriate strategy for this study. We took part in ongoing activities to build trust and partnerships in the local community and followed a CBPR process that included (a) setting the stage, (b) negotiating entry, (c) selecting cases, and (d) collecting and analyzing data. Setting the stage for the study involved contacting potential community partners and informing them of the study’s purpose and value while ensuring they felt comfortable with their involvement in the study. The dialogue from these interactions with the partners allowed us access to our population of interest so that we could identify participants for our sample and gather information. We documented our conversations, observations, and relationships with the CHWs in our research journals and study audit trail.
We conducted the final analytical phase by comparing the findings from our literature review with those of our fieldwork to look for discrepancies as well as clarify and refine our findings with experts. We documented our impressions from the fieldwork in journals and used this information to substantiate and add context to findings from the literature review. We gathered this varied body of knowledge to formulate and conceptually define five main CHW core competencies and their key indicators. The five core CHW competencies we identified from the concept synthesis are leadership, translation, guidance, advocacy, and caring. In addition, the concept synthesis revealed to us the various aspects of training needed to enhance these innate competencies for CHWs (Rogers & Knafl, 2000; Walker & Avant, 2010).
CBPR was the most suitable research approach for our study because of its broad reach in instituting societal or group change. Specifically, CBPR is most useful over conventional approaches when there are questions relating to program implementation challenges or program effects on beneficiaries (Zukoski & Luluquisen, 2002). CBPR involves all stakeholders combining their resources and knowledge to bring about community-wide change. It is a partnership approach that engages all stakeholders in the planning and development of the evaluation design and implementation. Through CBPR, the CHWs helped determine the focus of the evaluation within their own cultural and socioeconomic environments (Whitmore, 1998). CBPR provides the added benefits of including local knowledge, verification from stakeholders, and increased communication and relationships with key players (Jackson & Kassam, 1998). Among researchers, a commonly accepted value in CBPR is that if the problem lies in the community, then so does the solution.
We conducted CBPR through the following phases: building partnerships, building trust, assessing the community, determining the intervention, implementing the intervention, and evaluating the intervention. Partnerships included all stakeholders, such as academic institutions, private sector organizations, local leaders, and community members. These partnerships allowed resources, fiscal as well as cognitive-affective, to be pooled so that we could address issues of funding and the research questions important to the community’s health. Early in the study, we contacted all stakeholders, including the community residents, local leaders, local health care providers, the Mercy Delta Express Project, and the University of Mississippi Medical Center’s School of Nursing, to discuss the project and establish buy-in (see Story, 2008). Later in the study, the stakeholders we contacted shifted to include CHWs nationally, whom we reached through their annual professional conference and personal referrals.
Communities often mistrust researchers, assuming they do not keep the community’s best interests in the forefront (Minkler & Wallerstein, 2003). Some communities have experienced what has been termed the helicopter approach, where researchers land, collect the data they need to complete their funded projects, and then fly away, leaving no lasting community benefits. The primary way to change these perspectives is to build trust and ensure long-term resources. We established trust through repeated contacts (e.g., face-to-face, phone, and email) with stakeholders and by keeping promises to those stakeholders (see Story, Wyatt, & Hinton, 2010). Once we established partnerships and trust, we assessed the community to determine community-identified needs and strengths. In the case of this study, the community determined that a CHW program was desirable. Community-based research may be limited if there is insufficient buy-in by the community. In addition, CBPR requires a large time commitment, which community members, researchers, and academic institutions may be unwilling to make. Other community barriers to involvement include stakeholders’ lack of trust, lack of interest, and lack of valuing of the research. To achieve the mutual researcher–community member involvement needed for a successful community intervention, we carefully considered the community’s history, culture, context, personal meaning, and geography as well as the unique attributes of the researchers/partners (Ahmed, Beck, Maurana, & Newton, 2004).
Once we identified via concept synthesis the theoretical commonalities of CHWs in the literature, we combined this literature with the information we gathered from our CBPR fieldwork and identified what we believed were the five main core competencies of CHWs. We then developed an instrument to assess the competencies (i.e., leadership, translation, guidance, advocacy, and caring) and called it the CHW Core Competency Retrospective Pretest/Posttest (CCCRP). We were able to look back to the landmark National Community Health Advisor Study (1998) and find an appropriate instrument template (see Ingram, Staten, Cohen, Stewart, & deZapien, 2004) to work with and adjust toward our own purposes.
We, along with experts in the field, scrutinized the questions from the original instrument used in the 1998 National CHA Study for fit with the five core competencies we identified through the concept synthesis. We determined that the items adequately represented the newly specified competencies. Specifically, the items asked participants about developing trust, handling group differences, knowing health services, knowing illness and management, accessing resources, making referrals, building relationships, performing activities, facilitating group process, making presentations to community members, resisting intimidation, and managing conflict. However, based on our concept synthesis, we had to regroup these items differently to accurately reflect how we thought them to be associated with the core competencies.
We further developed the CCCRP to (a) include specific project aims such as Know Your Numbers (KYN) and hypertension (HTN) knowledge attainment and management; (b) lower the reading level to a Flesch–Kincaid readability score of 5.3 to enhance the understandability of the items; (c) include one or more items that comprehensively measured each of the five core competencies and reflected their theoretical definitions; (d) use simple, straightforward, clear language; (e) limit each item to a single thought; (f) limit the use of words such as only, merely, and just; and (g) word statements consistently, either positively or negatively (see Polit & Beck, 2011). Table 1 provides a listing of the core competencies and a comparison of the instruments’ items. Flesch–Kincaid is a test that measures reading difficulty in terms of grade level; a score of 5.3 places the reading difficulty of the instrument at approximately a fifth-grade level.
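For readers curious how such a readability score is produced, the Flesch–Kincaid grade-level formula can be sketched directly. This is an illustrative approximation with a deliberately naive syllable counter, not the readability tool the project actually used (published scores rely on more careful syllable counts):

```python
import re

def fk_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid grade level from raw text counts."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def naive_syllables(word: str) -> int:
    """Rough syllable estimate: count runs of vowels, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def grade_of(text: str) -> float:
    """Estimate the grade level of a passage, such as a set of instrument items."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(naive_syllables(w) for w in words)
    return fk_grade(len(words), sentences, syllables)
```

A result near 5.3, as reported above, corresponds to roughly a fifth-grade reading level.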
|Table 1. Item comparison for Ingram instrument and CCCRP.|

| CHW core competencies | Ingram instrument | CCCRP |
| --- | --- | --- |
| Assessing an audience and using strategies that are appropriate for that audience | Ability to assess and target an audience | Assess and direct others about how to control and prevent high blood pressure |
| Directing community and individual activities | Ability to perform activities designed for families | Do activities planned for people with high blood pressure |
|  | Ability to perform activities designed for communities | Do activities planned for communities about high blood pressure |
| Supervising conflict resolution and problem solving | Ability to handle individual differences in a group setting | Handle people’s differences within group to talk to them about high blood pressure |
| Using popular education and adult learning concepts |  | Use popular education and adult learning approaches to talk with others about high blood pressure |
| Explaining health information or services | Knowledge of diabetes self-management | Know about how people can take care of their own high blood pressure |
|  | Knowledge regarding diabetes | Know about high blood pressure and know your numbers |
|  | Ability to build and enhance a family’s understanding of diabetes prevention | Build and improve others’ understanding of high blood pressure prevention |
| Explaining information related to language differences |  | Talk about what you know about high blood pressure and know your numbers in a way that is understandable to others |
| Knowledge and ability to connect individuals to health care services | Knowledge of the health and social service systems of the community you are working with | Know about the health and social services of your community that could help people be healthier |
|  | Ability to identify and access community resources related to diabetes | Find and get community resources to help people with high blood pressure |
| Knowledge and ability to make health recommendations | Ability to make appropriate diabetic referrals | Make the right high blood pressure recommendations |
|  | Ability to refer individuals and/or families to the different components of the Border Health |  |
|  | Ability to build relationships between group members | Build relationships between people in a group |
|  |  | Know about high blood pressure prevention, including weight control, dietary changes, and physical activity |
| Supporting individuals or communities to assume responsibility for their own health | Ability to build and enhance a family’s ability to support the diabetic patient and his or her special needs | Build and improve others’ ability to take control of their own health |
|  | Ability to develop a group’s capacity to address and solve problems together as a whole | Improve a group of people’s ability to deal with and work out problems together as a whole |
| Supporting despite pressure from individual groups | Ability to make presentations to community leaders | Make presentations to community leaders |
| Supporting despite difficult situations or individuals | Ability to resist intimidation from community leaders | Withstand pressure from community leaders |
|  | Ability to withstand intimidation from a hostile/difficult client | Withstand pressure from a hostile/difficult people |
|  | Ability to manage conflict within a group | Deal with tension within a group of people |
| Communicating easily with others | Ability to develop rapport with an entire family | Talk easily with others about high blood pressure |
| Gaining trust of others | Ability to gain and develop trust with an entire family | Get trust of friends, family, and neighbor to talk with them about high blood pressure |
| Demonstrating concern for others |  | Demonstrate concern for others’ control of high blood pressure |
|  |  | Demonstrate concern about the effect high blood pressure has on others |
CCCRP: CHW Core Competency Retrospective Pretest/Posttest; CHW: community health worker.
From our understanding of the Ingram instrument and comparisons with its items, we further operationalized the core competencies and made our own modifications of the items on the CCCRP. We defined and assigned the core competencies in the CCCRP as follows:
- Leadership core competency was operationalized by responses to items that inquired about the CHWs’ confidence in their ability pre- and post-training to (a) use popular education and adult learning approaches, (b) assess and direct others in controlling and preventing HTN, (c) conduct activities planned for hypertensive individuals and communities, and (d) improve a group’s ability to problem solve.
- Translation core competency was operationalized by responses to items that inquired about the CHWs’ confidence in their ability pre- and post-training to (a) manage people’s differences to talk with them about HTN, (b) be knowledgeable about HTN, and (c) be able to build others’ knowledge about HTN.
- Guidance core competency was operationalized by responses to items that inquired about the CHWs’ confidence in their ability pre- and post-training to (a) know and locate community services that could make people healthier, (b) know about HTN prevention strategies, (c) make HTN recommendations, and (d) build relationships within a group.
- Advocacy core competency was operationalized by responses to items that inquired about the CHWs’ confidence in their ability pre- and post-training to (a) empower others to take control of their own health, (b) make presentations to community leaders, (c) withstand pressure from community leaders and hostile/difficult people, and (d) address tension within a group.
- Caring core competency was operationalized by responses to items that inquired about the CHWs’ confidence in their ability pre- and post-training to (a) communicate and demonstrate concern with others about their HTN and (b) gain trust of others to talk with them about their HTN.
Our resulting instrument comprised 24 items, each rated for pre- and post-training on a 5-point Likert-type scale of certainty. This scale directs participants to rate their certainty about performing a skill from 1 (not very sure) to 5 (very sure); intermediate numbers allowed participants to indicate gradations of certainty. We decided to use a retrospective pre-test/post-test format. This format is useful when participants require an understanding of concepts or terms to assess the pre- to post-training change appropriately (Campbell & Stanley, 1963; Lamb, 2005). In a single administration after the intervention, respondents are asked about their understanding of a concept both prior to and after the intervention. This format helps limit participants’ overestimating or underestimating their knowledge or skills due to a lack of understanding of the content (i.e., response shift); administering the pre-test at the conclusion of the training decreases response shift because the training should have increased understanding of the instrument’s content. The format also has the advantage of collecting pre-training and post-training data in a single administration, which decreases the risk of participant fatigue and frustration from multiple data collections. However, a limitation of this format is the possibility of participants wanting to show a learning effect.
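As an illustration of how responses from such an instrument might be scored, the sketch below computes the post-minus-pre mean change per subscale for one respondent. The item-to-subscale mapping and the ratings are hypothetical, not the actual CCCRP scoring key or study data:

```python
from statistics import mean

# Hypothetical item indices per subscale; the real CCCRP groups its 24
# items under five competencies (leadership, translation, guidance,
# advocacy, and caring).
subscales = {"leadership": [0, 1, 2], "caring": [3, 4]}

def subscale_change(pre: list[int], post: list[int]) -> dict[str, float]:
    """Mean post-training rating minus mean recalled pre-training rating."""
    return {
        name: mean(post[i] for i in items) - mean(pre[i] for i in items)
        for name, items in subscales.items()
    }

# One respondent's 1-5 certainty ratings, collected in a single sitting:
pre = [2, 2, 3, 1, 2]    # "how sure were you before the training?"
post = [4, 4, 4, 5, 4]   # "how sure are you now?"
print(subscale_change(pre, post))
```

Because both rating sets come from the same administration, the respondent applies the same post-training understanding of the items to both columns, which is the point of the retrospective design.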
The retrospective pre-test/post-test has been used successfully by researchers to measure the effectiveness of education workshops and skill training sessions (see Raidl et al., 2004; Timmerman, Anteunis, & Meesters, 2003) and is commonly used in nursing research. Often, nursing research involves testing the effects of an educational intervention where a participant’s (e.g., student, nurse, health care worker, patient, etc.) comprehension of key concepts is essential to the instrument’s ability to measure those concepts, such as with this study. Therefore, we concluded that a retrospective pre-test/post-test method was well suited for our study.
We then reviewed the CCCRP for face and content validity using a panel of experts. Face validity refers to whether the instrument looks like it is measuring the researchers’ desired concepts. This type of validity is important for researchers to establish because the instrument should appear to be measuring what it is said to be measuring so as not to confuse the participants (Polit & Beck, 2004). Content validity refers to the degree to which the instrument’s items are relevant to and representative of the various dimensions of the concept. Often, this type of validity is based on judgment, and the use of an expert panel consisting of at least three experts in the content field is an acceptable way to test content validity (Polit & Beck, 2004). Our panel included experts in the CHW field, experts in quantitative and qualitative research, and the developer of the original instrument template. The panel reviewed the CCCRP instrument to ensure we did not include referents to the past, factual statements, irrelevant statements, long items, universal statements, or double negatives. Following instrument construction, we completed several rounds of validation, pilot testing, and reliability procedures.
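One common way to quantify expert judgments of content validity, though not necessarily the exact procedure our panel followed, is the item-level content validity index (I-CVI): the proportion of panel experts who rate an item as relevant (typically 3 or 4 on a 4-point relevance scale). A minimal sketch with invented ratings:

```python
def i_cvi(ratings: list[int]) -> float:
    """Share of experts rating the item relevant (3 or 4 on a 4-point scale)."""
    return sum(r >= 3 for r in ratings) / len(ratings)

panel = [4, 3, 4, 2, 4]   # five hypothetical expert relevance ratings
print(i_cvi(panel))       # 4 of the 5 experts rate the item relevant
```

Items with low I-CVI values are candidates for revision or deletion before pilot testing.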
We tested the CCCRP further as an evaluation of CHW training outcomes with a small group of African American CHWs (n = 5) and then a larger group (n = 45) from our CBPR group (see Story et al., 2010). We analyzed the data collected during the pilot studies to determine the reliability, otherwise known as the consistency, with which the CCCRP measures core competencies in terms of stability and internal consistency (see Polit & Beck, 2004). Stability is the extent to which results are similar across two separate administrations of the CCCRP, whereas internal consistency represents the degree to which the subparts of the CCCRP measure the same concept (Polit & Beck, 2004). Only those participants who returned the instrument during the initial administration of the pilot test (Phase I) were given the instrument again 2 weeks later for a follow-up administration (Phase II).
We evaluated stability by examining the test–retest reliability of the instrument. We correlated the responses from the retrospective pre-test and the post-test CCCRP items from Phase I and Phase II using Pearson’s correlation coefficient, a numerical value representing the strength of association between a participant’s two scores (one from Phase I, the other from Phase II). These values range from +1.0 to −1.0 (±1.0 representing a perfect relationship, 0 representing no relationship). We calculated the Pearson’s correlation coefficient for each set of pre-test and post-test measures to determine the magnitude of the test’s stability over time (Polit & Beck, 2004). We found the instrument’s stability from Phase I to Phase II to be satisfactory for both the pre and post measures of the CCCRP, even though the pre measure fell slightly below the desired .70 (Pearson’s r = .59 for the pre-test, .85 for the post-test).
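The test–retest computation can be sketched as follows; the scores are invented for illustration, not study data:

```python
from math import sqrt

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson's correlation between paired scores (e.g., Phase I vs. Phase II)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

phase1 = [3.2, 4.1, 2.8, 4.6, 3.9]   # hypothetical mean scores, first sitting
phase2 = [3.0, 4.3, 3.1, 4.4, 4.0]   # same participants, 2 weeks later
print(round(pearson_r(phase1, phase2), 2))
```

A value near +1.0 indicates that participants who scored high the first time also scored high the second time, which is what stability requires.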
We determined the internal consistency of the CCCRP to assess the degree to which the subparts of the instrument, the overall and individual core competencies, measured the same concept (Polit & Beck, 2004). We calculated Cronbach’s alphas (i.e., numerical values representing an estimate of reliability for a set of items on a psychometric test) for the total instrument and for each of the CHW core competency subscales (leadership, translation, guidance, advocacy, and caring), pre and post, from Phase I and Phase II. Overall, the CCCRP demonstrated good internal consistency in the Phase I pre, Phase I post, Phase II pre, and Phase II post measures (α = .98, .96, .98, and .97, respectively). Cronbach’s alphas for the individual core competencies were excellent:
- Leadership—Phase I (pre α = .94; post α = .87); Phase II (pre α = .88; post α = .85)
- Translation—Phase I (pre α = .89; post α = .79); Phase II (pre α = .95; post α = .94)
- Guidance—Phase I (pre α = .89; post α = .85); Phase II (pre α = .90; post α = .93)
- Advocacy—Phase I (pre α = .90; post α = .85); Phase II (pre α = .89; post α = .93)
- Caring—Phase I (pre α = .87; post α = .89); Phase II (pre α = .93; post α = .92)
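For readers unfamiliar with the statistic, Cronbach's alpha can be computed directly from its definition: alpha = k/(k − 1) × (1 − sum of item variances / variance of total scores), where k is the number of items. The sketch below uses made-up responses, not study data:

```python
def variance(xs: list[float]) -> float:
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows: list[list[int]]) -> float:
    """rows: one list of item responses per participant (all same length)."""
    k = len(rows[0])                                   # number of items
    items = [[row[i] for row in rows] for i in range(k)]  # responses by item
    totals = [sum(row) for row in rows]                # total score per person
    return k / (k - 1) * (1 - sum(variance(col) for col in items) / variance(totals))
```

When every item ranks participants identically, the item variances sum to a small fraction of the total-score variance and alpha approaches 1.0, which is why the high subscale alphas above indicate that the items within each competency move together.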
The CCCRP was derived from a concept synthesis, and we carefully compared the indicators with previously developed items intended to measure similar concepts. Once we finalized the items, we assessed validity, the degree to which an instrument measures what it is intended to measure. We established content validity using a panel of content experts, whose recommendations were incorporated into the final instrument. We found the CCCRP to be an adequately reliable measure of CHW core competencies in terms of both internal consistency and test–retest reliability (Table 2). Because these preliminary analyses confirmed the psychometric properties of the CCCRP, we made no changes to the instrument.
|Table 2. Cronbach’s alpha for CCCRP individual core competencies and overall instrument internal consistency.|

| CHA core competency | Phase I pre α | Phase I post α | Phase II pre α | Phase II post α |
| --- | --- | --- | --- | --- |
| Leadership | .94 | .87 | .88 | .85 |
| Translation | .89 | .79 | .95 | .94 |
| Guidance | .89 | .85 | .90 | .93 |
| Advocacy | .90 | .85 | .89 | .93 |
| Caring | .87 | .89 | .93 | .92 |
| Overall instrument | .98 | .96 | .98 | .97 |

CCCRP: CHW Core Competency Retrospective Pretest/Posttest; CHA: community health advisor.
It was important for us to test the psychometric properties of the CCCRP because we wanted confirmation that the instrument was truly assessing the competencies of CHWs. Oftentimes researchers create instruments to fit their specific needs and definitions of constructs. This customization becomes problematic when others begin using the instrument and cannot replicate the results; researchers are then at a loss to explain why. The cause could be the demographics of a specific sample, the order of administration, the fit of the instrument to the construct measured, or any number of other reasons. This leads to wasted time, effort, and resources for researchers who obtain results that conflict with their hypotheses yet have no avenue to explain the inconsistencies. Therefore, it was important that we test the soundness of the CCCRP, both as due diligence for our particular sample and to determine whether it could be extended into other samples and contexts, as long as the same construct (as we operationally defined it) is being measured.
The limited sample sizes of our pilot studies precluded us from determining the factor structure of the CCCRP data. A factor structure is a mathematical model of whether, how, and to what extent the concepts in the instrument are related to each other or to some other unobserved variables. Therefore, we continued our psychometric assessment of the CCCRP by obtaining a larger and more diverse sample of CHWs who varied in gender, age, ethnicity, program type, and geographical location in an attempt to discover the underlying properties of our new instrument. This larger sample originated from CHWs attending a national conference, CHW listservs, and referrals. We purchased an exhibitor space at the national conference and recruited participants as they registered for the conference. We distributed email links to the CCCRP via CHW listservs and to participants who were referred to us. We attempted to minimize duplicate responses by requesting that participants complete the CCCRP only once.
We reached a sample of 142 participants who had recently received CHW training and completed the CCCRP. The general rule of thumb is a ratio of 10 participants per item to enhance the likelihood of replicating the same factor structure in another sample; by this rule, we would have needed a sample of 240 to converge on a stable factor structure for a 24-item instrument. However, others have argued that smaller samples can suffice in some cases (Kline, 2005). For example, Pedhazur and Schmelkin (1991) are more lenient, requiring 50 participants per factor. Factor analysis is a common statistical method in the social and behavioral sciences for understanding how observed behaviors or responses are related through possible underlying structures. It explains the correlations among variables in terms of factors, or latent variables, that are not directly observed. Several Sage Methods Cases provide further information on factor analysis (see Bofah & Hannula, 2014; Zhang, Gao, Bi, & Yu, 2014), exploratory factor analysis (EFA; see Oller, 2014), and confirmatory factor analysis (see Buchanan, Valentine, & Schulenberg, 2014).
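The competing sample-size rules of thumb above reduce to simple arithmetic. A minimal sketch, with function names of our own invention (the 24 items and the two-factor solution are from this study; the 10-per-item and 50-per-factor ratios are the cited guidelines):

```python
def n_needed_per_item(n_items, ratio=10):
    """Stricter rule of thumb: 10 participants per instrument item."""
    return n_items * ratio

def n_needed_per_factor(n_factors, per_factor=50):
    """Pedhazur & Schmelkin's (1991) more lenient 50-per-factor guideline."""
    return n_factors * per_factor

# The 24-item CCCRP under the per-item rule:
print(n_needed_per_item(24))      # 240, more than the 142 obtained

# Under the per-factor rule, the two-factor solution that ultimately
# emerged would have required far fewer participants:
print(n_needed_per_factor(2))     # 100, comfortably below 142
```

By the stricter rule the sample fell short; by the per-factor guideline it was adequate for the structure that emerged.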
In our study, we did not know how many factors would appear, but we did know that a sample size of 142 should be adequate for an EFA. An EFA is a statistical method for discovering the underlying factor structure, or relationships, among a set of variables. Performing a confirmatory factor analysis would have been an acceptable next step had we used the five identified CHW competencies as the theoretical factor structure to test a hypothesis about the variables' relationships; however, we opted for the more flexible approach of an EFA to discover the simplest, best-fitting model for the data. Although our concept synthesis provided us with CHW competencies, it did not give us enough guidance to specify a clearly defined model to evaluate against the data. We wanted to learn how the CHW competencies would load on the factors and which, if any, shared a conceptual meaning.
After we cleaned the data for missing values and outliers, we checked normality using the Kolmogorov–Smirnov test, a statistical test that compares the distribution of sample responses against a reference probability distribution (here, the normal distribution). We then corrected for non-normality by transforming the data to address a substantial negative skew. This skew was not surprising: we anticipated some growth in certainty after the CHW trainings, and the response patterns were in line with the retrospective pre-test/post-test format. The EFA assumptions of linearity (a linear relationship between the independent and dependent variables) and homoscedasticity (consistency of errors) were met, and we had a final sample of n = 137. The Kaiser–Meyer–Olkin measure of sampling adequacy was .922, considered superb and well above the .6 threshold, indicating our data were suitable for an EFA (Hutcheson & Sofroniou, 1999). We used principal axis factoring, an extraction method that accounts for the shared variance among items with the fewest factors, with a promax rotation, an oblique rotation that allows factors to correlate and simplifies interpretation, on the post-training items to determine the factor structure of the CCCRP and identify the factors that emerged.
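The skew check and correction can be illustrated with a small sketch. The data below are hypothetical post-training ratings invented for illustration (not our study data), and the reflect-then-log transformation is one standard remedy for substantial negative skew, not necessarily the exact transformation we applied:

```python
import math
from statistics import mean, pstdev

def skewness(xs):
    """Fisher-Pearson coefficient of skewness (population form)."""
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

def reflect_log(xs):
    """Reflect scores about (max + 1), then take the log.

    A common correction for negative skew; note it reverses the
    direction of the scale, which must be tracked in interpretation.
    """
    k = max(xs) + 1
    return [math.log(k - x) for x in xs]

# Hypothetical 5-point ratings clustered near the top of the scale,
# the pattern expected after training in a retrospective design.
post = [5, 5, 4, 5, 4, 5, 3, 5, 4, 5, 5, 2, 5, 4, 5]
print(round(skewness(post), 2))               # substantially negative
print(round(skewness(reflect_log(post)), 2))  # much closer to zero
```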
To select the appropriate number of factors to include in the model, we evaluated the eigenvalues for the factors (see Tabachnick & Fidell, 1996). Eigenvalues are numerical values representing the amount of variance accounted for by the variables included in each factor. We also consulted the scree plots and factor loadings until we found a model with good fit to the data. We suppressed item loadings below .4 and removed cross-loaded items whose loadings on the two factors differed by less than .2 (6 items in total). Retaining factors with eigenvalues greater than 1.0 yielded a two-factor structure, with the first factor accounting for 57.74% of the variance and the second for 7.97%, a total of 65.71%, exceeding the threshold of 60% of the model variance explained (see Hair, Black, Babin, Anderson, & Tatham, 2006). The two-factor solution had communalities all above .5, an interfactor correlation of .694, and, once cross-loaded items were removed, a resulting 18-item post-training instrument (Table 3). The first factor, labeled understanding through caring, had the highest loadings on items related to control over health and knowledge of health promotion and recommendations. The second factor, labeled leading under pressure, related to items dealing with social intelligence and resilience.
Table 3. Factor loadings for rotated factors for post-training measures.

Factor 1: Understanding through caring
- Get trust of friends, family, and neighbors to talk with them about health issues
- Assess and direct others about how to control and prevent health issues
- Know about health promotion, including weight control, dietary changes, and physical activity
- Talk easily with others about health issues
- Know about how people can take care of their own health promotion
- Build and improve others’ ability to take control of their own health
- Build and improve others’ understanding of health promotion
- Make the right health recommendations
- Handle people’s differences within a group to talk to them about health issues
- Use popular education and adult learning approaches to talk with others about health issues
- Talk about health information you know in a way that is understandable to others
- Demonstrate concern about the effect health issues have on others
- Demonstrate concern for others’ control of health issues
- Do activities planned for communities about health issues

Factor 2: Leading under pressure
- Withstand pressure from hostile/difficult people
- Deal with tension within a group of people
- Withstand pressure from community leaders
- Make presentations to community leaders
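The factor-retention logic described above (eigenvalues greater than 1.0, cumulative variance above 60%) can be sketched in a few lines. The eigenvalues below are hypothetical values chosen to mirror the percentages reported for the CCCRP, not our actual extraction results:

```python
def retain_kaiser(eigenvalues):
    """Kaiser criterion: keep factors whose eigenvalue exceeds 1.0."""
    return [ev for ev in eigenvalues if ev > 1.0]

def pct_variance(eigenvalues, n_items):
    """Each eigenvalue's share of total variance; for standardized
    items the total variance equals the number of items."""
    return [100 * ev / n_items for ev in eigenvalues]

# Hypothetical scree values approximating the reported solution
eigs = [13.86, 1.91, 0.95, 0.81]
kept = retain_kaiser(eigs)                 # two factors survive
shares = pct_variance(kept, n_items=24)    # roughly 57.75% and 7.96%
print(len(kept), round(sum(shares), 2))    # cumulative variance > 60%
```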
After we discovered the two-factor structure, we conducted internal consistency analyses on the 18 items retained in the post-training EFA to assess the reliability of the CCCRP now that items had been removed. The computed Cronbach’s alpha was very good (α = .958). A value this high may suggest that the CCCRP is unidimensional, given how much of the variance the first factor, understanding through caring, explained (i.e., 57.74%); however, across multiple model fittings, the second factor, leading under pressure, consistently emerged upon inspection of the eigenvalues and cumulative variance. After considering all the evidence, we remained confident that the data supported a two-factor structure as the most appropriate and accurate model for the CCCRP.
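Cronbach’s alpha itself follows directly from the item and total-score variances. A minimal sketch with invented example data (three items, five respondents; not our study data):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from rows of respondents' item scores:
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(totals)).
    """
    k = len(item_scores[0])                      # number of items
    cols = list(zip(*item_scores))               # per-item score columns
    item_var = sum(pvariance(c) for c in cols)   # sum of item variances
    total_var = pvariance([sum(r) for r in item_scores])  # variance of totals
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses: rows are respondents, columns are items.
# Respondents answer consistently across items, so alpha is high.
rows = [
    [5, 5, 4],
    [4, 4, 4],
    [2, 3, 2],
    [3, 3, 3],
    [5, 4, 5],
]
print(round(cronbach_alpha(rows), 2))  # roughly .93, i.e., high reliability
```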
In this study, we identified a gap in CHW training evaluation and attempted to address it. Through a concept synthesis and extensive psychometric testing, we developed a sound instrument to measure CHW core competency attainment from training. This instrument has important implications for CHW projects: communities that use a sound instrument to ensure CHW core competencies are developed can increase the likelihood that the subsequent CHW intervention succeeds. The use of sound evaluation models is also critical for securing and sustaining the funding these projects typically rely on for implementation.
We learned several lessons from this multiphase study. First, combining multiple sources of information to serve as our resource pool was an ideal method for obtaining rich, well-rounded data. The deliberate steps we took to identify a set of common CHW competencies, develop a measure for those competencies, and establish the psychometric properties of that measure were critical to providing a complete CHW training evaluation. Researchers would be prudent to employ this approach when attempting to measure a phenomenon or evaluate other types of training. Another lesson is more specific to working with CHWs. CHWs are often volunteers contributing to projects amid many other competing demands. Researchers should compensate CHWs for their time to convey respect, support the spirit of CBPR, and increase the likelihood of positive project outcomes. In addition, researchers should invest time and attention in building relationships and including the community during initial planning to ensure the relevance of the project and promote community buy-in. Facilitating the community’s buy-in is essential to a project’s success.
Researchers should note that involving the community of interest at all phases is an important part of the CBPR process. For instance, we presented incremental findings of this study back to the CHWs through their national conference. The CHWs were invited to confirm our interpretations and give us feedback. In addition, we gave the resulting instrument to the CHWs free of charge for use in their own projects. By doing so, we conveyed appreciation for their contribution and furthered CHW evaluative science by providing a theoretically based and psychometrically tested instrument for evaluating CHW training. The CCCRP also gave CHWs a way to explore how to strengthen their service and abilities within the community, as well as a means of assessing their effectiveness when seeking new and recurring funding.
With regard to psychometric testing, we struggled to reach the sample sizes we thought we needed to run an EFA. However, after putting a hefty amount of effort into recruiting participants at the national conference, on the listservs, and via referrals, we felt we had exhausted our most fruitful options and given our best effort. We decided that moving forward with the EFA would at the very least be informative about any underlying structure, even though we did not have the ideal sample size. Fortunately, our sample size proved sufficient. Had the data been inconsistent or failed to fit within the scope of the outlined competencies, we would have devoted another year to participant recruitment and data collection.
When testing the assumptions of the EFA, we found that our data failed the test of normality and that a transformation was necessary to correct a substantial negative skew. At first, this realization was disheartening, as it was an early indicator of a possibly atypical feature of our data. Only when we went back to the raw data and began to triple-check our data entry did it become clear that this violation of normality was due not to incorrect data entry but to the unique nature of the retrospective pre-test/post-test approach. In fact, the skew we observed fit the pattern of responses expected for a retrospective pre-test/post-test, as found in other studies using the approach.
Overall, our experience with this study taught us much about the value and meaningfulness of the approaches we selected for our study’s purpose. Had we known beforehand the amount of effort required for success and the challenges we would face, we might have hesitated to proceed. However, we are content with our decision, as the study produced a sound tool and gave CHWs, their communities, and stakeholders what they needed to enhance the effectiveness of their own programs and efforts. This was a situation in which the cost to the researchers in time and effort was high, but the overall value to the CHWs, their communities, and the CHW literature easily outweighed that cost. The potential for our efforts to directly benefit communities is an outcome coveted by most, if not all, researchers who want their work to contribute meaningfully to their field or discipline. Our final lesson was that fully engaging in important and meaningful work can yield rewards enjoyed by all those affected, notwithstanding any challenges along the way.
- In our discussion of this case study, there are several moments in which we remark on the amount of effort that instrument development requires. Specifically, recall the three phases involved in conducting a concept synthesis. Is each one of these phases necessary to come to a comprehensive conclusion? Provide discussion for your position considering both sides of the argument.
- CBPR is a useful approach for identifying community needs and the mechanisms by which to address them. When successful, this approach has various potential long-term benefits for the community and its stakeholders. However, if the approach is unsuccessful, what immediate and long-term consequences should researchers consider?
- There were a few moments during the psychometric testing of the CCCRP when our testing of the instrument did not meet the general rules of thumb or guidelines for specific tests. We rationalized our decisions and proceeded, and, fortunately, in our specific case, the result was positive. If your own data failed to meet these guidelines, would you proceed? What importance should researchers place on guidelines for statistical testing? What implications does this have for instrument development?
- What importance does psychometric testing of a new instrument have to the validity of the instrument? Should you still use your own developed instrument if you have pilot tested but have yet to establish its psychometric properties?