
The Scoping Review Method: Mapping the Literature in “Structural Change” Public Health Interventions

Case
Published: 2017 | Product: SAGE Research Methods Cases in Health

Abstract

This case discusses how we used scoping review methodology to map the literature in an emergent area of research, “structural change” public health interventions. Scoping reviews are similar to systematic reviews in both scale and rigor; both of these literature review methodologies are comprehensive approaches to reviewing the literature on a topic. However, while a systematic review attempts to answer a specific, targeted research question, a scoping review is designed to map and categorize all of the literature on a broad topic. For this reason, it is an excellent method to employ in emergent research areas, in which researchers have not yet conducted systematic reviews or otherwise attempted to record the entirety of a scholarly conversation. In this case report, we discuss advantages and disadvantages of the methodology, the lessons we learned from our experience, and our recommendations for researchers who use this method. We encountered challenges including time limitations, striking a balance between a search strategy that was too narrow and one that was too broad, and adjusting the search throughout the process to accommodate new vocabulary terms as we discovered them.

Learning Outcomes

By the end of this case, students should be able to

  • Understand the process of the scoping review methodology, including the major ways that it differs from that of a systematic review and other major literature review methodologies
  • Identify existing frameworks for scoping review methodology
  • Recognize the limitations, advantages, and disadvantages of conducting a scoping review as compared to other literature review methods, including the systematic review; given these limitations, determine whether this is the best methodology to use to review a given topic
  • Describe practical challenges and success strategies for conducting a scoping review
Project Overview and Context

This scoping review project emerged out of the Health Education & Behavior special issue by Lieberman and Earp (2015) that focused on structural change interventions. “Structural changes” or “structural approaches” are defined as “modifications to the physical, social, political, and economic environment in which people make health-related decisions” (Lieberman, Golden, & Earp, 2013). The field of public health has been interested in structural changes, also called “policy, systems, and environmental” changes or “policy and environmental” changes, for over a decade; however, few studies have examined how best to evaluate these types of complex interventions. We asked ourselves: when researchers attempt to alter health environments at multiple levels (e.g., community, family, individual), how can we best evaluate and capture the impact of these interventions? Thus, our team was tasked with the challenging question of uncovering the landscape of evaluation strategies as a means to identify best practices and the future of evaluation in this field. Since both the use of the term “structural approach/change” and the systematic evaluation of such interventions were relatively new, we turned to scoping review methods to help answer this question. This review was conducted from fall 2015 to spring 2016.

What Is a Scoping Review?

The scoping review—as defined by Arksey and O’Malley (2005)—is a literature mapping process that allows a researcher to examine the “landscape” of the literature based on a particular question of interest. Rebecca Armstrong, Hall, Doyle, and Waters (2011) note that researchers can have several objectives in conducting a review, including (1) exploring the extent of the literature, (2) identifying boundaries and parameters of a review, and (3) identifying gaps in a body of literature. Often, the purpose of the scoping review is to lay the groundwork for a more rigorous systematic review.

Hilary Arksey and Lisa O’Malley (2005) offer what many researchers call the “first” or “original” framework, a five-step process for conducting scoping reviews. Danielle Levac, Colquhoun, and O’Brien (2010) built upon their framework to offer more detailed guidance and recommendations. These steps include (1) identifying the research question, (2) identifying relevant literature, (3) selecting studies, (4) charting the data, and (5) collating, summarizing, and reporting results. We found Levac and colleagues’ refinement of the Arksey and O’Malley framework, and their practical “lessons learned,” valuable when applying the framework ourselves.

For example, in the first stage of identifying the research question, Levac and colleagues recommend that researchers take time to clearly outline the concept, target population, and health outcomes in order to narrow the focus of the study. Furthermore, the authors recommend that the study team agree on the outcome of the scoping review at this stage (e.g., a list of typologies) so that the study goal is clear to all. As described below, we applied these recommendations to our search by taking several iterations to outline our search strategy, define search terms, and identify outcomes prior to conducting the search. In addition, our outcomes table was refined over the course of several meetings and by all study members prior to data extraction.

Most scoping reviews that we encountered reference the Arksey and O’Malley framework as the foundation for their methods. One recent paper by Micah Peters et al. (2015) outlines another framework for conducting scoping reviews with a methodology developed by members of the Joanna Briggs Institute (JBI) and Joanna Briggs Collaborating Centers. Their guidelines follow a similar path to those aforementioned but offer additional considerations, such as the context of the review’s research question. For the sake of comprehensiveness, we recommend reviewing the frameworks of both Levac and colleagues and Peters and colleagues prior to conducting a review.

How Is a Scoping Review Different From a Systematic Review?

Scoping reviews, also known as scoping studies, are often compared to systematic reviews as their methodologies are similar—as noted by Arksey and O’Malley (2005) and Peters et al. (2015). Both scoping and systematic reviews provide a comprehensive overview of the literature on a given research topic. Both types of reviews follow well-defined methodologies, which are described transparently so that a reader could theoretically replicate the process to produce the same results. However, while systematic reviews attempt to provide an answer to a fairly specific, targeted research question, scoping reviews are usually much broader. The research questions that guide scoping reviews are often structured as follows: “What does the literature say about x population or intervention in y context?” Systematic reviews also assess the quality of studies and may be restricted to only one type of study (e.g., randomized controlled trials), while scoping reviews are less restrictive, including all literature on a topic regardless of quality or study type.

Another major difference between systematic and scoping reviews is the time required to complete each type of paper. The Cochrane Collaboration (2011) proposes 12 months as an example of the timeline for a systematic review, although the time required may vary. Because the goal of a systematic review as stated in the Cochrane Handbook for Systematic Reviews of Interventions (2011) is “to collate all evidence that fits pre-specified eligibility criteria” to answer a research question, researchers must do due diligence to ensure that no stone has been left unturned. This often means searching in several bibliographic databases and individual journals for every scholarly article published on the topic, looking for gray literature in resources such as conference proceedings and clinical trial registries, and following citations from included studies. These steps help to ensure comprehensiveness in a literature search and also add time to the process. In a scoping review, researchers may be working within a more abbreviated timeframe and make decisions as to which of these resources to search. Due to our time limitations, our own literature search was restricted to the bibliographic databases PubMed, EMBASE, and Web of Science.

How Is a Scoping Review Different From a Literature Review?

A scoping review is a type of literature review. Since the goal of what is generically called a “literature review” sounds similar to that of a scoping review—to give a comprehensive overview of the literature on a topic, identifying key themes and authors—it is helpful to understand at the outset of a scoping review how it is different from a “typical” literature review.

  • Methodology—Scoping reviews follow transparent and systematic methodology; their authors record and report on every step of the process. By contrast, a traditional literature review may follow any search method the author wishes, and the methodology is not reported in the review itself.
  • Narrative—Literature reviews are often called narrative reviews because they synthesize content into a narrative for the reader. Scoping reviews more often publish findings in tables or in an aggregated format.
  • Bias—The transparency, systematic methods, and comprehensiveness of a scoping review demonstrate an attempt to avoid bias. The lack of a transparent and systematic methodology in the traditional narrative review leaves it open to bias through the possible omission of key sources.
Other Types of Literature Reviews

Maria Grant and Andrew Booth (2009) provide a helpful description of 14 common types of literature reviews, their methodologies, and perceived strengths and weaknesses of each type. Some of these other types of literature reviews include the following:

  • Rapid review—Similar to scoping reviews, these reviews are completed within a shorter timeframe than systematic reviews. Rapid reviews address a narrowly focused research question, as opposed to mapping all of the literature on a topic, as in a scoping review.
  • Meta-analysis—When applicable, the authors of a systematic review combine the quantitative results of included studies and analyze these combined results as a single dataset. The combination of multiple smaller research studies into a larger dataset often provides a more accurate statistical analysis. It is not appropriate to apply meta-analysis within a scoping review since studies must be comparable in objective and methodology in order to merge their data; scoping studies by design aggregate a more heterogeneous collection of articles.
Advantages, or When It Is a Good Idea to Use a Scoping Review Methodology

Scoping reviews are best employed when there is limited literature to inform the research question of interest. In our case, while we knew that structural approaches were not new to public health, we were fairly certain that few papers directly examined evaluation of such initiatives or used that specific terminology. When there are a limited number of studies of comparable methodology or of sufficient quality, a scoping review is a comprehensive and evidence-based methodology that can be used to answer the broader question, “What do we know about _______?” It can be thought of as more rigorous than a narrative review but less structured than a systematic review—somewhere in between the two.

Disadvantages, or When It Is Not a Good Idea to Use a Scoping Review Methodology

Since scoping reviews often lay the groundwork for a systematic review, it is typically not a good idea to conduct one if a systematic review already exists on the topic. One major disadvantage to the scoping review is that due to the broad nature of the review question, the findings may be similarly broad, requiring additional steps on the part of the authors to synthesize and draw useful conclusions from them. In addition, while a scoping review is usually conducted within a finite timeframe, it is still a major undertaking and a large amount of work. We would not recommend this methodology for solo researchers or those working within an extremely limited timeframe (<6 months). In these cases, a narrative review may be a more practical choice of methodology.

Finally, as previously mentioned, scoping reviews do not assess the rigor or quality of studies. The literature in our review ran the gamut with respect to quality. We view this as a limitation of the scoping review methodology and note it as such in our case. We would recommend conducting a systematic review if assessing rigor and quality is a priority.

In the following sections, we describe lessons learned from our scoping review, organized around the five-stage process described above. While the five stages make the scoping review process appear linear, Arksey and O’Malley note that the stages are “not linear but iterative” (p. 22). Our scoping process was indeed iterative and to some extent circular, upholding this observation. We first tested the search strings, reviewed the articles yielded, and then went back to the search strategy to investigate why some known papers were retrieved and others were not. This led to revisions to the search strings, re-reviewing of the articles yielded, and “rinse-and-repeat.” Collecting feedback from the subject experts on our team, as well as external expert colleagues, helped to identify problem areas as they arose, allowing us to readjust our search strategy before the data extraction and analysis phases were complete. Lessons learned from these steps are described next.

Practical Lessons Learned: Challenges in the Process
Stage 1: Creating a Well-Defined Research Question

The subject of a scoping review should be broad (usually asking “What does the literature say about _____?”), and Arksey and O’Malley warn against creating an overly specific or leading research question. At the same time, the step of clearly defining the research question should not be skipped over completely. What is the context or population of interest? Which aspects, outcomes, or variables are critical to the review, and which are not? Reflecting on these questions before searching can save time in reviewing article abstracts, as this information translates directly to the database search string. For example, we decided to search for evaluation studies of structural change approaches dealing with the general topics of health promotion and wellness. Narrower topics, such as tobacco and obesity prevention, were deemed too specific and were not included in our research question. By making this decision at the outset of our review, we were able to exclude these terms from our search string, avoiding the task of reviewing many unrelated abstracts.

Stage 2: Identifying Relevant Studies
Recruiting the Right Team

According to Levac and colleagues, one critical factor in determining relevant studies and creating manageable boundaries around the search is to assemble a capable team with the right expertise. Team members’ abilities and expertise will directly impact efficient and successful execution of the study. First and foremost, it was essential to have a content specialist librarian (in this case, a health sciences specialist) act as consultant to the team. Because the search strategy required many iterations, the librarian helped us navigate the complex twists and turns of the search. A librarian or other search expert can be invaluable to the process due to their expertise in library databases, search strings, and literature review methodology. Second, we consulted an external peer who is a content expert in structural change to review our search strategy and preliminary findings. While this expert had a similar content background to our study team members, their external review of our process and findings was invaluable to identifying key words and studies that we had missed.

Stage 3: Study Selection
Search Strategies and Emerging Vocabulary

We encountered perhaps our greatest challenge with the key words used in our search strategy. As structural change research has evolved, so has the nature of how structural change approaches are named and discussed in the literature. In the new and rapidly evolving fields in which scholars conduct scoping reviews, they are likely to encounter a similar diversity of vocabulary. As expected, there was not a single, agreed-upon key word that we could search and be confident that we had identified all structural change literature. In total, more than 15 distinct terms have been used since 2005 to describe the structural change approach. Certain essential articles were missed in the first iteration of our search because some of these variations were missing from the search strategy. It was impossible to know these terms were missing until the search results themselves introduced us to their usage. That is, we couldn’t use in the search what we didn’t yet know. This type of scenario is common in scoping studies, as reviewers become increasingly informed on their topic as the research progresses. Arksey and O’Malley describe a similar, post hoc method of devising and applying criteria “based on increasing familiarity with the literature” (p. 26).

The input of subject experts and alternate means of identifying literature can be extremely helpful in identifying any gaps in preliminary database searches. A certain study may prove elusive in a database search, but a colleague with knowledge of the topic may know of it, or handsearching (manually searching a journal’s table of contents) may turn it up instead. After consulting with subject experts and all members of our team, we identified multiple highly relevant articles that, in theory, should have appeared among the search results.

To improve our search string, we compared it with the database records of the missing studies identified by experts and investigated why these studies weren’t retrieved in our search. In most cases, the authors had used a slight variation of our search terms—for example, “environment change” rather than “environmental change.” We added these terms to our search string and ran the search a second time. Doing this greatly improved our results. If we had determined that it was too late in the process to do this or that adding the new terms to the string would not improve search results, we nevertheless could have included these articles in the review, simply by saving them in a separate group as “expert-identified studies.”
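
One concrete way to run this comparison is to check which expert-identified records are absent from the retrieved set and then examine how those records are worded and indexed. The short sketch below uses hypothetical PubMed IDs (PMIDs) purely for illustration; it is not our actual workflow:

    # A minimal sketch (hypothetical PMIDs): compare the records retrieved by the
    # database search with the records suggested by subject experts.
    retrieved_pmids = {"26296700", "24452862", "25155931"}   # from the search export
    expert_pmids = {"26296700", "23980251"}                  # suggested by experts

    # Records the experts expected but the search missed; inspecting their titles,
    # abstracts, and index terms is what surfaces variants such as
    # "environment change" versus "environmental change".
    missing = expert_pmids - retrieved_pmids
    print("Expert-identified studies not retrieved:", sorted(missing))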

Determining Reasonable Inclusion and Exclusion Criteria

Some important decisions we made along the way related to inclusion and exclusion criteria. Levac and colleagues’ recommendation to “combine a broad research question with a clearly articulated scope of inquiry” provides helpful guidance for developing strong criteria without sacrificing comprehensiveness in the categories of literature reviewed. As our search strategy table (Table 1) shows, we limited the search to publication years 2005-2016 because we knew that the federal government had begun funding structural change initiatives around 2008; thus, we felt 2005 was a safe starting point for our search. We also limited the search to peer-reviewed, US-based studies. There has been research conducted on structural change internationally, and many evaluation studies are disseminated via gray literature/reports. However, applying these inclusion criteria allowed us both to create manageable boundaries for our search and to ensure that our research findings would be applicable for researchers in the field. In this way, it was important that our team had content knowledge of the research question in order to inform the inclusion/exclusion criteria in a nuanced way.

Table 1. Search strategy summary for the scoping review.

Inclusion criteria

  • Publication years 2005-2016
  • Peer-reviewed publication
  • English language
  • Focus on evaluation or process evaluation of structural change
  • Studies conducted in the United States

Exclusion criteria

  • Gray literature, editorials/commentaries, letters, conference abstracts
  • Theoretical/conceptual papers on structural change
  • Focus only on monitoring structural change (and not evaluation)

PubMed search string (final version)

(("evaluation studies as topic"[mesh] OR ("evaluation"[tw] AND "studies"[tw] AND "topic"[tw]) OR "evaluation studies as topic"[tw] OR evaluate[tw]) OR evaluating[tw] OR evaluation[tw] OR "evaluation studies"[Publication Type] OR "process evaluation"[tw] OR "program evaluation"[mesh] OR "program effectiveness"[tw] OR "pilot projects"[mesh] OR "health impact assessment"[mesh]) AND ("structural change"[tw] OR "structural approach"[tw] OR "structural approaches"[tw] OR "environmental approaches"[tw] OR "policy approach"[tw] OR "policy approaches"[tw] OR "systems approach"[tw] OR "systems approaches"[tw] OR "environment approach"[tw] OR "environmental intervention"[tw] OR "environmental interventions"[tw] OR "environmental change"[tw] OR "environmental changes"[tw] OR "environment change"[tw] OR "environment changes"[tw] OR "built environment"[tw]) AND ("2005/03/21"[PDAT] : "2016/03/17"[PDAT])
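
For researchers who want to run or re-run such a search programmatically rather than through the PubMed web interface, the sketch below shows one possible approach using the Biopython package’s Entrez module (a wrapper for NCBI’s E-utilities). This was not part of our original workflow, and the query shown is an abbreviated placeholder for the full string above:

    from Bio import Entrez  # Biopython wrapper around NCBI's E-utilities

    Entrez.email = "your.name@example.org"  # NCBI asks callers to identify themselves

    # Placeholder query; in practice, paste in the full search string from Table 1.
    query = '("structural change"[tw] OR "structural approach"[tw]) AND evaluation[tw]'

    # esearch returns the total hit count and a list of PubMed IDs (PMIDs).
    handle = Entrez.esearch(db="pubmed", term=query, retmax=2000)
    record = Entrez.read(handle)
    handle.close()

    print("Total results:", record["Count"])
    pmids = record["IdList"]  # export for screening in citation-management software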

Time Limitations and Missing Articles

All literature searches involve a balance between high precision (narrow) and high recall (broad) searches (see the examples in Table 2). Narrow searches retrieve fewer articles overall, and a higher percentage of them will be relevant. Broad searches retrieve many more results, making it less likely that relevant articles have been missed but increasing the time it takes to review them.

In a systematic review where the goal is total comprehensiveness, searchers favor high recall in order to eliminate the possibility of missing anything. As a result, they may review tens of thousands of article abstracts, extending the timeframe of their studies to a year or longer. Most researchers conducting scoping reviews also want to be comprehensive in their searches, but are operating under much tighter time restrictions and therefore seek a better balance between precision and recall.

Our first search strategy was designed in favor of precision due to time restrictions, as well as the aforementioned issue of simply being unaware of many of the synonyms in use for “structural change.” Defining a specific research question and inclusion/exclusion criteria at the outset saved time in the search by allowing us to create a targeted search strategy. The initial search string we created yielded 119 results, 16 of which were relevant to the topic. After consulting with a peer expert in structural change research, it was determined that a significant percentage of relevant literature had not been retrieved with this search—that is, the search was too precise.

Table 2. Precision versus recall search examples.

High precision (narrow) search strategy example:

“sugar-sweetened beverages” AND taxes AND BMI

High recall (broad) search strategy example:

(sugar-sweetened beverage* OR soda OR soft drink* OR SSB OR SSBs) AND (tax* OR policy OR policies OR legislat* OR law* OR regulat* OR program* OR surcharge*) AND (BMI OR body mass OR weigh* OR overweight OR overnutrition OR obes*)

We revised the search strategy, incorporating newly discovered vocabulary terms and improving recall. The second search yielded 1,609 results, from which an additional 52 relevant articles were identified. This extended the timeline of our review by roughly 1 month. However, given that we discovered a significant amount of literature to add to our study, the two-stage iterative search process was an instrumental factor in the success of our review.
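
To make the precision-recall trade-off concrete, the short sketch below computes the precision of each search from the counts reported above. Recall cannot be calculated directly, because the true number of relevant articles in the literature is unknown, and treating the original 16 relevant articles as retrieved again in the second search is our assumption for illustration:

    # Precision = relevant results retrieved / total results retrieved.
    def precision(relevant, retrieved):
        return relevant / retrieved

    # First (high-precision) search: 119 results, 16 relevant.
    print(f"Search 1 precision: {precision(16, 119):.1%}")     # roughly 13%

    # Second (high-recall) search: 1,609 results; 16 + 52 = 68 relevant,
    # assuming the original 16 articles were retrieved again.
    print(f"Search 2 precision: {precision(68, 1609):.1%}")    # roughly 4%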

Iterative Process—Knowing When to Stop

As mentioned by Arksey and O’Malley, the scoping review process is iterative, not linear. After the second stage of our search, we observed firsthand how our search was improved as we familiarized ourselves with the literature via the scoping process. As with many literature reviews, it is tempting to revise a scoping review and add to collected data many times, and difficult to know when to stop searching. However, scoping reviews are conducted rapidly—or at least within a finite timeline—unlike systematic reviews, which typically take much longer to finish. Grant and Booth (2009) point out that the scoping search may be complete or only partially complete, depending on the timeline and scope of inquiry. Seeking a balance between the dual needs of conducting a comprehensive literature search and working within the parameters of the project at hand is essential to successful completion of a scoping review. Consequently, we recommend setting a deadline after which new studies will not be incorporated into the review. This deadline may be stated in the review’s methodology to ensure transparency.

After having searched multiple scholarly databases, scanned the bibliographies of identified articles for additional relevant studies, and consulted with at least one subject expert (and an information expert such as a research librarian, if possible) to identify possible oversights, it can be reasonably concluded that a sufficient number of articles have been identified to properly map the literature. If deemed beneficial, the search may be revised and re-run a second or third time, but this process is not meant to be repeated indefinitely. While conducting our search in two stages was critical to identifying all relevant literature, if we were to repeat this process indefinitely, we would see diminishing returns and a decreasing number of relevant articles. Hilary Arksey and Lisa O’Malley recommend including a list of articles that have been identified—but not reviewed—in an appendix, as they may be of interest to readers, even if they do not meet the specific criteria of the scoping review.

Stage 4: Charting the Data
Teamwork Is Key!

Teamwork was key for the detailed stages of data extraction and charting. As a team, we developed the data charting form and had several discussions about which variables to extract and why. Just as many terms were used to describe our topic (structural approaches), myriad terms were also used across the studies to describe study design, methods, and tools. For example, while some studies explicitly stated in the title that they had conducted a “process evaluation,” others did not use this term but described their project with references to “dose, reach, fidelity” and other distinctly process evaluation elements. As a team, we decided how to deal with both larger decisions and detailed issues like this. For data extraction, we followed a process similar to qualitative coding. Two team members independently extracted data from two studies and then met to compare similarities and discrepancies in their data entry. From here, further revisions to the table were made. Next, one team member reviewed all studies and extracted detailed data into a table exclusively for the study team’s records; intermittently, a second team member also reviewed the set of studies and quality-checked the data entry. Once all studies had been checked for accuracy in our detailed table, we created a summary table for publication. This process of meeting and redefining variables and determining what to include in data extraction happened throughout this stage and the next. Again, while it is described as a relatively linear process here, each of these steps was highly iterative.
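
As one illustration of how such a charting form and double-entry check might be set up, the sketch below uses hypothetical field names rather than the variables we actually extracted:

    import csv

    # Hypothetical charting fields; the actual form used different variables.
    FIELDS = ["study_id", "year", "setting", "study_design",
              "evaluation_type", "process_measures", "outcomes"]

    # Write a blank template that each reviewer fills in independently.
    with open("charting_form.csv", "w", newline="") as f:
        csv.writer(f).writerow(FIELDS)

    def flag_discrepancies(entry_a, entry_b):
        """Return the fields where two reviewers' entries for a study disagree."""
        return [field for field in FIELDS if entry_a.get(field) != entry_b.get(field)]

    # Example: two reviewers chart the same (hypothetical) study.
    reviewer_1 = {"study_id": "smith2010", "evaluation_type": "process evaluation"}
    reviewer_2 = {"study_id": "smith2010", "evaluation_type": "outcome evaluation"}
    print(flag_discrepancies(reviewer_1, reviewer_2))  # ['evaluation_type']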

Stage 5: Collating, Summarizing, and Reporting Results
Return to Step 1

While reporting results appears to be the last stage in the framework, as a team we used this opportunity to return to the first step and review the research question to ensure that our summary findings table effectively answered our original questions. We then held several team meetings to review the summary findings table, at which members offered interpretations of the numbers and their implications for our study findings and recommendations, guided by discussion questions: (1) Where are the gaps in the literature? (2) Where are there still challenges? (3) Where are the opportunities to advance the evaluation literature? One member took notes on the team’s discussion and circulated a “preliminary findings” document that was revised iteratively over the course of several weeks.
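
If the completed charting table is kept as a spreadsheet or CSV file, collating can begin with simple counts per category. The sketch below assumes the hypothetical column names from the charting example above and illustrates the collation step rather than reproducing our actual analysis:

    import pandas as pd

    # Hypothetical: load the completed charting table (one row per included study).
    charted = pd.read_csv("charting_form.csv")

    # Count studies by evaluation type and by publication year to seed the
    # summary findings table discussed at team meetings.
    print(charted["evaluation_type"].value_counts())
    print(charted.groupby("year")["study_id"].count())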

Consulting Stage: Highly Recommended

While the original Arksey and O’Malley framework lists this step as “optional,” Levac and colleagues assert that it is an “essential component” of scoping reviews. Our experience affirms that consulting both librarians and peer content experts can add significant value to a review. As we have noted, seeking consultation should happen across all stages of the review; in our case, consultations were most valuable during the formation of the search strategy and the earlier stages of collecting studies. However, we also turned to experts to provide feedback on our emerging/preliminary findings to assess whether they were novel and relevant enough to advance the field in a meaningful way.

Conclusion

Scoping reviews are of particular applicability in emergent research areas where formal systematic reviews have not yet been conducted and/or the size and scope of the literature are unknown. They may be restricted by time and a lack of depth in the assessment of included studies. However, this methodology has tremendous potential in its ability to examine and map the literature on a given topic through an informative, iterative research process. Scoping reviews not only identify the scope of the literature and gaps therein, as others have noted, but can also identify vocabulary terms that would aid in maximizing recall/sensitivity in a systematic review search. This underlines their utility as a precursor to a formal systematic review, or possibly to several systematic reviews, if multiple categories of literature that are identified through the scoping review demonstrate potential for closer investigation.

Exercises and Discussion Questions
  • How would our methodology change if we had conducted a systematic review rather than a scoping review?
  • What advantages does a systematic review provide that a scoping review does not?
  • What are the disadvantages of employing scoping review methodology?
  • How did our case demonstrate that the scoping review is an iterative and not a linear process?
Further Reading
Colquhoun, H. L., Levac, D., O’Brien, K. K., Straus, S., Tricco, A. C., Perrier, L., … Moher, D. (2014). Scoping reviews: Time for clarity in definition, methods, and reporting. Journal of Clinical Epidemiology, 67, 1291-1294. doi:http://dx.doi.org/10.1016/j.jclinepi.2014.03.013
O’Brien, A., & McGuckin, C. (2016). The systematic literature review method: Trials and tribulations of electronic database searching at doctoral level. In SAGE Research Methods Cases. doi:http://dx.doi.org/10.4135/978144627305015595381
Pham, M. T., Rajić, A., Greig, J. D., Sargeant, J. M., Papadopoulos, A., & McEwen, S. A. (2014). A scoping review of scoping reviews: Advancing the approach and enhancing the consistency. Research Synthesis Methods, 5, 371-385. doi:http://dx.doi.org/10.1002/jrsm.1123
Web Resources
Scoping Reviews. (2016). HLWIKI Canada. Retrieved from http://hlwiki.slais.ubc.ca/index.php/Scoping_reviews
References
Arksey, H., & O’Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19-32. doi:http://dx.doi.org/10.1080/1364557032000119616
Armstrong, R., Hall, B. J., Doyle, J., & Waters, E. (2011). “Scoping the scope” of a Cochrane review. Journal of Public Health, 33(1), 147-150. doi:http://dx.doi.org/10.1093/pubmed/fdr015
Daudt, H. M. L., van Mossel, C., & Scott, S. J. (2013). Enhancing the scoping study methodology: A large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Medical Research Methodology, 13, 48. doi:http://dx.doi.org/10.1186/1471-2288-13-48
Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26, 91-108. doi:http://dx.doi.org/10.1111/j.1471-1842.2009.00848.x
Higgins, J. P. T., & Green, S. (Eds.). (2011). Cochrane handbook for systematic reviews of interventions, version 5.1.0. The Cochrane Collaboration. Available from http://handbook.cochrane.org/
Levac, D., Colquhoun, H., & O’Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science, 5, 69. doi:http://dx.doi.org/10.1186/1748-5908-5-69
Lieberman, L., Golden, S. D., & Earp, J. A. L. (2013). Structural approaches to health promotion: What do we need to know about policy and environmental change? Health Education & Behavior, 40, 520-525. doi:http://dx.doi.org/10.1177/1090198113503342
Lieberman, L. D., & Earp, J. A. L. (2015). The evidence for policy and environmental approaches to promoting health [Special issue]. Health Education & Behavior, 42(Suppl. 1).
Peters, M. D., Godfrey, C. M., Khalil, H., McInerney, P., Parker, D., & Soares, C. B. (2015). Guidance for conducting systematic scoping reviews. International Journal of Evidence Based Healthcare, 13, 141-146. doi:http://dx.doi.org/10.1097/XEB.0000000000000050
