
Longitudinal Research: Transition of Adult Basic Education Students to College

Case
Published: 2017 | Product: SAGE Research Methods Cases Part 2

Abstract

The Adult Transitions Longitudinal Study was a 5-year study tracking the educational outcomes of adult students participating in an adult basic education-to-college transition program. This case describes three key principles that guided this longitudinal study and the strategies the research team used to enact each of these principles. These three principles are as follows: (a) get the data, (b) get high-quality data, and (c) keep the data organized. Although simple, these three principles proved useful to the Adult Transitions Longitudinal Study team in making decisions during the course of a long and complicated study. The goal of this case is to help future researchers to navigate the unique aspects of longitudinal studies more successfully, as well as to estimate the additional resources needed to stay true to the principles.

Learning Outcomes

By the end of this case, students should be able to

  • State three key principles of data collection and management when conducting a longitudinal study
  • Describe three strategies for tracking longitudinal study participants over time
  • Explain why researchers will need extra resources to track and collect data from participants over the years of the study
Project Overview and Context

The benefits of attending college are now well documented in the research. For example, median income for a full-time working college graduate in 2011 was more than US$42,000 per year, compared with only US$21,000 for full-time workers with no more than a high school diploma (Julian & Kominski, 2011). Even “completing an associate’s degree appears to be associated with a 15 to 27 percent increase in annual earnings” (Kane & Rouse, 1999, p. 77). The difference in earnings between no college and a degree can equal US$1 million more over the course of one’s working life (Carnevale, Cheah, & Hanson, 2015). Yet, research indicates that older adult college students are more likely than traditional students entering straight from high school not to enroll in college, to drop out once enrolled, or to “stop out” of college (drop out but then re-enroll; Drekmeier & Tilghman, 2010). How can we help adults who dropped out of high school and then earned their GED (General Educational Development credential, or high school equivalency diploma), adult immigrants to the United States, or adults with a high school diploma who did not attend college right away, to enroll and succeed in college so they can gain the benefits of higher earnings over a lifetime?

The goal of the Adult Transitions Longitudinal Study (ATLAS), funded by the Nellie Mae Education Foundation and implemented by the University of Massachusetts Amherst and World Education/New England Literacy Resource Center, was to measure the educational outcomes of adult students participating in an adult basic education (ABE)-to-college transition program. College transition programs aim to equip low-skilled, non-traditional adult basic education students—those at risk of being required to take remedial English, writing, or math classes in college—with the academic, financial planning, college knowledge, and study skills needed to succeed in college. However, no study had followed such adults longitudinally for multiple years after their participation in such a transition program to gauge their educational outcomes as well as the factors that support or hinder them in enrolling, persisting, and succeeding in post-secondary education.

The ATLAS research team consisted of this author as principal investigator (PI) and a graduate student research coordinator—along with a series of 10-hr/week graduate research assistants and two short-term faculty consultants, over the 5 years of the study—following 227 adult students who entered one of 11 New England–based ABE-to-college transition programs in fall 2007 or spring 2008 and who agreed to participate in the ATLAS study. We collected information over 5 years about these adult students’ educational outcomes, including whether—by the end of the study—they had (a) enrolled in college, as gauged by completing three transferable college credits or by entering college within a year of participating in a 15-week college transition course; (b) persisted in college, as gauged by the number of semesters of college completed, and by their overall educational trajectory (never enrolled; enrolled but dropped out; or enrolled and still enrolled, even if “stopped out,” or graduated by the end of the study); and/or (c) succeeded in college, as gauged by the total number of college credits they had earned, or by whether they achieved a “tipping point” of 30 non-developmental college credits (the number of credits research indicates will give non-traditional college students the momentum to continue).
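
To make these three outcome measures concrete, here is a minimal sketch, in Python, of how they might be operationalized from a participant record. The field names and record layout are illustrative assumptions, not the ATLAS team’s actual instruments; the thresholds (three transferable credits, the 30-credit “tipping point”) come from the definitions above.

    # Hypothetical sketch of the outcome coding described above.
    # Field names are illustrative assumptions, not ATLAS's actual scheme.

    def code_outcomes(record):
        """Derive the three ATLAS outcome measures for one participant."""
        # (a) Enrolled: three transferable credits, or entry within a year
        # of the 15-week transition course.
        enrolled = (record["transferable_credits"] >= 3
                    or record["entered_within_year"])
        # (b) Persisted: semesters completed plus overall trajectory.
        persisted = {"semesters": record["semesters_completed"],
                     "trajectory": record["trajectory"]}
        # (c) Succeeded: total credits, and the 30 non-developmental-credit
        # "tipping point" described in the text.
        succeeded = {"total_credits": record["total_credits"],
                     "tipping_point": record["non_dev_credits"] >= 30}
        return {"enrolled": enrolled, "persisted": persisted,
                "succeeded": succeeded}

    print(code_outcomes({
        "transferable_credits": 6, "entered_within_year": True,
        "semesters_completed": 4, "trajectory": "still enrolled",
        "total_credits": 33, "non_dev_credits": 31,
    }))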

Based on the limited existing research into the factors influencing enrollment, persistence, and success among adults in this population, we developed hypotheses about whether these adults were helped or hindered to attend college based on their goals, their individual characteristics (such as age, literacy level, and financial status), the supportive people or programs in their lives, and/or the obstacles in their lives (such as poor health, lack of academic skills, and working full time while in college).

With data about these adult students over 4 years, including information from the adults themselves and from the transition programs in which they participated, we identified their longer-term college outcomes and the factors that helped them or hindered them from participating in and succeeding in college. The results from this study will help other adult basic education transition programs to design more appropriate programs for such non-traditional adult students, thereby increasing the likelihood that more adults will attend college.

Research Design

The difference between longitudinal studies and other types of research is that the researchers follow the “subjects” or participants in the study for a longer period of time, usually many years. Researchers also usually collect data from participants more than once, in multiple “waves” of data collection. ATLAS was not an experimental study, as the transition programs could not enroll enough participants for us to create randomly assigned control and experimental groups of subjects. Rather, it was a “panel” study, where we tried to follow and collect data from the same original group of 227 adult students each year until the end of the study. For this study, we collected quantitative questionnaire data through four annual questionnaires with all participants we could reach between 2008 and 2011, and we also collected more in-depth qualitative data from yearly interviews with a subsample of 24 randomly selected participants from among the whole sample of 227.

The questionnaires, which we administered in person in the first year (Year 1); in person, by phone, or by computer in Years 2 and 3; and in person or over the phone in Year 4, included more than 100 questions each year about participants’ progress toward or in college and their demographic and individual characteristics. The in-depth interviews, conducted in person or over the phone in each of the 4 years, asked subsample participants to tell us in more detail about whether and to what extent they were reaching their college goals as well as the obstacles or supports affecting them. As a participant’s literacy skills could be an important factor in whether they went to or stayed in college, we administered a basic literacy test during Year 1 and again in Year 4. We also acquired college transcripts for all ATLAS participants who ever enrolled in college, and we used data that the 11 college transition programs sent to us about the program and about participants’ attendance in and/or completion of the 15-week transition program, so that we could determine which transition program features were important in supporting adult student participants to enroll, persist, or succeed in college.

Practical Lessons Learned

Every research initiative should select a small number of core principles that can serve as an easy-to-remember touchstone for all members of the research team. When questions arise about how to conduct the study, or how to solve emerging problems, the team should return to these guiding principles to judge any strategy or solution: does the solution or strategy meet one or more of these principles? If not, the solution may not be well targeted to contribute to the overall study.

We—the ATLAS research team—used three key principles that we now recommend for guiding successful longitudinal studies:

  • Get the data;
  • Get high-quality data;
  • Keep the data organized (for confidentiality and analysis).

The following sections describe lessons learned about each of these three key guiding principles for longitudinal studies.

Get the Data

Although this principle sounds obvious if one is doing research, as research is nothing without data, the nature of a longitudinal study means that the researchers must gather data multiple times over several years, trying to reach as many original participants as possible. This is not an easy feat, even when people reside in one region of the country (the six New England states, in this case). The reasons are varied but important: participants may move, may grow tired of answering questions, may be embarrassed if (in this case) they did not achieve their goals, or may have a myriad of challenges or problems in their lives to deal with and not feel they have time or serenity to complete a questionnaire or interview. We also know that people who experience poverty, as many in our sample did, are more likely to face turbulence in their lives, such as losing or changing jobs, dealing with unsupported health issues, working more than one job, and facing logistical issues in transportation or child care, any of which may require them to move or just be less available to respond to our yearly requests to complete a questionnaire.

We strongly encouraged every participant to complete all four questionnaires. However, like most multi-wave studies, we could not obtain 100% participation for any of the follow-up questionnaires (Year 2, 3, or 4). The lowest questionnaire completion rate was in Year 2, when we had not yet refined the art of effectively tracking participants and/or successfully obtaining their involvement. We had 227 participants in our original questionnaire sample (Year 1). By the second year, only 149 participants completed the Year 2 questionnaire. Sadly, one of our participants passed away during the second year of the study, leaving 226 participants with whom we could follow up; therefore, the completion rate for Year 2 was 65.9%. After improving our tracking methods and the way in which we solicited participation, we had much higher completion rates for the Year 3 questionnaire, with 189 participants (83.6%), and for the Year 4 questionnaire, with 208 participants (92%).

Overall, 133 participants completed all four questionnaires, and 195 participants completed at least three questionnaires. Eight participants completed the Year 3 questionnaire in 2010 but not the Year 4 questionnaire in 2011; therefore, we have follow-up questionnaire data from 2010 or later from 216 of 226 participants in total. Thus, for 95.6% of our participants, we have questionnaires providing details about many aspects of their lives for at least 3 years since they first began attending the college transition program.
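
These rates are simple to verify from the counts reported above; a quick check in Python (all counts taken directly from the text):

    # Verifying the completion rates reported above (counts from the text).
    eligible = 227 - 1  # one participant passed away after Year 1
    for year, completed in {"Year 2": 149, "Year 3": 189, "Year 4": 208}.items():
        print(f"{year}: {completed}/{eligible} = {completed / eligible:.1%}")
    # At least one questionnaire from 2010 or later: the 208 Year 4
    # completers plus the 8 who completed Year 3 but not Year 4.
    print(f"2010-or-later data: {(208 + 8) / eligible:.1%}")  # 95.6%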

How did we enact the principle of “getting the data”? We employed a number of strategies for encouraging participants to stay interested in the study and to maintain contact with us, and for tracking participants’ whereabouts from year to year. We learned about some of these strategies from colleagues who ran the Longitudinal Study of Adult Learning at Portland State University (Oregon), and others we generated on our own.

Strategies for Encouraging Participants to Stay in the Study

The first and most important strategy is to pay participants a stipend for their time. We as researchers were paid for our time; why shouldn’t participants receive some compensation for the time they give us in completing a questionnaire or being part of an interview? In the first year, this amount was US$25 for completing the first face-to-face questionnaire; those who were also part of the subsample being interviewed were paid an additional US$15. The stipend then rose by US$15 each year, given only to those participants who completed the questionnaire and/or interview. In the final year, when we needed to interview participants personally and asked them to come to a program reunion where we could administer the follow-up literacy test, we paid US$100 to those who both completed the questionnaire and took the test.
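
As a rough budgeting illustration, the rising stipend schedule can be tabulated as below; the Year 2 and Year 3 amounts are our inference from the US$15-per-year rule, not figures stated in the case.

    # Illustrative stipend schedule, inferred from "US$25 in Year 1,
    # rising by US$15 each year." Year 2 and 3 amounts are inferences.
    for year in range(1, 5):
        stipend = 25 + 15 * (year - 1)
        print(f"Year {year}: US${stipend} per completed questionnaire")
    # Interview-subsample members earned an extra US$15 in Year 1, and
    # Year 4 participants who also took the literacy test received US$100.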

Another enticement in each follow-up wave (Years 2, 3, and 4) was a lottery for those participants who completed that year’s questionnaire. We told participants that, if they completed the questionnaire by a specific date, we would put their name into a “hat” as eligible for a US$250 prize. We then selected two names and paid those two participants. This strategy served not only to bring in a few more completed questionnaires but also to give participants an incentive to complete them sooner.

Another strategy was to offer participants multiple ways to complete the questionnaire in Years 2 and 3. We put the questionnaire online and sent participants a link; they could complete the questionnaire over multiple visits to the site without losing information. We offered to conduct the questionnaire over the phone for those participants who did not have a computer or who did not want or were not able to do the questionnaire via the Internet. Participants could set up a time to call our toll-free phone number, or a time for a member of the research team to call them, be asked the questions verbally, and have the researcher enter the data electronically. (For those on cell phones, we offered to pay them US$10 to cover their minutes.) Finally, we went door to door across New England to pick up as many of the remaining participants as possible. Through these efforts, in Year 4, we completed 208 questionnaires in total: 189 full questionnaires (60-90 min) conducted by phone or in person, three full questionnaires completed independently by participants online, and 16 abbreviated questionnaires conducted over the phone.

We felt strongly that we needed to maintain regular contact with participants during the course of the study and not wait until it was time to collect data each year. We had numerous strategies for keeping in touch with and building ownership in the study among the participants. We sent out a yearly newsletter to all participants (at either the latest email or physical address we had for them), giving them an update on the study, notifying them of the dates of the next wave of data collection, announcing the lottery, and telling them over and over how we thought this study—which would not be possible without their participation—could and would help other adult students like them, in the future, to have access to high-quality ABE-to-college transition program services and, hopefully, go on to college. A significant number of participants did report to us that they might have dropped out of the study if they hadn’t heard from us so often about the value of their time and participation in the research. Others expressed gratitude to us that someone felt their lives and efforts were important enough to study, as well as surprise that there was enough interest in adults “like them” to conduct research about their educational goals and challenges.

Strategies for Tracking Participants’ Whereabouts

Keeping the most up-to-date contact information for participants was a top priority for our research team. If we couldn’t reach or find participants from year to year, we could not collect data from them. We felt that every person who dropped out was a significant loss to the study and would reduce the power and value of our results.

The first strategy we used was to fill out a contact sheet with the participant during the first, face-to-face data collection. In addition to the participant’s home and work (if he or she was working) phone, email, and physical address, we also asked each participant to give us the name, phone, email, and home address of three people—spouse or domestic partner, family member, fellow employee, counselor, neighbor, church member, or someone else close to them—who would always know how to reach the participant. We discouraged participants from naming boyfriends or girlfriends or other friends, feeling that these may be likely to change over 5 years. These three people were the single best resource for finding participants whose primary contact information changed once or more during the course of the study. We put all of this information into a secure Access database, managed and populated by one of the research team members.
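
To make the shape of this tracking information concrete, here is a minimal sketch of one participant’s contact record as a Python dataclass. The ATLAS team actually used an Access database; the field names here are assumptions based on the description above.

    # Sketch of a participant contact record as described above. The team
    # used an Access database; these field names are our assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class AlternateContact:
        name: str
        relationship: str  # e.g., family member, counselor, neighbor
        phone: str
        email: str
        address: str

    @dataclass
    class ParticipantContact:
        participant_id: str  # e.g., "F07-08-13"
        phone_home: str
        phone_work: str
        email: str
        address: str
        # Three people who would always know how to reach the participant.
        alternates: list = field(default_factory=list)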

We gave out “refrigerator magnets” and sent twice-yearly postcards to participants with the name of the study, the email address for the team, our toll-free phone number, and our website address, encouraging them to contact us at any time if they moved, got a new phone number, or got a new address. One of the research team members was assigned daily to check the phone voice mail and the email in-box, and to get back to participants within 24 hr. Many participants did use these avenues to contact us when their information changed. We also benefited from the mobile phone era; as most people now have cell phones with numbers that don’t change even when they move, it is much easier now to track participants by phone than it would have been even 10-15 years ago when most people used landlines with numbers that changed when they moved house.

When it was time again to collect data, we sent out an announcement via email or regular mail, letting participants know how they could complete the questionnaire and how we could help. For some participants, this was all they needed to either visit the website and complete the Internet version of the questionnaire or to call us to set up a time to complete the questionnaire by phone or in person. Others needed reminding or even coaxing. This inevitably involved one or more phone calls to non-responding participants. Each week, the PI, research coordinator, and three research assistants met to tally the completed questionnaires and then to divide the remaining participants among the five members of the research team. Every team member was responsible for calling his or her assigned participants every day until a time could be set up to do a phone or face-to-face questionnaire with the person. Phone interviews were scheduled whenever the participant was available: early in the morning, late at night, or on Saturdays and Sundays.
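
A sketch of that weekly divide-and-assign routine, with hypothetical participant codes and a simple round-robin split among the five team members:

    # Hypothetical sketch of the weekly routine described above: tally
    # completions, then divide non-responders among the five-person team.
    from itertools import cycle

    team = ["PI", "coordinator", "RA-1", "RA-2", "RA-3"]
    all_ids = {"F07-01-05", "F07-03-02", "F07-08-13", "S08-02-21", "S08-11-07"}
    completed = {"F07-03-02", "S08-11-07"}  # tallied at this week's meeting

    assignments = {member: [] for member in team}
    for member, pid in zip(cycle(team), sorted(all_ids - completed)):
        assignments[member].append(pid)  # call daily until a time is set
    print(assignments)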

The importance of having research team members who are willing to call, and keep calling, study participants on the phone cannot be overstated. Some team members were reluctant or shy about doing so; others (thankfully) were tenacious and saw it as a challenge to contact or find participants. The research coordinator for the last half of the study (Laura Gluck) was masterful in this regard; she called some participants almost 100 times. When she felt the participant was using caller-ID to avoid her calls, she bought temporary “burner” phones with different phone numbers so they wouldn’t recognize her number. She tracked down parents, sisters, brothers, grandparents, and others to trace the participant; in fact, at one point, she used for-pay Internet search apps to try to find particularly difficult-to-locate participants, but gave them up after one day because the information they provided was not as good or as recent as her own. The contact information for three people who would always know where to find the participant was particularly valuable, even though that too changed over time. She used innovative tactics to engage relatives and friends long enough to win their confidence about the study’s legitimacy and persuade them to give her the participant’s most current contact information. For example, when calling someone, she would immediately identify herself and then say that she would like to contact the participant because she had money to pay them (the stipend for completing a questionnaire or interview); this would usually catch the attention of the significant other long enough to find out more about who she was and what the call was about. If she reached participants who did not want to participate in the study any longer, she would not take “no” for an answer; she continued to talk to them about the importance of being in the study, and they usually acquiesced. If they still flatly refused, she would give them the option to do the shorter questionnaire, and some agreed. Even when participants still refused, she would call them back the next year and see whether they had changed their minds, or help them to change their minds. This persistent approach to tracking and finding participants was invaluable in ensuring that we found and collected at least two waves of data from more than 95% of our original participants after 5 years.

However, using these multiple strategies to get the data does increase the cost of the study. There was always one graduate research assistant whose entire job was to track participants and keep the database up to date. (Note: we found Access to be cumbersome and not useful for us as a contact database platform; in the end, we felt that Excel would have been sufficient, much easier to set up and use, and less time-consuming to keep current.) We spent additional money on newsletter production and mailing, postcards, and magnets. We racked up the miles driving to remote areas of New Hampshire and Maine to find and interview people face to face. The lottery cost US$500 for each of 3 years, which was minimal—we felt—for the return it delivered in participants’ completing questionnaires on time. Staff time to continually call participants, as well as to conduct 60-90 min interviews, was significant. Therefore, longitudinal study researchers need to consider carefully, and negotiate with funders, the budget needed just for tracking participants over time. Skimping on the budget will result in failure to achieve the objective of “getting the data.”

Get High-Quality Data

Again, this principle seems obvious: no one wants poor data, such as incomplete answers or answers indicating the participant did not understand the question. However, non-longitudinal studies have the benefit of interacting with participants only once, so participants are less likely to become tired of or resistant to participating in the study. Providing data in a longitudinal study is a significant commitment, even when one is somewhat compensated for one’s time. Having to answer similar questions each year gets old after three or four times. Thus, we needed to find a balance between asking all of the questions we wanted or needed to ask in order to answer our research questions and not making the questionnaires or interviews so long that participants skipped questions or failed to complete them. In addition, over 5 years, especially in a study implemented through a university with graduate student assistants, data collectors will change as students graduate and new students come on board. Thus, longitudinal study researchers will need to spend significant time thoroughly training multiple research team members over the years, so that the data collected are high quality and complete.

One strategy for getting high-quality data was to conduct as many questionnaires as possible in person or by phone, treating the Internet version as a less desirable option (for us). Along with the first wave of data collection, the last wave (Year 4) was extremely important for gathering information about our dependent variables (enrolling, persisting, or succeeding in college). To get the best quality data, we sought to conduct every questionnaire by phone or in person for these two waves. We devised an abbreviated, 30-min version of the final questionnaire for those participants who refused to complete a full 60- to 90-min questionnaire, and we offered this option in the last wave just to get at least the most basic data we needed. We figured that getting some high-quality data was better than getting no data if a participant did not want to spend the time doing the full questionnaire.

To address the issue of rotating research team members, we developed—at the end of Year 1—a manual for new graduate assistants. The manual described ATLAS policies and included interviewing techniques and tips, background information, tools, links, and other information team members needed to know. We coupled the requirement of reading through this manual with a full-day orientation for new team members when they started, where we could emphasize important points, answer questions, and check new members’ understanding of their roles and responsibilities.

However, the most critical element of training research team members is preparing them to collect top-quality data. We developed a process for intensive training on how to conduct a questionnaire in person or by phone. This training covered how to (a) sense whether the participant did not understand a question, and explain the question and the purpose behind asking it; (b) probe for more information when the participant’s answer was not clear or definitive; and (c) establish rapport with the participant.

This intensive training process included multiple steps. First, we asked the new team member/data collector to read through the ATLAS manual for all the tips we had collected and follow the links to published information about conducting effective interviews. Based on that background, the new data collector would next read over the study’s hypotheses, then read through the questionnaire-in-use, and write questions in the margins. Third, the new data collector would meet with the research coordinator to go over those questions. The research coordinator would, at that time, give additional tips based on conducting previous questionnaires. Next, an experienced member of the research team and the new data collector would go through the current questionnaire, question by question, discussing the purpose of each question: why were we asking it, and how would it help us to test our hypotheses or to learn more about dependent or independent variables? It is vital that any data collector/interviewer understand the rationale behind every question, so that if a participant wasn’t sure what a question was aiming at, the interviewer could explain in more detail that question’s role in the study. Without this understanding, we discovered, interviewers did only a poor-to-fair job of ensuring that we got the information we needed if and when participants did not immediately understand a question. (And don’t think that ample piloting, as we did, will make every question crystal clear to every participant!)

Next, the new data collector would conduct a walk-through of the questionnaire, using another, experienced research team member as the “participant.” The “participant” then would give feedback about problems that he or she noted. If needed, the new data collector would then administer another “test” questionnaire with a different member of the research team. Finally, if the research coordinator and other team members deemed the new data collector ready, he or she would conduct a real interview with a participant, followed by the research coordinator reviewing the results of the interview to ensure that the new data collector had collected full and appropriate answers.

Of course, we were not fully successful at getting high-quality data on all points. However, without adequate training, data collectors may collect poor data, resulting in unusable data for one or more waves or participants. Once again, intensive training such as this demands more time and thus funding in the budget’s salary line.

Keep the Data Organized (for Confidentiality and Analysis)

The best data do a study no good if they are lost or mismanaged, or if participants are put at risk because private information is handled haphazardly. Our strategies for “keeping the data organized” included immediately backing up questionnaire data on a second computer and a back-up disk. We created new back-up drives containing all accumulated data every 6 months, which the PI stored at her home. We also printed a copy of each questionnaire and interview to put into the participant’s numbered file folder, kept in a locked file cabinet in the PI’s locked office. We gave each participant a unique code number, based on their research cohort (fall 2007 or spring 2008), the transition program they attended, and a number for them (e.g., F07-08-13). In addition to printed questionnaires, files included participants’ signed informed consent, contact sheet (including their social security number, if they agreed to give it to us), paper versions of their literacy tests, and any correspondence they sent us, along with notes on anything they told us over the phone. (Social security numbers were never entered into any computer, but only kept on paper in the participant’s file.) When questions arose during cleaning of the electronic data, we could then easily find and double-check the data on paper.
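
The code-number scheme is simple enough to validate programmatically; a minimal sketch, assuming the “F07-08-13” pattern (cohort, program, participant) holds throughout:

    # Sketch of the participant code scheme described above, e.g. "F07-08-13":
    # cohort (F07 = fall 2007, S08 = spring 2008), program, participant number.
    import re

    CODE = re.compile(r"^(F07|S08)-(\d{2})-(\d{2})$")

    def parse_code(code):
        m = CODE.match(code)
        if m is None:
            raise ValueError(f"Malformed participant code: {code}")
        cohort, program, participant = m.groups()
        return {"cohort": cohort, "program": int(program),
                "participant": int(participant)}

    print(parse_code("F07-08-13"))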

With multiple waves of data, it’s important to code and organize data each year, with an eye to its use in the final analysis. We coded the answers to open-ended questions as soon as the bulk of the questionnaires were completed each year and then quickly entered these codes into the participant’s electronic file. The annual questionnaires included three types of questions. The first type was unique questions, asked only once during the study (e.g., demographic questions in Year 1). The second type was questions repeated each year (e.g., health problems in the past year, work status, and college status); these we compiled and standardized at the end of the study so that we could use the information from participants who did not complete all four questionnaires. The third type was “bookend” questions, asked only in the first and last waves (such as questions about college goals), so that we could see differences in these factors from beginning to end; answers to these questions were also compiled at the end of the study. Coding and cleaning the questionnaire data each year made it easier at the end to merge data across years and test our hypotheses.
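
Because participants could miss a wave, merging the yearly files on the participant code, rather than simply stacking them, keeps everyone’s partial records usable at the end. A minimal sketch using pandas, with hypothetical column names:

    # Sketch of merging annual waves on the participant code so that
    # participants who missed a wave keep their other years' data.
    # Column names are hypothetical.
    import pandas as pd

    year1 = pd.DataFrame({"pid": ["F07-08-13", "S08-02-21"], "age": [34, 41]})
    year2 = pd.DataFrame({"pid": ["F07-08-13"], "college_status_y2": ["enrolled"]})

    merged = year1.merge(year2, on="pid", how="outer")
    print(merged)  # S08-02-21 keeps Year 1 data, NaN for the missed wave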

I was taught that, in a mixed-method study, one type of data (qualitative or quantitative) takes the “lead” or is the primary source of data. In our study, the quantitative data from the four annual waves of data collection was primary. We analyzed that data first, to determine the key predictors of college enrollment, persistence, and success. Using that information, we then could go through the qualitative data from the interviews and code for those factors or predictors, to understand why such factors played an important role in participants’ transition to and trajectory in college. Knowing which data is primary in a mixed-method longitudinal study means that the team can devote more of their resources—time and equipment—to cleaning, managing, and storing that data first.

Conclusion

The three principles of get the data, get high-quality data, and keep the data organized may be core principles for any research study, but we feel that they are particularly important for longitudinal studies, where the scope of the study means that keeping track of participants and enlisting their support over time (for getting the data), training multiple data collectors as they come and go (for getting high-quality data), and having procedures for immediately backing up and securely storing the data (for keeping the data organized) are critical. Without these guiding principles, we might not have developed, tested, and then maintained the successful strategies that allowed us to get and manage good data for an expensive, time-consuming, and unique study.

Exercises and Discussion Questions
  • Based on the size and nature of the sample you might have in a longitudinal study on your own topic, what types of difficulties might you face in tracking participants, year after year?
  • What type of strategies would you use to keep track of participants in your study?
  • If you were negotiating with a potential funder of your longitudinal study, what arguments would you make for the extra resources your team would need to track, find, and collect data from your sample?
  • What are the implications of these three principles for determining roles and responsibilities of the multiple staff needed to successfully implement a longitudinal study?
Further Reading
Menard, S. (2002). Longitudinal research (2nd ed.). Thousand Oaks, CA: SAGE.
Reder, S., & Bynner, J. (2009). Tracking adult literacy and numeracy: Findings from longitudinal research. New York, NY: Routledge.
Sefton-Green, J., & Rowell, J. (2015). Learning and literacy over time: Longitudinal perspectives. New York, NY: Routledge.
Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York, NY: Oxford University Press.
Web Resources
Gayle, V., & Lambert, P. (2008). Longitudinal data analysis for social science researchers: Quantitative longitudinal research (References/Resources). Retrieved from http://www.restore.ac.uk/longitudinal/refs/reading_lda_08.pdf
Holland, J., Thomson, R., & Henderson, S. (2004). Feasibility study for a possible qualitative longitudinal study (Discussion Paper). Retrieved from http://www.restore.ac.uk/inventingadulthoods/feasability_study.pdf
Longitudinal Study of Adult Learning. (n.d.). Available from http://lsal.pdx.edu/reports.html
Milam, J. (2012). Longitudinal studies: Context, measures, construction and tools. Retrieved from http://www.ticua.org/collaborations/sm_files/Longitudinal%20StudiesMilamFinalRevision.pdf
References
Carnevale, A. P., Cheah, B., & Hanson, A. R. (2015). The economic value of college majors. Georgetown University Center on Education and the Workforce, McCourt School of Public Policy, Georgetown University. Retrieved from https://cew.georgetown.edu/wp-content/uploads/The-Economic-Value-of-College-Majors-Full-Report-web-FINAL.pdf
Drekmeier, K., & Tilghman, C. (2010). An analysis of inquiry, nonstart, and drop reasons in nontraditional university student populations. Paper presented at the Sixth Annual National Symposium on Student Retention, Mobile, AL. Retrieved from http://www.insidetrack.com/wp-content/uploads/2013/06/adult-student-research-paper-chris-tilghman-kai-drekmeier.pdf
Julian, T., & Kominski, R. (2011). Education and synthetic work-life earnings estimates (American Community Survey Reports). Retrieved from http://www.census.gov/prod/2011pubs/acs-14.pdf
Kane, T. J., & Rouse, C. E. (1999). The community college: Educating students at the margin between college and work. Journal of Economic Perspectives, 13, 63-84.
