Methodological Strategies to Examine the Data Quality and Respondent Experience of Self-Reporting Disability in Organizations


Working adults often do not disclose a disability in the workplace. In fact, many employees do not complete the requested disclosure form even when it is voluntary. Although concealing such information can avoid negative social consequences, such as negative reactions from co-workers and disability-based discrimination, there are also costs to both the worker and the organization. Workers forfeit legal protections from discrimination and the accommodations they need to perform at work. Organizations suffer from inaccurate records of their workforces and miss opportunities to create practices and policies that better support their workers. This case study describes data collected during a study examining factors that influence how employees with disabilities react to disability disclosure requests in the workplace. We demonstrate the use of an online survey design using a crowdsourcing platform. We also elaborate on the procedures and decision criteria used to assure data quality. In this case study, you will learn what decisions may be made to assure data quality in online data collection, considerations when conducting research with a hard-to-reach population, and considerations when asking participants sensitive questions in research.

Learning Outcomes

By the end of this case study, readers should be able to:

  • Make decisions to assure data quality from online data collection
  • Recognize challenges in conducting research with a hard-to-reach population
  • Apply considerations when asking participants sensitive questions in research

Project Overview and Context

Having accurate employee disability information allows organizations to identify opportunities to better support employees through policies and practices that might take the disabilities present into consideration. Additionally, some organizations—specifically those that are federal contractors—are expected to maintain 7% of their workforce as workers with disabilities (Rehabilitation Act, 1973, as amended in 2014). However, employers struggle to acquire accurate data. Past research has shown that individuals are reluctant to disclose stigmatizing information about themselves. Disability status is one example of personal information that is frequently concealed at work.

Under the Americans with Disabilities Act (ADA, 1990, as amended in 2008), a disability is an impairment that interferes with one or more major life activities or functions. There is a wide variety of disabilities and ability differences that may qualify as a disability in some contexts. Moreover, the disabilities may be concealable or invisible to observers. In fact, some survey data suggests that the majority of disabilities represented in the US white-collar workforce are invisible or concealable with some effort (Sherbin et al., 2017). This means that for most workers, disclosure of the disability is required for anyone to know about it. Even if the disability is visible, however, workers must disclose it to employers to gain protection and accommodations. Thus, disclosure of a disability, visible or invisible, is a critical first step toward gaining benefits and potentially preventing losses due to ambiguous or unexpected behavior that could be explained by a disability.

Despite the benefits of protections and qualifying for accommodations, social stigma concerns can lead employees to conceal their disabilities at work. Social stigma involves a negative evaluation from others based on some feature that is devalued (Crocker et al., 1998). The negative evaluation can be driven by sources such as negative emotions in observers. For example, physical health conditions and addiction might cue disgust in some observers. Mental illness is frequently associated with fear, especially if it is severe. Physical disability is often met with pity, as observers feel warmth and positive regard (e.g., wanting to help) but simultaneously treat the person as having low competence.

The stigma attached to disability also might make employees concerned about their privacy or the general appropriateness of organizations asking for such information (Tourangeau & Yan, 2007). For example, an employee might be especially worried about personal health information being released or shared with others around the organization. Employees might also view the collection of personal information as inappropriate in the workplace.

To address these concerns, organizations might be able to cue a sense of safety by establishing an inclusive work environment that reduces social stigma. Inclusive work environments encourage feelings of belongingness and value for employees being authentic at work (Jansen et al., 2014; Shore et al., 2018). The goal of our research was to examine whether inclusive work environments might encourage workers to feel more comfortable with being measured and formally disclosing a disability in the workplace.

Section Summary
  • Organizations may not have accurate estimates of the number of workers with disabilities or ability differences.
  • Working adults might be reluctant to complete surveys about personal information such as disability status at work.
  • The workplace environment contributes to low rates of survey completion and self-identification among workers with disabilities.

Research Design


Data were extracted from a larger survey data set of 951 working adults from the United States. The survey was distributed through Cloud Research, which interfaces with Amazon’s Mechanical Turk (MTurk) to recruit eligible participants. From the larger sample, we used 160 cases that (1) received the disability disclosure form and (2) reported a health or medical condition that qualified as a disability under the ADA (see the Measures section).

Survey Measures

Participants received a commonly used disability disclosure form, a validated self-report measure for disability status, measures of their reactions to the disclosure form, and measures of workplace inclusion.

For the disclosure form, we used the US Department of Labor’s Voluntary Self-Identification of a Disability form (VSID; Form CC-305; OMB Control Number 1250-0005; Expiration 1/31/2020). The VSID is intended to measure ADA-defined disability rates in the workplace by asking respondents to voluntarily indicate: “Yes, I have a disability (or previously had a disability),” “No, I don’t have a disability,” or “I don’t wish to answer.” See Appendix A for a sample VSID.

To identify participants who had a condition that qualified as a disability, we relied on a checklist of 18 examples of qualifying disabilities (e.g., blindness) that were included as part of the ADA description on the VSID form. Three additional items asked participants to report in general if they had a physical, psychological, and/or cognitive limitation. Participants who selected at least one of the specifically listed disabilities on the checklist, at least one of the general disability categories, or both were included in the study analyses.

Following the VSID form, participants rated their current emotions on five positive (alert, inspired, determined, attentive, active) and five negative descriptors (afraid, nervous, hostile, upset, ashamed) on a scale ranging from 1 (very slightly or not at all) to 5 (extremely; Thompson, 2007; Watson & Clark, 1994).

Then, participants rated the sensitivity of the disclosure request using three self-reported items. Absent any known scale for measuring the perceived sensitivity of questions, we designed survey items grounded in past work by Tourangeau and Yan (2007) to measure appropriateness (“To what extent do you feel that the information being requested on the Voluntary Self-Identification of a Disability Form was inappropriate or appropriate?”; −3 = Very inappropriate; +3 = Very appropriate), concerns about privacy (“To what extent do you feel that being asked to complete a Voluntary Self-Identification of a Disability Form is a violation of your privacy?”; 0 = Not at all a violation of my privacy; 3 = An extreme violation of my privacy), and concerns about others’ reactions to their responses on the VSID (“To what extent was the response you provided on the Voluntary Self-Identification of a Disability Form influenced by your concerns about how others might react to that information?”; 0 = Not at all influenced by others’ reactions; 3 = Highly influenced by others’ reactions).

Two additional measures were included to reflect perceptions of the work environment as inclusive or stigmatizing for disability. Inclusion was measured with the 16-item Perceived Group Inclusion Scale (Jansen et al., 2014), with items rated on a scale from 1 (Strongly disagree) to 5 (Strongly agree). A lead-in (“The organization that I work for…”) preceded items—eight from the belongingness subscale (e.g., “…gives me the feeling that I belong”) and eight from the authenticity subscale (e.g., “…encourages me to be who I am”).

Participants then completed a measure of anticipated disability stigma by rating (1 = Completely disagree; 7 = Completely agree) 12 items describing consequences for disclosing a disability at work (e.g., “I would lose my job” and “I would not be promoted”). The items were adapted from a measure originally created for sexual orientation disclosure.

Data Analysis and Results

The data were analyzed using correlations and partial correlations. First, we examined correlations among work environment variables (anticipated stigma and inclusion) to determine associations with reactions to disability measurement. We then repeated those analyses while controlling for the respondent’s age using partial correlations. In the preliminary analysis, we found that respondent age was correlated with work environment variables and the reactions to measurement variables. Controlling for respondent age ruled out an alternative explanation that the associations with reactions are potentially due to respondent age rather than the work environment.
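The partial-correlation step described above can be sketched in a few lines. This is a minimal illustration of the standard first-order partial correlation formula (correlation between two variables after removing the variance shared with a third), not the study's actual analysis code; the variable roles (e.g., a work environment variable, a reaction variable, and age as the control) are assumptions for illustration.

```python
# Minimal sketch of a first-order partial correlation, as used to
# control for respondent age. Pure-Python for clarity; in practice a
# statistics package would be used on the real survey data.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_r(x, y, z):
    """Correlation of x and y, controlling for z."""
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```

For example, `partial_r(inclusion_scores, negative_affect_scores, ages)` would estimate the inclusion–affect association with age partialed out; comparing it to the zero-order `pearson_r(inclusion_scores, negative_affect_scores)` shows how much of the association age accounts for.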

Results showed that, while controlling for participant age, work environment variables were associated with emotional reactions and sensitivity appraisals. Anticipated stigma had a positive association with negative affect after viewing the VSID form. Inclusion had a negative association with negative affect and a positive association with positive affect after viewing the form. Thus, inclusive work environments (and a low anticipated stigma) were associated with more positive reactions to the disclosure request. Inclusion was also positively associated with the perceived appropriateness of the disclosure request, whereas anticipated stigma was associated with more concerns about privacy and negative reactions from others.

Section Summary
  • A sample of working adults who reported disabilities completed a disability disclosure form and then provided reactions to the experience.
  • The inclusiveness of the work environment was associated with more positive emotional reactions to the measurement process.
  • Inclusion also was associated with the perceived appropriateness of the measurement, whereas anticipated stigma was associated with more concerns about privacy and negative reactions from others.

Research Practicalities

Practical Considerations

There are several practical considerations when conducting survey research using online samples, especially from hard-to-reach populations. As noted above, we relied on an online crowdsourcing site to collect our sample and distribute our survey. This forfeited some degree of control over the sampling process as we worked through, and thus trusted, a third party to connect us to participants who met our eligibility requirements.

We required interested participants to be employed at least part-time in order to be eligible for the study. To check on this eligibility, we included our own employment screener at the beginning of the survey to be sure those who completed the survey were part-time or full-time employed (by self-report).

We also trusted that the participants were all individual, human participants. When using online crowdsourcing platforms, it is possible for a single person to control multiple accounts (although this is not permitted) and for surveys to be completed by computer programs (“bots”). We included features to guard against this, or at least to provide evidence if suspicious responses were occurring in our data. For example, we looked for duplicate IP addresses in the data to flag as suspicious cases. We also included one attention check item and one integrity check item. The attention check item was embedded into one of the survey measures and asked participants to “Please select option 4, ‘Neither agree nor disagree’.” The integrity check item was presented at the end of the survey and asked participants to “Please select the statement that best describes your role in this survey,” with “I am a human who read the questions and responded as best as I could” being the passing response option. If participants failed one or both of those items, there was reason to be concerned about their responding behavior.
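The screening logic above can be sketched as a simple flagging pass over the exported responses. The field names (`ip`, `attention`, `integrity`, `id`) are hypothetical stand-ins for whatever the survey export actually contains; only the passing response options are taken from the description above.

```python
# Hypothetical sketch of the data-quality flags described above:
# duplicate IP addresses, a failed attention check, and a failed
# integrity check. Field names are illustrative, not the study's.
from collections import Counter

ATTENTION_PASS = "Neither agree nor disagree"  # option 4 in the embedded item
INTEGRITY_PASS = ("I am a human who read the questions "
                  "and responded as best as I could")

def flag_case(case, ip_counts):
    """Return the list of data-quality flags raised for one response."""
    flags = []
    if ip_counts[case["ip"]] > 1:
        flags.append("duplicate_ip")
    if case["attention"] != ATTENTION_PASS:
        flags.append("failed_attention_check")
    if case["integrity"] != INTEGRITY_PASS:
        flags.append("failed_integrity_check")
    return flags

def screen(cases):
    """Map each case id to its raised flags across the whole data set."""
    ip_counts = Counter(c["ip"] for c in cases)
    return {c["id"]: flag_case(c, ip_counts) for c in cases}
```

Flagging rather than immediately deleting keeps the decision reviewable: a researcher can inspect flagged cases before deciding which combinations of flags warrant exclusion.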

We also had to carefully consider how we would select participants with disabilities when it is common for working adults to not disclose disability. Indeed, this was the very issue that inspired our research. We did not rely on the VSID as a formal disclosure form to define disability status. Rather, we asked participants about specific health or medical conditions they might have without attaching the label “disability” to those conditions. We also asked about physical, psychological, and cognitive limitations, or impairments. Past research in our lab has found that working adults with concealable disabilities rarely use the term “disability” to describe themselves. Instead, participants in our research used the term “impairment” or named the specifically diagnosed health condition (Santuzzi et al., 2019). We learned from that research that the language we use around disability is very important to if and how working adults will disclose their disabilities at work.

Compensation for participants was also a consideration. The amount to compensate for research is a continued source of debate, as some researchers argue for minimum wage as fair compensation, whereas others argue that such practices violate the voluntary nature of participating in research (i.e., making participation coercive). We compensated each participant $0.50 in exchange for completing a survey that took on average 12.5 minutes to complete. The study description told participants that the survey was expected to require 20 minutes to complete. We did not receive complaints about the compensation, but the stated amount of compensation could have affected which participants did or did not participate in our study.

Ethical Considerations

Research using online surveys in general comes with some ethical considerations. The anonymity experienced by taking a survey online as compared to face-to-face in a research lab is generally considered good for reducing participant discomfort (although it might increase measurement errors as participants become less attentive). The anonymity is especially useful in a study like ours because we were asking sensitive questions about personal information and questions that ask about the quality of respondents’ work environments. If participants feel they are being monitored or that someone can connect their identity to their data, they might feel distressed. They also might distort their responses to avoid negative consequences. We supported the use of an online survey for our research because it increased our reach to a broader sample and allowed participants to experience anonymity while completing sensitive questions about themselves and their work environment.

Although the experience of anonymity was a goal for participants, we were transparent in the informed consent information prior to the study that the data were not completely anonymous. We collected IP addresses to check the data quality prior to analysis (i.e., screening for duplicate IP addresses). The IP address is considered identifying information; therefore, the data collection was better classified as confidential rather than anonymous. Researchers have a choice to collect IP addresses or not when creating online surveys, and the choice has implications for whether the responses to the survey can be claimed as “anonymous” or “confidential.”

Given the sensitive nature of the key survey questions (disability status) and the population of interest (workers with disabilities), there were additional ethical considerations beyond the standard ethical issues in online survey research. Our research intentionally and directly measured reactions to disability disclosure requests, with the goal of informing survey research practices as well as organizational measurement. With sensitive questions, respondents might have felt uncomfortable and even distressed as they decided how to respond. Sensitive questions prompt concerns about the general appropriateness of the questions, the privacy of their personal information, and/or concerns about negative reactions from others based on how they choose to respond to the questions. An ethical goal of the research is to reduce distress among participants as much as possible. Toward that end, we only measured what we absolutely needed in order to minimize any negative experience. For example, we chose to use single items for sensitivity appraisals to keep the reaction experiences as brief as possible. However, using single-item measures may lower reliability compared to multi-item measures.

Related to the online nature of the data collection, it is difficult to assess the degree to which participants experienced distress or discomfort while completing the measures. We relied on pilot testing and feedback from colleagues on early drafts of the survey to try to predict and resolve any triggers of unexpected distress before distributing the survey to an online sample of participants. This practice aimed to meet the ethical goal of minimizing harm to participants while maximizing the knowledge gained from the research conclusions.

Resources and Skills Needed

The key resources required for this type of research include survey development skills, software, time for data collection, and money for compensating participants. All surveys were created and formatted using Qualtrics, a software application that was provided by our university. We established a free user account with Cloud Research to collect the data. We also had training and experience in creating reliable and valid self-report measures (standard in most doctoral training programs in industrial and organizational psychology). As quantitative methodologists, we also had the needed skills to manage the large data set of responses in order to create scores for each of our measures and compute the data analyses required to address our research questions.

This research also required both time and money as resources to support the project. The data collection required approximately 2 weeks (12 days). The timing could have been affected by the compensation offered and the fact that we only allowed participants who lived in the United States and worked either part-time or full-time for an organization to participate. Participants were also required to have completed at least 100 tasks in MTurk and have an approval rating of at least 90%.

We also required funding to complete the data collection. In addition to the compensation for participants ($0.50), we had to pay fees to Cloud Research and MTurk to use the participant pool, which added an additional $0.46 per participant.

Section Summary
  • Collecting data via online survey platforms requires some planning for how to evaluate data quality.
  • Studying sensitive populations or using sensitive questions cues additional ethical concerns to protect respondents’ information and psychological well-being.
  • In addition to survey development and data analysis knowledge, this type of research may require planning for the time to collect the data and securing funding to cover participant compensation and any fees charged by the online sampling platform.

Method in Action

What Worked

We believe our project was successful in collecting a large number of participants in a relatively short period of time. We also were successful in utilizing a disability disclosure form that is frequently distributed in workplace settings across the US. The form is publicly available for use via the website of the US Office of Federal Contract Compliance Programs (OFCCP). We did secure written (email) permission from leadership in the OFCCP to use the form for our research purposes.

The number of participants who reported health conditions that could qualify as disabilities under the ADA (approximately 30% of workers) was also consistent with recent organizational research reports. Along with these successes, however, we certainly learned a few lessons in the process that might affect our subsequent survey designs and data management strategies.

Challenges and How We Managed Them

Our biggest challenges occurred in the data management phase of the project. We had all of the data—now what? We were prepared for messy data and the need to look for invalid cases (e.g., “bots”). However, there is no consensus in the literature on clear criteria to use for evaluating the quality of data collected in online surveys.

Before any cleaning occurred, a total of 1003 respondents opened the survey link. We first examined errors of omission—participants who did not provide enough data to include in the analysis. Of those, six only completed the informed consent form but then quit the survey without providing any data for survey items. An additional eight cases completed the initial screening for employment status but provided responses that disqualified them (i.e., a response other than part-time or full-time employed). An additional 23 cases quit the survey after the employment screener and/or completed less than half of the survey. This yielded 966 remaining cases for further consideration.

In a second round, we evaluated the data quality with a focus on errors of commission due to careless responses or insufficient effort. We discussed several different strategies to flag invalid cases, including cases with duplicate IP addresses, cases that completed the survey in less than 5 minutes, failed attention and integrity checks, and signs of careless response in the data.

Cases with duplicate IP addresses indicated that the study was completed more than once on the same device. Partially duplicate IP addresses were also problematic when the first three segments of the IP address matched because this indicated that the study was completed more than once using multiple devices from the same host network. In either case, duplicate IP addresses could simply be the result of multiple individuals from the same household completing the study. However, there is the risk that the same individual (or “bot”) repeated the study, which would compromise data quality. Accordingly, cases with fully duplicate IP addresses were grounds for omission and resulted in our exclusion of nine cases.

To minimize the chance of excluding valid cases from multiple individuals in the same household, we excluded cases with partially duplicate IP addresses only when at least one other flag was present (failed integrity check, overlapping start-to-finish time, and/or completion in less than 5 minutes). This rule resulted in the exclusion of two additional cases.
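The conditional rule above can be expressed as a short screening function. This is our reading of the procedure rather than code from the study: the "first three segments" comparison is implemented as a dotted-IPv4 prefix, and the flag names are hypothetical labels for the conditions named in the text.

```python
# Illustrative sketch of the conditional exclusion rule: a case with a
# partially duplicate IP address (first three segments shared with
# another case) is excluded only if it also carries at least one other
# flag. Flag names are hypothetical stand-ins.
from collections import Counter

def network_prefix(ip):
    """First three segments of a dotted IPv4 address, e.g. '192.168.1'."""
    return ".".join(ip.split(".")[:3])

def exclude_partial_duplicates(cases):
    """Return ids of cases excluded under the conditional rule."""
    prefix_counts = Counter(network_prefix(c["ip"]) for c in cases)
    excluded = []
    for c in cases:
        partial_dup = prefix_counts[network_prefix(c["ip"])] > 1
        # e.g. "failed_integrity", "overlapping_time", "under_5_min"
        other_flags = c.get("flags", [])
        if partial_dup and other_flags:
            excluded.append(c["id"])
    return excluded
```

Requiring a second flag is the triangulation idea in miniature: a shared network prefix alone is weak evidence (two household members could legitimately participate), but combined with another independent warning sign it becomes grounds for exclusion.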

Four more cases were excluded due to failing both the attention check item and the integrity check item. After this process, we ended up with 951 cases that we believed were valid for analysis. From this cleaned data set, we extracted the cases that received the VSID form and met the criteria for having a disability.

We also had to make some judgment calls about disability status. Some participants did not indicate having one of the listed health conditions or a physical, psychological, or cognitive impairment in the survey. However, they indicated in an open text box that they had some health condition that could qualify as a disability under the ADA definition. Twenty-three new cases were included in the data set after the open-ended responses were coded by one of the authors to determine if they fit the definition of disability. Some of the qualifying conditions reported included cancer, epilepsy, diabetes, and depression.

Our data management decisions may have affected the conclusions of the study. By excluding cases, we might have reduced statistical power and had fewer statistically significant associations as a consequence. Our goal in excluding cases was to maximize the validity of our estimates by including only cases that we felt confident about validity. However, there is a risk that we unintentionally excluded valid cases with our exclusion criteria.

Section Summary
  • An online data collection approach is effective for obtaining large amounts of data from specific populations relatively quickly.
  • Data management and data quality present challenges to online survey research without clear rules to follow.
  • Decisions about what data are considered valid versus invalid may affect the results of the survey.

Practical Lessons Learned

Lessons Learned

One key lesson learned relates to the sampling of participants. It was challenging to identify which cases were valid and which were not, for the reasons described previously. This study gave us an opportunity to try out the many strategies recommended in the literature for how to screen invalid cases in online surveys. We were surprised to find that different strategies might yield very different cases being flagged for exclusion. In the end, we decided to triangulate across several strategies, under the expectation that cases that are flagged with more than one strategy are most likely invalid. However, we are hopeful that continued research in the area of careless responding behavior and insufficient effort detection in surveys will refine best practices and criteria for identifying invalid cases such that the criteria yield more consensus on which cases should be flagged.

We also wondered if the questions we used to identify disability status were sufficient, or if some participants with disabilities were inadvertently omitted. Although we put careful thought into designing questions to capture health information without requiring a disability label, there might be conditions that participants did not mention for several reasons. Past literature has suggested that health conditions that have ambiguous symptoms, are not salient to the person, or do not seem relevant to the specific context might not be reported by participants (Santuzzi et al., 2014). This might mean a participant who did not see their health condition as relevant to the survey or did not experience interfering symptoms while taking the survey might not report the condition during that time. For this reason, we are somewhat concerned that participants with such experiences might not have been included in our study.

What Would We Do Differently Next Time?

Although this study provided some “proof of concept” for continued research and to inspire conversations about applications in organizational measurement, it is difficult to generalize from our sampling source (MTurk) to how the measurement experience would unfold when the measures are tied to the employing organization. We will need to replicate the findings in an employing organizational context to feel confident that the effects hold in that context.

Also, since the time of the data collection for this study, the research team has implemented more accessibility options in online surveys, such as alternative text for images. If we replicated this survey in the future, we would update the survey design to maximize the use of the available accessibility features.

Section Summary
  • Continued research needs to be done to identify criteria for excluding invalid cases from online survey research.
  • In developing surveys, researchers should consider the psychological processes involved in responding to items requesting personal or sensitive information.
  • To validate the findings, a similar survey study should be conducted in an organizational setting to account for the work-related pressures respondents experience when providing personal information to their employers (as compared to researchers).


Context and Sample

Moving forward, research on this topic can build on our work and the lessons we have learned to refine the methods used and allow for further development of research questions that contribute to theory and practice. Results may be more generalizable if the measurement experience is tied to the employing organization. Although the study asked participants to reflect on their employing organization and respond as if it were going to their employers, it is not the same stakes as actually sending the disclosure form to the employer.

Also, the timing of the measurement may matter. Completing the VSID and deciding to disclose a disability might cue different experiences among workers during the interview phase as compared to later in the job tenure.

We also could have benefited from a larger sample of workers with disabilities. The data were originally collected without stratifying the sample, thus allowing natural representation rather than forcing equal proportions across groups. However, this led to a limited sample size for the second phase of the study, which focused on reactions only among those who had a disability. A larger sample size would allow for more precise estimation and provide opportunities to test more complex quantitative associations among variables.

Research Design

The online survey seemed to create a realistic tool for examining the associations among reactions to the measurement of disability status. It is common for organizations to use self-report surveys for measuring employee characteristics and experiences. The VSID, in particular, is a frequently used form for disability disclosure in US organizations. However, as noted above, survey research has its limitations, particularly with regard to measuring a low base rate and sensitive population.


Continued research may consider additional measures to unpack the experiences of employees when completing measures such as the VSID. With more precise psychological experiences identified, an efficient strategy may be developed to improve employee experiences in the workplace.

Lessons from the reactions of workers to the measurement process suggest workers have different types of concerns when being measured on sensitive questions such as disability status. Different concerns (privacy vs. social desirability) would cue different types of interventions to reduce the negative experience. Without knowing the nature of negative reactions, organizations would not have a precise direction for fixing the concern (and the poor-quality measurement that results from concerns).

Data Management and Analysis

Future research could also build on our lessons regarding data management decisions for quality assurance. Establishing a priori definitions of undesirable cases is an important step toward maximizing data quality. These help researchers develop eligibility requirements for filtering out unwanted participants during recruitment and decision rules for invalidating cases after data collection. Not all undesirable data are anticipated, however, and decisions about what data to exclude happen after data are collected. In such cases, decisions should be made with consideration of other factors that affect the validity of the conclusions that can be drawn from the specific research.

Classroom Discussion Questions

  • What is the best way to ask about someone’s disability in an online survey? Consider the goal of maximizing valid disclosures and minimizing false disclosures.
  • What will the challenges be when trying to replicate the online survey in an organizational setting? Consider both practical and ethical challenges.
  • Given the challenges described when conducting the research on experiences of workers with disabilities in an online survey, what research method would you propose as an alternative? Consider both the benefits and costs or challenges of this new approach.

Multiple-Choice Quiz Questions

One primary source of invalid cases in online survey research is:

The key advantage of using a partial correlation to test the association between variables is:

One major challenge in conducting research on working adults with disabilities is:

Further Reading

Crocker, J., Major, B., & Steele, C. M. (1998). Social stigma. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The handbook of social psychology (Vol. 2, pp. 504–553).
Follmer, K. B., & Jones, K. S. (2018). Mental illness in the workplace: An interdisciplinary review and organizational research agenda. Journal of Management, 44(1), 325–351.
Santuzzi, A. M., & Cook, L. (2020). Stereotypes about people with disabilities. In J. T. Nadler & E. C. Voyles (Eds.), Stereotypes: The incidence and impacts of bias (pp. 243–263). Praeger Publishing.
Santuzzi, A. M., & Keating, R. T. (2020). Managing invisible disabilities in the workplace: Identification and disclosure dilemmas for workers. In S. Fielder, M. Moore, & G. Wright (Eds.), The Palgrave handbook of disability at work (pp. 331–349). Palgrave Macmillan.
Santuzzi, A. M., Keating, R. T., Martinez, J. J., Finkelstein, L. M., Rupp, D. E., & Strah, N. (2019). Identity management strategies for workers with concealable disabilities: Antecedents and consequences. Journal of Social Issues, 75(3), 847–880.
Santuzzi, A. M., & Waltz, P. R. (2016). Disability in the workplace: A unique and variable identity. Journal of Management, 42(5), 1111–1135.
Santuzzi, A. M., Waltz, P. R., Finkelstein, L. M., & Rupp, D. E. (2014). Invisible disabilities: Unique challenges for employees and organizations. Industrial and Organizational Psychology, 7(2), 204–219.
Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859–883.
von Schrader, S., Malzer, V., & Bruyère, S. (2014). Perspectives on disability disclosure: The importance of employer practices and workplace climate. Employee Responsibilities and Rights Journal, 26(4), 237–255.

Web Resources

Americans With Disabilities Act of 1990, 42 U.S.C. § 12101. (1990).
ADA Amendments Act of 2008, Pub. L. No. 110-325, 122 Stat. 3553. (2008).
Erickson, W., Lee, C., & von Schrader, S. (2017). Disability statistics from the American Community Survey (ACS).
Office of Federal Contract Compliance Programs. (2017). Voluntary Self-Identification of Disability (Form CC-305, OMB Control Number 1250-0005).


References

ADA. (2008). ADA Amendments Act of 2008.
Baldridge, D. C., & Swift, M. L. (2016). Age and assessments of disability accommodation request normative appropriateness. Human Resource Management, 55(3), 385–400.
Baldridge, D. C., & Veiga, J. F. (2006). The impact of anticipated social consequences on recurring disability accommodation requests. Journal of Management, 32(1), 158–179.
Follmer, K. B., & Jones, K. S. (2018). Mental illness in the workplace: An interdisciplinary review and organizational research agenda. Journal of Management, 44(1), 325–351.
Jansen, W. S., Otten, S., van der Zee, K. I., & Jans, L. (2014). Inclusion: Conceptualization and measurement. European Journal of Social Psychology, 44(4), 370–385.
Office of Federal Contract Compliance Programs. (2017). Voluntary self-identification of disability (Form CC-305, OMB Control Number 1250-0005).
Ragins, B. R., Singh, R., & Cornwell, J. M. (2007). Making the invisible visible: Fear and disclosure of sexual orientation at work. Journal of Applied Psychology, 92(4), 1103–1118.
Rehabilitation Act. (1973). Rehabilitation Act of 1973, 29 U.S.C. § 701 et seq.
Rehabilitation Act. (2014). Section 503.
Santuzzi, A. M., & Cook, L. (2020). Stereotypes about people with disabilities. In J. T. Nadler & E. C. Voyles (Eds.), Stereotypes: The incidence and impacts of bias (pp. 243–263). Praeger.
Santuzzi, A. M., Keating, R. T., Martinez, J. J., Finkelstein, L. M., Rupp, D. E., & Strah, N. (2019). Identity management strategies for workers with concealable disabilities: Antecedents and consequences. Journal of Social Issues, 75(3), 847–880.
Santuzzi, A. M., & Waltz, P. R. (2016). Disability in the workplace: A unique and variable identity. Journal of Management, 42(5), 1111–1135.
Santuzzi, A. M., Waltz, P. R., Finkelstein, L. M., & Rupp, D. E. (2014). Invisible disabilities: Unique challenges for employees and organizations. Industrial and Organizational Psychology, 7(2), 204–219.
Sherbin, L., Kennedy, J. T., Jain-Link, P., & Ihezie, K. (2017). Disabilities and inclusion: US findings. Center for Talent Innovation.
Shore, L. M., Cleveland, J. N., & Sanchez, D. (2018). Inclusive workplaces: A review and model. Human Resource Management Review, 28(2), 176–189.
Thompson, E. R. (2007). Development and validation of an internationally reliable short-form of the positive and negative affect schedule (PANAS). Journal of Cross-Cultural Psychology, 38(2), 227–242.
Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin, 133(5), 859–883.
von Schrader, S., Malzer, V., & Bruyère, S. (2014). Perspectives on disability disclosure: The importance of employer practices and workplace climate. Employee Responsibilities and Rights Journal, 26(4), 237–255.
Watson, D., & Clark, L. A. (1994). The PANAS-X: Manual for the positive and negative affect schedule-expanded form. [Unpublished manuscript].