
Ability tests are assessment instruments designed to measure the capacity of individuals to perform particular physical or mental tasks. Ability tests were developed in the individual differences tradition of psychology and evolved from early tests of general intelligence. Most major ability tests assess a range of broad ability factors that are conceptually and empirically related to general intelligence (or g, also referred to as general cognitive ability). Ability tests are frequently used in settings such as schools, military organizations, business and industry, hospitals and rehabilitation centers, and private practice. Several ability tests with strong evidence of reliability and validity are currently available and are commonly used for purposes such as educational screening or diagnosis, personnel selection and classification, neuropsychological assessment, and career guidance and counseling.

Historical Overview

The first successful “mental test,” predecessor to all subsequent tests of individual differences characteristics (including ability), is generally considered to be the intelligence test developed by French psychologist Alfred Binet and his associate, Théodore Simon. First published in 1905, the Binet-Simon Intelligence Scale was designed to identify children presumably unable to benefit from regular classroom instruction by measuring their ability to judge, understand, and reason. The test was found to be an effective predictor of scholastic achievement. The success of the Binet-Simon scales and of later measures, such as Lewis M. Terman's Stanford-Binet Intelligence Scale (published in 1916), led the emerging testing industry to focus on the further development of intelligence measures. Many of these early intelligence tests actually measured a range of different abilities.

At the outset of World War I, leading psychologists in the intelligence testing movement began attending to the problem of selecting and classifying recruits for the United States military. These efforts resulted in the development of group-administered intelligence tests such as the Army Alpha and Beta. The practical usefulness of these assessments and the efficiency with which they could be administered to large numbers of people led to the widespread use of tests and also to intensified research on specific areas of ability relevant to success in a variety of contexts. During the 1920s and 1930s, this shift from measures of general intelligence to measures of specific abilities was accompanied by the development of a statistical technique called factor analysis. By identifying underlying factors on the basis of patterns of intercorrelations among a large number of variables, factor analysis made it possible to demonstrate that specific abilities (e.g., reading speed, reaction time) are indicators of broad areas of ability (e.g., broad visual perception, broad cognitive speediness) and that these broad abilities are somewhat independent of g.

Largely on the basis of evidence obtained from early factor analytic studies, two opposing theoretical approaches to understanding the ability domain emerged. The London school, led by Charles Spearman, emphasized g as the single most important ability. In contrast, a group of American scientists, led by Truman Kelley and Louis L. Thurstone, identified several relatively independent, broad ability factors. A classic study of mechanical ability, led by D. G. Paterson, provided early empirical evidence to support the claim that general areas of ability other than g accounted for significant variance in practical outcomes, such as job performance.

...
