IQ Test

#1 Which number should come next in the series? 1 – 1 – 2 – 3 – 5 – 8 – 13
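The series follows the Fibonacci rule: each number is the sum of the two numbers before it, so the next number is 8 + 13 = 21. Below is a minimal Python sketch of that rule; the function name next_in_series is purely illustrative.

    def next_in_series(terms):
        """Return the next term, assuming each term is the sum of the previous two."""
        return terms[-2] + terms[-1]

    print(next_in_series([1, 1, 2, 3, 5, 8, 13]))  # -> 21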


Results

An intelligence quotient (IQ) is a total score derived from several standardized tests designed to assess human intelligence. The abbreviation “IQ” was coined by the psychologist William Stern for the German term Intelligenzquotient, his term for a scoring method for intelligence tests at the University of Breslau, which he advocated in a 1912 book. Historically, IQ was a score obtained by dividing a person’s mental age score, obtained by administering an intelligence test, by the person’s chronological age, both expressed in terms of years and months. The resulting fraction was multiplied by 100 to obtain the IQ score.
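As a worked example of the historical ratio formula: a child with a mental age of 12 years 6 months and a chronological age of 10 years 0 months would score 12.5 / 10 × 100 = 125. The short Python sketch below assumes ages are given in years and months and converts them to decimal years first; the function name ratio_iq is illustrative only.

    def ratio_iq(mental_years, mental_months, chrono_years, chrono_months):
        """Historical ratio IQ: (mental age / chronological age) * 100."""
        mental_age = mental_years + mental_months / 12
        chronological_age = chrono_years + chrono_months / 12
        return 100 * mental_age / chronological_age

    print(round(ratio_iq(12, 6, 10, 0)))  # -> 125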

When current IQ tests were developed, the median raw score of the norming sample was defined as IQ 100, and scores each standard deviation (SD) up or down were defined as 15 IQ points greater or less, although this was not always so historically. By this definition, approximately two-thirds of the population scores are between IQ 85 and IQ 115. About 2.5 percent of the population scores above 130, and 2.5 percent below 70.
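These percentages follow from the standard assumption that IQ scores are approximately normally distributed with a mean of 100 and an SD of 15. The Python sketch below recovers them from the normal cumulative distribution function; the helper normal_cdf is illustrative, not part of any testing standard.

    from math import erf, sqrt

    def normal_cdf(x, mean=100.0, sd=15.0):
        """Cumulative probability of a normal distribution evaluated at x."""
        return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

    within_one_sd = normal_cdf(115) - normal_cdf(85)  # ~0.68, about two-thirds
    above_130 = 1 - normal_cdf(130)                   # ~0.023, roughly 2.5 percent
    below_70 = normal_cdf(70)                         # ~0.023, roughly 2.5 percent
    print(f"{within_one_sd:.3f} {above_130:.3f} {below_70:.3f}")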

Scores from intelligence tests are estimates of intelligence. Unlike quantities such as distance and mass, intelligence cannot be measured concretely, given the abstract nature of the concept of “intelligence”. IQ scores have been shown to be associated with such factors as morbidity and mortality, parental social status, and, to a substantial degree, biological parental IQ. While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates and the mechanisms of inheritance.

IQ scores are used for educational placement, assessment of intellectual disability, and evaluating job applicants. Even when students improve their scores on standardized tests, they do not always improve their underlying cognitive abilities, such as memory, attention, and speed. In research contexts they have been studied as predictors of job performance and income. They are also used to study distributions of psychometric intelligence in populations and the correlations between it and other variables. Raw scores on IQ tests for many populations have been rising at an average rate of about three IQ points per decade since the early 20th century, a phenomenon called the Flynn effect. Investigation of different patterns of increases in subtest scores can also inform current research on human intelligence.

Historically, even before IQ tests were devised, there were attempts to classify people into intelligence categories by observing their behavior in daily life. Those other forms of behavioral observation are still important for validating classifications based primarily on IQ test scores. Both intelligence classification by observation of behavior outside the testing room and classification by IQ testing depend on the definition of “intelligence” used in a particular case and on the reliability and error of estimation in the classification procedure.

The English statistician Francis Galton made the first attempt at creating a standardized test for rating a person’s intelligence. A pioneer of psychometrics and the application of statistical methods to the study of human diversity and the study of inheritance of human traits, he believed that intelligence was largely a product of heredity (by which he did not mean genes, although he did develop several pre-Mendelian theories of particulate inheritance). He hypothesized that there should exist a correlation between intelligence and other observable traits such as reflexes, muscle grip, and head size. In 1882 he set up the world’s first mental testing centre, and in 1883 he published “Inquiries into Human Faculty and Its Development”, in which he set out his theories. After gathering data on a variety of physical variables, he was unable to show any such correlation, and he eventually abandoned this research.