Standard Scores
Standard scores enable us to more readily understand the meaning of a particular score. For example, knowing that a student answered 20 items correctly on a 30-item math test gives us only a very rough view of that student's performance. It is hard to know how good or bad this score is without knowing how other students at the same grade level scored. We would know more if we were told that the mean score for all other students at the same grade level taking the exam under the same circumstances was 17. We would at least know that our student was “somewhat above average.” If we were also told that the scores had a standard deviation of 3, we would have a better idea of “how much above average,” namely, one standard deviation above the mean. Suppose we learned that the same student answered 35 items correctly on a 50-item test of verbal ability where the mean was 40 and the standard deviation was 5. We would then know that this same student scored one standard deviation below the mean on the verbal test.
As illustrated in the above example, standardizing sets of test scores by adjusting each score to the mean and standard deviation of the set serves the following two important functions: (a) It allows us to relate one student's test score to the complete set of scores for all comparable students, and (b) it allows us to compare the score on one test to the score on another test for the same student. The latter can be particularly useful in identifying a student's relative strengths and weaknesses.
There are a variety of ways of forming standard scores. For example, some personality measures are adjusted so that the mean is 50 and the standard deviation is 10, and some intelligence tests are scaled so that the mean is 100 and the standard deviation is 15. The most common form of standard score used by social scientists, however, is the z-score, which transforms each score X by subtracting the mean of the scores, X̄, and dividing the difference by the standard deviation, s: z = (X − X̄)/s.
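As an illustration (not part of the original entry), the following Python sketch applies the z-score formula to the scores described above and then re-expresses the result on the other two standard-score scales mentioned; the function name and figures are assumed for the example.

```python
# A minimal z-score helper; the name and figures are illustrative.
def z_score(x, mean, sd):
    """Standard score: how many standard deviations x lies from the mean."""
    return (x - mean) / sd

z_math = z_score(20, mean=17, sd=3)    # +1.0: one SD above the math-test mean
z_verbal = z_score(35, mean=40, sd=5)  # -1.0: one SD below the verbal-test mean

# The same z can be re-expressed on other common standard-score scales:
t_style = 50 + 10 * z_math    # mean-50, SD-10 scale (a T-score)  -> 60.0
iq_style = 100 + 15 * z_math  # mean-100, SD-15 scale             -> 115.0
print(z_math, z_verbal, t_style, iq_style)
```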
Each transformed score then directly represents how many standard deviations above or below the mean that score is. z = +1 means the score is one standard deviation above the mean, z = −1 means one standard deviation below the mean, z = 0 means the score falls exactly at the arithmetic mean, and so forth. When the form of the original distribution of scores is known, these standard scores can be further translated into percentiles. For example, in a normal distribution, only about 16% of the scores are more than one standard deviation above the mean, so a z-score of +1 falls at the 84th percentile. (Most statistics books include a table for translating standard scores in a normal distribution into percentiles.)
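Assuming normally distributed scores, a normal-CDF lookup does the same job as the printed table; the short Python sketch below uses scipy.stats.norm for the conversion, with illustrative values.

```python
from scipy.stats import norm

z = 1.0
percentile = norm.cdf(z) * 100   # about 84.1, matching the 84th-percentile example
print(round(percentile, 1))

# The inverse lookup: which z-score falls at a given percentile?
z_at_95th = norm.ppf(0.95)       # about 1.645
print(round(z_at_95th, 3))
```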
Another useful property of z-scores and other standard scores formed by adjusting for the mean and standard deviation is that they are unit-free. For example, a person's standard score for height does not depend on whether height was measured in inches or in centimeters. Likewise, correlation is calculated from standard scores so that, for example, one could compare the correlation between years of education and income across cultures without concern for different monetary units in different countries. The standardization implied by the term standard score has many practical advantages.
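The unit-free property can be checked directly: converting the same measurements from inches to centimeters leaves the z-scores unchanged, and Pearson's r can be recovered as the mean product of paired z-scores. The Python sketch below uses simulated data purely for illustration; the variable names are assumptions.

```python
import numpy as np

def z(x):
    """Population z-scores of a set of measurements."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()   # np.std defaults to the population SD

rng = np.random.default_rng(0)
height_in = rng.normal(68, 3, size=100)   # simulated heights in inches
height_cm = height_in * 2.54              # the same heights in centimeters

# Changing the unit of measurement leaves the z-scores untouched.
print(np.allclose(z(height_in), z(height_cm)))        # True

# Pearson's r is the mean product of paired z-scores, so it is unit-free as well.
weight_lb = 0.9 * height_in + rng.normal(0, 2, size=100)
r_from_z = np.mean(z(height_in) * z(weight_lb))
r_builtin = np.corrcoef(height_in, weight_lb)[0, 1]
print(round(r_from_z, 4), round(r_builtin, 4))        # identical values
```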
...
- Analysis of Variance
- Association and Correlation
- Association
- Association Model
- Asymmetric Measures
- Biserial Correlation
- Canonical Correlation Analysis
- Correlation
- Correspondence Analysis
- Intraclass Correlation
- Multiple Correlation
- Part Correlation
- Partial Correlation
- Pearson's Correlation Coefficient
- Semipartial Correlation
- Simple Correlation (Regression)
- Spearman Correlation Coefficient
- Strength of Association
- Symmetric Measures
- Basic Qualitative Research
- Basic Statistics
- F Ratio
- N(n)
- t-Test
- X̄
- Y Variable
- z-Test
- Alternative Hypothesis
- Average
- Bar Graph
- Bell-Shaped Curve
- Bimodal
- Case
- Causal Modeling
- Cell
- Covariance
- Cumulative Frequency Polygon
- Data
- Dependent Variable
- Dispersion
- Exploratory Data Analysis
- Frequency Distribution
- Histogram
- Hypothesis
- Independent Variable
- Measures of Central Tendency
- Median
- Null Hypothesis
- Pie Chart
- Regression
- Standard Deviation
- Statistic
- Causal Modeling
- Discourse/Conversation Analysis
- Econometrics
- Epistemology
- Ethnography
- Evaluation
- Event History Analysis
- Experimental Design
- Factor Analysis and Related Techniques
- Feminist Methodology
- Generalized Linear Models
- Historical/Comparative
- Interviewing in Qualitative Research
- Latent Variable Model
- Life History/Biography
- Log-Linear Models (Categorical Dependent Variables)
- Longitudinal Analysis
- Mathematics and Formal Models
- Measurement Level
- Measurement Testing and Classification
- Multilevel Analysis
- Multiple Regression
- Qualitative Data Analysis
- Sampling in Qualitative Research
- Sampling in Surveys
- Scaling
- Significance Testing
- Simple Regression
- Survey Design
- Time Series
- ARIMA
- Box-Jenkins Modeling
- Cointegration
- Detrending
- Durbin-Watson Statistic
- Error Correction Models
- Forecasting
- Granger Causality
- Interrupted Time-Series Design
- Intervention Analysis
- Lag Structure
- Moving Average
- Periodicity
- Serial Correlation
- Spectral Analysis
- Time-Series Cross-Section (TSCS) Models
- Time-Series Data (Analysis/Design)
- Trend Analysis