Guttman Scaling
Many phenomena in the social sciences cannot be measured directly by a single item or variable. The researcher must nonetheless develop valid and reliable measures of these theoretical CONSTRUCTS in order to study the phenomenon of interest. SCALING is a process whereby the researcher combines more than one item or variable to represent that phenomenon, and many scaling models are used in the social sciences.
One of the most prominent is Guttman scaling. Guttman scaling, also known as scalogram analysis and cumulative scaling, focuses on whether a set of items measures a single theoretical construct. It does so by ordering both items and subjects along an underlying cumulative dimension according to intensity. An example will help clarify the distinctive character of Guttman scaling. Assume that 10 appropriations proposals for the Department of Defense are being voted on in Congress, the proposals differing only in the amount of money allocated for defense spending, from $100,000,000 to $1,000,000,000 in $100,000,000 increments. These proposals would form a (perfect) Guttman scale if one could predict how each member of Congress voted on each of the 10 proposals by knowing only the total number of proposals that each member supported. A scale score of 8, for example, would mean that the member supported the proposals from $100,000,000 to $800,000,000 but not the $900,000,000 or $1,000,000,000 proposals. Similarly, a score of 2 would mean that the member supported only the $100,000,000 and $200,000,000 proposals while opposing the other 8. It is in this sense that Guttman scaling orders both items (here, the 10 appropriations proposals) and subjects (here, members of Congress) along an underlying cumulative dimension according to intensity (here, the amount of money for the Department of Defense).
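The cumulative property described above can be sketched in code. This is a minimal illustration of the appropriations example, assuming binary responses (1 = support, 0 = oppose) with items ordered from least to most intense; the function names are illustrative, not part of any standard library.

```python
def ideal_pattern(score, n_items=10):
    """Ideal cumulative response pattern for a given scale score:
    support (1) for the `score` least intense items, oppose (0) the rest."""
    return [1] * score + [0] * (n_items - score)

def is_perfect_scale(responses):
    """True if every subject's responses are fully predictable from
    their total score, i.e., match the implied ideal pattern."""
    return all(row == ideal_pattern(sum(row), len(row)) for row in responses)

# A member with a scale score of 8 supports the $100M-$800M proposals only:
member = ideal_pattern(8)      # [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]

votes = [ideal_pattern(8), ideal_pattern(2)]
print(is_perfect_scale(votes))  # True: each vote is recoverable from totals

votes.append([1, 0, 1, 1, 0, 0, 0, 0, 0, 0])  # a non-cumulative pattern
print(is_perfect_scale(votes))  # False
```

The last pattern has a total score of 3 but does not match the ideal pattern for a score of 3, so the set of responses no longer forms a perfect Guttman scale.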
A perfect Guttman scale is rarely achieved; indeed, Guttman scaling anticipates that the perfect or ideal model will be violated. The question then becomes the extent to which the empirical data deviate from the perfect Guttman model. Two principal methods are used to determine the degree of deviation: (a) minimization of error, proposed by Guttman (1944), and (b) deviation from perfect reproducibility, based on work by Edwards (1948). According to the minimization of error criterion, the number of errors is the least number of positive responses that must be changed to negative, or of negative responses that must be changed to positive, for the observed responses to be transformed into an ideal response pattern. The method of deviation from perfect reproducibility begins with a perfect model and counts the number of responses that are inconsistent with that pattern. Error counting based on deviations from perfect reproducibility yields more errors than the minimization of error technique and provides a more accurate description of the data in terms of scalogram theory. For this reason, it is superior to the minimization of error method.
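The two error-counting rules can be sketched as follows. This is a minimal illustration assuming binary responses and items ordered by intensity; the helper names are illustrative. Minimization of error takes the smallest distance to *any* ideal cumulative pattern, while deviation from perfect reproducibility compares a subject's responses against the one ideal pattern implied by that subject's own total score.

```python
def ideal_pattern(score, n_items):
    """Ideal cumulative pattern: 1s for the `score` least intense items."""
    return [1] * score + [0] * (n_items - score)

def errors_minimization(row):
    """Guttman's (1944) criterion: fewest response changes needed to
    transform the observed row into *some* ideal cumulative pattern."""
    n = len(row)
    return min(
        sum(a != b for a, b in zip(row, ideal_pattern(k, n)))
        for k in range(n + 1)
    )

def errors_reproducibility(row):
    """Edwards-style (1948) criterion: responses inconsistent with the
    ideal pattern implied by the row's own total score."""
    return sum(a != b for a, b in zip(row, ideal_pattern(sum(row), len(row))))

row = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]   # total score 3, but non-cumulative
print(errors_minimization(row))        # 1 (one change reaches the score-4 ideal)
print(errors_reproducibility(row))     # 2 (two mismatches with the score-3 ideal)
```

As the example shows, the reproducibility count is at least as large as the minimization count for any response pattern, consistent with the comparison drawn above.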