
Many phenomena in the social sciences cannot be measured directly by a single item or variable. The researcher must nonetheless develop valid and reliable measures of these theoretical CONSTRUCTS in order to study the phenomenon of interest. SCALING is a process whereby the researcher combines more than one item or variable to represent that phenomenon. Many SCALING models are used in the social sciences.

One of the most prominent is Guttman scaling. Guttman scaling, also known as scalogram analysis and cumulative scaling, focuses on whether a set of items measures a single theoretical construct. It does so by ordering both items and subjects along an underlying cumulative dimension according to intensity. An example will help clarify the distinctive character of Guttman scaling. Assume that 10 appropriations proposals for the Department of Defense are being voted on in Congress, with the proposals differing only in the amount of money allocated for defense spending, from $100,000,000 to $1,000,000,000 in hundred-million-dollar increments. These proposals would form a (perfect) Guttman scale if one could predict how each member of Congress voted on each of the 10 proposals by knowing only the total number of proposals that each member supported. A scale score of 8, for example, would mean that the member supported the proposals from $100,000,000 to $800,000,000 but not the $900,000,000 or $1,000,000,000 proposals. Similarly, a score of 2 would mean that the member supported only the $100,000,000 and $200,000,000 proposals while opposing the other 8. It is in this sense that Guttman scaling orders both items (in this case, the 10 appropriations proposals) and subjects (in this case, members of Congress) along an underlying cumulative dimension according to intensity (in this case, the amount of money for the Department of Defense).
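The cumulative logic of the example above can be sketched in a few lines of code. This is an illustrative sketch, not part of the original entry: the function name and the 10-item setup are assumptions standing in for the appropriations proposals, ordered from least to most intense.

```python
def responses_from_score(score, n_items=10):
    """Under a perfect Guttman scale, a subject's full response pattern is
    recoverable from the scale score alone: the subject endorses exactly the
    `score` least-intense items (1 = support, 0 = oppose)."""
    return [1 if i < score else 0 for i in range(n_items)]

# A member with scale score 8 supports the $100M through $800M proposals only.
print(responses_from_score(8))  # [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]

# A member with scale score 2 supports only the first two proposals.
print(responses_from_score(2))  # [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```

Any observed pattern that cannot be produced this way (e.g., supporting the $400M proposal while opposing the $300M one) is a deviation from the perfect scale.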

A perfect Guttman scale is rarely achieved; indeed, Guttman scaling anticipates that the perfect or ideal model will be violated. The question then becomes the extent to which the empirical data deviate from the perfect Guttman model. Two principal methods are used to determine the degree of deviation: (a) minimization of error, proposed by Guttman (1944), and (b) deviation from perfect reproducibility, based on work by Edwards (1948). According to the minimization of error criterion, the number of errors is the least number of positive responses that must be changed to negative, or the least number of negative responses that must be changed to positive, for the observed responses to be transformed into an ideal response pattern. The method of deviation from perfect reproducibility begins with a perfect model and counts the number of responses that are inconsistent with that pattern. Error counting based on deviations from perfect reproducibility results in more errors than the minimization of error technique but provides a more accurate description of the data in terms of scalogram theory; for this reason, it is the superior method.
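The deviation-from-perfect-reproducibility count described above can be sketched as follows. This is a minimal illustration under assumed conventions (items ordered from least to most intense; 1 = positive response), not the original authors' implementation: each observed pattern is compared with the ideal cumulative pattern implied by that subject's total score, and mismatches are counted as errors.

```python
def reproducibility_errors(pattern):
    """Count responses inconsistent with the ideal cumulative pattern
    implied by the subject's total score (deviation from perfect
    reproducibility)."""
    score = sum(pattern)
    ideal = [1] * score + [0] * (len(pattern) - score)
    return sum(observed != expected for observed, expected in zip(pattern, ideal))

# An observed pattern with one "out of order" positive response:
# total score is 3, so the ideal pattern is [1, 1, 1, 0, 0];
# the observed pattern disagrees in two positions.
print(reproducibility_errors([1, 1, 0, 1, 0]))  # 2

# A perfectly cumulative pattern produces no errors.
print(reproducibility_errors([1, 1, 1, 0, 0]))  # 0
```

Note how the example illustrates the contrast drawn above: for the pattern `[1, 1, 0, 1, 0]`, the minimization of error criterion would count only one error (changing the fourth response to negative yields the ideal pattern for a score of 2), whereas the deviation method counts two.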

Edward G. Carmines and James Woods
10.4135/9781412950589.n385

References

Carmines, E. G., & McIver, J.

...
