
Cronbach's Alpha

Cronbach's alpha is a statistic that measures the internal consistency among a set of survey items that (a) a researcher believes all measure the same construct, (b) are therefore correlated with each other, and (c) thus could be formed into some type of scale. It is one of a wide range of reliability measures.

A reliability measure essentially tells the researcher whether a respondent would provide the same score on a variable if that variable were administered again (and again) to the same respondent. In survey research, administering a scale twice to the same sample of respondents is rarely feasible, for many reasons: cost, the timing of the research, reactivity of the cases, and so on. An alternative approach is to measure reliability in terms of internal consistency. Internal consistency indicates that all of the items (variables) vary in the same direction and have a statistically meaningful level of correlation with each other. This can be assessed, for instance, using the so-called split-half method. The most widespread approach in the case of attitude and opinion scales, however, is to measure the coherence of the responses across the different items in order to discover which items are less correlated with the overall score: this is what item-total correlations do. A more sophisticated statistic that uses this same logic is Cronbach's alpha, which is calculated as follows:

α = nr̄ / [1 + (n − 1)r̄]

where n represents the number of items, and r̄ is the average intercorrelation among them.
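The formula can be sketched in code. The following is a minimal illustration, not part of the original entry: it computes the standardized alpha from the average off-diagonal inter-item correlation, using hypothetical simulated data in which three items share a common latent trait.

```python
import numpy as np

def cronbach_alpha_standardized(items: np.ndarray) -> float:
    """Standardized Cronbach's alpha: alpha = n*r_bar / (1 + (n - 1)*r_bar).

    items: 2-D array, rows = respondents, columns = scale items.
    """
    n = items.shape[1]                        # number of items
    corr = np.corrcoef(items, rowvar=False)   # n x n inter-item correlation matrix
    # r_bar: mean of the off-diagonal correlations
    r_bar = (corr.sum() - n) / (n * (n - 1))
    return n * r_bar / (1 + (n - 1) * r_bar)

# Hypothetical example: three items driven by one latent construct plus noise
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
items = np.column_stack(
    [latent + rng.normal(scale=1.0, size=500) for _ in range(3)]
)
alpha = cronbach_alpha_standardized(items)
```

Because each simulated item is the latent trait plus independent noise of equal variance, the inter-item correlations hover around .5, which places alpha in the acceptable range discussed below.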

Cronbach's alpha ranges between 0 and 1. The greater the value of alpha, the more coherent, and thus reliable, the scale (alpha is actually an approximation to the reliability coefficient). Some authors have proposed a critical value for alpha of 0.70, above which the researcher can be confident that the scale is reliable. The logic of this rule is that with an alpha of .70 or greater, essentially 50% (or more) of the variance is shared among the items being considered for scaling together. Others have proposed the value of 0.75 or the stricter 0.80. If alpha is below .70, it is recommended that the scale be modified, for example, by deleting the least correlated item, until the critical value of 0.70 is reached or, ideally, exceeded. The output of the Statistical Package for the Social Sciences (SPSS) and other statistical packages used by survey researchers gives the researcher critical information on this issue, reporting the value alpha would take if each of the items were deleted. The researcher then deletes the item whose removal yields the highest alpha.
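The "alpha if item deleted" diagnostic can be sketched as follows. This is an illustrative reconstruction, not SPSS's actual implementation: it recomputes the standardized alpha with each item removed in turn, on hypothetical simulated data containing three good items and one item that is unrelated to the construct.

```python
import numpy as np

def standardized_alpha(items: np.ndarray) -> float:
    # Cronbach's alpha from the average inter-item correlation (r_bar).
    n = items.shape[1]
    corr = np.corrcoef(items, rowvar=False)
    r_bar = (corr.sum() - n) / (n * (n - 1))
    return n * r_bar / (1 + (n - 1) * r_bar)

def alpha_if_deleted(items: np.ndarray) -> list[float]:
    # For each item, recompute alpha on the remaining items, mimicking
    # the "alpha if item deleted" column of a reliability analysis.
    n = items.shape[1]
    return [standardized_alpha(np.delete(items, j, axis=1)) for j in range(n)]

# Hypothetical data: three items sharing a latent trait, plus one weak item
rng = np.random.default_rng(1)
latent = rng.normal(size=400)
good = [latent + rng.normal(scale=1.0, size=400) for _ in range(3)]
weak = rng.normal(size=400)          # unrelated to the construct
items = np.column_stack(good + [weak])

full_alpha = standardized_alpha(items)
deleted = alpha_if_deleted(items)
```

Dropping the weak item (the last column) produces the highest alpha-if-deleted value, exceeding the full-scale alpha, which is exactly the signal the researcher uses to decide which item to delete.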

Since Cronbach's alpha tends to rise with the number of items being considered for scaling, some researchers try to solve the problem of a possibly low value by building scales with numerous items. It has been noted that this practice is often abused. In the end, a proliferation of items may yield a scale that annoys many respondents and can lead to harmful respondent burden effects (e.g., yea-saying, false opinions, response sets, satisficing).
