Cronbach’s alpha: a statistical test of whether questions on a survey measure the same underlying concept

As Wikipedia explains, Cronbach’s α (alpha) is a statistic commonly used to measure the internal consistency, or reliability, of a set of survey questions. Alpha is most appropriate when the items measure different substantive areas within a single construct, such as client or employee satisfaction, and, as UCLA’s statistics site notes, a “high” value of alpha is often taken as evidence that the items measure an underlying (or latent) construct.

An analyst of a client satisfaction survey could calculate Cronbach’s alpha to learn the degree to which the various questions tap the same perception: do they measure the same thing? Likewise, an employee engagement survey could use it to check reliability, that is, the proportion of variance accounted for by the true score of the underlying construct. The construct is the concept (latent variable) being measured.
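As a concrete sketch, alpha can be computed from the per-item variances and the variance of the total scores. The dataset and function name below are hypothetical, invented for illustration, not taken from any actual survey:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                          # number of questions
    item_vars = scores.var(axis=0, ddof=1)       # variance of each question
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 satisfaction ratings: 4 respondents, 3 questions
ratings = np.array([
    [5, 5, 4],
    [4, 4, 4],
    [3, 2, 3],
    [2, 3, 2],
])
print(round(cronbach_alpha(ratings), 3))  # → 0.916
```

The high value here reflects that respondents who rate one question highly tend to rate the others highly as well, which is exactly the “same underlying construct” pattern alpha is meant to detect.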

Cronbach’s alpha depends heavily on the number of questions on the survey and the average inter-item correlation among them. Adding items of comparable quality increases alpha, and as the average inter-item correlation rises, alpha also rises (holding the number of items constant).
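Both dependencies are easiest to see in the standardized form of alpha, α = k·r̄ / (1 + (k − 1)·r̄), where k is the item count and r̄ the average inter-item correlation. A small illustration with made-up numbers:

```python
def standardized_alpha(k: int, r_bar: float) -> float:
    """Standardized Cronbach's alpha from item count k and
    average inter-item correlation r_bar."""
    return k * r_bar / (1 + (k - 1) * r_bar)

# More items, same average correlation -> higher alpha
print(round(standardized_alpha(5, 0.3), 3))   # → 0.682
print(round(standardized_alpha(10, 0.3), 3))  # → 0.811

# Same item count, higher average correlation -> higher alpha
print(round(standardized_alpha(5, 0.5), 3))   # → 0.833
```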

Alpha normally ranges from 0 to 1 (negative values can occur, but they signal a problem with the scale) and may be used with dichotomous questions (those with two possible answers) as well as multi-point questions (such as a rating scale: 1 = poor, 5 = excellent). Social scientists commonly treat 0.7 or above as an acceptable reliability coefficient.
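The same variance-based formula applies unchanged to dichotomous items, where it reduces to the Kuder–Richardson reliability formula. A self-contained sketch with made-up yes/no answers, including a check against the common 0.7 cutoff:

```python
import numpy as np

# Hypothetical yes/no answers: 4 respondents, 3 questions
answers = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
])
k = answers.shape[1]
item_vars = answers.var(axis=0, ddof=1).sum()     # sum of per-question variances
total_var = answers.sum(axis=1).var(ddof=1)       # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(round(alpha, 2))   # → 0.6
print(alpha >= 0.7)      # → False: below the usual acceptability cutoff
```

Here the third question disagrees with the first two across respondents, dragging alpha below 0.7; an analyst would consider revising or dropping that item.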
