Cronbach's alpha (Cronbach's <math>\alpha</math>), also known as tau-equivalent reliability (<math>\rho_T</math>) or coefficient alpha (coefficient <math>\alpha</math>), is a reliability coefficient and a measure of the internal consistency of tests and measures.<ref name=c1951>Template:Cite journal</ref><ref name=c1978>Template:Cite journal</ref><ref name=Cho>Template:Cite journal</ref> It was named after the American psychologist Lee Cronbach.
Numerous studies warn against using Cronbach's alpha unconditionally. Statisticians regard reliability coefficients based on structural equation modeling (SEM) or generalizability theory as superior alternatives in many situations.<ref name="Sijtsma">Template:Cite journal</ref><ref name="GY">Template:Cite journal</ref><ref name="RZ">Template:Cite journal</ref><ref name="ChoKim">Template:Cite journal</ref><ref name="RM">Template:Cite journal</ref><ref name="c2004">Template:Cite journal</ref>
History
In his initial 1951 publication, Lee Cronbach referred to the coefficient as coefficient alpha<ref name=c1951/> and included an additional derivation.<ref name="Cronbach">Template:Cite journal</ref> Coefficient alpha had been used implicitly in previous studies,<ref name="Hoyt">Template:Cite journal</ref><ref name="Guttman">Template:Cite journal</ref><ref name="JF">Template:Cite journal</ref><ref name="Gulliksen">Template:Cite book</ref> but Cronbach's interpretation was considered more intuitively attractive than those of earlier studies, and the coefficient became quite popular.<ref>Template:Cite journal</ref>
- In 1967, Melvin Novick and Charles Lewis proved that it was equal to reliability if the true scores of the compared tests or measures vary by a constant, which is independent of the people measured. In this case, the tests or measurements were said to be "essentially tau-equivalent."<ref name="NL">Template:Cite journal</ref>
- In 1978, Cronbach asserted that the reason the initial 1951 publication was widely cited was "mostly because [he] put a brand name on a common-place coefficient."<ref name="c1978" /><ref name="Cho" /> He explained that he had originally planned to name other types of reliability coefficients, such as those used in inter-rater reliability and test-retest reliability, after consecutive Greek letters (i.e., <math>\beta</math>, <math>\gamma</math>, etc.), but later changed his mind.
- Later, in 2004, Cronbach and Richard Shavelson encouraged readers to use generalizability theory rather than <math>\rho_{T}</math>. Cronbach opposed the use of the name "Cronbach's alpha" and explicitly denied the existence of studies that had published the general formula of KR-20 before his own 1951 publication.<ref name="c2004" />
Prerequisites for using Cronbach's alpha
To use Cronbach's alpha as a reliability coefficient, the following conditions must be met:<ref>Template:Cite journal</ref><ref>Template:Cite journal</ref>
- The data are normally distributed and linear;
- The compared tests or measures are essentially tau-equivalent;
- Errors in the measurements are independent.
Formula and calculation
Cronbach's alpha is calculated by taking a score from each scale item and correlating it with the total score for each observation. The resulting correlations are then compared with the variance for all individual item scores. Cronbach's alpha is best understood as a function of the number of questions or items in a measure, the average covariance between pairs of items, and the overall variance of the total measured score.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref name=RM/>
<math display="block">\alpha = {k \over k-1 } \left(1 - {\sum_{i=1}^k \sigma^2_{y_i} \over \sigma^2_y} \right)</math>
where:
- <math>k</math> represents the number of items in the measure
- <math>\sigma_{y_i}^2</math> represents the variance associated with item <math>i</math>
- <math>\sigma_y^2</math> represents the variance associated with the total scores, <math>y = \sum_{i=1}^k y_i</math>
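As an illustration, the formula can be applied directly to a matrix of item scores. The following is a minimal sketch in Python; the function name and the five-respondent, three-item data set are hypothetical and used only for illustration.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 3 items
data = np.array([
    [2, 3, 3],
    [4, 4, 5],
    [3, 3, 4],
    [5, 4, 5],
    [1, 2, 2],
])
print(round(cronbach_alpha(data), 3))            # 0.956
```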
Alternatively, it can be calculated through the following formula:<ref>Template:Cite AV media</ref>
<math display="block"> \alpha = {k \bar c \over \bar v + (k - 1) \bar c} </math>
where:
- <math>\bar v</math> represents the average item variance
- <math>\bar c</math> represents the average inter-item covariance.
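The two formulas are algebraically equivalent, since the total-score variance equals <math>k\bar v + k(k-1)\bar c</math>. A minimal check in Python, using the same hypothetical data as in the sketch above:

```python
import numpy as np

# Same hypothetical 5-respondent, 3-item data as in the previous sketch
data = np.array([
    [2, 3, 3],
    [4, 4, 5],
    [3, 3, 4],
    [5, 4, 5],
    [1, 2, 2],
])

cov = np.cov(data, rowvar=False)              # item covariance matrix
k = cov.shape[0]
v_bar = np.diag(cov).mean()                   # average item variance
c_bar = cov[~np.eye(k, dtype=bool)].mean()    # average inter-item covariance
print(round(k * c_bar / (v_bar + (k - 1) * c_bar), 3))   # 0.956, same as above
```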
Common misconceptions
Application of Cronbach's alpha is not always straightforward and can give rise to common misconceptions, some of which are detailed here.<ref name="ChoKim" />
The value of Cronbach's alpha ranges between zero and one
By definition, reliability cannot be less than zero and cannot be greater than one. Many textbooks mistakenly equate <math>\rho_{T}</math> with reliability and give an inaccurate explanation of its range. <math>\rho_{T}</math> can be less than reliability when applied to data that are not essentially tau-equivalent. Suppose that <math>X_2</math> is an exact copy of <math>X_1</math>, and that <math>X_3</math> equals <math>X_1</math> multiplied by −1.
The covariance matrix among the items is shown below; for these data, <math>\rho_{T}=-3</math>.
| | <math>X_1</math> | <math>X_2</math> | <math>X_3</math> |
|---|---|---|---|
| <math>X_1</math> | <math>1</math> | <math>1</math> | <math>-1</math> |
| <math>X_2</math> | <math>1</math> | <math>1</math> | <math>-1</math> |
| <math>X_3</math> | <math>-1</math> | <math>-1</math> | <math>1</math> |
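Applying the formula for <math>\rho_{T}</math>: the item variances sum to <math>1+1+1=3</math>, while the variance of the total score is the sum of all nine entries of the matrix, which equals 1. Hence

<math display="block">\rho_T = \frac{3}{3-1}\left(1 - \frac{3}{1}\right) = -3.</math>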
Negative <math>\rho_{T}</math> can occur for reasons such as negative discrimination or mistakes in processing reverse-scored items.
Unlike <math>\rho_{T}</math>, SEM-based reliability coefficients (e.g., <math>\rho_{C}</math>) are always greater than or equal to zero.
This anomaly was first pointed out by Cronbach (1943)<ref name="c1943">Template:Cite journal</ref> as a criticism of <math>\rho_{T}</math>, but Cronbach (1951)<ref name="Cronbach"/> did not comment on this problem in his article, which otherwise discussed potentially problematic issues related to <math>\rho_{T}</math>.<ref name="c2004"/><ref>Template:Cite journal</ref>
If there is no measurement error, the value of Cronbach's alpha is one
This anomaly also originates from the fact that <math>\rho_{T}</math> underestimates reliability.
Suppose that <math>X_2</math> is an exact copy of <math>X_1</math>, and that <math>X_3</math> equals <math>X_1</math> multiplied by two.
The covariance matrix among the items is shown below; for these data, <math>\rho_{T}=0.9375</math>.
| | <math>X_1</math> | <math>X_2</math> | <math>X_3</math> |
|---|---|---|---|
| <math>X_1</math> | <math>1</math> | <math>1</math> | <math>2</math> |
| <math>X_2</math> | <math>1</math> | <math>1</math> | <math>2</math> |
| <math>X_3</math> | <math>2</math> | <math>2</math> | <math>4</math> |
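Here the item variances sum to <math>1+1+4=6</math> and the variance of the total score (the sum of all entries of the matrix) is 16, so

<math display="block">\rho_T = \frac{3}{3-1}\left(1 - \frac{6}{16}\right) = 0.9375.</math>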
For the above data, both <math>\rho_{P}</math> and <math>\rho_{C}</math> have a value of one.
The above example is presented by Cho and Kim (2015).<ref name = ChoKim/>
A high value of Cronbach's alpha indicates homogeneity between the items
Many textbooks refer to <math>\rho_{T}</math> as an indicator of homogeneity<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> between items. This misconception stems from the inaccurate explanation of Cronbach (1951)<ref name = Cronbach/> that high <math>\rho_{T}</math> values show homogeneity between the items. Homogeneity is a term that is rarely used in modern literature, and related studies interpret the term as referring to uni-dimensionality. Several studies have provided proofs or counterexamples that high <math>\rho_{T}</math> values do not indicate uni-dimensionality.<ref name=Cortina>Template:Cite journal</ref><ref name=ChoKim/><ref name=GLM>Template:Cite journal</ref><ref>Template:Cite journal</ref><ref>Template:Cite journal</ref><ref name=TBC>Template:Cite journal</ref> See counterexamples below.
| | <math>X_1</math> | <math>X_2</math> | <math>X_3</math> | <math>X_4</math> | <math>X_5</math> | <math>X_6</math> |
|---|---|---|---|---|---|---|
| <math>X_1</math> | <math>10</math> | <math>3</math> | <math>3</math> | <math>3</math> | <math>3</math> | <math>3</math> |
| <math>X_2</math> | <math>3</math> | <math>10</math> | <math>3</math> | <math>3</math> | <math>3</math> | <math>3</math> |
| <math>X_3</math> | <math>3</math> | <math>3</math> | <math>10</math> | <math>3</math> | <math>3</math> | <math>3</math> |
| <math>X_4</math> | <math>3</math> | <math>3</math> | <math>3</math> | <math>10</math> | <math>3</math> | <math>3</math> |
| <math>X_5</math> | <math>3</math> | <math>3</math> | <math>3</math> | <math>3</math> | <math>10</math> | <math>3</math> |
| <math>X_6</math> | <math>3</math> | <math>3</math> | <math>3</math> | <math>3</math> | <math>3</math> | <math>10</math> |
<math>\rho_{T}=0.72</math> in the uni-dimensional data above.
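For this matrix, the item variances sum to <math>6 \times 10 = 60</math> and the variance of the total score (the sum of all 36 entries) is 150, so

<math display="block">\rho_T = \frac{6}{6-1}\left(1 - \frac{60}{150}\right) = 0.72.</math>

The same arithmetic yields the values reported for the matrices below.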
| | <math>X_1</math> | <math>X_2</math> | <math>X_3</math> | <math>X_4</math> | <math>X_5</math> | <math>X_6</math> |
|---|---|---|---|---|---|---|
| <math>X_1</math> | <math>10</math> | <math>6</math> | <math>6</math> | <math>1</math> | <math>1</math> | <math>1</math> |
| <math>X_2</math> | <math>6</math> | <math>10</math> | <math>6</math> | <math>1</math> | <math>1</math> | <math>1</math> |
| <math>X_3</math> | <math>6</math> | <math>6</math> | <math>10</math> | <math>1</math> | <math>1</math> | <math>1</math> |
| <math>X_4</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>10</math> | <math>6</math> | <math>6</math> |
| <math>X_5</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>6</math> | <math>10</math> | <math>6</math> |
| <math>X_6</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>6</math> | <math>6</math> | <math>10</math> |
<math>\rho_{T}=0.72</math> in the multidimensional data above.
| | <math>X_1</math> | <math>X_2</math> | <math>X_3</math> | <math>X_4</math> | <math>X_5</math> | <math>X_6</math> |
|---|---|---|---|---|---|---|
| <math>X_1</math> | <math>10</math> | <math>9</math> | <math>9</math> | <math>8</math> | <math>8</math> | <math>8</math> |
| <math>X_2</math> | <math>9</math> | <math>10</math> | <math>9</math> | <math>8</math> | <math>8</math> | <math>8</math> |
| <math>X_3</math> | <math>9</math> | <math>9</math> | <math>10</math> | <math>8</math> | <math>8</math> | <math>8</math> |
| <math>X_4</math> | <math>8</math> | <math>8</math> | <math>8</math> | <math>10</math> | <math>9</math> | <math>9</math> |
| <math>X_5</math> | <math>8</math> | <math>8</math> | <math>8</math> | <math>9</math> | <math>10</math> | <math>9</math> |
| <math>X_6</math> | <math>8</math> | <math>8</math> | <math>8</math> | <math>9</math> | <math>9</math> | <math>10</math> |
The above data have <math>\rho_{T}=0.9692</math>, but are multidimensional.
| | <math>X_1</math> | <math>X_2</math> | <math>X_3</math> | <math>X_4</math> | <math>X_5</math> | <math>X_6</math> |
|---|---|---|---|---|---|---|
| <math>X_1</math> | <math>10</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>1</math> |
| <math>X_2</math> | <math>1</math> | <math>10</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>1</math> |
| <math>X_3</math> | <math>1</math> | <math>1</math> | <math>10</math> | <math>1</math> | <math>1</math> | <math>1</math> |
| <math>X_4</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>10</math> | <math>1</math> | <math>1</math> |
| <math>X_5</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>10</math> | <math>1</math> |
| <math>X_6</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>1</math> | <math>10</math> |
The above data have <math>\rho_{T}=0.4</math>, but are uni-dimensional.
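The values reported for these four matrices can also be verified directly from the covariance matrices: the sum of the item variances is the trace, and the variance of the total score is the sum of all entries. The following is a minimal sketch in Python; the helper used to construct the block-structured matrices is purely illustrative.

```python
import numpy as np

def alpha_from_cov(C):
    """Cronbach's alpha computed directly from an item covariance matrix."""
    C = np.asarray(C, dtype=float)
    k = C.shape[0]
    return k / (k - 1) * (1 - np.trace(C) / C.sum())

def two_cluster_cov(diag, within, between):
    """Illustrative helper: 6x6 covariance matrix with two 3-item clusters."""
    C = np.full((6, 6), float(between))
    C[:3, :3] = within
    C[3:, 3:] = within
    np.fill_diagonal(C, diag)
    return C

# (diagonal, within-cluster, between-cluster) covariances of the four matrices above
for d, w, b in [(10, 3, 3), (10, 6, 1), (10, 9, 8), (10, 1, 1)]:
    print(round(alpha_from_cov(two_cluster_cov(d, w, b)), 4))   # 0.72, 0.72, 0.9692, 0.4
```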
Uni-dimensionality is a prerequisite for <math>\rho_{T}</math>. One should check uni-dimensionality before calculating <math>\rho_{T}</math> rather than calculating <math>\rho_{T}</math> to check uni-dimensionality.<ref name = Cho/>
A high value of Cronbach's alpha indicates internal consistency
The term "internal consistency" is commonly used in the reliability literature, but its meaning is not clearly defined. The term is sometimes used to refer to a certain kind of reliability (e.g., internal consistency reliability), but it is unclear exactly which reliability coefficients are included here, in addition to <math>\rho_{T}</math>. Cronbach (1951)<ref name = Cronbach/> used the term in several senses without an explicit definition. Cho and Kim (2015)<ref name = ChoKim/> showed that <math>\rho_{T}</math> is not an indicator of any of these.
Removing items using "alpha if item deleted" always increases reliability
Removing an item using "alpha if item deleted" may result in "alpha inflation," where sample-level reliability is reported to be higher than population-level reliability.<ref name=KL>Template:Cite journal</ref> It may also reduce population-level reliability.<ref name=r2007>Template:Cite journal</ref> The elimination of less-reliable items should have not only a statistical basis but also a theoretical and logical one. It is also recommended that the whole sample be divided into two and cross-validated.<ref name=KL/>
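As an illustration of the procedure, "alpha if item deleted" simply recomputes the coefficient with each item removed in turn. The sketch below is a minimal, self-contained Python version using hypothetical data; it is not an implementation from any particular statistical package.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    return k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                          / scores.sum(axis=1).var(ddof=1))

def alpha_if_item_deleted(scores):
    """Recompute alpha with each item removed in turn."""
    scores = np.asarray(scores, dtype=float)
    return [cronbach_alpha(np.delete(scores, i, axis=1))
            for i in range(scores.shape[1])]

# Hypothetical 5-respondent, 3-item data; items whose deletion raises alpha in the
# sample are only candidates for removal and still require a theoretical rationale.
data = np.array([[2, 3, 3], [4, 4, 5], [3, 3, 4], [5, 4, 5], [1, 2, 2]])
print(round(cronbach_alpha(data), 3))                       # full-scale alpha
print([round(a, 3) for a in alpha_if_item_deleted(data)])   # alpha with each item dropped
```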
Ideal reliability level and how to increase reliability
Nunnally's recommendations for the level of reliability
Nunnally's book<ref name="n1">Template:Cite book</ref><ref name="n3">Template:Cite book</ref> is often mentioned as the primary source for determining the appropriate level of reliability coefficients. However, his recommendations are frequently cited contrary to his intentions: he suggested that different criteria be used depending on the goal or stage of the investigation, yet a criterion of 0.7 is applied almost universally, regardless of whether a study is exploratory research, applied research, or scale-development research.<ref name="LBM">Template:Cite journal</ref> He advocated 0.7 as a criterion only for the early stages of a study, a category into which most published studies do not fall. Rather than 0.7, Nunnally's applied-research criterion of 0.8 is more suitable for most empirical studies.<ref name="LBM"/>
| | 1st edition<ref name=n1/> | 2nd & 3rd edition<ref name=n3/> |
|---|---|---|
| Early stage of research | 0.5 or 0.6 | 0.7 |
| Applied research | 0.8 | 0.8 |
| When making important decisions | 0.95 (minimum 0.9) | 0.95 (minimum 0.9) |
Nunnally's recommended levels were not intended as cutoff points. If a criterion were a cutoff point, only whether it is met would matter, not how far above or below it the value falls. He did not mean that reliability should be exactly 0.8 when referring to the criterion of 0.8; if the reliability is near 0.8 (e.g., 0.78), his recommendation can be considered to have been met.<ref name = c2020>Template:Cite journal</ref>
Cost to obtain a high level of reliability
Nunnally's idea was that there is a cost to increasing reliability, so there is no need to try to obtain maximum reliability in every situation.
Trade-off with validity
Measurements with perfect reliability lack validity.<ref name = ChoKim/> For example, a person who takes the test with a reliability of one will either receive a perfect score or a zero score, because if they answer one item correctly or incorrectly, they will answer all other items in the same manner. The phenomenon where validity is sacrificed to increase reliability is known as the attenuation paradox.<ref>Template:Cite journal</ref><ref>Template:Cite journal</ref>
A high value of reliability can conflict with content validity. To achieve high content validity, each item should comprehensively represent the content to be measured. However, a strategy of repeatedly measuring essentially the same question in different ways is often used solely to increase reliability.<ref>Template:Cite journal</ref><ref>Template:Cite journal</ref>
Trade-off with efficiency
When the other conditions are equal, reliability increases as the number of items increases. However, the increase in the number of items hinders the efficiency of measurements.
Methods to increase reliability
Despite the costs associated with increasing reliability discussed above, a high level of reliability may be required. The following methods can be considered to increase reliability.
Before data collection:
- Eliminate ambiguity in the measurement items.
- Do not measure what the respondents do not know.<ref>Template:Cite journal</ref>
- Increase the number of items. However, care should be taken not to excessively inhibit the efficiency of the measurement.
- Use a scale that is known to be highly reliable.<ref>Lee, H. (2017). Research Methodology (2nd ed.), Hakhyunsa.</ref>
- Conduct a pretest to discover reliability problems in advance.
- Exclude or modify items that are different in content or form from other items (e.g., reverse-scored items).
After data collection:
- Remove the problematic items using "alpha if item deleted". However, this deletion should be accompanied by a theoretical rationale.
- Use a more accurate reliability coefficient than <math>\rho_{T}</math>. For example, <math>\rho_{C}</math> is 0.02 larger than <math>\rho_{T}</math> on average.<ref name="PK">Template:Cite journal</ref>
Which reliability coefficient to use
<math>\rho_T</math> is used in the overwhelming majority of studies: one estimate is that approximately 97% of studies use <math>\rho_T</math> as a reliability coefficient.<ref name = Cho/>
However, simulation studies comparing the accuracy of several reliability coefficients have led to the common result that <math>\rho_T</math> is an inaccurate reliability coefficient.<ref name=KTD>Kamata, A., Turhan, A., & Darandari, E. (2003). Estimating reliability for multidimensional composite scale scores. Annual Meeting of American Educational Research Association, Chicago, April 2003, April, 1–27.</ref><ref name="Osburn">Template:Cite journal</ref><ref name=RZ/><ref name=TC>Tang, W., & Cui, Y. (2012). A simulation study for comparing three lower bounds to reliability. Paper Presented on April 17, 2012 at the AERA Division D: Measurement and Research Methodology, Section 1: Educational Measurement, Psychometrics, and Assessment, 1–25.</ref><ref name=VVS>Template:Cite journal</ref>
Methodological studies are critical of the use of <math>\rho_T</math>. Simplified and classified, the conclusions of existing studies are as follows.
- Conditional use: Use <math>\rho_T</math> only when certain conditions are met.<ref name=Cho/><ref name=ChoKim/><ref name=RM/>
- Opposition to use: <math>\rho_T</math> is inferior and should not be used.<ref name=DBB>Template:Cite journal</ref><ref name=GY/><ref name=Peters>Template:Cite journal</ref><ref name=RZ/><ref name=Sijtsma/><ref name=YG>Yang, Y., & Green, S. B.Template:Cite journal</ref>
Alternatives to Cronbach's alpha
Existing studies are practically unanimous in opposing the widespread practice of using <math>\rho_T</math> unconditionally for all data. However, opinions differ on which reliability coefficient should be used instead of <math>\rho_T</math>.
Simulation studies comparing the accuracy of several reliability coefficients<ref name=KTD/><ref name=Osburn/><ref name=RZ/><ref name=TC/><ref name=VVS/> have each ranked a different coefficient first.<ref name=ChoKim/>
The majority opinion is to use structural equation modeling or SEM-based reliability coefficients as an alternative to <math>\rho_T</math>.<ref name=Cho/><ref name=ChoKim/><ref name=DBB/><ref name=GY/><ref name=Peters/><ref name=RM/><ref name=RZ/><ref name=YG/>
However, there is no consensus on which of the several SEM-based reliability coefficients (e.g., uni-dimensional or multidimensional models) is the best to use.
Some suggest <math>\omega_H</math><ref name=RZ/> as an alternative, but <math>\omega_H</math> conveys information that is quite different from reliability. <math>\omega_H</math> is a type of coefficient comparable to Revelle's <math>\beta</math>.<ref name="Revelle">Template:Cite journal</ref><ref name=RZ/> Such coefficients complement reliability information rather than substituting for it.<ref name=Cho/>
Among SEM-based reliability coefficients, multidimensional reliability coefficients are rarely used, and the most commonly used is <math>\rho_C</math>,<ref name = Cho/> also known as composite or congeneric reliability.
In addition to single estimates of reliability, item response theory-based approaches can provide estimates of conditional reliability across the full distribution of scores.<ref>Template:Cite journal</ref>
Software for SEM-based reliability coefficients
General-purpose statistical software such as SPSS and SAS includes a function to calculate <math>\rho_T</math>; users who do not know the formula for <math>\rho_T</math> can obtain the estimate with just a few mouse clicks.
SEM software such as AMOS, LISREL, and Mplus does not include a function to calculate SEM-based reliability coefficients; users must compute them by entering the model estimates into the formula themselves. To avoid this inconvenience and the possibility of error, even studies reporting the use of SEM often rely on <math>\rho_T</math> instead of SEM-based reliability coefficients.<ref name = Cho/> A few alternatives automatically calculate SEM-based reliability coefficients; a sketch of the manual calculation follows the list below.
- R (free): The psych package<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> calculates various reliability coefficients.
- EQS (paid):<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> This SEM software includes a function to calculate reliability coefficients.
- RelCalc (free):<ref name = Cho/> Available with Microsoft Excel. <math>\rho_C</math> can be obtained without the need for SEM software. Various multidimensional SEM reliability coefficients and various types of <math>\omega_H</math> can be calculated based on the results of SEM software.
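For users who must enter SEM output into a formula by hand, the congeneric (composite) reliability <math>\rho_C</math> of a one-factor model can be computed from its factor loadings and error variances as <math>(\sum\lambda_i)^2 / \left[(\sum\lambda_i)^2 + \sum\theta_i\right]</math>, assuming the factor variance is fixed at one and the errors are uncorrelated. The following minimal Python sketch uses hypothetical standardized loadings rather than output from any particular software.

```python
def composite_reliability(loadings, error_variances):
    """Congeneric/composite reliability (rho_C) from a one-factor model's estimates."""
    loading_sum_sq = sum(loadings) ** 2
    return loading_sum_sq / (loading_sum_sq + sum(error_variances))

# Hypothetical standardized loadings; with standardized items, error = 1 - loading^2
loadings = [0.70, 0.80, 0.60, 0.75]
errors = [1 - l ** 2 for l in loadings]
print(round(composite_reliability(loadings, errors), 3))   # ~0.807 for these values
```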
Notes
References
External links
- Cronbach's alpha SPSS tutorial
- The free web interface and R package cocoon allow users to statistically compare two or more dependent or independent Cronbach's alpha coefficients.