Internal consistency

Internal consistency (or internal reliability) is a measure of the inter-correlation among a set of measurement items that are intended to measure a single construct.

Internal consistency can be measured using:

  1. Split-half reliability: Correlation between the total score for the first half of the items and the total score for the second half of the items
  2. Odd-even reliability: Correlation between the total score for items 1, 3, 5, etc. and the total score for items 2, 4, 6, etc.
  3. Cronbach's alpha (α): Average of all the possible split-half correlations (see the sketch after this list)
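As a rough illustration, the sketch below computes all three indices with NumPy. The data matrix `scores` is hypothetical (rows are respondents, columns are items measuring the same construct), and `split_half`, `odd_even`, and `cronbach_alpha` are illustrative helper names rather than functions from any particular package.

```python
import numpy as np

# Hypothetical data: 100 respondents answering 8 items that share a common factor
rng = np.random.default_rng(0)
common = rng.normal(size=(100, 1))                 # shared construct
scores = common + 0.8 * rng.normal(size=(100, 8))  # items = construct + noise

def split_half(scores):
    # Correlation between the total of the first half of the items
    # and the total of the second half
    k = scores.shape[1]
    first = scores[:, :k // 2].sum(axis=1)
    second = scores[:, k // 2:].sum(axis=1)
    return np.corrcoef(first, second)[0, 1]

def odd_even(scores):
    # Correlation between the total of items 1, 3, 5, ... and the total of
    # items 2, 4, 6, ... (0-based columns 0, 2, 4, ... vs. 1, 3, 5, ...)
    return np.corrcoef(scores[:, 0::2].sum(axis=1),
                       scores[:, 1::2].sum(axis=1))[0, 1]

def cronbach_alpha(scores):
    # Standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

print("split-half:", round(split_half(scores), 3))
print("odd-even:  ", round(odd_even(scores), 3))
print("alpha:     ", round(cronbach_alpha(scores), 3))
```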

Cronbach's α can range between 0 and 1:

  1. Generally, the more items, the higher the internal reliability will be (see the formula after this list)
  2. General rule of thumb:
    .6 = OK
    .7 = Good
    .8 = Very good
    .9 = Excellent
    >.95 = Too high; the items are too inter-related and therefore some are redundant
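For reference, the standard formula for Cronbach's α is given below, where k is the number of items, σ²(Yᵢ) is the variance of item i, and σ²(X) is the variance of the total score. Because σ²(X) includes all the item covariances, adding items that correlate positively with the rest shrinks the ratio of summed item variances to total-score variance, which is why α generally rises as more items are added (point 1 above).

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)
```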

Scenario

  • Question: Cronbach's α for a set of measurement items is .76. If one of the items is removed, the alpha will be .77. Should the item be removed?
  • Answer: Maybe. Investigate the item more closely. How does it relate to the underlying factor? What are its correlations with the other items in the factor? Then make a decision: do you think the measure of the underlying factor is better with or without the item? Why or why not? (A sketch of this kind of item-level check follows below.)
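As a rough illustration of that kind of check, the sketch below (assuming the hypothetical `scores` matrix and `cronbach_alpha` function from the earlier example) computes α with each item removed in turn, alongside each item's correlation with the total of the remaining items. This mirrors the "alpha if item deleted" and corrected item-total correlation output reported by common statistics packages.

```python
import numpy as np

def alpha_if_item_deleted(scores, alpha_fn):
    # For each item: alpha of the scale without that item, and the item's
    # correlation with the total of the remaining items
    results = []
    for i in range(scores.shape[1]):
        reduced = np.delete(scores, i, axis=1)          # drop item i
        rest_total = reduced.sum(axis=1)                # total of remaining items
        item_rest_r = np.corrcoef(scores[:, i], rest_total)[0, 1]
        results.append((i + 1, alpha_fn(reduced), item_rest_r))
    return results

# Example usage with the earlier sketch's `scores` and `cronbach_alpha`:
# for item, a, r in alpha_if_item_deleted(scores, cronbach_alpha):
#     print(f"item {item}: alpha if deleted = {a:.3f}, item-rest r = {r:.3f}")
```

An item whose removal raises α only slightly, but which correlates reasonably with the rest of the scale and covers content the other items miss, is often worth keeping.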
