Validity and Reliability
by Lindsey Knowles
1. Validity Evidence
1.1. A test has validity evidence if it can be shown to measure what it is intended to measure.
2. Content Validity Evidence
2.1. To check content validity, you must compare the assessment to the content to verify that they correspond.
3. Criterion-Related Validity Evidence
3.1. Obtained by correlating assessment scores with an external criterion measure (see the correlation formula after this outline).
4. Concurrent Criterion-Related Validity Evidence
4.1. When a measure correlates with another established measure given at about the same time. For instance, when a screening tool yields the same scoring as a full-length evaluation.
5. Predictive Validity Evidence
5.1. Evidence that assessment scores predict a future behavior or outcome (for example, an admissions test predicting later course grades).
6. Construct Validity Evidence
6.1. Provides information showing that the assessment's results correspond with a theory or logical explanation of the construct being measured.
7. Reliability
7.1. Demonstrated when a test yields generally consistent scores across multiple administrations.
8. Test-Retest or Stability
8.1. When a test is given more than once to the same group to examine the correlation between the attempts (see the correlation formula after this outline).
9. Alternate Forms or Equivalence
9.1. When a test has two or more equivalent forms, and the correlation between scores on the different forms is examined.
10. Internal Consistency
10.1. Determining that the individual items correlate with one another and that the assessment as a whole is internally consistent.
11. Split-Half Method
11.1. When the items of a single test are split into two halves (for example, odd- and even-numbered items), each student's scores on the two halves are correlated, and the result is adjusted upward to estimate full-test reliability (see the Spearman-Brown formula after this outline).
12. Kuder-Richardson Methods
12.1. Estimates internal consistency from a single administration of one test form by comparing every item with every other item, in effect averaging all possible split halves; applies to items scored right or wrong (see the KR-20 formula after this outline).
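Correlation sketch (a standard Pearson correlation added for illustration; the score labels X and Y are assumptions, not part of the original outline). Test-retest reliability and criterion-related validity coefficients are both computed as the correlation between two sets of scores, such as two administrations of the same test, or test scores paired with an external criterion:

$$ r_{XY} = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^{2}}\,\sqrt{\sum_{i=1}^{n}(Y_i - \bar{Y})^{2}}} $$

Values close to 1 indicate strong stability (reliability) or a strong relationship with the criterion (validity).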
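Spearman-Brown sketch (a standard correction formula added for illustration, not part of the original outline). In the split-half method, the correlation between the two half-test scores understates the reliability of the full test because each half contains only half the items; the Spearman-Brown formula estimates the full-test reliability:

$$ r_{\text{full}} = \frac{2\, r_{\text{half}}}{1 + r_{\text{half}}} $$

For example, a half-test correlation of 0.60 corresponds to an estimated full-test reliability of 2(0.60) / (1 + 0.60) = 0.75.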
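KR-20 sketch (the standard Kuder-Richardson formula 20, added for illustration; the symbols are defined here and are not part of the original outline). For a single form of k items each scored right (1) or wrong (0):

$$ KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma_X^{2}}\right) $$

where p_i is the proportion of students answering item i correctly, q_i = 1 - p_i, and σ_X² is the variance of the total test scores. KR-21 is a simplified version that assumes all items are of equal difficulty.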