Types of Validity and Reliability
by Kathleen Entwistle
1. Accuracy - Does the test score correspond fairly closely to the student's true level of skill, ability, and aptitude?
2. Reliability - Does the test yield the same or similar score rankings consistently, all other factors being equal?
3. Validity - Does the test measure what it is supposed to measure?
4. Reference
5. For Learning and Assessment
6. Kubiszyn, T., & Borich, G. (2013). Validity evidence. In Educational testing and measurement: Classroom application and practice (pp. 326-338). Hoboken, NJ: John Wiley & Sons, Inc. (Original work published 2003)
7. Alternate-form estimates of reliability - Obtained by administering two alternate or equivalent forms of a test to the same group and correlating their scores.
8. Test-retest stability - Reliability is shown if similar scores are observed when the same test is administered to the same group on two occasions.
9. Criterion-Related Evidence- Scores from a test are correlated with an external criterion
10. Concurrent Validity Evidence - Test scores are correlated with scores on an established measure of the same construct collected at about the same time.
11. Construct Validity Evidence - Determined by finding whether test results correspond with scores on other variables as predicted by some rationale or theory.
12. Predictive Validity Evidence- Determined by correlating test scores with a criterion measure collected after a period of time.
13. Content Validity Evidence - Test items should match the curriculum standards being taught.
14. Validity
15. Reliability
15.1. Internal consistency estimates of reliability - Fall into two general categories: split-half (or odd-even) estimates, and item-total correlations such as the Kuder-Richardson procedures.
15.1.1. Split-half and odd-even estimates - The test is divided into halves, and the halves are correlated with each other. The Spearman-Brown prophecy formula then corrects the correlation to estimate the reliability of the whole test.
15.1.2. Kuder-Richardson Methods- Determines the extent to which the entire test represents a single, fairly consistent measure of the concept.
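Several of the estimates above (test-retest, alternate-form, and the criterion-related validity coefficients) come down to correlating two sets of scores from the same group. A minimal sketch of that computation, using a hand-rolled Pearson correlation and hypothetical scores on two test forms (the function name and data are illustrative, not from the source):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores for the same group on two equivalent forms
form_a = [78, 85, 62, 90, 71]
form_b = [80, 83, 65, 92, 70]
print(round(pearson_r(form_a, form_b), 3))
```

The same coefficient serves as the alternate-form reliability estimate when the two lists are scores on equivalent forms, and as a predictive validity coefficient when the second list is a criterion measure collected later.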
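The internal consistency estimates can also be sketched directly. Below, a small item-response matrix (rows = examinees, columns = 0/1 item scores; the data and function names are hypothetical) is used to compute an odd-even split-half estimate corrected by the Spearman-Brown prophecy formula, and KR-20, the most common Kuder-Richardson coefficient for dichotomous items:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

def split_half_reliability(items):
    """Odd-even split-half estimate with Spearman-Brown correction."""
    odd = [sum(row[0::2]) for row in items]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in items]  # items 2, 4, 6, ...
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)          # Spearman-Brown prophecy formula

def kr20(items):
    """Kuder-Richardson formula 20 for dichotomous (0/1) items."""
    n = len(items)        # number of examinees
    k = len(items[0])     # number of items
    totals = [sum(row) for row in items]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    # Sum of p*q over items, where p = proportion answering correctly
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in items) / n
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_t)

# Hypothetical 0/1 responses: 5 examinees, 4 items
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(split_half_reliability(responses), 3))
print(round(kr20(responses), 3))
```

Note the design difference the outline describes: split-half treats the two halves as miniature alternate forms, while KR-20 asks how consistently all items together measure a single concept.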