Types of Validity and Reliability
by Kathleen Entwistle
1. Alternate-Form Estimates of Reliability - Obtained by administering two alternate or equivalent forms of a test to the same group and correlating their scores.
2. Test-Retest Stability - Reliability is shown if similar scores are observed when the same test is administered to the same group on two occasions.
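Both alternate-form and test-retest estimates come down to correlating two sets of scores from the same students. A minimal sketch, using the Pearson product-moment correlation and made-up illustrative scores (the function name and data are my own, not from the text):

```python
# Sketch: estimating alternate-form or test-retest reliability by
# correlating two score lists from the same group of students.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical Form A and Form B scores for the same five students.
form_a = [78, 85, 62, 90, 71]
form_b = [80, 83, 65, 92, 70]
print(round(pearson_r(form_a, form_b), 3))  # a value near 1.0 suggests high reliability
```

The closer the coefficient is to 1.0, the more consistently the two administrations rank the same students.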
3. Criterion-Related Validity Evidence - Scores from a test are correlated with an external criterion.
4. Concurrent Validity Evidence - Determined by correlating test scores with a criterion measure collected at about the same time.
5. Construct Validity Evidence - Determined by finding whether test results correspond with scores on other variables as predicted by some rationale or theory.
6. Predictive Validity Evidence - Determined by correlating test scores with a criterion measure collected after a period of time.
7. Content Validity Evidence - The test items should match the curriculum standards the test is intended to cover.
8. Accuracy - Does the test score closely reflect a student's true level of skill, ability, and aptitude?
9. Reliability - Does the test yield the same or similar score rankings consistently, all other factors being equal?
10. Validity - Does the test measure what it is supposed to measure?
11. Validity
12. Reliability
12.1. Internal Consistency Estimates of Reliability - Fall into two general categories: split-half (or odd-even) estimates, and item-total correlations such as the Kuder-Richardson procedures.
12.1.1. Split-Half and Odd-Even Estimates - The test is divided into two halves, and the halves are correlated with each other. The Spearman-Brown prophecy formula then adjusts this correlation to estimate the reliability of the full-length test.
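The split-half procedure above can be sketched in a few lines: split the items odd/even, correlate the half-test totals, then apply the Spearman-Brown prophecy formula r_full = 2r / (1 + r). The function names and the eight-item quiz data are hypothetical, chosen only for illustration:

```python
# Sketch of a split-half (odd-even) reliability estimate with the
# Spearman-Brown correction.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(item_scores):
    """item_scores: one row per student, one column per item.
    Splits items odd/even, correlates the half-test totals, then
    applies Spearman-Brown: r_full = 2r / (1 + r)."""
    odd = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6, ...
    r_half = pearson_r(odd, even)
    return (2 * r_half) / (1 + r_half)

# Six students answering an eight-item quiz (1 = correct, 0 = incorrect).
scores = [
    [1, 1, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0, 1, 0],
    [1, 1, 1, 1, 0, 1, 1, 1],
]
print(round(split_half_reliability(scores), 3))
```

The Spearman-Brown step matters because each half is only half as long as the real test, and shorter tests are less reliable; the formula projects what the correlation would be if the whole test were taken.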
12.1.2. Kuder-Richardson Methods - Determine the extent to which the entire test represents a single, fairly consistent measure of a concept.
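For tests scored right/wrong, the Kuder-Richardson approach is usually computed with formula 20 (KR-20): KR20 = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores), where k is the number of items and p and q are the proportions answering each item correctly and incorrectly. A minimal sketch with hypothetical response data (the function name and data are my own):

```python
# Sketch of the Kuder-Richardson formula 20 (KR-20), an internal-
# consistency estimate for dichotomously scored (0/1) items.

def kr20(item_scores):
    """item_scores: one row per student, items scored 0/1."""
    n = len(item_scores)
    k = len(item_scores[0])
    totals = [sum(row) for row in item_scores]
    mean_total = sum(totals) / n
    # Population variance of the total scores.
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # Sum of p*q across items, where p = proportion answering correctly.
    pq = 0.0
    for item in range(k):
        p = sum(row[item] for row in item_scores) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

# Hypothetical responses: six students, eight items (1 = correct).
responses = [
    [1, 1, 1, 1, 1, 1, 1, 0],
    [1, 1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0, 1, 0],
    [1, 1, 1, 1, 0, 1, 1, 1],
]
print(round(kr20(responses), 3))
```

A higher KR-20 value (closer to 1.0) indicates that the items behave as a single consistent measure rather than a mix of unrelated skills.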