Validity: refers to whether a test actually measures the objectives it is intended to measure. Reliability: refers to the stability or consistency of a test score
by Tina Lopez
1. Criterion-Related Validity Evidence: is established by correlating test scores with an external standard or criterion to obtain a numerical estimate of validity evidence. There are two types of criterion-related validity evidence, concurrent and predictive; a minimal correlation sketch follows item 1.2.
1.1. Concurrent Validity Evidence: is determined by correlating test scores with a criterion measure collected at the same time.
1.2. Predictive Validity Evidence: is determined by correlating test scores with a criterion measure collected after a period of time has passed.
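The sketch below shows how a criterion-related validity coefficient is obtained: the Pearson correlation between test scores and a criterion measure. The scores, variable names, and use of numpy are illustrative assumptions, not part of the original notes; the same arithmetic applies to both concurrent and predictive evidence, differing only in when the criterion is collected.

```python
import numpy as np

# Hypothetical data: test scores for ten examinees and a criterion measure
# (e.g., supervisor ratings or later course grades). For concurrent validity
# the criterion is collected at the same time as the test; for predictive
# validity it is collected after a period of time has passed.
test_scores = np.array([78, 85, 62, 90, 71, 88, 55, 94, 67, 80])
criterion   = np.array([74, 82, 60, 91, 70, 85, 58, 96, 65, 77])

# The validity coefficient is the Pearson correlation between the two sets of
# scores; values closer to +1.0 indicate stronger criterion-related evidence.
validity_coefficient = np.corrcoef(test_scores, criterion)[0, 1]
print(f"Criterion-related validity coefficient: {validity_coefficient:.2f}")
```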
2. Construct Validity Evidence: is determined by examining whether test results correspond with scores on other variables, as predicted by some rationale or theory.
3. Content Validity Evidence: is assessed by systematically comparing test items with the instructional objectives to see whether they match.
4. Test-Retest: is a form of estimating reliability by administering the same test twice to the same group of individuals, with a short time interval between the two administrations, and correlating the two sets of scores.
5. Alternate-Form: is a form of estimating reliability by administering two alternate or equivalent forms of a test to the same group and correlating their scores.
6. Internal Consistency: is a form of estimating reliability from a single administration of a test, with two subcategories: split-half (or odd-even) and item-total correlations (both are sketched in code after item 6.2).
6.1. Split-Half or Odd-Even: is a form of estimating reliability by dividing a test into halves and correlating the halves with one another.
6.2. Item-Total: is a form of estimating reliability by determining the extent to which the entire test represents a single, fairly consistent measure of a concept. Also called the Kuder-Richardson method.
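As a rough illustration of both internal-consistency approaches, the sketch below computes an odd-even split-half coefficient (stepped up with the Spearman-Brown formula, since each half is only half the length of the full test) and the Kuder-Richardson 20 (KR-20) coefficient for right/wrong-scored items. The item responses and variable names are hypothetical, and the population form of the variance is assumed; this is a sketch of the standard formulas, not a worked example from the original notes.

```python
import numpy as np

# Hypothetical 0/1 item responses: rows are 8 examinees, columns are 6 items.
responses = np.array([
    [1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1, 0],
    [1, 1, 1, 1, 0, 1],
    [0, 1, 0, 0, 0, 0],
    [1, 1, 1, 0, 1, 1],
])

# --- Split-half (odd-even) reliability --------------------------------
# Score the odd-numbered and even-numbered items separately, correlate the
# two half scores, then step the correlation up with the Spearman-Brown
# formula, because each half is only half as long as the full test.
odd_half  = responses[:, 0::2].sum(axis=1)
even_half = responses[:, 1::2].sum(axis=1)
r_halves = np.corrcoef(odd_half, even_half)[0, 1]
split_half_reliability = (2 * r_halves) / (1 + r_halves)

# --- Kuder-Richardson 20 (item-total consistency) ---------------------
# KR-20 = (k / (k - 1)) * (1 - sum(p * q) / total-score variance), where
# p is the proportion answering each item correctly and q = 1 - p.
k = responses.shape[1]
p = responses.mean(axis=0)
q = 1 - p
total_variance = responses.sum(axis=1).var()  # population variance (ddof=0)
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_variance)

print(f"Split-half (Spearman-Brown) reliability: {split_half_reliability:.2f}")
print(f"KR-20 internal consistency: {kr20:.2f}")
```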