Reliability and Validity
by Mishelle Severe
1. Alternate-form: unlike test-retest, alternate-form reliability administers two different but equivalent forms of a test to the same group of test takers and correlates the two sets of scores. Using alternate forms reduces the memorization and practice effects that can occur in the test-retest method.
2. Test-Retest: the administration of the same test twice to the same group of test takers; the two sets of scores are then correlated. The resulting correlation coefficient estimates the reliability (stability) of the test.
3. Internal Consistency: estimated from a single administration of the test. One component is the split-half method, which correlates scores on two halves of the test; the other uses item statistics, such as the Kuder-Richardson formulas. Because easy and difficult items are usually not spread evenly across a test, the split is often made by odd- and even-numbered items (odd-even reliability) rather than by first half versus second half.
4. Content Validity Evidence: the simplest way to gather evidence of a test's validity. It is established by inspecting the test items to verify that they match the instructional objectives; however, it provides no numerical estimate of validity.
5. Criterion-related validity evidence: established by correlating test scores with an external standard or criterion to obtain a numerical estimate of validity (Kubiszyn & Borich, 2010).
5.1. Concurrent criterion-related validity evidence: obtained from a criterion measure that can be administered at the same time as the measure being validated (Kubiszyn & Borich, 2010). In practice, two tests (the new test and an established test) are given to the same group, and the two sets of scores are correlated.
5.2. Predictive validity evidence: determines how well a test predicts the future behavior of examinees; the test is administered first and the criterion behavior is measured later.
6. Construct Validity Evidence: determined by finding out whether test results correspond with scores on other variables, as predicted by some rationale or theory (Kubiszyn & Borich, 2010).