Validity & Reliability

1. Test-Retest or Stability

1.1. Administering the same assessment to the same group on two or more occasions provides a measure of the test's reliability, or stability over time.

1.2. The correlation between the two sets of scores serves as the estimate of the assessment's reliability.

1.3. Practice effects and memory from the earlier administration can distort test-retest data.
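The test-retest correlation described above can be computed with a standard Pearson correlation. A minimal sketch in Python, using hypothetical scores (not from the source) for the same eight students on two administrations:

```python
# Hypothetical scores for the same students on two administrations
# of the same test (illustrative data only).
test1 = [78, 85, 92, 64, 70, 88, 95, 59]
test2 = [75, 88, 90, 60, 74, 85, 97, 62]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(test1, test2), 3))
```

A correlation near 1.0 would suggest stable score rankings across the two dates; the same calculation applies to the alternate-forms and criterion-related correlations discussed below.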

2. Alternate Forms or Equivalence

2.1. The concept of alternate forms is similar to test-retest, but the tests are not identical. Instead, the tests measure the same basic concepts using different assessment items.

2.2. Alternate forms can solve the test-retest problem in which experience and memory negatively impact the results.

2.3. Again, the correlation in the results is measured to test for reliability.

3. Internal Consistency

3.1. Internal consistency can be estimated by two methods.

3.1.1. The Kuder-Richardson method shows how reliably the test measures a single concept.

3.1.2. The split-half method estimates the overall reliability of a test by dividing the test into two halves (for example, odd- and even-numbered items), scoring each half separately for every student, and correlating the two sets of half-test scores.

3.1.2.1. Because each half is only half the length of the full test, the split-half correlation understates the full test's reliability, so the results are merely estimates.

3.1.2.2. The Spearman-Brown prophecy formula is employed to correct the half-test correlation upward, yielding a more accurate estimate of full-length reliability.
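The Spearman-Brown correction above is a one-line formula: projecting a half-test correlation r onto a test of double length gives 2r / (1 + r). A minimal sketch with an assumed half-test correlation of 0.60:

```python
def spearman_brown(r_half):
    """Spearman-Brown prophecy formula: projects the correlation
    between two half-tests onto the full-length test."""
    return 2 * r_half / (1 + r_half)

# An assumed half-test correlation of 0.60 projects to a
# full-test reliability estimate of 0.75.
print(spearman_brown(0.60))
```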

4. Content Validity Evidence

4.1. The simplest form of validity evidence to gather: the test items are inspected to see whether they match the content the test is intended to cover.

4.2. Aptitude or personality testing makes determining content validity more difficult.

4.3. Content validity evidence does not always support the effectiveness of an assessment.

5. Criterion-Related Validity Evidence

5.1. To obtain criterion-related validity evidence, test scores are correlated with an external criterion to determine their validity.

5.2. Criterion-related validity evidence gathering is divided into two subsets.

5.2.1. Predictive Validity Evidence

5.2.1.1. "Determined by correlating test scores with criterion collected after a period of time has passed" (Kubiszyn & Borich, 2012).

5.2.1.1.1. In other words, student test scores are compared with a criterion measured later, such as students' 2012 test scores being correlated with the same students' 2014 scores.

5.2.2. Concurrent Criterion-related Evidence

5.2.2.1. "Determined by correlating test scores with a criterion measure at the same time" (Kubiszyn & Borich, 2012, p. 339).

5.2.2.1.1. In other words, the student test scores are measured against current criteria, such as 2012 national test score averages.

6. Construct Validity Evidence

6.1. Theoretical predictors or rationales are used in construct validity evidence gathering to determine whether test scores correlate with other variables in the way the underlying theory predicts.

7. "Validity: Does the test measure what it is supposed to measure?" (Kubiszyn & Borich, 2012, p. 329)

7.1. If a test has an intended purpose, then validity evidence demonstrates if the test fulfills that purpose.

8. "Reliability: Does the test yield the same or similar score rankings consistently?" (Kubiszyn & Borich, 2012, p. 329)

8.1. Reliability demonstrates how consistently a test ranks students across multiple administrations.