Validity/Reliability


1. Reliability: Does the test yield the same or similar score rankings (all other factors being equal) consistently? The three basic methods often used are:

1.1. Test-Retest or Stability: The test is given twice, and the correlation between the two sets of scores is found.

1.2. Alternate Forms or Equivalence: Students take two equivalent tests, and the correlation between the two sets of test scores is found.

1.3. Internal Consistency: If a student answers one item on a test correctly, that student should tend to answer the other items correctly as well, because the items are correlated with each other. The two approaches for determining a test's internal consistency are:

1.3.1. Split half: The test is split into two equal halves and the correlation between the half scores is found; the Spearman-Brown formula is then used to estimate the reliability of the full-length test from that half-test correlation.

1.3.2. Kuder-Richardson Methods: Measure the extent to which items within one form of the test have as much in common with one another as do items in an equivalent form.
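The Kuder-Richardson formula 20 (KR-20) can be sketched directly from its definition, KR20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / total-score variance), again using hypothetical 1/0 item data:

```python
from statistics import pvariance

# Hypothetical item responses (1 = correct, 0 = incorrect), one row per student
responses = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
]

k = len(responses[0])                     # number of items
totals = [sum(row) for row in responses]  # each student's total score

# p = proportion answering each item correctly; q = 1 - p
p = [sum(col) / len(responses) for col in zip(*responses)]
pq_sum = sum(pi * (1 - pi) for pi in p)

# KR-20: k/(k-1) * (1 - sum(p*q) / variance of total scores)
kr20 = (k / (k - 1)) * (1 - pq_sum / pvariance(totals))
print(round(kr20, 3))
```

Unlike split-half, KR-20 does not depend on how the test happens to be divided; it reflects the average of all possible split-half coefficients for dichotomously scored items.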

2. Validity: Does the test measure what it is supposed to measure? The types of validity evidence are:

2.1. Content Validity Evidence: Established by inspecting the test questions to see whether they correspond to what the test is supposed to cover.

2.2. Criterion-Related Validity Evidence: Scores from a test are correlated with an external criterion. The two types of criterion-related validity evidence are:

2.2.1. Concurrent Criterion-Related Validity Evidence: Deals with measures that can be administered at the same time as the measure to be validated.

2.2.2. Predictive Validity Evidence: Shows how well a test predicts the future behavior of those taking it.

2.3. Construct Validity Evidence: Test scores correspond to what you would expect them to be, based on your knowledge of the construct being measured.

2.4. Reference: Kubiszyn, T., & Borich, G. (2013). Educational Testing & Measurement: Classroom Application & Practice (10th ed.). John Wiley & Sons, Inc.