Validity and Reliability in Learning Assessment

1. Validity: Does the test measure what it is supposed to measure?

2. Content Validity Evidence: Established by inspecting test questions to determine whether they correspond to what should be on the test

3. Criterion-Related Validity Evidence: Established when test scores are correlated with an external criterion.

4. Concurrent Criterion-Related Validity: Determined by administering both the new test and an established test to the same group and correlating the scores.

5. Predictive Validity Evidence: Refers to how well the test predicts the future behavior of examinees; especially useful for aptitude tests.

6. Construct Validity Evidence: Relevant when test results correspond well with some theory. Example: if a test measures math computation, scores should increase after intensive coaching in math, not after coaching in foreign language or science.

7. Reliability: Does the test yield the same or similar score rankings (all other factors being equal) consistently?

8. The three basic methods for estimating the reliability of scores are: test-retest, alternate forms, and internal consistency.

9. Internal Consistency: Used when a test is designed to measure a single basic concept; the items should be consistent with one another and correlate with each other.
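
A common way to quantify internal consistency is Cronbach's alpha. The sketch below is a minimal illustration using hypothetical item scores (rows are students, columns are test items); the data values are made up for the example.

```python
# Minimal sketch: Cronbach's alpha as one common internal-consistency
# estimate. alpha = k/(k-1) * (1 - sum(item variances) / total variance).
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: list of per-student lists of item scores."""
    k = len(scores[0])                           # number of items
    items = list(zip(*scores))                   # column-wise item scores
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical scores for 4 students on 3 items
data = [[3, 4, 3], [5, 5, 4], [2, 2, 3], [4, 5, 5]]
print(round(cronbach_alpha(data), 2))
```

Values near 1 indicate that the items rise and fall together, i.e. they appear to measure the same underlying concept.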

10. Test-Retest: The test is given twice and the correlation between the two sets of scores is determined.
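
The correlation in question is typically a Pearson correlation between the two administrations. A minimal sketch, with hypothetical score lists:

```python
# Minimal sketch: test-retest reliability as the Pearson correlation
# between scores from two administrations of the same test.
from statistics import mean

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

first = [70, 85, 90, 60, 75]   # hypothetical scores, first administration
second = [72, 83, 92, 58, 78]  # same students on the retest
print(round(pearson_r(first, second), 2))
```

A high correlation means students kept roughly the same score rankings across the two sittings, which is exactly the consistency that reliability asks about.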

11. Alternate Forms: Two forms of the test are given to a group of students and the correlation between the scores is determined. The forms must be administered under equivalent conditions.