Validity and Reliability


1. Kuder–Richardson methods determine the extent to which the entire test represents a single, fairly consistent measure of a concept.
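The Kuder–Richardson idea can be sketched numerically. Below is a minimal illustration of KR-20 for dichotomously scored (right/wrong) items; the response matrix and all numbers are invented for the example.

```python
def kr20(responses):
    """KR-20 reliability for dichotomously scored items (1 = correct, 0 = incorrect)."""
    k = len(responses[0])                      # number of items
    n = len(responses)                         # number of examinees
    totals = [sum(row) for row in responses]   # total score per examinee
    mean_total = sum(totals) / n
    # Population variance of the total scores
    var_total = sum((t - mean_total) ** 2 for t in totals) / n
    # Sum of item variances: p * (1 - p) for each item
    pq_sum = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n
        pq_sum += p * (1 - p)
    return (k / (k - 1)) * (1 - pq_sum / var_total)

# Hypothetical data: 5 students x 4 items
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(kr20(responses), 2))  # a single coefficient between 0 and 1
```

A value near 1 suggests the items behave as a consistent measure of one concept; a low value suggests they do not.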

2. Internal consistency reliability estimates how consistently the items within a single test measure the same construct for the same subject. It is commonly estimated by splitting the test into two parts.

2.1. Split-half and odd–even estimates divide a test into halves and correlate the halves with one another.
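A split-half estimate with the Spearman–Brown correction might look like this minimal sketch (odd–even split; the `pearson` helper and the data are invented for illustration):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(responses):
    """Odd-even split-half reliability, stepped up with Spearman-Brown."""
    odd = [sum(row[0::2]) for row in responses]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in responses]  # items 2, 4, 6, ...
    r_half = pearson(odd, even)
    # Spearman-Brown correction estimates the reliability of the full-length test
    return 2 * r_half / (1 + r_half)

# Hypothetical data: 5 students x 4 items
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(split_half_reliability(responses), 2))
```

The correction is needed because each half is shorter than the full test, and shorter tests are less reliable.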

3. Test–retest estimates of reliability are obtained by giving the same test to the same group on separate occasions and correlating the two sets of scores.

3.1. This is important because students may do better with familiar material. A teacher can give the test, score it, and, if the scores are unsatisfactory, reteach the lesson and reassess.
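Numerically, a test–retest estimate is just the correlation between the two administrations. A minimal sketch with invented scores:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for the same five students on two occasions
first_administration = [85, 90, 78, 92, 70]
second_administration = [83, 91, 80, 89, 72]
test_retest_r = pearson(first_administration, second_administration)
print(round(test_retest_r, 2))  # a value near 1.0 indicates stable scores
```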

4. Alternate-form estimates of reliability are obtained by administering two alternate or equivalent forms of a test to the same group within a short interval and correlating their scores.

4.1. This is important because an equivalent form serves as a backup if one assessment fails and leaves no evidence of student achievement.

5. Construct validity evidence is determined by showing that test results relate to the underlying trait or construct the test is intended to measure, in the way theory predicts.

5.1. This is important because when educators construct or implement a lesson, they make predictions about the outcome. After thorough observation, they can compare their expectations with what actually happened.

6. Criterion-related validity evidence is established by correlating test scores with an external criterion measure; the evidence may be predictive or concurrent with respect to the learning outcome.

6.1. Two types of criterion-related validity

6.1.1. Predictive validity evidence is established when test scores predict future performance on a criterion the test should theoretically be able to predict.

6.1.2. Concurrent validity evidence is determined by correlating test scores with a criterion measure collected at the same time.

6.2. This is important because educators can take scores from previous work and tests and compare them to final test scores.
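Criterion-related evidence is likewise expressed as a correlation, often called a validity coefficient. A minimal sketch of predictive validity, with invented pretest scores and later final grades:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: predictor collected earlier, criterion collected later
pretest_scores = [60, 72, 55, 90, 68, 80]
final_grades = [65, 70, 58, 88, 66, 84]
validity_coefficient = pearson(pretest_scores, final_grades)
print(round(validity_coefficient, 2))
```

For concurrent validity the code would be identical; the only difference is that the criterion scores are collected at the same time as the test scores.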

7. Content validity evidence compares the content of the test with what was taught during the lesson.

7.1. This is important because it keeps educators aligned with "what" they are supposed to be teaching and measuring. It directly matches the assessment to the instruction.

8. Validity

8.1. A valid test measures what it is supposed to measure.

8.2. Three types of validity evidence: content, construct, and criterion-related.

9. Reliability

9.1. Reliability refers to the stability of a test score over repeated administrations.

9.2. Types of reliability: test–retest, alternate-form, and internal consistency.