Validity and Reliability

1. Types of Validity

1.1. Content Validity Evidence

1.1.1. "The content validity evidence for a test is established by inspecting test questions to see whether they correspond to what the user decides should be covered by the test." (Kubiszyn & Borich, 2010, pg. 330)

1.2. Criterion Related Validity Evidence

1.2.1. "In establishing criterion-related validity evidence, scores from a test are correlated with an external criterion." (Kubiszyn & Borich, 2010, pg. 330)

1.2.1.1. Types of Criterion-Related Validity Evidence

1.2.1.1.1. Predictive

1.2.1.1.2. Concurrent

1.3. Construct Validity Evidence

1.3.1. "A test has construct validity evidence if its relationship to other information corresponds well with some theory." (Kubiszyn & Borich, 2010, pg. 332)

2. A valid test measures what it is primarily supposed to measure.

3. Methods of Reliability

3.1. Test-Retest or Stability Method

3.1.1. "The test is given twice and the correlation between the first set of scores and the second set of scores is determined." (Kubiszyn & Borich, 2010, pg. 341)

3.2. Alternate Form or Equivalence Method

3.2.1. "To use this method of estimating reliability, two equivalent forms of the test must be available, and they must be administered under conditions as nearly equivalent as possible." (Kubiszyn & Borich, 2010, pg. 343)

3.3. Internal Consistency Method

3.3.1. "If the test in question is designed to measure a single basic concept, it is reasonable to assume that people who get one item right will be more likely to get other, similar items right. In other words, items ought to be correlated with each other, and the test ought to be internally consistent." (Kubiszyn & Borich, 2010, pg. 343)
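One common index of this item intercorrelation is coefficient (Cronbach's) alpha, which for items scored 0/1 coincides with the Kuder-Richardson formula 20. A minimal sketch with hypothetical item-level data (rows are students, columns are items):

```python
# Hypothetical data: five students, four items scored 0 (wrong) / 1 (right).
scores = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """Coefficient alpha: compares summed item variances to total-score variance."""
    k = len(rows[0])  # number of items
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

print(round(cronbach_alpha(scores), 3))  # → 0.696
```

When items correlate with each other, total-score variance grows faster than the sum of item variances, pushing alpha toward 1; uncorrelated items push it toward 0.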

4. Validity Coefficients

4.1. Content Validity

4.2. Concurrent Criterion Validity

4.3. Predictive Criterion Validity

5. "Content Validity Evidence (asks the question:) Do test items match and measure objectives? (and answer the questions by:) Match the items with objectives. " (Kubiszyn & Borich, 2010, pg. 335)

6. "Concurrent Criterion-Related Validity Evidence (asks the question) How well does performance on the new test match performance on an established test? (It answers the question by) Correlate new test with an accepted criterion, for example, a well-established test measuring the same behavior."

7. "Predictive Criterion-Related Validity Evidence (asks the question) Can the test predict subsequent performance, for example, success or failure in the next grade? (It answers the question by) Correlate scores from the new test with a measure of some future performance." (Kubiszyn & Borich, 2010, pg. 335)

8. The reliability of a test refers to the consistency with which it yields the same rank for individuals who take the test more than once.

9. Two Methods

9.1. Split-Half Method

9.2. Kuder-Richardson Methods

10. Reliability Coefficients

10.1. "PRINCIPLE 1: Group variability affects the size of the reliability coefficient. Higher coefficients result from heterogeneous groups than from homogeneous groups. " (Kubiszyn & Borich, 2010, pg. 346)

10.2. "PRINCIPLE 2: Scoring reliability limits test score reliability. If tests are scored unreliably, error is introduced that will limit the reliability of the test scores." (Kubiszyn & Borich, 2010, pg. 347)

10.3. "PRINCIPLE 3: All other factors being equal, the more items included in a test, the higher the reliability of the scores." (Kubiszyn & Borich, 2010, pg. 347)

10.4. "PRINCIPLE 4: Reliability of test scores tends to decrease as tests become too easy or too difficult." (Kubiszyn & Borich, 2010, pg. 348)

11. Reliability ensures that assessments yield consistent results; a test must be reliable before it can be valid, although a reliable test is not automatically valid.

12. Validity ensures that the information being tested relates to the information that was actually taught.