Validity and Reliability

1. A test has validity evidence if we can demonstrate that it measures what it says it measures (Kubiszyn & Borich, 2013).

2. Concurrent criterion-related validity evidence deals with measures that can be administered at the same time as the measure to be validated (Kubiszyn & Borich, 2013).

3. Reliability

5. Validity

6. Reliability: Does the test yield the same or similar score rankings (all other factors being equal) consistently?

7. Types of Validity

8. Types of Reliability

9. Content Validity Evidence: The content validity evidence for a test is established by inspecting test questions to see whether they correspond to what the user decides should be covered by the test (Kubiszyn & Borich, 2013).

10. Predictive validity evidence refers to how well the test predicts some future behavior of the examinees.

11. Test–retest is a method of estimating reliability that is exactly what its name implies. The test is given twice, and the correlation between the first set of scores and the second set of scores is determined (Kubiszyn & Borich, 2013).
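The correlation described here is the ordinary Pearson correlation between the two sets of scores. A minimal sketch in Python, using hypothetical scores for five students (the data and function name are illustrative, not from the source):

```python
from statistics import mean

def pearson_r(xs, ys):
    # Pearson correlation between two equal-length score lists.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores for five students on the same test given twice.
first_administration = [85, 78, 92, 70, 88]
second_administration = [83, 80, 90, 72, 86]

# A value near 1.0 indicates stable score rankings across administrations.
print(round(pearson_r(first_administration, second_administration), 2))  # → 0.99
```

The same computation applies to the alternate-forms method below; only the source of the two score lists changes (two forms of the test rather than two administrations of one form).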

12. Alternate Forms or Equivalence: If there are two equivalent forms of a test, these forms can be used to obtain an estimate of the reliability of the scores from the test. Both forms are administered to a group of students, and the correlation between the two sets of scores is determined (Kubiszyn & Borich, 2013).

13. Internal Consistency: If the test in question is designed to measure a single basic concept, it is reasonable to assume that people who get one item right will be more likely to get other, similar items right. In other words, items ought to be correlated with each other, and the test ought to be internally consistent (Kubiszyn & Borich, 2013).
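Internal consistency is commonly summarized by a single coefficient such as Cronbach's alpha; the coefficient is not named in the node above, so this is a standard-practice illustration rather than the source's own method. A sketch with hypothetical 0/1 item scores:

```python
def variance(xs):
    # Population variance of a list of numbers.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(scores):
    # scores: one row per student, one column per item (here 0/1 marks).
    k = len(scores[0])
    item_vars = sum(variance([row[i] for row in scores]) for i in range(k))
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 0/1 item scores: four students, eight items.
scores = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
]
print(round(cronbach_alpha(scores), 2))  # higher values = more internally consistent
```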

14. Split-Half Methods: To find the split-half (or odd–even) reliability, each item is assigned to one half or the other, and the scores on the two halves are then correlated (Kubiszyn & Borich, 2013).
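The odd–even assignment can be sketched as follows; the half-test correlation is conventionally stepped up to full length with the Spearman-Brown formula, an adjustment assumed here rather than stated in the node above (data are hypothetical):

```python
from statistics import mean

def pearson_r(xs, ys):
    # Pearson correlation between two equal-length score lists.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical 0/1 item scores: four students, eight items.
scores = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
]

# Odd–even assignment: items 1, 3, 5, ... form one half; items 2, 4, 6, ... the other.
odd_half = [sum(row[0::2]) for row in scores]
even_half = [sum(row[1::2]) for row in scores]

half_r = pearson_r(odd_half, even_half)   # reliability of a half-length test
full_r = 2 * half_r / (1 + half_r)        # Spearman-Brown step-up to full length
print(round(half_r, 2), round(full_r, 2))
```

The step-up is needed because each half is only half as long as the real test, and shorter tests are less reliable, so the raw half-test correlation understates the full test's reliability.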

15. Validity and reliability together determine how trustworthy a test is for students: the test must measure what it claims to measure, and it must do so consistently.

16. Kubiszyn, T., & Borich, G. (2013). Educational testing & measurement: Classroom application and practice (10th ed.). Hoboken, NJ: John Wiley & Sons.