Validity And Reliability


1. Split-half involves splitting the test into two equivalent halves and determining the correlation between them.

1.1. Kuder-Richardson methods measure the extent to which items within one form of the test have as much in common with one another as do the items in that form with corresponding items in an alternate form.
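
As a concrete illustration (a minimal sketch, not from the source; the function names and the odd/even split are my own choices), the split-half estimate can be computed by summing odd- and even-numbered items separately, correlating the two half scores, and stepping the result up with the Spearman-Brown formula so it refers to the full-length test:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: one list of item scores per examinee.
    Splits the items into odd/even halves, correlates the two
    half scores, and applies the Spearman-Brown correction."""
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)
```

The odd/even split is arbitrary; any division into two equivalent halves works, and different splits can give somewhat different estimates.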

2. Internal Consistency means that the items in the test should be correlated with each other, so that the test is internally consistent. Two approaches to ensuring that a test is internally consistent are the split-half and Kuder-Richardson methods.
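
For dichotomously scored (right/wrong) items, the standard Kuder-Richardson coefficient is KR-20: r = (k / (k - 1)) * (1 - Σ p·q / σ²), where k is the number of items, p is the proportion answering each item correctly, q = 1 - p, and σ² is the variance of total scores. A minimal sketch (the function name and data layout are my own):

```python
def kr20(item_scores):
    """KR-20 internal-consistency estimate for 0/1-scored items.
    item_scores: one list of item scores per examinee."""
    n = len(item_scores)        # number of examinees
    k = len(item_scores[0])     # number of items
    totals = [sum(row) for row in item_scores]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in item_scores) / n  # item difficulty
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)
```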

3. Alternative form means that if there are two equivalent forms of a test, these forms can be used to obtain an estimate of the reliability of the scores from the test.

4. Test-Retest is a method of estimating reliability that is exactly what its name implies: the test is given twice, and the correlation between the two sets of scores is determined.

5. Internal Consistency

6. Test-Retest

6.1. Alternative Form

7. Three Types of Reliability:

8. Validity:

8.1. Validity asks the question: Does the test measure what it is supposed to measure?

10. Reliability

10.1. Reliability asks the question: Does the test yield the same or similar score rankings (all other factors being equal) consistently?

11. Interpreting Reliability Coefficients

11.1. Group variability affects test score reliability.

11.1.1. As group variability increases, reliability goes up.

11.2. Scoring reliability limits test score reliability.

11.2.1. As scoring reliability goes down, so does the test's reliability.

11.3. Test length affects test score reliability.

11.3.1. As test length increases, the test reliability tends to go up.
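
The effect of test length can be quantified with the Spearman-Brown prophecy formula (a sketch; the parameter names are mine): given the current reliability and the factor by which the number of comparable items changes, it predicts the new reliability.

```python
def spearman_brown(r_current, length_factor):
    """Predicted reliability after changing test length.
    length_factor > 1 lengthens the test (e.g. 2.0 doubles it);
    length_factor < 1 shortens it."""
    return (length_factor * r_current) / (1 + (length_factor - 1) * r_current)
```

For example, doubling a test whose reliability is 0.60 predicts a reliability of 0.75, assuming the added items are comparable in quality to the originals.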

11.4. Item difficulty affects test score reliability.

11.4.1. As items become very easy or very hard, the test's reliability goes down.

12. Construct Validity Evidence

13. The reliability of a test refers to the consistency with which it yields the same rank for individuals who take the test more than once.

14. Types of Validity

15. 1. Content Validity Evidence: this is established for a test by inspecting the test questions to see whether they correspond to what the user decides should be covered by the test.

16. 2. Criterion-Related Validity Evidence: scores from a test are correlated with an external criterion. There are two types of criterion-related validity evidence.

17. Concurrent & Predictive: Concurrent validity deals with measures that can be administered at the same time as the measure to be validated. Predictive validity refers to how well the test predicts some future behavior of the examinees.

18. Some tests used when establishing concurrent validity are the Stanford-Binet V and the Wechsler Intelligence Scale for Children-IV (WISC-IV).

19. A test has construct validity evidence if its relationship to other information corresponds well with some theory.