Validity and Reliability

1. Validity means: does the test measure what it is supposed to measure? Validity is established by matching the test with its objectives (pg. 326).

2. Construct Validity Evidence

3. Concurrent validity evidence is determined by correlating test scores with a criterion measure collected at the same time.
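Concurrent validity evidence therefore comes down to a correlation between two sets of scores. A minimal sketch in Python, assuming Pearson's r as the correlation statistic; the score lists are hypothetical, invented purely for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: scores on a new test and on an established
# criterion measure collected at the same time.
new_test = [78, 85, 62, 90, 71]
criterion = [75, 88, 60, 93, 70]
print(round(pearson_r(new_test, criterion), 3))  # ≈ 0.991
```

A coefficient near +1.0 would indicate the new test ranks examinees much as the established criterion measure does.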

4. Predictive validity evidence refers to how well the test predicts some future behavior. It is determined by administering the test to a group of subjects, then measuring those subjects on whatever the test is supposed to predict after a period of time has elapsed (pg. 328).

5. A test has construct validity if its relationship to other information corresponds well with some theory. Any information that lets you know whether results from the test correspond to what you would expect tells you something about the construct validity evidence for a test (pg. 328).

6. It differs from concurrent validity evidence in that there is no accepted second measure available of what you're trying to measure, and it differs from predictive validity evidence in that there is no available measure of future behavior.

7. Importance of validity and reliability

8. Validity

9. Criterion-Related Validity Evidence

10. Predictive validity evidence is useful and important for aptitude tests, which attempt to predict how well test-takers will do in some future setting (pg. 328).

11. Content Validity Evidence

12. Content validity evidence is assessed by systematically comparing each test item with the instructional objectives to see if they match. Content validity evidence does not yield a numerical estimate of validity (pg. 327).

13. Construct validity evidence is important in establishing the validity of a test when we cannot anchor our test either to a well-established test measuring the same behavior or to any measurable future behavior (pg. 331).

14. Reference

15. Kubiszyn, T., & Borich, G. D. (2013). Educational testing & measurement: Classroom application and practice (10th ed.). Hoboken, NJ: John Wiley & Sons.

16. Test-retest estimates of reliability are obtained by administering the same test twice to the same group of individuals, with a small time interval between testings, and correlating the scores. The longer the time interval, the lower test-retest estimates will be.

17. Reliability refers to the stability of a test score over repeated administration, assuming the trait being measured has not changed (pg. 338).

18. Alternate forms or Equivalence

19. Alternate-form estimates of reliability are obtained by administering two alternate or equivalent forms of a test to the same group and correlating their scores. The time interval between testing is as short as possible (pg. 340).

20. Internal consistency estimates of reliability fall into two general categories: split-half or odd-even estimates and item-total correlations, such as the Kuder-Richardson (KR) procedure. These estimates should be used only when the test measures a single, unitary trait.

21. Split-half and odd-even estimates divide a test into halves and correlate the halves with one another. Because these correlations are based on half-tests, the obtained correlations underestimate the reliability of the whole test. The Spearman-Brown prophecy formula is used to correct these estimates to what they would be if they were based on the whole test (pg. 341).

22. Reliability

23. Internal Consistency

24. Kuder-Richardson methods determine the extent to which the entire test represents a single, fairly consistent measure of a concept.
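One widely used Kuder-Richardson procedure, KR-20, combines each item's difficulty p_i with the variance of total scores: KR-20 = (k / (k − 1)) · (1 − Σ p_i(1 − p_i) / σ²), where k is the number of items. A sketch with a made-up 0/1 (right/wrong) response matrix:

```python
def kr20(item_matrix):
    """KR-20 reliability from a 0/1 item-response matrix (rows = examinees)."""
    k = len(item_matrix[0])              # number of items
    n = len(item_matrix)                 # number of examinees
    totals = [sum(row) for row in item_matrix]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n   # population variance of totals
    pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in item_matrix) / n   # proportion answering item i correctly
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var)

# Hypothetical right/wrong responses for 5 examinees on a 4-item test
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(kr20(responses), 3))  # 0.8
```

Because KR-20 treats the items as interchangeable measures of one trait, a high value supports the claim that the test represents a single, fairly consistent measure of a concept.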

25. Internal consistency estimates tend to yield inflated reliability estimates for speeded tests. Since most achievement tests are at least partially speeded, internal consistency estimates for such tests will be somewhat inflated (pg. 343).