Validity and Reliability in Assessment

1. If there are large differences in the scores, then the test is determined not to be reliable.

2. Does not, however, check for reading difficulty or poor test items.

3. A test is considered to have construct validity if its relationship to other information corresponds well with some theory or rationale.

4. Is used when there is no criterion to anchor the test, such as when measuring something new or something not measured well before.

5. The test is given, and after a period of time the test taker is measured to determine whether the test predicted what it should.

6. The external criterion is a well-established test.

7. If the same students rank similarly on both tests, then the new test is considered valid (see the sketch below).

8. Used to validate new tests

9. Both the old test and the new test are given to the same group

10. Uses numerical indices

11. Deals with measures that can be administered at the same time as the measure being validated

12. Concurrent Criterion-Related Validity Evidence
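
The concurrent procedure described above reduces to correlating two sets of scores. Below is a minimal sketch (Python with SciPy assumed; the scores are invented for illustration, not data from the source). Spearman's rank correlation is used because the claim is about the same students ranking similarly on both tests.

```python
# Hypothetical concurrent-validity check: the new test and a
# well-established test are given to the same group at the same time.
from scipy.stats import spearmanr

# Invented scores for the same ten students on both tests.
new_test         = [78, 85, 62, 90, 74, 88, 55, 81, 69, 93]
established_test = [75, 88, 60, 94, 70, 85, 58, 80, 72, 90]

# Spearman's rho compares rank orderings: if the same students rank
# similarly on both tests, rho is high and the new test looks valid.
rho, p_value = spearmanr(new_test, established_test)
print(f"Concurrent validity coefficient (rho): {rho:.2f}")
```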

13. Predictive Criterion-Related Validity Evidence

14. The external criterion it is correlated with is a future behavior or test score.

15. If the predicted behavior exists, then the test is considered valid.

16. Is used to predict some future behavior of those taking it (see the sketch below).
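
A minimal sketch of the predictive check follows (Python with SciPy assumed). The admission scores and later GPAs are invented, and "first-year GPA" is simply a stand-in for whatever future behavior the test is meant to predict.

```python
# Hypothetical predictive-validity check: test scores gathered now are
# correlated with a criterion measured later (an invented first-year GPA).
from scipy.stats import pearsonr

admission_scores = [520, 610, 480, 700, 560, 650, 500, 590]  # measured now
first_year_gpa   = [2.8, 3.4, 2.5, 3.9, 3.0, 3.6, 2.6, 3.2]  # measured later

# A strong positive correlation is evidence that the test predicts the
# future behavior it is supposed to predict.
r, p_value = pearsonr(admission_scores, first_year_gpa)
print(f"Predictive validity coefficient (r): {r:.2f}")
```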

17. Is excellent for achievement tests

18. Uses logical judgment

19. Only minimal in its ability to determine a test's validity.

20. External criterion is instructional objectives

21. The inspection of questions to see whether they correspond to what the test should cover.

22. Content Validity Evidence

23. Does the test measure what it is supposed to?

24. Criterion-Related Validity Evidence

25. Construct Validity Evidence

26. Test-Retest (Stability)

27. Does the test yield the same or similar score rankings consistently when all other factors are equal?

28. Reliability

29. Validity

30. The test is given this week and then again next week, with no instruction given between the two administrations.

31. If the scores have a high correlation, then the test is considered reliable (see the sketch below).
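
As a minimal sketch of the test-retest computation (Python with NumPy assumed; the week-one and week-two scores are invented):

```python
# Hypothetical test-retest check: the same test given one week apart,
# with no instruction in between.
import numpy as np

week1 = np.array([88, 72, 95, 60, 81, 77, 90, 65])
week2 = np.array([85, 75, 93, 62, 80, 74, 91, 68])

# A high correlation between the two administrations suggests stable
# score rankings, i.e., a reliable test.
r = np.corrcoef(week1, week2)[0, 1]
print(f"Test-retest reliability coefficient: {r:.2f}")
```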

32. Alternate Forms (Equivalence)

33. Two equivalent forms of the test are given.

34. The correlation between the two forms is then determined

35. Shortly after (that same afternoon, for example), the same students take the form of the test they did not take in the morning.

36. Internal Consistency (Split-Halves)

37. If a test is supposed to measure one basic concept, then its items should correlate with one another. Therefore, if a student gets one item correct, they should get similar items correct.

38. A test is split into two equal halves

39. Both halves are given to a group of students in the morning.

40. If the two halves are strongly correlated, then the test can be assumed to have internal consistency reliability.

41. Both halves are administered to a group of students.

42. The scores for each half are then determined and the correlation between them computed; the Spearman-Brown formula, or one of the Kuder-Richardson formulas, is then used to estimate the reliability of the full-length test (see the sketch below).
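
The two formulas named above can be sketched directly. The snippet below (Python with NumPy; the 0/1 item responses are invented) splits a test into odd and even items, applies the Spearman-Brown correction r_full = 2r / (1 + r) to the half-test correlation, and also computes KR-20, which does not require an explicit split.

```python
# Minimal split-half and KR-20 sketch on an invented 0/1 response matrix
# (rows = students, columns = items scored right/wrong).
import numpy as np

responses = np.array([
    [1, 1, 1, 0, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 0, 1, 1, 1, 1, 1],
])

# Split into odd- and even-numbered items and correlate the half scores.
half_a = responses[:, 0::2].sum(axis=1)
half_b = responses[:, 1::2].sum(axis=1)
r_half = np.corrcoef(half_a, half_b)[0, 1]

# Spearman-Brown corrects the half-test correlation up to full length:
# r_full = 2r / (1 + r).
r_full = 2 * r_half / (1 + r_half)

# KR-20 = k/(k-1) * (1 - sum(p*q) / total-score variance), where p is
# each item's proportion correct and q = 1 - p (population variance used).
k = responses.shape[1]
p = responses.mean(axis=0)
q = 1 - p
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / responses.sum(axis=1).var())

print(f"split-half r = {r_half:.2f}, Spearman-Brown = {r_full:.2f}, KR-20 = {kr20:.2f}")
```

Odd-even is only one possible split; different splits can yield different half-test correlations, which is part of the appeal of KR-20 since it does not depend on any particular split.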

43. In interpreting validity, one must remember that validity evidence depends on both the strength of the validity coefficient and the purpose for which the test is being used; that group variability affects the strength of the validity coefficient; and that the coefficients ought to be considered in terms of the importance and reliability of the criterion (Kubiszyn & Borich, 2010).

44. In interpreting reliability coefficients, one must keep in mind that group variability, scoring reliability, test length, and item difficulty all affect or limit test score reliability (Kubiszyn & Borich, 2010).

45. References: Kubiszyn, T., & Borich, G. (2010). Educational testing and measurement: Classroom application and practice (9th ed.).