Validity and Reliability in Assessment

1. Does not, however, check for reading difficulty or poor test items

2. Test is considered to have construct validity if its relationship to other information corresponds well with some theory or rationale.

3. Is used when there is no criterion to anchor the test, such as when measuring something new or something that has not been measured well before.

4. Test is given and then, after a period of time, the person who took it is measured to determine whether the test predicted what it should

5. The external criterion is a well established test.

6. If the same students rank similarly on both tests, then the new test is considered to be valid.

7. Used to validate new tests

8. Both the old test and the new test are given to the same group

9. Uses numerical indices

10. Deals with measures that can be administered at the same time as the measure being validated

11. Concurrent Criterion-Related Validity Evidence
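
A minimal sketch of how this comparison might be computed, assuming hypothetical scores for six students; the use of a rank correlation here is an illustrative choice, since the question is whether students rank similarly on both tests.

```python
# Hypothetical scores: the same six students take the new test and a
# well-established criterion test at about the same time.
from scipy.stats import spearmanr

new_test = [88, 72, 95, 60, 79, 84]
established_test = [85, 70, 92, 65, 75, 88]

# Rank correlation: do the students rank similarly on both tests?
rho, _ = spearmanr(new_test, established_test)
print(f"rank correlation = {rho:.2f}")  # a high rho supports concurrent validity
```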

12. Predictive Criterion-Related Validity Evidence

13. The external criterion it is correlated with is a future behavior or test

14. If the predicted behavior occurs, then the test is considered valid

15. Is used to predict some future behavior of those taking it.
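
A minimal sketch of a predictive check, assuming a hypothetical admissions test scored now and first-year GPA collected a year later; the names and numbers are illustrative, not from the source.

```python
# Hypothetical data: test scores at entry and the criterion measured later.
from scipy.stats import pearsonr

admissions_score = [520, 610, 480, 700, 650, 560]  # test given now
first_year_gpa = [2.8, 3.2, 2.5, 3.8, 3.5, 3.0]    # future behavior (criterion)

# If the test predicts the later criterion, the correlation should be high.
r, _ = pearsonr(admissions_score, first_year_gpa)
print(f"predictive validity coefficient = {r:.2f}")
```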

16. Is excellent for achievement tests

17. Uses logical judgment

18. Only minimal in its ability to determine a test's validity.

19. External criterion is instructional objectives

20. The inspection of questions to see if they correspond to what the test should cover.

21. Content Validity Evidence

22. Does the test measure what it is supposed to?

23. Criterion-Related Validity Evidence

24. Construct Validity Evidence

25. Does the test yield the same or similar score rankings consistently when all other factors are equal?

26. Validity

27. In interpreting validity, one must remember that validity evidence depends on both the strength of the validity coefficient and the purpose the test is being used for; that group variability affects the strength of the validity coefficient; and that the coefficients ought to be considered in terms of the importance and reliability of the criterion. (Kubiszyn & Borich, 2010)

28. Reference: Kubiszyn, T., & Borich, G. (2010). Educational Testing and Measurement: Classroom Application and Practice (9th ed.).

29. If there are large differences in the scores, then the test is determined not to be reliable.

30. Test-Retest (Stability)

31. Reliability

32. Test is given this week and then given again next week, with no instruction between the tests

33. If the scores have a high correlation, then the test is considered reliable
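
A minimal sketch of the test-retest check, assuming hypothetical scores from two administrations one week apart.

```python
# Hypothetical scores for the same students, one week apart, with no
# instruction in between.
from scipy.stats import pearsonr

week_1 = [75, 82, 90, 64, 71, 88]
week_2 = [78, 80, 92, 60, 73, 86]

# A high correlation suggests stable (reliable) scores.
r, _ = pearsonr(week_1, week_2)
print(f"test-retest reliability = {r:.2f}")
```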

34. Alternate Forms (Equivalence)

35. Two equivalent forms of the test are given

36. The correlation between the two forms is then determined

37. Shortly after, such as that same afternoon, the same students take the opposite half of the test from the one they took in the morning
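
A minimal sketch of the equivalence check, assuming hypothetical scores for the same students on two forms of a test; the scores are illustrative.

```python
# Hypothetical scores for the same students on Form A (morning) and the
# equivalent Form B (afternoon).
from scipy.stats import pearsonr

form_a = [80, 67, 91, 73, 58, 85]
form_b = [78, 70, 89, 75, 61, 83]

# A high correlation between the two forms indicates equivalence.
r, _ = pearsonr(form_a, form_b)
print(f"alternate-forms reliability = {r:.2f}")
```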

38. Internal Consistency (Split-Halves)

39. If a test is supposed to measure one basic concept, then one item should correlate with the next. Therefore, if a student gets one item correct, they should get similar items correct.

40. A test is split into two equal halves

41. Both forms are given to a group of students in the morning

42. If the halves are strongly correlated, then they can be assumed to be reliable measures of the internal consistency of the test

43. Both halves are administered to a group of students

44. The scores for each half are then determined and the correlation between them computed; full-test reliability is then estimated using the Spearman-Brown formula or one of the Kuder-Richardson formulas (see the sketch below)
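
A minimal sketch of a split-half computation with the Spearman-Brown correction, plus a Kuder-Richardson (KR-20) estimate; the item scores are hypothetical, and the odd/even split is one common way to form the two halves.

```python
# Hypothetical item scores: each row is one student's answers on a
# six-item test (1 = correct, 0 = incorrect).
from scipy.stats import pearsonr

scores = [
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
]
n_students, n_items = len(scores), len(scores[0])

# Split the test into two halves (odd- vs. even-numbered items) and
# total each half for every student.
odd_half = [sum(row[0::2]) for row in scores]
even_half = [sum(row[1::2]) for row in scores]

# Correlation between the halves, then the Spearman-Brown correction to
# estimate the reliability of the full-length test:
#   r_full = 2 * r_half / (1 + r_half)
r_half, _ = pearsonr(odd_half, even_half)
r_full = 2 * r_half / (1 + r_half)
print(f"half-test r = {r_half:.2f}, Spearman-Brown estimate = {r_full:.2f}")

# KR-20, an alternative internal-consistency estimate for right/wrong items:
#   KR-20 = (k / (k - 1)) * (1 - sum(p_i * q_i) / variance of total scores)
totals = [sum(row) for row in scores]
mean_total = sum(totals) / n_students
var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
pq = sum(
    (sum(row[i] for row in scores) / n_students)        # p_i: proportion correct
    * (1 - sum(row[i] for row in scores) / n_students)  # q_i = 1 - p_i
    for i in range(n_items)
)
kr20 = (n_items / (n_items - 1)) * (1 - pq / var_total)
print(f"KR-20 = {kr20:.2f}")
```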

45. In interpreting reliability coefficients, one must keep in mind that group variability, scoring reliability, test length, and item difficulty all affect or limit test score reliability. (Kubiszyn & Borich, 2010)