14. The external criterion it is correlated with is future behavior or performance on another test

15. If the predicted behavior occurs, the test is considered valid

16. Is used to predict some future behavior of those taking it.

17. Is excellent for achievement tests

18. Uses logical judgment

19. Only minimal in its ability to determine a test's validity.

20. External criterion is instructional objectives

21. The inspection of test questions to see whether they correspond to what the test should cover.

22. Content Validity Evidence

23. Does the test measure what it is supposed to?

24. Criterion-Related Validity Evidence

25. Construct Validity Evidence

26. Test-Retest (Stability)

27. Does the test yield the same or similar score rankings consistently when all other factors are equal?

28. Reliability

29. Validity

30. Test is given this week, and then given again next week, with no instruction given between tests

31. If the scores have a high correlation, then the test is considered reliable
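The test-retest check above comes down to correlating the two sets of scores. A minimal sketch in Python, using a hand-rolled Pearson correlation (the student scores are invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five students on the same test, one week apart,
# with no instruction in between
week1 = [80, 72, 95, 60, 88]
week2 = [82, 70, 97, 63, 85]

r = pearson_r(week1, week2)  # near 1.0: score rankings are stable
```

A coefficient near 1.0 means the students rank in nearly the same order on both administrations, which is what "stability" asks for.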

32. Alternate Forms (Equivalence)

33. Two equivalent forms of test are given

34. The correlation between the two forms is then determined

35. Shortly after (e.g., that same afternoon), the same students take the half of the test they did not take in the morning

36. Internal Consistency (Split-Halves)

37. If a test's items are supposed to measure one basic concept, one item should correlate with the next. Therefore, if a student gets one item correct, they should get similar items correct.

38. A test is split into two equal halves

39. Both forms are given to a group of students in the morning

40. If the two halves are strongly correlated, they can be assumed to be reliable measures of the test's internal consistency

41. Both halves are administered to a group of students

42. The scores for each half are then determined and the correlation computed using the Spearman-Brown formula or one of the Kuder-Richardson formulas
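The two formulas named above are standard and can be sketched directly. Spearman-Brown corrects the half-test correlation up to an estimate of full-test reliability, r_full = 2·r_half / (1 + r_half); KR-20 applies when each item is scored 0/1. The response data below are invented for illustration:

```python
def spearman_brown(r_half):
    """Estimate full-test reliability from a split-half correlation."""
    return 2 * r_half / (1 + r_half)

def kr20(item_responses):
    """Kuder-Richardson 20 for dichotomously (0/1) scored items.

    item_responses: one list of 0/1 item scores per student.
    """
    n = len(item_responses)
    k = len(item_responses[0])
    totals = [sum(s) for s in item_responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    # Sum of p*q over items, where p = proportion answering the item correctly
    pq = sum(
        (sum(s[i] for s in item_responses) / n)
        * (1 - sum(s[i] for s in item_responses) / n)
        for i in range(k)
    )
    return (k / (k - 1)) * (1 - pq / var_t)

# A half-test correlation of 0.70 implies full-test reliability of about 0.82
r_full = spearman_brown(0.70)

# Hypothetical responses: 4 students x 3 items, scored 0/1
responses = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
reliability = kr20(responses)
```

Note the direction of the Spearman-Brown correction: because a half-length test is less reliable than the full test, the corrected estimate is always at least as large as the half-test correlation.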

43. In interpreting validity, one must remember that validity evidence depends on both the strength of the validity coefficient and the purpose for which the test is being used; that group variability affects the strength of the validity coefficient; and that coefficients ought to be considered in terms of the importance and reliability of the criterion. (Kubiszyn & Borich, 2010)

44. In interpreting reliability coefficients, one must keep in mind that group variability, scoring reliability, test length, and item difficulty all affect or limit test score reliability. (Kubiszyn & Borich, 2010)

45. References: Kubiszyn, T., & Borich, G. (2010). Educational Testing and Measurement: Classroom Application and Practice (9th ed.).