Assessments Mind Map

1. Validity

2. Defined: demonstrating that a test measures what it says it measures (Kubiszyn, 2010).

3. Does the test measure what it is supposed to?

4. Content Validity Evidence: established by inspecting test questions to see whether they correspond to what the user decides should be covered by the test (Kubiszyn, 2010).

5. Criterion-Related Validity Evidence: scores are correlated with an external criterion.

6. Concurrent Criterion-Related Validity Evidence: scores are correlated with a criterion measure obtained at about the same time (e.g., IQ tests).

7. Predictive Validity Evidence: scores are correlated with a criterion measured at a later time (e.g., the SAT).

8. Construct Validity Evidence: a test has construct validity if its relationship to other information corresponds well with a theory (Kubiszyn, 2010).

9. Validity Coefficients

10. Content Validity Evidence: comparing test items with instructional objectives to determine whether the items match or measure the objectives (Kubiszyn, 2010).

11. Concurrent and Predictive Validity Evidence: require the correlation of a predictor or concurrent measure with a criterion measure (Kubiszyn, 2010).
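A validity coefficient of this kind is simply the Pearson correlation between the predictor scores and the criterion scores. As a sketch, using hypothetical score data (the numbers below are made up for illustration):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical data: predictor test scores and later criterion scores
predictor = [52, 61, 70, 75, 80, 88, 93]
criterion = [55, 60, 68, 72, 85, 86, 95]

validity_coefficient = pearson(predictor, criterion)
print(round(validity_coefficient, 3))
```

The closer the coefficient is to 1.0, the stronger the validity evidence; coefficients near 0 mean the test tells us little about the criterion.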

12. Reliability

13. Defined: the consistency with which a test yields the same rank for individuals who take it more than once (Kubiszyn, 2010).

14. Does the test yield the same or similar score rankings consistently?

15. Test-Retest or Stability: a method of estimating reliability that is exactly what its name implies: the same test is administered twice and the two sets of scores are correlated (Kubiszyn, 2010).

16. Alternate Forms or Equivalence: two equivalent forms of the test can be used to obtain an estimate of the reliability of the scores from the tests (Kubiszyn, 2010).

17. Internal Consistency: items should be consistent with one another, and the test should be internally consistent.

18. Kuder-Richardson methods: measure the extent to which items within one form of the test correlate with one another (i.e., what they have in common).

19. Split-half methods: each item is assigned to one half of the test or the other, and scores on the two halves are correlated.
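Because each half is only half as long as the full test, the half-test correlation is usually stepped up with the Spearman-Brown correction. A sketch of an odd/even split, using made-up item scores:

```python
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def split_half_reliability(item_scores):
    """Odd/even split-half reliability with the Spearman-Brown correction.
    item_scores: one list of item scores per examinee."""
    odd = [sum(person[0::2]) for person in item_scores]   # items 1, 3, 5, ...
    even = [sum(person[1::2]) for person in item_scores]  # items 2, 4, 6, ...
    r_half = pearson(odd, even)
    return 2 * r_half / (1 + r_half)  # Spearman-Brown step-up to full length

# Hypothetical 6-item test, 5 students (1 = correct)
data = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
]
print(round(split_half_reliability(data), 3))
```

The corrected coefficient estimates what the reliability would be if the whole test, not just half of it, were used.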

20. Reliability Coefficients

21. Principle 1: Group variability affects the size of the reliability coefficient. Higher coefficients result from heterogeneous groups (Kubiszyn, 2010).
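This principle can be seen numerically: correlating two administrations of a test over a wide range of scores, and then again over a narrow slice of the same scores, yields a smaller coefficient for the restricted (more homogeneous) group. The data below are hypothetical:

```python
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical test-retest scores for a heterogeneous group (wide ability range)
test1 = [40, 45, 55, 60, 70, 75, 85, 90]
test2 = [43, 44, 57, 58, 72, 73, 83, 92]

# The same students, restricted to the middle of the range (homogeneous group)
sub1, sub2 = test1[2:6], test2[2:6]

print(round(pearson(test1, test2), 3))  # heterogeneous group: larger r
print(round(pearson(sub1, sub2), 3))    # homogeneous group: smaller r
```

Restricting the range shrinks the score variance, which in turn shrinks the correlation even though the scores themselves are unchanged.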

22. Principle 2: Scoring reliability limits test score reliability. If tests are scored unreliably, error is introduced that will limit the reliability of the test scores (Kubiszyn, 2010).

23. Principle 3: All other factors being equal, the more items included in a test, the higher the reliability of the scores (Kubiszyn, 2010).
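The effect of test length on reliability is usually quantified with the Spearman-Brown prophecy formula, which predicts the new reliability when a test is lengthened by some factor with comparable items. A minimal sketch:

```python
def prophecy(r_old, length_factor):
    """Spearman-Brown prophecy formula: predicted reliability when a test
    is lengthened (or shortened) by length_factor with comparable items."""
    n = length_factor
    return n * r_old / (1 + (n - 1) * r_old)

# A test with reliability .60, doubled in length with comparable items:
print(round(prophecy(0.60, 2), 3))  # → 0.75
```

Note the diminishing returns: doubling the test again (factor 4 overall) raises the predicted reliability to .857, not to another jump of .15.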

24. Principle 4: Reliability of test scores tends to decrease as tests become too easy or too difficult (Kubiszyn, 2010).