Evaluating Assessments


1. Validity

1.1. Content Validity Evidence

1.1.1. Checks whether the test questions match the content the test is supposed to cover.

1.1.2. Easiest to establish when the test measures achievement.

1.1.3. More difficult to establish when the concept being tested is a personality or aptitude trait.

1.1.4. The problem is that it only tells us whether the test looks valid; it does not tell us whether the reading level of the test is too high or whether an item is poorly constructed.

1.2. Criterion-Related Validity Evidence

1.2.1. Concurrent: "Deals with measures that can be administered at the same time as the measure to be validated" (Kubiszyn & Borich, 2010, p. 330). It is determined by administering both the new test and the established test to a group, then finding the correlation between the two sets of test scores.

1.2.2. Predictive: Refers to the usefulness of test scores for predicting future performance. It "is determined by administering the test to a group of students, then measuring the students on whatever the test is supposed to predict after a period of time has elapsed" (Kubiszyn & Borich, 2010, p. 331).
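Both concurrent and predictive validity coefficients come down to correlating two sets of paired scores. As a minimal sketch (the score lists below are made-up illustration data, not from the text), the Pearson correlation can be computed in plain Python:

```python
def pearson_r(x, y):
    """Pearson correlation between two paired lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Covariance numerator and the two spread (standard-deviation) terms
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sx = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sy = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: five students take both the new test and an
# established test at the same time (concurrent validity evidence).
new_test = [70, 80, 90, 65, 85]
established_test = [68, 82, 88, 70, 84]
validity_coefficient = pearson_r(new_test, established_test)
print(round(validity_coefficient, 2))  # high positive correlation (≈ 0.96)
```

The same function serves predictive validity evidence: the second list would simply be the later criterion measure (e.g., course grades) rather than a test given at the same time.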

1.3. Construct Validity Evidence

1.3.1. Shown when the test's relationship to other information corresponds well with some theory.

1.3.2. Requires the compilation of multiple sources of evidence.

1.3.3. Requires evidence that the test measures what it purports to measure.

1.3.4. Requires evidence that the test does not measure irrelevant attributes.

2. Reliability

2.1. Test-Retest Method

2.1.1. The test is given twice, and the correlation between the first set of scores and the second set of scores is determined (Kubiszyn & Borich, 2010).

2.2. Alternate Forms or Equivalence Method

2.2.1. Two equivalent forms of a test are administered to a group of students, and the correlation between the two sets of scores is determined (Kubiszyn & Borich, 2010).

2.3. Internal Consistency Method

2.3.1. Designed for tests that measure a single basic concept; it assumes "people who get 1 item right will more likely get other similar items right" (Kubiszyn & Borich, 2010).

2.3.2. Split-Half Method: Test items are put into two groups. Each student's total score on each half of the test is determined, and the correlation between the two sets of half-test scores is computed (Kubiszyn & Borich, 2010).

2.3.3. Kuder-Richardson Methods: "These methods measure the extent to which items within one form of the test have as much in common with one another as do the items in that one form with corresponding items in an equivalent form" (Kubiszyn & Borich, 2010, p. 344).
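The internal consistency procedures above can be sketched in plain Python. The 0/1 item matrix is made-up illustration data; `split_half` applies the Spearman-Brown correction (a standard step for stepping the half-test correlation up to full test length, not named explicitly in the text), and `kr20` is the Kuder-Richardson formula 20:

```python
def pearson_r(x, y):
    """Pearson correlation between two paired lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half(item_scores):
    """Split items into odd/even halves, correlate each student's two
    half-test totals, then apply the Spearman-Brown correction."""
    odd = [sum(student[0::2]) for student in item_scores]
    even = [sum(student[1::2]) for student in item_scores]
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)

def kr20(item_scores):
    """Kuder-Richardson formula 20 for dichotomously scored (0/1) items."""
    n_students = len(item_scores)
    k = len(item_scores[0])
    totals = [sum(student) for student in item_scores]
    mean_total = sum(totals) / n_students
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
    # Sum of item-level p*q terms (p = proportion answering the item correctly)
    sum_pq = 0.0
    for i in range(k):
        p = sum(student[i] for student in item_scores) / n_students
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Hypothetical 0/1 item scores: five students (rows), four items (columns).
scores = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(split_half(scores), 2))  # → 0.88
print(round(kr20(scores), 2))        # → 0.8
```

Both coefficients land near 1.0 for this matrix because the students' item responses form a consistent pattern: those who score high on one subset of items also score high on the others.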

3. References

3.1. Kubiszyn, T., & Borich, G. (2010). Educational testing & measurement: Classroom application and practice (9th ed.). Hoboken, NJ: John Wiley & Sons.