Validity and Reliability


1. Content: "The content validity evidence for a test is established by inspecting test questions to see whether they correspond to what the user decides should be covered by the test" (Kubiszyn & Borich, 2013, p. 327). The assessor should ensure that the questions on the test relate to what was studied and to the learning outcomes the students are expected to demonstrate.


3. References: Kubiszyn, T., & Borich, G. (2013). Educational testing & measurement: Classroom application and practice (10th ed.). Hoboken, NJ: John Wiley & Sons.

4. VALIDITY: Does the test measure what it is supposed to measure? (Kubiszyn & Borich, 2013)

5. "In criterion-related validity evidence, scores from a test are correlated with an external criterion" (Kubiszyn & Borich, 2013, p. 330). The external criterion should be pertinent to the original test; e.g., comparing the scores from a month's worth of math quizzes to a final math test at the end of the month. A student who did well on the weekly quizzes should also do well on the final math test.

5.1. "Concurrent criterion-related validity evidence deals with measures that can be administered at the same time as the measure to be validated" (Kubiszyn & Borich, 2013, p. 327).

5.2. Predictive validity evidence refers to how well the test predicts some future behavior of the examinees (Kubiszyn & Borich, 2013, p. 328).
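A criterion-related validity coefficient is simply the correlation between the test scores and the external criterion. As a rough sketch of the quiz-versus-final-exam example above (all scores here are hypothetical, and the `pearson` helper is our own, not from the textbook):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: each student's average weekly quiz score (the test
# being validated) paired with their final math test score (the criterion).
quiz_avgs   = [72, 85, 90, 65, 78, 95, 60, 88]
final_exams = [70, 88, 93, 60, 75, 97, 58, 85]

validity_coefficient = pearson(quiz_avgs, final_exams)
print(round(validity_coefficient, 3))
```

A coefficient near 1.0, as in this made-up data, would support the claim that doing well on the quizzes reflects positively on the final test.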

6. RELIABILITY: The reliability of a test refers to the consistency with which it yields the same rank for individuals who take the test more than once. In other words, a test (or any measuring instrument) is reliable if it consistently yields the same, or nearly the same, ranks over repeated administrations during which we would not expect the trait being measured to have changed (Kubiszyn & Borich, 2013).


7. Test–Retest or Stability: Test–retest is a method of estimating reliability that is exactly what its name implies. The test is given twice, and the correlation between the first set of scores and the second set of scores is determined. For example, students are given the same exam twice, and the scores earned on the two administrations are correlated. The issue with this method is that memory effects make the estimate uncertain: some students may remember the test well, whereas others may have forgotten some or most of the information, so a student who scored well on the first administration may do poorly on the second, and vice versa.

8. Alternate Forms or Equivalence: If there are two equivalent forms of a test, these forms can be used to obtain an estimate of the reliability of the scores from the test. Both forms are administered to a group of students, and the correlation between the two sets of scores is determined. This estimate eliminates the problems of memory and practice involved in test–retest estimates (Kubiszyn & Borich, 2013). To use this method of estimating reliability, two equivalent forms of the test must be available, and they must be administered under conditions as nearly equivalent as possible. The most critical problem with this method of estimating reliability is that it takes a great deal of effort to develop one good test, let alone two (Kubiszyn & Borich, 2013).

8.1. Kuder–Richardson Methods: Another way of estimating the internal consistency of a test is through one of the Kuder–Richardson methods. These methods measure the extent to which items within one form of the test have as much in common with one another as do the items in that one form with corresponding items in an equivalent form (Kubiszyn & Borich, 2013).
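The best-known of these methods, KR-20, can be computed directly from right/wrong item scores. A minimal sketch, using invented 0/1 response data (the function and data are illustrative, not from the textbook):

```python
def kr20(item_responses):
    """KR-20 internal-consistency estimate for dichotomous (0/1) items.
    item_responses: one list per student, one 0/1 entry per item."""
    n_students = len(item_responses)
    k = len(item_responses[0])                       # number of items
    totals = [sum(student) for student in item_responses]
    mean_total = sum(totals) / n_students
    # population variance of the total scores
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
    # sum of p_i * q_i, where p_i is the proportion answering item i correctly
    pq_sum = 0.0
    for i in range(k):
        p = sum(student[i] for student in item_responses) / n_students
        pq_sum += p * (1 - p)
    return (k / (k - 1)) * (1 - pq_sum / var_total)

# Hypothetical responses: 5 students x 4 items, 1 = correct, 0 = incorrect
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
]
reliability = kr20(responses)
print(round(reliability, 3))
```

Values closer to 1.0 indicate that the items behave consistently with one another; this tiny sample yields a moderate estimate.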

8.2. Split-Half Methods: To find the split-half (or odd–even) reliability, each item is assigned to one half or the other. Then, the total score for each student on each half is determined, and the correlation between the two total scores for both halves is computed. Essentially, a single test is used to make two shorter alternative forms (Kubiszyn & Borich, 2013).
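The split-half procedure described above can be sketched as follows. Because each half is only half as long as the real test, the half-test correlation is conventionally stepped up with the Spearman–Brown formula; the helper functions and the 0/1 data here are our own illustration, not from the textbook:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def split_half_reliability(item_responses):
    """Odd-even split-half reliability, stepped up to full test length
    with the Spearman-Brown formula."""
    odd_totals  = [sum(s[0::2]) for s in item_responses]  # items 1, 3, 5, ...
    even_totals = [sum(s[1::2]) for s in item_responses]  # items 2, 4, 6, ...
    r_half = pearson(odd_totals, even_totals)
    return 2 * r_half / (1 + r_half)

# Hypothetical responses: 6 students x 6 items, 1 = correct, 0 = incorrect
responses = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
rel = split_half_reliability(responses)
print(round(rel, 3))
```

Since the two halves are drawn from one administration, this method avoids the memory and practice problems of test–retest and the cost of building a second form.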

9. Internal Consistency: If the test in question is designed to measure a single basic concept, it is reasonable to assume that people who get one item right will be more likely to get other, similar items right (Kubiszyn & Borich, 2013).