# Validity & Reliability

## 1. Important to Learning & Assessment

### 1.1. Validity

1.1.1. "Does the test measure what it is supposed to measure?" (Kubiszyn & Borich, 2010, p. 329)

### 1.2. Reliability

1.2.1. "Does the test yield the same or similar score rankings (all other factors being equal) consistently?" (2010, p. 329)

### 1.3. Accuracy

1.3.1. "Does the test score fairly closely approximate an individual's true level of ability, skill, or aptitude?" (2010, p. 329)

## 5. Internal Consistency

### 5.3. Split-halves (or odd-even) reliability method

5.3.1. Divide a test into halves and correlate the halves with one another. Because these correlations are based on half tests, the obtained correlations underestimate the reliability of the whole test (2010, p. 349).

5.3.2. The Spearman–Brown prophecy formula is used to correct these estimates to what they would be if they were based on the whole test (2010, p. 349).
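The split-halves procedure and the Spearman–Brown correction can be sketched in Python. The item scores below are invented for illustration; odd- and even-numbered items form the two halves:

```python
# Sketch of the split-halves (odd-even) reliability method with the
# Spearman-Brown correction. Item scores are illustrative, not real data.

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: one row per examinee, one 0/1 column per item."""
    odd = [sum(row[0::2]) for row in item_scores]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in item_scores]  # items 2, 4, 6, ...
    r_half = pearson(odd, even)                     # half-test correlation
    # Spearman-Brown prophecy formula: estimate for the full-length test
    r_full = (2 * r_half) / (1 + r_half)
    return r_half, r_full

scores = [
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1],
]
r_half, r_full = split_half_reliability(scores)
print(round(r_half, 3), round(r_full, 3))  # prints 0.559 0.717
```

Note how the corrected estimate exceeds the raw half-test correlation, which is exactly the underestimation the prophecy formula compensates for.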

### 5.4. Kuder–Richardson methods

5.4.1. Measures the extent to which items within one form of the test have as much in common with one another as do the items in that one form with corresponding items in an equivalent form (2010, p. 344).

5.4.2. The strength of this estimate of reliability is dependent upon the extent to which the whole test “represents a single, fairly consistent measure of a concept” (2010, p. 344).
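The best known of these formulas, KR-20 (which applies to right/wrong-scored items), can be sketched as follows; the data are again invented for illustration:

```python
# Sketch of Kuder-Richardson formula 20 (KR-20) for a test of
# dichotomously scored (0/1) items. Scores are illustrative only.

def kr20(item_scores):
    """item_scores: one row per examinee; each item scored 0 or 1."""
    k = len(item_scores[0])            # number of items
    n = len(item_scores)               # number of examinees
    totals = [sum(row) for row in item_scores]
    mean = sum(totals) / n
    var_total = sum((t - mean) ** 2 for t in totals) / n
    # p = proportion answering the item correctly, q = 1 - p
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_scores) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

scores = [
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1],
]
reliability = kr20(scores)
print(round(reliability, 3))  # prints 0.471
```

The estimate rises as items covary more with one another, which is why it is strongest when the test measures a single, fairly consistent concept.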

## 6. If it can be demonstrated that a test measures what it is supposed to measure, then it has validity evidence.

### 6.1. Content Validity Evidence

6.1.1. Simplest

6.1.2. In the classroom testing context, it answers the question "Does the test measure the instruction objectives?" (Kubiszyn & Borich, 2010, p. 330)

6.1.3. Does not yield numerical evidence; it yields a logical judgment.

6.1.4. The criterion-related approaches that follow assume that some criterion exists external to the test that can be used to anchor or validate the test (2010, p. 332).

### 6.2. Criterion-Related Validity Evidence

6.2.1. Concurrent Validity Evidence

6.2.1.1. Deals with criterion measures that can be administered at the same time as the measure being validated (2010, p. 330).

6.2.2. Predictive Validity Evidence

6.2.2.1. Refers to how well the test predicts some future behavior of the examinees (2010, p. 331).

6.2.2.1.1. Yields numerical indices of validity
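The numerical index is typically a validity coefficient: the correlation between test scores and a later criterion measure. A minimal sketch, using hypothetical admission-test scores and grade-point averages earned a year later:

```python
# Sketch: predictive validity evidence as the correlation between test
# scores and a later criterion. All scores below are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

admission_test = [52, 61, 70, 74, 85, 90]        # scores at admission
later_gpa = [2.1, 2.6, 2.8, 3.1, 3.4, 3.8]       # criterion a year later

validity_coefficient = pearson(admission_test, later_gpa)
```

The closer the coefficient is to 1.0, the better the test predicts the future behavior; a coefficient near zero would mean the test has little predictive value for that criterion.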

### 6.3. Construct Validity Evidence

6.3.1. A test has construct validity evidence if its relationship to other data corresponds soundly with some theory (a logical explanation that accounts for the interrelationships among a set of variables).

6.3.2. It differs from concurrent validity evidence because no recognized second measure of the trait in question is available, and from predictive validity evidence because no measure of future behavior is available.