# Validity and Reliability

## 3. Ways of Estimating Reliability

### 3.1. Test-retest

3.1.1. The test is given twice to the same group, and the correlation between the first set of scores and the second set of scores is determined. This estimate can be distorted because students may remember items from the first administration; also, the longer the interval between the test and the retest, the lower the reliability coefficient tends to be.

### 3.2. Alternate Forms

3.2.1. Two forms of a test are given to the same students, and the correlation between the two sets of scores is determined. The forms must be equivalent and administered under the same conditions.

### 3.3. Internal Consistency

3.3.1. Test items should be correlated with each other.

3.3.2. Split-half methods

3.3.2.1. Dividing the test into two halves and determining the correlation between them.

3.3.2.2. This should be used when items at different levels of difficulty are spread throughout the test.

3.3.2.3. If the test consists of questions ranging from easiest to most difficult, then the test should be divided by placing all odd-numbered items into one half and all even-numbered items into the other half.

3.3.3. Kuder-Richardson methods

3.3.3.1. Used to determine the extent to which the entire test represents a single consistent measure of a concept.
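For dichotomously scored items (1 = correct, 0 = incorrect), the Kuder-Richardson formula 20 (KR-20) can be computed directly from an item-response matrix: KR-20 = (k / (k - 1)) × (1 - Σpq / σ²), where k is the number of items, p is the proportion answering each item correctly, q = 1 - p, and σ² is the variance of total scores. The sketch below uses invented data.

```python
from statistics import pvariance

# Invented dichotomous responses: one row per student, one column per item.
item_scores = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 0, 1, 0],
]

n = len(item_scores)       # number of students
k = len(item_scores[0])    # number of items
totals = [sum(row) for row in item_scores]  # each student's total score

# For each item: p = proportion answering correctly, q = 1 - p.
pq_sum = 0.0
for j in range(k):
    p = sum(row[j] for row in item_scores) / n
    pq_sum += p * (1 - p)

# KR-20 = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores)
kr20 = (k / (k - 1)) * (1 - pq_sum / pvariance(totals))
```

Unlike split-half, KR-20 does not depend on how the test is divided; it is equivalent to the average of all possible split-half coefficients for dichotomous items.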

3.3.4. Problems with internal consistency

3.3.4.1. They sometimes yield inflated reliability estimates, particularly for speeded tests.

## 6. Types of Validity

### 6.2. Criterion-related validity

6.2.1. Concurrent criterion related validity

6.2.1.1. A new test and an established test are given to the same students, and the correlation between the two sets of scores is determined.

6.2.2. Predictive Validity

6.2.2.1. How well the test predicts future behavior of the test takers.

### 6.3. Construct Validity

6.3.1. The degree to which test results correspond with some underlying theory; any evidence that scores behave as the theoretical construct would predict supports construct validity.