# Education

Methods of establishing validity and reliability in educational testing, and how to interpret the resulting coefficients.

## 1. Validity

### 1.1. A valid test measures what it is intended to measure

1.1.1. Content Validity Evidence

1.1.1.1. items match learning objectives

1.1.1.2. Non-numerical

1.1.1.3. Matches/fits instructional objectives

1.1.2. Criterion-Related

1.1.2.1. Yields numerical value

1.1.2.1.1. Proximate results

1.1.3. Concurrent Criterion

1.1.3.1. Numerical value

1.1.3.1.1. related validity evidence

1.1.3.1.2. scores from a test are correlated with an external criterion
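
Concurrent criterion-related evidence can be sketched numerically: the Pearson correlation between scores on the new test and scores on an established external criterion serves as the validity coefficient. The scores below are hypothetical, invented for illustration.

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

new_test  = [78, 85, 62, 90, 71, 88, 65, 80]   # hypothetical new-test scores
criterion = [74, 88, 60, 93, 70, 85, 68, 77]   # hypothetical established-criterion scores
validity = pearson_r(new_test, criterion)
print(round(validity, 3))
```

A coefficient near 1.0 would indicate strong concurrent evidence; a coefficient near 0 would indicate the test and criterion measure unrelated things.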

1.1.4. Predictive Validity Evidence

1.1.4.1. test prediction

1.1.4.1.1. Behavior

1.1.4.1.2. Aptitude tests

1.1.4.2. 2 sets of scores are correlated

1.1.4.2.1. determines the worth of a test

1.1.5. Construct Validity

1.1.5.1. Theory or logical explanation

1.1.5.1.1. A rationale accounts for the interrelationships among a set of variables

1.1.5.2. Correspond

1.1.5.2.1. expectations of results

1.1.5.2.2. construct validity of test

1.1.5.2.3. reflects/demonstrates relationships

1.1.5.3. Anchor: establishing the measure in observable behavior

### 1.2. Interpreting Validity Coefficients

1.2.1. Content Validity

1.2.1.1. Comparing test items with the learning objectives to determine whether the items match or measure objectives

1.2.2. Concurrent/Predictive

1.2.2.1. Correlate

1.2.2.2. Scores are correlated with those of a well-established test measuring the same behavior

1.2.3. Principle 1

1.2.3.1. Concurrent validity coefficients tend to be higher than predictive validity coefficients

1.2.4. Principle 2

1.2.4.1. group variability affects validity coefficient

1.2.4.2. Heterogeneous groups yield higher coefficients

1.2.4.3. Homogeneous groups (restricted range) yield lower coefficients
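
Principle 2 can be demonstrated with a small sketch: the same underlying relationship produces a lower coefficient when computed on a homogeneous subgroup (restricted range) than on the full heterogeneous group. All scores below are hypothetical.

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

aptitude  = [50, 55, 60, 65, 70, 75, 80, 85, 90, 95]   # heterogeneous group
criterion = [53, 51, 62, 62, 74, 73, 83, 81, 92, 92]   # hypothetical criterion scores

r_full = pearson_r(aptitude, criterion)
# Restrict to a homogeneous middle band of the same group
r_restricted = pearson_r(aptitude[4:8], criterion[4:8])
print(round(r_full, 3), round(r_restricted, 3))
```

The restricted-range coefficient comes out noticeably lower even though the same scatter of scores around the trend is present in both groups.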

1.2.5. Principle 3

1.2.5.1. Relevance and Reliability

1.2.5.2. size of resulting coefficient

1.2.5.3. Dependent on the reliability of both the predictor AND the criterion measure
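
Principle 3 follows from the classical correction-for-attenuation bound: the observed validity coefficient cannot exceed the square root of the product of the predictor's and criterion's reliabilities. The reliability values below are hypothetical.

```python
def max_validity(rel_predictor, rel_criterion):
    """Upper bound on the observed validity coefficient,
    given the reliabilities of the predictor and the criterion."""
    return (rel_predictor * rel_criterion) ** 0.5

# Hypothetical reliabilities: predictor 0.81, criterion 0.64
ceiling = max_validity(0.81, 0.64)
print(round(ceiling, 3))   # about 0.72
```

So even a perfectly relevant predictor could not show a validity coefficient above about 0.72 with these measures, which is why both relevance and reliability matter.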

## 2. Reliability

### 2.1. Test-Retest or Stability

2.1.1. Method of estimating reliability

2.1.2. Memory or experience may carry over into the 2nd administration

2.1.2.1. The interval between tests should be considered

2.1.2.2. The longer the interval, the lower the reliability coefficient

### 2.2. Alternate Forms/Equivalence

2.2.1. Two equivalent forms of a test

2.2.2. used to estimate reliability of scores

2.2.2.1. unreliable test

2.2.2.2. Critical problem: it takes a great deal of work to construct one good test, let alone two

### 2.3. Internal Consistency

2.3.1. Split-halves

2.3.1.1. splitting test into 2 parts

2.3.2. Correlation between items

2.3.2.1. internally consistent

2.3.2.2. Reliability of the scores can be estimated by the internal consistency method

### 2.4. Split-Half Method

2.4.1. Score each half separately, then correlate the two sets of half-test scores

2.4.2. Odd-Even tests

2.4.2.1. Corrected/adjusted upward to reflect the reliability the test would have if it were twice as long

2.4.2.2. Most frequently used correction: the Spearman-Brown prophecy formula
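
The Spearman-Brown step-up used in the split-half method can be sketched directly: the correlation between the two half-tests is adjusted upward to estimate the reliability of the full-length test. The half-test correlation below is hypothetical.

```python
def spearman_brown(r_half):
    """Estimated full-test reliability from a split-half (e.g., odd-even)
    correlation, via the Spearman-Brown prophecy formula."""
    return 2 * r_half / (1 + r_half)

r_half = 0.60                      # hypothetical odd-even half correlation
r_full = spearman_brown(r_half)
print(round(r_full, 3))            # steps 0.60 up to 0.75
```

The correction is needed because each half is only half as long as the actual test, and shorter tests are less reliable.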

### 2.5. Kuder-Richardson/Coefficient Alpha Methods

2.5.1. Measures how one part of the test compares with another, as if comparing one form with another

2.5.2. Multiple scored tests

2.5.2.1. Computation is best left to a computer program or the test publisher

2.5.2.2. Speeded tests (e.g., typing tests) or power tests (e.g., essays or word problems)
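
Although the computation is usually left to software, a minimal sketch of Kuder-Richardson formula 20 (KR-20) shows what that software does; coefficient alpha reduces to KR-20 when items are scored dichotomously. The response matrix below is hypothetical.

```python
def kr20(score_matrix):
    """KR-20 reliability for rows = students, columns = items scored 0 or 1."""
    n_students = len(score_matrix)
    n_items = len(score_matrix[0])
    totals = [sum(row) for row in score_matrix]
    mean_total = sum(totals) / n_students
    # population variance of the total scores
    var_total = sum((t - mean_total) ** 2 for t in totals) / n_students
    sum_pq = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in score_matrix) / n_students  # proportion correct
        sum_pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - sum_pq / var_total)

scores = [          # hypothetical 0/1 item responses, one row per student
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(kr20(scores), 3))
```

Each item's pass proportion feeds the item-variance term, and the total-score variance does the rest, which is why the formula depends only on a single administration of one form.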

## 3. Interpreting Reliability Coefficients

### 3.1. Principle 1

3.1.1. Group variability affects reliability coefficient

### 3.2. Principle 2

3.2.1. Scoring reliability limits test score reliability

### 3.3. Principle 3

3.3.1. The more items included, the higher the reliability
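
Principle 3 can be illustrated with the general form of the Spearman-Brown formula, which projects reliability when a test is lengthened n times with comparable items. The starting reliability below is hypothetical.

```python
def spearman_brown_n(r, n):
    """Projected reliability when a test of reliability r is
    lengthened to n times its original length with comparable items."""
    return n * r / (1 + (n - 1) * r)

r = 0.60                                    # hypothetical current reliability
for n in (1, 2, 3, 4):
    print(n, round(spearman_brown_n(r, n), 3))
```

Reliability rises monotonically with length (0.60, 0.75, 0.818, 0.857, ...) but with diminishing returns, approaching 1.0 only in the limit.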

### 3.4. Principle 4

3.4.1. Reliability decreases if the test is too easy or too difficult