Evaluating Tests


1. Validity Evidence

1.1. Content Validity Evidence

1.1.1. Inspect test questions for coverage

1.1.2. Minimum requirement

1.1.2.1. Might look valid, but possibly poorly constructed

1.1.3. Fits instructional objectives

1.1.4. Do test items match and measure objectives?

1.1.4.1. Match the items with objectives.

1.2. Criterion Related Validity Evidence

1.2.1. Concurrent criterion related validity evidence

1.2.1.1. Numeric value = correlation coefficient

1.2.1.2. Administering the new and established tests to the same group to find the correlation

1.2.1.2.1. High correlation = concurrent validity evidence has been established

1.2.1.3. How well do the new and old tests compare?

1.2.1.3.1. Correlate new test with an accepted criterion
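
The correlation step described above can be sketched in Python; the score lists are hypothetical, and the Pearson formula is the usual choice for this kind of coefficient.

```python
# Concurrent validity sketch: correlate scores on a new test with scores
# on an established test taken by the same (hypothetical) group of students.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

new_test = [78, 85, 62, 90, 71]     # hypothetical scores on the new test
established = [75, 88, 60, 93, 70]  # same students, established test
r = pearson_r(new_test, established)
print(round(r, 3))  # a high r supports concurrent validity
```

The same computation underlies test-retest and alternate-forms reliability, where the two lists are the two administrations rather than two different tests.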

1.2.2. Predictive Validity Evidence

1.2.2.1. Predicts how well test takers will do

1.2.2.1.1. Correlate scores from the current test with a measure taken in the future.

1.2.2.2. The two variables are the students' test scores and a later measurement of the same students

1.2.2.2.1. predictive validity coefficient

1.2.2.3. numerical evidence

1.2.2.4. measure of future behavior

1.3. Construct Validity Evidence

1.3.1. Information that tells you whether results from the test are what you expected

2. Reliability of a Test

2.1. Test-Retest

2.1.1. AKA stability method

2.1.2. test is given twice

2.1.2.1. compare both tests' scores

2.1.3. Problem: scores may differ due to memory or experience gained during the interval between administrations

2.2. Alternate Forms

2.2.1. AKA equivalence method

2.2.2. two equivalent forms are given to the same group

2.2.2.1. both tests scores are correlated

2.2.3. Problem: hard to create two good tests

2.3. Internal Consistency

2.3.1. Split half methods

2.3.1.1. AKA odd-even

2.3.1.2. split one test into two sections

2.3.1.3. Must use a test with a similar concept throughout

2.3.1.4. Problem: inflates reliability estimates for speeded tests
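
The odd-even split can be sketched as follows; the item responses are hypothetical, and the half-test correlation is stepped up with the Spearman-Brown correction, the standard adjustment for halving test length.

```python
# Split-half (odd-even) reliability sketch with the Spearman-Brown correction.
# Each inner list holds one student's item scores (1 = correct, 0 = incorrect).

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(responses):
    # Score the odd-numbered and even-numbered items separately per student.
    odd = [sum(row[0::2]) for row in responses]
    even = [sum(row[1::2]) for row in responses]
    r_half = pearson_r(odd, even)
    # Spearman-Brown: estimate full-length reliability from the half-test r.
    return 2 * r_half / (1 + r_half)

responses = [  # hypothetical 6-item test, 5 students
    [1, 1, 1, 1, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 0],
]
print(round(split_half_reliability(responses), 3))
```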

2.3.2. Item-total correlations

2.3.2.1. Kuder-Richardson (KR) procedure

2.3.2.2. Look at the percentage of students passing each item on the test

2.3.2.3. Must use a test with a similar concept throughout

2.3.2.4. Problem: inflates reliability estimates for speeded tests
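
The Kuder-Richardson procedure referenced above can be sketched as KR-20, one common form of the formula; the response matrix is hypothetical, and items must be scored dichotomously (right/wrong) for KR-20 to apply.

```python
# KR-20 sketch: internal consistency from the proportion passing each item.
# Rows are students, columns are items (1 = correct, 0 = incorrect).

def kr20(responses):
    n_students = len(responses)
    k = len(responses[0])  # number of items
    # p = proportion of students passing each item, q = 1 - p
    pq_sum = 0.0
    for item in range(k):
        p = sum(row[item] for row in responses) / n_students
        pq_sum += p * (1 - p)
    # Population variance of the students' total scores
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n_students
    var = sum((t - mean) ** 2 for t in totals) / n_students
    return (k / (k - 1)) * (1 - pq_sum / var)

responses = [  # hypothetical 4-item test, 5 students
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(kr20(responses), 3))
```

Unlike the split-half method, KR-20 does not depend on how the items are divided, which is why it is grouped under item-total correlations.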