Test Evaluation by Shawn Tran


1. Validity: Does the test measure what it is supposed to measure?

1.1. Content Validity Evidence

1.1.1. Analyze the test items to confirm they assess the content and objectives the test is designed to measure

1.2. Criterion-Related Validity Evidence

1.2.1. Concurrent -- a new test and an established test are given to a group of students. The resulting scores are compared to determine the correlation between the two tests.

1.2.1.1. High correlation = new test has concurrent validity evidence

1.2.1.2. Low correlation = new test does not have concurrent validity evidence

1.2.2. Predictive -- how well the test predicts the future behavior of examinees

1.2.2.1. Example: relationship between a student's SAT score and college GPA

1.3. Construct Validity Evidence

1.3.1. Test scores relate to other information in ways that a theory predicts

1.3.1.1. Example: a test of multiplication facts should see improved test scores when the student is given instruction in multiplication, not geometry

2. Reliability: Does the test yield the same score consistently across multiple trials?

2.1. Alternate Forms or Equivalence -- two equivalent forms of the test are administered and the results are compared

2.2. Internal Consistency -- a person should answer similar test items consistently

2.2.1. Split Halves: divide a test in 2 and administer each part separately, then compare results

2.2.1.1. Only use if questions are written in random order, not easiest to hardest

2.2.2. Odd-Even: student does odd problems during one testing period and even problems during a separate testing period

2.2.2.1. Use when test questions are written from easiest to hardest in the test layout

2.3. Test-Retest or Stability -- test is given twice and the scores are compared to see if the results are similar

2.3.1. Must consider the elapsed time between administrations of the test so that memory of test items does not inflate the score

3. Accuracy: Does the test score fall fairly close to the student's true score?

3.1. Standard Error of Measurement

3.1.1. Standard deviation of the error scores of a test
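The standard error of measurement can be estimated from a test's score standard deviation and its reliability coefficient as SEM = SD × √(1 − reliability), and ±1 SEM gives roughly a 68% band around an observed score. A minimal sketch with hypothetical values:

```python
# Standard error of measurement: SEM = SD * sqrt(1 - reliability).
# The values below are hypothetical, chosen only to illustrate the formula.
import math

sd = 10.0           # standard deviation of the test scores
reliability = 0.91  # e.g., a test-retest or split-half estimate

sem = sd * math.sqrt(1 - reliability)
print(f"SEM = {sem:.1f}")
# An observed score of 80 would likely place the true score
# within 80 +/- 3.0 about 68% of the time.
```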

3.2. Sources of error that can affect accuracy

3.2.1. Error Within Test Takers -- fatigue, illness, seeing another person's answers, etc.

3.2.2. Error Within the Test -- poorly written questions, test questions with clues to the answers, ambiguous questions, etc.

3.2.3. Error in Test Administration -- misreading the amount of time allowed, instructions, attitudes, etc.

3.2.4. Error in Scoring -- incorrect scoring keys, extraneous marks on scantron sheet, etc.

4. Reference

4.1. Kubiszyn, T., & Borich, G. (2013). Educational testing and measurement: Classroom application and practice (10th ed.). Hoboken, NJ: John Wiley & Sons.