Reliability: consistency with which test scores indicate a similar ranking of students

1. Test-Retest or Stability

1.1. "The test is given twice and the correlation between the first set of scores and the second set of scores is determined," (Kubiszyn & Borich, 2010, p. 341).

1.2. Similar or strongly correlated scores indicate that the test is a reliable means of assessment for that subject (a computation sketch follows at the end of this branch).

1.3. Problem: memory of the first test can inflate performance on the second, creating a correlation where there may not really be one.
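
A minimal sketch of the correlation step, assuming the scores below are made-up illustration data (not from Kubiszyn & Borich): test-retest reliability is estimated as the Pearson correlation between the two administrations of the same test.

    from statistics import mean, stdev

    def pearson_r(x, y):
        # Pearson correlation between two equal-length lists of scores.
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
        return cov / (stdev(x) * stdev(y))

    first_admin  = [78, 85, 62, 90, 71, 88, 95, 67]   # hypothetical scores, administration 1
    second_admin = [80, 83, 65, 92, 70, 85, 97, 64]   # same students, administration 2
    print(round(pearson_r(first_admin, second_admin), 2))  # a high r suggests a stable ranking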

2. Alternate Forms or Equivalence

2.1. Useful when two different yet equivalent test forms are available; the forms must also be administered under circumstances that are as equivalent as possible (see the sketch after this branch).

2.2. "Both forms are administered to a group of students, and the correlation between the two sets of socres are determined," (Kubiszyn & Borich, 2010, p. 343).

2.3. Minimizes test memory problem of test-retest method
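
The same correlation logic applies here; a short continuation of the earlier sketch (reusing the hypothetical pearson_r helper and made-up data), where each student takes Form A and the equivalent Form B under comparable conditions.

    form_a = [72, 88, 64, 91, 79, 85]   # hypothetical Form A scores
    form_b = [70, 90, 61, 93, 77, 88]   # same students on the equivalent Form B
    equivalence_reliability = pearson_r(form_a, form_b)   # helper from the test-retest sketch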

3. Internal Consistency: appropriate when the test items all measure a single concept, so that agreement among items indicates reliable performance

3.1. Split-half methods

3.1.1. Estimates alternate-forms reliability from a single administration by treating the two halves as equivalent forms

3.1.2. Test items are split into two halves, each student is scored on each half, and the correlation between the half scores is computed the same way as when there are two tests (see the sketch after this list).

3.1.3. Avoids the memory effect and the need for multiple test administrations
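
A split-half sketch under the same assumptions (made-up 0/1 item scores, pearson_r from the earlier sketch); the Spearman-Brown step-up at the end is a standard adjustment for half-length correlations and is not spelled out in the map itself.

    item_scores = [                 # hypothetical 0/1 item scores, one row per student
        [1, 1, 0, 1, 1, 0, 1, 1],
        [0, 1, 0, 0, 1, 0, 1, 0],
        [1, 1, 1, 1, 1, 1, 0, 1],
        [0, 0, 0, 1, 0, 0, 1, 0],
        [1, 0, 1, 1, 1, 1, 1, 1],
    ]
    odd_halves  = [sum(row[0::2]) for row in item_scores]   # score on odd-numbered items
    even_halves = [sum(row[1::2]) for row in item_scores]   # score on even-numbered items
    half_r = pearson_r(odd_halves, even_halves)
    split_half_reliability = 2 * half_r / (1 + half_r)      # Spearman-Brown step-up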

3.2. Kuder-Richardson methods

3.2.1. Estimates reliability from a single test form, without administering a second, equivalent form

3.2.2. "Measure the extent to which items within one form of the test have as much in common with one another as do the items in that one form with corresponding items in an equivalent form," (Kubiszyn & Borich, 2010, p. 344).

3.2.3. Item-to-item comparisons provide the data for computing the reliability coefficient (a KR-20 sketch follows).
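
A Kuder-Richardson sketch using the common KR-20 formula for dichotomously scored (0/1) items; the data matrix is hypothetical and the function name is an illustration, not the textbook's notation.

    from statistics import pvariance

    def kr20(answers):
        # answers: one row of 0/1 item scores per student.
        k = len(answers[0])                        # number of items
        totals = [sum(row) for row in answers]     # each student's total score
        var_total = pvariance(totals)              # variance of the total scores
        pq = 0.0
        for i in range(k):
            p = sum(row[i] for row in answers) / len(answers)   # proportion correct on item i
            pq += p * (1 - p)
        return (k / (k - 1)) * (1 - pq / var_total)

    answers = [                  # hypothetical 0/1 item scores, one row per student
        [1, 1, 0, 1, 1, 0],
        [0, 1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0, 0],
        [1, 0, 1, 1, 1, 1],
    ]
    print(round(kr20(answers), 2))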