Learning & Assessment


1. Reliability

1.1. "Group variability affects test score reliability. As group variability increases, reliability goes up." (Kubiszyn & Borich, 2010, p. 350)

1.2. "Scoring reliability limits test score reliability. As scoring reliability goes down, so does the test's reliability." (Kubiszyn & Borich, 2010, p. 350)

1.3. "Test length affects test score reliability. As test length increases, the test's reliability tends to go up." (Kubiszyn & Borich, 2010, p. 350)

1.4. "Item difficulty affects test score reliability. As items become very easy or very hard, the test's reliability goes down." (Kubiszyn & Borich, 2010, p. 350)
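
The Spearman-Brown prophecy formula makes point 1.3 concrete: it estimates how reliability changes when a test is lengthened by a factor k. A minimal sketch in Python (the formula is standard; the sample numbers are made up):

```python
# Spearman-Brown prophecy formula: estimated reliability of a test
# lengthened by a factor k, given its current reliability r.
def spearman_brown(r, k):
    return (k * r) / (1 + (k - 1) * r)

# Doubling a test whose reliability is .60 (hypothetical numbers):
print(round(spearman_brown(0.60, 2), 2))  # 0.75 -- longer test, higher reliability
```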

2. Test-Retest or Stability

2.1. Administer the same test twice to the same group of students.

2.2. A method of estimating reliability from the stability of scores over time.
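
In practice, the test-retest estimate is the Pearson correlation between the two sets of scores. A minimal sketch, using made-up scores for five students:

```python
# Test-retest reliability: correlate scores from two administrations
# of the same test to the same group (scores are hypothetical).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

first  = [70, 82, 90, 65, 88]   # first administration
second = [72, 80, 93, 63, 85]   # second administration, some time later
print(round(pearson_r(first, second), 2))  # 0.97
```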

3. Alternate Forms or Equivalence

3.1. Administer two alternate or equivalent forms of the test.

3.2. The time between the two administrations is kept as short as possible.

4. Internal Consistency

4.1. Split-half methods

4.2. Kuder-Richardson methods

4.3. "Internal consistency estimates tend to yield inflated reliability estimates for speeded test." (Kubiszyn & Borich, 2010, p. 349)
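
The Kuder-Richardson 20 (KR-20) estimate can be computed directly from a table of right/wrong item scores. A minimal sketch, with a made-up item matrix:

```python
# KR-20: internal-consistency estimate for dichotomously scored items.
# Rows are examinees, columns are 0/1 item scores (data are hypothetical).
def kr20(item_matrix):
    n = len(item_matrix)        # number of examinees
    k = len(item_matrix[0])     # number of items
    totals = [sum(row) for row in item_matrix]
    mean = sum(totals) / n
    variance = sum((t - mean) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in item_matrix) / n  # proportion answering item j correctly
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / variance)

# Perfectly consistent items yield a KR-20 of 1.0:
print(kr20([[1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 0, 0]]))  # 1.0
```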

5. Content Validity

5.1. "Do test items match and measure objectives?" (Kubiszyn & Borich, 2010, p. 335)

5.2. To answer this question, match each test item to the objective it is intended to measure.

6. Validity

6.1. "The adequacy of validity evidence depends on both the strength of the validity coefficient and the purpose the test is being used for." (Kubiszyn & Borich, 2010, p. 340)

6.2. "Group variability affects the strength of the validity coefficient." (Kubiszyn & Borich, 2010, p. 340)

6.3. "Validity coefficients should be considered in terms of the relevance and reliability of the criterion or standard." (Kubiszyn & Borich, 2010, p. 340)

7. Criterion-Related Validity

7.1. "How well does performance on the new test match performance on an established test?" (Kubiszyn & Borich, 2010, p. 335)

7.2. "Correlate new test with an accepted criterion, for example, a well-established test measuring the same behavior." (Kubiszyn & Borich, 2010, p. 335)

7.3. Predictive criterion-related validity: "Can the test predict subsequent performance?" (Kubiszyn & Borich, 2010, p. 335)

8. Construct Validity

8.1. "A test has construct validity evidence if its relationship to other information corresponds well with some theory." (Kubiszyn & Borich, 2010, p. 332)