Methods for estimating validity and reliability in education, and how to interpret their coefficients

1. Reliability

1.1. Test-Retest or Stability

1.1.1. Method of estimating reliability

1.1.2. Memory or practice from the first administration may carry over to the second; the interval between administrations should be considered: the longer the interval, the lower the reliability coefficient.
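A test-retest reliability coefficient is simply the Pearson correlation between scores from the two administrations. A minimal sketch (the student scores below are hypothetical, invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five students on the same test given twice.
first_administration = [82, 75, 90, 68, 88]
second_administration = [80, 78, 92, 65, 85]

print(round(pearson_r(first_administration, second_administration), 2))  # 0.96
```

Because the students keep roughly the same rank order across the two administrations, the coefficient is high; a longer interval between testings would typically pull it lower.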

1.2. Alternate Forms/Equivalence

1.2.1. Two equivalent forms of the same test

1.2.2. Used to estimate the reliability of test scores; the critical problem is that it takes a great deal of work to build one good test, let alone two equivalent ones.

1.3. Internal Consistency

1.3.1. Split-halves: splitting a single test into two parts

1.3.2. When the items correlate with one another, the test is internally consistent, and the reliability of its scores can be estimated by an internal consistency method.

1.4. Split-Half Method

1.4.1. Score each half of the test separately, then correlate the two sets of half scores.

1.4.2. Odd-even split: odd-numbered items form one half and even-numbered items the other; the half-test correlation is corrected/adjusted upward to reflect the reliability the test would have if it were twice as long; the most frequently used correction is the Spearman-Brown prophecy formula.
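The odd-even split and the Spearman-Brown correction can be sketched as follows (the 1/0 item-score matrix is hypothetical):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                           * sum((b - my) ** 2 for b in y))

def spearman_brown(r_half):
    """Prophecy formula: estimated reliability of a test twice as long as each half."""
    return 2 * r_half / (1 + r_half)

# Hypothetical right/wrong (1/0) item scores for four students on a 6-item test.
items = [
    [1, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1, 0],
    [0, 1, 1, 0, 0, 1],
]
odd_half = [sum(row[0::2]) for row in items]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in items]  # items 2, 4, 6

r_half = pearson_r(odd_half, even_half)   # half-test correlation
r_full = spearman_brown(r_half)           # corrected full-length estimate
print(round(r_half, 2), round(r_full, 2))  # 0.52 0.69
```

The correction always moves the estimate upward, since each half is only half as long as the full test.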

1.5. Kuder-Richardson/Coefficient Alpha Methods

1.5.1. Measure the extent to which the items within one form of a test have as much in common with one another as they would with the items of an equivalent form.

1.5.2. Coefficient alpha extends the approach to items scored with multiple point values; computation is best left to a computer program or the test publisher; internal consistency estimates suit power tests (e.g., essay or word problems) but not speeded tests (e.g., a typing test).
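For right/wrong items, the Kuder-Richardson formula 20 (KR-20) estimates internal consistency directly from item statistics. A minimal sketch with a hypothetical four-student, six-item score matrix:

```python
def kr20(items):
    """Kuder-Richardson formula 20 for right/wrong (1/0) item scores.

    items: one row of 1/0 item scores per examinee.
    """
    k = len(items[0])                      # number of items
    n = len(items)                         # number of examinees
    totals = [sum(row) for row in items]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    sum_pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in items) / n  # proportion passing item j
        sum_pq += p * (1 - p)
    return (k / (k - 1)) * (1 - sum_pq / var_t)

# Hypothetical 1/0 scores for four students on a 6-item test.
items = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 0],
]
print(round(kr20(items), 2))  # 0.78
```

Coefficient alpha generalizes the same computation to items worth multiple points, replacing the sum of p(1-p) terms with the sum of the item variances.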

2. Interpreting Reliability Coefficients

2.1. Principle 1

2.1.1. Group variability affects reliability coefficient

2.2. Principle 2

2.2.1. Scoring reliability limits test score reliability

2.3. Principle 3

2.3.1. The more items included in a test, the higher its reliability

2.4. Principle 4

2.4.1. Reliability decreases if test is too easy or too difficult

3. As students move through their education, they are required to take many assessments that measure the knowledge they have retained. The consistency of the resulting scores determines the reliability of a test and how well it functions. Educators need their tests to be consistent and reliable so that students receive assessments that stay focused on the topic and relate to the material taught.

4. Kubiszyn, T., & Borich, G. (2010). Educational testing and measurement: Classroom application and practice (9th ed.). Hoboken, NJ: John Wiley & Sons.

5. Validity

5.1. A valid test measures what it is intended to measure

5.1.1. Content validity evidence: the test items match the learning objectives; non-numerical; established by judging how well the items fit the instructional objectives.

5.1.2. Criterion-related validity evidence: yields a numerical value based on results from a criterion measure.

5.1.3. Concurrent criterion-related validity evidence: a numerical value obtained when scores from a test are correlated with an external criterion measured at about the same time.

5.1.4. Predictive validity evidence: how well a test predicts future behavior (as with aptitude tests); the two sets of scores are correlated, and the size of the correlation helps determine the worth of the test.

5.1.5. Construct validity evidence: based on a theory or logical rationale that accounts for the interrelationships among a set of variables; when results correspond to what the theory predicts, the test reflects/demonstrates those relationships; serves as an anchor for establishing a measure of the underlying behavior.

5.2. Interpreting Validity Coefficients

5.2.1. Content validity: compare the test items with the learning objectives to determine whether the items match or measure those objectives.

5.2.2. Concurrent/predictive validity: correlate scores with those from a well-established test measuring the same behavior.

5.2.3. Principle 1: concurrent validity coefficients tend to be higher than predictive validity coefficients.

5.2.4. Principle 2: group variability affects the validity coefficient; heterogeneous groups yield higher coefficients than homogeneous groups.

5.2.5. Principle 3: relevance and reliability; the size of the resulting coefficient depends on the reliability of both the predictor and the criterion measure.
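A standard psychometric consequence of Principle 3 (not spelled out in the map itself) is that a validity coefficient can be no larger than the square root of the product of the two reliabilities:

```python
import math

def max_validity(r_predictor, r_criterion):
    """Theoretical upper bound on a validity coefficient, given the
    reliability of the predictor test and of the criterion measure."""
    return math.sqrt(r_predictor * r_criterion)

# A predictor with reliability .81 and a criterion with reliability .64
# cannot correlate above sqrt(.81 * .64) = .72, however relevant the test.
print(round(max_validity(0.81, 0.64), 2))  # 0.72
```

So an unreliable criterion caps the observable validity no matter how good the test is, which is why both reliabilities matter when judging a reported coefficient.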

6. Education assesses fundamental concepts we teach our students that are important for their futures. For assignments and assessments, validity is important for reassuring us that students are learning the necessary information in relation to the learning objectives set forth in the classroom at the beginning of the year.