Validity and Reliability

A map defining reliability and validity.

1. Reliability

1.1. Reliability measures the consistency of a test's scores. In other words, it refers to “the consistency with which it yields the same rank for individuals who take the test more than once,” (p.341).

1.2. Examinees can take the test repeatedly and it should yield the same, or nearly the same, result each time. Consistency matters because it saves time, and possibly money, depending on the institution.

1.3. Three Types of Reliability

1.3.1. Test-retest or Stability

1.3.1.1. This is “exactly how it [...] is implied,” testing for stability (p.341): the test is given and then given again to make sure that every administration yields the same results (see the correlation sketch at the end of the Reliability branch).

1.3.2. Alternative Form or Equivalence

1.3.2.1. Two forms of the same test are constructed to measure the same thing. Both forms are given to the examinees, and their scores determine the reliability of the tests. This approach eliminates the “memory and practicing involved in test-retest estimates,” (p.343). The correlation sketch at the end of the Reliability branch applies here as well.

1.3.3. Internal consistency

1.3.3.1. This means that if a person gets one “item” correct in a line of items, they are highly likely to get similar items correct (p.343). A split-half and Kuder-Richardson sketch also follows at the end of the Reliability branch.

1.3.3.2. Two Ways To Measure

1.3.3.2.1. Split-Half

1.3.3.2.2. Kuder-Richardson
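
In practice, both the test-retest (stability) and alternate-form (equivalence) estimates come down to correlating two sets of scores from the same examinees. A minimal sketch in Python follows, with scores and group size invented purely for illustration (the source does not prescribe any particular tool):

```python
import numpy as np

# Hypothetical scores for the same six examinees on two administrations of a
# test (test-retest) or on two parallel forms of it (alternate-form).
first_scores = np.array([78, 85, 62, 90, 71, 88])
second_scores = np.array([75, 83, 65, 92, 70, 86])

# The reliability estimate is the Pearson correlation between the two sets of
# scores; values near 1.0 mean examinees keep roughly the same rank both times.
reliability = np.corrcoef(first_scores, second_scores)[0, 1]
print(f"Estimated reliability: {reliability:.2f}")
```

A high coefficient supports the claim that the test “yields the same rank for individuals who take the test more than once,” (p.341).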
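
The two internal-consistency measures listed above can likewise be sketched from item-level right/wrong data. The 0/1 response matrix below is invented for illustration; the split-half estimate is corrected to full test length with the Spearman-Brown formula, and the Kuder-Richardson estimate uses the standard KR-20 definition.

```python
import numpy as np

# Hypothetical 0/1 (wrong/right) responses: rows are examinees, columns are items.
items = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0, 1, 0],
    [1, 0, 0, 1, 0, 0, 0, 0],
])

# Split-half: correlate odd-item and even-item half scores, then apply the
# Spearman-Brown correction to estimate reliability at full test length.
odd_half = items[:, ::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
r_halves = np.corrcoef(odd_half, even_half)[0, 1]
split_half = 2 * r_halves / (1 + r_halves)

# KR-20: (k / (k - 1)) * (1 - sum(p * q) / variance of total scores),
# where p is the proportion answering each item correctly and q = 1 - p.
k = items.shape[1]
p = items.mean(axis=0)
q = 1 - p
total_variance = items.sum(axis=1).var()  # population variance of total scores
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_variance)

print(f"Split-half (Spearman-Brown corrected): {split_half:.2f}")
print(f"KR-20: {kr20:.2f}")
```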

2. Validity

2.1. Content Validity Evidence

2.1.1. Test questions are inspected “to see whether they correspond to what the user decides should be covered by the test,” (p.330). Because this validation is based on specific subject content, it is easy to determine whether or not the questions are appropriate for what is being measured.

2.1.2. It is a more objective approach because it is straightforward; it does not rely on subjective content, from which an easy conclusion cannot be drawn.

2.1.3. A test with good content validity measures the “instructional objectives,” (p.330).

2.2. Answers the question, “Does the test measure what it is supposed to measure?” (p.329).

2.3. Criterion-Related Validity Evidence

2.3.1. Test scores are “correlated,” (p.330) with other criteria that are “external” to the test.

2.3.2. Two Types of Criterion-Related Validity

2.3.2.1. Concurrent criterion evidence

2.3.2.1.1. The new test can be given at the same time as the criterion measure it is being validated against. The correlation between the two sets of scores yields a numeric value called the “validity coefficient,” (p.330), which is used to show that the measured validity is good (see the sketch at the end of the criterion-related branch).

2.3.2.2. Predictive validity evidence

2.3.2.2.1. Determines “how well the test predicts [...]future[...] behavior[s] of the examinees,” (p. 331), which is useful for aptitude tests. This evidence is largely objective in that it is gathered from a large number of participants. It is usually a determining factor for specific outcomes such as admission to college, licensure, or certification. The sketch below shows the same correlation, with the criterion measured later.
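
Both kinds of criterion-related evidence reduce to the same computation: correlate examinees' test scores with scores on the external criterion. For concurrent evidence the criterion is measured at about the same time; for predictive evidence it is measured later. A minimal sketch in Python with invented numbers (the test scores and the GPA-like criterion below are hypothetical):

```python
import numpy as np

# Hypothetical scores on the new test and on an external criterion measure.
# For concurrent evidence the criterion is an established measure given at about
# the same time; for predictive evidence it is a later outcome (e.g. a grade).
test_scores = np.array([52, 47, 60, 38, 55, 44, 58, 41])
criterion = np.array([3.1, 2.8, 3.7, 2.2, 3.4, 2.6, 3.5, 2.5])

# The validity coefficient is simply the correlation between the two.
validity_coefficient = np.corrcoef(test_scores, criterion)[0, 1]
print(f"Validity coefficient: {validity_coefficient:.2f}")
```

The closer this coefficient is to 1.0, the stronger the criterion-related evidence; a value near 0 would mean the test tells us little about the criterion.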

2.4. Construct Validity Evidence

2.4.1. This is present “if its relationship to other information corresponds well with some theory [or] a logical explanation or rationale that can account for the interrelationships among a set of variables,” (p.332).

2.4.2. In a way, it works like a hypothesis: according to the theory, a relationship should exist between certain variables, and construct validity evidence is present when the observed relationship corresponds to that theory.

2.4.3. It can be used, for example, on grade-level tests: if a student has mastered specific objectives, they should score at a predictable level on the related criteria. If the test has construct validity, this relationship will show in the scores.