1. Construct Validity
1.1. "The ability to apply concrete measures to abstract concepts" (The Graide Network, 2018, p. 6).
1.2. This type of validity applies when measuring abstract traits such as kindness or intelligence.
2. Criterion Validity
2.1. "If a test is highly correlated with another valid criterion, it is more likely that the test is also valid" (The Graide Network, 2018, p. 7).
3. Content Validity
3.1. A valid test should "adequately examine all aspects that define the objective" (The Graide Network, 2018, p. 7).
4. Validity is abstract, which is why it is crucial to collect evidence demonstrating that a tool is valid.
5. A valid instrument is one where "the instrument measures what it intends to measure" (The Graide Network, 2018, p. 3).
5.1. Assessments need a clear connection between their purpose and the data they collect; when this connection is present, a test is more likely to be valid (The Graide Network, 2018).
6. Two Stages of Validity
6.1. Is the research question itself valid?
6.2. Is the instrument that is being used valid?
7. Validity is considered the most fundamental quality when creating and evaluating assessments (The Graide Network, 2018).
8. Content Evidence vs Face Evidence (Merler, 2017)
8.1. Content Evidence asks whether the assessment matches the criteria set out by the state or school district.
8.2. Face Evidence asks whether the students think the assessment was a fair judgment of what they learned.