Assessment Validity & Reliability
by Shanta Reed
1. Validity- determines whether an assessment measures what it claims to measure.
2. Reliability- the "consistency with which it yields the same rank of individuals who take the test more than once"
2.1. Test-Retest
2.1.1. A test is given twice, and the correlation between the two sets of scores is determined.
2.2. Alternative Form
2.2.1. Two forms of a test are given to obtain an estimate of the test's reliability. The correlation between the two sets of scores is determined.
2.3. Internal Consistency
2.3.1. Within an assessment, it can be assumed that a test-taker will get all alike items correct. Items should therefore correlate with one another, and the reliability of the test can be judged from the internal consistency of the assessment.
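One common index of internal consistency is Cronbach's alpha (not named in the notes above, but the standard statistic for this idea). A sketch, assuming hypothetical 0/1 item responses:

```python
# Cronbach's alpha: internal consistency from per-item score lists.
from statistics import variance

def cronbach_alpha(item_scores):
    """item_scores[i][j] = score of examinee j on item i."""
    k = len(item_scores)                       # number of items
    n = len(item_scores[0])                    # number of examinees
    item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(item_scores[i][j] for i in range(k)) for j in range(n)]
    # alpha rises when items co-vary (total variance exceeds the
    # sum of the individual item variances).
    return k / (k - 1) * (1 - item_vars / variance(totals))

# Hypothetical right/wrong (1/0) responses: 4 items, 6 examinees
items = [
    [1, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 1, 1, 1, 1, 0],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Because the same examinees tend to get all four items right or wrong together, the items correlate and alpha comes out fairly high; unrelated items would drive alpha toward zero.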
3. Criterion-Related Validity- scores are correlated with an external criterion. There are two types: concurrent and predictive validity.
3.1. Predictive- refers to how well the test predicts some future behavior of the examinees, e.g., an aptitude test such as the SAT.
3.2. Concurrent- deals with measures that can be administered at the same time as the measure to be validated. The correlation between the two assessments is then determined.
4. Content Validity- inspect the test questions to ensure that they cover what the user has decided should be covered by the assessment.
5. Construct Validity- the degree to which the test's relationships to other information correspond with a theory
5.1. any information that shows whether results from the test correspond to what you would expect
6. Reliability refers to the stability of a test over repeated administrations.
7. Valid Assessments should be able to answer these questions
7.1. Is the test valid for the intended purpose?
7.2. Does the test measure what it is supposed to measure?
7.2.1. Does the test do the job it was designed to do?