Different types of validity and reliability


1. Alternative Forms

1.1. When using the alternative-form method of testing the reliability of an assessment, two equivalent forms of the same test are created.

1.2. This type of reliability is important because if a student produces two very different scores on two different forms of the same assessment, the assessment can be considered unreliable.
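The check described above comes down to correlating each student's scores on the two forms. A minimal sketch, with the Pearson coefficient implemented from scratch to show the arithmetic; the score lists are made-up illustration data, not real results:

```python
# Sketch of an alternate-forms reliability check via Pearson correlation.
# Form A and Form B scores below are hypothetical illustration data.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Each student's score on Form A and Form B of the same assessment.
form_a = [78, 85, 92, 64, 70, 88]
form_b = [75, 88, 90, 60, 72, 85]

r = pearson(form_a, form_b)
print(f"alternate-forms reliability: r = {r:.2f}")
```

A coefficient near 1 suggests the two forms rank students the same way; a low coefficient is the "two very different scores" warning sign described above.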


3. Content Validity

3.1. When testing for content validity, test questions should match up with the learning objectives.

3.2. This is important because it is not helpful to have a well-constructed assessment that does not test what it is meant to test. The educator or test composer has to make sure that the assessment contains no questions on material that was not covered in lessons.
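That audit can be sketched as a simple lookup: map each question to the learning objective it targets, then flag any question whose objective was never taught. The objective codes and blueprint here are hypothetical:

```python
# Sketch of a content-validity audit: flag test questions whose learning
# objective was never covered in class. All codes are hypothetical.

covered_objectives = {"OBJ1", "OBJ2", "OBJ3"}

# question number -> learning objective that question targets
test_blueprint = {
    1: "OBJ1",
    2: "OBJ2",
    3: "OBJ4",  # OBJ4 was never taught, so question 3 hurts content validity
    4: "OBJ3",
}

uncovered = sorted(q for q, obj in test_blueprint.items()
                   if obj not in covered_objectives)
print("questions to remove or reteach:", uncovered)
```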

4. Concurrent Criterion-Related Validity

4.1. When testing for concurrent criterion-related validity, two tests should be given to the same group of students at about the same time, and the correlation between the two sets of scores must be calculated.

4.2. This is important because if the scores produced by a new assessment do not correlate highly with those of an established, valid, and reliable assessment, the new assessment can be considered neither valid nor reliable and therefore should not be used.

5. Predictive Criterion Validity

5.1. This type of validity is used to test how well an assessment predicts future behavior. When testing for predictive criterion validity, an assessment is given, and after a certain period of time the individuals who took it are measured on the behavior or outcome the assessment was meant to predict.

5.2. This is important because if the students do not exhibit the behavior they were predicted to exhibit, the test can be considered not to have done what it was meant to do and is therefore invalid.


7. Internal Consistency

7.1. When using internal consistency as a type of reliability, the test is split into even- and odd-numbered items (or simply split in half), and the correlation between the two halves is calculated.

7.2. This is important because if the correlation is not high, this means that the test is not consistent from beginning to end. Once again we will not be able to tell the reason for the difference in test scores because the test is not reliable.

8. Test-Retest

8.1. When using test-retest to test for an assessment's reliability, an assessment is given twice and a correlation is calculated between the two test scores.

8.2. This is important because if a student takes the same test twice and produces two extremely different scores, the test could be considered unreliable. If the scores produced are not within the same range, we cannot be sure that the information was learned and that the learning objectives were accomplished.