
Validity and Reliability

Crystal English
EDU645: Dr. Reason
Week 5, Discussion 2: Validity and Reliability

Using MindMeister or Microsoft Word, create a mind map that represents the various types of validity and reliability, and explains why they are important in learning and assessment. Then post your mind map to the discussion forum, either as an attachment (for Word documents) or as a pasted link (for MindMeister).


Reliability

"Does the test yield the same or similar score rankings (all other factors being equal) consistently?" (Kubiszyn & Borich, 2010, p. 239). After repeated trials, the data should be examined to ensure there are no quantitative anomalies that would suggest an unreliable or inconsistent outcome. Ensuring the consistency of tests matters because a teacher should only administer tests that have been shown to do what they are intended to do.

Test-Retest or Stability

"The test is given twice and the correlation between the first set of scores and the second set of scores is determined" (Kubiszyn & Borich, 2010, p. 341). It is exactly what the name suggests: students take the same test twice, and the two sets of scores are compared for correlation.

Alternate Forms or Equivalence

"If there are two equivalent forms of a test, these forms can be used to obtain an estimate of the reliability of the scores from the test" (Kubiszyn & Borich, 2010, p. 343). This method is much like test-retest, but because the same test is not given twice, it avoids the problem of results being skewed by familiarity on the second attempt. The two forms are administered and the data compared; the trade-off is that the assessor must construct two equally good tests.

Internal Consistency

"If the test in question is designed to measure a single basic concept, it is reasonable to assume that people who get one item right will be more likely to get other, similar items right" (Kubiszyn & Borich, 2010, p. 343). The test should be consistent within itself: if a single skill or concept is being measured, a student who answers one item correctly should be likely to answer the other, similar items correctly as well.

Split-Half Method

Kuder-Richardson Methods
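The Kuder-Richardson 20 formula estimates internal consistency for dichotomously scored items: KR-20 = (k / (k - 1)) * (1 - Σ p_j q_j / σ²), where k is the number of items, p_j is the proportion answering item j correctly, q_j = 1 - p_j, and σ² is the variance of total scores. The sketch below uses invented responses and population variance for σ² (texts vary on population vs. sample variance):

```python
from statistics import mean, pvariance

# Hypothetical item responses: rows are students, columns are
# dichotomously scored items (1 = correct, 0 = incorrect).
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

k = len(responses[0])                        # number of items
totals = [sum(row) for row in responses]     # each student's total score
p = [mean(col) for col in zip(*responses)]   # proportion correct per item
pq_sum = sum(pj * (1 - pj) for pj in p)      # sum of item variances

# KR-20 = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores)
kr20 = (k / (k - 1)) * (1 - pq_sum / pvariance(totals))
print(round(kr20, 3))  # 0.8 for this data
```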


Interrater Reliability

This method of addressing reliability examines the way we rate students, especially on more subjective assessments such as oral exams. Educators independently assess the test-taker and then confer about their scores, coming to some agreement on the best score or the best way to rate it.


Validity

"Does the test measure what it's supposed to measure?" (Kubiszyn & Borich, 2010, p. 329). When testing for validity, the assessors must check the content of the test against their learning outcomes and objectives. The test should be effective in measuring the instructional objective, showing educators that the students have grasped the concept of the lesson.


Content Validity Evidence

"The content validity evidence for a test is established by inspecting test questions to see whether they correspond to what the user decides should be covered by the test" (Kubiszyn & Borich, 2010, p. 330). The educator should ensure that the questions on the test relate to the instructional objectives and the learning outcomes the students are expected to master.


Criterion-Related Validity Evidence

"[In] criterion-related validity evidence, scores from a test are correlated with an external criterion" (Kubiszyn & Borich, 2010, p. 330). The external criterion should be pertinent to the original test; for example, educators could correlate the scores from a month's worth of math quizzes with a final exam at the end of the month. A student who did well on the weekly math quizzes should earn a correspondingly high score on the final.

Concurrent Criterion-Related Validity Evidence

Predictive Validity Evidence


Construct Validity Evidence

"A test has construct validity evidence if its relationship to other information corresponds well with some theory" (Kubiszyn & Borich, 2010, p. 330). The test scores should be compared with what the assessors, based on theory, expect the results to be.


Face Validity

This approach bases the value of the testing strategy on how the test looks, i.e., whether it appears on its face to measure what it is intended to measure. It usually rests on the educator's own impressions rather than on evidence.


References

Kubiszyn, T., & Borich, G. (2010). Educational testing & measurement: Classroom application and practice (9th ed.)