Validity and Reliability

Using MindMeister or Microsoft Word, create a mind map that represents the various types of validity and reliability, and explains why they are important in learning and assessment. Then, post your mind map to the discussion forum, either as an attachment (for Word documents) or a pasted link (for MindMeister).


Content "The content validity evidence for a test is established by inspecting test questions to see whether they correspond to what the user decides should be covered by the test" (Kubiszyn & Borich, 2010, p. 330). The assessor should ensure that the questions on the test relate to what was studied and to the learning outcomes the students are expected to demonstrate.

Concurrent Criterion-Related Validity Evidence "Concurrent criterion-related validity evidence deals with measures that can be administered at the same time as the measure to be validated" (Kubiszyn & Borich, 2010, p. 330). The assessor should compare the test with an already established test that has been validated over time. They should administer both tests to their students and then compute the correlation between the two sets of scores. The correlation yields a numeric value: the higher the positive correlation, the stronger the evidence of concurrent validity.
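As an illustration of the arithmetic involved (not part of the assignment text), the correlation between scores on a new test and an established test can be computed with the Pearson product-moment formula. All scores below are hypothetical.

```python
# Sketch: concurrent criterion-related validity estimated as a Pearson
# correlation between a new test and an established, already-validated test
# taken by the same students. The score lists are made up for illustration.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

new_test = [72, 85, 90, 65, 78, 88]      # hypothetical scores on the new test
established = [70, 82, 95, 60, 75, 91]   # same students, validated test
r = pearson_r(new_test, established)
print(round(r, 2))
```

A coefficient near +1 would support concurrent validity; a value near 0 would suggest the new test is not measuring the same thing as the established one.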

Construct "A test has construct validity evidence if its relationship to other information corresponds well with some theory" (Kubiszyn & Borich, 2010, p. 330). The test scores should be compared with what the assessors expect the results to be. For example, in a university general-education language arts class, the professor should expect English majors to receive higher scores than science majors. This type of validity evidence rests purely on a theory of expected results, with no second statistical measure to correlate against.

Face "Face validity is concerned with how a measure or procedure appears (Writing@CSU, 2012)." It is literally an examination of the face value of a procedure: how it looks, whether it appears to be a worthwhile test, whether its design seems sound, and whether it seems likely to work reliably. No outside theories are used in conjunction with face validity, only the opinions of the assessor or other observers.

Validity "Does the test measure what it's supposed to measure?" (Kubiszyn & Borich, 2010, p. 329). When testing for validity, assessors must check the content of the test and compare it with their learning outcomes and objectives.

Testing for validity is important because, as a teacher, you should ensure that your students receive fair and accurate tests whose questions meet your expected learning outcomes.

Criterion-Related In criterion-related validity evidence, "scores from a test are correlated with an external criterion" (Kubiszyn & Borich, 2010, p. 330). The external criterion should be pertinent to the original test; for example, comparing the scores from a month's worth of vocabulary quizzes to a final vocabulary test at the end of the month. A student who did well on the weekly quizzes should also do well on the final test, so the two sets of scores should show a positive correlation.

Predictive Validity Evidence "Predictive validity evidence refers to how well the test predicts some future behavior of the examinees" (Kubiszyn & Borich, 2010, p. 331). Many universities prefer this type of validity evidence because they can use it as a basis for predicting academic success in higher education. Examples of predictive tests include the SAT, ACT, and GRE. To test predictive validity, you simply wait and see how closely the predictions match the examinees' later performance.

Reliability "Does the test yield the same or similar score rankings (all other factors being equal) consistently?" (Kubiszyn & Borich, 2010, p. 239). After repeated trials, the data should be examined to ensure that there are no quantitative anomalies that would indicate an unreliable or inconsistent outcome. Ensuring the reliability of tests is important because a teacher should only administer tests that have been proven to do what they are intended to do: give an accurate and fair portrayal of students' academic learning and progress throughout the course.

Test-Retest or Stability "The test is given twice and the correlation between the first set of scores and the second set of scores is determined" (Kubiszyn & Borich, 2010, p. 341). It is exactly what the name suggests: the assessees take the test twice, and the two sets of scores are correlated with each other. A problem with this method is that memory of the first administration can skew the second set of scores, unless the test-takers manage to forget the entire test in between the two administrations.

Alternate Forms or Equivalence "If there are two equivalent forms of a test, these forms can be used to obtain an estimate of the reliability of the scores from the test" (Kubiszyn & Borich, 2010, p. 343). This is similar to the test-retest method, but because the same test is not given twice, it eliminates the problem of skewed results on the second administration. The two tests are taken and the data compared; however, this method requires the assessor to construct two good, equivalent tests, which is potentially a lot more work.

Internal Consistency An internally consistent test is one on which a student who gets one item right is likely to get other, similar items right (Kubiszyn & Borich, 2010, p. 343). The test should be consistent within itself: if a test has many questions on the same topics, a student who answers one of those questions correctly should have a higher probability of answering similar questions correctly. There are a couple of different methods for estimating internal consistency.
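As a rough sketch of how internal-consistency coefficients are calculated (the data and function names are invented for illustration), an odd-even split-half correlation can be stepped up to a full-test estimate with the Spearman-Brown formula, and Kuder-Richardson Formula 20 (KR-20) can be computed from a single table of right/wrong item scores:

```python
# Two internal-consistency estimates on a small, made-up table of scored item
# responses (1 = correct, 0 = incorrect; rows are students, columns are items).

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def split_half_reliability(responses):
    """Odd-even split-half correlation, stepped up with Spearman-Brown."""
    odd = [sum(row[0::2]) for row in responses]   # items 1, 3, 5, ...
    even = [sum(row[1::2]) for row in responses]  # items 2, 4, 6, ...
    r_half = pearson_r(odd, even)
    return 2 * r_half / (1 + r_half)              # Spearman-Brown correction

def kr20(responses):
    """Kuder-Richardson Formula 20 (population variance of total scores)."""
    n_students = len(responses)
    k = len(responses[0])                          # number of items
    totals = [sum(row) for row in responses]
    mean = sum(totals) / n_students
    var_total = sum((t - mean) ** 2 for t in totals) / n_students
    pq = 0.0
    for item in range(k):
        p = sum(row[item] for row in responses) / n_students  # proportion correct
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

responses = [  # hypothetical data: 6 students x 6 items
    [1, 1, 1, 1, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 1],
]
print(round(split_half_reliability(responses), 2))
print(round(kr20(responses), 2))
```

The Spearman-Brown step-up is needed because each half is only half as long as the full test, and shorter tests are less reliable.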

Split-Half Method This method of estimating internal consistency splits the test in half, generally with an odd-even split. Each half of the test is scored, and the two halves are then correlated with each other to get the final correlative data.

Kuder-Richardson Methods "These methods measure the extent to which items within one form of the test have as much in common with one another as do the items in that one form with corresponding items in an equivalent form" (Kubiszyn & Borich, 2010, p. 344). This is a data-heavy method for checking reliability: every item response from every student has to be analyzed to reach a statistical conclusion. At the end of all the calculations, you are left with a number that tells you how reliable the test questions are in comparison to one another, as well as the reliability of the test as a whole.

Reference

Kubiszyn, T., & Borich, G. (2010). Educational Testing and Measurement: Classroom Application and Practice (9th ed.) [VitalSource Bookshelf version]. Retrieved from http://online.vitalsource.com/books/9780470571880/page/333