
Validity and Reliability

Aaron Fitzgerald
EDU645: Learning & Assessment in the 21st Century
Week 5, Discussion 2: Validity and Reliability

Using MindMeister or Microsoft Word, create a mind map that represents the various types of validity and reliability and explains why they are important in learning and assessment. Then post your mind map to the discussion forum, either as an attachment (for Word documents) or as a pasted link (for MindMeister).

Reliability

"Does the test yield the same or similar score rankings (all other factors being equal) consistently? (Borich & Kubiszyn, 2010 p239)" After repeated trials the data should be examined to ensure that there are no quantitative anomolies that could infer an unreliable or inconsistent outcome. Ensuring reliability of tests is important because a teacher should only administer tests that have been proven to do what they are intended to do. This intention being- an accurate and fair portrayal of student's academic learning and progress throughout the course.

Test-Retest or Stability

"The test is given twice and the correlation between the first set of scores and the second set of scores is determined (Borich & Kubisyn, 2010 p.341)." It's exactly what the name sounds like. The assessees take the test two times and the scores are compared with each other, checking for correlations. A problem with this method of testing reliability is that taking the assessment the second time would skew the data, unless the test-takers managed to forget the entire test layout in between the assessment periods.

Alternate Forms or Equivalence

"If there are two equivalent forms of a test, these forms can be used to obtain an estimate of the reliability of the scores from the test (Borich & Kubisyn, 2010 p.343)." Similar to the test-retest method, though by not using the same test two times you can eliminate the problem of skewed results upon taking the second test. The two tests are taken and the data compared; however, this method does require the assessor to make two good tests. Potentially a lot more work.

Internal Consistency

"If the test in question is designed to measure a single basic concept, it is reasonable to assume that people who get one item right will be more likely to get other, similar items right(Borich & Kubisyn, 2010 p.343)." The test should be consistent within itself. If a test has many questions related to the same topics or subjects, then it would make sense that a student who answers one of these questions correctly would have a higher probability of answering questions correctly with similar topics. There are a couple of different methods to ensure internal consistency.

Split-Half Method
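
In the split-half method, the test is divided into two halves (commonly odd versus even items), the halves are scored separately and correlated, and the Spearman-Brown formula r_full = 2r / (1 + r) corrects the half-test correlation up to full test length. A minimal sketch with hypothetical right/wrong item data:

```python
# Split-half reliability sketch with the Spearman-Brown correction.
from statistics import correlation

# rows = students, columns = items scored 0/1 (hypothetical 6-item test)
items = [
    [1, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
]

odd_half  = [sum(row[0::2]) for row in items]  # items 1, 3, 5
even_half = [sum(row[1::2]) for row in items]  # items 2, 4, 6

r_half = correlation(odd_half, even_half)
r_full = 2 * r_half / (1 + r_half)             # Spearman-Brown correction

print(f"half-test r = {r_half:.2f}, corrected full-test r = {r_full:.2f}")
```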

Kuder-Richardson Methods
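
The Kuder-Richardson formulas apply to tests scored right/wrong. KR-20 in particular is (k / (k - 1)) * (1 - sum(p*q) / variance of total scores), where k is the number of items, p is each item's proportion correct, and q = 1 - p. A sketch with hypothetical item data:

```python
# KR-20 internal-consistency sketch for dichotomously scored items.
from statistics import pvariance

items = [  # rows = students, columns = items scored 0/1 (hypothetical)
    [1, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
]

k = len(items[0])                     # number of items
n = len(items)                        # number of students
totals = [sum(row) for row in items]  # each student's total score

# Sum of p*q over items: proportion correct times proportion incorrect
pq = 0.0
for i in range(k):
    p = sum(row[i] for row in items) / n
    pq += p * (1 - p)

kr20 = (k / (k - 1)) * (1 - pq / pvariance(totals))
print(f"KR-20 = {kr20:.2f}")
```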

Interrater

"Interrater reliability is the extent to which two or more individuals (coders or raters) agree (Writing@CSU, 2012)." This method of addressing reliability examines the ways in which we rate assessees; especially in more subjective terms such as oral exams. The raters will assess the test-taker and then confer about their scores, coming to some sort of agreement on the best score or best way to rate their score.

Validity

"Does the test measure what it's supposed to measure? (Borich & Kubiszyn, 2010 p329)." When testing for validity the assessors must check the content of the test and compare the content with their learning outcomes and objectives. Testing for validity is important because as a teacher you should ensure that your students are receiving fair and accurate tests with questions that meet your expected learning outcomes.

Content

"The content validity evidence for a test is established by inspecting test questions to see whether they correspond to what the user decides should be covered by the test (Borich & Kubisyn, 2010 p.330)." The assessor should ensure that the questions on the test are related to what was studied and the learning outcomes that he wants his students to know.

Criterion

"Criterion-related validity evidence, scores from a test are correlated with an external criterion (Borich & Kubisyn, 2010 p.330)." The external criterion should be pertinent to the original test; e.g. comparing the scores from a month's worth of vocab quizes to a final vocab test at the end of the month. A student that did well on the weekly quizes should have a negative correlation with the vocab test.

Concurrent Criterion-Related Validity Evidence

Predictive Validity Evidence

Construct

"A test has construct validity evidence if its relationship to other information corresponds well with some theory (Borich & Kubisyn, 2010 p.330)."  The test scores should be compared to what the assessors expect the results would be. As an example; in an University language arts gen. ed. class the professor should expect the English majors to recieve scores higher than those of Science majors. This type of validity measurement has no 2nd statistical correspondent, purely theory of expected results.

Face

"Face validity is concerned with how a measure or procedure appears (Writing@CSU, 2012)." Literally an examination of the face value of a procedure measuring how it looks, whether or not it appears to be a worthwhile test, questioning the design, and whether or not it will work reliably. There are no outside theories that are used in conjuction with face validity, simply the assessors or other observors opinions.

References

Kubiszyn, T., & Borich, G. (2010). Educational testing and measurement: Classroom application and practice (9th ed.) [VitalSource Bookshelf version]. Retrieved from http://online.vitalsource.com/books/9780470571880/page/333

Writing@CSU. (2012). Writing guide: Reliability and validity. Retrieved February 15, 2012, from http://writing.colostate.edu/guides/research/relval/index.cfm