Validity and Reliability


1. VALIDITY

1.1. Construct Validity

1.1.1. Used to ensure that the measure actually measures what it is intended to measure (i.e., the construct) and not other variables. Using a panel of “experts” familiar with the construct is one way this type of validity can be assessed: the experts examine the items and decide what each specific item is intended to measure. Students can also be involved in this process to provide feedback.

1.2. Face Validity

1.2.1. Ascertains that the measure appears to assess the intended construct under study. Stakeholders can easily assess face validity. Although this is not a very “scientific” type of validity, it may be an essential component in enlisting the motivation of stakeholders: if they do not believe the measure is an accurate assessment of the ability, they may disengage from the task.

1.3. Criterion-Related Validity

1.3.1. Used to predict future or current performance; scores on the measure are correlated with an external criterion of interest (see the sketch below).
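
A minimal sketch (in Python, with entirely hypothetical data) of how a criterion-related validity coefficient might be computed: scores on the measure are correlated with a later criterion, here an invented first-year GPA.

    import numpy as np

    # Hypothetical data: admission-test scores and the criterion the
    # test is meant to predict (first-year GPA), one pair per student.
    test_scores = np.array([52, 61, 58, 70, 66, 75, 80, 73, 64, 69])
    first_year_gpa = np.array([2.1, 2.8, 2.5, 3.2, 3.0, 3.4, 3.7, 3.1, 2.9, 3.0])

    # The validity coefficient is simply the Pearson correlation
    # between the measure and the criterion.
    validity = np.corrcoef(test_scores, first_year_gpa)[0, 1]
    print(f"Criterion-related validity coefficient: {validity:.2f}")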

1.4. Formative Validity

1.4.1. When applied to outcomes assessment, it is used to assess how well a measure provides information that helps improve the program under study.

1.5. Sampling Validity

1.5.1. Ensures that the measure covers the broad range of areas within the concept under study. Not everything can be covered, so items need to be sampled from all of the domains. This may need to be completed using a panel of “experts” to ensure that the content area is adequately sampled; a panel can also help limit “expert” bias.

2. RELIABILITY

2.1. Test-Retest or Stability

2.1.1. The same test given to the same group at two different times.

2.1.1.1. Significance in learning and assessment

2.1.1.1.1. Results from both administrations give test makers and administrators insight into the test's reliability, observed through the correlation between the two sets of scores (see the sketch below).
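
A minimal sketch, assuming hypothetical scores for ten students who took the same test twice, two weeks apart; the stability coefficient is the correlation between the two administrations.

    import numpy as np

    # Hypothetical scores from two administrations of the same test
    # to the same ten students.
    first_administration = np.array([78, 85, 62, 90, 71, 88, 67, 74, 81, 59])
    second_administration = np.array([80, 83, 65, 92, 70, 85, 70, 72, 84, 61])

    # The test-retest (stability) coefficient is the correlation between
    # the two score sets; values near 1.0 indicate a stable measure.
    stability = np.corrcoef(first_administration, second_administration)[0, 1]
    print(f"Test-retest reliability: {stability:.2f}")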

2.2. Parallel Forms Reliability

2.2.1. A measure of reliability obtained by administering different versions of an assessment tool (both versions must contain items that probe the same construct, skill, knowledge base, etc.) to the same group of individuals. The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.

2.2.1.1. Significance in learning and assessment

2.2.1.1.1. Teachers can observe how students' scores correlate across the two forms, which indicates whether the alternate versions measure the construct consistently (see the sketch below).
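
A minimal sketch with hypothetical scores on two alternate forms; besides correlating the forms, it inspects their means and spreads, which truly parallel forms should also have in common.

    import numpy as np

    # Hypothetical scores for the same group on two alternate forms
    # built to probe the same construct.
    form_a = np.array([74, 81, 66, 90, 58, 77, 85, 69, 72, 80])
    form_b = np.array([76, 79, 68, 88, 61, 75, 83, 71, 70, 82])

    # Parallel-forms reliability is the correlation between the forms.
    reliability = np.corrcoef(form_a, form_b)[0, 1]
    print(f"Parallel-forms reliability: {reliability:.2f}")

    # Truly parallel forms should also have similar means and standard
    # deviations, so check those alongside the correlation.
    print(f"Form A mean/SD: {form_a.mean():.1f} / {form_a.std(ddof=1):.1f}")
    print(f"Form B mean/SD: {form_b.mean():.1f} / {form_b.std(ddof=1):.1f}")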

2.3. Inter-Rater Reliability

2.3.1. A measure of reliability used to assess the degree to which different judges or raters agree in their assessment decisions. Inter-rater reliability is useful because human observers will not necessarily interpret answers the same way; raters may disagree as to how well certain responses or material demonstrate knowledge of the construct or skill being assessed. Agreement is often summarized with a chance-corrected statistic such as Cohen's kappa (see the sketch below).
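
A minimal sketch of one common agreement statistic, Cohen's kappa, computed by hand for two hypothetical raters scoring ten essays; kappa corrects the raw percent agreement for the agreement expected by chance.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        # Observed agreement: fraction of items the raters scored identically.
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement: probability the raters agree at random, given
        # how often each rater uses each category.
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
        return (observed - expected) / (1 - expected)

    # Hypothetical ratings of ten essays on a pass/revise/fail scale.
    rater_1 = ["pass", "pass", "fail", "revise", "pass",
               "fail", "revise", "pass", "pass", "revise"]
    rater_2 = ["pass", "revise", "fail", "revise", "pass",
               "fail", "pass", "pass", "pass", "revise"]
    print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")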


2.4. Internal Consistency Reliability

2.4.1. A measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.

2.4.1.1. Average Inter-Item Correlation

2.4.1.2. Split-Half Reliability

2.4.1.3. Kuder-Richardson

2.4.1.3.1. "Measures the extent to which items within one form of the test have as much in common with one another as do the items in that one form with corresponding items in an equivalent form" (Borich & Kubiszyn, 2010, p. 344).
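
A minimal sketch, assuming a small hypothetical matrix of dichotomously scored (0/1) item responses, of two internal consistency estimates: split-half reliability stepped up to full test length with the Spearman-Brown formula, and the Kuder-Richardson KR-20 coefficient described above.

    import numpy as np

    # Hypothetical item-response matrix: rows are students, columns are
    # items scored 1 (correct) or 0 (incorrect).
    responses = np.array([
        [1, 1, 0, 1, 1, 0, 1, 1],
        [1, 0, 0, 1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0, 0, 0, 1],
        [1, 1, 0, 1, 1, 0, 1, 1],
        [1, 0, 1, 0, 1, 1, 0, 1],
    ])

    # Split-half: correlate odd-item and even-item half scores, then use
    # the Spearman-Brown formula to estimate full-length reliability.
    odd_half = responses[:, 0::2].sum(axis=1)
    even_half = responses[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(odd_half, even_half)[0, 1]
    split_half = 2 * r_half / (1 + r_half)

    # KR-20 for dichotomous items: k/(k-1) * (1 - sum(p*q) / total variance),
    # where p is the proportion correct on each item and q = 1 - p.
    k = responses.shape[1]
    p = responses.mean(axis=0)
    total_variance = responses.sum(axis=1).var()  # population variance
    kr20 = (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_variance)

    print(f"Split-half (Spearman-Brown): {split_half:.2f}")
    print(f"KR-20: {kr20:.2f}")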

3. Reference

Kubiszyn, T., & Borich, G. (2010). Educational testing and measurement: Classroom application and practice (9th ed.). John Wiley & Sons.