Validity and Reliability


1. Validity: How assessments are determined to hold worth; the degree to which an assessment measures what it is intended to measure.

1.1. Content Validity Evidence: Testing how well the assessment aligns with the course curriculum and its objectives.

1.1.1. Is not a statistical or mathematical approach.

1.1.2. Provides a direct connection between curriculum objectives and learning assessments.

1.1.3. Allows for instructional feedback and guidance tied to consistent learning objectives.

1.1.3.1. Significantly, adapting objectives to the validity of assessments enhances overall productivity.

1.2. Construct Validity Evidence

1.2.1. An expansion of Content Validity Evidence; the difference is that the objectives themselves are not being weighed, but rather whether certain expectations are being met.

1.2.2. Follows educational theory, working to connect test scores with those of other assessments to see whether they translate.

1.2.2.1. Interesting because this is a further tool that allows assessments such as formative or objective measures to be connected to formal test scores, producing the overall measurable outcome that is expected.

1.2.3. Used mostly as a comparison for in-house assessments.

1.3. Criterion-Related Validity Evidence

1.3.1. A numerical form of validity that compares and connects off-campus assessments with in-house assessments on a provided scale of acceptable marks (a correlation sketch follows this branch).

1.3.1.1. An example of this is standardized state testing.

1.3.2. Concurrent Validity Assessment

1.3.2.1. Applies when one or more tests/assessments are taken at the same time and compared with curriculum expectations assessed at the same time.

1.3.3. Predictive Validity Assessment

1.3.3.1. Has a similar outcome to the concurrent assessment, except the two sets of scores are not collected and correlated at the same time; a period of time is allowed to pass before the second set of scores is collected and measured.
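A minimal sketch of how a criterion-related validity coefficient might be computed, assuming two hypothetical score lists for the same students: marks from an in-house test and marks from an external criterion such as a state assessment. The calculation is a Pearson correlation; for concurrent validity the two sets are gathered at the same time, while for predictive validity the criterion scores are gathered after a delay. The names and numbers below are illustrative assumptions, not data from the map.

import numpy as np

def pearson_r(x, y):
    # Pearson correlation: covariance of x and y divided by the
    # product of their (population) standard deviations.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.cov(x, y, bias=True)[0, 1] / (x.std() * y.std()))

# Hypothetical scores for the same ten students.
in_house_scores  = [72, 85, 90, 66, 78, 95, 60, 88, 74, 81]  # classroom test
criterion_scores = [70, 82, 94, 60, 75, 97, 58, 90, 71, 84]  # external/state test

validity_coefficient = pearson_r(in_house_scores, criterion_scores)
print(f"Criterion-related validity coefficient: {validity_coefficient:.2f}")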

2. Working to assess students while determining whether the curriculum is on track.

3. Are the assessments valid enough to provide the evidence of validity that we require?

4. Through that evidence of validity, a full circle forms that connects to reliability, creating a cycle of dependency for assessments: educators need a valid assessment that provides reliability and is also consistent with the course objectives.

5. Reliability: The consistency with which assessments measure learning objectives and course content.

5.1. Test-Retest Estimates of Reliability

5.1.1. A form of assessment in which educators test the class or individual one time and then repeat the test at another time.

5.1.1.1. Provides for consideration or modification of common deficiencies. Enhances reliability through repetition by correlating the marks of each administration (see the sketch after this branch).

5.1.2. A consideration when scores are remarkably low or significantly different from expectations; a retest would allow for the application of reliable course content.
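A short sketch of the test-retest estimate under the same kind of assumption: the same hypothetical test is given to the same students at two points in time, and the reliability estimate is simply the correlation between the two sets of marks.

import numpy as np

# Hypothetical marks from two administrations of the same test.
first_administration  = [72, 85, 90, 66, 78, 95, 60, 88]
second_administration = [75, 83, 92, 64, 80, 93, 63, 86]

# Test-retest reliability is the Pearson correlation between the two administrations.
r = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest reliability estimate: {r:.2f}")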

5.2. Alternate Form Estimates of Reliability

5.2.1. Takes the same assessment that was given and presents it in two different forms to compare and consider the results (see the splitting sketch after this branch).

5.2.1.1. Method of Splitting

5.2.1.1.1. Break down the test as a whole

5.2.1.1.2. Divide it into two equal parts
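A sketch of the splitting method described above, assuming a small hypothetical item matrix (rows are students, columns are items scored 1 for correct and 0 for incorrect). The test is divided into two equal parts (odd-numbered versus even-numbered items here), the half scores are correlated, and the Spearman-Brown correction is applied to estimate the reliability of the full-length test. The data and the odd/even split are assumptions for illustration.

import numpy as np

# Hypothetical item scores: 6 students x 8 items, 1 = correct, 0 = incorrect.
items = np.array([
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 1],
    [1, 1, 0, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 0, 1, 1, 1],
])

# Divide the test into two equal parts (odd-numbered vs. even-numbered items).
half_a = items[:, 0::2].sum(axis=1)
half_b = items[:, 1::2].sum(axis=1)

# Correlate the two half scores.
r_half = np.corrcoef(half_a, half_b)[0, 1]

# Spearman-Brown correction estimates the reliability of the full-length test.
full_test_reliability = 2 * r_half / (1 + r_half)
print(f"Half-test correlation: {r_half:.2f}")
print(f"Spearman-Brown corrected reliability: {full_test_reliability:.2f}")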

5.3. Internal Consistency Estimates of Reliability

5.3.1. A concentrated assessment

5.3.2. Typically used when reviewing a single topic or unit of instruction.

5.3.3. A more in-depth approach to the cognitive recall process of the course curriculum.

5.4. Kuder-Richardson Method

5.4.1. A methodology holding that a single test administration can provide a consistent measure of reliability for the course curriculum (see the KR-20 sketch after this branch).

5.4.1.1. Consider that if the class as a whole produced a remarkably low test score, the test's validity should be reconsidered to assure that it is productive.
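A sketch of the Kuder-Richardson formula 20 (KR-20), which estimates reliability from a single administration of a test made up of right/wrong items, using the proportion of students answering each item correctly and the variance of the total scores. The item matrix below is a hypothetical assumption for illustration.

import numpy as np

def kr20(items):
    # items: rows = students, columns = dichotomous items (1 = correct, 0 = incorrect).
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                   # number of items
    p = items.mean(axis=0)               # proportion correct per item
    q = 1.0 - p                          # proportion incorrect per item
    total_var = items.sum(axis=1).var()  # variance of students' total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical single administration: 6 students x 8 items.
scores = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 1],
    [1, 1, 0, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 0, 1, 1, 1],
]
print(f"KR-20 reliability estimate: {kr20(scores):.2f}")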

6. Consistent reliability estimates go hand in hand with how planned outcomes and measurable learning objectives are created.