1. Test-Retest Reliability
1.1. A measure of reliability obtained by administering the same test twice, over a period of time, to the same group of individuals; the two sets of scores are then correlated.
2. Reliability
2.1. The degree to which an assessment tool produces stable and consistent results.
3. Alternate Forms or Equivalence
3.1. Two equivalent forms of a test are constructed and administered to the same group; correlating the scores from the two forms gives an estimate of the reliability of the scores from the test.
4. Internal Consistency
4.1. A measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.
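One common internal-consistency estimate is Cronbach's alpha; a minimal sketch on a small hypothetical item-by-person score matrix (the scores are invented for illustration):

```python
# Cronbach's alpha = (k / (k - 1)) * (1 - sum(item variances) / total variance)
from statistics import pvariance

# rows = respondents, columns = items probing the same construct
scores = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]

k = len(scores[0])                              # number of items
item_vars = [pvariance(col) for col in zip(*scores)]
total_var = pvariance([sum(row) for row in scores])

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.3f}")
```

Values above roughly 0.7 are conventionally taken as acceptable internal consistency.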
5. Split-Half Methods
5.1. Each item is assigned to one half of the test or the other, and the correlation between the two total scores (one per half) is computed; the result is often stepped up with the Spearman-Brown formula to estimate full-test reliability.
7. Construct Validity
7.1. Construct validity is used to ensure that the measure actually measures what it is intended to measure, i.e. the underlying construct.
8. Concurrent Criterion-Related Validity
8.1. An assessment of whether scores on a measure correlate with a criterion measure obtained at the same time the test is taken.
9. Criterion-Related Validity
9.1. Criterion-related validity applies to instruments that have been developed to be useful as indicators of a specific trait or behavior, either now or in the future. It is assessed by correlating test results with another criterion of interest.
10. Content Validity
10.1. Concerns the extent to which a measure adequately represents all facets of the concept it is intended to capture.
11. Validity
11.1. Does the test measure what it is supposed to measure?