
1. Reliability
1.1. The extent to which an experiment, test, or any measuring procedure yields the same result on repeated trials.
1.2. Coefficients
1.2.1. Stability
1.2.1.1. A test or measure is administered; some time later the same test or measure is re-administered to the same or a highly similar group, and the correlation between the two sets of scores estimates stability (test-retest reliability)
1.2.2. Equivalency
1.2.2.1. Answers the question “Are the two forms of the test or measure equivalent?” Different forms of the same test or measure are administered to the same group, and the correlation between the two sets of scores indicates their equivalency
1.2.3. Stability & Equivalency
1.2.3.1. An equivalent form of the test is administered as the second paper on a different day, combining the test-retest and equivalent-forms approaches
1.2.4. Split-Half
1.2.4.1. The exam is given only once but contains two comparable halves (sets of items); the scores on the two halves are correlated (see the sketch at the end of this section)
1.2.5. Internal consistency
1.2.5.1. An indicator of reliability for a test or measure that is administered only once, based on how consistently the items measure the same thing (e.g., Cronbach's alpha)
1.2.6. Interrater reliability
1.2.6.1. Different trained raters, using a standard rating form, should measure the object of interest consistently
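
The split-half and internal-consistency coefficients above are simple computations over an examinees-by-items score matrix. Below is a minimal sketch, assuming NumPy; the function names, the odd/even splitting choice, and the sample data are illustrative assumptions, not part of the original notes.

```python
import numpy as np

def split_half_reliability(scores: np.ndarray) -> float:
    """Split-half coefficient with Spearman-Brown correction.

    `scores` is an (examinees x items) matrix; here the halves are the
    odd- and even-numbered items (one common splitting choice).
    """
    odd = scores[:, 0::2].sum(axis=1)
    even = scores[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(odd, even)[0, 1]         # correlation between the two halves
    return 2 * r_half / (1 + r_half)              # Spearman-Brown step-up to full length

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency coefficient for a test administered only once."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of examinees' total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative use with made-up item scores (5 examinees x 4 items).
scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
])
print(split_half_reliability(scores))
print(cronbach_alpha(scores))
```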
2. Objectivity
2.1. The objectivity of a test is an important factor, as it affects both the validity and reliability of the test
3. Practicability
3.1. A practical test is one that can be administered with reasonable time, cost, and effort
3.2. Tests can be made more practical by making them more objective (using more controlled item formats)
4. Interpretability
4.1. Easy to interpret
4.2. Results are easily converted into statistical form
5. Validity
5.1. A test is valid when it measures what it is supposed to measure
5.2. The soundness of your interpretations and uses of students' assessment results
5.3. Four principles of validation
5.3.1. Interpretation
5.3.2. Uses
5.3.3. Values implied
5.3.4. Consequences
5.4. Types of validity
5.4.1. Content validity
5.4.2. Face validity
5.4.3. Construct validity
5.4.4. Concurrent validity
5.4.5. Predictive (forecast) validity (see the correlation sketch below)
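
Concurrent and predictive validity are typically reported as a correlation between the test scores and a criterion measure (observed at the same time for concurrent validity, later for predictive validity). Below is a minimal sketch assuming NumPy; the variable names and data are made up for illustration.

```python
import numpy as np

# Hypothetical data: scores on the new test and a criterion measure
# (an established test for concurrent validity, or a later outcome
# for predictive validity). Values are invented for the example.
test_scores = np.array([55, 62, 70, 48, 81, 66, 59, 74])
criterion   = np.array([50, 60, 72, 45, 85, 63, 61, 70])

# The validity coefficient is the Pearson correlation between the two.
validity_coefficient = np.corrcoef(test_scores, criterion)[0, 1]
print(f"Validity coefficient: {validity_coefficient:.2f}")
```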