1. Quality of Measurement
1.1. Reliability
1.1.1. Types of Reliability
1.1.1.1. Inter-Rater / Inter-Observer Reliability
1.1.1.1.1. the degree of agreement or correlation between the ratings or coding of two independent raters or observers of the same phenomenon
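A rough illustration: inter-rater reliability for categorical codes can be summarized as percent agreement, and for continuous ratings as a correlation between the two raters. A minimal sketch in Python; all ratings below are made up, not from any real study.

```python
# Minimal sketch: two common ways to quantify inter-rater reliability.
import numpy as np

# Categorical codes assigned by two independent raters to the same 10 cases
rater_a = np.array([1, 2, 2, 3, 1, 2, 3, 3, 1, 2])
rater_b = np.array([1, 2, 3, 3, 1, 2, 3, 2, 1, 2])

# Percent agreement: proportion of cases where the raters assigned the same code
percent_agreement = np.mean(rater_a == rater_b)

# Continuous ratings (e.g., 1-10 scales) from two raters on the same 8 cases
scores_a = np.array([7.0, 5.5, 8.0, 6.0, 9.0, 4.5, 7.5, 6.5])
scores_b = np.array([6.5, 5.0, 8.5, 6.0, 8.5, 5.0, 7.0, 7.0])

# Correlation between the two raters' scores
inter_rater_r = np.corrcoef(scores_a, scores_b)[0, 1]

print(f"Percent agreement (categorical):      {percent_agreement:.2f}")
print(f"Inter-rater correlation (continuous): {inter_rater_r:.2f}")
```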
1.1.1.2. Test-Retest Reliability
1.1.1.2.1. the correlation between scores on the same test or measure administered at two successive time points (see the correlation sketch after Parallel-Forms Reliability below)
1.1.1.3. Parallel-Forms Reliability
1.1.1.3.1. the correlation between two versions of the same test or measure that were constructed in the same way (same construct, same topics)
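Both test-retest and parallel-forms reliability are usually estimated as a simple correlation between two sets of scores from the same people: two administrations of one test, or one administration of each form. A minimal sketch with made-up scores:

```python
# Minimal sketch: test-retest and parallel-forms reliability both reduce to a
# correlation between two score vectors from the same respondents.
import numpy as np

scores_1 = np.array([23, 31, 28, 35, 40, 27, 33, 30])  # first administration (or form A)
scores_2 = np.array([25, 30, 27, 36, 38, 28, 34, 29])  # second administration (or form B)

reliability_estimate = np.corrcoef(scores_1, scores_2)[0, 1]
print(f"Test-retest / parallel-forms reliability estimate: {reliability_estimate:.2f}")
```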
1.1.1.4. Internal Consistency Reliability
1.1.1.4.1. assesses the degree to which the items on a single multi-item instrument are interrelated (same construct, different topics); common estimates are listed below, followed by a computational sketch
1.1.1.4.2. Average Inter-Item Correlation
1.1.1.4.3. Average Item-Total Correlation
1.1.1.4.4. Split-Half Reliability
1.1.1.4.5. Cronbach's Alpha
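A minimal sketch of the four internal-consistency estimates above, computed from a small persons x items matrix. The responses, the odd/even split, and the sample size are all illustrative.

```python
import numpy as np

# 6 respondents x 4 items intended to measure the same construct (made-up data)
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 5],
], dtype=float)
n_items = items.shape[1]

# 1. Average inter-item correlation: mean of the off-diagonal item correlations
r = np.corrcoef(items, rowvar=False)
avg_inter_item = r[np.triu_indices(n_items, k=1)].mean()

# 2. Average item-total correlation (uncorrected: each item vs. the summed scale)
total = items.sum(axis=1)
item_total = [np.corrcoef(items[:, j], total)[0, 1] for j in range(n_items)]
avg_item_total = np.mean(item_total)

# 3. Split-half reliability: correlate two half-scale scores (here: odd vs. even
#    items), then step up with the Spearman-Brown correction
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_half, even_half)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# 4. Cronbach's alpha: (k / (k-1)) * (1 - sum of item variances / variance of total)
item_vars = items.var(axis=0, ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total.var(ddof=1))

print(f"Average inter-item correlation: {avg_inter_item:.2f}")
print(f"Average item-total correlation: {avg_item_total:.2f}")
print(f"Split-half (Spearman-Brown):    {split_half:.2f}")
print(f"Cronbach's alpha:               {alpha:.2f}")
```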
1.1.2. True Score Theory
1.1.2.1. X = T + e
1.1.2.1.1. Systematic Error
1.1.2.1.2. Random Error
1.1.2.1.3. Reducing Measurement Error
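A minimal simulation of X = T + e contrasting the two error types: random error fluctuates around zero and washes out over many measurements, while systematic error shifts every observation in the same direction and does not average out. All numbers are illustrative.

```python
# True score theory sketch: one true score measured repeatedly with error.
import numpy as np

rng = np.random.default_rng(0)
true_score = 50.0
n_measurements = 10_000

# Random error: centered on zero, so the mean of many measurements recovers T
random_error = rng.normal(loc=0.0, scale=5.0, size=n_measurements)
x_random = true_score + random_error

# Systematic error: a constant bias (e.g., a miscalibrated instrument)
systematic_bias = 3.0
x_biased = true_score + systematic_bias + random_error

print(f"True score:                    {true_score:.1f}")
print(f"Mean with random error only:   {x_random.mean():.1f}")  # ~ true score
print(f"Mean with systematic + random: {x_biased.mean():.1f}")  # ~ true score + bias
```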
1.1.2.2. Theory of Reliability
1.1.2.2.1. Covariance(X1, X2) / (SD(X1) * SD(X2))
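Under true score theory, reliability is the ratio of true-score variance to observed-score variance, var(T) / var(X). Because T is never observed directly, it is estimated with the formula above: the correlation between two parallel measurements of the same people. A minimal simulation sketch; the sample size and variances are illustrative.

```python
# Reliability as var(T)/var(X), estimated from two parallel measurements.
import numpy as np

rng = np.random.default_rng(1)
n_people = 100_000
t = rng.normal(loc=50.0, scale=10.0, size=n_people)      # true scores T
x1 = t + rng.normal(scale=5.0, size=n_people)            # X1 = T + e1
x2 = t + rng.normal(scale=5.0, size=n_people)            # X2 = T + e2

# Theoretical reliability: true-score variance over observed-score variance
theoretical = t.var(ddof=1) / x1.var(ddof=1)

# Estimated reliability: Covariance(X1, X2) / (SD(X1) * SD(X2))
estimated = np.cov(x1, x2, ddof=1)[0, 1] / (x1.std(ddof=1) * x2.std(ddof=1))

print(f"var(T) / var(X):              {theoretical:.3f}")  # ~ 100 / 125 = 0.80
print(f"cov(X1,X2) / (SD(X1)*SD(X2)): {estimated:.3f}")    # close to the same value
```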
1.1.3. definition: consistency or stability of an observation.
1.2. Validity
1.2.1. definition: accuracy of an observation
1.2.2. Construct Validity
1.2.2.1. definition: the extent to which your measure or instrument actually measures what it is theoretically supposed to measure
1.2.2.2. Translational Validity
1.2.2.2.1. Face Validity
1.2.2.2.2. Content Validity
1.2.2.3. Criterion-Related Validity
1.2.2.3.1. Predictive Validity
1.2.2.3.2. Concurrent Validity
1.2.2.3.3. Convergent Validity
1.2.2.3.4. Discriminant Validity
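Convergent and discriminant validity are often checked by comparing correlations: two measures of the same construct should correlate highly with each other, and only weakly with a measure of a theoretically unrelated construct. A minimal simulation sketch; the construct names and all parameters are placeholders.

```python
# Convergent vs. discriminant pattern on simulated (illustrative) data.
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
construct_a = rng.normal(size=n)   # e.g., self-esteem (placeholder name)
construct_b = rng.normal(size=n)   # an unrelated construct (placeholder)

# Two operationalizations of construct A and one of construct B, each with noise
measure_a1 = construct_a + rng.normal(scale=0.5, size=n)
measure_a2 = construct_a + rng.normal(scale=0.5, size=n)
measure_b = construct_b + rng.normal(scale=0.5, size=n)

convergent_r = np.corrcoef(measure_a1, measure_a2)[0, 1]   # expected to be high
discriminant_r = np.corrcoef(measure_a1, measure_b)[0, 1]  # expected to be near 0

print(f"Convergent correlation (same construct):        {convergent_r:.2f}")
print(f"Discriminant correlation (different construct): {discriminant_r:.2f}")
```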
1.2.2.4. Threats to Construct Validity
1.2.2.4.1. definition: any factor that leads you to an incorrect conclusion about whether your operationalizations adequately reflect the constructs they are intended to represent
1.2.2.4.2. 1. Inadequate Preoperational Explication of Constructs
1.2.2.4.3. 2. Mono-Operation Bias (relying on a single version of the treatment or program)
1.2.2.4.4. 3. Mono-Method Bias (relying on a single method of measurement)
1.2.2.4.5. 4. Interaction of Treatments
1.2.2.4.6. 5. Interaction of Testing and Treatments
1.2.2.4.7. 6. Restricted Generalizability across Constructs
1.2.2.4.8. 7. Confounding Constructs and Levels of Constructs (e.g. different dosages)
1.2.2.4.9. Social Threats to Construct Validity
2. Levels of Measurement
2.1. definition: the relationship between numerical values on a measure.
2.2. Nominal
2.2.1. the numerical values simply name or label the attribute (e.g. jersey numbers); they carry no quantitative meaning
2.3. Ordinal
2.3.1. attributes can be rank-ordered, but the distances between ranks are not interpretable
2.4. Interval
2.4.1. the distance between values is interpretable (e.g. temperature in Celsius), but there is no true zero
2.5. Ratio
2.5.1. ratios between values are interpretable because there is a meaningful absolute zero (e.g. weight)
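A compact way to remember the hierarchy is by the interpretations each level supports; each level adds to what the previous one allows. A small sketch; the example variables are just common illustrations.

```python
# Levels of measurement and the comparisons each one supports (illustrative examples).
levels = {
    "nominal":  {"example": "jersey number",
                 "supports": ["equal / not equal"]},
    "ordinal":  {"example": "class rank",
                 "supports": ["equal / not equal", "greater / less"]},
    "interval": {"example": "temperature in Celsius",
                 "supports": ["equal / not equal", "greater / less",
                              "meaningful differences"]},
    "ratio":    {"example": "weight in kg",
                 "supports": ["equal / not equal", "greater / less",
                              "meaningful differences", "meaningful ratios (true zero)"]},
}

for name, info in levels.items():
    print(f"{name:8s} e.g. {info['example']:24s} -> {', '.join(info['supports'])}")
```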