1. How to use this mind map: This mind map summarizes threats to research validity. Nodes may be collapsed and expanded to see explanations or to test your knowledge. Click any icons associated with topics for additional information and/or links to other resources.
2. MISCONDUCT
2.1. Conflicts of interest
2.1.1. Examples...
2.1.1.1. Commercial
2.1.1.2. Academic
2.1.1.3. Political
2.1.2. Detection...
2.1.2.1. Read the conflicts of interest declaration, funding source, institutional (academic or corporate) affiliations
2.2. Publication bias
2.2.1. Definition
2.2.1.1. Selective publication based on results ("positive" studies more likely to be published than "negative" studies) and/or intentional suppression of dissemination for financial reasons
2.3. Fraud
2.3.1. Click here for a famous and egregious example that has had major effects on public health.
2.4. Skewed presentation
2.4.1. Examples
2.4.1.1. Adjusting scale of graph axes to bias interpretation
2.4.1.2. Emphasizing relative measures of effect or association instead of absolute measures
3. CHANCE
4. BIAS
4.1. Recruitment (before study)
4.1.1. Allocation bias
4.1.1.1. Definition...
4.1.1.1.1. a systematic difference in how participants are allocated to treatment groups.
4.1.1.2. Prevention...
4.1.1.2.1. the recruiter must not be able to discern or influence the assignment, so-called "concealed allocation"
4.1.1.2.2. Objective randomization scheme
4.1.1.3. Susceptible studies...
4.1.1.3.1. Clinical trials
4.1.1.4. Example
4.1.1.4.1. An investigator able to discern characteristics of a subject that may be related to the outcome of interest preferentially assigns subjects to one group or the other based on this knowledge.
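The prevention points above (an objective randomization scheme with concealed allocation) can be sketched in code. This is a minimal illustration, not a production randomization service; the function name `block_randomize` and the permuted-block design are illustrative assumptions.

```python
import random

def block_randomize(n_blocks: int, block_size: int = 4, seed: int = 42) -> list[str]:
    """Generate a permuted-block allocation sequence before recruitment begins.

    Because the whole sequence is fixed in advance and kept hidden from
    recruiters (concealed allocation), the recruiter cannot discern or
    influence the next assignment.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        # Each block contains equal numbers of each arm, in random order.
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence
```

In practice such a sequence would be held by a third party (e.g., a central randomization service), so the person enrolling patients never sees upcoming assignments.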
4.1.2. Selection bias (sometimes called sampling bias)
4.1.2.1. Definition...
4.1.2.1.1. Selection of a comparison group (controls) that is not representative of the population that produced the cases.
4.1.2.2. Susceptible studies...
4.1.2.2.1. Observational studies
4.1.2.3. Examples (3)...
4.1.2.3.1. Self-selection (volunteer) bias
4.1.2.3.2. "Healthy worker" effect
4.1.2.3.3. Berkson's fallacy (hospitalized patients)
4.1.2.4. Prevention...
4.1.2.4.1. Careful selection of control group to assure comparability with respect to baseline and prognostic characteristics. Ask, "If a control had had the disease, would they have been as likely to be enrolled as a case?"
4.2. Observation (during study)
4.2.1. Types...
4.2.1.1. Expectancy bias
4.2.1.1.1. Definition...
4.2.1.1.2. Prevention...
4.2.1.2. Measurement bias
4.2.1.2.1. Definition...
4.2.1.2.2. Prevention
4.2.1.2.3. Examples
4.2.1.3. Recall bias
4.2.1.3.1. Definition...
4.2.1.3.2. Prevention...
4.2.1.4. Hawthorne effect
4.2.1.4.1. Definition
4.3. Interpretation (after study)
4.3.1. Lead-time bias
4.3.1.1. Definition
4.3.1.1.1. early detection is confused with increased survival
4.3.1.2. Prevention...
4.3.1.2.1. adjust survival according to severity of disease at time of diagnosis
4.3.1.3. Example
4.3.1.3.1. Generally seen in studies of screening
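The definition and prevention above reduce to simple arithmetic: survival measured from an earlier (screen-detected) diagnosis includes the lead time, which must be subtracted before comparing with symptomatic diagnosis. The ages below are hypothetical.

```python
# One hypothetical patient: disease onset at age 60, death at age 70,
# regardless of when the disease is diagnosed.
death = 70.0
dx_symptoms = 66.0   # diagnosed when symptoms appear
dx_screen = 62.0     # same disease detected earlier by screening

survival_symptoms = death - dx_symptoms   # 4 years
survival_screen = death - dx_screen       # 8 years: survival "doubles"...
lead_time = dx_symptoms - dx_screen       # ...but 4 years are pure lead time
adjusted = survival_screen - lead_time    # 4 years: no true survival benefit
```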
4.3.2. Drop-out
4.3.2.1. Definition...
4.3.2.1.1. Excessive number of patients who leave the study and/or are lost to follow-up (usually due to adverse events or dissatisfaction)
4.3.2.2. Prevention...
4.3.2.2.1. None entirely; minimize the effect with intention-to-treat analysis and/or statistical methods (e.g., sensitivity analysis)
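Intention-to-treat analysis, mentioned above as a way to limit drop-out effects, means analyzing every randomized patient in the arm they were assigned to, whether or not they completed the study. A minimal sketch; the record layout and the function name `event_rate_itt` are assumptions for illustration.

```python
def event_rate_itt(patients: list[dict]) -> dict[str, float]:
    """Intention-to-treat: every randomized patient counts in the arm
    they were assigned to, even if they dropped out or crossed over."""
    totals: dict[str, int] = {}
    events: dict[str, int] = {}
    for p in patients:
        arm = p["assigned"]                      # assigned arm, not the arm actually received
        totals[arm] = totals.get(arm, 0) + 1
        events[arm] = events.get(arm, 0) + (1 if p["event"] else 0)
    return {arm: events[arm] / totals[arm] for arm in totals}
```

A per-protocol analysis would instead exclude drop-outs and crossovers, which is exactly what opens the door to drop-out bias.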
5. CONFOUNDING
5.1. Definition...
5.1.1. occurs when a factor is independently related to both the exposure and the outcome but is not part of the causal pathway; it leads to misinterpreting an association as causation.
5.2. Prevention...
5.2.1. Randomization (if possible); matching, restriction, or stratification can be used with observational studies.
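Stratification, listed above as an option for observational studies, can be shown with hypothetical counts: here smoking (the confounder) is related to both the exposure and the outcome, so the crude risk ratio suggests an effect that disappears within each smoking stratum.

```python
def risk_ratio(cases_exp: int, total_exp: int,
               cases_unexp: int, total_unexp: int) -> float:
    """Risk ratio from a 2x2 table: risk in exposed / risk in unexposed."""
    return (cases_exp / total_exp) / (cases_unexp / total_unexp)

# Hypothetical counts. Smoking is both more common among the exposed
# and associated with the outcome, so it confounds the crude estimate.
rr_smokers = risk_ratio(80, 100, 40, 50)       # stratum 1: RR = 1.0
rr_nonsmokers = risk_ratio(10, 50, 20, 100)    # stratum 2: RR = 1.0
rr_crude = risk_ratio(80 + 10, 100 + 50,
                      40 + 20, 50 + 100)       # pooled: RR = 1.5 (spurious)
```

The stratum-specific ratios are both 1.0, so the apparent crude association of 1.5 is entirely produced by the confounder.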