16 Review Criteria for Research Integrity

Criterion and Definition
1. Theory-/Hypothesis-Driven Measure Selection: Outcome measures should be supported by literature related to the study's theories and/or hypotheses.
2. Reliability: Outcome measures should demonstrate evidence of reliability.
3. Validity: Outcome measures should demonstrate evidence of validity.
4. Intervention Fidelity: The experimental intervention should be implemented as intended or modified appropriately as the study progresses.
5. Nature of Comparison Condition: A study's comparison condition should be an appropriate contrast to the experimental intervention.
6. Comparison Fidelity: The comparison condition(s) should be implemented as intended or modified appropriately as the study progresses.
7. Assurances to Participants: Assurances of confidentiality, and that participants' standard of care will not be affected by study participation, are likely to elicit more accurate responses.
8. Participant Expectations: Participants can be biased by how an intervention is introduced to them and by an awareness of their study condition. Information used to recruit and inform study participants should, where possible, be carefully crafted to equalize expectations across conditions.
9. Standardized Data Collection: All outcome data should be collected in a standardized manner. Data collectors trained and monitored for adherence to standardized protocols provide the highest quality evidence of standardized data collection.
10. Data Collection Bias: Data collector bias is most strongly controlled when data collectors are not aware of the conditions to which study participants have been assigned. When data collectors are aware of specific study conditions, their expectations should be controlled for through training and/or statistical methods.
11. Selection Bias: There should be baseline equivalence across study conditions on key variables, and any baseline differences should be appropriately controlled for in statistical analyses.
12. Attrition: Study results can be biased by participant attrition; therefore, efforts are required to account for attrition.
13. Missing Data (other than missing data resulting from attrition): Study results can be biased by missing data; therefore, efforts are required to account for missing data.
14. Analysis Meets Data Assumptions: The appropriateness of statistical analyses is a function of the properties of the data being analyzed and the degree to which data meet statistical assumptions.
15. Hypothesis-Driven Selection of Analytic Methods: Data analytic approaches should be specified to test the study hypotheses, rather than selected ex post facto based on the data.
16. Anomalous Findings: Findings that contradict the theories and hypotheses underlying an intervention suggest the possibility of confounding causal variables and limit the validity of study findings.
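As a concrete illustration of Criterion 11, baseline equivalence across study conditions is often screened with a standardized mean difference on key baseline variables. The sketch below is a minimal, hedged example: the data, the function name, and the 0.25-SD screening threshold are illustrative assumptions, not part of the criteria above.

```python
# Illustrative sketch for Criterion 11 (Selection Bias): screening baseline
# equivalence with a standardized mean difference (Cohen's d, pooled SD).
# The data and the 0.25-SD threshold are hypothetical assumptions.
from statistics import mean, stdev

def standardized_mean_difference(treatment, comparison):
    """Cohen's d using a pooled sample standard deviation."""
    n_t, n_c = len(treatment), len(comparison)
    s_t, s_c = stdev(treatment), stdev(comparison)
    pooled_sd = (((n_t - 1) * s_t**2 + (n_c - 1) * s_c**2)
                 / (n_t + n_c - 2)) ** 0.5
    return (mean(treatment) - mean(comparison)) / pooled_sd

# Hypothetical baseline scores for the two study conditions.
treatment_baseline = [52, 48, 55, 50, 47, 53, 49, 51]
comparison_baseline = [50, 46, 54, 49, 45, 52, 48, 50]

d = standardized_mean_difference(treatment_baseline, comparison_baseline)

# If |d| exceeds a pre-specified screening threshold, the baseline variable
# should be carried into the outcome analysis as a covariate.
needs_adjustment = abs(d) > 0.25
```

In this illustration the groups differ by roughly half a pooled standard deviation at baseline, so the variable would be retained as a covariate in the outcome model rather than treated as equivalent.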
Updated: 4/26/2012