Criteria and Definitions
1. Alpha Inflation: If there are multiple measures of the same outcome, p values for these measures should be adjusted for alpha inflation, or "capitalizing on chance" (a multiplicity-adjustment sketch appears after this list). If an outcome has only one measure, no adjustment is necessary, and its p value is as interpretable as appropriately adjusted p values from multiple measures.
2. A Priori Identification of Outcomes: Outcome measures should be identified prior to the implementation of a study to avoid capitalizing on chance. Ideally, this should be determined from a document written before study implementation. The use of a measure in a pretest can be considered evidence that the measure was selected a priori. Some studies will select portions of an instrument as an outcome measure, and in these cases there must be evidence that the specific measure was identified a priori, not the instrument as a whole.
3. Reliability: Outcome measures should have acceptable reliability to be interpretable. "Acceptable" here means reliability at a level that is conventionally accepted by experts in the field (a Cronbach's alpha sketch appears after this list).
4. Validity: Outcome measures should have acceptable validity to be interpretable. "Acceptable" here means validity at a level that is conventionally accepted by experts in the field.
5. Intervention Fidelity: The "experimental" intervention implemented in a study should have fidelity to the intervention proposed by the applicant. Fidelity instruments with demonstrated acceptable psychometric properties (e.g., inter-rater reliability, validity as shown by positive association with outcomes) provide the highest level of evidence.
6. Comparison Fidelity: A study's comparison condition should be implemented with fidelity to the comparison condition proposed by the applicant. Fidelity instruments with demonstrated acceptable psychometric properties (e.g., inter-rater reliability, validity as shown by predicted association with outcomes) provide the highest level of evidence.
7. Nature of Comparison Condition: The quality of evidence for an intervention depends in part on the nature of the comparison condition(s), including assessments of their active components and overall effectiveness. Interventions have the potential to cause more harm than good; therefore, an active comparison intervention should be shown to be better than no treatment.
8. Assurances to Participants: Study participants should always be assured that their responses will be kept confidential and will not affect their care or services. When such assurances are in place, participants are more likely to disclose valid data.
9. Participant Expectations: Participants can be biased by how an intervention is introduced to them and by an awareness of their study condition. Information used to recruit and inform study participants should be carefully crafted to equalize expectations. Masking treatment conditions during implementation of the study provides the strongest control for participant expectancies.
10. Standardized Data Collection: All outcome data should be collected in a standardized manner. Data collectors trained and monitored for adherence to standardized protocols provide the highest quality evidence of standardized data collection.
11. Data Collector Bias: Data collector bias is most strongly controlled when data collectors are not aware of the conditions to which study participants have been assigned. When data collectors are aware of specific study conditions, their expectations should be controlled for through training and/or statistical methods.
12. Selection Bias: Concealed random assignment of participants provides the strongest evidence of control for selection bias. When participants are not randomly assigned, covariates and confounding variables should be controlled as indicated by theory and research (a covariate-adjustment sketch appears after this list).
13. Attrition: Study results can be biased by participant attrition. Statistical methods as supported by theory and research can be employed to control for attrition that would bias results, but studies with no attrition needing adjustment provide the strongest evidence that results are not biased.
14. Missing Data: Study results can be biased by missing data. Statistical methods as supported by theory and research can be employed to control for missing data that would bias results, but studies with no missing data needing adjustment provide the strongest evidence (an imputation sketch appears after this list).
15. Analysis Meets Data Assumptions: The appropriateness of statistical analyses is a function of the properties of the data being analyzed and the degree to which the data meet statistical assumptions (an assumption-checking sketch appears after this list).
16. A Priori Identification of Methods: Analytic methods should be identified prior to inspection or analysis of the data to avoid capitalizing on chance. If this practice is not adequately conveyed through reports or published articles, applicants should submit documents such as proposals and IRB submissions. Analyses done in prior exploratory studies of an intervention can be taken as evidence that methods were selected a priori.
17. Analysis Consistent with Study Theory: The methods used to analyze the data for each outcome measure should be consistent with the theory and hypotheses underlying the intervention or program.
18. Anomalous Findings: Findings that contradict the theories and hypotheses underlying an intervention suggest the possibility of confounding causal variables and limit the validity of study findings.
19. Replications: Replications of findings from additional studies of the same intervention conducted by independent investigators provide the strongest confidence in the effectiveness of an intervention. Evidence about replications can be provided in two ways: applicants can refer to other studies narratively, as in the introductions and literature reviews of reports and articles, or they can submit data from multiple studies. This criterion applies to both types of evidence. Data from multiple studies are also taken into account when multiple measures within and across studies are synthesized and in the NREPP decision tree (an inverse-variance pooling sketch appears after this list).
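The sketches below illustrate several of the statistical criteria. Each is a minimal, hypothetical example in Python and is not part of the NREPP review procedure itself; all data, variable names, and library choices are assumptions. First, criterion 1 (Alpha Inflation): adjusting p values from multiple measures of a single outcome, here with the Holm method via statsmodels.

```python
# Criterion 1 sketch: adjust p values for multiple measures of one outcome.
# The p values are hypothetical; "holm" and "bonferroni" are standard
# adjustment methods available in statsmodels.
from statsmodels.stats.multitest import multipletests

p_values = [0.012, 0.034, 0.049, 0.210]  # one outcome, four measures (hypothetical)

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
for raw, adj, rej in zip(p_values, p_adjusted, reject):
    print(f"raw p = {raw:.3f} -> adjusted p = {adj:.3f}, significant: {rej}")
```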
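Criterion 3 (Reliability): one conventional index of internal-consistency reliability for a multi-item measure is Cronbach's alpha, with .70 or higher commonly cited as acceptable. A sketch with simulated item responses:

```python
# Criterion 3 sketch: Cronbach's alpha for a multi-item outcome measure.
# Rows are respondents, columns are items; the data are simulated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(100, 1))
responses = true_score + rng.normal(scale=0.5, size=(100, 5))  # 5 correlated items
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")   # ~.90 here; >= .70 is a common benchmark
```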
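Criterion 12 (Selection Bias): when assignment is not randomized, regression adjustment for theory-indicated covariates is one common approach (propensity-score methods are another). A sketch with a simulated confound, using hypothetical variable names:

```python
# Criterion 12 sketch: adjust a non-randomized treatment estimate for
# covariates. Here age confounds assignment, so the unadjusted estimate
# is biased while the adjusted one recovers the true effect (2.0).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),
    "baseline_score": rng.normal(50, 8, n),
})
df["treated"] = (df["age"] + rng.normal(0, 10, n) > 40).astype(int)  # older -> more likely treated
df["outcome"] = df["baseline_score"] + 2.0 * df["treated"] + 0.3 * df["age"] + rng.normal(0, 4, n)

unadjusted = smf.ols("outcome ~ treated", data=df).fit()
adjusted = smf.ols("outcome ~ treated + age + baseline_score", data=df).fit()
print(f"unadjusted effect: {unadjusted.params['treated']:.2f}")
print(f"adjusted effect:   {adjusted.params['treated']:.2f}  (true effect = 2.0)")
```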
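Criteria 13 and 14 (Attrition and Missing Data): one principled option for missing outcome data is model-based imputation. A sketch using scikit-learn's IterativeImputer (the choice of imputer is an assumption; MICE-style multiple imputation is a common alternative):

```python
# Criteria 13-14 sketch: impute missing outcome values from their
# relationship with observed variables, rather than dropping cases.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
X[:, 2] += X[:, 0]                      # column 2 correlates with column 0
X[rng.random(100) < 0.2, 2] = np.nan    # ~20% of outcome values missing

X_imputed = IterativeImputer(random_state=0).fit_transform(X)
print(f"missing before: {int(np.isnan(X[:, 2]).sum())}, after: {int(np.isnan(X_imputed[:, 2]).sum())}")
```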
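Criterion 15 (Analysis Meets Data Assumptions): a sketch of checking normality (Shapiro-Wilk) and homogeneity of variance (Levene) before choosing between a t test and a nonparametric alternative. The groups and the p > .05 decision rule are illustrative assumptions:

```python
# Criterion 15 sketch: verify distributional assumptions, then pick a test
# consistent with them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group_a = rng.normal(50, 10, 60)
group_b = rng.normal(54, 10, 60)

normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))
equal_var = stats.levene(group_a, group_b).pvalue > 0.05

if normal:
    result = stats.ttest_ind(group_a, group_b, equal_var=equal_var)  # Welch if variances differ
else:
    result = stats.mannwhitneyu(group_a, group_b)
print(f"p = {result.pvalue:.4f}")
```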
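Criterion 19 (Replications): when effect estimates from independent replications are synthesized, fixed-effect inverse-variance pooling is the simplest approach (random-effects models are preferred when studies are heterogeneous). A sketch with hypothetical study results:

```python
# Criterion 19 sketch: fixed-effect, inverse-variance pooling of effect
# estimates from independent studies. Estimates and standard errors are
# hypothetical.
import numpy as np

effects = np.array([0.42, 0.31, 0.55])     # per-study effect sizes
std_errors = np.array([0.15, 0.12, 0.20])  # per-study standard errors

weights = 1.0 / std_errors**2              # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI half-width)")
```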
Updated: 04/21/2010