====== OHDSI Best Practices for Estimating Population-Level Effects ======

:!: //This document is under development. Changes can be proposed and discussed via the [[http://forums.ohdsi.org/t/population-level-estimation-workgroup-discussing-best-practices|OHDSI Forum]] and in the [[projects:workgroups:est-methods|Population-Level Estimation Workgroup]] meetings.//
  
===== General principles =====

  * **Transparency**: others should be able to reproduce your study in every detail using the information you provide.

  * **Prespecify** what you're going to estimate and how: this will avoid hidden multiple testing (fishing expeditions, p-value hacking). Run your analysis only once.

  * **Validation of your analysis**: you should have evidence that your analysis does what you say it does (showing that the statistics produced have nominal operating characteristics (e.g. p-value calibration), showing that specific important assumptions are met (e.g. covariate balance), using unit tests to validate pieces of code, etc.)
  
===== Best practices (generic) =====

  * **Write a full protocol**, and make it public prior to running the study. This should include:
    * The research question and the hypotheses to be tested
    * Which method(s), data, and cohort definitions will be used
    * Which analyses are primary and which are sensitivity analyses
    * Quality control
    * Amendments and updates
  
  * **Validate** all code used to produce estimates. The purpose of validation is to ensure the code is doing what we require it to do. Possible options are:
    * Unit testing
    * Simulation
    * Double coding
    * Code review
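
For example, the unit-testing option can be as small as checking an analysis helper against a hand-computed value before it is used on real data. The sketch below is a hypothetical illustration in Python (OHDSI's own analysis tooling is R-based); the helper and test names are invented for the example.

<code python>
import math

def standardized_mean_difference(x_treated, x_comparator):
    """Standardized difference of means for a single covariate between two groups."""
    mean_t = sum(x_treated) / len(x_treated)
    mean_c = sum(x_comparator) / len(x_comparator)
    var_t = sum((x - mean_t) ** 2 for x in x_treated) / (len(x_treated) - 1)
    var_c = sum((x - mean_c) ** 2 for x in x_comparator) / (len(x_comparator) - 1)
    return (mean_t - mean_c) / math.sqrt((var_t + var_c) / 2)

def test_standardized_mean_difference():
    # Hand-computed case: means differ by 1, both sample variances are 1, so SMD = 1.0
    assert math.isclose(standardized_mean_difference([1, 2, 3], [0, 1, 2]), 1.0)
</code>

A test runner such as pytest picks up the ''test_'' function automatically; double coding and code review can target the same small helpers.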
  
  * Include **negative controls** (exposure-outcome pairs where we believe there is no effect)
  
  * Produce **calibrated p-values**
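
The negative controls above are what make calibration possible: their estimates trace out an empirical null distribution, and each new estimate is tested against that null rather than the theoretical one (OHDSI provides the EmpiricalCalibration R package for this). The Python sketch below is a deliberately simplified illustration of the idea: it ignores the standard errors of the individual estimates, and the data and names are invented for the example.

<code python>
import numpy as np
from scipy import stats

def calibrated_p_value(log_estimate, negative_control_log_estimates):
    """Very simplified empirical calibration: fit a Gaussian empirical null to the
    negative-control estimates and compute a two-sided p-value against that null."""
    null_mean = np.mean(negative_control_log_estimates)
    null_sd = np.std(negative_control_log_estimates, ddof=1)
    z = (log_estimate - null_mean) / null_sd
    return 2 * stats.norm.sf(abs(z))

# Invented example: the negative controls suggest a systematic upward bias, so an
# apparent hazard ratio of 1.5 is less convincing than its nominal p-value implies.
negative_controls = np.log([1.10, 1.20, 0.90, 1.30, 1.15, 1.05, 1.25, 0.95])
print(calibrated_p_value(np.log(1.5), negative_controls))
</code>
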
  * Make all analysis code available as **open source** so others can easily replicate your study
  
===== Best practices (new-user cohort design) =====

  * Use **propensity scores** (PS)
  
  * Build the PS model using **regularized regression** and a **large set of candidate covariates** (as implemented in the CohortMethod package)
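
The reference implementation is the CohortMethod R package; purely as a conceptual stand-in, the Python sketch below fits an L1-regularized logistic regression on a toy covariate matrix, with the regularization strength chosen by cross-validation. The data and all names are invented for the example.

<code python>
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# Toy stand-in for the real covariate data: rows are subjects, columns are the large
# set of candidate covariates (demographics, drugs, conditions, procedures, ...);
# y = 1 for the target cohort and 0 for the comparator cohort.
rng = np.random.default_rng(0)
X = rng.binomial(1, 0.1, size=(1000, 500))
y = rng.binomial(1, 0.5, size=1000)

# L1-regularized logistic regression with cross-validated regularization strength.
ps_model = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="liblinear")
ps_model.fit(X, y)
propensity_scores = ps_model.predict_proba(X)[:, 1]
</code>
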
  
  * Use either **variable-ratio matching** or **stratification** on the PS
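
Variable-ratio matching requires a dedicated matching routine; stratification is simple enough to sketch. The hypothetical Python snippet below assigns subjects to PS quintiles; the outcome model is then conditioned on the resulting strata.

<code python>
import numpy as np

def stratify_by_ps(propensity_scores, n_strata=5):
    """Assign each subject to a PS stratum defined by quantiles (quintiles by default)."""
    cut_points = np.quantile(propensity_scores, np.linspace(0, 1, n_strata + 1))
    # Digitizing against the inner cut points yields stratum indices 0 .. n_strata-1.
    return np.digitize(propensity_scores, cut_points[1:-1])

# Toy scores; in practice these come from the fitted propensity model.
propensity_scores = np.random.default_rng(1).uniform(size=1000)
strata = stratify_by_ps(propensity_scores)
</code>
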
  * **Compute covariate balance** after matching for all covariates, and terminate the study if any covariate has a standardized difference > 0.1
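
As an illustration of this stopping rule, the hypothetical Python sketch below computes the absolute standardized difference of means for every covariate in the matched (or stratified) samples and applies the 0.1 threshold; in practice this is done over the full covariate set used for the PS model.

<code python>
import numpy as np

def max_standardized_difference(x_treated, x_comparator):
    """Largest absolute standardized difference of means across all covariates (columns)."""
    mean_t, mean_c = x_treated.mean(axis=0), x_comparator.mean(axis=0)
    var_t, var_c = x_treated.var(axis=0, ddof=1), x_comparator.var(axis=0, ddof=1)
    pooled_sd = np.sqrt((var_t + var_c) / 2)
    std_diff = np.abs(mean_t - mean_c) / np.where(pooled_sd == 0, np.inf, pooled_sd)
    return std_diff.max()

# Toy matched samples; the columns stand in for the large covariate set.
rng = np.random.default_rng(2)
x_t = rng.binomial(1, 0.1, size=(500, 200))
x_c = rng.binomial(1, 0.1, size=(500, 200))

worst = max_standardized_difference(x_t, x_c)
if worst > 0.1:
    print(f"Largest standardized difference {worst:.2f} exceeds 0.1: do not proceed.")
else:
    print(f"Largest standardized difference {worst:.2f}: balance acceptable.")
</code>
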
  
  
===== Best practices (self-controlled case series) =====
  
  * Include a **risk window just prior to start of exposure** to detect time-varying confounding (e.g. contra-indications, protopathic bias)
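
As a toy illustration of the idea (not a full SCCS analysis: there is no person-time adjustment and no conditional Poisson model here), the hypothetical Python snippet below counts outcomes in a pre-exposure window and in the exposed window; a clearly elevated pre-exposure count is a warning sign for contra-indication or protopathic bias.

<code python>
from datetime import date, timedelta

# Invented per-case data: exposure start and outcome dates for each case.
cases = [
    {"exposure_start": date(2015, 3, 1), "outcomes": [date(2015, 2, 20), date(2015, 6, 1)]},
    {"exposure_start": date(2015, 5, 10), "outcomes": [date(2015, 5, 15)]},
]

def count_events(cases, start_offset_days, end_offset_days):
    """Count outcomes in a window defined in days relative to each case's exposure start."""
    count = 0
    for case in cases:
        window_start = case["exposure_start"] + timedelta(days=start_offset_days)
        window_end = case["exposure_start"] + timedelta(days=end_offset_days)
        count += sum(window_start <= outcome <= window_end for outcome in case["outcomes"])
    return count

print("Events in pre-exposure window (days -30 to -1):", count_events(cases, -30, -1))
print("Events in exposed window (days 0 to 30):", count_events(cases, 0, 30))
</code>
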
===== Best practices ((nested) case-control) =====
  
  * **Don't** do a case-control study