OHDSI Best Practices for Estimating Population-Level Effects

:!: This document is under development. Changes can be proposed and discussed via the OHDSI Forum and in the Population-Level Estimation Workgroup meetings.

General principles

  • Transparency: others should be able to reproduce your study in every detail using the information you provide.
  • Prespecify what you're going to estimate and how: this will avoid hidden multiple testing (fishing expeditions, p-value hacking). Run your analysis only once.
  • Validation of your analysis: you should have evidence that your analysis does what you say it does, for example by showing that the statistics it produces have nominal operating characteristics (e.g. p-value calibration), by showing that important assumptions are met (e.g. covariate balance; see the sketch after this list), and by using unit tests to validate pieces of code.

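As an illustration of the covariate balance check mentioned under validation, here is a minimal sketch in Python. It assumes the target and comparator cohorts are available as pandas data frames with one column per covariate; the column layout and the 0.1 threshold are assumptions for illustration, not a prescribed OHDSI standard.

<code python>
# Minimal sketch of a covariate balance check: compute the standardized
# difference of means for each covariate between the target and comparator
# cohorts, and flag covariates whose absolute value exceeds 0.1 (a commonly
# used rule of thumb).
import numpy as np
import pandas as pd

def standardized_mean_differences(target: pd.DataFrame,
                                  comparator: pd.DataFrame) -> pd.Series:
    """Return the standardized difference of means for each covariate column."""
    pooled_sd = np.sqrt((target.var() + comparator.var()) / 2)
    return (target.mean() - comparator.mean()) / pooled_sd

# Hypothetical usage after propensity score matching:
# smd = standardized_mean_differences(target_covariates, comparator_covariates)
# unbalanced = smd[smd.abs() > 0.1]  # covariates still out of balance
</code>
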
Best practices (generic)

  • Make all analysis code available as open source
  • Validate all code used to produce estimates. The purpose of validation is to ensure the code is doing what we require it to do. Possible options are:
    • Unit testing
    • Simulation
    • Double coding
    • Code review
  • Include negative controls (exposure-outcome pairs where we believe there is no causal effect)
  • Produce calibrated p-values using the negative control estimates (see the sketch after this list)

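To illustrate how negative controls and calibrated p-values fit together, here is a minimal sketch in Python. It assumes the effect estimates (log relative risks) for the negative controls have already been computed, and it ignores the sampling error of the individual estimates, which a full implementation (such as the OHDSI EmpiricalCalibration R package) accounts for.

<code python>
# Minimal sketch of empirical p-value calibration: fit a normal "empirical
# null" distribution to the log effect estimates of the negative controls,
# then compute how extreme the estimate for the outcome of interest is under
# that empirical null instead of under the theoretical null.
import numpy as np
from scipy import stats

def fit_empirical_null(negative_control_log_rrs):
    """Estimate mean and standard deviation of the empirical null."""
    estimates = np.asarray(negative_control_log_rrs, dtype=float)
    return estimates.mean(), estimates.std(ddof=1)

def calibrated_p_value(log_rr, null_mean, null_sd):
    """Two-sided p-value for log_rr under the fitted empirical null."""
    z = (log_rr - null_mean) / null_sd
    return 2 * stats.norm.sf(abs(z))

# Hypothetical usage with log relative risks for 20 negative controls:
# null_mean, null_sd = fit_empirical_null(np.log(negative_control_rrs))
# p_cal = calibrated_p_value(np.log(observed_rr), null_mean, null_sd)
</code>
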
Best practices (new-user cohort design)

Best practices (self-controlled case series)

  • Include a risk window just prior to the start of exposure to detect time-varying confounding (e.g. contraindications, protopathic bias); see the sketch below
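
To make this concrete, here is a minimal sketch in Python of constructing a pre-exposure window alongside the exposed window and counting outcomes in each; an excess of outcomes just before exposure start suggests the outcome affects the decision to start (or withhold) the exposure. The dates and the 30-day pre-exposure length are illustrative assumptions, not OHDSI defaults.

<code python>
# Minimal sketch of risk-window construction for a self-controlled case
# series, including a window just prior to exposure to detect time-varying
# confounding (e.g. contraindications, protopathic bias).
from datetime import date, timedelta

def build_windows(exposure_start, exposure_end, pre_exposure_days=30):
    """Return labelled (name, start, end) windows for one exposure era."""
    pre_start = exposure_start - timedelta(days=pre_exposure_days)
    return [
        ("pre-exposure", pre_start, exposure_start - timedelta(days=1)),
        ("exposed", exposure_start, exposure_end),
    ]

def count_outcomes(windows, outcome_dates):
    """Count outcomes falling inside each window."""
    return {name: sum(start <= d <= end for d in outcome_dates)
            for name, start, end in windows}

# Hypothetical usage:
# windows = build_windows(date(2015, 3, 1), date(2015, 6, 1))
# count_outcomes(windows, [date(2015, 2, 20), date(2015, 4, 10)])
</code>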