====== OHDSI Best Practices for Patient Level Prediction ======

:!: //This document is under development. Changes can be proposed and discussed via the [[projects:workgroups:patient-level_prediction|Patient-Level Prediction Workgroup]] meetings.//

===== General principles =====

  * **Transparency**: others should be able to reproduce your study in every detail from the information you provide. Make sure all analysis code is available as open source.
  * **Prespecify** what you are going to predict and how. This avoids fishing expeditions and p-value hacking.
  * **Code validation**: add unit tests, code review, or double-coding steps to validate the developed code base. We also recommend testing the code on benchmark datasets; see the sketch below.
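A minimal illustrative sketch of such a check, written in Python (not necessarily the workgroup's tooling), assuming a hypothetical project helper ''compute_auc'' and a tiny synthetic benchmark with a known answer:

<code python>
import unittest
from sklearn.metrics import roc_auc_score

def compute_auc(y_true, y_prob):
    """Hypothetical project helper wrapping the AUC calculation under test."""
    return roc_auc_score(y_true, y_prob)

class TestComputeAuc(unittest.TestCase):
    def test_perfect_ranking_gives_auc_of_one(self):
        self.assertAlmostEqual(compute_auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]), 1.0)

    def test_reversed_ranking_gives_auc_of_zero(self):
        self.assertAlmostEqual(compute_auc([0, 0, 1, 1], [0.9, 0.8, 0.2, 0.1]), 0.0)

if __name__ == "__main__":
    unittest.main()
</code>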
===== Best practices =====

**Data characterisation and cleaning**: Before modelling, it is important to characterise the cohorts, for example by examining the prevalence of key covariates. Tools to facilitate this are being developed within the community. A data-cleaning step is also recommended, e.g. removing outliers in lab values.
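For illustration only (this is not the community tooling, and the column names and cut-offs are hypothetical), such a characterisation and cleaning step could look like:

<code python>
import pandas as pd

# Hypothetical cohort table: one row per subject, binary covariates plus a lab value.
cohort = pd.read_csv("cohort_covariates.csv")

# Characterisation: prevalence of selected binary covariates in the cohort.
print(cohort[["age_gt_65", "diabetes", "prior_mi"]].mean())

# Cleaning: keep creatinine values inside a pre-specified plausible range,
# then drop values more than 3 standard deviations from the remaining mean.
cohort = cohort[cohort["creatinine"].between(0.1, 20.0)]
z_score = (cohort["creatinine"] - cohort["creatinine"].mean()) / cohort["creatinine"].std()
cohort = cohort[z_score.abs() <= 3]
</code>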
**Dealing with missing values**: A best practice still needs to be established.

**Feature construction and selection**: Both feature construction and selection should be completely transparent and follow a standardised approach, so that the modelling can be repeated and the model can be applied to unseen data; see the sketch below.
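One way to keep these steps reproducible is to bundle construction, selection, and the model into a single fully specified pipeline, sketched here with scikit-learn on synthetic data (the particular steps and parameters are illustrative, not a recommendation):

<code python>
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a covariate matrix with a binary outcome.
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)
X_train, X_unseen, y_train, y_unseen = train_test_split(X, y, test_size=0.2, random_state=0)

# Feature construction and selection are bundled with the model, so the exact
# same, fully specified transformation is re-applied when scoring unseen data.
model = Pipeline([
    ("scale", StandardScaler()),                      # feature construction
    ("select", SelectKBest(f_classif, k=20)),         # feature selection, learned on training data
    ("classify", LogisticRegression(max_iter=1000)),  # prediction model
])
model.fit(X_train, y_train)
predicted_risk = model.predict_proba(X_unseen)[:, 1]  # identical steps applied to unseen data
</code>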
**Inclusion and exclusion criteria** should be made explicit. It is recommended to perform sensitivity analyses on the choices made. Visualisation tools could help here, and this will be explored further in the workgroup.

**Model development** is done using a split-sample approach. The percentage used for training can depend on the number of cases, but as a rule of thumb an 80/20 split is recommended. Hyper-parameter tuning should only be done on the training set; see the sketch below.
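A minimal sketch of this split-sample set-up on synthetic data (the learner and hyper-parameter grid are placeholders only):

<code python>
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression

# Synthetic cohort with a rare (10%) outcome.
X, y = make_classification(n_samples=2000, n_features=30, weights=[0.9], random_state=0)

# 80/20 split; the hold-out set is not touched until internal validation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=0)

# Hyper-parameter tuning by cross-validation within the training set only.
search = GridSearchCV(
    LogisticRegression(penalty="l1", solver="liblinear"),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    scoring="roc_auc",
    cv=5)
search.fit(X_train, y_train)
final_model = search.best_estimator_   # evaluated once on the hold-out set below
</code>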
**Internal validation** is done only once, on the hold-out set. The following performance measures should be calculated (a sketch follows below):
  * Overall performance: Brier score (unscaled/scaled)
  * Discrimination: area under the ROC curve (AUC)
  * Calibration: intercept and gradient of the line fit to the observed vs predicted probabilities
We recommend box plots of the predicted probabilities for people with and without the outcome, the ROC plot, and a scatter plot of the observed vs predicted probabilities with the fitted line and the line x = y added.
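A sketch of the numeric measures on the hold-out set, continuing from the split-sample sketch above (''final_model'', ''X_test'', ''y_test''). The scaled Brier score is computed here as 1 - Brier/Brier_null, with Brier_null the Brier score of predicting the outcome prevalence for everyone; this is one common convention and other definitions exist:

<code python>
import numpy as np
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.calibration import calibration_curve

y_prob = final_model.predict_proba(X_test)[:, 1]   # computed once, on the hold-out set

# Overall performance: Brier score, unscaled and scaled.
brier = brier_score_loss(y_test, y_prob)
prevalence = y_test.mean()
brier_scaled = 1 - brier / (prevalence * (1 - prevalence))

# Discrimination: area under the ROC curve.
auc = roc_auc_score(y_test, y_prob)

# Calibration: intercept and gradient of the line fit to observed vs
# predicted probabilities, here per decile of predicted risk.
observed, predicted = calibration_curve(y_test, y_prob, n_bins=10)
gradient, intercept = np.polyfit(predicted, observed, deg=1)

print(f"Brier: {brier:.3f} (scaled {brier_scaled:.3f}), AUC: {auc:.3f}, "
      f"calibration intercept: {intercept:.3f}, gradient: {gradient:.3f}")
</code>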