projects:workgroups:patient-level_prediction:best-practice [2016/05/04 08:22]
jreps [Best practices]
**Internal validation** is done only once on the holdout set. The following performance measures should be calculated:
  * Overall performance: Brier score (unscaled/scaled)
  * Discrimination: Area under the ROC curve (AUC)
  * Calibration: Intercept and gradient of the line fit on the observed vs predicted probabilities
We recommend box plots of the predicted probabilities for the outcome vs non-outcome people, the ROC plot, and a scatter plot of the observed vs predicted probabilities with the line fit to that data and the line x=y added.
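The measures above can be sketched as follows. This is a minimal illustration using only NumPy, not the workgroup's own tooling: the variable names (`holdout_y`, `holdout_p`) are hypothetical, and the calibration line is fit directly to outcomes vs predicted probabilities as a simplification (in practice one would typically fit to observed proportions within risk groups).

```python
import numpy as np

def brier_score(y, p):
    """Unscaled Brier score: mean squared difference between
    predicted probabilities and observed 0/1 outcomes."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return np.mean((p - y) ** 2)

def scaled_brier_score(y, p):
    """Scaled Brier score: 1 - Brier / Brier_max, where Brier_max is the
    score of a non-informative model predicting the outcome prevalence."""
    y = np.asarray(y, float)
    prevalence = y.mean()
    brier_max = prevalence * (1 - prevalence)
    return 1 - brier_score(y, p) / brier_max

def auc(y, p):
    """AUC via the rank-sum (Mann-Whitney U) statistic.
    Ties between outcome and non-outcome predictions are
    ignored here for brevity."""
    y, p = np.asarray(y), np.asarray(p, float)
    ranks = p.argsort().argsort() + 1  # ranks starting at 1
    n_pos = int(y.sum())
    n_neg = len(y) - n_pos
    rank_sum = ranks[y == 1].sum()
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def calibration_line(y, p):
    """Intercept and gradient of the least-squares line fit to
    observed outcomes vs predicted probabilities (simplified)."""
    gradient, intercept = np.polyfit(np.asarray(p, float),
                                     np.asarray(y, float), 1)
    return intercept, gradient

# Hypothetical holdout-set labels and predicted probabilities:
holdout_y = np.array([0, 0, 0, 1, 0, 1, 1, 0, 1, 1])
holdout_p = np.array([0.1, 0.2, 0.3, 0.4, 0.2, 0.7, 0.8, 0.3, 0.9, 0.6])
print("Brier (unscaled):", brier_score(holdout_y, holdout_p))
print("Brier (scaled):", scaled_brier_score(holdout_y, holdout_p))
print("AUC:", auc(holdout_y, holdout_p))
print("Calibration (intercept, gradient):", calibration_line(holdout_y, holdout_p))
```

The same predicted probabilities, split by outcome status, are also the input for the recommended box plots and ROC plot.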
projects/workgroups/patient-level_prediction/best-practice.txt · Last modified: 2016/05/04 15:43 by prijnbeek