If you want to implement an ML prediction model in healthcare, you should make sure that your algorithm also performs well in a domain different from the one used during development. In this talk, I will share the lessons we learned at Pacmed while externally validating Pacmed Critical, a model that supports doctors in deciding the best moment to discharge patients from the Intensive Care Unit.
Pacmed Critical predicts the probability that a patient will die or be readmitted to the Intensive Care Unit if they are discharged at a given moment. This information helps doctors identify the best moment to discharge their patients. The model was developed using data from the Amsterdam UMC, but it needs to be validated on an external dataset from a different hospital before it can be implemented in practice.
In this talk you will discover the most common reasons why external validation can go wrong, namely concept shift and covariate shift. You will also learn how domain adaptation can address these problems by 1) giving more weight to the examples that are most similar across the two domains (importance weighting), 2) regularising the external model towards the original one (sequential models), 3) including the prediction of the original model as a feature of the external one (another flavour of sequential models), and 4) defining hierarchies of domain-specific model coefficients (hierarchical models). Which of these performs best depends on whether you can train a model on both domains simultaneously or only on each separately. By the end of the talk, you will know which technique is best suited for your problem.
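To make the first technique concrete, here is a minimal sketch of importance weighting under covariate shift, using a common discriminative trick: train a classifier to distinguish source from target examples, and convert its probabilities into per-example weights. All data and model choices below are illustrative assumptions, not the actual Pacmed Critical setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: same labelling rule in both domains (no concept shift),
# but the target feature distribution is shifted (covariate shift).
X_src = rng.normal(0.0, 1.0, size=(500, 2))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.normal(1.0, 1.0, size=(500, 2))  # shifted mean in the target domain

# 1) Train a domain classifier to tell source (0) from target (1) examples.
X_dom = np.vstack([X_src, X_tgt])
y_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
dom_clf = LogisticRegression().fit(X_dom, y_dom)

# 2) Importance weight w(x) = p_target(x) / p_source(x),
#    approximated by P(target | x) / P(source | x) from the domain classifier.
p_tgt = dom_clf.predict_proba(X_src)[:, 1]
weights = p_tgt / (1.0 - p_tgt)

# 3) Fit the task model on source data, reweighted toward the target domain.
model = LogisticRegression().fit(X_src, y_src, sample_weight=weights)
```

Source examples that look target-like receive large weights and dominate the fit, so the model is tuned for the domain where it will actually be used, without needing target labels.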
The presentation will be at an intermediate level: I will briefly introduce the theory behind the domain-adaptation techniques discussed and then focus on their applicability and impact in practice. The presentation is aimed at data scientists who have experience with the different phases of model development.