Interpretable sequence learning for COVID-19 forecasting


This document proposes a novel approach that integrates machine learning into compartmental disease modeling to predict the progression of COVID-19. Our model is explainable by design: it explicitly shows how the different compartments evolve, and it uses interpretable encoders to incorporate covariates and improve performance. Explainability helps ensure that the model's forecasts are credible to epidemiologists and instills confidence in end users such as policy makers and healthcare institutions. Our model can be applied at different geographic resolutions, and we demonstrate it for states and counties in the United States. We show that our model provides more accurate forecasts than state-of-the-art alternatives and that it yields qualitatively meaningful explanatory insights.
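To make the compartmental framing concrete, the sketch below shows a minimal discrete-time SEIR model, the classic family of compartmental models this approach builds on. The transition rates (`beta`, `sigma`, `gamma`) are fixed constants here; in the approach described above they would instead be produced by learned, interpretable encoders of covariates. All names and values are illustrative assumptions, not the authors' implementation.

```python
def seir_step(s, e, i, r, beta, sigma, gamma):
    """Advance the S/E/I/R compartments by one time step (e.g., one day)."""
    n = s + e + i + r                  # total population (conserved)
    new_exposed = beta * s * i / n     # susceptible -> exposed
    new_infected = sigma * e           # exposed -> infected
    new_recovered = gamma * i          # infected -> recovered
    return (s - new_exposed,
            e + new_exposed - new_infected,
            i + new_infected - new_recovered,
            r + new_recovered)

def simulate(days, s=990.0, e=0.0, i=10.0, r=0.0,
             beta=0.3, sigma=0.2, gamma=0.1):
    """Roll the SEIR dynamics forward and record each day's state."""
    state = (s, e, i, r)
    history = [state]
    for _ in range(days):
        state = seir_step(*state, beta, sigma, gamma)
        history.append(state)
    return history

history = simulate(60)
```

Because each forecast is just the trajectory of these named compartments, an epidemiologist can inspect why case counts rise or fall, which is the explainability-by-design property the abstract refers to.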

Overview

This document outlines the following:

  • Review the proposed compartmental model for COVID-19.
  • Understand the design choices that let the model incorporate the covariates needed to accurately predict COVID-19.
  • Discuss the learning mechanisms developed to improve generalization while learning from limited training data.
  • Review several experiments to compare our model to other publicly available COVID-19 models.
  • Understand the model's potential limitations and failure cases, as guidance for anyone using these techniques to build forecasting systems that can affect public health decisions.

To read the full white paper, download the PDF:

Download the PDF