CALM
METHODS
DATASET PREPARATION
With the objective of predicting new cholera cases in any given governorate of Yemen from week to week, we took a number of steps to prepare the data. To produce models that did not simply rely on seasonal trends and could predict spikes in cholera cases, the case and death report time series were made stationary through temporal differencing. Yemen encompasses 21 governorates, or administrative divisions; while the CALM models were trained on data from all 21 governorates, data preparation was performed on each governorate separately to preserve its unique time series. As the interval between WHO cholera case/death reports was not uniform, the data were linearly interpolated into a daily time series. The Yemeni cholera outbreak is seasonal and endemic, with outbreaks spiking during the rainy season (April-August); however, outbreaks also depend on non-seasonal factors such as conflict and damage to health and sanitation infrastructure (Camacho et al., 2018a). Because the reports give cumulative totals, parsing required computing the number of new cases on each day from the difference between consecutive daily totals. These values were then normalized by the population of each governorate (e.g., new cases per 10,000 people). Finally, we calculated our four target variables: the number of new cholera cases 0-2, 2-4, 4-6, and 6-8 weeks from the present day.
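As a brief sketch, the preparation pipeline for a single governorate might look as follows (assuming a DataFrame of cumulative WHO report totals with `date` and `cases` columns; the column and function names here are ours, for illustration only):

```python
import pandas as pd

def prepare_governorate(reports: pd.DataFrame, population: int) -> pd.DataFrame:
    """Turn irregular cumulative reports into a daily, differenced,
    population-normalized series with four 2-week forecast targets."""
    # Linearly interpolate the irregular report dates onto a daily grid.
    daily = (reports.set_index("date")["cases"]
                    .resample("D")
                    .interpolate("linear"))

    # Difference the cumulative totals to get new cases per day
    # (temporal differencing for stationarity).
    new_cases = daily.diff().dropna()

    # Normalize by population: new cases per 10,000 people.
    per_10k = new_cases / population * 10_000

    out = per_10k.to_frame("new_cases_per_10k")
    # Targets: total new cases 0-2, 2-4, 4-6, and 6-8 weeks ahead.
    for start, end in [(0, 14), (14, 28), (28, 42), (42, 56)]:
        out[f"target_{start}d_{end}d"] = (
            per_10k.rolling(end - start).sum().shift(-end))
    return out
```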
Our dataset was split into three portions: training, cross-validation, and a hold-out test set. The hold-out set was left untouched until the completion of our methods to provide an accurate real-world simulation of our models' performance. Our base training set was defined from July 1 to August 15. While WHO reports extended back as far as May 22, we chose to start on July 1 in order to have enough prior data for feature calculation. Our cross-validation dataset was defined from August 15 to November 10. Finally, our hold-out set started on November 11 and extended to a final date in January/February, which varied for each target variable depending on its range: a 6-8 week forecast implies a larger gap between the current and forecast dates than a 2-4 week forecast, so the 6-8 week hold-out set ends earlier than the 2-4 week one. It may seem that the cross-validation set significantly outweighs the training set, but this was mitigated by the use of a rolling window forecast, a gold standard for cross-validation in time series forecasting. Rolling window cross-validation is most easily understood with an example. Given a dataset spanning four weeks, a rolling window forecast would train on the first week and predict on the second, then train on the first two weeks and predict on the third, and finally train on the first three weeks and predict on the fourth. In this example, the first week is the base training set (it was never predicted on and was included in the training set of each fold), the second and third weeks are the cross-validation set (they varied between prediction and training sets), and the fourth week is the hold-out set (it was never trained on). Our five cross-validation folds were defined as follows: August 16 to August 31, August 31 to September 15, September 15 to September 30, September 30 to October 15, and finally October 15 to October 30 (the final fold also included data from October 30 to November 10 in its prediction set, though this does not cross into the hold-out set). The cross-validation sets were used to select features and find optimal hyperparameters for our model, and the hold-out set was used to simulate the real-world performance of our model.
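The fold boundaries above can be expressed programmatically; the sketch below is an illustrative reconstruction (the year, 2017, is inferred from the outbreak timeline and is not stated in this section):

```python
import pandas as pd

# Fold boundaries as described above (year assumed to be 2017).
FOLD_BOUNDS = pd.to_datetime(
    ["2017-08-16", "2017-08-31", "2017-09-15",
     "2017-09-30", "2017-10-15", "2017-11-10"]
)

def rolling_window_folds(df: pd.DataFrame):
    """Yield (train, validate) pairs: each fold trains on all history up
    to its start date and predicts on the window that follows."""
    for start, end in zip(FOLD_BOUNDS[:-1], FOLD_BOUNDS[1:]):
        train = df[df.index <= start]
        validate = df[(df.index > start) & (df.index <= end)]
        yield train, validate
```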
Feature Engineering and Tuning
Feature engineering is the crux of applied machine learning, and so we went through an exhaustive feature extraction and selection process to arrive at our final features. First, we extracted ~45,000 potentially relevant features using the tsFresh package, which calculates an expansive array of time series features on our data (Christ et al., 2018). The objective of calculating this many features was to capture ideal representations of our data: while the majority of these features would not be used in the final model, covering such an expansive set helped ensure that the best features would be found. We also calculated features over a series of overlapping time frames to provide varying frames of reference and lags: 8 weeks prior, 6 weeks prior, 4 weeks prior, 2 weeks prior, and 1 week prior. Features describing geographically neighboring governorates (computed by taking their mean) were also calculated. While having more data is usually beneficial, in this case the number of features far outnumbered the number of training examples, so a demanding feature selection process was required. Using tsFresh's scalable hypothesis tests with a false discovery rate of 0.001, we identified the features statistically relevant to each time-range prediction, yielding four sets of ~15,000 features, one per time frame. Next, we removed collinear features, those more than 97% correlated with one another, as they would be redundant to our model. This left sets of ~10,000 features to narrow further. We then trained and tuned an extreme gradient boosting model, XGBoost, to rank the features in order of importance for each time-range prediction. Using this ranking, we recursively added features, keeping each one only if it improved our cross-validation loss (the root mean square error across all five cross-validation folds). This allowed us to arrive at the best 30-50 features for each time range. All in all, we removed ~99.9% of our original features.
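A condensed sketch of this pipeline is shown below, assuming a long-format DataFrame `long_df` of the time series and an aligned target Series `y` for one time range; `cv_rmse` is a simplified stand-in for the five-fold rolling-window evaluation, not the original code:

```python
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import TimeSeriesSplit
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute

# 1. Exhaustive extraction of candidate time series features.
X = impute(extract_features(long_df, column_id="governorate", column_sort="date"))

# 2. Keep only features statistically relevant to the target (FDR = 0.001).
X = select_features(X, y, fdr_level=0.001)

# 3. Drop one of each pair of features correlated above 0.97.
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
X = X.drop(columns=[c for c in upper.columns if (upper[c] > 0.97).any()])

# 4. Rank the remaining features by XGBoost importance.
ranker = xgb.XGBRegressor(n_estimators=200).fit(X, y)
ranked = X.columns[np.argsort(ranker.feature_importances_)[::-1]]

def cv_rmse(X_sub, y):
    """Mean RMSE over expanding-window folds (simplified stand-in)."""
    errs = []
    for tr, va in TimeSeriesSplit(n_splits=5).split(X_sub):
        model = xgb.XGBRegressor(n_estimators=200).fit(X_sub.iloc[tr], y.iloc[tr])
        preds = model.predict(X_sub.iloc[va])
        errs.append(np.sqrt(mean_squared_error(y.iloc[va], preds)))
    return float(np.mean(errs))

# 5. Recursively add features, keeping each only if it lowers the CV loss.
selected, best = [], np.inf
for feat in ranked:
    score = cv_rmse(X[selected + [feat]], y)
    if score < best:
        selected.append(feat)
        best = score
```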
The 6-by-6 grid located underneath the first row can be loaded with experimental samples, and a percentage value can be determined for each sample on a scale from the negative control to the positive control. The circle detection process loops, increasing the maximum detection radius until it reaches 120 pixels. If 35 to 40 circles are detected in total, the loop stops. If fewer circles are detected, the loop restarts and runs through the maximum radius range again, this time accepting anywhere from 25 to 40 properly detected circles. If fewer than 25 circles are detected, an error is caught and the user is asked to supply another picture. Zooming in or out can make the circle detection process easier and more efficient for the system. The following formula is used to calculate the relative percentage values:
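With S the measured value of a sample well and N and P the values of the negative and positive controls, the scale described above corresponds to linear interpolation between the controls:

relative percentage = (S − N) / (P − N) × 100

The detection loop itself is sketched below using OpenCV's Hough circle transform; the app's actual library and parameter values (other than the 120-pixel radius cap and the circle-count thresholds) are not given in this text, so everything else here is illustrative:

```python
import cv2

def detect_wells(gray):
    """Sweep the maximum Hough radius upward until enough circles appear."""
    for lo in (35, 25):                           # strict pass, then relaxed
        for max_radius in range(40, 121, 10):     # cap at 120 px as described
            circles = cv2.HoughCircles(
                gray, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                param1=50, param2=30, minRadius=10, maxRadius=max_radius)
            n = 0 if circles is None else circles.shape[1]
            if lo <= n <= 40:
                return circles[0]
    # Fewer than 25 circles even on the relaxed pass: request a new photo.
    raise ValueError("Fewer than 25 circles detected; please retake the picture")
```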
The results are then displayed based on the row in which the circles fall on the base of the Chrome-Q hardware. The app determines each circle's row by comparing y-coordinates: if the y-values are close to one another, the circles are classified as being in the same row. The relative values are then transferred to another page within the app, where the user can enter information that could contribute to our machine learning model, CALM. The application uses the latitude, longitude, and timestamp values obtained from the phone's GPS to determine where and when the test was run. When the user submits the data, the results are sent to a MySQL database hosted on the Relational Database Service (RDS), part of the Amazon Web Services (AWS) platform.
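The row classification by y-coordinate can be sketched as follows (the tolerance value is our assumption; circle tuples are (x, y, r) as returned by the detection step):

```python
def group_into_rows(circles, tol=25):
    """Cluster detected circles into rows by similar y-coordinates."""
    rows = []
    for x, y, r in sorted(circles, key=lambda c: c[1]):  # top to bottom
        if rows and abs(y - rows[-1][-1][1]) <= tol:     # close in y: same row
            rows[-1].append((x, y, r))
        else:
            rows.append([(x, y, r)])
    return [sorted(row) for row in rows]                 # left to right
```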
CALM
There are two main components of the CALM platform: the SMS component and the machine learning component. The entirety of the platform is written in Python 3.6+, and several libraries are utilized, including pandas, numpy, scikit-learn, beautifulsoup, xgboost, and flask. To make predictions, the machine learning component of CALM relies on the XGBoost models described above, trained on the selected features for each forecast range.
To distribute SMS notifications, Michael Koohang graciously allowed Lambert iGEM to modify his RatWatch project (developed at Georgia Tech) to create CALM's SMS component. The code for SMS distribution runs on a server built with the Flask microframework, which handles logic and computation. The Flask server interacts with the Python API of Twilio, an SMS-survey provider, to send text messages to a specified population. The population's survey results are aggregated and stored on the Flask server using pandas.
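A minimal sketch of this component using Twilio's Python client is shown below; the credentials, phone numbers, and route name are placeholders rather than the project's actual values:

```python
from flask import Flask, request
from twilio.rest import Client
import pandas as pd

app = Flask(__name__)
client = Client("ACCOUNT_SID", "AUTH_TOKEN")            # placeholder credentials
responses = pd.DataFrame(columns=["phone", "answer"])   # aggregated results

def send_survey(numbers, question):
    """Send the survey question to every number in the target population."""
    for number in numbers:
        client.messages.create(to=number, from_="+10000000000", body=question)

@app.route("/sms", methods=["POST"])
def receive_reply():
    """Twilio webhook: store each incoming survey reply with pandas."""
    global responses
    row = {"phone": request.form["From"], "answer": request.form["Body"]}
    responses = pd.concat([responses, pd.DataFrame([row])], ignore_index=True)
    return "", 204
```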
We hope to see CALM in use throughout the cholera field within the next few years as medical organizations begin using it to prevent outbreaks and better distribute medical supplies. As cholera already has a cure, a machine-learning-based approach to predicting and preventing cholera, especially one that is open-source and free to use, will drastically reduce the time, energy, and money required to treat an infected population. Finally, we believe the CALM project will not only help treat millions of people affected by cholera, but will also begin efforts to use CALM's foundation to predict other diseases such as malaria and parasitic infections.
CALM began as a subcomponent of Lambert's 2018 project and developed rapidly throughout the beginning of the 2018 season. In late May, Lambert participated in the Day One Challenge, an Atlanta-based AI competition, and won. Through further collaboration and outreach with the Day One organization, Lambert has been able to receive feedback and advice from professionals in a variety of fields, such as epidemiology, computer science, machine learning, and business. As CALM develops further, we hope not only to see other teams adopt the platform to address other issues, but also for healthcare organizations across the world to utilize CALM and adapt it to other diseases.