
Personalized Policy Learning with Longitudinal mHealth Data

Mobile devices, such as smartphones and wearables, have become a popular platform for delivering recommendations and interacting with users. When learning the decision rule for assigning recommendations, i.e., the policy, neither a single homogeneous policy for all users nor a completely heterogeneous policy for each user is appropriate. Many attempts have been made to learn recommendation policies from observational mobile health (mHealth) data, and most focus on a homogeneous, one-size-fits-all policy. This is a fair starting point for mHealth studies, but it ignores user heterogeneity: users with similar observed behavior patterns may still differ in unobservable ways. To address this problem, we develop a personalized learning framework that models the population effect and the personalized effects simultaneously.

In the first part of this dissertation, we address the personalized policy learning problem using longitudinal mHealth application usage data. A personalized policy represents a paradigm shift away from a single policy that prescribes personalized decisions only through covariate tailoring. Specifically, we aim to develop the best policy, one per user, by estimating random effects under a generalized linear mixed model. Because the number of random effects is large, we propose a new estimation method with a penalized objective that circumvents the high-dimensional integrals required to approximate the marginal likelihood. We establish consistency and optimality of our method under endogenous application usage. We apply the method to develop personalized prompt schedules for 294 application users, with the goal of maximizing the prompt response rate given past application usage and other contextual factors. We find that the best push schedule, given the same covariates, varies among users, which calls for personalized policies. Using the estimated personalized policies would have achieved a mean prompt response rate of 23% in these users at 16 weeks or later, a marked improvement over the observed rate of 11%, whereas the literature suggests 3%-15% user engagement at 3 months after download. In a simulation study, the proposed method compares favorably to existing estimation methods, including the R function glmer.
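To make the estimation idea concrete, the display below is a minimal sketch of one plausible form of such a penalized objective, assuming a logistic link and a ridge-type penalty on the user-specific effects; the notation (y_it, x_it, z_it, beta, b_i, lambda) and the exact form of the penalty are illustrative assumptions rather than the dissertation's own formulas. Maximizing jointly over the population effect and the user-specific effects, instead of integrating the latter out, is one way to avoid the high-dimensional integrals of the marginal likelihood.

```latex
% Assumed notation (not taken from the dissertation):
%   y_{it}: user i's response to the prompt at decision time t (1 = responded)
%   x_{it}, z_{it}: contextual covariates entering the population and personalized effects
%   \beta: population effect, b_i: personalized (random) effect of user i, \lambda: tuning parameter
\ell_\lambda(\beta, b_1, \dots, b_n)
  = \sum_{i=1}^{n} \sum_{t=1}^{T_i}
      \Big[ y_{it}\,\eta_{it} - \log\!\big(1 + e^{\eta_{it}}\big) \Big]
    - \lambda \sum_{i=1}^{n} \lVert b_i \rVert_2^2,
  \qquad \eta_{it} = x_{it}^\top \beta + z_{it}^\top b_i .

% The personalized policy for user i then selects the action (e.g., prompt time) with the
% highest estimated response probability in state s:
\hat{\pi}_i(s) = \arg\max_{a \in \mathcal{A}}
  \widehat{\Pr}\big( y = 1 \mid s, a;\ \hat\beta, \hat b_i \big).
```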

In the second part of this dissertation, we tackle a practical problem in mHealth: low response rates are a major obstacle to collecting high-quality mHealth data, so a prompting system that keeps users engaged and increases the response rate is important. We aim to learn a personalized prompting time for each user in order to achieve a high response rate. We apply an extension of the personalized learning algorithm to the Intellicare data that incorporates penalties on both the population effect parameters and the personalized effect parameters when learning the personalized decision rule for sending prompts. The number of personalized policy parameters grows with the sample size, and because the Intellicare data contain a large number of users, estimating such high-dimensional parameters is computationally challenging. To address this, we employ a bagging method that first draws bootstrap subsamples and then ensembles the parameters learned from each subsample, as sketched below. The analysis of the Intellicare data shows that sending prompts at a personalized hour achieves a higher response rate than a one-size-fits-all prompting hour.
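The following is a minimal sketch of that bagging scheme, with a plain logistic regression standing in for the personalized learner. The toy data layout, the subsample size, whether users are drawn with or without replacement, and the helper names (fit_subsample, bagged_coefficients) are illustrative assumptions, not code from the dissertation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy longitudinal layout: each row is one prompt sent to one user.
n_users, n_features = 300, 5
user_id = rng.integers(0, n_users, size=5000)
X = rng.normal(size=(5000, n_features))   # contextual features at prompt time
y = rng.binomial(1, 0.2, size=5000)       # 1 = the user responded to the prompt

def fit_subsample(users):
    """Fit the stand-in learner on all observations from a subsample of users."""
    mask = np.isin(user_id, users)
    model = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
    return np.concatenate([model.coef_.ravel(), model.intercept_])

def bagged_coefficients(n_bags=20, subsample_frac=0.2):
    """Draw user subsamples, fit on each, and average (ensemble) the fitted parameters."""
    size = int(subsample_frac * n_users)
    fits = [fit_subsample(rng.choice(n_users, size=size, replace=False))
            for _ in range(n_bags)]
    return np.mean(fits, axis=0)

print(bagged_coefficients())
```

Fitting each subsample involves only that subsample's user-specific parameters, so no single fit has to handle the full high-dimensional parameter vector; the ensemble step then combines the subsample fits.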

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/d8-94k8-1490
Date: January 2019
Creators: Hu, Xinyu
Source Sets: Columbia University
Language: English
Detected Language: English
Type: Theses
