Microrandomized Trials: Just-In-Time mHealth Designs Get Granular

By Wendy Anson, Ph.D.

“Pull” mHealth portable and wearable devices let us choose the health interventions we want and deploy them from wherever we happen to be. Now, with the advent of increasingly powerful sensors, “push” interventions can adaptively respond to our actions or states and push out targeted support at the optimal time and place.

These just-in-time adaptive interventions (JITAIs) use advanced technology—such as sophisticated sensing devices and home-based ecological momentary assessment (EMA)—to devise their decision rules, which specify exactly where and when particular intervention components will be delivered so that they are most likely to have their intended effects.

Researchers claim #RandomizedClinicalTrials cannot provide a granular enough view of human behavior #mHealth

Although JITAIs are becoming a popular topic of research, Klasnja and colleagues, in their recent article “Microrandomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions,” claim that commonly used experimental designs cannot characterize human behavior at a granular enough level to inform JITAI construction. Instead, the authors present the “microrandomized trial” as a design that enables causal modeling of the proximal effects of randomized intervention components along with assessment of the time-varying moderation of those effects.

Klasnja’s article appears in an eHealth/mHealth OBSSR-funded special issue of Health Psychology, the journal of the Society for Health Psychology of the American Psychological Association (APA).

Microrandomized Trials Are Better Suited Than RCTs to Evaluate Behavioral Interventions

Behavioral interventions are traditionally developed using the randomized controlled trial (RCT). This method assesses whether the intervention “package” as a whole had an effect on the behavior under consideration. However, according to Klasnja et al., RCTs cannot determine which components of the behavioral intervention are effective, when they are effective, and what psychosocial or contextual factors may have influenced their efficacy.

While it is true that random assignment helps ferret out treatment-effect moderation with respect to baseline characteristics, such as gender, “baseline randomization is not effective against causal confounding.” For this reason, Klasnja et al. maintain that standard RCT data are ill-suited to determine when a particular intervention piece should be applied and which variables at the time of delivery might enhance the intervention.

Standard RCT data can’t say when an intervention should be applied or which variables should be used

Klasnja describes microrandomization as a process whereby an intervention option is randomly assigned at each relevant decision point, which is a moment in time when a particular intervention might be effective based on the participant’s past behavior, his or her current context, and behavioral theory.

Further, the microrandomized trial design allows multiple components of an intervention to be randomized concurrently. With intervention options randomly assigned at each decision point, a study lasting weeks or months could randomize each participant hundreds or thousands of times, depending on how frequently the intervention pieces of interest are delivered.
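
To make the scale of this randomization concrete, the sketch below simulates microrandomization for a single participant in a hypothetical six-week study with five decision points per day. The study length, number of decision points, and randomization probability are illustrative assumptions, not values from the article.

import random

# Hypothetical parameters, chosen only for illustration.
DAYS = 42                    # six-week study
DECISION_POINTS_PER_DAY = 5  # e.g., five prompting windows per day
P_DELIVER = 0.4              # chance the component is delivered at a decision point

def run_microrandomization(seed=0):
    """Randomly decide, at every decision point, whether to deliver the component."""
    rng = random.Random(seed)
    schedule = []
    for day in range(DAYS):
        for slot in range(DECISION_POINTS_PER_DAY):
            deliver = rng.random() < P_DELIVER
            schedule.append({"day": day, "slot": slot, "deliver": deliver})
    return schedule

schedule = run_microrandomization()
print(len(schedule), "randomizations for one participant")        # 210 decision points
print(sum(s["deliver"] for s in schedule), "times the component was delivered")

In an actual microrandomized trial, the chance of delivery at a given moment can also depend on whether the participant is available for treatment (for example, not driving), a refinement this simplified sketch omits.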

In another article published in the same special issue, Nahum-Shani, Hekler, and Spruijt-Metz define the distal outcome as the ultimate goal of the JITAI and proximal outcomes as the short-term goals that the intervention is intended to achieve.

Often, proximal outcomes mediate the distal outcome. Lagged effects can also come into play, as when the user does not follow a suggestion at the time he or she receives it, yet remembers the suggestion the next day and acts on it.

For example, in a relapse-prevention intervention for obesity, an intervention piece might target social support. The proximal outcome of this component might be the number of daily interactions with friends and family who support healthy eating. A microrandomized trial, Klasnja explains, might focus on examining whether the social support intervention piece is increasing these interactions, while also considering any mediating processes by determining if increased social interactions are later associated with reduced probability of relapse.

With this more granular level of analysis, the microrandomized trial method can help answer the questions: What are the proximal and lagged effects of an intervention piece? How do these proximal and lagged effects change over time? Which factors (for example, time-invariant or time-varying) moderate an intervention piece’s proximal or lagged effects?
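
As a rough illustration of the first of these questions, the sketch below estimates a proximal effect as the difference in mean proximal outcome (here, a made-up 30-minute step count) between decision points where a component was and was not delivered, computed by study week to show how the effect might change over time. The data and function names are hypothetical, and formal microrandomized trial analyses use regression methods that account for availability and time-varying covariates rather than this simple difference in means.

from statistics import mean

def proximal_effect_by_week(records):
    """Difference in mean proximal outcome (delivered minus not delivered), per week.

    records: list of dicts with keys "week", "deliver" (bool), and "outcome" (float).
    """
    effects = {}
    for week in sorted({r["week"] for r in records}):
        treated = [r["outcome"] for r in records if r["week"] == week and r["deliver"]]
        control = [r["outcome"] for r in records if r["week"] == week and not r["deliver"]]
        if treated and control:
            effects[week] = mean(treated) - mean(control)
    return effects

# Example with made-up decision-point records:
demo = [
    {"week": 1, "deliver": True,  "outcome": 420.0},
    {"week": 1, "deliver": False, "outcome": 310.0},
    {"week": 2, "deliver": True,  "outcome": 380.0},
    {"week": 2, "deliver": False, "outcome": 350.0},
]
print(proximal_effect_by_week(demo))   # {1: 110.0, 2: 30.0}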

In a microrandomized trial, an intervention option is randomly assigned at each point where it may be effective

Klasnja cautions that microrandomized trials have limitations, including the fact that they apply only to push interventions; such trials would not be successful at assessing interventions that users can access at will. Also, because microrandomized trial data focus on proximal outcomes, the method is best used when proximal outcomes can be defined in precise, “theory-based terms.” Finally, the design is not useful for testing support mechanisms aimed at very rare events, such as prevention of manic episodes in bipolar disorder. Given that any one person is unlikely to experience these manic events repeatedly over the course of the trial, such studies would be better served by traditional cross-sectional designs.

Read the Article

Microrandomized Trials: An Experimental Design for Developing Just-in-Time Adaptive Interventions

Related Article

Building Health Behavior Models to Guide the Development of Just-in-Time Adaptive Interventions: A Pragmatic Framework


About the Author

Wendy Anson, Ph.D.

Wendy Anson, Ph.D., is senior science writer/editor for OBSSR at NIH. She has written and developed literature reviews, book chapters, reports, grant sections, curricula, and award-winning educational films in the science and social science arenas for medical schools, research hospitals, educational broadcasting organizations, and universities. Her Ph.D. is in educational psychology and technology.


Photo Credit: Shutterstock/watcharakun