Digital health: Real-world engagement and patient-generated data

John Torous, MD, MBI, director of digital psychiatry at Beth Israel Deaconess Medical Center and an advisor for the NIH All of Us Research Program, discusses driving toward the next level in digital health with patient-generated data.


Smartphones and wearables make it easy to capture a wealth of patient-generated health data. Yet ensuring that data is high quality, and deploying it to deliver real value, is nowhere near as straightforward. As digital health tools continue to multiply, developers and investors in this space are finding that millions of downloads and data points may not add up to engagement or lead to improvement on evidence-based health measures. So, what will bolster engagement? And how can patient-generated data make the leap from promising pilot studies to real-world scale?

John Torous, MD, MBI, is director of digital psychiatry at Beth Israel Deaconess Medical Center, editor-in-chief of JMIR Mental Health, and an advisor for the NIH All of Us Research Program on its smartphone mood study. His responses to our questions below are excerpted from the Executive Education at HMS webinar “Patient-Generated Data in the Real World.”

Edited and condensed for clarity


Digital health seeks — and sometimes stumbles over — patient engagement. Can you share a few examples?

One example is the mental health world. A very interesting paper looked at engagement with popular mental health apps, apps that may be top sellers in the Apple and Google Play app stores. The researchers didn’t look at the number of downloads; they looked at retention. They asked how many people were still using the same app at one day, two days, three days, seven days, even a month out. They found that engagement falls off quickly: for almost all of the mental health apps, retention looks like an exponential decay curve.

Peer-support apps, or apps that involve a social aspect, had somewhat better engagement: at one week, retention was about 8% for most of the mental health apps but 18% for peer-support apps.
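
To get a feel for what an exponential decay retention curve implies, here is a minimal sketch that plugs the one-week figures above into a simple r(t) = e^(−λt) model. The model and the code are illustrative assumptions for this article, not the paper’s own method.

```python
import math

def decay_rate(retention_at_day, day):
    """Solve r(day) = exp(-lam * day) for the decay constant lam,
    assuming 100% of users are active on the day of download."""
    return -math.log(retention_at_day) / day

# One-week retention figures quoted above; the exponential model
# itself is an assumption for illustration.
for label, one_week in [("typical mental health app", 0.08),
                        ("peer-support app", 0.18)]:
    lam = decay_rate(one_week, day=7)
    day30 = math.exp(-lam * 30)
    print(f"{label}: lam = {lam:.2f}/day, "
          f"projected day-30 retention = {day30:.3%}")
```

Under these assumptions, even the better-retained peer-support apps would project to well under 1% retention by day 30, which matches the steep falloff described above.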

A different example is a crowdsourced vital-signs device featured in an interesting paper, the SCOUT study. Some very famous people in the digital health space, like Eric Topol, were part of the study. But fewer than 3% of participants were meaningfully engaging over the duration of the trial. Even the first movers weren’t quite sticking with it.

So, the first lesson is that a high number of downloads is not enough. You really want to see how people are using the tools. They’re meant to be health tools, so there’s meant to be some engagement: sometimes for months, sometimes for years, and certainly for at least one week. Taking a realistic look at the data can be very informative. It’s one thing to have people sign up and start; it’s a different thing to look at what the data is actually doing.

What are a few key points to consider about engagement?

One thing to look at is perceived ease of use and how that impacts engagement. In one study from Germany, perceived ease of use drove intent to use and, in some ways, drove actual use. Given that ease of use is extremely important, adding more features, more widgets, more buttons may not be the right approach.

My group has published several papers on what we know about user experience in health care (see “College student engagement with mental health apps: analysis of barriers to sustained use,” “Actionable health app evaluation: translating expert frameworks into objective metrics,” and “User engagement in mental health apps: a review of measurement, reporting, and validity”). Especially when it comes to technology and digital apps, there’s a lot we don’t know about how these things work and what patients want. How do we deliver empathy? How do we deliver the intangible elements of care?

One thing we found that truly drives engagement is using these technologies in a way that’s not only patient-centered, but what we’ll call “relationship-centered,” designing not just for the patient, not just for the clinician, but for both. For example, at the end of a patient visit, we engage in shared decision-making. We say, “What data directed at your care would be interesting to learn about at the next visit?”

Let’s say we’re starting you on a medication for anxiety. Let’s have you track your anxiety symptoms. We may also want to know how the environment impacts your anxiety. From a smartphone we can learn how much time people spend at home and away from home, and how active they are. Depending on the clinical situation and the clinical need, we can customize the available data streams and work with a patient to ask, “What do you want to track? What could we use at your next visit? And how is this going to improve your care?”
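
As a concrete illustration of what such a shared decision might produce, here is a minimal sketch of a per-patient tracking plan as a simple data structure. The class, field names, and stream names are all hypothetical, not drawn from any particular platform.

```python
from dataclasses import dataclass, field

@dataclass
class TrackingPlan:
    """Hypothetical plan agreed on with the patient at the end of a visit.
    All stream names here are illustrative, not a real platform's API."""
    patient_id: str
    symptom_surveys: list[str] = field(default_factory=list)  # active data
    passive_streams: list[str] = field(default_factory=list)  # sensor data
    review_at_next_visit: bool = True

# The anxiety example above: track symptoms plus two environmental proxies.
plan = TrackingPlan(
    patient_id="example-001",
    symptom_surveys=["anxiety"],
    passive_streams=["time_at_home", "step_count"],
)
print(plan)
```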

How meaningful is patient-generated data? What value can it generate?

If you’re feeling more depressed today based on your longitudinal data, can we learn more about what that means for you? Might anxiety come next? Might sleep trouble come next? Can we actually move toward preventive medicine? As we get high-quality data in, and as we develop and use the correct methods, we can begin to make interesting predictions about patients. But it all relies on high-quality data and methods.

Our group does a lot of work with passive data. One of our recent studies collected passive data from sensors like the accelerometer. You can imagine that, in some ways, an accelerometer could be a proxy for physical activity, and that could be useful for predicting outcomes or tracking how people are doing across many health conditions. We found that the amount of accelerometer data and the types of sensors differed for each person in the study. In some ways, this is “bring your own device” science, and people’s phones are not medical-grade devices. They may have different chipsets, different sensors, different operating systems. So, we’re getting variable-quality data from phones down this path, and we’re going to have data quality issues unless we give every person a phone. That makes for a great pilot study, but it’s not the real world, and it’s not digital health at scale.
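
To make the “bring your own device” data-quality problem concrete, here is a minimal sketch of turning a raw accelerometer stream into a crude activity proxy while flagging undersampling. The function, the 50 Hz expected rate, and the simulated phones are assumptions for illustration, not the study’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def activity_proxy(samples_g, fs_hz, expected_fs_hz=50.0):
    """Crude activity proxy from triaxial accelerometer samples.
    samples_g: (n, 3) array of accelerations in units of g.
    Returns (mean movement intensity, coverage vs. the expected rate)."""
    magnitude = np.linalg.norm(samples_g, axis=1)
    movement = np.abs(magnitude - 1.0)           # remove the ~1 g gravity baseline
    coverage = min(fs_hz / expected_fs_hz, 1.0)  # undersampling is common on BYOD phones
    return movement.mean(), coverage

# Two hypothetical phones recording one minute of the same walk:
# phone A samples at 50 Hz; phone B's OS throttles it to 10 Hz.
for name, fs in [("phone A", 50.0), ("phone B", 10.0)]:
    n = int(60 * fs)
    gravity = np.tile([0.0, 0.0, 1.0], (n, 1))   # phone lying flat
    gait = 0.2 * rng.standard_normal((n, 3))     # simulated movement noise
    intensity, coverage = activity_proxy(gravity + gait, fs)
    print(f"{name}: intensity = {intensity:.3f} g, coverage = {coverage:.0%}")
```

The point of the sketch: two phones observing the same behavior can report similar movement intensities yet very different data coverage, which is exactly the kind of per-device variability described above.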

A lot of studies will, in part, use online recruitment. An advantage is that you can get thousands of people quickly. But what if your sample is based entirely on, say, self-reported depression or self-reported hypertension? We’ve seen that a lot of online studies have interesting results, but sometimes, when you try to bring them into the clinical setting, something is different. I think starting with a very valid sample and valid methods is the right way to go.

— Francesca Coltrera

View the full webinar, “Patient-Generated Data in the Real World,” which delves more deeply into this topic. Continue the conversation on Twitter by connecting with us @HMS_ExecEd or with Dr. Torous @JohnTorousMD.