Published in Vol 8, No 7 (2020): July

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/17703.
Untold Stories in User-Centered Design of Mobile Health: Practical Challenges and Strategies Learned From the Design and Evaluation of an App for Older Adults With Heart Failure

Original Paper

1Department of Human-centered Computing, School of Informatics and Computing, IUPUI, Indianapolis, IN, United States

2Parkview Mirro Center for Research and Innovation, Parkview Health, Fort Wayne, IN, United States

3Department of BioHealth Informatics, School of Informatics and Computing, IUPUI, Indianapolis, IN, United States

4Department of Medicine, School of Medicine, Indiana University, Indianapolis, IN, United States

5Regenstrief Institute, Indianapolis, IN, United States

Corresponding Author:

Richard J Holden, PhD

Regenstrief Institute

1101 W 10th St #421

Indianapolis, IN, 46202

United States

Phone: 1 317 278 5323

Email: rjholden@iu.edu


Background: User-centered design (UCD) is a powerful framework for creating useful, easy-to-use, and satisfying mobile health (mHealth) apps. However, the literature seldom reports the practical challenges of implementing UCD, particularly in the field of mHealth.

Objective: This study aims to characterize the practical challenges encountered when implementing UCD for mHealth and to propose strategies to address them.

Methods: Our multidisciplinary team implemented a UCD process to design and evaluate a mobile app for older adults with heart failure. During and after this process, we documented the challenges the team encountered and the strategies they used or considered using to address those challenges.

Results: We identified 12 challenges, 3 about UCD as a whole and 9 across the UCD stages of formative research, design, and evaluation. Challenges included the timing of stakeholder involvement, overcoming designers’ assumptions, adapting methods to end users, and managing heterogeneity among stakeholders. To address these challenges, practical recommendations are provided to UCD researchers and practitioners.

Conclusions: UCD is a gold standard approach that is increasingly adopted for mHealth projects. Although UCD methods are well-described and easily accessible, practical challenges and strategies for implementing them are underreported. To improve the implementation of UCD for mHealth, we must tell and learn from these traditionally untold stories.

JMIR Mhealth Uhealth 2020;8(7):e17703

doi:10.2196/17703


User-centered design (UCD), or human-centered design, is a framework for iteratively researching, designing, and evaluating services and systems by involving end users and other stakeholders throughout a project life cycle [1-3]. Mobile health (mHealth) projects benefit from UCD by using input from patients, informal caregivers, clinicians, and other stakeholders to create better designs and iteratively improve interventions, thus enhancing their usability, acceptance, and potential success when implemented [4-6]. Increasingly, UCD has been recommended and adopted in mHealth projects to great success [7], with many examples of mHealth for people living with HIV [5,8], chronic conditions [9-11], or mental illness [6,12,13].

UCD is supported by popular tools and methods, such as cognitive task analysis, workflow studies and journey mapping, participatory design, rapid prototyping, usability testing, and heuristic evaluation [14-17]. Textbooks, articles, and other resources offer easily accessible and detailed guidance on the general UCD process and specific UCD methods [18,19]. However, an informal review of the literature reveals little information about the practical implementation of UCD methods.

Practical challenges reported in studies largely in non–health care domains include ensuring participants’ representativeness of the target population [20]; threats to innovation [21]; difficulty communicating with people from different backgrounds [22]; and organizational barriers, such as not having convenient access to participants [23].

Although studies applying UCD for mHealth are on the rise, very few mHealth studies report the challenges they face while planning and executing UCD activities, or they do so only parenthetically. UCD challenges may be unique or amplified in the field of mHealth. For example, there are known difficulties in evaluating the effectiveness of mHealth solutions, in part because of the variable and multifactorial nature of health and illness trajectories [6]. mHealth projects often involve unique stakeholders, drawn from vulnerable patient populations (eg, older adults, patients with chronic conditions) [24,25], busy clinician experts [6,26], or both. Moreover, there is sometimes a mismatch between, on the one hand, the use of UCD methods (eg, rapid prototyping, user testing) and emerging technologies (eg, sensors) and, on the other hand, the traditions (eg, clinical trials) and technological conservatism that characterize much of the health care sector [27-29].

Given the presence and importance of the practical challenges for implementing UCD, coupled with the increased use of UCD for mHealth, we argue for the need to explicitly describe those challenges. As mHealth technologies become more pervasive, navigating practical UCD challenges is essential for the development of “safe, sound, and desirable” [30] mHealth solutions that improve health outcomes while involving stakeholders in the design process [31,32]. We believe that identifying, reporting, and discussing the untold stories of actually implementing UCD for mHealth will help in overcoming the significant gaps between research and practice [33,34].


Overview

The objectives of this study were as follows:

  • Characterize practical challenges encountered while implementing UCD to design an mHealth app for older adults with heart failure.
  • Discuss strategies that we used or considered using to manage these challenges.

In presenting these challenges and strategies, we offer illustrations from our own experiences, particularly the Power to the Patient (P2P) project (R21 HS025232) and cite others that have been described in the literature in and outside the mHealth arena.

Power to the Patient Project

From 2017 to 2019, we performed a UCD study to design and evaluate information technology for older adults with heart failure. On the basis of previous literature, we knew that these patients had unmet needs and required additional support to monitor and manage symptoms and various related behaviors, including medication use, dietary and fluid restriction, and physical activity [35-37]. Our work focused on delivering information and decision support by leveraging a novel technological opportunity, namely, sharing with patients the data from their cardiac implanted electronic devices (CIEDs). Many patients with heart failure have CIEDs for the delivery of timely cardiac therapy and the capturing of data that can predict decompensation and other events leading to hospitalization and other downstream outcomes [38,39]. Patients seldom receive their CIED data [40,41], but technical and cultural changes are increasing the likelihood that they will in the future [41-43].

Figure 1 presents an overview of the project’s timeline, and Textboxes 1 and 2 further elaborate on the methods used. Select methods and results from the project are available elsewhere [19,32,44-47].

The project began with a problem analysis or formative study of the domain as a precursor to design. This phase comprised interviews with 24 older adults with heart failure, half of whom had CIEDs, to learn about how they made decisions about their health. These interviews used the critical incident technique, a method that asks participants to recall and describe a specific event or scenario and uses probes to better understand the participants’ thoughts and actions during the event or scenario [48,49]. We also examined participants’ decision-making strategies in response to fictitious scenarios [16]. In these scenarios, individuals were presented with hypothetical situations related to data from a fictitious CIED and asked to think aloud as they made decisions about how to respond. Other formative study methods were a brief observation of a device clinic, meetings with 2 cardiologists, and sharing of findings and design work from 2 recent similar studies on CIED data sharing [43,50].

Figure 1. Timeline of the Power to the Patient project.

Patient interviews

  • Method: 70-min (1) critical incident interviews and (2) scenario-based cognitive interviews to understand the decision-making process of older adults with heart failure
  • Participants: 24 English-speaking older adults (≥65 years) diagnosed with heart failure (New York Heart Association Class II-IV) and 14 accompanying support persons (family and friends). Patients were receiving care at Parkview Health (Fort Wayne, Indiana)
  • Procedure: (1) Participants were asked to describe a recent minor adverse health event and were probed with questions about their thoughts, feelings, and actions. (2) Participants were presented with a picture of a fictitious device that could give them a CIED score representing their heart health; they were asked to describe what they would think and do depending on the score displayed on the device

Patient advisory meetings

  • Method: One-on-one meetings with patient advisors soliciting feedback on (1) personas and use-case scenarios and (2) early design concepts and prototypes
  • Participants: 2 older adults with heart failure from the community (Indianapolis, Indiana) who voluntarily assisted the study in an advisory capacity
  • Procedure: (1) Advisors met with the research team to discuss the preliminary findings from the interviews and early persona development. They provided feedback on the findings, methods, and relevance of the work. (2) Advisors were presented with design alternatives of a Power to the Patient prototype, and then they interacted with it while thinking aloud using a computer and a mouse

Clinician advisory board meeting

  • Method: Group dinner with clinician experts to elicit feedback on personas, use-case scenarios, and early concepts
  • Participants: 7 Parkview Health clinicians (2 cardiologists, a device clinic supervisor, 2 technicians, a nurse, and the vice president of operations for the Parkview Heart Institute)
  • Procedure: Personas and scenarios were presented, among other findings, and questions were asked of clinicians regarding the validity of the findings and related current protocols (some of which were subsequently collected)

Individual interviews with 2 cardiologists

Observation of clinical encounters with a patient in the device clinic

Textbox 1. Formative research methods used to establish the Power to the Patient’s domain space.

Usability evaluations, round 1 (R1) and round 2 (R2)

  • Method: 90-min task-driven evaluation sessions of Power to the Patient prototypes to assess usability (primarily) and acceptability (secondarily)
  • Participants: 4 (R1) and 8 (R2) English-speaking older adults diagnosed with heart failure and 3 accompanying support persons (2 in R1 and 1 in R2)
  • Procedure: Participants performed specific tasks in the prototype while thinking aloud. Testing occurred in a private room, with an interactive prototype made in Axure RP 9 running on a Samsung Galaxy S7 smartphone. Participants’ manual interactions were video recorded. Before testing, participants completed a demographic survey (ie, age, gender, technology use, and education level) and the Newest Vital Sign (NVS) health literacy screening [51]; after testing, they completed the System Usability Scale (SUS) [52] (scoring sketched after Textbox 2), the National Aeronautics and Space Administration Task Load Index (NASA-TLX) [51], and a user acceptance survey. Participants were also interviewed about their understanding and projected use of the Power to the Patient prototypes.

Usability evaluation, round 3 (R3)

  • Method: 90-min scenario-driven evaluation sessions of Power to the Patient prototypes to assess acceptability (primarily) and usability (secondarily)
  • Participants: 12 English-speaking older adults diagnosed with heart failure, with cardiac implanted electronic devices, and accompanied by 5 support persons
  • Procedure: Participants simulated days 1 and 10 of longitudinal use of the Power to the Patient prototype while thinking aloud. They completed the same assessments as in earlier rounds and were interviewed about their understanding and projected use of the Power to the Patient prototype.

Heuristic evaluation

  • Method: Heuristic evaluation questionnaire to assess usability of Power to the Patient prototypes
  • Participants: 3 user-centered design experts external to the team
  • Procedure: Participants explored the prototype based on 2 use cases. They then reported their observations for 9 heuristics and gave an overall rating for the usability of the prototype
Textbox 2. Evaluation methods used during the Power to the Patient development.
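
For readers unfamiliar with it, the System Usability Scale used in these evaluations is scored on a 0 to 100 scale using the standard rule published by Brooke [52]. The following minimal sketch implements that published scoring rule; it is a generic Python illustration, not code from the P2P project.

    # Standard scoring of the System Usability Scale (SUS) [52]:
    # 10 items rated 1 (strongly disagree) to 5 (strongly agree);
    # odd-numbered items are worded positively, even-numbered negatively.
    def sus_score(responses):
        assert len(responses) == 10, "SUS has exactly 10 items"
        raw = sum((r - 1) if i % 2 == 1 else (5 - r)
                  for i, r in enumerate(responses, start=1))
        return raw * 2.5  # rescale the 0-40 raw sum to 0-100

    print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 5, 2]))  # -> 82.5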

Findings from the formative study were analyzed to develop personas, representing distinct ways patients made decisions, and use-case scenarios, representing decision-making situations in which hypothetical patients with CIEDs might find themselves. A model was also created depicting the flow of naturalistic decision making for heart failure self-care. These products were presented separately to 2 patient advisors and a panel of clinicians, who provided feedback on the realism and relevance of the personas and scenarios. The personas, use-case scenarios, and a review of the literature and market landscape (eg, app store reviews of similar mHealth products) were used to formulate requirements and early design concepts to be presented to patient advisors.

The design involved writing requirements and 5 months of iterative prototyping, concluding with an interactive prototype. Subsequently, we performed 3 rounds of formal laboratory-based usability testing with 24 participants, interleaved with periods of prototype redesign. Each round had a more complete prototype and an increasing number of participants (ie, n=4 in round 1, n=8 in round 2, and n=12 in round 3). The project concluded with a final refinement of the prototype and formal heuristic evaluations by 3 outside UCD experts.

The P2P app designed in this study had 4 core patient-facing components: a heart health score calculated from CIED data (a fictitious concept inspired by existing research [53,54]), self-assessments on recommended heart failure self-care domains (eg, medication use, sodium-restricted diet), tips and strategies for better self-management, and logs of data captured by the app. We designed and tested different implementations of these core concepts, changing information architecture and amending feature sets as we received feedback from study participants. For example, in one iteration, assessments, tips, and strategies were organized using the concept of plans that users could select.
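
As a concrete illustration of these 4 core components, the sketch below models them as simple data types. This is a hypothetical rendering for exposition only: all type and field names are our own, and the actual P2P prototype was an interactive Axure RP mockup rather than production code.

    # Hypothetical data model for the 4 patient-facing P2P components;
    # names and fields are illustrative assumptions, not project code.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import List

    @dataclass
    class HeartHealthScore:      # fictitious score calculated from CIED data
        value: int               # eg, a 0-100 summary of heart health
        recorded_on: date

    @dataclass
    class SelfAssessment:        # self-assessment in a self-care domain
        domain: str              # eg, "medication use", "sodium-restricted diet"
        response: str
        recorded_on: date

    @dataclass
    class Tip:                   # tip or strategy for better self-management
        domain: str
        text: str

    @dataclass
    class AppLog:                # log of data captured by the app
        scores: List[HeartHealthScore] = field(default_factory=list)
        assessments: List[SelfAssessment] = field(default_factory=list)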


Overview

The UCD process as a whole and the 3 major UCD phases of formative research, design, and evaluation present associated implementation challenges. Twelve of the most pervasive challenges are summarized in Figure 2. Next, we discuss each, with examples from our experience with P2P, supplemented by relevant literature.

Figure 2. UCD challenges encountered during Power to the Patient research, development, and evaluation. UCD: User-centered design.

Whole User-Centered Design Process

We identified 3 challenges related to the UCD process as a whole: deciding on the number of design-and-test iterations (UCD process challenge 1); managing logistics associated with UCD projects, such as preparing materials and recruitment costs (UCD process challenge 2); and collaborating as a multidisciplinary team by navigating misaligned goals and communication breakdowns (UCD process challenge 3). Recommendations for the UCD process are listed in Textbox 3.

  1. Keep the number of iterations flexible, estimating the range based on available resources but adjusting as the project progresses.
  2. Avoid endless iterations by establishing clear criteria for when to end design and testing activities.
  3. Consult or collaborate with experienced practitioners to anticipate and manage logistic challenges.
  4. Explicitly discuss and seek to align multidisciplinary team members’ goals and preferences early on, managing these over time through open communication.
  5. Involve a multilinguistic conductor to lead the team and coordinate members with diverse values, norms, practices, vocabularies, theories, and methods.
Textbox 3. Recommendations for the user-centered design process.
UCD Process Challenge 1: Deciding on the Number of Iterations

We planned the number of design-and-test iterations early in the project to satisfy the requirements of the funding agency and Institutional Review Board (IRB). Three iterations were originally estimated as feasible, given the timeline and amount of funding. Each iteration was assumed to require an average of at least 2 months of time and budget. As the project progressed, it became apparent that those assumptions were correct and that, to adhere to budget and project timeline restrictions, we could complete only the 3 planned iterations. The number of participants in total and per iteration was based on several considerations. The total of 24 test participants was based on the project budget and timeline, as above, and on the goal of having an average of 8 participants per test, a typical upper range for formative testing. For the initial 2 iterations, our focus was on identifying overt usability issues, and based on Nielsen’s recommendations, we planned to enroll about 4 to 8 participants (a few participants can uncover most usability issues [55]). In the last round, as our focus shifted toward evaluating acceptance and the extent to which the design was usable across a more diverse set of users, we recruited 12 participants.
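
The sample size reasoning above can be made concrete with the problem-discovery model of Nielsen and Landauer [55], which estimates the proportion of usability problems found by n participants as 1-(1-λ)^n. The calculation below is a minimal illustration assuming the commonly cited average detection rate of λ=0.31; it is not an analysis performed in the P2P project.

    # Expected share of usability problems uncovered by n participants,
    # per the Nielsen-Landauer model [55]. The detection rate 0.31 is
    # the commonly cited average, an assumption rather than a value
    # measured in this project.
    def proportion_found(n, detection_rate=0.31):
        return 1 - (1 - detection_rate) ** n

    for n in (4, 8, 12):  # participants in P2P rounds 1, 2, and 3
        print(f"n={n:2d}: ~{proportion_found(n):.0%} of problems found")
    # Roughly 77%, 95%, and 99%: a few participants uncover most issues.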

Iterations in UCD benefit software development by reducing usability issues and improving features before it is too late or too expensive to make changes [56]. Iteration is generally recommended [56], but the number of iterations is not fixed and often depends on the project and how it progresses. However, deviating from a planned number of iterations can be costly or prohibited for other (eg, regulatory or management) reasons. Conversely, endless iteration is counterproductive and delays in-the-wild field testing and actual implementation.

UCD Process Challenge 2: Managing UCD Logistics

Many of our recurring challenges were related to logistics, such as meeting recruitment goals, preparing and managing materials (ie, instructions, presentations, and design documentation), and arranging a specific time for members and advisors to meet despite being geographically dispersed. Others have reported similar challenges in mHealth projects [57,58], but more often they are assumed and discussed informally between practitioners. This means that novice UCD practitioners tend to underestimate logistic challenges and have limited information on how to address them. UCD logistics challenges often mirror challenges to implementing patient-centered research in general, for example, the taxonomy of challenges by Holden et al [59], which includes patient identification and recruitment; privacy and confidentiality; conflicts with compensation; and logistical issues, such as travel, timing, and communication. Authors have described strategies to achieve buy-in, trust, transparency, accommodation (flexibility), openness, and anticipation [59,60], as well as checklists for implementing these general strategies [61].

Others have described the additional costs of recruiting end users for UCD projects [20,57]. It is even more difficult to recruit a representative sample in a technology project when technology ownership and proficiency are among the eligibility criteria [62] or the notion of technology leads individuals to decline participation [63]. Age- and illness-related physical and cognitive limitations may also exclude some individuals from mHealth studies [6,64]. In such cases, UCD projects must also consider whether to involve informal caregivers or others (eg, translators) who could assist patients during formative research or testing.

UCD Process Challenge 3: Collaborating as a Multidisciplinary Team

Our P2P project was the fruit of collaboration between a research university and a research center in a large health system, with support from patient and clinician advisors. Our team included a cardiologist coinvestigator who provided invaluable clinical information, feedback, and access to local and national clinical leaders. However, with disciplinary diversity come disagreement, communication difficulty, and differences in assumptions; although diversity itself is desirable, these frictions require effort to manage, for example, by frequently asking team members to state their assumptions.

Multidisciplinary collaboration is often encouraged in UCD [6,65,66], including partnerships between designers and clinicians [67,68]. Those who have attempted such collaborations are aware of the methodological and cultural misalignment or divergent goals between Human-Computer Interaction technologists and clinicians [65,69]. Strategies to overcome these collaboration challenges include structured communication and the involvement of a multilinguistic (symphonic) conductor, a person who has learned “each team member’s discipline- or profession-specific values, norms, practices, vocabularies, theories, and methods to coordinate and translate between dissimilar members” [70].

Formative Research

We encountered 3 types of formative research challenges in the P2P project: determining when and how to involve stakeholders (formative research challenge 1), recruiting participants and advisors who are representative of stakeholder groups (formative research challenge 2), and fostering meaningful interactions between stakeholders and designers despite personnel constraints (formative research challenge 3). Recommendations for formative research are listed in Textbox 4.

  1. Involve stakeholders early.
  2. Avoid overrecruiting or collecting more data than can be expediently analyzed.
  3. Use shorter-cycle iterative research sprints with smaller sample sizes instead [55].
  4. Carefully balance expectations placed on stakeholders versus what they are able or willing to do.
  5. Deprioritize but do not discard the ideas that stakeholders rejected early on before the ideas reached maturity.
  6. When possible, generate involvement from multiple stakeholder groups.
  7. Use diverse recruitment methods to ensure stakeholders are chosen for representativeness, not convenience.
  8. Foster direct relations between designers and stakeholders.
  9. Minimize avoidable personnel changes and practice cross-staffing across user-centered design phases.
Textbox 4. Recommendations for formative research.
Formative Research Challenge 1: Determining Timing and Level of Stakeholder Involvement

We involved patients and clinicians early in the P2P project in several ways, including informant interviews and feedback on analyses, requirements, and early design concepts. This helped us learn lessons early; for example, clinicians insisted that because of interindividual variability in physical activity, activity goals should be highly individualized, whereas other goals (eg, medication adherence) could be identical for all users. Early learning allowed earlier decisions about scope, facilitated evidence-based design choices, and prevented costly future design revisions. In terms of scope, early stakeholder involvement helped eliminate especially difficult or risky design concepts, for example, the provision of medication titration advice.

Early involvement was not always simple. The number and depth of P2P interviews yielded more data than could be analyzed in the time allotted. As a result, several important findings from stakeholder interviews were not discovered until further in the design process, negating some of the benefits of early learning. Furthermore, having sought feedback early may have prematurely terminated concepts that were promising but not yet mature at the time.

The forms of involvement vary from collecting extensive data to asking individuals to assess early products [65]. The level of involvement can also be adjusted between informing (as in interviews), advising (as in reviewing concepts), and doing (as in having stakeholders co-perform research or design work) [71]. However, more active or laborious stakeholder involvement risks asking individuals to do more than what is realistic, reasonable, or affordable [21,72]. This is often the case when individuals are asked to be co-designers without adequate training in design, compensation for their contribution, or understanding of the problem space. Although some involvement is essential to UCD, more is not always better [71].

Formative Research Challenge 2: Choosing Stakeholder Representatives

P2P was fortunate to obtain input from patients sampled from a pool of current patients; volunteer advisors who were willing to meet repeatedly with the design team; and various clinicians, some of whom also offered access to their clinic and protocols. Not every design team can easily access stakeholders for formative research in a timely manner, much less multiple stakeholder groups, especially when the stakeholders include busy professionals. Some researchers resort to gathering data from less representative convenience samples, including online services offering access to paid volunteers, such as Amazon’s Mechanical Turk or the Qualtrics Panel [73-75].

Despite having adequate access, P2P was also limited in the variety and representativeness of stakeholders. The patients we interviewed in our formative research were all white, and two-thirds were male. Patient advisors were more educated and more engaged in their health than their peers, consistent with the general trend that patient advisors are rarely ordinary people. Clinicians in our study may have been more motivated than nonparticipants. Although stakeholder involvement is essential to UCD [1,76], it is predicated on stakeholders having unique knowledge or insights that designers do not have. However, stakeholders too have limited knowledge and represent primarily the communities to which they belong, meaning that even with stakeholder involvement, there may be multiple blind spots. When those who are involved differ from end users (eg, on race, education, or motivation), those blind spots may disadvantage underrepresented groups [63]. In practice, however, few design studies have the opportunity to conduct formative research with large samples representative of the population, whereas increasing the sample size exacerbates formative research challenge 1 (determining timing and level of stakeholder involvement), as discussed above.

Those who have worked extensively with patient advisory boards offer useful advice on assembling the right group of stakeholders, especially when they must work together, as on a panel. Suggestions include leadership commitment to listening to stakeholder suggestions, diverse recruitment (to avoid the abovementioned blind spots), careful selection of individuals who will work well with others, and adequate funding to compensate or otherwise support stakeholders [77].

Formative Research Challenge 3: Fostering Interactions Between Stakeholders and Designers

We attempted to promote direct interactions between designers and both patient and clinician stakeholders. Designers attended many of the formative research sessions or had direct access to the collected raw data. Furthermore, to ensure continuity, there was cross-staffing of formative research, design, and evaluation teams. One project member personally participated in almost every interview, feedback session, design meeting, and usability test. However, she was the only design team member who had interviewed patients and was therefore expected to be the voice of patient participants on the design team. Over time, turnover greatly reduced the number of team members who had been present from the beginning of the project and had therefore participated in any formative research activities.

Having designers interact directly with stakeholders, and especially end users, has been shown to yield better results [78] than hearing about the stakeholders and end users from another source [71]. Continual interaction with stakeholders during the UCD process helps designers gain a firsthand experience of the domain [78,79]. However, when projects progress sequentially from formative research with stakeholders to design and evaluation, turnover and staffing limitations may mean that those designing or testing the product may not have had such firsthand experience.

Design

We identified 3 challenges related to the design phase of UCD: overcoming designers’ assumptions with empirical research findings (design challenge 1), managing project scope and complexity and avoiding scope creep (design challenge 2), and maintaining the innovation equilibrium by balancing new ideas with outside constraints (design challenge 3). Recommendations for design are listed in Textbox 5.

  1. Engage stakeholders during design as ad hoc informants or co-designers to challenge incorrect assumptions.
  2. Conduct additional rounds of data collection during the design phase as questions arise that are best answered by gathering evidence.
  3. Seek simplicity, thus reducing complexity.
  4. Monitor for scope creep [80] and overly complex designs, relative to what end users need.
  5. Plan for feature deimplementation (ie, removing features from design), using techniques such as a formal termination plan [33].
  6. Without stifling innovation, ensure stakeholders can rule out designs that are unsafe, unacceptable, infeasible, inconsistent with clinical reality, or otherwise impractical.
  7. For innovative ideas transcending conventional practice, develop clear plans for how the design will fit in or overcome existing infrastructure constraints, regulations, preferences, and habits.
Textbox 5. Recommendations for design.
Design Challenge 1: Overcoming Designers’ Assumptions

Designers naturally make decisions that are inspired by but not immediately validated by end user evidence. Some of these decisions are based on assumptions that go unquestioned during design but are discovered to be incorrect during testing. This situation underscores the value of testing and the limitations of design. In the P2P project, for example, we incorporated rewards based on the literature. Nothing in the formative data contradicted the potential value of rewards, so it was not until testing that we learned that most participants found the rewards distracting. Some assumptions are also persistent and can be made despite disconfirming formative research findings. For example, members of the design team persistently believed that end users would have little technology experience, despite evidence to the contrary from formative research and usability testing.

Designer bias is difficult to overcome, even when UCD methods are used to collect contradictory evidence. The sequence of design following formative research means some assumptions are not tested or contradicted until the testing phase, by which time the assumptions may have greatly influenced the design. An alternative would be to conduct additional research to challenge the design team’s assumptions during the design phase, but before formal testing [81]. Furthermore, designers should be judicious in the use of design techniques, such as personas, which can lead to oversimplification and encourage misleading assumptions about end users [82]. Other strategies to mitigate incorrect assumptions include conducting more frequent testing or including stakeholders on design teams to challenge assumptions during the design process [50,71].

Design Challenge 2: Managing Project Scope and Complexity

Similar to other design projects, P2P produced many ideas, which were easier to generate than to dismiss. As a result, we attempted to include in a single app a large variety of features. We also attempted to integrate these many features to produce a coherent product. Often, multiple features were being slowly designed in parallel, rather than perfecting 1 feature before moving on to the next. These conditions sometimes led to confusion about the purpose of the app. More features also meant less time and effort spent designing or testing each.

Complexity and scope need to be carefully managed to prevent the natural tendency to add (rather than subtract) from taking over. Additional research could be used to help prioritize features and determine which features are attractive to designers but not needed by end users [28]. When a project’s scope is intentionally large, steps can be taken to create distinct modules (chunks of features) [83], which provide coherent structure and separation. If complexity is inevitable, the project team will need to plan for more extensive testing by conducting longer sessions or sessions with more users.
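
As one way to operationalize such modules, the sketch below groups related features into named chunks so that each chunk can be designed and tested on its own. The module names and groupings are hypothetical examples of ours, not the actual P2P decomposition.

    # Minimal sketch of chunking features into distinct modules [83];
    # module names and groupings are hypothetical, not from P2P.
    from typing import Dict, List

    APP_MODULES: Dict[str, List[str]] = {
        "monitoring": ["heart health score", "data logs"],
        "self-care": ["self-assessments", "tips and strategies"],
    }

    def features_to_test(module: str) -> List[str]:
        # Scoping a usability testing round to one module at a time
        # keeps sessions shorter and feedback more focused.
        return APP_MODULES[module]

    print(features_to_test("self-care"))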

Design Challenge 3: Maintaining the Innovation Equilibrium

In our experience, designers, clinician stakeholders, and patient stakeholders were divided on what was possible for and needed from the product being designed. Generally, clinicians were more conservative, preferring to replicate existing practices and avoid less studied or riskier options. For instance, clinicians were more conservative than designers about how much unedited information and control over its interpretation to offer patients. Another point of contention was whether to integrate the product into other health information systems, including electronic medical records. Patients preferred integration, whereas designers were divided on whether the benefits of leveraging those systems outweighed their practical limitations and regulatory constraints. Innovation also conflicted with clinical reality, as when a patient or designer envisions something that is not technically possible or clinically relevant [32]. For example, the design team assumed an ability to predict heart failure events through CIED data that was beyond publicly available scientific knowledge. Designers’ innovative ideas could also be mismatched with what patient end users were used to and could comfortably perform. This may have been the case with patients’ dislike of rewards or reluctance to rate their health using standard online rating conventions (eg, out of 5 stars). In general, end users tend to have more conventional preferences than designers [26]. In mHealth projects, patients may be unaware of or reluctant to suggest all the technological possibilities granted by smartphones [29], such as push notifications [84] or smartphone sensors [27].

In conversations with innovators, UCD professionals often hear the statement attributed to Henry Ford, “If I had asked people what they wanted, they would have said faster horses.” The broader challenge is maintaining the innovation equilibrium: allowing innovators to innovate, while also allowing stakeholders to influence or evaluate their design, especially when it comes to usability, safety, and privacy. The related challenge is to prevent innovation from creating products that commit what Cornet et al [32] call type 2 design error, which “occurs when designers do not accommodate the clinical reality, including biomedical knowledge, clinical workflows, and organizational requirements.”

Evaluation

We identified 3 evaluation challenges: managing the tradeoff between laboratory and in-the-wild usability sessions (evaluation challenge 1); adapting standardized methods to the end user population, in our case, older adults (evaluation challenge 2); and deciding on the number of concurrent evaluation methods relative to the effort spent setting up sessions and analyzing data (evaluation challenge 3). Recommendations for evaluation are listed in Textbox 6.

  1. Use a laboratory setup for usability testing to improve efficiency and effectiveness.
  2. Begin testing in the laboratory but transition to in-the-wild testing as time and budget allow.
  3. Adapt methods to end user needs, when necessary, even if this means deviating from the standard.
  4. Allow for flexibility and experimentation, at times sacrificing standardization.
  5. Control the number of concurrent evaluation methods; use efficiency, pacing, and workload management strategies if multiple methods are implemented.
Textbox 6. Recommendations for evaluation.
Evaluation Challenge 1: Conducting Laboratory or In-the-Wild Usability Sessions

P2P usability testing was conducted in a laboratory setting, albeit in meeting rooms without built-in usability or simulation equipment (eg, a control room, multicamera recording, eye tracking). Although this setting was adequate in most cases, it was at times limiting because it required participants to test prototypes at a time and in a place dissimilar from the intended context of use. Participants spent 30 min using a prototype technology meant to be used for weeks, months, and years. They were then asked to project how they would use the technology in practice. In the third round of testing, scenarios were used to simulate several days in the prolonged use of the product to help participants project future acceptance and use. However, the cross-sectional, laboratory-based design of our testing limited our confidence in our findings regarding acceptance and future use, relative to findings of usability (eg, observed errors or subjective usability ratings).

Evaluating mHealth prototypes in a laboratory setting offers ideal conditions for detecting product software usability issues, such as navigation or layout issues. Such evaluation, however, lacks external validity in reproducing the context of use of the mHealth product [85] and therefore fails to assess most issues related to product hardware usability, operating system usability, acceptance, and longer term outcomes (eg, changes in behavior or health) [6]. Laboratory evaluation is appropriate for quickly iterating on designs and addressing usability issues before in-the-wild testing, to avoid fielding a poorly designed product. However, in-the-wild testing is expensive and time-intensive and may not be possible in every project.

Evaluation Challenge 2: Adapting Methods to End Users

We adapted the standard methods in several ways to accommodate older adult participants. For example, we administered a simplified version of the System Usability Scale (SUS) self-report measure, which we developed specifically for older adults [9,86,87]. We built in flexibility for breaks during testing, especially given the use of diuretic medications by patients with heart failure. We also discovered challenges that we had not anticipated, for example, a participant having difficulty completing computer-based surveys because of vision and motor impairment.

In the context of mHealth projects, adapting standardized usability evaluation methods to end users is often necessary to accommodate patient abilities and limitations. For example, most standardized usability scales have technical or difficult words [86]; thus, many studies edit these measures, for example, by changing the “cumbersome” in the SUS to “awkward” [88]. Although standardized methods ensure scientific reproducibility, rigidity in the UCD process can undermine the goal of iteratively improving a product, which often requires flexibility and experimentation [65]. If, for example, researchers discover that some older participants have difficulty using a touchscreen device, it may be worth adapting the protocol to permit the use of a mouse or stylus in subsequent testing.
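
As a trivial but concrete example of such adaptation, the snippet below applies the wording substitution reported in the literature, replacing "cumbersome" in the SUS with "awkward" [88]; the code itself and any further substitutions are our own illustration, not a project artifact.

    # Sketch of adapting standardized questionnaire wording for older
    # adults; the "cumbersome" -> "awkward" substitution is from [88],
    # and the function itself is an illustrative assumption.
    SUBSTITUTIONS = {"cumbersome": "awkward"}

    def adapt_item(text: str) -> str:
        for original, simpler in SUBSTITUTIONS.items():
            text = text.replace(original, simpler)
        return text

    print(adapt_item("I found the system very cumbersome to use."))
    # -> "I found the system very awkward to use."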

Evaluation Challenge 3: Deciding on the Number of Concurrent Evaluation Methods

Our testing involved participant consenting, lengthy pre- and posttest surveys, posttest interviews, and task-based usability testing with think-aloud. In addition, each testing session required a pretest room and equipment setup and posttest aggregation of data from audio, video, computerized, and written recordings. At times, the multiplicity of methods in a single session resulted in testing sessions being cut short. Moreover, the amount of data collected during testing affected the speed at which the team could analyze usability test findings and prepare the next design for another round of testing.

Using multiple concurrent evaluation methods improves triangulation and therefore the assessment of mHealth usability [89]. However, each method adds burden and affects the timeline. Thus, those implementing UCD should pursue strategies to reduce inefficiency (eg, use of a dedicated testing room to reduce setup labor), ensure pacing (eg, blocking off staff time for testing and analysis), and reduce workload (eg, use of automated usability data collection or analysis) [90].
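
To illustrate the last of these strategies, the sketch below logs time-stamped task events during a session and derives time on task and error counts automatically, instead of hand-coding recordings afterward. It is a generic illustration under names of our own choosing, not tooling that was used in P2P.

    # Minimal sketch of automated usability data collection: record
    # task events during a session, then summarize time on task and
    # error counts. All names are illustrative assumptions.
    import time
    from collections import defaultdict

    class SessionLogger:
        def __init__(self):
            self.events = []  # (timestamp, task, kind)

        def log(self, task, kind):  # kind: "start", "error", or "end"
            self.events.append((time.time(), task, kind))

        def summarize(self):
            starts, ends, errors = {}, {}, defaultdict(int)
            for ts, task, kind in self.events:
                if kind == "start":
                    starts[task] = ts
                elif kind == "end":
                    ends[task] = ts
                else:
                    errors[task] += 1
            return {task: {"seconds": round(ends[task] - starts[task], 1),
                           "errors": errors[task]}
                    for task in starts if task in ends}

    log = SessionLogger()
    log.log("check heart health score", "start")
    log.log("check heart health score", "error")  # eg, wrong menu tapped
    log.log("check heart health score", "end")
    print(log.summarize())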


Limitations

The challenges reported were based on our experience with the P2P project, supplemented by firsthand experience with multiple other mHealth projects and a review of the literature. However, the literature yielded few explicit depictions of challenges and even less formal discussion of them. (This validated the goals of this paper.) Both our experiences and most of those described in the literature originated in academic environments, which have unique staffing, timing, and funding characteristics. The UCD implementation challenges and strategies encountered in industry may be different, although an examination of gray literature (eg, blog posts and popular design books) shows some similarities. Finally, our recommendations are to be taken with caution, as they have not been formally validated across projects, project teams, or environments. The mHealth UCD community should actively debate these recommendations and produce new ones.

Conclusions

UCD implementation for mHealth apps can lead to highly usable and acceptable patient-centered and clinically valid solutions. Implementation is challenging, as the 12 practical challenges in this paper easily illustrate. However, these challenges can be overcome, and our recommendations may help others apply UCD to mHealth or similar arenas. Telling and learning from the typically untold stories will result in more efficient, effective, and sustainable mHealth design efforts, effectively bridging the gap between the science and practice of UCD and mHealth implementation. We call on our fellow researchers, designers, and UCD experts to document and share their own challenges and strategies toward improving the implementation of UCD.

Acknowledgments

This study was sponsored by the Agency for Healthcare Research and Quality (AHRQ) grant R21 HS025232 (Holden, PI). The content is solely the responsibility of the authors and does not necessarily represent the official views of the AHRQ. The authors acknowledge the participants for their contribution. The authors also thank their entire research team, their patient and clinical advisory board members, and Parkview Health staff for their contribution throughout the project. They also thank the two peer reviewers for their helpful comments on the manuscript.

Conflicts of Interest

MM has research funding from AHRQ and the following financial relationships with industry to disclose: research grants from Medtronic, Inc; consulting fees/honoraria from iRhythm Technologies, Inc, and Zoll Medical Corporation; and a financial partnership with Medical Informatics Engineering, outside the submitted work.

  1. Ergonomics of Human-System Interaction — Part 210: Human-Centred Design for Interactive Systems. International Organization for Standardization. 2010.   URL: https://www.iso.org/standard/52075.html [accessed 2020-06-10]
  2. Gould J, Lewis C. Designing for usability: key principles and what designers think. Commun ACM 1985;28(3):300-311 [FREE Full text] [CrossRef]
  3. Norman D. The Design of Everyday Things: Revised and Expanded Edition. New York, USA: Basic Books; 2013.
  4. McCurdie T, Taneva S, Casselman M, Yeung M, McDaniel C, Ho W, et al. mHealth consumer apps: the case for user-centered design. Biomed Instrum Technol 2012;Suppl:49-56. [CrossRef] [Medline]
  5. Schnall R, Rojas M, Bakken S, Brown W, Carballo-Dieguez A, Carry M, et al. A user-centered model for designing consumer mobile health (mHealth) applications (apps). J Biomed Inform 2016 Apr;60:243-251 [FREE Full text] [CrossRef] [Medline]
  6. Ben-Zeev D, Schueller SM, Begale M, Duffecy J, Kane JM, Mohr DC. Strategies for mhealth research: lessons from 3 mobile intervention studies. Adm Policy Ment Health 2015 Mar;42(2):157-167 [FREE Full text] [CrossRef] [Medline]
  7. Marcolino MS, Oliveira JA, D'Agostino M, Ribeiro AL, Alkmim MB, Novillo-Ortiz D. The impact of mhealth interventions: systematic review of systematic reviews. JMIR Mhealth Uhealth 2018 Jan 17;6(1):e23 [FREE Full text] [CrossRef] [Medline]
  8. Wray T, Kahler C, Simpanen E, Operario D. User-centered, interaction design research approaches to inform the development of health risk behavior intervention technologies. Internet Interv 2019 Mar;15:1-9 [FREE Full text] [CrossRef] [Medline]
  9. Srinivas P, Cornet VP, Holden RJ. Human factors analysis, design, and evaluation of engage, a consumer health IT application for geriatric heart failure self-care. Int J Hum Comput Interact 2017;33(4):298-312 [FREE Full text] [CrossRef] [Medline]
  10. Giunti G, Mylonopoulou V, Romero OR. More stamina, a gamified mhealth solution for persons with multiple sclerosis: research through design. JMIR Mhealth Uhealth 2018 Mar 2;6(3):e51 [FREE Full text] [CrossRef] [Medline]
  11. Morita PP, Yeung MS, Ferrone M, Taite AK, Madeley C, Lavigne AS, et al. A patient-centered mobile health system that supports asthma self-management (breathe): design, development, and utilization. JMIR Mhealth Uhealth 2019 Jan 28;7(1):e10956 [FREE Full text] [CrossRef] [Medline]
  12. Vilardaga R, Rizo J, Zeng E, Kientz J, Ries R, Otis C, et al. User-centered design of learn to quit, a smoking cessation smartphone app for people with serious mental illness. JMIR Serious Games 2018 Jan 16;6(1):e2 [FREE Full text] [CrossRef] [Medline]
  13. Biagianti B, Hidalgo-Mazzei D, Meyer N. Developing digital interventions for people living with serious mental illness: perspectives from three mhealth studies. Evid Based Ment Health 2017 Nov;20(4):98-101 [FREE Full text] [CrossRef] [Medline]
  14. Nielsen J. Usability Engineering. Cambridge, MA: AP Professional; 1994.
  15. Rubin J, Chisnell D. Handbook of Usability Testing: How to Plan, Design, and Conduct Effective Tests. Indianapolis, IN: Wiley Publishing; 2008.
  16. Crandall B, Klein GA, Klein G, Hoffman RR. Working Minds: A Practitioner's Guide to Cognitive Task Analysis. Cambridge, MA: MIT Press; 2006.
  17. Simonsen J, Robertson T, editors. Routledge International Handbook of Participatory Design. New York, USA: Routledge; 2012.
  18. Holden RJ, Voida S, Savoy A, Jones JF, Kulanthaivel A. Human factors engineering and human–computer interaction: supporting user performance and experience. In: Finnel JT, Dixon BE, editors. Clinical Informatics Study Guide: Text and Review. Cham, Switzerland: Springer International Publishing; 2016:287-307.
  19. Cornet VP, Daley C, Cavalcanti LH, Parulekar A, Holden RJ. Design for self care. In: Sethumadhavan A, Sasangohar F, editors. Design for Health: Applications of Human Factors. Cambridge, MA: Academic Press; 2020:277-302.
  20. Kujala S, Kauppinen M. Identifying and Selecting Users for User-Centered Design. In: Proceedings of the Third Nordic Conference on Human-Computer Interaction. 2004 Presented at: NordiCHI'04; October 23-27, 2004; Tampere, Finland. [CrossRef]
  21. Heinbokel T, Sonnentag S, Frese M, Stolte W, Brodbeck F. Don't underestimate the problems of user centredness in software development projects - there are many!. Behav Inf Technol 1996 Jan;15(4):226-236 [FREE Full text] [CrossRef]
  22. Shah LM, Yang WE, Demo RC, Lee MA, Weng D, Shan R, et al. Technical guidance for clinicians interested in partnering with engineers in mobile health development and evaluation. JMIR Mhealth Uhealth 2019 May 15;7(5):e14124 [FREE Full text] [CrossRef] [Medline]
  23. Matthew-Maich N, Harris L, Ploeg J, Markle-Reid M, Valaitis R, Ibrahim S, et al. Designing, implementing, and evaluating mobile health technologies for managing chronic conditions in older adults: a scoping review. JMIR Mhealth Uhealth 2016 Jun 9;4(2):e29 [FREE Full text] [CrossRef] [Medline]
  24. Arsand E, Demiris G. User-centered methods for designing patient-centric self-help tools. Inform Health Soc Care 2008 Sep;33(3):158-169. [CrossRef] [Medline]
  25. Kushniruk A, Nøhr C. Participatory design, user involvement and health IT evaluation. Stud Health Technol Inform 2019:139-151 [FREE Full text] [CrossRef]
  26. van Velsen L, Wentzel J, van Gemert-Pijnen JE. Designing eHealth that matters via a multidisciplinary requirements development approach. JMIR Res Protoc 2013 Jun 24;2(1):e21 [FREE Full text] [CrossRef] [Medline]
  27. Cornet VP, Holden RJ. Systematic review of smartphone-based passive sensing for health and wellbeing. J Biomed Inform 2018 Jan;77:120-132 [FREE Full text] [CrossRef] [Medline]
  28. Mann DM, Quintiliani LM, Reddy S, Kitos NR, Weng M. Dietary approaches to stop hypertension: lessons learned from a case study on the development of an mHealth behavior change system. JMIR Mhealth Uhealth 2014 Oct 23;2(4):e41 [FREE Full text] [CrossRef] [Medline]
  29. Fiordelli M, Diviani N, Schulz PJ. Mapping mHealth research: a decade of evolution. J Med Internet Res 2013 May 21;15(5):e95 [FREE Full text] [CrossRef] [Medline]
  30. Albrecht U, von Jan U. Safe, sound and desirable: development of mHealth apps under the stress of rapid life cycles. Mhealth 2017;3:27 [FREE Full text] [CrossRef] [Medline]
  31. Becker S, Miron-Shatz T, Schumacher N, Krocza J, Diamantidis C, Albrecht U. mHealth 2.0: experiences, possibilities, and perspectives. JMIR Mhealth Uhealth 2014 May 16;2(2):e24 [FREE Full text] [CrossRef] [Medline]
  32. Cornet VP, Daley C, Bolchini D, Toscos T, Mirro MJ, Holden RJ. Patient-centered design grounded in user and clinical realities: towards valid digital health. Proc Int Symp Hum Fact Erg Health Care 2019 Sep 15;8(1):100-104 [FREE Full text] [CrossRef]
  33. Boustani MA, van der Marck MA, Adams N, Azar JM, Holden RJ, Vollmar HC, et al. Developing the agile implementation playbook for integrating evidence-based health care services into clinical practice. Acad Med 2019;94(4):556-561. [CrossRef]
  34. Waterson P. Ergonomics and ergonomists: lessons for human factors and ergonomics practice from the past and present. In: Shorrock S, Williams C, editors. Human Factors and Ergonomics in Practice: Improving System Performance and Human Well-Being in the Real World. Boca Raton, FL: CRC Press; 2017:29-44.
  35. Roger VL. Epidemiology of heart failure. Circ Res 2013 Aug 30;113(6):646-659 [FREE Full text] [CrossRef] [Medline]
  36. Lainscak M, Blue L, Clark A, Dahlström U, Dickstein K, Ekman I, et al. Self-care management of heart failure: practical recommendations from the patient care committee of the heart failure association of the European society of cardiology. Eur J Heart Fail 2011 Feb;13(2):115-126 [FREE Full text] [CrossRef] [Medline]
  37. Holden RJ, Schubert CC, Mickelson RS. The patient work system: an analysis of self-care performance barriers among elderly heart failure patients and their informal caregivers. Appl Ergon 2015 Mar;47:133-150 [FREE Full text] [CrossRef] [Medline]
  38. Klersy C, Boriani G, de Silvestri A, Mairesse G, Braunschweig F, Scotti V, Health Economics Committee of the European Heart Rhythm Association. Effect of telemonitoring of cardiac implantable electronic devices on healthcare utilization: a meta-analysis of randomized controlled trials in patients with heart failure. Eur J Heart Fail 2016 Feb;18(2):195-204 [FREE Full text] [CrossRef] [Medline]
  39. Hawkins NM, Virani SA, Sperrin M, Buchan IE, McMurray JJ, Krahn AD. Predicting heart failure decompensation using cardiac implantable electronic devices: a review of practices and challenges. Eur J Heart Fail 2016 Aug;18(8):977-986 [FREE Full text] [CrossRef] [Medline]
  40. Mirro M, Daley C, Wagner S, Ghahari RR, Drouin M, Toscos T. Delivering remote monitoring data to patients with implantable cardioverter-defibrillators: does medium matter? Pacing Clin Electrophysiol 2018 Nov;41(11):1526-1535. [CrossRef] [Medline]
  41. Daley CN, Allmandinger A, Heral L, Toscos T, Plant R, Mirro M. Engagement of ICD patients: direct electronic messaging of remote monitoring data via a personal health record. EP Lab Digest 2015;15(5) [FREE Full text]
  42. Daley CN, Chen EM, Roebuck AE, Ghahari RR, Sami AF, Skaggs CG, et al. Providing patients with implantable cardiac device data through a personal health record: a qualitative study. Appl Clin Inform 2017 Oct;8(4):1106-1116 [FREE Full text] [CrossRef] [Medline]
  43. Rohani Ghahari R, Holden RJ, Flanagan M, Wagner S, Martin E, Ahmed R, et al. Using cardiac implantable electronic device data to facilitate health decision making: a design study. Int J Ind Ergon 2018 Mar;64:143-154 [FREE Full text] [CrossRef]
  44. Daley CN, Bolchini D, Varrier A, Rao K, Joshi P, Blackburn J, et al. Naturalistic decision making by older adults with chronic heart failure: an exploratory study using the critical incident technique. Proc Hum Factors Ergon Soc Annu Meet 2018 Sep 27;62(1):568-572 [FREE Full text] [CrossRef]
  45. Daley CN, Cornet VP, Patekar G, Kosarabe S, Bolchini D, Toscos T, et al. Uncertainty management among older adults with heart failure: responses to receiving implanted device data using a fictitious scenario interview method. Proc Int Sym Hum Fact Erg Health Care 2019 Sep 15;8(1):127-130 [FREE Full text] [CrossRef]
  46. Holden RJ, Joshi P, Rao K, Varrier A, Daley CN, Bolchini D, et al. Modeling personas for older adults with heart failure. Proc Hum Factors Ergon Soc Annu Meet 2018 Sep 27;62(1):1072-1076 [FREE Full text] [CrossRef]
  47. Holden RJ, Daley CN, Mickelson RS, Bolchini D, Toscos T, Cornet VP, et al. Patient decision-making personas: an application of a patient-centered cognitive task analysis (P-CTA). Appl Ergon 2020 Apr 15;87:103107 [FREE Full text] [CrossRef] [Medline]
  48. Kain D. Owning significance: the critical incident technique in research. In: de Marrais K, Lapan SD, editors. Foundations for Research: Methods of Inquiry in Education and the Social Sciences. Mahwah, NJ: Lawrence Erlbaum Associates; 2004:69-85.
  49. Flanagan JC. The critical incident technique. Psychol Bull 1954 Jul;51(4):327-358. [CrossRef] [Medline]
  50. Ahmed R, Toscos T, Ghahari RR, Holden RJ, Martin E, Wagner S, et al. Visualization of cardiac implantable electronic device data for older adults using participatory design. Appl Clin Inform 2019 Aug;10(4):707-718. [CrossRef] [Medline]
  51. Hart S, Staveland L. Development of NASA-TLX (task load index): results of empirical and theoretical research. Adv Psychol 1988;52:139-183 [FREE Full text] [CrossRef]
  52. Brooke J. SUS: a 'quick and dirty' usability scale. In: Jordan PW, Thomas B, McClelland IL, Weerdmeester B, editors. Usability Evaluation In Industry. London, England: Taylor & Francis; 1996:189-194.
  53. Boehmer JP, Hariharan R, Devecchi FG, Smith AL, Molon G, Capucci A, et al. A multisensor algorithm predicts heart failure events in patients with implanted devices: results from the multisense study. JACC Heart Fail 2017 Mar;5(3):216-225 [FREE Full text] [CrossRef] [Medline]
  54. Gardner RS, Singh JP, Stancak B, Nair DG, Cao M, Schulze C, et al. HeartLogic multisensor algorithm identifies patients during periods of significantly increased risk of heart failure events: results from the MultiSENSE study. Circ Heart Fail 2018 Jul;11(7):e004669. [CrossRef] [Medline]
  55. Nielsen J, Landauer TK. A Mathematical Model of the Finding of Usability Problems. In: Proceedings of the INTERACT '93 and CHI '93 Conference on Human Factors in Computing Systems. 1993 Presented at: INTERCHI '93; April 24-29, 1993; Amsterdam, The Netherlands. p. 206-213. URL: https://dl.acm.org/doi/10.1145/169059.169166 [CrossRef]
  56. Nielsen J. Iterative user-interface design. Computer 1993 Nov;26(11):32-41 [FREE Full text] [CrossRef]
  57. Baez M, Casati F. Agile Development for Vulnerable Populations: Lessons Learned and Recommendations. In: Proceedings of the 40th International Conference on Software Engineering: Software Engineering in Society. 2018 Presented at: ICSE-SEIS'18; May 27-June 3, 2018; Gothenburg, Sweden. [CrossRef]
  58. Holden RJ, Bodke K, Tambe R, Comer RS, Clark DO, Boustani M. Rapid translational field research approach for eHealth R&D. Proc Int Sym Hum Fact Erg Health Care 2016 Jul 22;5(1):25-27 [FREE Full text] [CrossRef]
  59. Holden RJ, Scott AM, Hoonakker PLT, Hundt AS, Carayon P. Data collection challenges in community settings: insights from two field studies of patients with chronic disease. Qual Life Res 2015 May;24(5):1043-1055 [FREE Full text] [CrossRef] [Medline]
  60. Holden RJ, Or CK, Alper SJ, Joy Rivera A, Karsh BT. A change management framework for macroergonomic field research. Appl Ergon 2008 Jul;39(4):459-474 [FREE Full text] [CrossRef] [Medline]
  61. Valdez RS, Holden RJ. Health care human factors/ergonomics fieldwork in home and community settings. Ergon Des 2016 Oct;24(4):4-9 [FREE Full text] [CrossRef] [Medline]
  62. Toscos T, Drouin M, Pater J, Flanagan M, Pfafman R, Mirro MJ. Selection biases in technology-based intervention research: patients' technology use relates to both demographic and health-related inequities. J Am Med Inform Assoc 2019 Aug 1;26(8-9):835-839. [CrossRef] [Medline]
  63. Holden RJ, Toscos T, Daley CN. Researcher reflections on human factors and health equity. In: Roscoe R, Chiou EK, Wooldridge AR, editors. Advancing Diversity, Inclusion, and Social Justice Through Human Systems Engineering. Boca Raton, FL: CRC Press; 2019:51-62.
  64. Cosco TD, Firth J, Vahia I, Sixsmith A, Torous J. Mobilizing mHealth data collection in older adults: challenges and opportunities. JMIR Aging 2019 Mar 19;2(1):e10019 [FREE Full text] [CrossRef] [Medline]
  65. Pagliari C. Design and evaluation in ehealth: challenges and implications for an interdisciplinary field. J Med Internet Res 2007 May 27;9(2):e15 [FREE Full text] [CrossRef] [Medline]
  66. Poole ES. HCI and mobile health interventions: how human-computer interaction can contribute to successful mobile health interventions. Transl Behav Med 2013 Dec;3(4):402-405 [FREE Full text] [CrossRef] [Medline]
  67. Aidemark J, Askenäs L, Nygårdh A, Strömberg A. User involvement in the co-design of self-care support systems for heart failure patients. Procedia Comput Sci 2015;64:118-124 [FREE Full text] [CrossRef]
  68. Blandford A, Gibbs J, Newhouse N, Perski O, Singh A, Murray E. Seven lessons for interdisciplinary research on interactive digital health interventions. Digit Health 2018;4:2055207618770325 [FREE Full text] [CrossRef] [Medline]
  69. Calvo RA, Dinakar K, Picard R, Christensen H, Torous J. Toward impactful collaborations on computing and mental health. J Med Internet Res 2018 Feb 9;20(2):e49 [FREE Full text] [CrossRef] [Medline]
  70. Holden RJ, Binkheder S, Patel J, Viernes SH. Best practices for health informatician involvement in interprofessional health care teams. Appl Clin Inform 2018 Jan;9(1):141-148 [FREE Full text] [CrossRef] [Medline]
  71. Kujala S. User involvement: a review of the benefits and challenges. Behav Inf Technol 2003 Jan;22(1):1-16 [FREE Full text] [CrossRef]
  72. Baek EO, Cagiltay K, Boling E, Frick T. User-centered design and development. In: Spector JM, Merrill MD, van Merrienboer J, Driscoll MP, editors. Handbook of Research on Educational Communications and Technology: A Project of the Association for Educational Communications and Technology. New York, NY: Taylor & Francis; 2008:660-668.
  73. Kittur A, Chi EH, Suh BW. Crowdsourcing user studies with Mechanical Turk. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2008 Presented at: CHI'08; April 5-10, 2008; Florence, Italy. URL: https://dl.acm.org/doi/10.1145/1357054.1357127 [CrossRef]
  74. Guillory J, Kim A, Murphy J, Bradfield B, Nonnemaker J, Hsieh Y. Comparing Twitter and online panels for survey recruitment of e-cigarette users and smokers. J Med Internet Res 2016 Nov 15;18(11):e288 [FREE Full text] [CrossRef] [Medline]
  75. Walter SL, Seibert SE, Goering D, O’Boyle EH. A tale of two sample sources: do results from online panel data and conventional data converge? J Bus Psychol 2018 Jul 21;34(4):425-452. [CrossRef]
  76. Eshet E, Bouwman H. Context of use: the final frontier in the practice of user-centered design? Interact Comput 2016 Oct 13;29:368-390 [FREE Full text] [CrossRef]
  77. Sharma A, Angel L, Bui Q. Patient advisory councils: giving patients a seat at the table. Fam Pract Manag 2015;22(4):22-27 [FREE Full text] [Medline]
  78. Keil M, Carmel E. Customer-developer links in software development. Commun ACM 1995;38(5):33-44 [FREE Full text] [CrossRef]
  79. Béguin P. Design as a mutual learning process between users and designers. Interact Comput 2003 Oct;15(5):709-730 [FREE Full text] [CrossRef]
  80. Lorenzi NM, Riley RT. Managing change: an overview. J Am Med Inform Assoc 2000;7(2):116-124 [FREE Full text] [CrossRef] [Medline]
  81. Magrabi F, Ong M, Coiera E. Health IT for patient safety and improving the safety of health IT. Stud Health Technol Inform 2016;222:25-36. [Medline]
  82. Turner P, Turner S. Is stereotyping inevitable when designing with personas? Design Stud 2011 Jan;32(1):30-44 [FREE Full text] [CrossRef]
  83. Norman D. Living with Complexity. Cambridge, MA: MIT Press; 2011.
  84. Ahmed I, Ahmad NS, Ali S, Ali S, George A, Danish HS, et al. Medication adherence apps: review and content analysis. JMIR Mhealth Uhealth 2018 Mar 16;6(3):e62 [FREE Full text] [CrossRef] [Medline]
  85. Cho H, Yen PY, Dowding D, Merrill JA, Schnall R. A multi-level usability evaluation of mobile health applications: a case study. J Biomed Inform 2018 Oct;86:79-89 [FREE Full text] [CrossRef] [Medline]
  86. Cornet VP, Daley CN, Srinivas P, Holden RJ. User-centered evaluations with older adults: testing the usability of a mobile health system for heart failure self-management. Proc Hum Factors Ergon Soc Annu Meet 2017 Sep;61(1):6-10 [FREE Full text] [CrossRef] [Medline]
  87. Holden RJ, Campbell NL, Abebe E, Clark DO, Ferguson D, Bodke K, Brain Health Patient Safety Laboratory. Usability and feasibility of consumer-facing technology to reduce unsafe medication use by older adults. Res Social Adm Pharm 2020 Jan;16(1):54-61. [CrossRef] [Medline]
  88. Bangor A, Kortum P, Miller J. An empirical evaluation of the system usability scale. Int J Hum-Comput Interact 2008 Jul 30;24(6):574-594 [FREE Full text] [CrossRef]
  89. Georgsson M, Staggers N. An evaluation of patients' experienced usability of a diabetes mHealth system using a multi-method approach. J Biomed Inform 2016 Feb;59:115-129 [FREE Full text] [CrossRef] [Medline]
  90. Maramba I, Chatterjee A, Newman C. Methods of usability testing in the development of eHealth applications: a scoping review. Int J Med Inform 2019 Jun;126:95-104. [CrossRef] [Medline]


Abbreviations

AHRQ: Agency for Healthcare Research and Quality
CIED: cardiac implantable electronic device
mHealth: mobile health
P2P: Power to the Patient
SUS: system usability scale
UCD: user-centered design


Edited by G Eysenbach; submitted 06.01.20; peer-reviewed by A Khoja, K Yin; comments to author 10.04.20; revised version received 20.04.20; accepted 14.05.20; published 21.07.20

Copyright

©Victor Philip Cornet, Tammy Toscos, Davide Bolchini, Romisa Rohani Ghahari, Ryan Ahmed, Carly Daley, Michael J Mirro, Richard J Holden. Originally published in JMIR mHealth and uHealth (http://mhealth.jmir.org), 21.07.2020.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on http://mhealth.jmir.org/, as well as this copyright and license information must be included.