Publications by authors named "Alexandra Portenhauser"

3 Publications


Systematic evaluation of content and quality of English and German pain apps in European app stores.

Internet Interv 2021 Apr 24;24:100376. Epub 2021 Feb 24.

Department of Rehabilitation Psychology and Psychotherapy, Institute of Psychology, Albert-Ludwigs-University Freiburg, Engelberger Str. 41, 79106 Freiburg im Breisgau, Germany.

Background And Objective: Pain spans a broad spectrum of highly prevalent diseases and types that cause substantial disease burden for individuals and society. Up to 40% of people affected by pain receive no or inadequate treatment. By providing a scalable, time- and location-independent way to diagnose, manage, prevent, and treat pain, mobile health applications (MHA) might be a promising approach to improving pain care. However, the commercial app market is rapidly growing and unregulated, resulting in an opaque market. Studies investigating the content, privacy and security features, quality, and scientific evidence of the available apps are urgently needed to guide patients and clinicians to high-quality MHA. Contributing to this challenge, the present study investigates the content, quality, and privacy features of pain apps available in the European app stores.

Methods: An automated search engine was used to identify pain apps in the European Google Play and Apple App Store. Pain apps were screened against systematic criteria (pain-relatedness, functionality, availability, independent usability, English or German language). Content, quality, and privacy features were assessed by two independent reviewers using the German Mobile Application Rating Scale (MARS-G). The MARS-G assesses quality on four objective scales (engagement, functionality, aesthetics, information quality) and two subjective scales (perceived impact, subjective quality).
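The scoring logic behind MARS-G ratings of this kind (items rated 1-5, averaged per dimension, with the objective dimensions averaged into an overall score) can be sketched as follows; the item counts and ratings below are illustrative assumptions, not the official MARS-G item list.

```python
from statistics import mean

# Hypothetical MARS-G item ratings (1-5) from one reviewer for one app.
# Item counts per dimension are illustrative assumptions, not the
# official MARS-G item set.
ratings = {
    "engagement":    [3, 4, 3, 2, 3],
    "functionality": [4, 4, 5, 4],
    "aesthetics":    [3, 3, 4],
    "information":   [2, 3, 3, 2],
}

# Each dimension score is the mean of its items; the overall (objective)
# quality score is the mean of the four dimension scores.
dimension_scores = {dim: mean(items) for dim, items in ratings.items()}
overall = mean(dimension_scores.values())
```

In a two-reviewer design such as the one described, each app would get these scores from both raters, which are then averaged (and compared for inter-rater agreement).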

Results: Out of 1034 identified pain apps, 218 were included. The pain apps covered eight different pain types. Content included basic information, advice, assessment and tracking, and stand-alone interventions. The overall quality of the pain apps was average (M = 3.13, SD = 0.56, min = 1, max = 4.69). Fewer than 1% of the included pain apps had been evaluated in a randomized controlled trial. Major data privacy problems were present: 59% provided no imprint, and 70% had no visible privacy policy.

Conclusion: A multitude of pain apps is available. Most MHA lack scientific evaluation and have serious privacy issues, posing a potential threat to users. Further research on effectiveness and improvements in privacy and security are needed. Overall, the potential of pain apps is not yet exploited.
Source:
http://dx.doi.org/10.1016/j.invent.2021.100376
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7933737

Mobile Apps for Older Adults: Systematic Search and Evaluation Within Online Stores.

JMIR Aging 2021 Feb 19;4(1):e23313. Epub 2021 Feb 19.

Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, University of Ulm, Ulm, Germany.

Background: With an increasingly aging population, the health care system is confronted with various challenges, such as rising health care costs. To manage these challenges, mobile apps may represent a cost-effective, low-threshold approach to supporting older adults.

Objective: This systematic review aimed to evaluate the quality, characteristics, as well as privacy and security measures of mobile apps for older adults in the European commercial app stores.

Methods: A web crawler systematically searched the European Google Play Store and Apple App Store for mobile apps aimed at older adults. The identified mobile apps were evaluated by two independent reviewers using the German version of the Mobile Application Rating Scale. A correlation between the user star rating and overall rating was calculated. An exploratory regression analysis was conducted to determine whether the obligation to pay fees predicted overall quality.
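The two analyses named in the Methods (a Pearson correlation between user star rating and overall quality, and a simple regression of overall quality on payment obligation) could be run roughly as sketched below; the data are fabricated placeholders, not the study's ratings.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ols_slope(x, y):
    """Slope and intercept of a simple least-squares regression of y on x."""
    mx, my = mean(x), mean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Placeholder data (not the study's ratings): user star ratings,
# MARS overall scores, and a 0/1 flag for apps that require payment.
stars   = [4.1, 3.2, 4.8, 2.5, 3.9, 4.4, 2.9, 3.6]
overall = [3.4, 3.0, 3.9, 2.6, 3.3, 3.7, 2.8, 3.2]
paid    = [1, 0, 1, 0, 0, 1, 0, 1]

r = pearson_r(stars, overall)               # star rating vs. overall quality
slope, intercept = ols_slope(paid, overall)  # does paying predict quality?
```

With a binary predictor like `paid`, the regression slope is simply the difference between the mean quality of paid and free apps, which is why a simple OLS fit suffices for this exploratory question.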

Results: In total, 83 of 1217 identified mobile apps were included in the analysis. Generally, the mobile apps for older adults were of moderate quality (mean 3.22 [SD 0.68]). Four mobile apps (5%) were evidence-based; 49% (41/83) had no security measures. The user star rating correlated significantly positively with the overall rating (r=.30, P=.01). The obligation to pay fees did not predict overall quality.

Conclusions: There is an extensive quality range within mobile apps for older adults, indicating deficits in terms of information quality, data protection, and security precautions, as well as a lack of evidence-based approaches. Central databases are needed to identify high-quality mobile apps.
Source:
http://dx.doi.org/10.2196/23313
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC8081158

Validation of the Mobile Application Rating Scale (MARS).

PLoS One 2020 Nov 2;15(11):e0241480. Epub 2020 Nov 2.

Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, University of Ulm, Ulm, Germany.

Background: Mobile health apps (MHA) have the potential to improve health care. The commercial MHA market is rapidly growing, but the content and quality of available MHA are unknown. Instruments for assessing the quality and content of MHA are therefore urgently needed. The Mobile Application Rating Scale (MARS) is one of the most widely used tools to evaluate the quality of MHA, yet only a few validation studies have investigated its metric quality, and none has evaluated its construct and concurrent validity.

Objective: This study evaluates the construct validity, concurrent validity, reliability, and objectivity of the MARS.

Methods: Data were pooled from 15 international app quality reviews to evaluate the metric properties of the MARS. The MARS measures app quality across four dimensions: engagement, functionality, aesthetics, and information quality. Construct validity was evaluated by assessing competing measurement models using confirmatory factor analysis (CFA). Non-centrality (RMSEA), incremental (CFI, TLI), and residual (SRMR) fit indices were used to evaluate goodness of fit. As a measure of concurrent validity, correlations with another quality assessment tool (ENLIGHT) were investigated. Reliability was determined using Omega, and objectivity was assessed by intra-class correlation.
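Inter-rater objectivity of the kind reported here is typically quantified with a two-way random-effects intraclass correlation. A minimal from-scratch ICC(2,1) for an n-subjects x k-raters matrix might look like this (the ratings matrix is a made-up example, not the study's data, and the specific ICC variant is an assumption):

```python
from statistics import mean

def icc_2_1(ratings):
    """Two-way random-effects, single-rater ICC(2,1) for an
    n-subjects x k-raters matrix of scores."""
    n, k = len(ratings), len(ratings[0])
    grand = mean(v for row in ratings for v in row)
    row_means = [mean(row) for row in ratings]        # per-subject means
    col_means = [mean(col) for col in zip(*ratings)]  # per-rater means

    ss_total = sum((v - grand) ** 2 for row in ratings for v in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)  # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)  # between raters
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)            # mean square, subjects
    msc = ss_cols / (k - 1)            # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1)) # mean square, error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Made-up MARS overall scores from two reviewers for five apps.
ratings = [
    [3.1, 3.3],
    [4.0, 3.8],
    [2.5, 2.7],
    [3.6, 3.6],
    [4.4, 4.1],
]
icc = icc_2_1(ratings)
```

Values near 1 indicate that the two reviewers rank and scale the apps almost identically; an ICC of 0.82, as reported below, is conventionally read as high agreement.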

Results: In total, MARS ratings from 1,299 MHA covering 15 different health domains were included. Confirmatory factor analysis confirmed a bifactor model with a general factor and a factor for each dimension (RMSEA = 0.074, TLI = 0.922, CFI = 0.940, SRMR = 0.059). Reliability was good to excellent (Omega 0.79 to 0.93). Objectivity was high (ICC = 0.82). MARS correlated with ENLIGHT (ps<.05).

Conclusion: The metric evaluation of the MARS demonstrated its suitability for the quality assessment of MHA. As such, the MARS could be used to make the quality of MHA transparent to health care stakeholders and patients. Future studies could extend the present findings by investigating the re-test reliability and predictive validity of the MARS.
Source:
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0241480
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7605637