University of California San Francisco San Francisco, CA, United States
Emma Kersey1, Jing Li1, Julia Kay1, Julia Adler-Milstein1, Jinoos Yazdany2 and Gabriela Schmajuk3, 1University of California San Francisco, San Francisco, CA, 2University of California, General Department of Medicine, Division of Rheumatology, San Francisco, CA, 3UCSF / SFVA, San Francisco, CA
Background/Purpose: The American College of Rheumatology's Rheumatology Informatics System for Effectiveness (RISE) EHR-based registry facilitates quality measure calculation and reporting for rheumatology practices in national pay-for-performance programs. Participating practices can leverage RISE's web-based clinician dashboard to monitor performance on quality measures, benchmark performance against registry means, and explore patient-level data to identify gaps in care. Despite substantial investment in these dashboards, the extent to which they help improve performance on quality measures remains unclear. We investigated the relationship between practice engagement with the RISE web-based dashboard and performance on rheumatology-specific measures in 2021.
Methods: Eligible practices were categorized into 3 engagement groups based on practice personnel's dashboard interactions (number of sessions and actions performed, such as drilldowns and exports) during the study period: no engagement (0 sessions), some engagement (≤ median actions), and high engagement (> median actions) (Table). Practice performance was assessed on both individual and composite measures, using all available rheumatology-specific measures (range 1 to 8) per practice that had ≥20 eligible patients. The composite measure incorporated all rheumatology-specific measures per practice and was denominator-weighted, i.e., each measure contributed in proportion to the number of patients in its denominator. Individual measure performance analysis was restricted to measures with data from ≥5 practices per engagement group. Linear regression models were used to calculate predictive margins and 95% confidence intervals for quality performance for each engagement group. Differences in performance across engagement groups were assessed via pairwise comparisons; linear trends were evaluated using orthogonal polynomial contrasts (Figure).
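The denominator-weighted composite described above can be sketched as follows; this is an illustrative calculation only, and the measure values shown are hypothetical, not RISE data:

```python
def composite_score(measures):
    """Denominator-weighted composite for one practice.

    measures: list of (numerator, denominator) pairs, one per
    rheumatology-specific measure the practice reports. Weighting each
    measure's pass rate by its denominator is equivalent to pooling
    numerators and denominators across measures.
    """
    total_denominator = sum(d for _, d in measures)
    if total_denominator == 0:
        return None  # no eligible patients; composite undefined
    total_numerator = sum(n for n, _ in measures)
    return total_numerator / total_denominator

# Hypothetical practice reporting three measures:
# (patients meeting the measure, patients eligible for the measure)
example = [(40, 50), (10, 25), (60, 100)]
print(composite_score(example))  # 110/175 ≈ 0.629, i.e., 62.9%
```

Note that a small measure (here 10/25) pulls the composite down less than it would under an unweighted average of the three rates, which is the point of denominator weighting.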
Results: Most of the 204 included practices were single-specialty (59.8%) or solo practices (29.4%), with a median of 2 providers and 4970 patients (Table). Among these practices, 11% had no dashboard engagement, 76% had some engagement, and 12% were highly engaged. Performance on individual rheumatology-specific measures ranged from 28% to 72% (median (IQR) 60% (47-66%)). Most practices' (75%) composite scores included ≥3 of 8 possible rheumatology-specific measures, with a median (IQR) performance of 65% (40-79%). We observed a pattern of higher measure performance with more dashboard engagement: 3 of 5 individual rheumatology-specific measures and the composite measure exhibited a significant linear trend at the 5% level (Figure).
Conclusion: This cross-sectional analysis revealed a dose-response relationship between degree of dashboard engagement and practice-level quality performance: RISE practices with higher levels of dashboard engagement exhibited better quality performance. Further investigation is needed to determine whether greater dashboard engagement yields meaningful quality improvement over time, and whether engagement drives performance or vice versa.
E. Kersey: None; J. Li: None; J. Kay: Pfizer, 12, Own Stock; J. Adler-Milstein: None; J. Yazdany: AstraZeneca, 2, 5, Aurinia, 5, Gilead, 5, Pfizer, 2; G. Schmajuk: None.