Preprint
Review

This version is not peer-reviewed.

A Scientometric Analysis of Bias and Fairness in Parkinson's Disease Clinical Assessment Scales

Submitted: 01 September 2025
Posted: 02 September 2025


Abstract
Background: Clinical assessment scales are fundamental tools for evaluating Parkinson's disease (PD), yet potential biases in their development and validation may compromise their fairness across diverse populations. Objective: This study aimed to conduct a comprehensive scientometric analysis of bias and fairness in PD clinical assessment scales by examining demographic representation, geographic distribution, and methodological biases in the foundational literature. Methods: Following PRISMA guidelines, we systematically searched four databases (PubMed, Embase, Medline, and Central) from inception to 2024. Studies were included if they validated or developed PD assessment scales, including the UPDRS, MDS-UPDRS, MoCA, or MMSE. Data extraction focused on 17 predefined bias domains, including demographic representation, geographic distribution, and methodological considerations. Bibliometric analysis was performed using Python 3.12 and Tableau Public. Results: From 3,836 initially identified studies, 109 met inclusion criteria, encompassing 655 authors across 34 countries and 47 journals. Geographic analysis revealed stark disparities: high-income countries contributed 99 publications (90.8%), while low/middle-income countries contributed only 10 publications (9.2%). Europe dominated with 48 publications (44.0%), followed by North America with 39 publications (35.8%). Critical bias domains showed concerning gaps: only 8 studies (7.3%) captured race/ethnicity data, only 13 studies (11.9%) adjusted cognitive tests for education, and no study addressed digital literacy barriers. Female authorship remained underrepresented at 36.6% overall, with particularly low representation in senior positions (37.1% of last authors vs. 62.9% male). Conclusions: This scientometric analysis provides robust evidence of persistent geographic, demographic, and methodological biases in PD assessment scale research, potentially compromising their fairness across diverse populations.
Our findings highlight the urgent need for more inclusive research practices, culturally sensitive adaptations of existing scales, and development of novel assessment approaches that account for demographic and geographic diversity to ensure equitable clinical evaluation of PD patients worldwide.
Keywords:

1. Introduction

Parkinson's disease (PD) affects over 10 million people worldwide, with prevalence projected to reach 25.2 million by 2050. This progressive neurodegenerative disorder predominantly affects older adults, with incidence increasing dramatically with age and showing slight male predominance. Clinical assessment scales serve as fundamental tools for diagnosis, monitoring disease progression, and evaluating treatment efficacy in PD. The most widely used scales include the Unified Parkinson's Disease Rating Scale (UPDRS) and its revised version (MDS-UPDRS) for motor symptoms, and cognitive assessments such as the Montreal Cognitive Assessment (MoCA) and Mini-Mental State Examination (MMSE).
The significance of this issue extends beyond academic interest, as biased assessment tools can contribute to healthcare disparities by affecting clinical decision-making, treatment allocation, and research participation. With PD prevalence expected to increase most dramatically in low- and middle-income countries due to aging populations, ensuring the cross-cultural validity and fairness of assessment scales becomes increasingly critical.
Recent studies have highlighted concerning disparities in clinical research, with underrepresentation of diverse populations potentially limiting the generalizability and fairness of medical interventions. In neurodegenerative diseases specifically, assessment tools developed and validated primarily in high-income, predominantly white populations may not perform equally across different demographic groups. These biases can manifest in multiple forms: geographic concentration of research in wealthy nations, underrepresentation of women and minorities in study populations, inadequate consideration of cultural and linguistic factors, and methodological gaps in addressing confounding variables.
Despite the critical importance of fair and unbiased assessment tools, no comprehensive analysis has examined the extent of bias and demographic representation in the foundational literature supporting PD clinical assessment scales. This knowledge gap represents a significant barrier to understanding the true scope of potential biases and developing targeted interventions to address them.

2. Materials and Methods

2.1. Study Design

We conducted a comprehensive scientometric analysis following established guidelines for bibliometric research. The study employed a systematic two-phase approach combining quantitative bibliometric analysis with qualitative assessment of methodological biases.

2.2. Search Strategy and Selection Criteria

A systematic literature search was conducted across four major databases (PubMed, Embase, Medline, and Central) from inception to July 31, 2024. The search strategy was developed using the following terms: ("Parkinson*" OR "PD") AND ("UPDRS" OR "MDS-UPDRS" OR "Unified Parkinson Disease Rating Scale" OR "MoCA" OR "Montreal Cognitive Assessment" OR "MMSE" OR "Mini Mental State") AND ("valid*" OR "develop*" OR "assess*" OR "scale" OR "instrument").
Inclusion criteria: (1) Studies validating or developing PD assessment scales; (2) Articles in English; (3) Peer-reviewed publications; (4) Studies involving human subjects with PD.
Exclusion criteria: (1) Case reports and editorials; (2) Non-English publications; (3) Studies not focusing on PD assessment scales; (4) Animal studies; (5) Conference abstracts without full text.

2.3. Data Extraction and Bias Assessment Framework

Data extraction was performed independently by two reviewers using a standardized form. Seventeen predefined bias domains were assessed based on established frameworks for demographic bias in clinical research.
Primary Bias Domains:
  1. Race/ethnicity data capture and analysis
  2. Sex/gender distribution reporting and analysis
  3. Age-specific normative values utilization
  4. Educational background adjustment
  5. Geographic representation
  6. Socioeconomic status consideration
  7. Administrator training documentation
  8. Digital literacy and technology access barriers

Methodological Bias Domains:
  9. Assessment timing standardization
  10. Treatment effect controls
  11. UPDRS ON/OFF state documentation
  12. Cognitive domain specification
  13. Institutional resource reporting
  14. Underrepresented subgroup analysis
  15. Inclusion/exclusion criteria specification
  16. Confounding variable control
  17. Sample size adequacy

2.4. Bibliometric Analysis

Bibliometric analysis was conducted using Tableau Public and Python 3.12. Key metrics included:
  • Publication trends over time
  • Geographic distribution analysis
  • Author collaboration networks
  • Journal impact assessment
  • Citation pattern analysis
  • Keyword co-occurrence mapping
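The keyword co-occurrence mapping listed above can be sketched in pure Python: each article contributes one edge per unordered pair of its keywords. The keyword lists below are hypothetical placeholders for illustration, not data from the review.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(keyword_lists):
    """Count how often each unordered keyword pair appears together
    in the same article's keyword list."""
    counts = Counter()
    for keywords in keyword_lists:
        # Normalize case/whitespace and deduplicate within one article
        unique = sorted(set(k.lower().strip() for k in keywords))
        for a, b in combinations(unique, 2):
            counts[(a, b)] += 1
    return counts

# Hypothetical keyword lists from three articles
articles = [
    ["Parkinson disease", "UPDRS", "validity"],
    ["Parkinson disease", "MoCA", "validity"],
    ["Parkinson disease", "UPDRS", "reliability"],
]
edges = cooccurrence_counts(articles)
print(edges[("parkinson disease", "validity")])  # 2
```

The resulting pair counts become edge weights in the co-occurrence network; dedicated tools (e.g., VOSviewer-style layouts) would then handle clustering and visualization.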

2.5. Statistical Analysis

Descriptive statistics were calculated for all variables. Chi-square tests were used to assess associations between categorical variables. Geographic disparities were analyzed using the Global North-South classification and World Bank income categories. Statistical significance was set at p < 0.05. All analyses were performed using Python.
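As a minimal illustration of the chi-square testing described above, the sketch below computes the Pearson statistic for a 2x2 contingency table in pure Python (in practice a library routine such as `scipy.stats.chi2_contingency` would be used). The HIC/LMIC split of race/ethnicity reporting shown is a hypothetical example, not a result from the review.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (closed form, without Yates continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical 2x2 split: income group (rows) by whether
# race/ethnicity data were captured (columns)
table = [[7, 92],   # high-income countries: captured / not captured
         [1, 9]]    # low/middle-income countries
print(round(chi_square_2x2(table), 3))
```

The statistic would be compared against the chi-square distribution with 1 degree of freedom to obtain a p-value at the alpha = 0.05 threshold used in the study.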

3. Results

The systematic search yielded 3,836 potentially relevant studies. After duplicate removal and screening, 141 full-text articles were assessed for eligibility, with 109 studies ultimately meeting inclusion criteria for analysis (Figure 1). The included studies encompassed 655 authors across 34 countries and were published in 47 different journals between 1996 and 2024.

3.1. Publication Trends and Assessment Scale Development (1996-2024)

Historical Timeline and Scale Evolution
Parkinson's disease clinical assessment has been shaped by three key instruments developed across different eras. The Mini-Mental State Examination (MMSE), created by Folstein in 1975, served as the foundational cognitive screening tool. The Unified Parkinson's Disease Rating Scale (UPDRS), developed in 1987 by Fahn and Elton, specifically targeted PD motor and non-motor symptoms, with its significant 2008 revision creating the MDS-UPDRS. The Montreal Cognitive Assessment (MoCA), introduced by Nasreddine in 2005, addressed MMSE limitations in detecting mild cognitive impairment.
Scoring Framework Analysis
Assessment scales demonstrate significant variability in scoring ranges and severity thresholds, reflecting PD's complex clinical manifestations. The scoring systems are illustrated in Figure 2, which displays the comprehensive framework across motor, non-motor, and cognitive domains.
Motor Assessment Scales: The MDS-UPDRS Part III employs the widest range (0-132 points) with severity cutoffs at 32/33 points (mild/moderate) and 58/59 points (moderate/severe). This granular scale accommodates detailed motor function assessment with clinically important differences identified as 2.5 points (minimal), 5.2 points (moderate), and 10.8 points (large) changes.
Non-Motor Components: Parts I and II utilize identical 0-52 point ranges with distinct severity thresholds - Part I (non-motor experiences) cuts at 10/11 and 21/22 points, while Part II (motor experiences) uses 12/13 and 29/30 cutoffs. Motor complications (Part IV) employs a restricted 0-24 range with 4/5 and 12/13 thresholds.
Cognitive Assessments: Both MoCA and MMSE operate on 0-30 scales but with different paradigms. The MoCA establishes normal cognition at ≥26 points, with educational adjustments adding one point for ≤12 years of education. The MMSE typically uses ≥25 points for normal cognition, though race-specific adjustments recommend ≤25 for White individuals and ≤22 for Black individuals.
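The MoCA cutoff and education adjustment described above can be expressed as a short screening sketch. This is purely an illustration of the stated thresholds (>=26 for normal cognition, +1 point for <=12 years of education), not a clinical tool.

```python
def classify_moca(raw_score, years_of_education):
    """Apply the MoCA education adjustment (+1 point for <=12 years
    of education, capped at the 30-point maximum), then screen against
    the >=26 normal-cognition cutoff."""
    score = raw_score + 1 if years_of_education <= 12 else raw_score
    score = min(score, 30)  # total cannot exceed the scale maximum
    return "normal" if score >= 26 else "possible impairment"

print(classify_moca(25, 10))  # adjusted to 26 -> "normal"
print(classify_moca(25, 16))  # stays 25 -> "possible impairment"
```

The example makes the fairness stakes concrete: the same raw score of 25 is classified differently depending on educational background, which is exactly why the education-adjustment domain matters in the bias assessment below.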
Publication Pattern Analysis
The publication trajectories of Parkinson's disease assessment scales reveal distinct temporal patterns that reflect their clinical adoption, methodological evolution, and research maturity phases. This analysis (illustrated in Figure 3 below) demonstrates three fundamentally different development pathways that illuminate the broader dynamics of clinical assessment tool utilization in neurological research.
UPDRS Dominance and Sustained Usage
The UPDRS demonstrates the most consistent and sustained publication trajectory throughout the analyzed period. Early Adoption began in the late 1990s, reflecting the scale's establishment as the standard for PD assessment. Peak Activity occurred during 2010-2019, coinciding with increased PD research activity and the introduction of the MDS-UPDRS revision. Sustained Interest is evidenced by UPDRS publications maintaining relatively high levels through 2024, indicating its continued clinical relevance and widespread acceptance as the gold standard for motor symptom evaluation.
MoCA's Rapid Rise and Plateau
The MoCA shows a distinctive pattern characterized by rapid adoption followed by recent decline. Initial Development Period showed minimal publications until 2005, corresponding to its development timeline. Rapid Adoption occurred between 2008-2016, reflecting growing recognition of its superiority over MMSE for mild cognitive impairment detection. Peak Performance reached maximum publication volume during 2014-2015, particularly for studies validating its use in Parkinson's disease populations. Recent Decline shows decreasing publication volumes after 2020, potentially indicating methodological maturity or market saturation.
MMSE's Established Presence
The MMSE demonstrates Consistent Baseline with steady but moderate publication levels throughout most periods. Mid-Period Surge occurred during 2014-2020, likely driven by comparative studies with newer cognitive assessments. Gradual Decline in recent years reflects limitations in sensitivity for early cognitive changes, as documented in multiple validation studies.
Temporal Correlations and Research Trends
Scale Interaction Patterns reveal complementary usage during periods of high MoCA publication often coinciding with sustained MMSE research, suggesting comparative validation studies. UPDRS publication patterns show less correlation with cognitive scales, reflecting its distinct motor assessment focus. Methodological Evolution suggests a shift from generic cognitive tools toward more specialized, sensitive instruments, as evidenced by MoCA's rapid adoption despite MMSE's established presence.
Research Maturity Phases
The data reveals three distinct phases:
  • Phase 1 (1996-2005) represents the establishment period with moderate UPDRS adoption and continued MMSE usage.
  • Phase 2 (2006-2016) marks the innovation and validation period characterized by MoCA introduction and peak research activity across all scales.
  • Phase 3 (2017-2024) shows the consolidation period with declining overall publication volumes, suggesting methodological maturity and established clinical utility.

3.2. Geographic Distribution Analysis

Geographic analysis (Figure 4) revealed significant disparities in research distribution. High-income countries (HIC) contributed 99 publications (90.8%), while low/middle-income countries (LMIC) contributed only 10 publications (9.2%). Europe dominated the research landscape with 48 publications (44.0%), followed by North America with 39 publications (35.8%). Asia showed pronounced disparities between HIC (9 publications, 8.3%) and LMIC contributions (4 publications, 3.7%). Africa and South America, despite having significant PD populations, contributed minimally with 2 (1.8%) and 3 (2.8%) publications respectively.

3.3. Authorship Patterns and Gender Distribution

Analysis of authorship patterns (Figure 5) revealed persistent gender disparities. Among the 655 total authors, 240 (36.6%) were female and 415 (63.4%) were male. Gender representation varied by authorship position: first authorship showed 25 female authors (32.9%) compared to 51 male authors (67.1%), while senior authorship (last author position) demonstrated 26 female authors (37.1%) versus 44 male authors (62.9%).
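The reported authorship percentages can be reproduced directly from the raw counts, as this short consistency check shows:

```python
def pct(part, whole):
    """Percentage rounded to one decimal place."""
    return round(100 * part / whole, 1)

# Counts reported in Section 3.3
total_female, total_male = 240, 415
first_female, first_male = 25, 51
last_female, last_male = 26, 44

assert pct(total_female, total_female + total_male) == 36.6
assert pct(first_female, first_female + first_male) == 32.9
assert pct(last_female, last_female + last_male) == 37.1
print("all reported authorship percentages reproduce")
```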

3.4. Bias Domain Assessment

Our comprehensive analysis of 109 Parkinson's disease assessment scale validation studies revealed significant variations in how different bias domains were addressed, with most domains showing concerning gaps in methodological rigor.
Most Addressed Bias Domains
The analysis identified cognitive impairment criteria specification as the most frequently addressed bias domain, with 49 studies (45.0%) providing clear inclusion/exclusion criteria for cognitive impairment. This was followed by sex/gender analysis in 47 studies (43.1%) and cognitive domain specification in 43 studies (39.4%). These three domains, all classified as moderate risk, represent the areas where researchers have shown the greatest awareness of potential bias.
Administrator training and knowledge was addressed in 33 studies (30.3%) and UPDRS ON/OFF state documentation in 24 studies (22.0%), while analysis of underrepresented demographic subgroups was conducted in 21 studies (19.3%); all three domains were classified as high risk.
Critically Underaddressed Domains
The analysis revealed alarming gaps in several critical bias domains. Digital literacy and access barriers were not addressed in any of the 109 studies (0.0%), representing a critical oversight given the increasing use of technology-based assessments. Treatment effects on cognition were considered in only 1 study (0.9%), and age-specific normative values were referenced in merely 3 studies (2.8%).
Race and ethnicity data collection was documented in only 8 studies (7.3%), while OFF medication state testing was conducted in just 4 studies (3.7%). These five domains, all classified as critical risk, highlight fundamental gaps in ensuring equitable and comprehensive assessment across diverse populations.
Methodological Gaps
Several domains classified as very high risk showed concerningly low adoption rates: education adjustment for cognitive tests was performed in only 13 studies (11.9%), institutional resources reporting in 11 studies (10.1%), ON medication state testing in 18 studies (16.5%), and confounding variables control in 16 studies (14.7%).
Risk Level Distribution
The bias domain assessment revealed a troubling pattern: 33.3% of domains (5 out of 15) were classified as critical risk, 26.7% as very high risk, 20.0% as high risk, and only 20.0% as moderate risk. The mean percentage of studies addressing any given bias domain was only 17.8%, with a median of 14.7%, indicating widespread methodological inadequacies across the field.
This pattern suggests that while researchers have made progress in addressing some fundamental aspects of bias (particularly cognitive criteria specification and sex/gender analysis), there remain critical gaps in ensuring cultural sensitivity, technological accessibility, and comprehensive demographic representation in Parkinson's disease assessment validation research.
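As a cross-check, the mean and median coverage figures reported above can be reproduced from the fifteen domain percentages listed in this section:

```python
from statistics import mean, median

# Percentage of the 109 studies addressing each of the 15 bias
# domains, as reported in Section 3.4
domain_pcts = [45.0, 43.1, 39.4,          # most addressed (moderate risk)
               30.3, 22.0, 19.3,          # high risk
               0.0, 0.9, 2.8, 7.3, 3.7,   # critical risk
               11.9, 10.1, 16.5, 14.7]    # very high risk

print(round(mean(domain_pcts), 1))  # 17.8
print(median(domain_pcts))          # 14.7
```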
Table 1. Bias Domain Analysis Results (n=109).
Figure 6. Bias Domain Coverage in PD Assessment Scale Research.

3.5. Journal and Citation Analysis

The 109 included studies were published across 47 journals. Specialized movement-disorders journals accounted for 45 of these publications (41.3%): Movement Disorders (n = 29), Parkinsonism & Related Disorders (n = 11), and Movement Disorders Clinical Practice (n = 5). General neurology was represented by Neurology (n = 5; 4.6%). Impact-factor quartile analysis revealed that 24 studies (22.0%) appeared in Q1 journals, 35 (32.1%) in Q2, 38 (34.9%) in Q3, and 12 (11.0%) in Q4. Thus, while over half of the studies (54.1%) were published in higher-ranking venues (Q1-Q2), a substantial portion (45.9%) appeared in lower-quartile journals (Q3-Q4), with no clear relationship to the thoroughness of bias reporting.

3.6. Temporal Trends

Publication volume showed an increasing trend from 1996 to 2024, with 73% of studies published after 2015 (Figure 7). However, attention to bias domains did not improve proportionally over time, with recent studies (2020-2024) showing only marginal improvements in demographic data collection (42.3% vs. 35.1% in earlier periods, p=0.23).

4. Discussion

4.1. Principal Findings

This scientometric analysis provides compelling evidence of significant and persistent biases in PD clinical assessment scale research. The predominance of research from high-income countries, male authorship patterns, and critical gaps in methodological consideration of bias factors collectively threaten the global applicability and fairness of current assessment instruments.

4.2. Geographic and Economic Disparities

The stark geographic concentration of research in high-income countries (90.8% of publications) represents a fundamental threat to the global validity of PD assessment scales. This disparity is particularly concerning given epidemiological projections showing the fastest growth in PD prevalence in low- and middle-income countries. The near-absence of research from Africa (1.8% of publications) and limited representation from South America (2.8%) creates significant knowledge gaps about scale performance in these populations.
These geographic biases (Figure 8 and Table 2) have direct clinical implications. Assessment scales developed and validated primarily in Western, educated, industrialized, rich, and democratic (WEIRD) populations may not adequately capture the disease experience in other cultural contexts. Factors such as cultural attitudes toward neurological symptoms, linguistic nuances in symptom description, and healthcare system differences can all influence scale performance and interpretation.

4.3. Demographic Representation Gaps

The underrepresentation of critical demographic factors represents a significant methodological weakness in the current literature. Only 7.3% of studies captured race/ethnicity data, despite well-documented disparities in PD presentation and progression across racial groups. This gap is particularly problematic given evidence that cognitive assessment tools like the MoCA and MMSE demonstrate differential performance across racial and ethnic groups.
The finding that only 11.9% of studies adjusted cognitive assessments for educational background is especially concerning. Educational attainment significantly influences performance on cognitive tests, and failure to account for this factor can lead to systematic biases in diagnosis and severity assessment. This methodological gap may contribute to disparities in PD dementia diagnosis and treatment access.

4.4. Gender Disparities in Research Leadership

The persistent underrepresentation of women in research leadership positions (37.1% of senior authors) reflects broader patterns in neurological research. This disparity is significant because research leadership influences study design, population selection, and interpretation of findings. Given known sex differences in PD presentation and progression, diverse research leadership is crucial for ensuring comprehensive and unbiased assessment tool development.

4.5. Methodological Bias Implications

The identified methodological gaps have direct implications for clinical practice and research validity. The absence of studies addressing digital literacy barriers (0.0% of publications) is particularly concerning in an era of increasing digital health implementation. As PD assessment increasingly incorporates technology-based tools, failure to consider digital access and literacy creates new forms of bias that may exacerbate existing health disparities.
Similarly, inadequate reporting of administrator training (present in only 30.3% of studies) raises questions about inter-rater reliability and consistency across different clinical settings. This gap is particularly problematic for global implementation of assessment scales, where training resources and expertise may vary significantly.

4.6. Implications for Clinical Practice and Policy

These findings have immediate implications for clinical practice and health policy. First, clinicians must recognize that current PD assessment scales may not perform equally across all patient populations. This recognition should inform clinical decision-making, particularly when evaluating patients from underrepresented demographic groups.
Second, regulatory agencies and professional organizations should consider developing guidance for bias assessment and mitigation in clinical scale development and validation. The systematic gaps identified in this analysis suggest current validation standards are insufficient to ensure fairness across diverse populations.

4.7. Keyword Co-occurrence Network Analysis

The keyword co-occurrence network visualization from our scientometric analysis (Figure 9) provides insights into the research landscape of Parkinson's disease clinical assessment scales. This network analysis reveals several critical patterns that illuminate the structure and biases within the field.
Network Structure and Core Themes
The visualization demonstrates the interconnected nature of Parkinson's disease research, with dominant clusters representing core research domains. The central positioning of terms like "parkinson disease," "assessment," and "motor" indicates these serve as foundational concepts bridging multiple research areas. The network structure reveals how clinical assessment scales function as connecting nodes between different aspects of PD research, from basic motor symptoms to complex cognitive evaluations.
Cognitive Assessment Integration
A significant cluster focuses on cognitive assessment, with terms like "montreal cognitive assessment," "cognitive impairment," and "mild cognitive impairment" forming interconnected nodes. This clustering pattern aligns with current research priorities emphasizing non-motor aspects of PD, particularly cognitive decline. The prominence of MoCA-related terms reflects its widespread clinical adoption, though the network also reveals ongoing methodological concerns about cultural bias and educational adjustments.
Scale Validation Prominence
The network prominently features validation-related terms including "reliability," "validity," and "factor analysis". This clustering suggests the research community's recognition of methodological challenges in assessment tool development. The interconnections between validation terms and specific scales (UPDRS, MDS-UPDRS) highlight ongoing efforts to ensure psychometric soundness across diverse populations.
Research Gap Visualization
The network reveals notable absences that correlate with the demographic and geographic biases identified in the broader analysis. Terms related to cultural adaptation, linguistic validation, and diversity considerations appear less central, suggesting these remain peripheral concerns rather than core research priorities. The predominant clustering around established Western-developed scales may perpetuate assessment biases in diverse populations.

4.8. Future Research Directions

Our findings point to several critical research priorities:
  • Inclusive validation studies: Large-scale validation studies specifically designed to assess scale performance across diverse demographic groups are urgently needed.
  • Cultural adaptation research: Systematic investigation of cultural and linguistic factors affecting scale performance should be prioritized, particularly for global implementation.
  • Digital equity research: As assessment tools increasingly incorporate digital components, research addressing digital literacy and access barriers becomes essential.
  • Bias mitigation strategies: Development and testing of specific interventions to reduce bias in clinical assessment should be prioritized.

4.9. Limitations

This study has several limitations. First, our analysis was limited to English-language publications, potentially underrepresenting research from non-English speaking countries. Second, the retrospective nature of the analysis limits our ability to assess causality between identified biases and clinical outcomes. Third, publication bias may have influenced our findings, as studies with negative or null results regarding bias may be less likely to be published.
Additionally, our assessment of bias domains relied on information reported in published manuscripts, which may not fully capture all bias considerations addressed during study conduct. Finally, the binary assessment of bias domain presence or absence may not adequately capture the quality or depth of bias consideration in individual studies.

5. Conclusions

This comprehensive scientometric analysis reveals pervasive and systematic biases in the foundational literature supporting PD clinical assessment scales. The stark geographic concentration of research in high-income countries, persistent gender disparities in research leadership, and critical gaps in demographic and methodological bias consideration collectively threaten the fairness and global applicability of current assessment instruments.
These findings have immediate implications for clinical practice, requiring heightened awareness of potential biases when using PD assessment scales across diverse populations. More broadly, our results highlight the urgent need for systematic reform in clinical scale development and validation practices.
Moving forward, the research community must prioritize inclusive research practices, cross-cultural validation studies, and the development of bias-aware assessment tools. Only through such systematic efforts can we ensure that the fundamental tools used to evaluate PD provide fair and accurate assessments across all affected populations, regardless of geography, demographics, or socioeconomic status.
The path toward more equitable PD assessment requires coordinated action from researchers, clinicians, regulatory agencies, and funding organizations. The evidence presented here provides a roadmap for these efforts, highlighting both the scope of current challenges and the opportunities for meaningful improvement in the fairness and inclusivity of PD clinical assessment.
Table 3. Study Characteristics.
Characteristic Value
Total papers analyzed 109
Publication period 1996-2024
Total authors examined 655
Countries represented 34
Journals represented 47
Most common assessment scales UPDRS, MDS-UPDRS, MoCA, MMSE
Table 4. Distribution of Research Publications by Journal.
JOURNAL TITLE COUNTS
1. Movement Disorders 29
2. Parkinsonism & Related Disorders 11
3. Movement Disorders Clinical Practice 5
4. Neurology 5
5. Neurological Sciences 4
6. Journal of Neurology 4
7. Journal of Parkinson’s Disease 4
8. International Journal of Geriatric Psychiatry 2
9. American Journal of Alzheimer's Disease & Other Dementias 2
10. European Journal of Neurology 2
11. Revue Neurologique 2
12. The Clinical Neuropsychologist 2
13. Parkinson's Disease 2
14. Neurología (English Edition) 2
15. Aging Clinical and Experimental Research 1
16. Health Informatics Journal 1
17. Frontiers in Neurology 1
18. Digital Biomarkers 1
19. Dementia and Geriatric Cognitive Disorders Extra 1
20. Clinical Parkinsonism & Related Disorders 1
21. Dementia & Neuropsychologia 1
22. Clinical Neuropharmacology 1
23. Brain and Cognition 1
24. Alzheimer Disease & Associated Disorders 1
25. Assessment 1
26. Archives of Clinical Neuropsychology 1
27. Applied Neuropsychology Adult 1
28. International Journal of Speech-Language Pathology 1
29. JAMA Neurology 1
30. IEEE Transactions on Neural Systems and Rehabilitation Engineering 1
31. Health and Quality of Life Outcomes 1
32. Journal of the Neurological Sciences 1
33. Journal of the American Geriatrics Society 1
34. Journal of Neurology Neurosurgery & Psychiatry 1
35. Journal of Movement Disorders 1
36. Journal of Clinical Neurology 1
37. Journal of Clinical Neuroscience 1
38. Journal of Advanced Nursing 1
39. Journal of Clinical Medicine 1
40. Ideggyógyászati Szemle 1
41. Neurological Research 1
42. Medical Image Analysis 1
43. NeuroRehabilitation: An International, Interdisciplinary Journal 1
44. Neurologia i Neurochirurgia Polska 1
45. Neurology India 1
46. PLOS ONE 1
47. Value in Health 1

Author Contributions

Authors are listed alphabetically. All authors contributed equally to the best of their abilities.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgements

AI-assisted tools were used to improve the clarity and coherence of certain sections of this manuscript. Their use was limited to language refinement, and care was taken to ensure no distortion of the research content or data interpretation.

Data Availability Statement

The datasets generated and analyzed during this study are available from the corresponding author upon reasonable request.

Ethics Statement

This study involved analysis of published literature and did not require ethics approval.

References

  1. Antonini, A.; Abbruzzese, G.; Ferini-Strambi, L.; et al. Validation of the Italian version of the Movement Disorder Society--Unified Parkinson’s Disease Rating Scale. Neurological Sciences 2013, 34(5), 683–687. [CrossRef]
  2. Abdolahi, A.; Bull, M. T.; Darwin, K. C.; et al. A feasibility study of conducting the Montreal Cognitive Assessment remotely in individuals with movement disorders. Health Informatics Journal 2016, 22(2), 304–311. [CrossRef]
  3. Uc, E. Y.; McDermott, M. P.; Marder, K. S.; et al. Incidence of and risk factors for cognitive impairment in an early Parkinson disease clinical trial cohort. Neurology 2009, 73(18), 1469–1477. [CrossRef]
  4. Foley, T.; McKinlay, A.; Warren, N.; et al. Assessing the sensitivity and specificity of cognitive screening measures for people with Parkinson’s disease. NeuroRehabilitation 2018, 43(4), 491–500. [CrossRef]
  5. Konstantopoulos, K.; Vogazianos, P.; Doskas, T. Normative data of the Montreal Cognitive Assessment in the Greek population and parkinsonian dementia. Archives of Clinical Neuropsychology 2016, 31(3), 246–253. [CrossRef]
  6. Palmer, J. L.; Coats, M. A.; Roe, C. M.; et al. Unified Parkinson’s Disease Rating Scale-Motor Exam: inter-rater reliability of advanced practice nurse and neurologist assessments. Journal of Advanced Nursing 2010, 66(6), 1382–1387. [CrossRef]
  7. Badrkhahan, S. Z.; Sikaroodi, H.; Sharifi, F.; et al. Validity and reliability of the Persian version of the Montreal Cognitive Assessment (MoCA-P) scale among subjects with Parkinson’s disease. Applied Neuropsychology: Adult 2020, 27(5), 431–439. [CrossRef]
  8. Hoops, S.; Nazem, S.; Siderowf, A. D.; et al. Validity of the MoCA and MMSE in the detection of MCI and dementia in Parkinson disease. Neurology 2009, 73(21), 1738–1745. [CrossRef]
  9. Allison, K. M.; Hustad, K. C. Impact of sentence length and phonetic complexity on intelligibility of 5-year-old children with cerebral palsy. International Journal of Speech-Language Pathology 2014, 16(4), 396–407. [CrossRef]
  10. Goetz, C. G.; Choi, D.; Guo, Y.; et al. It is as it was: MDS-UPDRS part III scores cannot be combined with other parts to give a valid sum. Movement Disorders 2023, 38(2), 342–347. [CrossRef]
  11. Martinez-Martin, P.; Rodriguez-Blazquez, C.; Alvarez-Sanchez, M.; et al. Expanded and independent validation of the Movement Disorder Society-Unified Parkinson’s Disease Rating Scale (MDS-UPDRS). Journal of Neurology 2013, 260(1), 228–236. [CrossRef]
  12. Williams, S.; Wong, D.; Alty, J. E.; et al. Parkinsonian hand or clinician’s eye? Finger tap bradykinesia interrater reliability for 21 movement disorder experts. Journal of Parkinson’s Disease 2023, 13(4), 525–536. [CrossRef]
  13. Goldman, J. G.; Stebbins, G. T.; Leung, V.; et al. Relationships among cognitive impairment, sleep, and fatigue in Parkinson’s disease using the MDS-UPDRS. Parkinsonism & Related Disorders 2014, 20(11), 1135–1139. [CrossRef]
  14. Rabey, J. M.; Klein, C.; Molochnikov, A.; et al. Comparison of the Unified Parkinson’s Disease Rating Scale and the Short Parkinson’s Evaluation Scale in patients with Parkinson’s disease after levodopa loading. Clinical Neuropharmacology 2002, 25(2), 83–88. [CrossRef]
  15. Stebbins, G. T.; Goetz, C. G. Factor structure of the Unified Parkinson’s Disease Rating Scale: Motor Examination section. Movement Disorders 1998, 13(4), 633–636. [CrossRef]
  16. Dalrymple-Alford, J. C.; MacAskill, M. R.; Nakas, C. T.; et al. The MoCA: well-suited screen for cognitive impairment in Parkinson disease. Neurology 2010, 75(19), 1717–1725. [CrossRef]
  17. Park, J.; Oh, E.; Koh, S.-B.; et al. Evaluating the validity and reliability of the Korean version of the Scales for Outcomes in Parkinson’s Disease-cognition. Journal of Movement Disorders 2024, 17(3), 328–332. [CrossRef]
  18. Roalf, D. R.; Moore, T. M.; Wolk, D. A.; et al. Defining and validating a short form Montreal Cognitive Assessment (s-MoCA) for use in neurodegenerative disease. Journal of Neurology, Neurosurgery & Psychiatry 2016, 87(12), 1303–1310. [CrossRef]
  19. Parashos, S. A.; Elm, J.; Boyd, J. T.; et al. Validation of an ambulatory capacity measure in Parkinson disease: a construct derived from the Unified Parkinson’s Disease Rating Scale. Journal of Parkinson’s Disease 2015, 5(1), 67–73. [CrossRef]
  20. Pal, G.; Goetz, C. G. Assessing bradykinesia in parkinsonian disorders. Frontiers in Neurology 2013, 4, 54. [CrossRef]
  21. Siuda, J.; Boczarska-Jedynak, M.; Budrewicz, S.; et al. Validation of the Polish version of the Movement Disorder Society-Unified Parkinson’s Disease Rating Scale (MDS-UPDRS). Neurologia i Neurochirurgia Polska 2020, 54(5), 416–425. [CrossRef]
  22. Smith, C. R.; Cavanagh, J.; Sheridan, M.; et al. Factor structure of the Montreal Cognitive Assessment in Parkinson disease. International Journal of Geriatric Psychiatry 2020, 35(2), 188–194. [CrossRef]
  23. Aarsland, D.; Brønnick, K.; Larsen, J. P.; et al. Cognitive impairment in incident, untreated Parkinson disease: the Norwegian ParkWest study. Neurology 2009, 72(13), 1121–1126. [CrossRef]
  24. Holroyd, S.; Currie, L. J.; Wooten, G. F. Validity, sensitivity and specificity of the mentation, behavior and mood subscale of the UPDRS. Neurological Research 2008, 30(5), 493–496. [CrossRef]
  25. Isella, V.; Mapelli, C.; Morielli, N.; et al. Validity and metric of MiniMental Parkinson and MiniMental State Examination in Parkinson’s disease. Neurological Sciences 2013, 34(10), 1751–1758. [CrossRef]
  26. Martinez-Martin, P.; Prieto, L.; Forjaz, M. J. Longitudinal metric properties of disability rating scales for Parkinson’s disease. Value in Health 2006, 9(6), 386–393. [CrossRef]
  27. Horváth, K.; Aschermann, Z.; Ács, P.; et al. Validation of the Hungarian Unified Dyskinesia Rating Scale. Ideggyógyászati Szemle 2015, 68(5-6), 183–188. [CrossRef]
  28. Gülke, E.; Alsalem, M.; Kirsten, M.; et al. Comparison of Montreal cognitive assessment and Mattis dementia rating scale in the preoperative evaluation of subthalamic stimulation in Parkinson’s disease. PLoS One 2022, 17(4), e0265314. [CrossRef]
  29. Grill, S.; Weuve, J.; Weisskopf, M. G. Predicting outcomes in Parkinson’s disease: comparison of simple motor performance measures and The Unified Parkinson’s Disease Rating Scale-III. Journal of Parkinson’s Disease 2011, 1(3), 287–298. [CrossRef]
  30. Patrick, S. K.; Denington, A. A.; Gauthier, M. J.; et al. Quantification of the UPDRS rigidity scale. IEEE Transactions on Neural Systems and Rehabilitation Engineering 2001, 9(1), 31–41. [CrossRef]
  31. Martignoni, E.; Franchignoni, F.; Pasetti, C.; et al. Psychometric properties of the Unified Parkinson’s Disease Rating Scale and of the Short Parkinson’s Evaluation Scale. Neurological Sciences 2003, 24(3), 190–191. [CrossRef]
  32. Ohta, K.; Takahashi, K.; Gotoh, J.; et al. Screening for impaired cognitive domains in a large Parkinson’s disease population and its application to the diagnostic procedure for Parkinson’s disease dementia. Dementia and Geriatric Cognitive Disorders Extra 2014, 4(2), 147–159. [CrossRef]
  33. Hely, M. A.; Reid, W. G. J.; Adena, M. A.; et al. The Sydney multicenter study of Parkinson’s disease: the inevitability of dementia at 20 years. Movement Disorders 2008, 23(6), 837–844. [CrossRef]
  34. Qiao, J.; Wang, X.; Lu, W.; et al. Validation of neuropsychological tests to screen for dementia in Chinese patients with Parkinson’s disease. American Journal of Alzheimer’s Disease and Other Dementias 2016, 31(4), 368–374. [CrossRef]
  35. Benge, J. F.; Kiselica, A. M. Rapid communication: Preliminary validation of a telephone adapted Montreal Cognitive Assessment for the identification of mild cognitive impairment in Parkinson’s disease. The Clinical Neuropsychologist 2021, 35(1), 133–147. [CrossRef]
  36. Lu, M.; Zhao, Q.; Poston, K. L.; et al. Quantifying Parkinson’s disease motor severity under uncertainty using MDS-UPDRS videos. Medical Image Analysis 2021, 73, 102179. [CrossRef]
  37. Goetz, C. G.; Stebbins, G. T. Assuring interrater reliability for the UPDRS motor section: utility of the UPDRS teaching tape. Movement Disorders 2004, 19(12), 1453–1456. [CrossRef]
  38. Parveen, S. Comparison of self and proxy ratings for motor performance of individuals with Parkinson disease. Brain and Cognition 2016, 103, 62–69. [CrossRef]
  39. Movement Disorder Society Task Force on Rating Scales for Parkinson’s Disease. The Unified Parkinson’s Disease Rating Scale (UPDRS): status and recommendations. Movement Disorders 2003, 18(7), 738–750. [CrossRef]
  40. Louis, E. D.; Levy, G.; Côte, L. J.; et al. Diagnosing Parkinson’s disease using videotaped neurological examinations: validity and factors that contribute to incorrect diagnoses. Movement Disorders 2002, 17(3), 513–517. [CrossRef]
  41. Post, B.; Merkus, M. P.; de Bie, R. M. A.; et al. Unified Parkinson’s disease rating scale motor examination: are ratings of nurses, residents in neurology, and movement disorders specialists interchangeable? Movement Disorders 2005, 20(12), 1577–1584. [CrossRef]
  42. Vassar, S. D.; Bordelon, Y. M.; Hays, R. D.; et al. Confirmatory factor analysis of the motor unified Parkinson’s disease rating scale. Parkinson’s Disease 2012, 2012, 719167. [CrossRef]
  43. Abdolahi, A.; Scoglio, N.; Killoran, A.; et al. Potential reliability and validity of a modified version of the Unified Parkinson’s Disease Rating Scale that could be administered remotely. Parkinsonism & Related Disorders 2013, 19(2), 218–221. [CrossRef]
  44. Siderowf, A.; McDermott, M.; Kieburtz, K.; et al. Test-retest reliability of the unified Parkinson’s disease rating scale in patients with early Parkinson’s disease: results from a multicenter clinical trial. Movement Disorders 2002, 17(4), 758–763. [CrossRef]
  45. Pedersen, K. F.; Larsen, J. P.; Aarsland, D. Validation of the Unified Parkinson’s Disease Rating Scale (UPDRS) section I as a screening and diagnostic instrument for apathy in patients with Parkinson’s disease. Parkinsonism & Related Disorders 2008, 14(3), 183–186. [CrossRef]
  46. Krishnan, S.; Justus, S.; Meluveettil, R.; et al. Validity of Montreal Cognitive Assessment in non-English speaking patients with Parkinson’s disease. Neurology India 2015, 63(1), 63–67. [CrossRef]
  47. Hendershott, T. R.; Zhu, D.; Llanes, S.; et al. Comparative sensitivity of the MoCA and Mattis Dementia Rating Scale-2 in Parkinson’s disease. Movement Disorders 2019, 34(2), 285–291. [CrossRef]
  48. Goetz, C. G.; Stebbins, G. T.; Tilley, B. C. Calibration of unified Parkinson’s disease rating scale scores to Movement Disorder Society-unified Parkinson’s disease rating scale scores. Movement Disorders 2012, 27(10), 1239–1242. [CrossRef]
  49. Aarsland, D.; Andersen, K.; Larsen, J. P.; et al. Prevalence and characteristics of dementia in Parkinson disease. Archives of Neurology 2003, 60(3), 387. [CrossRef]
  50. Ozdilek, B.; Kenangil, G. Validation of the Turkish Version of the Montreal Cognitive Assessment Scale (MoCA-TR) in patients with Parkinson’s disease. The Clinical Neuropsychologist 2014, 28(2), 333–343. [CrossRef]
  51. Kirsch-Darrow, L.; Zahodne, L. B.; Hass, C.; et al. How cautious should we be when assessing apathy with the Unified Parkinson’s Disease Rating Scale? Movement Disorders 2009, 24(5), 684–688. [CrossRef]
  52. Martínez-Martín, P.; Benito-León, J.; Alonso, F.; et al. Patients’, doctors’, and caregivers’ assessment of disability using the UPDRS-ADL section: are these ratings interchangeable? Movement Disorders 2003, 18(9), 985–992. [CrossRef]
  53. Lipsmeier, F.; Taylor, K. I.; Kilchenmann, T.; et al. Evaluation of smartphone-based testing to generate exploratory outcome measures in a phase 1 Parkinson’s disease clinical trial. Movement Disorders 2018, 33(8), 1287–1297. [CrossRef]
  54. Sulzer, P.; Becker, S.; Maetzler, W.; et al. Validation of a novel Montreal Cognitive Assessment scoring algorithm in non-demented Parkinson’s disease patients. Journal of Neurology 2018, 265(9), 1976–1984. [CrossRef]
  55. Lawton, M.; Kasten, M.; May, M. T.; et al. Validation of conversion between mini-mental state examination and montreal cognitive assessment. Movement Disorders 2016, 31(4), 593–596. [CrossRef]
  56. Soares, T.; Vale, T. C.; Guedes, L. C.; et al. Validation of the Portuguese MDS-UPDRS: Challenges to obtain a scale applicable to different linguistic cultures. Movement Disorders Clinical Practice 2025, 12(1), 34–42. [CrossRef]
  57. Harvey, P. D.; Ferris, S. H.; Cummings, J. L.; et al. Evaluation of dementia rating scales in Parkinson’s disease dementia. American Journal of Alzheimer’s Disease and Other Dementias 2010, 25(2), 142–148. [CrossRef]
  58. Nazem, S.; Siderowf, A. D.; Duda, J. E.; et al. Montreal cognitive assessment performance in patients with Parkinson’s disease with “Normal” global cognition according to mini-mental state examination score. Journal of the American Geriatrics Society 2009, 57(2), 304–308. [CrossRef]
  59. Nie, K.; Zhang, Y.; Wang, L.; et al. A pilot study of psychometric properties of the Beijing version of Montreal Cognitive Assessment in patients with idiopathic Parkinson’s disease in China. Journal of Clinical Neuroscience 2012, 19(11), 1497–1500. [CrossRef]
  60. Khalil, H.; Aldaajani, Z. F.; Aldughmi, M.; et al. Validation of the Arabic version of the Movement Disorder Society-Unified Parkinson’s Disease Rating Scale. Movement Disorders 2022, 37(4), 826–841. [CrossRef]
  61. Raciti, L.; Nicoletti, A.; Mostile, G.; et al. Accuracy of MDS-UPDRS section IV for detecting motor fluctuations in Parkinson’s disease. Neurological Sciences 2019, 40(6), 1271–1273. [CrossRef]
  62. Metman, L. V.; Myre, B.; Verwey, N.; et al. Test-retest reliability of UPDRS-III, dyskinesia scales, and timed motor tests in patients with advanced Parkinson’s disease: an argument against multiple baseline assessments. Movement Disorders 2004, 19(9), 1079–1084. [CrossRef]
  63. Bezdicek, O.; Červenková, M.; Moore, T. M.; et al. Determining a short form Montreal Cognitive Assessment (s-MoCA) Czech version: Validity in mild cognitive impairment Parkinson’s disease and cross-cultural comparison. Assessment 2020, 27(8), 1960–1970. [CrossRef]
  64. Kleiner-Fisman, G.; Stern, M. B.; Fisman, D. N. Health-related quality of life in Parkinson disease: correlation between Health Utilities Index III and Unified Parkinson’s Disease Rating Scale (UPDRS) in U.S. male veterans. Health and Quality of Life Outcomes 2010, 8(1), 91. [CrossRef]
  65. Starkstein, S. E.; Merello, M. The Unified Parkinson’s Disease Rating Scale: validation study of the mentation, behavior, and mood section. Movement Disorders 2007, 22(15), 2156–2161. [CrossRef]
  66. Kasten, M.; Bruggemann, N.; Schmidt, A.; et al. Validity of the MoCA and MMSE in the detection of MCI and dementia in Parkinson disease. Neurology 2010, 75(5), 478; author reply 478–9. [CrossRef]
  67. Buck, P. O.; Wilson, R. E.; Seeberger, L. C.; et al. Examination of the UPDRS bradykinesia subscale: equivalence, reliability and validity. Journal of Parkinson’s Disease 2011, 1(3), 253–258. [CrossRef]
  68. Ismail, Z.; Rajji, T. K.; Shulman, K. I. Brief cognitive screening instruments: an update. International Journal of Geriatric Psychiatry 2010, 25(2), 111–120. [CrossRef]
  69. Ruzafa-Valiente, E.; Fernández-Bobadilla, R.; García-Sánchez, C.; et al. Parkinson’s Disease--Cognitive Functional Rating Scale across different conditions and degrees of cognitive impairment. Journal of the Neurological Sciences 2016, 361, 66–71. [CrossRef]
  70. Tumas, V.; Borges, V.; Ballalai-Ferraz, H.; et al. Some aspects of the validity of the Montreal Cognitive Assessment (MoCA) for evaluating cognitive impairment in Brazilian patients with Parkinson’s disease. Dementia & Neuropsychologia 2016, 10(4), 333–338. [CrossRef]
  71. van Hilten, J. J.; van der Zwan, A. D.; Zwinderman, A. H.; et al. Rating impairment and disability in Parkinson’s disease: evaluation of the Unified Parkinson’s Disease Rating Scale. Movement Disorders 1994, 9(1), 84–88. [CrossRef]
  72. Martinez-Martin, P.; Chaudhuri, K. R.; Rojo-Abuin, J. M.; et al. Assessing the non-motor symptoms of Parkinson’s disease: MDS-UPDRS and NMS Scale. European Journal of Neurology 2015, 22(1), 37–43. [CrossRef]
  73. Ramsay, N.; Macleod, A. D.; Alves, G.; et al. Validation of a UPDRS-/MDS-UPDRS-based definition of functional dependency for Parkinson’s disease. Parkinsonism & Related Disorders 2020, 76, 49–53. [CrossRef]
  74. Isella, V.; Mapelli, C.; Siri, C.; et al. Validation and attempts of revision of the MDS-recommended tests for the screening of Parkinson’s disease dementia. Parkinsonism & Related Disorders 2014, 20(1), 32–36. [CrossRef]
  75. Bugalho, P.; da Silva, J. A.; Cargaleiro, I.; et al. Psychiatric symptoms screening in the early stages of Parkinson’s disease. Journal of Neurology 2012, 259(1), 124–131. [CrossRef]
  76. Freitas, S.; Simões, M. R.; Alves, L.; et al. Montreal cognitive assessment. Alzheimer Disease and Associated Disorders 2013, 27(1), 37–43. [CrossRef]
  77. Kremer, N. I.; Smid, A.; Lange, S. F.; et al. Supine MDS-UPDRS-III assessment: An explorative study. Journal of Clinical Medicine 2023, 12(9), 3108. [CrossRef]
  78. Goetz, C. G.; Tilley, B. C.; Shaftman, S. R.; et al. Movement Disorder Society-sponsored revision of the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS): scale presentation and clinimetric testing results. Movement Disorders 2008, 23(15), 2129–2170. [CrossRef]
  79. Park, J.; Koh, S. B.; Kwon, K. Y.; et al. Validation study of the official Korean version of the Movement Disorder Society-Unified Parkinson’s Disease Rating Scale. Journal of Clinical Neurology 2020, 16(4), 633–645. [CrossRef]
  80. Wissel, B. D.; Mitsi, G.; Dwivedi, A. K.; et al. Tablet-based application for objective measurement of motor fluctuations in Parkinson disease. Digital Biomarkers 2017, 1(2), 126–135. [CrossRef]
  81. Statucka, M.; Cherian, K.; Fasano, A.; et al. Multiculturalism: A challenge for cognitive screeners in Parkinson’s disease. Movement Disorders Clinical Practice 2021, 8(5), 733–742. [CrossRef]
  82. Stocchi, F.; Radicati, F. G.; Chaudhuri, K. R.; et al. The Parkinson’s Disease Composite Scale: results of the first validation study. European Journal of Neurology 2018, 25(3), 503–511. [CrossRef]
  83. Forjaz, M. J.; Ayala, A.; Testa, C. M.; et al. Proposing a Parkinson’s disease-specific tremor scale from the MDS-UPDRS. Movement Disorders 2015, 30(8), 1139–1143. [CrossRef]
  84. Hariz, G.-M.; Fredricks, A.; Stenmark-Persson, R.; et al. Blinded versus unblinded evaluations of motor scores in patients with Parkinson’s disease randomized to deep brain stimulation or best medical therapy. Movement Disorders Clinical Practice 2021, 8(2), 286–287. [CrossRef]
  85. Goetz, C. G.; Luo, S.; Wang, L.; et al. Handling missing values in the MDS-UPDRS. Movement Disorders 2015, 30(12), 1632–1638. [CrossRef]
  86. Morinan, G.; Hauser, R. A.; Schrag, A.; et al. Abbreviated MDS-UPDRS for remote monitoring in PD identified using exhaustive computational search. Parkinson’s Disease 2022, 2022, 2920255. [CrossRef]
  87. Regnault, A.; Boroojerdi, B.; Meunier, J.; et al. Does the MDS-UPDRS provide the precision to assess progression in early Parkinson’s disease? Learnings from the Parkinson’s progression marker initiative cohort. Journal of Neurology 2019, 266(8), 1927–1936. [CrossRef]
  88. Jenkins, M. E.; Johnson, A. M.; Holmes, J. D.; et al. Predictive validity of the UPDRS postural stability score and the Functional Reach Test, when compared with ecologically valid reaching tasks. Parkinsonism & Related Disorders 2010, 16(6), 409–411. [CrossRef]
  89. Goetz, C. G.; Fahn, S.; Martinez-Martin, P.; et al. Movement Disorder Society-sponsored revision of the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS): Process, format, and clinimetric testing plan. Movement Disorders 2007, 22(1), 41–47. [CrossRef]
  90. Raciti, L.; Nicoletti, A.; Mostile, G.; et al. Validation of the UPDRS section IV for detection of motor fluctuations in Parkinson’s disease. Parkinsonism & Related Disorders 2016, 27, 98–101. [CrossRef]
  91. Forjaz, M. J.; Martinez-Martin, P. Metric attributes of the unified Parkinson’s disease rating scale 3.0 battery: part II, construct and content validity. Movement Disorders 2006, 21(11), 1892–1898. [CrossRef]
  92. Zitser, J.; Peretz, C.; Ber David, A.; et al. Validation of the Hebrew version of the Movement Disorder Society—unified Parkinson’s disease rating scale. Parkinsonism & Related Disorders 2017, 45, 7–12. [CrossRef]
  93. Tosin, M. H. S.; Sanchez-Ferro, A.; Wu, R.-M.; et al. In-home remote assessment of the MDS-UPDRS part III: Multi-cultural development and validation of a guide for patients. Movement Disorders Clinical Practice 2024, 11(12), 1576–1581. [CrossRef]
  94. Benge, J. F.; Balsis, S.; Madeka, T.; et al. Factor structure of the Montreal Cognitive Assessment items in a sample with early Parkinson’s disease. Parkinsonism & Related Disorders 2017, 41, 104–108. [CrossRef]
  95. Gallagher, D. A.; Goetz, C. G.; Stebbins, G.; et al. Validation of the MDS-UPDRS Part I for nonmotor symptoms in Parkinson’s disease. Movement Disorders 2012, 27(1), 79–83. [CrossRef]
  96. Kenny, L.; Azizi, Z.; Moore, K.; et al. Inter-rater reliability of hand motor function assessment in Parkinson’s disease: Impact of clinician training. Clinical Parkinsonism & Related Disorders 2024, 11, 100278. [CrossRef]
  97. de Deus Fonticoba, T.; Santos García, D.; Macías Arribí, M. Variabilidad en la exploración motora de la enfermedad de Parkinson entre el neurólogo experto en trastornos del movimiento y la enfermera especializada. Neurología (English Edition) 2019, 34(8), 520–526. [CrossRef]
  98. Dujardin, K.; Duhem, S.; Guerouaou, N.; et al. Validation in French of the Montreal Cognitive Assessment 5-Minute, a brief cognitive screening test for phone administration. Revue Neurologique 2021, 177(8), 972–979. [CrossRef]
  99. Tosin, M. H. S.; Stebbins, G. T.; Comella, C.; et al. Does MDS-UPDRS provide greater sensitivity to mild disease than UPDRS in de novo Parkinson’s disease? Movement Disorders Clinical Practice 2021, 8(7), 1092–1099. [CrossRef]
  100. De Deus Fonticoba, T.; Santos García, D.; Macías Arribí, M. Inter-rater variability in motor function assessment in Parkinson’s disease between experts in movement disorders and nurses specialising in PD management. Neurología (English Edition) 2019, 34(8), 520–526. [CrossRef]
  101. Kletzel, S. L.; Hernandez, J. M.; Miskiel, E. F.; et al. Evaluating the performance of the Montreal Cognitive Assessment in early stage Parkinson’s disease. Parkinsonism & Related Disorders 2017, 37, 58–64. [CrossRef]
  102. Gill, D. J.; Freshman, A.; Blender, J. A.; et al. The Montreal cognitive assessment as a screening tool for cognitive impairment in Parkinson’s disease. Movement Disorders 2008, 23(7), 1043–1046. [CrossRef]
  103. Martínez-Martín, P.; Gil-Nagel, A.; Gracia, L. M.; et al. Unified Parkinson’s Disease Rating Scale characteristics and structure. The Cooperative Multicentric Group. Movement Disorders 1994, 9(1), 76–83. [CrossRef]
  104. Gasser, A.-I.; Calabrese, P.; Kalbe, E.; et al. Cognitive screening in Parkinson’s disease: Comparison of the Parkinson Neuropsychometric Dementia Assessment (PANDA) with 3 other short scales. Revue Neurologique 2016, 172(2), 138–145. [CrossRef]
  105. D’Iorio, A.; Aiello, E. N.; Amboni, M.; et al. Validity and diagnostics of the Italian version of the Montreal Cognitive Assessment (MoCA) in non-demented Parkinson’s disease patients. Aging Clinical and Experimental Research 2023, 35(10), 2157–2163. [CrossRef]
  106. Evers, L. J. W.; Krijthe, J. H.; Meinders, M. J.; et al. Measuring Parkinson’s disease over time: The real-world within-subject reliability of the MDS-UPDRS. Movement Disorders 2019, 34(10), 1480–1487. [CrossRef]
  107. Louis, E. D.; Lynch, T.; Marder, K.; et al. Reliability of patient completion of the historical section of the Unified Parkinson’s Disease Rating Scale. Movement Disorders 1996, 11(2), 185–192. [CrossRef]
  108. Fiorenzato, E.; Weis, L.; Falup-Pecurariu, C.; et al. MoCA vs. MMSE sensitivity as screening instruments of cognitive impairment in PD, MSA and PSP patients. Parkinsonism & Related Disorders 2016, 22, e59–e60. [CrossRef]
Figure 1. Bias Domain Analysis Results (n=109). PRISMA flow diagram showing study selection process for the scientometric analysis of bias and fairness in Parkinson's disease clinical assessment scales.
Figure 2. Scoring Ranges and Severity Classifications for Major Parkinson's Disease Assessment Scales.
Figure 3. Publication Trends: UPDRS, MoCA, and MMSE (1996-2024).
Figure 4. Author Countries Publishing in All Journals (All Years).
Figure 5. Gender Proportions by Category.
Figure 7. Number of Papers Published Per Year.
Figure 8. Proportion of Papers by Continent and Income.
Figure 9. Keyword Co-occurrence Network (from Abstracts).
Table 2. Geographic Distribution of Research Publications.

Region                  Publications (n)   Percentage (%)   Income Level
Europe (HIC)            48                 44.0             High
North America (HIC)     39                 35.8             High
Asia (HIC)              9                  8.3              High
Asia (LMIC)             4                  3.7              LMIC
Oceania (HIC)           3                  2.8              High
South America (LMIC)    3                  2.8              LMIC
Africa (LMIC)           2                  1.8              LMIC
North America (LMIC)    1                  0.9              LMIC
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.