Explainable AI for Parkinson’s Disease Prediction: A Machine Learning Approach with Interpretable Models

Esan, Adebimpe O., Olawade, David (ORCID: https://orcid.org/0000-0003-0188-9836), Soladoye, Afeez A., Omodunbi, Bolaji A., Adeyanju, Ibrahim A. and Aderinto, Nicholas (2025) Explainable AI for Parkinson's Disease Prediction: A Machine Learning Approach with Interpretable Models. Current Research in Translational Medicine. Article 103541.

Full text not available from this repository.

Abstract

Background

Parkinson's Disease (PD) is a chronic, progressive neurological disorder with significant clinical and economic impacts globally. Early and accurate prediction remains challenging because traditional diagnostic methods suffer from subjectivity, diagnostic delay, and variability. Machine Learning (ML) approaches offer potential solutions, yet their clinical adoption is hindered by limited interpretability. This study aimed to develop an interpretable ML model for early and accurate PD prediction using a comprehensive multimodal dataset and Explainable Artificial Intelligence (XAI) techniques.

Methods

The study applied five ML algorithms, namely Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Logistic Regression (LR), Random Forest (RF), and XGBoost, as well as a stacked ensemble method, to a publicly available dataset (n = 2105) from Kaggle. The data encompassed demographic, medical history, lifestyle, clinical symptom, cognitive, and functional assessments, with specific inclusion/exclusion criteria applied. Preprocessing involved normalization, the Synthetic Minority Oversampling Technique (SMOTE), and Sequential Backward Elimination (SBE) for feature selection. Model performance was evaluated via accuracy, precision, recall, F1-score, and Area Under the Curve (AUC). The best-performing model (RF with feature selection) was interpreted using SHAP and LIME.
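
A pipeline of this shape can be approximated with standard Python tooling. The sketch below is illustrative only, assuming scikit-learn and imbalanced-learn, a hypothetical CSV layout with a binary "Diagnosis" column, and arbitrary hyperparameters; the record does not report the authors' actual preprocessing parameters, selected feature count, or model settings.

```python
# Illustrative sketch of the described pipeline (not the authors' code).
# The filename, label column, and hyperparameters are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import classification_report, roc_auc_score
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

df = pd.read_csv("parkinsons_disease_data.csv")   # hypothetical filename
X, y = df.drop(columns=["Diagnosis"]), df["Diagnosis"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Normalization: scale features to [0, 1]
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# SMOTE: synthesize minority-class samples on the training split only
X_train, y_train = SMOTE(random_state=42).fit_resample(X_train, y_train)

# Sequential Backward Elimination: greedily drop features by CV score
rf = RandomForestClassifier(n_estimators=200, random_state=42)
sbe = SequentialFeatureSelector(
    rf, direction="backward", n_features_to_select=10, cv=5
)
X_train_sel = sbe.fit_transform(X_train, y_train)
X_test_sel = sbe.transform(X_test)

# Fit and evaluate the Random Forest on the selected features
rf.fit(X_train_sel, y_train)
print(classification_report(y_test, rf.predict(X_test_sel)))
print("AUC:", roc_auc_score(y_test, rf.predict_proba(X_test_sel)[:, 1]))
```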

Results

Random Forest combined with Sequential Backward Elimination feature selection achieved the highest predictive accuracy (93%), precision (93%), recall (93%), F1-score (93%), and AUC (0.97). SHAP and LIME analyses identified UPDRS scores, cognitive impairment, functional assessment, and motor symptoms as the primary predictors, enhancing clinical interpretability.
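
For the interpretability step, SHAP and LIME can be applied directly to a fitted forest. A minimal sketch follows, continuing the hypothetical pipeline above (rf, sbe, X_train_sel, X_test_sel); the plot choice, class labels, and feature count shown here are assumptions, not the paper's reported figures.

```python
# Minimal SHAP + LIME sketch for the fitted Random Forest above
# (continues the hypothetical pipeline; not the authors' code).
import shap
from lime.lime_tabular import LimeTabularExplainer

feature_names = list(X.columns[sbe.get_support()])

# Global attributions via TreeSHAP; keep the positive (PD) class slice,
# whose container type varies across shap versions
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test_sel)
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
shap.summary_plot(sv, X_test_sel, feature_names=feature_names)

# Local explanation of a single test case via LIME
lime_explainer = LimeTabularExplainer(
    X_train_sel,
    feature_names=feature_names,
    class_names=["No PD", "PD"],  # hypothetical label names
    mode="classification",
)
exp = lime_explainer.explain_instance(
    X_test_sel[0], rf.predict_proba, num_features=10
)
print(exp.as_list())
```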

Conclusion

The study demonstrated the effectiveness of an interpretable RF model for accurate PD prediction. Integrating ML and XAI significantly improves clinical decision-making, diagnosis timing, and personalized patient care.

Item Type: Article
Status: Published
DOI: 10.1016/j.retram.2025.103541
School/Department: London Campus
URI: https://ray.yorksj.ac.uk/id/eprint/12676
