Abstract:
The transformative potential of Artificial Intelligence (AI) in medical diagnostics is hampered by the "black-box" challenge,
where the complex workings of deep learning models obscure the clarity necessary for clinical trust. This research confronts the opacity
of AI systems by integrating Explainable Artificial Intelligence (XAI) into liver disease diagnosis, aiming to enhance interpretability and
foster healthcare professionals’ confidence in AI-driven decisions. The study investigates whether XAI can demystify the predictive
mechanics of deep learning models in medical imaging and examines its effect on the trust and reliability perceived by healthcare
professionals. A deep learning model for diagnosing liver disease from medical imaging data was developed and evaluated empirically,
with XAI integrated for transparency. The resulting model achieved 81% accuracy and gained considerable interpretability through
SHAP (SHapley Additive exPlanations) values without compromising diagnostic performance.
The integration of XAI provided insight into the model’s predictions, with Alkaline Phosphatase, for example, showing a significant
mean SHAP value of +0.07 that underscores its predictive prominence. The inclusion of XAI in AI diagnostics not only clarifies the decision-making process but also
enhances user trust, potentially leading to broader clinical application. The originality of this work lies in its fusion of deep
learning with XAI, contributing to the progressive vision of transparent, personalized medicine. This research can aid practitioners in
leveraging AI for liver disease diagnosis, advancing the domain of biomedical AI.
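To make the SHAP step concrete, the sketch below shows how a signed mean SHAP value per feature (the statistic quoted above for Alkaline Phosphatase) can be computed. It is illustrative only: the file name liver_patients.csv, the column names, the scikit-learn MLP standing in for the paper's deep learning model, and the choice of the model-agnostic KernelExplainer are assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: the file name, column names, the MLP stand-in for the
# paper's deep learning model, and the KernelExplainer choice are assumptions.
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Hypothetical tabular liver dataset with a binary "Disease" label.
df = pd.read_csv("liver_patients.csv")
feature_names = [c for c in df.columns if c != "Disease"]
X, y = df[feature_names].to_numpy(dtype=float), df["Disease"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Neural-network classifier standing in for the diagnostic deep learning model.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))


def predict_disease(x):
    """Probability of the positive (disease) class, returned as a 1-D array."""
    return model.predict_proba(x)[:, 1]


# Model-agnostic SHAP explanation anchored on a small background sample.
background = shap.sample(X_train, 100)
explainer = shap.KernelExplainer(predict_disease, background)
shap_values = explainer.shap_values(X_test[:100])  # shape: (n_samples, n_features)

# Signed mean SHAP value per feature, the statistic reported for Alkaline Phosphatase.
for name, value in zip(feature_names, shap_values.mean(axis=0)):
    print(f"{name}: {value:+.3f}")
```

KernelExplainer is used here only because it treats the classifier as a black box; if the actual model is a Keras or PyTorch network, shap.DeepExplainer or the unified shap.Explainer interface would be the natural substitutes.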