Our paper, “Risk Prediction and Interpretation for Fall Events Using Explainable AI and Large Language Models,” has been accepted at the International Conference on Medical and Health Informatics (ICMHI 2025). This research leverages explainable AI (XAI) and Large Language Models (LLMs) to enhance the interpretation of SHAP (SHapley Additive exPlanations) values for fall risk prediction models, so that healthcare providers can make informed decisions they can trust.
In many healthcare systems, predicting and preventing falls is crucial, particularly for elderly or high-risk patients. Our research focuses not only on generating accurate predictions but also on improving the interpretability of those predictions through LLM-driven explanations. The system uses LLMs to give clinicians simplified, natural-language interpretations of SHAP values, explaining which features contribute most to a patient’s risk. This helps providers better understand AI-generated outputs, bridging the gap between complex data models and practical, real-world decision-making.
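To make the SHAP side of such a pipeline concrete, here is a minimal sketch. The gradient-boosted model, the synthetic data, and the feature names (gait speed, medication count, and so on) are illustrative assumptions for this post, not the exact setup from the paper.

```python
# Minimal sketch: per-patient SHAP values for a fall-risk classifier.
# Model choice, data, and feature names are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic patient data with clinically plausible (hypothetical) features.
rng = np.random.default_rng(0)
features = ["age", "gait_speed", "medication_count", "prior_falls", "grip_strength"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
y = (X["age"] - X["gait_speed"] + X["prior_falls"]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer yields per-feature contributions (SHAP values) for each patient.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X)

# Rank features for one patient, largest absolute contribution first.
patient = 0
contributions = sorted(
    zip(features, shap_values.values[patient]),
    key=lambda fv: abs(fv[1]),
    reverse=True,
)
for name, value in contributions:
    print(f"{name}: {value:+.3f}")
```

A ranked list like this is already more digestible than a raw SHAP plot, and it is exactly the kind of structured input an LLM can turn into prose.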
Key Innovations:
- Risk prediction models enhanced with SHAP value explanations for feature importance
- Integration of LLMs to provide clear, human-readable explanations of model outputs, reducing cognitive load for clinicians
- A focus on actionable AI, where predictions and their justifications empower healthcare providers to take preventative measures against fall events
Our system tackles one of the major challenges in healthcare AI: the black-box nature of machine learning models. By breaking down complex SHAP values using LLMs, we make it easier for clinicians to understand the “why” behind each prediction. This human-centric approach aligns with the goals of modern healthcare informatics: enabling transparency, trust, and timely interventions.
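The sketch below shows one way ranked SHAP contributions could be handed to an LLM for a plain-language summary. The prompt wording, the helper name, and the use of OpenAI’s chat API are assumptions for illustration; the paper does not tie the approach to a particular model or vendor.

```python
# Hypothetical sketch: translating SHAP contributions into clinician-friendly
# prose via an LLM. Prompt and API choice are illustrative, not prescriptive.
from openai import OpenAI

def explain_risk(contributions: list[tuple[str, float]], risk_score: float) -> str:
    """Ask an LLM to summarize SHAP contributions for a clinician."""
    lines = "\n".join(f"- {name}: {value:+.3f}" for name, value in contributions)
    prompt = (
        f"A fall-risk model scored this patient at {risk_score:.2f}.\n"
        f"Per-feature SHAP contributions (positive raises risk):\n{lines}\n"
        "In two or three sentences, explain to a clinician which factors "
        "drive this patient's risk and why."
    )
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage with the contributions computed in the earlier sketch:
# print(explain_risk(contributions, risk_score=0.82))
```

Keeping the numeric SHAP values in the prompt lets the LLM ground its narrative in the model’s actual attributions rather than free-associating about risk factors.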
Why This Matters
Falls are a leading cause of injury and complications in healthcare settings, yet many predictive models provide limited insight into why a patient is classified as high-risk. Traditional approaches often leave clinicians guessing about which factors are driving risk scores. Our research directly addresses this issue by delivering interpretable predictions and easy-to-understand explanations, allowing clinicians to focus on prevention rather than reactive care.