IJFANS International Journal of Food and Nutritional Sciences

ISSN: Print 2319-1775, Online 2320-7876

EXPLAINABLE AI: BRIDGING THE GAP BETWEEN BLACK BOX MODELS AND INTERPRETABILITY

Malipatil Shivashankar A

Abstract

Artificial Intelligence (AI) has achieved remarkable success across a wide range of applications, from healthcare to finance, powered by complex machine learning models such as deep neural networks. However, these "black box" models, while highly accurate, lack transparency and interpretability, making it difficult for users to understand how decisions are made. This opacity raises concerns in high-stakes domains, where understanding the rationale behind AI decisions is crucial for trust, accountability, and fairness. Explainable AI (XAI) seeks to bridge this gap by developing models that are not only accurate but also interpretable, enabling humans to understand and trust AI-driven decisions. XAI approaches fall into two broad strategies: interpretable models and post-hoc explanation methods. Interpretable models, such as decision trees and linear models, are designed with transparency in mind: their decision-making processes are inherently easy to follow, trading some expressive power for simplicity and clarity. Post-hoc explanation methods, such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), instead provide insight into the decisions of complex black box models, either by approximating them locally with simpler surrogate models or by attributing each prediction to the contributions of individual input features.
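To make the post-hoc approach concrete, the sketch below uses the open-source shap library to explain a prediction of a random forest, a typically opaque ensemble model. The synthetic dataset, model choice, and parameter values here are illustrative assumptions for demonstration, not details drawn from this paper.

import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in data: 500 samples, 5 features (an assumption for illustration).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Train an opaque "black box" model on the data.
model = RandomForestClassifier(random_state=0).fit(X, y)

# Post-hoc explanation: SHAP values are additive per-feature contributions
# that, together with the base value, sum to the model's output for a sample.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)  # each entry is one feature's contribution to the prediction

Because SHAP values decompose a single prediction feature by feature, they let a user ask "which inputs drove this decision?" without requiring any change to the underlying model, which is precisely what makes post-hoc methods attractive for already-deployed black box systems.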