Hopsworks

Model Interpretability

What is model interpretability (Explainable AI)?

Model interpretability (also known as explainable AI) is the process by which an ML model's predictions can be explained and understood by humans. In MLOps, this typically requires logging inference data and predictions together, so that an explainability library (such as Alibi, LIME, or SHAP) can later process them and produce explanations for the predictions.
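A minimal sketch of this pattern, using only the standard library and hypothetical names (`serve`, `prediction_log`, `explain`): a serving function logs each input together with its prediction, and a later job reads the log and attributes each prediction to its features. For illustration it uses a hand-coded linear model, where the exact additive attribution is `w_i * (x_i - mean_i)`; libraries such as SHAP generalize this idea to arbitrary models.

```python
# Hypothetical sketch: log inference data and predictions together,
# then explain logged predictions offline.

weights = [2.0, -1.0]  # toy linear model
bias = 0.5

def predict(x):
    return bias + sum(w * xi for w, xi in zip(weights, x))

prediction_log = []  # inference inputs and outputs stored side by side

def serve(x):
    y = predict(x)
    prediction_log.append({"features": x, "prediction": y})
    return y

serve([1.0, 3.0])
serve([3.0, 1.0])

# Later: an explanation job replays the log. For a linear model,
# attributing w_i * (x_i - mean_i) to each feature decomposes the
# prediction exactly around the mean prediction.
means = [sum(e["features"][i] for e in prediction_log) / len(prediction_log)
         for i in range(len(weights))]

def explain(entry):
    return [w * (xi - mi)
            for w, xi, mi in zip(weights, entry["features"], means)]

for entry in prediction_log:
    print(entry["prediction"], explain(entry))
```

The key MLOps point is that `prediction_log` captures features and predictions in one record; without that joint logging, the explanation job could not reconstruct what the model saw at inference time.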


Β© 2026 Hopsworks AB. All rights reserved.
