Model Interpretability

What is model interpretability (Explainable AI)?

Model interpretability (also known as explainable AI) is the process by which an ML model's predictions can be explained and understood by humans. In MLOps, this typically requires logging inference inputs and predictions together, so that a library such as Alibi, or an implementation of a method such as LIME or SHAP, can later process the logged data and produce explanations for the predictions.
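A minimal sketch of this workflow, assuming the SHAP Python package and scikit-learn are installed; the dataset, model, and slice of "logged" rows are illustrative stand-ins for inference data captured in production:

```python
# Train a model, treat a slice of inputs as logged inference data,
# then explain the predictions post hoc with SHAP.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Illustrative model; in practice this is the deployed model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Stand-in for inputs logged at inference time.
logged_inputs = X.iloc[:20]

# TreeExplainer computes per-feature SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(logged_inputs)  # shape: (20, n_features)

# Each row attributes one prediction to individual input features;
# larger magnitude means a larger contribution to that prediction.
print(dict(zip(X.columns, shap_values[0])))
```

The key operational point is that explanations are produced after the fact from the logged inputs, which is why inputs and predictions must be stored together at inference time.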
