ML
ML stands for Machine Learning, a subfield of artificial intelligence (AI) that focuses on developing algorithms and statistical models that learn patterns from data rather than following explicitly programmed rules.
ML Artifacts (ML Assets)
ML artifacts are outputs of ML pipelines that are needed for execution of subsequent pipelines or ML applications.
MLOps
Machine learning operations (MLOps) describes processes for the automated testing of ML pipelines and the versioning of ML artifacts that help improve developer productivity.
MLOps Platform
MLOps platforms are a category of tools, services, and infrastructure designed to streamline the development, deployment, and management of machine learning models in production.
MVPS
An MVPS is a Minimal Viable Prediction Service: the smallest end-to-end ML system that delivers predictions to users, analogous to a minimum viable product (MVP) in software development.
Machine Learning Infrastructure
Machine learning infrastructure refers to the underlying frameworks, systems, and resources required to support the development, deployment, and operation of machine learning models and applications.
Machine Learning Observability
Machine Learning Observability involves closely monitoring and understanding how machine learning models perform once they're deployed into real-world environments.
Machine Learning Pipeline
A Machine Learning Pipeline is a program that takes input and produces one or more ML artifacts as output.
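A minimal sketch of a training pipeline, assuming scikit-learn and joblib and illustrative file paths: it reads input data, trains a model, and writes the trained model out as an ML artifact.

```python
# Minimal sketch of a training pipeline: read input, train a model,
# and write an ML artifact. Paths and libraries are illustrative assumptions.
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression

def training_pipeline(features_path: str, artifact_path: str) -> None:
    df = pd.read_csv(features_path)                    # input: precomputed features
    X, y = df.drop(columns=["label"]), df["label"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    joblib.dump(model, artifact_path)                  # output: ML artifact

training_pipeline("features.csv", "model.joblib")
```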
Machine Learning Systems
Machine Learning Systems can be categorized into four different types: interactive, batch, stream processing, and embedded/edge systems.
Model Architecture
A model architecture is the choice of a machine learning algorithm along with the underlying structure or design of the machine learning model.
Model Bias
Model bias refers to the presence of systematic errors in a model that can cause it to consistently make incorrect predictions.
Model Deployment
A model deployment enables clients to perform inference requests on the model over a network.
Model Development
Model development is the process of building and training a machine learning model using training data.
Model Governance
Model governance is the process for managing ML models to ensure they are secure, ethical, trustworthy, explainable, and compliant with relevant regulations.
Model Inference
Model inference (or machine learning inference) is when a model makes predictions on new, unseen input data (inference data) and produces predictions as output that are consumed by a user or service.
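A minimal sketch of batch model inference, assuming an earlier training pipeline saved the model with joblib and that the file paths shown are illustrative.

```python
# Minimal sketch of model inference: load a previously trained model artifact
# and produce predictions on new, unseen data. Paths are illustrative.
import joblib
import pandas as pd

model = joblib.load("model.joblib")             # artifact produced by training
inference_df = pd.read_csv("new_data.csv")      # unseen inference data
predictions = model.predict(inference_df)      # output consumed by a user or service
```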
Model Interpretability
Model interpretability (also known as explainable AI) is the process by which a ML model's predictions can be explained and understood by humans.
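One simple interpretability technique, sketched here with scikit-learn's permutation importance (the dataset and model are illustrative), estimates how much each input feature influences a model's predictions.

```python
# Illustrative sketch: permutation feature importance shows which input
# features the model's predictions depend on most.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
print(result.importances_mean)   # higher values = more influential features
```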
Model Monitoring
Model monitoring involves continuously monitoring the performance of predictions made by models to identify potential problems.
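One common monitoring check compares the distribution of a feature seen at serving time against its training distribution to detect data drift. The sketch below assumes synthetic data and uses a two-sample Kolmogorov-Smirnov test as the drift signal.

```python
# Illustrative drift check: compare a feature's live distribution against
# its training-time reference distribution.
import numpy as np
from scipy.stats import ks_2samp

training_feature = np.random.normal(0.0, 1.0, size=10_000)   # reference data
serving_feature = np.random.normal(0.3, 1.0, size=1_000)     # live data (shifted)

statistic, p_value = ks_2samp(training_feature, serving_feature)
if p_value < 0.01:
    print(f"Possible drift detected (KS statistic={statistic:.3f})")
```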
Model Performance
Model performance in machine learning (ML) is a measurement of how accurately a model makes predictions or classifications on new, unseen data.
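A minimal sketch of estimating model performance, assuming scikit-learn and accuracy as the metric: the model is scored on a held-out test split it never saw during training.

```python
# Illustrative sketch: estimate performance on data unseen during training.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))   # accuracy on held-out data
```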
Model Quantization
Model quantization reduces the memory footprint and computation requirements of deep neural network models by representing weights (and sometimes activations) with lower-precision numbers, such as 8-bit integers instead of 32-bit floats.
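The idea can be sketched with NumPy: map float32 weights to 8-bit integers using a scale factor, cutting memory use roughly 4x at the cost of a small rounding error. Production frameworks (e.g. PyTorch, TensorFlow Lite) provide real implementations; this is only an illustration.

```python
# Illustrative sketch of affine quantization of model weights.
import numpy as np

weights = np.random.randn(1000).astype(np.float32)      # float32 model weights
scale = np.abs(weights).max() / 127                      # map value range to int8
quantized = np.round(weights / scale).astype(np.int8)    # 1 byte per weight
dequantized = quantized.astype(np.float32) * scale       # reconstructed at compute time

print(weights.nbytes, quantized.nbytes)                  # 4000 vs 1000 bytes
print(np.abs(weights - dequantized).max())               # quantization error
```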
Model Registry
A model registry is a version control system for models that provides APIs to store and retrieve models and model-related artifacts.
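A minimal sketch assuming the MLflow model registry; the model name, version, and dataset are illustrative. One pipeline stores and registers a model, and another retrieves it by name and version.

```python
# Illustrative sketch using MLflow as the model registry.
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

with mlflow.start_run():
    # store the model and register it under a versioned name
    mlflow.sklearn.log_model(model, "model", registered_model_name="iris_classifier")

# another pipeline or service retrieves a specific registered version
loaded = mlflow.pyfunc.load_model("models:/iris_classifier/1")
```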
Model Serving
Model serving is the process of taking a trained ML model and making it accessible to real-world applications via a REST or gRPC API.
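A minimal REST serving sketch, assuming Flask and a joblib model artifact; dedicated model servers such as KServe or TensorFlow Serving are the usual choice in production.

```python
# Illustrative sketch: expose a trained model behind a REST endpoint.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")                  # trained model artifact

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["instances"]       # e.g. [[5.1, 3.5, 1.4, 0.2]]
    return jsonify(predictions=model.predict(features).tolist())

if __name__ == "__main__":
    app.run(port=8080)
```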
Model Training
Model training is the process of fitting a model's parameters to training data; in MLOps, it happens as part of a model training pipeline.
Model-Centric ML
Model-centric ML is an approach to machine learning that focuses on iteratively improving model architecture and hyperparameters to enhance model performance.
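A sketch of model-centric iteration, assuming scikit-learn and an illustrative parameter grid: the dataset is held fixed while hyperparameters are searched to improve performance.

```python
# Illustrative sketch: fixed data, searched hyperparameters.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```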
Model-Dependent Transformations
A model-dependent transformation is a transformation of a feature that is specific to one model, and is consistently applied in training and inference pipelines.
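A common example is feature scaling, sketched below with scikit-learn's StandardScaler and illustrative data: the transformation is fitted in the training pipeline, persisted, and applied identically in the inference pipeline.

```python
# Illustrative sketch: a model-dependent transformation (scaling) must be
# fitted on training data and re-applied unchanged at inference time.
import joblib
import numpy as np
from sklearn.preprocessing import StandardScaler

# training pipeline: fit the transformation and persist it alongside the model
X_train = np.array([[1.0], [2.0], [3.0], [4.0]])
scaler = StandardScaler().fit(X_train)
joblib.dump(scaler, "scaler.joblib")

# inference pipeline: load the same fitted scaler and apply it to new data
scaler = joblib.load("scaler.joblib")
X_new_scaled = scaler.transform(np.array([[2.5]]))
```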
Model-Independent Transformations
Model-independent data transformations produce features that can potentially be reused in training or inference by one or more models.
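For example, aggregating raw events into a per-entity feature is model-independent; the sketch below assumes pandas and an illustrative transactions table, and the resulting feature could be stored in a feature store and reused by several models.

```python
# Illustrative sketch: a reusable, model-independent feature (total spend per customer).
import pandas as pd

transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount": [10.0, 25.0, 5.0, 7.5, 12.5],
})

features = transactions.groupby("customer_id", as_index=False)["amount"].sum()
features = features.rename(columns={"amount": "total_spend"})
```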
Monolithic Machine Learning Pipeline
A monolithic ML pipeline is a single program that can be run as either a feature pipeline followed by a training pipeline or a feature pipeline followed by a batch inference pipeline.
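A minimal sketch of the monolithic structure, with stand-in function bodies and a hypothetical command-line flag: one program computes features and then either trains a model or runs batch inference.

```python
# Illustrative sketch of a monolithic ML pipeline controlled by a mode flag.
import sys

def feature_pipeline():
    print("computing features")

def training_pipeline():
    print("training model on features")

def batch_inference_pipeline():
    print("running batch inference with the latest model")

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "train"
    feature_pipeline()
    if mode == "train":
        training_pipeline()
    else:
        batch_inference_pipeline()
```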