Generate interactive reports in a notebook or export them as an HTML file; use them for visual evaluation, debugging, and sharing with the team. Run data and model checks as part of a pipeline, integrating with tools like MLflow or Airflow to schedule the tests and log the results (see the sketch below). Collect model quality metrics from the deployed ML service; live monitoring currently works through integration with Prometheus and Grafana.
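To make the pipeline-check pattern concrete, here is a minimal sketch that runs a data drift check with a two-sample Kolmogorov-Smirnov test and logs the result to MLflow. The file paths, feature names, and 0.05 threshold are assumptions for illustration, not part of any specific tool's API.

```python
# Minimal sketch: a scheduled data-drift check that logs results to MLflow.
# The dataset paths, feature list, and threshold below are illustrative assumptions.
import mlflow
import pandas as pd
from scipy.stats import ks_2samp

FEATURES = ["age", "income"]  # hypothetical feature columns
P_VALUE_THRESHOLD = 0.05      # assumed significance level for flagging drift

def check_drift(reference: pd.DataFrame, current: pd.DataFrame) -> dict:
    """Compare each feature's distribution with a two-sample KS test."""
    return {f: ks_2samp(reference[f], current[f]).pvalue for f in FEATURES}

if __name__ == "__main__":
    reference = pd.read_csv("reference.csv")  # hypothetical training snapshot
    current = pd.read_csv("current.csv")      # hypothetical recent production data

    with mlflow.start_run(run_name="data_drift_check"):
        p_values = check_drift(reference, current)
        for feature, p in p_values.items():
            mlflow.log_metric(f"ks_pvalue_{feature}", p)
        drifted = [f for f, p in p_values.items() if p < P_VALUE_THRESHOLD]
        mlflow.log_param("drifted_features", ",".join(drifted) or "none")
        if drifted:
            # A non-zero exit marks the pipeline step as failed
            raise SystemExit(f"Drift detected in: {drifted}")
```

Run under a scheduler such as Airflow, a script like this becomes a recurring task: the logged metrics accumulate in MLflow, and a failed exit code holds back downstream steps until the drift is investigated.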
Model Performance Management (MPM) is a foundation of MLOps: it provides continuous visibility into training and production ML, explains why a model makes the predictions it does, and gives teams actionable insights so they can refine models and react to changes. MPM relies not only on metrics but also on how well a model's behavior can be explained when something eventually goes wrong.
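To illustrate the explainability side of MPM, a common approach is to attribute each prediction to its input features. The sketch below uses the SHAP library on a tree model; the model and data are placeholders for the example, not a prescribed setup.

```python
# Minimal sketch of per-prediction explanations with SHAP (illustrative model and data).
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)        # attributes predictions to features
shap_values = explainer.shap_values(X[:10])  # per-feature contributions for 10 rows
print(shap_values)  # inspect which features drove each prediction
```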