Generate interactive reports in a notebook or export them as an HTML file; use them for visual evaluation, debugging, and sharing with the team. Run data and model checks as part of a pipeline, integrating with tools like MLflow or Airflow to schedule the tests and log the results. Collect model quality metrics from the deployed ML service; this currently works through integration with Prometheus and Grafana.
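To make the report-export idea concrete, here is a minimal, library-free sketch: it computes a crude per-feature drift score between a reference dataset and current production data, then renders the results as a standalone HTML table. All function names and the scoring formula are illustrative assumptions, not any specific tool's API; a real monitoring library would use proper statistical tests (e.g. PSI or Kolmogorov–Smirnov).

```python
import statistics

def drift_score(reference, current):
    """Absolute difference of means, scaled by the reference std.
    A crude illustrative stand-in for a real drift metric."""
    std = statistics.pstdev(reference) or 1.0
    return abs(statistics.mean(current) - statistics.mean(reference)) / std

def html_report(scores, threshold=0.2):
    """Render per-feature drift scores as a small standalone HTML table."""
    rows = "".join(
        f"<tr><td>{name}</td><td>{s:.3f}</td>"
        f"<td>{'DRIFT' if s > threshold else 'ok'}</td></tr>"
        for name, s in scores.items()
    )
    return ("<html><body><h1>Data drift report</h1>"
            "<table><tr><th>feature</th><th>score</th><th>status</th></tr>"
            f"{rows}</table></body></html>")

# Toy data: "income" has shifted in production, "age" has not.
reference = {"age": [34, 45, 29, 51, 38], "income": [40, 42, 39, 41, 40]}
current = {"age": [33, 47, 30, 50, 37], "income": [55, 58, 60, 57, 59]}

scores = {f: drift_score(reference[f], current[f]) for f in reference}
report = html_report(scores)
# In a scheduled pipeline step you would persist the file, e.g.:
# with open("drift_report.html", "w") as fh:
#     fh.write(report)
```

The same scores could equally be logged to an experiment tracker or exposed as Prometheus gauges instead of rendered to HTML; the report is just one sink for the check results.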
NannyML empowers data scientists to detect and understand silent model failure within minutes.
NannyML turns the machine-learning workflow into a cycle, empowering data scientists to do meaningful, informed post-deployment data science: monitoring and improving models in production through iterative deployments.