Why monitor metric changes?
The performance of ML models often degrades unexpectedly once they are deployed in real-world domains. It is important to track true model performance metrics on real-world data and react in time, to avoid the consequences of poor model performance.
Causes of model performance degradation include:
- Input data changes (e.g., upstream pipeline changes or shifts in the data distribution)
- Concept drift
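To make "input data changes" concrete, here is a minimal, hypothetical sketch of one way such a change can be flagged: comparing the mean of a feature in the current window against a reference window. This is an illustration only, not the monitor's actual algorithm; the function name and threshold are assumptions.

```python
import statistics

def mean_shift(reference, current, threshold=3.0):
    """Flag a possible input-data change when the current window's mean is
    more than `threshold` reference standard deviations away from the
    reference mean. A crude illustration; real monitors use richer tests."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    if ref_std == 0:
        return statistics.mean(current) != ref_mean
    return abs(statistics.mean(current) - ref_mean) / ref_std > threshold
```

For example, a feature that hovered around 1.0 in the reference window but jumps to around 5.0 in production would be flagged.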
For this monitor type, you can select the following detection methods:
- Absolute Values - The metric value is lower or higher than a specific value.
- Anomaly Detection - Detects anomalies in the metric's value in the inspected data, compared to its values over a time period before the data was collected.
- Change In Percentage - Detects change in the ratio between the metric value of the inspected data and its value in a time period before the data was collected.
- Compared To Segment - Detects change in the ratio between the metric value of the inspected data and its value in a different data segment.
- Compared To Training - Detects change in the ratio between the metric value of the inspected data and its value calculated from the reported training set.
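To give a feel for the logic behind these methods, the two simplest ones can be sketched as below. This is a hypothetical illustration, not the monitor's actual implementation; function names and parameters are assumptions.

```python
def absolute_value_alert(metric_value, lower=None, upper=None):
    """Absolute Values: alert when the metric falls below `lower`
    or rises above `upper`."""
    if lower is not None and metric_value < lower:
        return True
    if upper is not None and metric_value > upper:
        return True
    return False

def percentage_change_alert(current, baseline, max_change_pct):
    """Change In Percentage: alert when the metric changed by more than
    `max_change_pct` percent relative to a baseline value. The baseline
    could come from an earlier time window, a different data segment,
    or the training set, matching the comparison-based methods above."""
    if baseline == 0:
        return current != 0  # any change from a zero baseline alerts
    change_pct = abs(current - baseline) / abs(baseline) * 100
    return change_pct > max_change_pct
```

For instance, with a baseline MAE of 100 and a 10% threshold, a current MAE of 120 triggers an alert while 105 does not.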
Start by choosing the predictions you'd like to monitor. You can select as many as you want :-)
Next, choose the metric you'd like to monitor from the following options:
- Missing Count
- Standard Deviation
- Mean Squared Error
- Root Mean Squared Error
- Mean Absolute Error
- True Positive Count
- True Negative Count
- False Positive Count
- False Negative Count
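As a quick reference for what the error and count metrics above measure, here is a minimal sketch of how they can be computed from true labels and predictions. This is illustrative only; function names are assumptions, and the product computes these for you.

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute Mean Squared Error, Root Mean Squared Error,
    and Mean Absolute Error for a batch of predictions."""
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / len(errors)
    mae = sum(abs(e) for e in errors) / len(errors)
    return {"mse": mse, "rmse": math.sqrt(mse), "mae": mae}

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for binary labels
    (1 = positive class)."""
    counts = {"tp": 0, "tn": 0, "fp": 0, "fn": 0}
    for t, p in zip(y_true, y_pred):
        if p == 1:
            counts["tp" if t == 1 else "fp"] += 1
        else:
            counts["fn" if t == 1 else "tn"] += 1
    return counts
```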
Note that the monitor configuration may vary depending on the detection method you choose.