Ensure optimal performance of Natural Language Inference models in production by monitoring data drift and model health with Galileo NLP Studio.
Is there training vs. production data drift? Which unlabeled data should I select for my next training run? Is model confidence dropping on an existing class in production? To answer these questions and more with Galileo, you will need:
```python
dq.finish()
```
Note: If you’re extending an existing training run, the `list_of_labels` logged for your dataset must exactly match the labels used during training.
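That constraint is easy to enforce with a quick guard before logging new data. Below is a minimal sketch; the helper name and the example label list are hypothetical illustrations, not part of the Galileo `dataquality` API:

```python
def assert_labels_match(training_labels, new_labels):
    """Raise if the label list for the extended run differs from training.

    Both order and membership must match, since logged label indices
    need to line up with the original training run.
    """
    if list(new_labels) != list(training_labels):
        raise ValueError(
            f"Label mismatch: trained on {list(training_labels)}, "
            f"got {list(new_labels)}"
        )

# Hypothetical NLI label set: an identical list passes silently,
# any reordering or change raises before data is logged.
training_labels = ["entailment", "neutral", "contradiction"]
assert_labels_match(training_labels, ["entailment", "neutral", "contradiction"])
```

Running this check before each extended run turns a silent labeling inconsistency into an immediate, explicit error.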