Learn how to monitor Named Entity Recognition models in production with Galileo NLP Studio, detecting data drift and maintaining model health effectively.
Is there drift between your training and production data? What unlabeled data should you select for your next training run? Is model confidence dropping on an existing class in production? To answer these questions and more with Galileo, you will need:
```python
dq.finish()
```
Note: If you're extending an existing training run, the `list_of_labels` logged for your dataset must exactly match the labels used during training.
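A quick pre-flight check can catch a label mismatch before you launch the run. A minimal sketch, assuming an exact, order-sensitive comparison is required (the `labels_match` helper below is hypothetical, not part of the Galileo SDK):

```python
def labels_match(training_labels, new_labels):
    # Exact match required: same labels, same order, no additions or removals.
    return list(training_labels) == list(new_labels)

training = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]

# Re-logging the identical label list is safe.
assert labels_match(training, ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"])

# Adding a new label (or reordering) breaks the match and should be fixed
# before extending the run.
assert not labels_match(training, ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC"])
```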