Is there drift between my training and production data? What unlabeled data should I select for my next training run? Is model confidence dropping on an existing class in production? To answer these questions and more with Galileo, you will need:
- Your unlabeled production data
- Your model
Simply run an inference job on production data to view, inspect and select samples directly in the Galileo UI.
Here is what to expect:
- Get the list of drifted data samples out of the box
- Get the list of on-the-class-boundary samples out of the box
- Quickly compare model confidence and class distributions between production and training runs
- Find similar samples to low-confidence production data in under a second
…and a lot more.
Full Walkthrough Tutorial
Follow our example notebook in Google Colaboratory with PyTorch, or read the full tutorial below.
Logging the Data Inputs
Log your inference dataset. Galileo will join these samples with the model’s outputs and present them in the Console. Note that unlike training, where ground truth labels are present for validation, during inference we assume that no ground truth labels exist.
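Below is a minimal sketch of what logging an unlabeled inference dataset can look like with the dataquality client (`dq`). The dataframe columns, the example texts, and the `inference_name` value are illustrative placeholders, not part of the official example.

```python
import pandas as pd
import dataquality as dq

# Unlabeled production samples; the ids let Galileo join these rows
# with the model outputs logged later.
inference_df = pd.DataFrame(
    {
        "id": [0, 1, 2],
        "text": [
            "my package never arrived",
            "thanks, the refund came through",
            "how do I change my shipping address?",
        ],
    }
)

# No label column is logged: inference data is assumed to be unlabeled.
dq.log_dataset(
    inference_df,
    text="text",
    id="id",
    split="inference",
    inference_name="prod_2023_10_01",  # placeholder name for this inference run
)
```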
Logging the Inference Model Outputs
Log model outputs from within your model’s forward function.
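As an illustration, here is a sketch of a forward function that logs outputs with `dq.log_model_outputs`. The toy architecture and tensor shapes are assumptions for the example; the key point is passing per-sample embeddings, logits, and the ids that were logged with the dataset.

```python
import torch
import torch.nn as nn
import dataquality as dq

class TextClassifier(nn.Module):
    """Toy classifier used only to illustrate where the logging call goes."""

    def __init__(self, vocab_size: int = 30522, hidden: int = 128, num_classes: int = 3):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, hidden)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, token_ids: torch.Tensor, sample_ids: torch.Tensor) -> torch.Tensor:
        embs = self.embedding(token_ids)    # [batch_size, hidden]
        logits = self.classifier(embs)      # [batch_size, num_classes]

        # Log this batch to Galileo: per-sample embeddings, logits, and the
        # ids that match the rows logged with dq.log_dataset.
        dq.log_model_outputs(embs=embs, logits=logits, ids=sample_ids)
        return logits
```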
Putting it all together
Log in and initialize a new project and run name, or one matching an existing training run (this will add the inference data to that training run in the Console). Then load and log your inference dataset, load a pre-trained model, set the split to inference, run inference, and finally call dq.finish().
Note: If you’re extending a current training run, the list_of_labels logged for your dataset must exactly match the labels used during training.
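Putting the pieces together, a full inference run could look roughly like the sketch below. The project and run names, the label list, the CSV path, and the make_dataloader helper are all hypothetical placeholders; substitute your own data pipeline and pre-trained model (TextClassifier refers to the toy model sketched above).

```python
import pandas as pd
import torch
import dataquality as dq

dq.login()
# Re-using an existing training run's project/run adds this inference
# data to that run in the Console.
dq.init(
    task_type="text_classification",
    project_name="my_project",       # placeholder
    run_name="my_training_run",      # placeholder
)

# Must exactly match the label list used during training (see the note above).
dq.set_labels_for_run(["negative", "neutral", "positive"])

# Log the unlabeled production data (columns "id" and "text" assumed).
inference_df = pd.read_csv("production_samples.csv")
dq.log_dataset(
    inference_df,
    text="text",
    id="id",
    split="inference",
    inference_name="prod_2023_10_01",
)

# Load a pre-trained model (TextClassifier defined in the previous sketch).
model = TextClassifier()
model.load_state_dict(torch.load("model.pt"))
model.eval()

# Run inference; forward() logs embeddings and logits to Galileo.
dq.set_split("inference", inference_name="prod_2023_10_01")
with torch.no_grad():
    for token_ids, sample_ids in make_dataloader(inference_df):  # hypothetical helper
        model(token_ids, sample_ids)

dq.finish()
```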