Add Tags and Metadata to Prompt Runs
While you are experimenting with your prompts, you will likely be tuning many parameters.
Maybe you will run experiments with different models, model versions, vector stores, embedding models, etc.
Run Tags are an easy way to log any details of your run that you want to view later in the Galileo Evaluation UI.
Adding tags with promptquality
A tag has three key components:
- key: the name of your tag, e.g. model name
- value: the value in your run, e.g. gpt-4
- tag_type: the type of the tag. Currently, tags can be RAG or GENERIC.
If we wanted to run an experiment using GPT with a 16k token maximum, we could create a tag noting that our max tokens is 16k:
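A minimal sketch with the promptquality client, assuming RunTag and TagType are available from the top-level package (the key and value strings are illustrative):

```python
import promptquality as pq

# Tag recording the max-token setting for this experiment.
# key and value are free-form strings; GENERIC marks non-RAG metadata.
max_token_tag = pq.RunTag(
    key="max_tokens",
    value="16k",
    tag_type=pq.TagType.GENERIC,
)
```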
We could then add our tag to our run, however we choose to create that run:
Logging Workflows
If you are using a workflow, you can add tags by including them on the EvaluateRun object.
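A sketch, assuming EvaluateRun accepts a run_tags argument at construction (the project and run names below are placeholders):

```python
import promptquality as pq

# Attach the tag when constructing the workflow's EvaluateRun
# (assumes a run_tags keyword argument; names are placeholders).
evaluate_run = pq.EvaluateRun(
    run_name="max-tokens-16k-experiment",
    project_name="my-project",
    run_tags=[pq.RunTag(key="max_tokens", value="16k", tag_type=pq.TagType.GENERIC)],
)
```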
Prompt Run
We can add tags to a simple Prompt run. For info on creating Prompt runs, see Getting Started.
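A sketch, assuming pq.run accepts a run_tags list alongside the template and dataset (the template and dataset here are placeholders, and you are assumed to have already authenticated with pq.login):

```python
import promptquality as pq

template = "Explain {topic} to a five-year-old."

# Pass the tag when kicking off the run (assumes a run_tags parameter).
pq.run(
    template=template,
    dataset="my_dataset.csv",  # placeholder dataset file
    run_tags=[pq.RunTag(key="max_tokens", value="16k", tag_type=pq.TagType.GENERIC)],
)
```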
Prompt Sweep
We can also add tags across a Prompt sweep with multiple templates and/or models. For info on creating Prompt sweeps, see Prompt Sweeps.
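A sketch, assuming run_sweep takes lists of templates and model aliases plus a run_tags list (all values below are placeholders):

```python
import promptquality as pq

templates = [
    "Summarize: {text}",
    "Summarize the following in one sentence: {text}",
]

# The same tags are applied to every run in the sweep
# (assumes run_sweep accepts a run_tags parameter).
pq.run_sweep(
    templates=templates,
    dataset="my_dataset.csv",  # placeholder
    model_aliases=["gpt-4", "gpt-3.5-turbo"],
    run_tags=[pq.RunTag(key="max_tokens", value="16k", tag_type=pq.TagType.GENERIC)],
)
```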
LangChain Callback
We can even add tags to more complex chain runs with LangChain, through the GalileoPromptCallback. For info on using Prompt with chains, see Using Prompt with Chains or multi-step workflows.
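A sketch, assuming GalileoPromptCallback accepts a run_tags argument and is then passed to your chain as a LangChain callback (the project name and chain invocation are placeholders):

```python
import promptquality as pq

# Attach the tag to the callback
# (assumes GalileoPromptCallback accepts a run_tags argument).
galileo_callback = pq.GalileoPromptCallback(
    project_name="my-project",  # placeholder
    run_tags=[pq.RunTag(key="max_tokens", value="16k", tag_type=pq.TagType.GENERIC)],
)

# Placeholder chain invocation: pass the callback via the LangChain config.
# chain.invoke({"topic": "run tags"}, config={"callbacks": [galileo_callback]})
```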
Viewing Tags in the Galileo Evaluation UI
You can then view your tags in the Galileo Evaluation UI.