Creating a simple chain with LangChain
First, let’s build the components of our chain. We want to ask a chat model a question about hallucinations, but we also want to give it the context it needs to answer correctly, so naturally we set up a vector DB and use RAG. In this case we’ll get the context from a Galileo blog post. The chain will (see the sketch right after this list):
- Take in a question.
- Feed that question to our retriever for some context.
- Fill out the prompt with the question and context.
- Feed the prompt to a chat model.
- Output the answer from the model.
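Here’s a minimal sketch of such a chain using LangChain’s expression language. The blog-post URL, model names, chunking parameters, and variable names are illustrative assumptions rather than the exact code from the original example.

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Index a Galileo blog post in a vector store (substitute the specific post URL).
docs = WebBaseLoader("https://www.rungalileo.io/blog").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)
retriever = FAISS.from_documents(chunks, OpenAIEmbeddings()).as_retriever()

# Prompt that grounds the model's answer in the retrieved context.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below.\n\n"
    "Context: {context}\n\nQuestion: {question}"
)

# Question -> retriever (context) -> prompt -> chat model -> string answer.
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)
```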
Integrating our chain with promptquality
Now all we have to do to integrate with promptquality is add our callback. In just 3 lines of code we can integrate promptquality into any existing LangChain experiments.
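Here’s a minimal sketch of what that looks like against the chain above; the project name and scorer choices are illustrative assumptions, so pick the scorers you actually care about.

```python
import promptquality as pq  # assumes you have already authenticated, e.g. with pq.login(...)

# Create the Galileo callback with the metrics we want to compute.
galileo_callback = pq.GalileoPromptCallback(
    project_name="hallucination-rag-demo",  # illustrative project name
    scorers=[pq.Scorers.context_adherence, pq.Scorers.correctness],
)

# Run the existing chain unchanged, passing the callback through the config.
chain.invoke(
    "What are hallucinations in LLMs?",
    config=dict(callbacks=[galileo_callback]),
)

# Upload the captured traces and metric results to Galileo Evaluate.
galileo_callback.finish()
```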
Adding Tools and Agents
More complex chains, including LangChain Tools and Agents, also integrate well with Galileo Evaluate. Here’s another example pulled from the LangChain docs.
Creating the tool from our retriever
First we can use our retriever, created above, and convert it to a tool, as sketched below.
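A minimal sketch of wrapping the retriever in a tool; the tool name and description are illustrative assumptions.

```python
from langchain.tools.retriever import create_retriever_tool

# Wrap the existing retriever so an agent can call it like any other tool.
retriever_tool = create_retriever_tool(
    retriever,
    name="galileo_blog_search",
    description="Searches the Galileo blog post about LLM hallucinations.",
)
```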
Then we plug in promptquality the exact same way as above.
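Here’s a sketch of building a tool-calling agent around that tool and evaluating it with the same callback pattern; the agent prompt, model, and project name are assumptions for illustration.

```python
import promptquality as pq
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Build a simple tool-calling agent that decides when to query the retriever.
agent_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant. Use the tools when you need context."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
agent = create_tool_calling_agent(ChatOpenAI(model="gpt-4o-mini"), [retriever_tool], agent_prompt)
agent_executor = AgentExecutor(agent=agent, tools=[retriever_tool])

# Evaluate the agent run with a promptquality callback, just like the simple chain.
agent_callback = pq.GalileoPromptCallback(
    project_name="hallucination-agent-demo",  # illustrative project name
    scorers=[pq.Scorers.context_adherence],
)
agent_executor.invoke(
    {"input": "Why do LLMs hallucinate?"},
    config=dict(callbacks=[agent_callback]),
)
agent_callback.finish()  # upload the run to Galileo Evaluate
```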