Examples with LangChain
Let's explore an example of integrating promptquality with a LangChain chain.
This example is pulled from the LangChain docs, and most of the code is a standard LangChain implementation of a simple chain.
If you are using Vertex AI through LangChain, concurrent requests to Vertex AI LLMs will fail to compute node outputs. Use a single worker for best results.
Creating a simple chain with LangChain
First, let's build the components of our chain. We want to ask a chat model a question about hallucinations, but we also want to give it the context to answer correctly, so we set up a vector store and use RAG. In this case we'll get the context from a Galileo blog post.
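As a rough sketch, the retriever setup might look like the following. The blog URL, splitter settings, embedding model, and vector store below are illustrative assumptions, not part of this guide:

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS

# Load the blog post and split it into chunks for the vector store.
loader = WebBaseLoader("https://www.rungalileo.io/blog")  # placeholder URL
docs = loader.load()
splits = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)

# Embed the chunks and expose the store as a retriever.
vectorstore = FAISS.from_documents(splits, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
```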
Now that we have the retriever, we can build our chain. The chain will:
- Take in a question.
- Feed that question to our retriever for some context.
- Fill out the prompt with the question and context.
- Feed the prompt to a chat model.
- Output the answer from the model.
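A minimal sketch of that chain using LangChain's expression language follows; the prompt wording and model name are assumptions, and any chat model will work:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

# Prompt that expects retrieved context plus the user's question.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n\n{context}\n\nQuestion: {question}"
)
llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model

def format_docs(docs):
    return "\n\n".join(d.page_content for d in docs)

# question -> retriever context -> filled prompt -> chat model -> string answer
chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
```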
Integrating our chain with promptquality
Now all we have to do to integrate with promptquality is add our callback. In just 3 lines of code we can integrate promptquality into any existing LangChain experiment.
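A hedged sketch of what that integration might look like is below. The console URL, project name, scorer names, and question are placeholders, and the exact scorer names can differ by promptquality version, so check the library reference:

```python
import promptquality as pq

pq.login("console.demo.rungalileo.io")  # your Galileo console URL

# Create the Galileo callback with the metrics (scorers) you want computed.
galileo_callback = pq.GalileoPromptCallback(
    project_name="langchain-rag-demo",  # placeholder project name
    scorers=[pq.Scorers.context_adherence, pq.Scorers.correctness],
)

# Run the chain with the callback attached, then flush results to Galileo.
chain.invoke(
    "What are hallucinations in LLMs?",
    config=dict(callbacks=[galileo_callback]),
)
galileo_callback.finish()
```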
Adding Tools and Agents
More complex chains, including LangChain Tools and Agents, also integrate well with Galileo Evaluate.
Here’s another example pulled from the LangChain Docs.
Creating the tool from our retriever
First, we can take the retriever we created above and convert it into a tool.
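One way to do this is with LangChain's create_retriever_tool helper; the tool name and description below are illustrative:

```python
from langchain.tools.retriever import create_retriever_tool

# Wrap the retriever so an agent can call it as a tool.
retriever_tool = create_retriever_tool(
    retriever,
    name="galileo_blog_search",  # illustrative tool name
    description="Searches the Galileo blog post about LLM hallucinations.",
)
tools = [retriever_tool]
```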
Now let’s create a ReAct Agent that has access to this tool.
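Roughly, this can be done with the prebuilt ReAct prompt from the LangChain hub; the prompt id and model below are assumptions:

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_openai import ChatOpenAI

# Pull a standard ReAct prompt and build the agent around our retriever tool.
react_prompt = hub.pull("hwchase17/react")
agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools, react_prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)
```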
Now that we have that, we're ready to integrate with promptquality in exactly the same way as above.
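For example, reusing the callback from the earlier section (the input question is a placeholder):

```python
# Attach the same Galileo callback when running the agent, then flush results.
agent_executor.invoke(
    {"input": "What are hallucinations in LLMs?"},
    config=dict(callbacks=[galileo_callback]),
)
galileo_callback.finish()
```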