⛓️ Langflow
Retrieval Augmented Generation (RAG) using Langflow
![](https://framerusercontent.com/images/OdLKGa9nZVzL4O1WoWXFv3eIQIw.png)
Alexandre
Oct 31, 2023
Introduction
Retrieval Augmented Generation (RAG) was introduced to address problems where an LLM needs access to external knowledge sources in order to complete a task.
RAG takes the user query and finds relevant information included in a source, such as a database or PDF files. The retrieved information is combined with the original query and passed to the LLM as context in a prompt to generate a final answer.
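The combination step can be sketched in a few lines. This is an illustrative sketch only; the template wording and function name are assumptions, not Langflow's actual prompt:

```python
# Sketch: how a RAG prompt combines retrieved chunks with the user query.
# The template text below is a hypothetical example, not Langflow's exact prompt.
def build_rag_prompt(query: str, retrieved_chunks: list[str]) -> str:
    context = "\n\n".join(retrieved_chunks)
    return (
        "Use the following context to answer the question.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is Langflow?",
    ["Langflow is a UI for LangChain, designed to make prototyping flows effortless."],
)
```

The LLM then sees both the question and the supporting passages in a single prompt, so its answer is grounded in the retrieved text rather than only in its parametric knowledge.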
An LLM’s parametric knowledge is static; retrieval augmented generation addresses this by letting the model pull in information that changes over time, avoiding retraining and producing more reliable outputs.
In this tutorial, we will cover how to use Langflow to build a system to retrieve information from a CSV file and generate an answer using RAG.
Langflow components using RAG are available to download at the end of this article.
Using RAG with Langflow
Let's start with the flow from the Community Examples under the name CSV Loader. The image below shows its components.
![](https://framerusercontent.com/images/PsJpOtrM4JT27Iyod8f8F5Ot3w.webp)
The Retrieval Augmented Generation process consists of the steps explained below:
Loader: First, we need to load information from a source in order to extract knowledge and generate an accurate answer. Langflow contains different types of loaders to start from, such as the WebBaseLoader, TextLoader, PyPDFLoader, etc. Here, we'll extract information from a CSV file using the CSVLoader component.
Text Splitter: After loading the information within a document, we need to break it down into chunks of a specific size to feed the vector store. Here, we use LangChain's Recursive Character Text Splitter.
Vector Store: The Vector Store is responsible for storing and embedding each chunk separately to allow for similarity searches. Langflow has a range of Vector Store integrations to choose from, such as Chroma, Pinecone, Vectara, and Weaviate.
Retrieval: A basic retrieval system will apply a similarity search using the user’s query and concatenate the retrieved chunks to feed an LLM prompt.
Generation: The LLM outputs the answer using a prompt that includes both the query and the retrieved knowledge from the document as a context.
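The five steps above can be sketched end to end with only the standard library. This is a toy illustration under stated assumptions: a real flow would use LangChain's CSVLoader, Recursive Character Text Splitter, and a vector store such as Chroma, while the bag-of-words "embedding" and the sample CSV rows here are stand-ins for demonstration:

```python
import csv
import io
import math
import re
from collections import Counter

# 1. Loader: read rows from a CSV source (here, a hypothetical in-memory file).
csv_data = (
    "topic,description\n"
    "Components,Components are the building blocks of flows.\n"
    "Agents,Agents use an LLM to decide which tools to call."
)
rows = [" ".join(r.values()) for r in csv.DictReader(io.StringIO(csv_data))]

# 2. Text splitter: break each document into fixed-size chunks.
def split(text: str, chunk_size: int = 80) -> list[str]:
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

chunks = [c for row in rows for c in split(row)]

# 3. Vector store: "embed" each chunk (bag-of-words stand-in) and store the pair.
def embed(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

store = [(embed(c), c) for c in chunks]

# 4. Retrieval: rank chunks by cosine similarity against the query vector.
def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(store, key=lambda e: cosine(q, e[0]), reverse=True)
    return [chunk for _, chunk in ranked[:k]]

# 5. Generation: concatenate the retrieved chunks into the LLM prompt.
context = "\n".join(retrieve("What are agents?"))
prompt = f"Context:\n{context}\n\nQuestion: What are agents?\nAnswer:"
```

In a production flow, the toy pieces swap out one-for-one: the loader for CSVLoader, `embed` for a real embedding model, and `store` plus `retrieve` for a vector store's similarity search.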
The image below displays a table containing each topic of the Langflow documentation.
![](https://framerusercontent.com/images/Jar0j7o2WilIjUhoNNCfTSoHoo.webp)
To check if the LLM is giving the correct answer, let’s ask the question “What is Langflow?”.
![](https://framerusercontent.com/images/wvf6C03yABvhJL5dYhV5nD8ixXo.webp)
We can confirm that the agent retrieved the information from the CSV file and produced a correct answer. Indeed, this information can be found in Langflow’s documentation.
Now, let’s ask another question to test it again.
![](https://framerusercontent.com/images/K2KQIboa55cxhNWIDPR0tlzE.webp)
Once again, the knowledge was correctly retrieved from the document and the answer is correct.
Finally, we ask one last question: “What are agents in Langflow?”.
![](https://framerusercontent.com/images/3MuYH3UufPoqFdWIFNvYn2H0sIM.webp)
We can see in the agent’s thoughts (the white box) that the agent didn’t know the answer on its first iteration, so it called the Vector Store component named CSV to retrieve the knowledge needed to answer the question.
Thus, RAG is a powerful technique to combine with LLMs in order to retrieve information and knowledge about a specific topic.
The Langflow flow that retrieves information from a CSV file using RAG can be found in the Community Examples under the name CSV Loader. The JSON file containing the flow is found above.
This post has demonstrated how to build a flow that retrieves information from a CSV file and generates an answer using RAG. Langflow allows you to load almost any type of document, retrieve information from it, and generate an LLM answer. Feel free to use the platform to adapt the flow to your specific needs.