Chat with PDF
Welcome to the Prem AI cookbook section. In this recipe, we will build, step by step, a simple chat-with-PDF application (~100 lines of code) using the Prem AI SDK and Streamlit. The best part is that you don’t need to understand complex mechanisms like RAG (Retrieval Augmented Generation) to build this app. So, without further ado, let’s get started. You can find the complete code here.
Objective
This recipe aims to show developers and users how to get started with Prem’s Generative AI Platform. The tutorial is done in three simple steps:
- Setting up all the required packages.
- Introducing Prem repositories and showing how to upload documents (like PDFs) to them.
- Writing a simple Streamlit chat function that takes the user’s prompt, retrieves the relevant context from Prem repositories, and passes it to the LLM to give us accurate results.
After completing this tutorial, you will be familiar with Prem SDK and some of its terms. You can check out our other recipes, which cover more intermediate and advanced GenAI use cases.
Setting up the project
Let’s start by creating a virtual environment and installing Prem SDK and Streamlit.
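A minimal setup could look like this, assuming the SDK is published on PyPI as `premai`:

```bash
# Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install the Prem SDK and Streamlit
pip install premai streamlit
```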
Before getting started, make sure you have an account on the Prem AI Platform, a valid project ID, and an API key. As a last requirement, you also need at least one repository ID.
Now, create one folder named `.streamlit`. Inside that folder, create a file called `secrets.toml`. Inside `secrets.toml`, add your `PREMAI_API_KEY` as shown here:
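```toml
# .streamlit/secrets.toml
# Placeholder value: replace with your own Prem API key
PREMAI_API_KEY = "your-premai-api-key"
```

Streamlit automatically loads this file and exposes the key through `st.secrets`.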
So now our folder structure looks something like this:
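```
.
├── .streamlit/
│   └── secrets.toml
└── app.py
```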
Inside `app.py`, we import some basic libraries along with `premai` and initialise some constants:
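Here is a sketch of what that can look like; the constant names and values (`PROJECT_ID`, `REPOSITORY_ID`, `TEMPERATURE`) are placeholders you should replace with the values from your own Prem project:

```python
# app.py
import os
import tempfile

import streamlit as st
from premai import Prem

# Read the API key from .streamlit/secrets.toml
PREMAI_API_KEY = st.secrets["PREMAI_API_KEY"]

# Placeholder constants: replace with the IDs from your own Prem project.
PROJECT_ID = 1234        # your Prem project ID
REPOSITORY_ID = 5678     # the repository your PDFs will be uploaded to
TEMPERATURE = 0.7        # sampling temperature passed to the LLM

# Initialise the Prem client once for the whole app.
client = Prem(api_key=PREMAI_API_KEY)
```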
Uploading documents to Prem repositories
RAG is a powerful technique that combines the strengths of retrieval-based systems and generative models to enable assistants to provide accurate and contextually relevant responses based on the content of your documents.
With Prem Repositories, you can upload any kind of document (like .txt, .pdf, etc.), and we take care of the rest. Our recipe’s workflow is simple: we start by uploading some documents to a repository on Prem. Once uploaded, the repository automatically indexes them using our internal state-of-the-art RAG technology. You can learn more about Prem repositories here.
Let’s write some code that will do the following:
- First, upload some PDF documents using the Streamlit `file_uploader`.
- Save each file in a temporary directory.
- Use the Prem SDK to upload those temporarily saved files to Prem repositories.
First, write a helper function to save an uploaded document using streamlit.
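One possible helper looks like this; the function name and signature are our own, illustrative choices:

```python
def save_uploaded_file(uploaded_file, tmp_dir: str) -> str:
    """Write a Streamlit UploadedFile to a temporary directory and return its path."""
    file_path = os.path.join(tmp_dir, uploaded_file.name)
    with open(file_path, "wb") as f:
        f.write(uploaded_file.getbuffer())
    return file_path
```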
Now, let’s create the file-uploader widget through which we save those files and upload them to our Prem repository.
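A sketch of the uploader is shown below. It assumes the documents endpoint is exposed as `client.repository.document.create`; double-check the exact method name against the Prem SDK reference for your version.

```python
uploaded_files = st.sidebar.file_uploader(
    "Upload your PDF documents", type="pdf", accept_multiple_files=True
)

if uploaded_files:
    with tempfile.TemporaryDirectory() as tmp_dir:
        for uploaded_file in uploaded_files:
            # Save the file locally, then push it to the Prem repository.
            file_path = save_uploaded_file(uploaded_file, tmp_dir)
            client.repository.document.create(
                repository_id=REPOSITORY_ID,
                file=file_path,
            )
    st.sidebar.success("Documents uploaded to the Prem repository.")
```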
Writing the chat function
Streamlit has a concept of `session_state`, which shares variables across reruns within a session. We use it to maintain the chat history, so we can either start a new session or display the history of chats and retrieved documents in the current one.
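A minimal sketch of that session handling (the `messages` key and the "New chat" button are our own naming choices):

```python
# Keep the chat history across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Optional: start a fresh session from the sidebar.
if st.sidebar.button("New chat"):
    st.session_state.messages = []

# Replay the history of the current session.
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])
```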
Now, we will write our main function to help us chat with the documents. We start by defining our repositories, which we will use to retrieve context from relevant documents and pass it to our LLM.
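Such a definition can be as simple as a dict; the similarity threshold and limit below are illustrative values you can tune for your documents:

```python
# Retrieval settings for the Prem repository (illustrative values).
repositories = dict(
    ids=[REPOSITORY_ID],
    similarity_threshold=0.25,
    limit=5,
)
```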
Let’s define the workflow:
- We take the user’s prompt, display it using Streamlit, and add it to the Streamlit `session_state` object.
- Next, in our assistant block, we get the response from the Prem SDK and stream it in Streamlit until generation is finished.
- Finally, we also show the chunks of documents retrieved to generate the answer, which helps with interpretability.
With our workflow defined, let’s code it out.
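Below is a sketch of that chat loop. It assumes the streamed chunks expose `choices[0].delta["content"]`, and it collects any `document_chunks` attached to the stream defensively with `getattr`, since the exact shape of the retrieval metadata may differ across SDK versions:

```python
if prompt := st.chat_input("Ask something about your PDFs"):
    # 1. Show the user's prompt and add it to the session history.
    with st.chat_message("user"):
        st.markdown(prompt)
    st.session_state.messages.append({"role": "user", "content": prompt})

    # 2. Ask the LLM, grounding the answer on the Prem repository.
    with st.chat_message("assistant"):
        response = client.chat.completions.create(
            project_id=PROJECT_ID,
            messages=[
                {"role": m["role"], "content": m["content"]}
                for m in st.session_state.messages
            ],
            repositories=repositories,
            temperature=TEMPERATURE,
            stream=True,
        )

        retrieved_chunks = []

        def stream_text():
            for chunk in response:
                # Assumption: retrieval metadata, when present, rides along on the stream.
                retrieved_chunks.extend(getattr(chunk, "document_chunks", None) or [])
                content = chunk.choices[0].delta["content"]
                if content:
                    yield content

        # Stream the answer token by token until generation finishes.
        reply = st.write_stream(stream_text())

    st.session_state.messages.append({"role": "assistant", "content": reply})

    # 3. Show the document chunks retrieved to generate the answer.
    if retrieved_chunks:
        with st.expander("Retrieved document chunks"):
            for doc in retrieved_chunks:
                st.markdown(getattr(doc, "content", str(doc)))
```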
Congratulations, you have created your first application using Prem AI. To run this application, you just need the following command:
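```bash
streamlit run app.py
```

Streamlit will start a local development server and open the app in your browser.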
You can check out more tutorials in our cookbook, as well as their full source code.