Welcome to the Prem AI cookbook section. In this recipe, we will build a simple chat-with-PDF application (~100 lines of code), step by step, using the Prem AI SDK and Streamlit. The best part is that you don’t need to understand complex mechanisms like RAG (Retrieval-Augmented Generation) to make this app. So, without further ado, let’s get started. You can find the complete code here.

Objective

This recipe aims to show developers and users how to get started with Prem’s Generative AI Platform. The overall tutorial is done in three simple steps:

  1. Setting up all the required packages.
  2. Introducing Prem repositories and how to upload documents (like PDF) into repositories.
  3. Writing a simple Streamlit chat function that takes user prompts, fetches relevant context from Prem repositories, and passes both to the LLM to produce accurate, grounded answers.

After completing this tutorial, you will be familiar with the Prem SDK and some of its terminology. You can then check out our other recipes, which cover more intermediate and advanced GenAI use cases.

Setting up the project

Let’s start by creating a virtual environment and installing the Prem SDK and Streamlit.

Before getting started, make sure you have an account on the Prem AI Platform, a valid project ID, and an API key. As a last requirement, you also need at least one repository ID.

python3 -m venv venv
source venv/bin/activate
pip install -U streamlit premai

Now, create a folder named .streamlit. Inside that folder, create a file called secrets.toml and add your Prem API key under premai_api_key, as shown here:

premai_api_key = "xxxx-xxxx-xxxx"

So now our folder structure looks something like this:

.
├── .streamlit
│   └── secrets.toml
└── app.py

Inside app.py, we import some basic libraries along with premai and initialise some constants:

import os
import time

import premai
import streamlit as st

premai_api_key = st.secrets.premai_api_key
premai_project_id = 123456
premai_repository_id = 123456
prem_client = premai.Prem(api_key=premai_api_key)

Both premai_project_id and premai_repository_id above are dummy IDs, so make sure you substitute a valid project_id and repository_id of your own. You can learn more about project_id here and repository_id here.
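If you prefer not to hardcode the IDs, you can keep them in secrets.toml alongside the API key and cache the client across Streamlit reruns. Here is a minimal sketch; the premai_project_id and premai_repository_id secret names are our own convention, not something the SDK requires:

import premai
import streamlit as st

# Hypothetical secrets.toml layout:
#   premai_api_key = "xxxx-xxxx-xxxx"
#   premai_project_id = 123456
#   premai_repository_id = 123456

@st.cache_resource  # create the client once and reuse it across reruns
def get_prem_client() -> premai.Prem:
    return premai.Prem(api_key=st.secrets.premai_api_key)

premai_project_id = st.secrets.premai_project_id
premai_repository_id = st.secrets.premai_repository_id
prem_client = get_prem_client()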

Uploading documents to Prem repositories

RAG is a powerful technique that combines the strengths of retrieval-based systems and generative models to enable assistants to provide accurate and contextually relevant responses based on the content of your documents.

With Prem Repositories, you can upload any kind of document (like .txt, .pdf, etc.), and we take care of the rest. Our recipe’s workflow is simple: we start by uploading some documents to a repository on Prem. Once uploaded, the repository automatically indexes them using our internal state-of-the-art RAG technology. You can learn more about Prem repositories here.
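To get a feel for the API before we wire it into Streamlit, here is a minimal standalone sketch of a single upload, assuming a local file my_document.pdf exists; it uses the same repository.document.create call as the app below:

# Upload one local file to a Prem repository
document = prem_client.repository.document.create(
    repository_id=premai_repository_id,
    file="my_document.pdf",  # hypothetical local path
)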

Let’s write some code that will do the following:

  1. First, upload some PDF documents using Streamlit’s file_uploader.
  2. Save each file in a temporary directory.
  3. Use the Prem SDK to upload those temporarily saved files to a Prem repository.

First, let’s write a helper function that saves a file uploaded through Streamlit to disk.

def temp_save_uploaded_file(uploadedfile):
    """Write a Streamlit UploadedFile to a local temp dir and return its path."""
    if not os.path.isdir(".tempdir"):
        os.makedirs(".tempdir")

    save_path = os.path.join(".tempdir", uploadedfile.name)
    try:
        with open(save_path, "wb") as f:
            f.write(uploadedfile.getbuffer())
        return save_path
    except Exception:
        st.sidebar.toast("Failed to upload the file. Try again", icon="❌")
        return None

Now let’s create the file uploader widget, save each file locally, and upload it to the repository.

with st.sidebar:
    uploaded_files = st.file_uploader(
        label="Upload PDF files",
        accept_multiple_files=True,
        type=["pdf"]
    )

    for file in uploaded_files:
        file_path = temp_save_uploaded_file(uploadedfile=file)
        if file_path is None:
            continue
        try:
            # Upload the locally saved file to the Prem repository
            _ = prem_client.repository.document.create(
                repository_id=premai_repository_id,
                file=file_path
            )
        except Exception:
            st.toast(f"Error with file: {file.name}", icon="❌")
    if uploaded_files:
        st.toast(body="Uploaded files", icon="🚀")
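The temporary copies are only needed for the duration of the upload. As an optional tidy-up step (our own addition, not part of the original recipe), you can remove the .tempdir folder once all files have been uploaded:

import shutil

# Optional: delete the temporary directory after the uploads finish
if os.path.isdir(".tempdir"):
    shutil.rmtree(".tempdir")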

Writing the chat function

Streamlit has a concept of session_state, which persists variables across reruns within a session. We use it to maintain the chat history: on every rerun we replay the stored messages (and, later, the retrieved document chunks) so the conversation stays on screen.

if "messages" not in st.session_state:
    st.session_state.messages = []

for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])
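Since session_state persists across reruns, starting a fresh session is as simple as resetting the list. Here is a minimal sketch of an optional sidebar button that does this; it is a convenience addition of ours, not part of the original app:

with st.sidebar:
    # Reset the conversation and rerun the script with an empty history
    if st.button("New chat"):
        st.session_state.messages = []
        st.rerun()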

Now, we will write our main function to help us chat with the documents. We start by defining the repositories configuration; Prem uses it to retrieve context from relevant documents and inject that context into the LLM prompt.

repositories = dict(
    ids=[premai_repository_id],
    similarity_threshold=0.25,  # keep only chunks with similarity >= 0.25
    limit=5                     # retrieve at most 5 chunks
)

Let’s define the workflow from here:

  1. We take the user’s prompt, display it with Streamlit, and append it to the Streamlit session_state object.

  2. Next comes the assistant block, where we get the response from the Prem SDK and then stream it in Streamlit until it finishes.

  3. Finally, we also show the document chunks that were retrieved to generate the answer, which helps with interpretability (a sketch of the see_repos helper we use for this follows this list).
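The main block below calls a small see_repos helper to render those retrieved chunks. Here is a minimal local sketch of what it can look like; the exact fields on each chunk vary, so this version simply displays the raw chunk objects inside an expander rather than assuming a schema:

def see_repos(retrieved_docs) -> None:
    """Show the document chunks retrieved from the repositories for this answer."""
    if not retrieved_docs:
        return
    with st.expander("Retrieved document chunks"):
        for i, chunk in enumerate(retrieved_docs):
            st.markdown(f"**Chunk {i + 1}**")
            st.write(chunk)  # field names vary; display the raw chunk object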

With our workflow defined, let’s code it out.

# Get the prompt from the user
if prompt := st.chat_input("Please write your query"):
    user_content = {"role": "user", "content": prompt}
    st.session_state.messages.append(user_content)
    with st.chat_message("user"):
        st.markdown(prompt)

    # Start the assistant block
    with st.chat_message("assistant"):
        message_placeholder = st.empty()

        with st.spinner("Thinking ...."):
            try:
                # Get the response from the LLM using the Prem SDK,
                # along with the retrieved document chunks
                response = prem_client.chat.completions.create(
                    project_id=premai_project_id,
                    messages=[user_content],
                    repositories=repositories,
                    stream=False
                )
                response_content = response.choices[0].message.content
                response_doc_chunks = response.document_chunks
            except Exception:
                response_content = "Failed to respond"
                response_doc_chunks = []

        full_response = str(response_content)

        # Simulate streaming by revealing the response character by character
        fr = ""
        for ch in full_response:
            time.sleep(0.01)
            fr += ch
            message_placeholder.write(fr + "▌")
        message_placeholder.write(f"{full_response}")

        # Show the chunks that were retrieved to ground the answer
        see_repos(retrieved_docs=response_doc_chunks)

        # Append the assistant's reply to the session_state history
        st.session_state.messages.append(
            {"role": "assistant", "content": full_response}
        )

Congratulations, you have created your first application using Prem AI! To run it, you just need the following command:

streamlit run app.py

You can check out more tutorials in our cookbook, as well as their full source code.