Our implementation is split into two files: db_utils.py and main.py. In the db_utils.py file, we write all the utility functions to connect llama-index with our SQL database and the table of our choice. The main.py file contains the Streamlit code that provides a nice front end.
Our setup code first wraps the chosen table in a SQLDatabase and then builds the query engine using NLSQLTableQueryEngine. This engine is prompted to generate proper SQL from the table schema and its contents.
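The construction described above can be sketched as follows. This is a minimal illustration, not the article's exact code: the import paths assume the llama-index >= 0.10 package layout (`llama_index.core`), and `make_db_uri` is a hypothetical helper of our own for building the connection string.

```python
def make_db_uri(user: str, password: str, host: str, port: int, dbname: str) -> str:
    # Hypothetical helper: build a SQLAlchemy connection URI for Postgres.
    return f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{dbname}"


def build_query_engine(db_uri: str, table_name: str):
    # Imported lazily so the module can be loaded without these
    # dependencies installed; paths follow llama-index >= 0.10.
    from sqlalchemy import create_engine
    from llama_index.core import SQLDatabase
    from llama_index.core.query_engine import NLSQLTableQueryEngine

    engine = create_engine(db_uri)
    # Restrict the SQLDatabase to the single table we want to chat with.
    sql_database = SQLDatabase(engine, include_tables=[table_name])
    # NLSQLTableQueryEngine is prompted with the table schema so the LLM
    # can translate natural-language questions into SQL over that table.
    return NLSQLTableQueryEngine(sql_database=sql_database, tables=[table_name])
```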
To keep credentials out of the code, create a .streamlit folder. Inside that folder, create a file named secrets.toml, and save all the DB-specific credentials in this TOML file. Here is an example:
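A minimal secrets.toml could look like the fragment below; every key and value here is a placeholder, so substitute your own database credentials:

```toml
[postgres]
host = "localhost"
port = 5432
dbname = "my_database"   # placeholder database name
user = "db_user"         # placeholder user
password = "db_password" # placeholder password
```

Streamlit loads this file automatically, and the values become available in code as `st.secrets["postgres"]["host"]`, and so on.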
Inside main.py, we first list the available tables using get_all_tables_from_db.
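The article's helper presumably talks to the database through SQLAlchemy; as a stdlib-only sketch of the same idea, here is a version that reads SQLite's catalog table (the function body is our illustration, not the article's code):

```python
import sqlite3


def get_all_tables_from_db(conn: sqlite3.Connection) -> list[str]:
    # List user tables by reading SQLite's catalog; a SQLAlchemy-based
    # version would call inspect(engine).get_table_names() instead.
    rows = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    )
    return [name for (name,) in rows]


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, total REAL)")
    print(get_all_tables_from_db(conn))  # ['invoices']
```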
Next, we build the query engine with the setup_index_before_chat function.
Finally, we send each user question to the query_engine and get the result.

Streamlit chat implementation
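A minimal version of the chat front end could look like the sketch below. `st.chat_input`, `st.chat_message`, and `st.session_state` are standard Streamlit APIs; `render_chat` and its `query_engine` argument are our stand-ins for whatever setup function you use, not names from the article.

```python
def render_chat(query_engine) -> None:
    # Imported lazily so this module can be loaded without Streamlit installed.
    import streamlit as st

    st.title("Chat with your SQL table")

    # Streamlit reruns the whole script on every interaction, so the
    # conversation transcript has to live in session_state.
    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Replay the transcript so far.
    for message in st.session_state.messages:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    if prompt := st.chat_input("Ask a question about your data"):
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("user"):
            st.markdown(prompt)

        # The query engine turns the question into SQL, runs it, and
        # summarizes the result as natural language.
        answer = str(query_engine.query(prompt))
        st.session_state.messages.append({"role": "assistant", "content": answer})
        with st.chat_message("assistant"):
            st.markdown(answer)
```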
In this section, we replace setup_index_before_chat with a more advanced version. We will also show how to index multiple tables and chat with them using llama-index and Prem AI. The workflow remains the same, with a few changes.
The advanced setup uses VectorStoreIndex, which essentially builds an index over all the available table schemas. Here, our embedding model is used to compute the schema embeddings and all the metadata related to the tables. We then swap setup_index_before_chat for the setup_index_before_chat_use_embedding function inside main.py, and we are now able to chat with multiple tables. You can check out the full source code to run and reproduce the results, and to extend it to your own SQL-based generative AI workflows.
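The multi-table setup described above can be sketched as follows. This is an illustration under assumptions, not the article's exact code: it follows llama-index's documented retriever pattern for multiple tables (`ObjectIndex` over `SQLTableSchema` objects plus `SQLTableRetrieverQueryEngine`), and the import paths assume the llama-index >= 0.10 layout.

```python
def setup_index_before_chat_use_embedding(db_uri: str, table_names: list[str]):
    # Imported lazily so the module can be loaded without these
    # dependencies installed.
    from sqlalchemy import create_engine
    from llama_index.core import SQLDatabase, VectorStoreIndex
    from llama_index.core.objects import (
        ObjectIndex,
        SQLTableNodeMapping,
        SQLTableSchema,
    )
    from llama_index.core.indices.struct_store.sql_query import (
        SQLTableRetrieverQueryEngine,
    )

    engine = create_engine(db_uri)
    sql_database = SQLDatabase(engine, include_tables=table_names)

    # One SQLTableSchema object per table; the embedding model indexes
    # these schema descriptions inside a VectorStoreIndex.
    table_node_mapping = SQLTableNodeMapping(sql_database)
    table_schema_objs = [SQLTableSchema(table_name=name) for name in table_names]
    obj_index = ObjectIndex.from_objects(
        table_schema_objs, table_node_mapping, VectorStoreIndex
    )

    # At query time, the retriever first picks the most relevant table
    # schema, and only then does the LLM write SQL against that table.
    return SQLTableRetrieverQueryEngine(
        sql_database, obj_index.as_retriever(similarity_top_k=1)
    )
```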