Embedchain
This example explains how to connect different LLMs and embedding models in Embedchain through the PremAI SDK.
Embedchain is an Open Source Framework that makes it easy to create and deploy personalized AI apps. At its core, Embedchain follows the design principle of being “Conventional but Configurable” to serve both software engineers and machine learning engineers.
Installation and Setup
We start by installing embedchain and premai-sdk. Use the following commands to install them:
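A sketch of the install step; the Embedchain package is on PyPI as embedchain, and the PremAI SDK is assumed here to be published as premai (the PyPI name may differ from premai-sdk):

```shell
# Install Embedchain and the PremAI SDK
# (the PyPI package name "premai" is an assumption)
pip install embedchain premai
```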
Before proceeding further, please make sure that you have created an account on PremAI and set up a project. If not, refer to the quick start guide to get started with the PremAI platform. Create your first project and grab your API key.
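Once you have the API key, export it before running any code. The environment variable name PREMAI_API_KEY used below is an assumption based on common PremAI client conventions; adjust it if your setup differs:

```python
import os

# Hypothetical placeholder value; replace it with the API key
# from your PremAI project settings.
os.environ["PREMAI_API_KEY"] = "your-premai-api-key"
```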
Using Embedchain with PremAI
For now, let’s assume that our project_id is 123. In your case, please make sure to use your actual project ID; otherwise, the client will throw an error.
To use Embedchain with PremAI, you do not need to pass any model name or set any parameters for the chat client. By default, it uses the model name and parameters configured in the LaunchPad.
In Embedchain, we start by defining our embedchain App and instantiating our LLM and embedding model through a config. This config can come from a .yaml file, or you can define it using a Python dictionary. Here is an example config:
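A minimal sketch of such a config as a Python dictionary. It assumes Embedchain exposes a premai provider for both the LLM and the embedder; the provider name and the exact config keys follow Embedchain's usual config shape and should be checked against your installed version:

```python
# Assumed Embedchain config shape for the PremAI integration.
# "premai" as the provider name and the exact config keys are assumptions.
config = {
    "llm": {
        "provider": "premai",
        "config": {
            "project_id": 123,  # replace with your actual project ID
            "temperature": 0.1,
            "max_tokens": 100,
        },
    },
    "embedder": {
        "provider": "premai",
        "config": {
            "project_id": 123,
        },
    },
}
```

Any parameter you set here (such as temperature or max_tokens) takes precedence over the defaults configured in the LaunchPad.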
If you change the model or any other parameters such as temperature or max_tokens while setting up the client, they will override the default configuration used in the LaunchPad.
Running our embedchain application
Once we have our config, let’s add a sample document source to embed and ask questions from it:
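The steps above can be sketched as follows. The document URL and the question are placeholders, and the premai provider config mirrors the assumed shape shown earlier; actually running this requires the embedchain package, a valid PREMAI_API_KEY in the environment, and an existing project:

```python
def main():
    # Requires: pip install embedchain, plus a valid PREMAI_API_KEY
    # in the environment. Import is inside main() so the module can
    # be loaded without the dependency installed.
    from embedchain import App

    # Assumed config shape; replace project_id with your own.
    config = {
        "llm": {"provider": "premai", "config": {"project_id": 123}},
        "embedder": {"provider": "premai", "config": {"project_id": 123}},
    }

    app = App.from_config(config=config)

    # Embed a sample document source (placeholder URL).
    app.add("https://en.wikipedia.org/wiki/Large_language_model")

    # Ask a question grounded in the embedded document.
    answer = app.query("What is a large language model?")
    print(answer)


if __name__ == "__main__":
    main()
```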