In the Playground section, you can experiment with multiple language models simultaneously and test them on any text you like. You can adjust settings such as creativity level (temperature) and output length (max tokens).

In this guide we will cover the following:

  1. Selecting models: Choose from a variety of open-source and closed-source models from multiple providers.
  2. Adjusting settings: Experiment with temperature, max tokens, and system prompts to tailor each model's behavior.
  3. Testing text: Enter the text you want the AI to continue or to generate new content from.
  4. Viewing results: Analyze the output generated by the models.
  5. Sharing: Share the results with others for feedback or further analysis.
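The settings listed above map directly onto the parameters of a typical chat-completion request. As a rough sketch (the model id, default values, and parameter names here follow common OpenAI-style conventions and are assumptions for illustration, not Prem's actual API):

```python
# Sketch: how Playground-style settings translate into a request payload.
# The model id and parameter names are illustrative, not Prem's actual API.

def build_request(prompt: str,
                  model: str = "llama-3-8b",      # hypothetical model id
                  temperature: float = 0.7,       # creativity: 0 = most deterministic
                  max_tokens: int = 256,          # cap on output length
                  system_prompt: str = "You are a helpful assistant."):
    """Assemble an OpenAI-style chat payload from Playground-like settings."""
    return {
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
    }

payload = build_request("Continue this story: Once upon a time...")
print(payload["temperature"], payload["max_tokens"])
```

Raising `temperature` makes outputs more varied; lowering `max_tokens` truncates longer completions. Tuning both is what the Playground sliders do for you.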

Get started


Create a new playground or use the default playground to get started.


Add more language models so you can test them simultaneously.

Prem gives you the ability to observe how different AI models behave under various conditions.
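In practice, observing models "under various conditions" amounts to running the same prompt across every combination of model and setting. A minimal sketch of that sweep (the model ids are hypothetical, and the payload shape is a generic stand-in for whatever the Playground sends):

```python
# Sketch: sweeping temperature across models to compare behavior.
# Model ids and payload fields are illustrative, not Prem's actual API.

models = ["model-a", "model-b"]          # hypothetical model ids
temperatures = [0.0, 0.7, 1.2]
prompt = "Write a one-line product tagline."

# One request payload per (model, temperature) pair.
requests = [
    {"model": m, "temperature": t, "max_tokens": 64,
     "messages": [{"role": "user", "content": prompt}]}
    for m in models
    for t in temperatures
]
print(len(requests))  # every model x temperature combination
```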


Trace & Launch from the Lab

  • Use the Trace tool to explore the results and subcomponents of the language model’s response.

  • While in the Lab, you can fast-track your way to launch. If you decide to launch a model from the Lab, the model you choose will load automatically along with the params and repositories you’ve configured for it.


Share your experiments

Feel free to share your experiments with your colleagues or online communities.

Share Playground

Don’t worry about losing your tests; all activities in the Lab are saved. You can revisit them to analyze what worked well and what didn’t.


In the Chat section, you can interact with a language model conversationally. Unlike the Playground, where you can experiment with multiple models simultaneously, here you focus on testing a single model in a chat interface.


Similar to the Playground, you can adjust settings such as max tokens, temperature, and system prompts to customize the AI’s responses. Additionally, you can share the chat sessions with others for collaboration or feedback.
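Under the hood, a chat session like this typically keeps the full message history and resends it with each turn, alongside the settings you chose. A minimal sketch of that state (role names and parameter names follow common chat-API conventions; they are assumptions, not Prem's SDK):

```python
# Sketch: maintaining conversational state for a single-model chat session.
# Role and parameter names mirror common chat APIs; assumptions, not Prem's SDK.

class ChatSession:
    def __init__(self, system_prompt: str,
                 temperature: float = 0.7, max_tokens: int = 256):
        # Settings applied to every request in this session.
        self.settings = {"temperature": temperature, "max_tokens": max_tokens}
        # The system prompt anchors the conversation.
        self.messages = [{"role": "system", "content": system_prompt}]

    def add_user_turn(self, text: str) -> None:
        self.messages.append({"role": "user", "content": text})

    def add_assistant_turn(self, text: str) -> None:
        self.messages.append({"role": "assistant", "content": text})

session = ChatSession("You are concise.")
session.add_user_turn("Hello!")
session.add_assistant_turn("Hi, how can I help?")
session.add_user_turn("Summarize our chat.")
print(len(session.messages))  # full history is sent with each request
```

Because the whole history travels with every request, long chats consume more of the context window; this is one reason max tokens and prompt length matter in extended sessions.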

Status Page

We continuously monitor the performance and availability of all model providers. For detailed information on inference metrics and uptime, please refer to our status page.