
Why Use Function Calling?

Function calling lets you extend LLM capabilities by allowing models to invoke external functions and tools. This enables your applications to perform actions like fetching real-time data, interacting with APIs, or executing computations based on the model's reasoning.

Prerequisites

  • An API key from Prem Studio
  • Python ≥ 3.8 or Node.js ≥ 18
  • premai or openai SDK installed

Supported Models

The list of supported models is constantly evolving. For the most up-to-date list of models that support function calling, please visit the models list page. Models that support function calling are clearly marked with a tools tag on the models page.

Generate Function Calls with Prem

This quick guide shows you how to implement a weather assistant that can fetch current weather information using function calling. You'll define a weather function, let the model decide when to call it, and handle the tool responses. Why it's useful: it enables your AI assistant to perform real-world actions and access live data, not just generate text responses. When to use it: perfect for building AI assistants, chatbots, or automation tools that need to interact with external APIs, databases, or perform calculations based on user queries.
Step 1: Setup Environment

npm install premai
npm install --save-dev @types/node
export PREMAI_API_KEY=your_api_key
Step 2: Define Function Schema and Mock Implementation


const tools = [
    {
        type: "function",
        function: {
            name: "get_current_weather",
            description: "Get the current weather in a given location",
            parameters: {
                type: "object",
                properties: {
                    location: {
                        type: "string",
                        description: "The city and state, e.g. San Francisco, CA",
                    },
                    unit: {
                        type: "string",
                        enum: ["celsius", "fahrenheit"]
                    },
                },
                required: ["location"],
            },
        },
    }
];

// Mock function implementation
function getCurrentWeather(location: string, unit: string = "celsius"): object {
    // In a real app, you'd call a weather API here
    return {
        location: location,
        temperature: unit === "celsius" ? "15°C" : "59°F",
        condition: "Partly cloudy",
        humidity: "65%"
    };
}
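The JSON Schema above is what the model sees; on the TypeScript side, a matching type and a runtime guard can catch bad arguments before they reach the handler. A minimal sketch (the `WeatherArgs` type and `isWeatherArgs` guard are our own helpers, not part of the SDK):

```typescript
// Shape the schema promises: location is required, unit is optional.
interface WeatherArgs {
    location: string;
    unit?: "celsius" | "fahrenheit";
}

// Runtime guard: narrows unknown parsed JSON to WeatherArgs.
function isWeatherArgs(value: unknown): value is WeatherArgs {
    if (typeof value !== "object" || value === null) return false;
    const v = value as Record<string, unknown>;
    if (typeof v.location !== "string") return false;
    if (v.unit !== undefined && v.unit !== "celsius" && v.unit !== "fahrenheit") return false;
    return true;
}
```

With a guard like this, the result of parsing `toolCall.function.arguments` can stay typed as `unknown` and still fail loudly on malformed input instead of propagating it into your function.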
Step 3: Initialize Client

import PremAI from "premai";

const client = new PremAI({
    apiKey: process.env.PREMAI_API_KEY || "",
});
Step 4: Send Initial Request with Tools

import { ChatCompletionsParams } from "premai/resources/chat";

const messages: ChatCompletionsParams.Message[] = [
    { role: "user", content: "What's the weather like in Boston today?" }
] as any;

const response = await client.chat.completions({
    model: "claude-4-sonnet",
    messages,
    tools,
    tool_choice: "auto"
});
Step 5: Handle Tool Calls and Execute Functions

// Check if the model wants to call a function
if (response.choices[0].message.tool_calls) {
    // Add the assistant's response to conversation
    messages.push({
        role: "assistant",
        content: response.choices[0].message.content,
        tool_calls: response.choices[0].message.tool_calls
    });

    // Execute each tool call
    for (const toolCall of response.choices[0].message.tool_calls) {
        const functionName = toolCall.function.name;
        const functionArgs = JSON.parse(toolCall.function.arguments);

        let result: object;
        if (functionName === "get_current_weather") {
            result = getCurrentWeather(functionArgs.location, functionArgs.unit);
        } else {
            result = { error: "Unknown function" };
        }

        // Add the function result to conversation
        messages.push({
            role: "tool",
            tool_call_id: toolCall.id,
            content: JSON.stringify(result),
            name: functionName
        });
    }

    // Get final response with function results
    const finalResponse = await client.chat.completions({
        model: "claude-4-sonnet",
        messages,
        tools
    });

    console.log(finalResponse.choices[0].message.content);
} else {
    console.log(response.choices[0].message.content);
}
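The if/else chain above is fine for a single tool; once you register several, a lookup table keeps dispatch flat and makes adding tools a one-line change. A sketch of that design (handler names and the `dispatchTool` helper are ours; the weather handler is inlined here to keep the example self-contained):

```typescript
// Map of tool name -> handler. Each handler receives the parsed arguments.
type ToolHandler = (args: Record<string, unknown>) => object;

const toolHandlers: Record<string, ToolHandler> = {
    get_current_weather: (args) => ({
        location: String(args.location),
        temperature: args.unit === "fahrenheit" ? "59°F" : "15°C",
        condition: "Partly cloudy",
    }),
};

// Resolve a tool call by name, falling back to an error object
// that the model can read and recover from.
function dispatchTool(name: string, args: Record<string, unknown>): object {
    const handler = toolHandlers[name];
    return handler ? handler(args) : { error: `Unknown function: ${name}` };
}
```

Inside the loop, the whole if/else block then collapses to `const result = dispatchTool(functionName, functionArgs);`.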
Note: models sometimes generate slightly malformed JSON in function arguments, so a bare JSON.parse() call can throw. A JSON-repair library (such as the jsonrepair package on npm) handles these cases more robustly.
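To show the shape of that fallback without adding a dependency, here is a forgiving parser sketch that retries after a couple of common repairs (our own helper; a real repair library covers far more cases):

```typescript
// Try strict JSON.parse first; on failure, apply two common repairs
// (trailing commas, single-quoted strings) and retry. Returns undefined
// if the text still isn't valid JSON.
function tryParseJson(text: string): unknown {
    try {
        return JSON.parse(text);
    } catch {
        const repaired = text
            .replace(/,\s*([}\]])/g, "$1") // drop trailing commas
            .replace(/'/g, '"');           // naive quote normalization
        try {
            return JSON.parse(repaired);
        } catch {
            return undefined;
        }
    }
}
```

In the handler loop, `tryParseJson(toolCall.function.arguments)` would replace the bare `JSON.parse` call, with an error tool message sent back when it returns `undefined`.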

Full Copy-Paste Example

import { ChatCompletionsParams } from "premai/resources/chat";
import PremAI from "premai";

const tools = [
    {
        type: "function",
        function: {
            name: "get_current_weather",
            description: "Get the current weather in a given location",
            parameters: {
                type: "object",
                properties: {
                    location: {
                        type: "string",
                        description: "The city and state, e.g. San Francisco, CA",
                    },
                    unit: {
                        type: "string",
                        enum: ["celsius", "fahrenheit"]
                    },
                },
                required: ["location"],
            },
        },
    }
];

// Mock function implementation
function getCurrentWeather(location: string, unit: string = "celsius"): object {
    // In a real app, you'd call a weather API here
    return {
        location: location,
        temperature: unit === "celsius" ? "15°C" : "59°F",
        condition: "Partly cloudy",
        humidity: "65%"
    };
}

const client = new PremAI({
    apiKey: process.env.PREMAI_API_KEY || "",
});

const messages: ChatCompletionsParams.Message[] = [
    { role: "user", content: "What's the weather like in Boston today?" }
] as any;

const response = await client.chat.completions({
    model: "claude-4-sonnet",
    messages,
    tools,
    tool_choice: "auto"
});

// Check if the model wants to call a function
if (response.choices[0].message.tool_calls) {
    // Add the assistant's response to conversation
    messages.push({
        role: "assistant",
        content: response.choices[0].message.content,
        tool_calls: response.choices[0].message.tool_calls
    });

    // Execute each tool call
    for (const toolCall of response.choices[0].message.tool_calls) {
        const functionName = toolCall.function.name;
        const functionArgs = JSON.parse(toolCall.function.arguments);

        let result: object;
        if (functionName === "get_current_weather") {
            result = getCurrentWeather(functionArgs.location, functionArgs.unit);
        } else {
            result = { error: "Unknown function" };
        }

        // Add the function result to conversation
        messages.push({
            role: "tool",
            tool_call_id: toolCall.id,
            content: JSON.stringify(result),
            name: functionName
        });
    }

    // Get final response with function results
    const finalResponse = await client.chat.completions({
        model: "claude-4-sonnet",
        messages,
        tools
    });

    console.log(finalResponse.choices[0].message.content);
} else {
    console.log(response.choices[0].message.content);
}

Pro Tips

  • Consider a JSON-repair library (e.g. the jsonrepair package) for parsing function arguments; it is more robust than calling JSON.parse() directly, since models occasionally emit slightly malformed JSON.
  • Always validate function arguments before executing functions to prevent errors or security issues.
  • Use descriptive function names and detailed descriptions to help the model understand when to call each function.
  • Set tool_choice: "auto" to let the model decide when to use functions, or set it to "none" to disable function calling for a specific request.
  • For production applications, implement proper error handling and logging for function executions.
  • Consider implementing rate limiting and authentication for external API calls within your functions.
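For the last tip, a minimal in-process rate limiter is often enough to guard external API calls made from tool handlers. A fixed-window sketch (the `RateLimiter` class is our own illustration, not part of any SDK):

```typescript
// Fixed-window rate limiter: at most `limit` calls per `windowMs` window.
class RateLimiter {
    private count = 0;
    private windowStart = 0;

    constructor(private limit: number, private windowMs: number) {}

    // Returns true if a call is allowed at time `now` (ms since epoch).
    allow(now: number = Date.now()): boolean {
        if (now - this.windowStart >= this.windowMs) {
            // New window: reset the counter.
            this.windowStart = now;
            this.count = 0;
        }
        if (this.count >= this.limit) return false;
        this.count += 1;
        return true;
    }
}
```

A tool handler would check `limiter.allow()` before hitting the external API and return an error object to the model when the budget is exhausted. Fixed windows allow short bursts at window boundaries; a token bucket smooths that out if it matters for your API.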

Common Use-Cases

  • Building AI assistants that can fetch real-time data (weather, stock prices, news)
  • Creating chatbots that can interact with databases or APIs
  • Implementing AI agents that can perform calculations or data processing
  • Developing automation tools that can execute actions based on natural language commands

Tool Choice Options

You can control when and how the model uses functions with the tool_choice parameter:
  • "auto" (default): Model decides whether to call functions
  • "none": Model will not call any functions
  • {"type": "function", "function": {"name": "function_name"}}: Force the model to call a specific function