
OpenAI Function Calling: Tool Use, Structured Outputs, and Multi-Step Agents

Master OpenAI function calling and tool use: define type-safe tools in TypeScript, handle multi-step agent loops, use structured outputs with JSON Schema, and build reliable agentic pipelines with error handling.

Viprasol Tech Team
November 11, 2026
14 min read

Function calling transforms an LLM from a text generator into an orchestrator. Instead of producing free-form text, the model decides which tools to call, with what arguments, and when it has enough information to answer. OpenAI's function calling API is the production-ready version of this: you define tools with JSON Schema, the model returns structured tool calls, you execute them, and the loop continues until the model signals it's done.

This post covers the complete implementation: type-safe tool definitions in TypeScript, the agent execution loop, structured outputs for guaranteed JSON, error handling patterns, and a real-world multi-step research agent.

How Function Calling Works

1. You send: messages + tool definitions (JSON Schema)
2. Model returns: tool_calls = [{name: "search_web", arguments: {query: "..."}}]
3. You execute: call your actual search function
4. You send back: tool results as messages
5. Model returns: final answer (or more tool calls)
6. Repeat until finish_reason = "stop"

The model never executes code. It only tells you what to call. You decide whether to actually do it.
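The transcript the loop accumulates can be sketched in code. The types below are hand-simplified for illustration (the real SDK types in `openai/resources/chat/completions` are stricter), but two details are worth internalizing: `function.arguments` arrives as a JSON *string*, not an object, and every tool result must echo the `tool_call_id` it answers.

```typescript
// Hand-simplified shapes for illustration only; use the SDK's own types in real code.
type ToolCall = {
  id: string;
  type: 'function';
  function: { name: string; arguments: string }; // arguments is a JSON string
};

type Message =
  | { role: 'system' | 'user'; content: string }
  | { role: 'assistant'; content: string | null; tool_calls?: ToolCall[] }
  | { role: 'tool'; tool_call_id: string; content: string };

// One full round-trip, as the messages array the loop builds up:
const transcript: Message[] = [
  { role: 'user', content: 'What is the weather in Berlin?' },
  // Turn 1: the model asks for a tool instead of answering
  {
    role: 'assistant',
    content: null,
    tool_calls: [
      {
        id: 'call_1',
        type: 'function',
        function: { name: 'search_web', arguments: '{"query":"Berlin weather"}' },
      },
    ],
  },
  // You execute the tool and echo the result back, keyed by tool_call_id
  { role: 'tool', tool_call_id: 'call_1', content: '{"tempC":12}' },
  // Turn 2: the model answers; finish_reason would now be "stop"
  { role: 'assistant', content: 'It is about 12°C in Berlin right now.' },
];
```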


1. Type-Safe Tool Definitions

// src/lib/ai/tools.ts
import { ChatCompletionTool } from 'openai/resources/chat/completions';

// Define tools as typed objects first, then convert to OpenAI format
export interface ToolDefinition<TInput, TOutput> {
  name: string;
  description: string;
  parameters: object; // JSON Schema
  execute: (input: TInput) => Promise<TOutput>;
}

// Search tool
interface SearchInput {
  query: string;
  maxResults?: number;
}

interface SearchResult {
  title: string;
  url: string;
  snippet: string;
}

export const searchWebTool: ToolDefinition<SearchInput, SearchResult[]> = {
  name: 'search_web',
  description:
    'Search the web for current information. Use for recent events, prices, or facts that may have changed.',
  parameters: {
    type: 'object',
    properties: {
      query: {
        type: 'string',
        description: 'The search query',
      },
      maxResults: {
        type: 'number',
        description: 'Maximum number of results to return (1-10)',
        default: 5,
      },
    },
    required: ['query'],
    additionalProperties: false,
  },
  execute: async ({ query, maxResults = 5 }) => {
    // Real implementation: Brave API, SerpAPI, Bing, etc.
    const results = await braveSearch(query, maxResults);
    return results;
  },
};

// Database query tool
interface QueryDbInput {
  table: 'users' | 'orders' | 'products';
  filters: Record<string, string | number | boolean>;
  limit?: number;
}

export const queryDatabaseTool: ToolDefinition<QueryDbInput, object[]> = {
  name: 'query_database',
  description:
    'Query the application database for structured data. Use for current user data, order history, or product information.',
  parameters: {
    type: 'object',
    properties: {
      table: {
        type: 'string',
        enum: ['users', 'orders', 'products'],
        description: 'The database table to query',
      },
      filters: {
        type: 'object',
        description: 'Key-value pairs to filter results',
        additionalProperties: { type: ['string', 'number', 'boolean'] },
      },
      limit: {
        type: 'number',
        description: 'Maximum rows to return (1-100)',
        default: 10,
      },
    },
    required: ['table', 'filters'],
    additionalProperties: false,
  },
  execute: async ({ table, filters, limit = 10 }) => {
    // `db` is your data-access layer (a Prisma-style client is shown here)
    return db[table].findMany({ where: filters, take: limit });
  },
};

// Send email tool
interface SendEmailInput {
  to: string;
  subject: string;
  body: string;
}

export const sendEmailTool: ToolDefinition<SendEmailInput, { messageId: string }> = {
  name: 'send_email',
  description: 'Send an email to a user. Use only when explicitly requested.',
  parameters: {
    type: 'object',
    properties: {
      to: { type: 'string', description: 'Recipient email address' },
      subject: { type: 'string', description: 'Email subject line' },
      body: { type: 'string', description: 'Plain text email body' },
    },
    required: ['to', 'subject', 'body'],
    additionalProperties: false,
  },
  execute: async ({ to, subject, body }) => {
    // `emailService` wraps your provider (SES, Postmark, Resend, etc.)
    const { messageId } = await emailService.send({ to, subject, body });
    return { messageId };
  },
};

// Convert ToolDefinition to OpenAI ChatCompletionTool format
export function toOpenAITool(tool: ToolDefinition<any, any>): ChatCompletionTool {
  return {
    type: 'function',
    function: {
      name: tool.name,
      description: tool.description,
      parameters: tool.parameters,
      strict: true, // Enforce JSON Schema (prevents hallucinated keys)
    },
  };
}
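Even with `strict: true`, it is cheap insurance to validate model-supplied arguments at runtime before handing them to `execute` (older models ignore `strict`, and the arguments still arrive as a JSON string you parse yourself). The checker below is a deliberately minimal sketch covering only `required` and `additionalProperties`; in production a full JSON Schema validator such as Ajv is the better fit.

```typescript
interface MinimalSchema {
  required?: string[];
  properties?: Record<string, unknown>;
  additionalProperties?: boolean;
}

// Returns a list of problems; an empty list means the args look structurally sound.
export function checkArgs(
  schema: MinimalSchema,
  args: Record<string, unknown>
): string[] {
  const problems: string[] = [];
  for (const key of schema.required ?? []) {
    if (!(key in args)) problems.push(`missing required key: ${key}`);
  }
  if (schema.additionalProperties === false && schema.properties) {
    for (const key of Object.keys(args)) {
      if (!(key in schema.properties)) problems.push(`unexpected key: ${key}`);
    }
  }
  return problems;
}
```

Run it on the parsed arguments before executing; feeding any problems back to the model as a tool error message usually lets it self-correct on the next turn.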

🤖 AI Is Not the Future. It Is Right Now

Businesses using AI automation cut manual work by 60–80%. We build production-ready AI systems: RAG pipelines, LLM integrations, custom ML models, and AI agent workflows.

  • LLM integration (OpenAI, Anthropic, Gemini, local models)
  • RAG systems that answer from your own data
  • AI agents that take real actions, not just chat
  • Custom ML models for prediction, classification, detection

2. The Agent Execution Loop

// src/lib/ai/agent.ts
import OpenAI from 'openai';
import type {
  ChatCompletionMessageParam,
  ChatCompletionToolMessageParam,
} from 'openai/resources/chat/completions';
import {
  searchWebTool,
  queryDatabaseTool,
  sendEmailTool,
  toOpenAITool,
  type ToolDefinition,
} from './tools';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const ALL_TOOLS = [searchWebTool, queryDatabaseTool, sendEmailTool];
const TOOL_MAP = new Map(ALL_TOOLS.map((t) => [t.name, t]));

interface AgentConfig {
  model?: string;
  systemPrompt: string;
  maxIterations?: number;
  tools?: ToolDefinition<any, any>[];
}

interface AgentResult {
  answer: string;
  toolCallCount: number;
  iterations: number;
  tokensUsed: number;
}

export async function runAgent(
  userMessage: string,
  config: AgentConfig
): Promise<AgentResult> {
  const {
    model = 'gpt-4o',
    systemPrompt,
    maxIterations = 10,
    tools = ALL_TOOLS,
  } = config;

  const messages: ChatCompletionMessageParam[] = [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: userMessage },
  ];

  const openaiTools = tools.map(toOpenAITool);
  let totalTokens = 0;
  let toolCallCount = 0;
  let iterations = 0;

  while (iterations < maxIterations) {
    iterations++;

    const response = await openai.chat.completions.create({
      model,
      messages,
      tools: openaiTools,
      tool_choice: 'auto', // Model decides when to call tools
      parallel_tool_calls: true, // Allow multiple tools in one turn
      temperature: 0,  // Deterministic for agentic tasks
    });

    const choice = response.choices[0];
    totalTokens += response.usage?.total_tokens ?? 0;

    // Add assistant message to conversation
    messages.push(choice.message);

    // Check if model is done
    if (choice.finish_reason === 'stop') {
      return {
        answer: choice.message.content ?? '',
        toolCallCount,
        iterations,
        tokensUsed: totalTokens,
      };
    }

    // Handle tool calls
    if (choice.finish_reason === 'tool_calls' && choice.message.tool_calls) {
      const toolResultMessages: ChatCompletionToolMessageParam[] = await Promise.all(
        choice.message.tool_calls.map(async (toolCall) => {
          toolCallCount++;
          const tool = TOOL_MAP.get(toolCall.function.name);

          if (!tool) {
            return {
              role: 'tool' as const,
              tool_call_id: toolCall.id,
              content: JSON.stringify({ error: `Unknown tool: ${toolCall.function.name}` }),
            };
          }

          try {
            const args = JSON.parse(toolCall.function.arguments);
            console.log(`🔧 Calling ${toolCall.function.name}:`, args);

            const result = await tool.execute(args);
            console.log(`✅ ${toolCall.function.name} returned:`, result);

            return {
              role: 'tool' as const,
              tool_call_id: toolCall.id,
              content: JSON.stringify(result),
            };
          } catch (err: any) {
            console.error(`โŒ ${toolCall.function.name} failed:`, err.message);
            return {
              role: 'tool' as const,
              tool_call_id: toolCall.id,
              content: JSON.stringify({ error: err.message }),
            };
          }
        })
      );

      messages.push(...toolResultMessages);
    }

    // Response hit the token limit before finishing
    if (choice.finish_reason === 'length') {
      throw new Error('Agent response truncated: increase max_tokens or reduce context');
    }
  }

  throw new Error(`Agent exceeded maximum iterations (${maxIterations})`);
}
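One thing the loop leaves out: the `create` call can fail transiently (429 rate limits, network blips), and a crash mid-loop wastes every token spent so far. A small retry helper with exponential backoff is a common addition; this sketch makes the sleep function injectable so it can be tested without real delays.

```typescript
// Exponential backoff: 500ms, 1s, 2s, 4s, ... capped at 30s.
export function backoffDelay(attempt: number, baseMs = 500): number {
  return Math.min(baseMs * 2 ** attempt, 30_000);
}

export async function withRetries<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms))
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Sleep before the next attempt, but not after the final failure
      if (attempt < maxAttempts - 1) await sleep(backoffDelay(attempt));
    }
  }
  throw lastErr;
}
```

Wrap the API call as `withRetries(() => openai.chat.completions.create({ ... }))`; in production you would retry only on retryable status codes rather than every error.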

3. Structured Outputs for Guaranteed JSON

Structured Outputs (available on gpt-4o-2024-08-06+) guarantee the response matches your JSON Schema exactly โ€” no parsing errors, no missing fields.

// src/lib/ai/structured.ts
import OpenAI from 'openai';
import { zodResponseFormat } from 'openai/helpers/zod';
import { z } from 'zod';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Define your output schema with Zod
const CustomerSentimentSchema = z.object({
  overallSentiment: z.enum(['positive', 'neutral', 'negative']),
  score: z.number().min(-1).max(1),
  topics: z.array(z.object({
    topic: z.string(),
    sentiment: z.enum(['positive', 'neutral', 'negative']),
    excerpt: z.string(),
  })),
  suggestedAction: z.enum(['no-action', 'follow-up', 'escalate', 'refund']),
  confidence: z.number().min(0).max(1),
});

type CustomerSentiment = z.infer<typeof CustomerSentimentSchema>;

export async function analyzeSentiment(
  reviewText: string
): Promise<CustomerSentiment> {
  const response = await openai.beta.chat.completions.parse({
    model: 'gpt-4o-2024-08-06',
    messages: [
      {
        role: 'system',
        content:
          'Analyze customer feedback and extract structured sentiment data. Be precise and conservative with scores.',
      },
      { role: 'user', content: reviewText },
    ],
    response_format: zodResponseFormat(CustomerSentimentSchema, 'sentiment'),
    temperature: 0,
  });

  const result = response.choices[0].message.parsed;
  if (!result) throw new Error('Structured output parsing failed');

  return result;
}

// Usage:
// const sentiment = await analyzeSentiment("Your product broke after 2 days...");
// → { overallSentiment: 'negative', score: -0.8, suggestedAction: 'refund', ... }
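One caveat: structured outputs guarantee the *shape*, but Zod refinements like `.min()` and `.max()` may not survive the conversion to the restricted JSON Schema subset the API accepts, so numeric bounds are worth re-checking client-side. A minimal sketch; the field names mirror `CustomerSentimentSchema` above:

```typescript
export function inRange(value: number, min: number, max: number): boolean {
  return Number.isFinite(value) && value >= min && value <= max;
}

// Defense-in-depth bounds check on the parsed result before acting on it.
export function validateSentimentRanges(s: {
  score: number;
  confidence: number;
}): string[] {
  const problems: string[] = [];
  if (!inRange(s.score, -1, 1)) problems.push(`score out of range: ${s.score}`);
  if (!inRange(s.confidence, 0, 1)) problems.push(`confidence out of range: ${s.confidence}`);
  return problems;
}
```

Re-running `CustomerSentimentSchema.safeParse` on the parsed object achieves the same thing if you already depend on Zod.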

Structured Outputs Without Zod

// Direct JSON Schema approach (more control)
interface ExtractedEntities {
  companies: string[];
  people: string[];
  dates: string[];
  amounts: Array<{ value: number; currency: string }>;
}

export async function extractEntities(text: string): Promise<ExtractedEntities> {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-2024-08-06',
    messages: [
      { role: 'system', content: 'Extract named entities from text.' },
      { role: 'user', content: text },
    ],
    response_format: {
      type: 'json_schema',
      json_schema: {
        name: 'entities',
        strict: true,
        schema: {
          type: 'object',
          properties: {
            companies: { type: 'array', items: { type: 'string' } },
            people: { type: 'array', items: { type: 'string' } },
            dates: { type: 'array', items: { type: 'string' } },
            amounts: {
              type: 'array',
              items: {
                type: 'object',
                properties: {
                  value: { type: 'number' },
                  currency: { type: 'string' },
                },
                required: ['value', 'currency'],
                additionalProperties: false,
              },
            },
          },
          required: ['companies', 'people', 'dates', 'amounts'],
          additionalProperties: false,
        },
      },
    },
  });

  return JSON.parse(response.choices[0].message.content!) as ExtractedEntities;
}
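With structured outputs the model can also refuse (typically for safety reasons), in which case the message carries a `refusal` string and `content` is null. A small helper that handles both paths before parsing; `StructuredMessage` is a hand-simplified stand-in for the SDK's message type:

```typescript
interface StructuredMessage {
  content: string | null;
  refusal?: string | null;
}

// Parse a structured-output message, surfacing refusals as explicit errors
// instead of letting JSON.parse blow up on null content.
export function parseStructured<T>(msg: StructuredMessage): T {
  if (msg.refusal) {
    throw new Error(`Model refused: ${msg.refusal}`);
  }
  if (!msg.content) {
    throw new Error('Empty structured response');
  }
  return JSON.parse(msg.content) as T;
}
```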

⚡ Your Competitors Are Already Using AI. Are You?

We build AI systems that actually work in production, not demos that die in a Colab notebook. From data pipeline to deployed model to real business outcomes.

  • AI agent systems that run autonomously, not just chatbots
  • Integrates with your existing tools (CRM, ERP, Slack, etc.)
  • Explainable outputs: know why the model decided what it did
  • Free AI opportunity audit for your business

4. Multi-Step Research Agent

A real-world agent that researches a topic, queries internal data, and produces a structured report.

// src/agents/research-agent.ts
import OpenAI from 'openai';
import { zodResponseFormat } from 'openai/helpers/zod';
import { z } from 'zod';
import { runAgent } from '../lib/ai/agent';
import { searchWebTool, queryDatabaseTool } from '../lib/ai/tools';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const RESEARCH_SYSTEM_PROMPT = `You are a research assistant with access to web search and our internal database.

When answering questions:
1. Search the web for current information if needed
2. Query the database for internal data if relevant
3. Synthesize findings into a clear, structured answer
4. Always cite your sources

Do not send emails unless explicitly instructed.
If you cannot find reliable information, say so clearly.`;

const ResearchReportSchema = z.object({
  summary: z.string(),
  keyFindings: z.array(z.string()),
  sources: z.array(z.string()),
  dataUsed: z.boolean(),
  confidence: z.enum(['high', 'medium', 'low']),
});

// Derive the TypeScript type from the schema so the two can never drift apart
type ResearchReport = z.infer<typeof ResearchReportSchema>;

export async function researchTopic(
  question: string,
  tenantId: string
): Promise<ResearchReport> {
  // Step 1: Run the agentic loop to gather information
  const agentResult = await runAgent(question, {
    model: 'gpt-4o',
    systemPrompt: RESEARCH_SYSTEM_PROMPT,
    tools: [searchWebTool, queryDatabaseTool], // No email tool for research
    maxIterations: 8,
  });

  // Step 2: Structure the raw answer into a typed report
  const report = await openai.beta.chat.completions.parse({
    model: 'gpt-4o-2024-08-06',
    messages: [
      {
        role: 'system',
        content: 'Convert the research findings into a structured report.',
      },
      {
        role: 'user',
        content: `Research findings:\n\n${agentResult.answer}`,
      },
    ],
    response_format: zodResponseFormat(ResearchReportSchema, 'report'),
    temperature: 0,
  });

  return report.choices[0].message.parsed!;
}

5. Error Handling and Safety Patterns

// src/lib/ai/safe-agent.ts

// Tool execution with timeout (clear the timer so it can't keep the process alive)
async function executeWithTimeout<T>(
  fn: () => Promise<T>,
  timeoutMs: number,
  toolName: string
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`Tool ${toolName} timed out after ${timeoutMs}ms`)),
      timeoutMs
    );
  });
  return Promise.race([fn(), timeout]).finally(() => {
    if (timer) clearTimeout(timer);
  });
}

// Tool call rate limiting per session
const toolCallCounts = new Map<string, number>();

function checkToolCallLimit(sessionId: string, maxCalls: number = 20): void {
  const count = toolCallCounts.get(sessionId) ?? 0;
  if (count >= maxCalls) {
    throw new Error(`Tool call limit (${maxCalls}) exceeded for session ${sessionId}`);
  }
  toolCallCounts.set(sessionId, count + 1);
}

// Sensitive tool guard: require explicit user confirmation
const SENSITIVE_TOOLS = new Set(['send_email', 'delete_record', 'charge_payment']);

async function guardSensitiveTool(
  toolName: string,
  args: unknown,
  confirmationCallback?: (tool: string, args: unknown) => Promise<boolean>
): Promise<void> {
  if (!SENSITIVE_TOOLS.has(toolName)) return;

  if (!confirmationCallback) {
    throw new Error(`Tool ${toolName} requires user confirmation but no callback provided`);
  }

  const confirmed = await confirmationCallback(toolName, args);
  if (!confirmed) {
    throw new Error(`User declined to execute ${toolName}`);
  }
}

// Input sanitization for tool arguments
function sanitizeToolArgs(toolName: string, args: Record<string, unknown>): Record<string, unknown> {
  // Strip obvious injection patterns. Naive filtering like this is
  // defense-in-depth only, never a real security boundary.
  if (typeof args.query === 'string') {
    args.query = args.query.replace(/ignore previous instructions/gi, '[filtered]');
    args.query = args.query.slice(0, 500); // Cap query length
  }
  }
  return args;
}
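A caveat on the module-level `toolCallCounts` map: entries are never deleted, so a long-running process leaks a little memory per conversation. Wrapping the counter in a small class with explicit teardown keeps the same limit semantics and is easier to test (a sketch):

```typescript
export class ToolBudget {
  private counts = new Map<string, number>();

  constructor(private maxCalls = 20) {}

  // Throws once a session exhausts its budget; call before each tool execution.
  spend(sessionId: string): void {
    const count = this.counts.get(sessionId) ?? 0;
    if (count >= this.maxCalls) {
      throw new Error(`Tool call limit (${this.maxCalls}) exceeded for session ${sessionId}`);
    }
    this.counts.set(sessionId, count + 1);
  }

  // Free the counter when the conversation ends
  endSession(sessionId: string): void {
    this.counts.delete(sessionId);
  }
}
```

Call `endSession` from whatever hook ends the conversation: WebSocket close, session expiry, or an explicit "done" from the client.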

6. Streaming Tool Calls

// src/lib/ai/streaming-agent.ts
import OpenAI from 'openai';
import type {
  ChatCompletionMessageParam,
  ChatCompletionTool,
} from 'openai/resources/chat/completions';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function* streamAgentResponse(
  messages: ChatCompletionMessageParam[],
  tools: ChatCompletionTool[]
): AsyncGenerator<{ type: 'text' | 'tool_call' | 'done'; content: string }> {
  const stream = openai.beta.chat.completions.stream({
    model: 'gpt-4o',
    messages,
    tools,
    tool_choice: 'auto',
  });

  for await (const chunk of stream) {
    const delta = chunk.choices[0]?.delta;

    if (delta?.content) {
      yield { type: 'text', content: delta.content };
    }

    if (delta?.tool_calls?.[0]?.function?.name) {
      yield {
        type: 'tool_call',
        content: `Calling ${delta.tool_calls[0].function.name}...`,
      };
    }
  }

  yield { type: 'done', content: '' };
}

// Next.js Route Handler: stream to client
// (POST, not GET, because we read a JSON request body)
import { NextRequest, NextResponse } from 'next/server';

export async function POST(req: NextRequest) {
  const { message } = await req.json();

  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      const gen = streamAgentResponse(
        [{ role: 'user', content: message }],
        ALL_TOOLS.map(toOpenAITool)
      );

      for await (const chunk of gen) {
        controller.enqueue(
          encoder.encode(`data: ${JSON.stringify(chunk)}\n\n`)
        );
      }
      controller.close();
    },
  });

  return new NextResponse(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
}
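The route handler emits Server-Sent Events frames of the form `data: <json>\n\n`; the browser needs the inverse. A minimal encoder/decoder pair (a sketch that ignores SSE comments, event names, and multi-line data fields):

```typescript
// Encode one payload as an SSE data frame.
export function encodeSSE(payload: unknown): string {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// Decode a buffer of complete frames back into payloads.
export function decodeSSE(buffer: string): unknown[] {
  return buffer
    .split('\n\n')
    .filter((line) => line.startsWith('data: '))
    .map((line) => JSON.parse(line.slice('data: '.length)));
}
```

On the client, a `fetch` reader plus `decodeSSE` covers the happy path, but network chunks can split mid-frame, so buffer incoming text until you see the blank-line terminator before decoding.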

Cost Reference

| Model | Input (per 1M tokens) | Output (per 1M tokens) | Tool call overhead | Best for |
|---|---|---|---|---|
| gpt-4o | $2.50 | $10.00 | ~200 tokens per call | Production agents |
| gpt-4o-mini | $0.15 | $0.60 | ~200 tokens per call | High-volume, simpler tasks |
| gpt-4o (structured) | $2.50 | $10.00 | Same | Guaranteed JSON output |
| o1-mini | $1.10 | $4.40 | Limited tool support | Complex reasoning chains |

Typical agent cost: A 5-step research agent with 2 tool calls averages ~3,000 tokens total = $0.03 per run with gpt-4o. At 10,000 runs/month = $300/month.
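The arithmetic behind that estimate is worth keeping as a helper so dashboards and alerts use the same numbers. The prices below are copied from the table above; verify them against current OpenAI pricing before relying on them.

```typescript
// Per-million-token prices in USD, taken from the cost table above.
const PRICES_PER_MILLION = {
  'gpt-4o': { input: 2.5, output: 10.0 },
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
} as const;

export function estimateCostUSD(
  model: keyof typeof PRICES_PER_MILLION,
  inputTokens: number,
  outputTokens: number
): number {
  const p = PRICES_PER_MILLION[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

For example, `estimateCostUSD('gpt-4o', 2000, 1000)` comes out to $0.015, the same ballpark as the per-run figure above; the `usage.total_tokens` field the agent loop already accumulates gives you the real inputs.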



Working With Viprasol

Building an AI feature that needs to take actions (search, query databases, send emails, call APIs)? We design and implement production-grade function calling pipelines with type-safe tool definitions, structured output schemas, safety guardrails, and cost monitoring that make agentic AI reliable in production.

Talk to our team → | Explore our AI/ML services →


About the Author


Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading

Want to Implement AI in Your Business?

From chatbots to predictive models: harness the power of AI with a team that delivers.

Free consultation • No commitment • Response within 24 hours

Viprasol · AI Agent Systems

Ready to automate your business with AI agents?

We build custom multi-agent AI systems that handle sales, support, ops, and content across Telegram, WhatsApp, Slack, and 20+ other platforms. We run our own business on these systems.