OpenAI Assistants API: Threads, File Search, Code Interpreter, and Function Tools

Build production AI assistants with the OpenAI Assistants API: creating assistants with tools, managing threads and messages, streaming responses, file search with vector stores, code interpreter, and function calling in TypeScript.

Viprasol Tech Team
November 30, 2026
13 min read

The OpenAI Assistants API provides managed thread history, built-in file search, a code interpreter sandbox, and function calling, without requiring you to build any of the orchestration infrastructure yourself. Instead of tracking conversation history in your own database and implementing RAG from scratch, the API manages context windows, vector stores, and tool execution loops.

This post covers the production-ready implementation: creating assistants with tools, managing threads, streaming responses, attaching files for knowledge retrieval, using the code interpreter, and implementing function calling with your own APIs.

Architecture Overview

User message
      │
      ▼
Create/retrieve Thread (conversation container)
      │
      ▼
Add Message to Thread
      │
      ▼
Create Run (execute assistant on thread)
      │
      ▼
Run loop:
  ├── in_progress → poll or stream
  ├── requires_action → execute function tools → submit outputs
  └── completed → retrieve messages → return to user
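
If you poll instead of stream, the run loop above reduces to: fetch the run, branch on its status, repeat. A minimal sketch with the fetcher injected so it stands in for `openai.beta.threads.runs.retrieve` (the status names mirror the Runs API lifecycle; `pollRun` itself is illustrative, not an SDK function):

```typescript
// Terminal and action-required states from the Runs API lifecycle
type RunStatus =
  | 'queued' | 'in_progress' | 'requires_action'
  | 'completed' | 'failed' | 'cancelled' | 'expired';

interface RunSnapshot { id: string; status: RunStatus }

// Poll until the run either finishes or pauses to request tool outputs
export async function pollRun(
  fetchRun: () => Promise<RunSnapshot>,
  intervalMs = 500
): Promise<RunSnapshot> {
  for (;;) {
    const run = await fetchRun();
    if (
      run.status === 'requires_action' ||
      run.status === 'completed' ||
      run.status === 'failed' ||
      run.status === 'cancelled' ||
      run.status === 'expired'
    ) {
      return run;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

In production you would pass `() => openai.beta.threads.runs.retrieve(threadId, runId)` as the fetcher, and branch on `requires_action` to execute tools before polling again.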

1. Creating an Assistant

// src/lib/ai/assistant.ts
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function createSupportAssistant(): Promise<string> {
  const assistant = await openai.beta.assistants.create({
    name: 'Viprasol Support Assistant',
    instructions: `You are a helpful technical support assistant for Viprasol's SaaS platform.

Your capabilities:
- Answer questions about the platform using the provided documentation
- Help debug API integration issues
- Generate code examples in TypeScript and Python
- Look up user account information using the provided tools

Guidelines:
- Be concise and technical: users are developers
- Always provide runnable code examples when relevant
- If you can't find information in the docs, say so instead of hallucinating
- Use the get_account_info tool before answering billing or account questions`,

    model: 'gpt-4o',

    tools: [
      { type: 'file_search' },          // RAG over uploaded docs
      { type: 'code_interpreter' },     // Python sandbox for calculations
      {
        type: 'function',
        function: {
          name: 'get_account_info',
          description: 'Get account information for the current user including plan, usage, and billing status',
          parameters: {
            type: 'object',
            properties: {
              include_usage: {
                type: 'boolean',
                description: 'Whether to include API usage statistics',
              },
            },
            required: [],
          },
        },
      },
      {
        type: 'function',
        function: {
          name: 'create_support_ticket',
          description: 'Create a support ticket for issues that require human intervention',
          parameters: {
            type: 'object',
            properties: {
              title: { type: 'string', description: 'Brief title of the issue' },
              description: { type: 'string', description: 'Detailed description' },
              priority: {
                type: 'string',
                enum: ['low', 'medium', 'high', 'critical'],
              },
            },
            required: ['title', 'description', 'priority'],
          },
        },
      },
    ],

    // Attach vector store with documentation (created separately)
    tool_resources: {
      file_search: {
        vector_store_ids: [process.env.OPENAI_VECTOR_STORE_ID!],
      },
    },

    temperature: 0.1,  // Lower temperature for support (more deterministic)
    top_p: 0.95,
  });

  return assistant.id;
}

// Store assistant ID in env โ€” create once, reuse across requests
// OPENAI_ASSISTANT_ID=asst_...
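
The two function-tool definitions above repeat the same JSON Schema scaffolding. A small helper (hypothetical, not part of the OpenAI SDK) keeps additional tools terse:

```typescript
// Minimal JSON Schema property shape used by function-tool parameters
interface JsonSchemaProperty {
  type: string;
  description?: string;
  enum?: string[];
}

// Builds the { type: 'function', ... } tool object the Assistants API expects
export function functionTool(
  name: string,
  description: string,
  properties: Record<string, JsonSchemaProperty>,
  required: string[] = []
) {
  return {
    type: 'function' as const,
    function: {
      name,
      description,
      parameters: { type: 'object', properties, required },
    },
  };
}
```

With it, the `get_account_info` definition collapses to a single call: `functionTool('get_account_info', 'Get account information...', { include_usage: { type: 'boolean', description: 'Whether to include API usage statistics' } })`.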

🤖 AI Is Not the Future. It Is Right Now

Businesses using AI automation cut manual work by 60–80%. We build production-ready AI systems: RAG pipelines, LLM integrations, custom ML models, and AI agent workflows.

  • LLM integration (OpenAI, Anthropic, Gemini, local models)
  • RAG systems that answer from your own data
  • AI agents that take real actions, not just chat
  • Custom ML models for prediction, classification, detection

2. Managing Threads and Messages

// src/lib/ai/threads.ts
import OpenAI from 'openai';
import { db } from '../db';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Map your user's conversation ID to an OpenAI thread ID
export async function getOrCreateThread(conversationId: string): Promise<string> {
  const existing = await db.conversation.findUnique({
    where: { id: conversationId },
    select: { openaiThreadId: true },
  });

  if (existing?.openaiThreadId) {
    return existing.openaiThreadId;
  }

  const thread = await openai.beta.threads.create({
    metadata: { conversationId },  // For debugging in OpenAI dashboard
  });

  await db.conversation.update({
    where: { id: conversationId },
    data: { openaiThreadId: thread.id },
  });

  return thread.id;
}

// Add a message to the thread
export async function addUserMessage(
  threadId: string,
  content: string,
  attachments?: Array<{ fileId: string }>
): Promise<string> {
  const message = await openai.beta.threads.messages.create(threadId, {
    role: 'user',
    content,
    attachments: attachments?.map((a) => ({
      file_id: a.fileId,
      tools: [{ type: 'file_search' }],
    })),
  });

  return message.id;
}
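
Once a run completes, the assistant's reply is the newest message on the thread. A message's `content` is an array of parts that can mix text and image blocks, so the text extraction is worth factoring out. A sketch (the types mirror the Messages API shape; the commented lines show the real SDK calls):

```typescript
// Shape of a message content part as returned by the Messages API
interface MessagePart { type: string; text?: { value: string } }
interface ThreadMessage { role: string; content: MessagePart[] }

// Flatten the text parts of a message; image parts are skipped
export function extractText(message: ThreadMessage): string {
  return message.content
    .filter((p) => p.type === 'text' && p.text)
    .map((p) => p.text!.value)
    .join('\n');
}

// With the real SDK, the newest message comes from a descending list:
// const page = await openai.beta.threads.messages.list(threadId, { order: 'desc', limit: 1 });
// const reply = page.data[0] ? extractText(page.data[0]) : '';
```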

3. Streaming Runs

// src/lib/ai/runs.ts
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

interface RunHandlers {
  onText: (text: string) => void;
  onToolCall: (toolName: string, args: Record<string, unknown>) => Promise<string>;
  onComplete: () => void;
  onError: (error: Error) => void;
}

export async function streamAssistantResponse(
  threadId: string,
  assistantId: string,
  handlers: RunHandlers
): Promise<void> {
  try {
    let stream = openai.beta.threads.runs.stream(threadId, {
      assistant_id: assistantId,
    });

    // A run can pause for tool calls more than once, so loop until it settles
    for (;;) {
      stream.on('textDelta', (delta) => {
        if (delta.value) handlers.onText(delta.value);
      });

      const run = await stream.finalRun();
      if (run.status !== 'requires_action' || !run.required_action) break;

      // Execute every requested function tool, then resume the run as a new
      // stream so the continuation's deltas keep flowing to the handlers
      const toolOutputs = await Promise.all(
        run.required_action.submit_tool_outputs.tool_calls.map(async (call) => ({
          tool_call_id: call.id,
          output: await handlers.onToolCall(
            call.function.name,
            JSON.parse(call.function.arguments)
          ),
        }))
      );

      stream = openai.beta.threads.runs.submitToolOutputsStream(threadId, run.id, {
        tool_outputs: toolOutputs,
      });
    }

    handlers.onComplete();
  } catch (err) {
    handlers.onError(err instanceof Error ? err : new Error(String(err)));
  }
}
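
Some callers want the full reply rather than incremental deltas. The `RunHandlers` interface above can be wrapped so the text accumulates into a single string, resolved on completion (a small utility sketch, not part of the SDK):

```typescript
// Matches the RunHandlers interface used by streamAssistantResponse
interface RunHandlers {
  onText: (text: string) => void;
  onToolCall: (toolName: string, args: Record<string, unknown>) => Promise<string>;
  onComplete: () => void;
  onError: (error: Error) => void;
}

// Returns handlers that buffer deltas, plus a promise for the full reply
export function collectingHandlers(
  onToolCall: RunHandlers['onToolCall']
): { handlers: RunHandlers; result: Promise<string> } {
  let buffer = '';
  let resolve!: (text: string) => void;
  let reject!: (err: Error) => void;
  const result = new Promise<string>((res, rej) => {
    resolve = res;
    reject = rej;
  });

  return {
    handlers: {
      onText: (t) => { buffer += t; },
      onToolCall,
      onComplete: () => resolve(buffer),
      onError: (e) => reject(e),
    },
    result,
  };
}
```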

⚡ Your Competitors Are Already Using AI. Are You?

We build AI systems that actually work in production, not demos that die in a Colab notebook. From data pipeline to deployed model to real business outcomes.

  • AI agent systems that run autonomously, not just chatbots
  • Integrates with your existing tools (CRM, ERP, Slack, etc.)
  • Explainable outputs: know why the model decided what it did
  • Free AI opportunity audit for your business

4. Function Tool Execution

// src/lib/ai/tools.ts
import { db } from '../db';

// Map tool names to handler functions
const toolHandlers: Record<
  string,
  (args: Record<string, unknown>, context: { userId: string }) => Promise<string>
> = {
  get_account_info: async (args, ctx) => {
    const account = await db.account.findUnique({
      where: { userId: ctx.userId },
      include: {
        subscription: true,
        usage: args.include_usage
          ? { where: { period: 'current_month' } }
          : false,
      },
    });

    if (!account) return JSON.stringify({ error: 'Account not found' });

    return JSON.stringify({
      plan: account.subscription?.plan ?? 'free',
      status: account.subscription?.status ?? 'none',
      apiCallsThisMonth: account.usage?.[0]?.apiCalls ?? 0,
      apiLimit: account.subscription?.apiLimit ?? 1000,
      billingEmail: account.billingEmail,
    });
  },

  create_support_ticket: async (args, ctx) => {
    const ticket = await db.supportTicket.create({
      data: {
        userId: ctx.userId,
        title: args.title as string,
        description: args.description as string,
        priority: args.priority as string,
        source: 'ai_assistant',
      },
    });

    // Notify support team
    await fetch(process.env.SLACK_WEBHOOK_URL!, {
      method: 'POST',
      body: JSON.stringify({
        text: `🎫 New ${args.priority} ticket: ${args.title} (${ticket.id})`,
      }),
    });

    return JSON.stringify({
      ticketId: ticket.id,
      message: `Ticket created successfully. Our team will respond within ${
        args.priority === 'critical' ? '1 hour' : '24 hours'
      }.`,
    });
  },
};

export async function executeTool(
  toolName: string,
  args: Record<string, unknown>,
  userId: string
): Promise<string> {
  const handler = toolHandlers[toolName];

  if (!handler) {
    return JSON.stringify({ error: `Unknown tool: ${toolName}` });
  }

  try {
    return await handler(args, { userId });
  } catch (err) {
    console.error(`Tool execution failed: ${toolName}`, err);
    return JSON.stringify({ error: 'Tool execution failed. Please try again.' });
  }
}
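
The model can emit malformed or missing arguments, so it pays to validate before touching the database. A dependency-free sketch for the `create_support_ticket` arguments (in practice a schema library like zod is the usual choice):

```typescript
// Allowed priorities, matching the enum in the tool's JSON Schema
const PRIORITIES = ['low', 'medium', 'high', 'critical'] as const;
type Priority = (typeof PRIORITIES)[number];

interface TicketArgs { title: string; description: string; priority: Priority }

// Returns the typed arguments, or an error the model can read and correct
export function parseTicketArgs(
  args: Record<string, unknown>
): TicketArgs | { error: string } {
  if (typeof args.title !== 'string' || args.title.trim() === '') {
    return { error: 'title must be a non-empty string' };
  }
  if (typeof args.description !== 'string' || args.description.trim() === '') {
    return { error: 'description must be a non-empty string' };
  }
  if (!PRIORITIES.includes(args.priority as Priority)) {
    return { error: `priority must be one of: ${PRIORITIES.join(', ')}` };
  }
  return {
    title: args.title,
    description: args.description,
    priority: args.priority as Priority,
  };
}
```

Returning the validation error as the tool output (rather than throwing) lets the assistant see what was wrong and retry with corrected arguments.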

5. File Search: Vector Store Setup

// src/scripts/sync-docs-to-vector-store.ts
// Run this when documentation is updated

import OpenAI from 'openai';
import { readFileSync, readdirSync } from 'fs';
import path from 'path';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function syncDocumentation() {
  const DOCS_DIR = './docs';
  const files = readdirSync(DOCS_DIR).filter((f) => f.endsWith('.md') || f.endsWith('.txt'));

  console.log(`Uploading ${files.length} documentation files...`);

  // Upload files
  const uploadedFiles = await Promise.all(
    files.map(async (filename) => {
      const content = readFileSync(path.join(DOCS_DIR, filename));
      // The global File class requires Node 20+; use fs.createReadStream on older runtimes
      const file = new File([content], filename, { type: 'text/plain' });

      return openai.files.create({
        file,
        purpose: 'assistants',
      });
    })
  );

  // Create or update vector store
  let vectorStoreId = process.env.OPENAI_VECTOR_STORE_ID;

  if (!vectorStoreId) {
    const store = await openai.beta.vectorStores.create({
      name: 'Viprasol Documentation',
      expires_after: { anchor: 'last_active_at', days: 90 },
    });
    vectorStoreId = store.id;
    console.log(`Created vector store: ${vectorStoreId}`);
    console.log('Add to .env: OPENAI_VECTOR_STORE_ID=' + vectorStoreId);
  }

  // Add files to vector store (batch for efficiency)
  await openai.beta.vectorStores.fileBatches.createAndPoll(vectorStoreId, {
    file_ids: uploadedFiles.map((f) => f.id),
  });

  console.log('✅ Documentation synced to vector store');
}

syncDocumentation().catch(console.error);
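
As written, the script re-uploads every file on each sync, which duplicates storage. A sketch of the diff step that would avoid this, comparing local filenames against those already in the store (the store-side list would come from `openai.beta.vectorStores.files.list` plus a filename lookup per file; `diffDocs` itself is a hypothetical helper):

```typescript
// Compute which files to upload and which store files are stale
export function diffDocs(localFiles: string[], storeFiles: string[]) {
  const store = new Set(storeFiles);
  const local = new Set(localFiles);
  return {
    toUpload: localFiles.filter((f) => !store.has(f)),
    toDelete: storeFiles.filter((f) => !local.has(f)),
  };
}
```

Filename matching only catches added and removed docs; detecting edited content would also need a content hash stored in file metadata.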

6. Next.js API Route with SSE Streaming

// src/app/api/chat/route.ts
import { NextRequest } from 'next/server';
import { getOrCreateThread, addUserMessage } from '../../../lib/ai/threads';
import { streamAssistantResponse } from '../../../lib/ai/runs';
import { executeTool } from '../../../lib/ai/tools';
import { getServerSession } from 'next-auth';

export const runtime = 'nodejs';
export const dynamic = 'force-dynamic';

export async function POST(req: NextRequest) {
  const session = await getServerSession();
  if (!session?.user) {
    return new Response('Unauthorized', { status: 401 });
  }

  const { message, conversationId } = await req.json();

  const threadId = await getOrCreateThread(conversationId);
  await addUserMessage(threadId, message);

  // Server-Sent Events stream
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      const send = (data: string) => {
        controller.enqueue(encoder.encode(`data: ${JSON.stringify({ text: data })}\n\n`));
      };

      await streamAssistantResponse(threadId, process.env.OPENAI_ASSISTANT_ID!, {
        onText: send,
        onToolCall: async (toolName, args) => {
          send(`\n[Checking ${toolName}...]\n`);
          return executeTool(toolName, args, session.user.id);
        },
        onComplete: () => {
          controller.enqueue(encoder.encode('data: [DONE]\n\n'));
          controller.close();
        },
        onError: (err) => {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify({ error: err.message })}\n\n`)
          );
          controller.close();
        },
      });
    },
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      Connection: 'keep-alive',
    },
  });
}
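
On the client, this stream is consumed with fetch plus a ReadableStream reader. The fiddly part is parsing the `data: ...` framing the route emits, so that is factored out here (a simplified sketch: it assumes each chunk contains whole frames, whereas a robust client would buffer across chunk boundaries):

```typescript
// Parse one decoded SSE chunk into the events emitted by the /api/chat route
export function parseSSEChunk(
  chunk: string
): Array<{ text?: string; error?: string } | 'DONE'> {
  const events: Array<{ text?: string; error?: string } | 'DONE'> = [];
  for (const line of chunk.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') events.push('DONE');
    else events.push(JSON.parse(payload));
  }
  return events;
}

// Usage with fetch (sketch):
// const res = await fetch('/api/chat', {
//   method: 'POST',
//   body: JSON.stringify({ message, conversationId }),
// });
// const reader = res.body!.getReader();
// const decoder = new TextDecoder();
// for (;;) {
//   const { done, value } = await reader.read();
//   if (done) break;
//   for (const evt of parseSSEChunk(decoder.decode(value))) {
//     if (evt === 'DONE') break;
//     if (evt.text) appendToUI(evt.text);
//   }
// }
```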

Cost Reference

Usage                            Model         Monthly cost       Notes
100K messages, avg 500 tokens    gpt-4o-mini   ~$30               Good for support bots
100K messages, avg 500 tokens    gpt-4o        ~$250              Complex reasoning needed
File search (per query)          Any           +$0.10/1K queries  Vector store queries
Code interpreter (per session)   Any           $0.03/session      Python execution time
Storage (vector store)           n/a           $0.10/GB/day       Documentation size
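
A rough sanity check on the first table row. The per-1M-token prices here are assumptions to illustrate the arithmetic (check the current OpenAI pricing page before relying on them); the 50/50 input/output split is likewise a guess:

```typescript
// Back-of-envelope monthly token cost for a message volume
export function monthlyTokenCost(
  messages: number,
  tokensPerMessage: number,
  inputPricePer1M: number,
  outputPricePer1M: number,
  outputShare = 0.5 // assume half the tokens are output
): number {
  const totalTokens = messages * tokensPerMessage;
  const inputCost = totalTokens * (1 - outputShare) * (inputPricePer1M / 1e6);
  const outputCost = totalTokens * outputShare * (outputPricePer1M / 1e6);
  return inputCost + outputCost;
}

// With assumed gpt-4o-mini prices of $0.15 input / $0.60 output per 1M tokens:
// monthlyTokenCost(100_000, 500, 0.15, 0.60) → $18.75 in raw token cost,
// leaving headroom under the table's ~$30 for file search and retries
```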

Working With Viprasol

Building an AI-powered support assistant, internal knowledge bot, or document analysis tool? We implement production OpenAI Assistants with file search over your documentation, custom function tools connected to your APIs, streaming responses, and cost controls, so your AI actually knows your product and can take actions on behalf of users.

Talk to our team → | Explore our AI/ML services →

About the Author

Viprasol Tech Team

Custom Software Development Specialists

The Viprasol Tech team specialises in algorithmic trading software, AI agent systems, and SaaS development. With 100+ projects delivered across MT4/MT5 EAs, fintech platforms, and production AI systems, the team brings deep technical experience to every engagement. Based in India, serving clients globally.

MT4/MT5 EA Development · AI Agent Systems · SaaS Development · Algorithmic Trading
