
Building Smarter Apps: A Practical Guide to Using OpenAI Assistants


Today, many people are integrating AI into their products and workflows. Even those of us who are not developers need to understand the kind of technology and software that powers these products.

In this article, we will explain how we used OpenAI Assistants for our Private Knowledge Base App.

Assistants Overview

OpenAI offers a range of APIs, including basic completions, text-to-speech, embeddings, and more. Additionally, the OpenAI Assistants API enables developers to create and integrate customized AI agents that can perform a variety of tasks beyond the capabilities of the base model. Users can upload documents and custom data to each AI assistant to provide it with additional context and information. Furthermore, other functionalities, such as response format and temperature, can also be customized.

Essentially, this gives us the ability to create multiple AI assistants, each designed for different functions and equipped with their own knowledge databases and contexts. We can develop these new AI assistants using OpenAI’s online platform or through API requests, providing us with significant flexibility in integrating them into our applications.

Step-by-Step Guide

Here is an example of the interface for creating new assistants. At the time of writing, OpenAI Assistants are in beta, so more functionality may become available. We can use OpenAI’s two built-in tools, file_search and code_interpreter, or we can add our own tools using function calling: we describe a function to the Assistants API, and the assistant is then able to call it.
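For assistants created through the API rather than the platform UI, a rough sketch with the Node SDK might look like the following (the assistant name, model, and function schema here are placeholders for illustration, not the assistants we actually run):

// Sketch: create an assistant with the built-in file_search tool plus a hypothetical function tool
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const assistant = await openai.beta.assistants.create({
  name: "Knowledge Base Assistant",   // placeholder name
  model: "gpt-4o",                    // any supported model
  instructions: "Answer questions using the uploaded documents.",
  tools: [
    { type: "file_search" },
    {
      type: "function",
      function: {
        name: "lookup_employee",      // hypothetical function we describe to the API
        description: "Look up an employee record by name",
        parameters: {
          type: "object",
          properties: { name: { type: "string" } },
          required: ["name"],
        },
      },
    },
  ],
});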

Once an assistant is created, we can initiate threads, which are chat instances similar to those in ChatGPT. We can create multiple threads for a single assistant, and the assistant will have access to the data uploaded in each thread. This feature enables users to review past chat instances and easily create new ones. Our goal for this project is to utilize the Assistants API to develop assistants for various use cases that are accessible to all Jonajo employees. Additionally, we want to ensure that each person has access only to their own chat history.

First Time Usage
  1. Getting Our Own API Key

To access the OpenAI API services, we need to sign up for OpenAI and go to https://platform.openai.com/assistants. From there, go to API keys and create an API key. We also need to add credits to our OpenAI account to be able to make requests to the API.

  2. Installing the OpenAI library in Node

Simply run

npm install openai

which will install the OpenAI library in our project.

  3. Setting the OpenAI API Key

In the root directory of the project, create a .env.local file and add the following line, inserting your own API key:

OPENAI_API_KEY="your-api-key"

Creating Our First Assistant

In the OpenAI platform, go to assistants and create your first Assistant.

You will see your new assistant and its assistant ID. We can copy this ID directly, or we can fetch all of our assistants’ IDs through the API.

  1. Creating an OpenAI object

To make our API requests more modular, we create an OpenAI object in our /config directory.

Creating an OpenAI object

We will then be able to import this OpenAI instance anywhere within our application.
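As an illustrative sketch (the file name and import style are assumptions; any shared module works), the config file could look like this:

// config/openai.ts — a shared OpenAI client for the whole application
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY, // loaded from .env.local
});

export default openai;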

  2. Getting our list of Assistants

First, we import our OpenAI instance.

In our project, we are using a router to handle application requests for assistants: it fetches our list of assistants and returns them as a JSON response.

Getting a list of OpenAI Assistants
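A minimal sketch of such a route, assuming a Next.js-style route handler (the file path and framework details are assumptions for illustration):

// app/api/assistants/route.ts — hypothetical route that returns our assistants
import { NextResponse } from "next/server";
import openai from "@/config/openai";

export async function GET() {
  // List the assistants registered on our OpenAI account
  const assistants = await openai.beta.assistants.list({ order: "desc" });
  return NextResponse.json(assistants.data);
}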

Within our React application, once we have the assistants list, we either use the ID stored in local storage to select the assistant, or we fall back to the first assistant in the list.

React Application and OpenAI assistants
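A rough sketch of that selection logic (the local storage key and helper name below are hypothetical):

// Pick the previously used assistant from local storage, or fall back to the first one
function selectAssistantId(assistants: { id: string }[]): string | undefined {
  const storedId = localStorage.getItem("assistantId"); // hypothetical storage key
  const selected = assistants.find((a) => a.id === storedId) ?? assistants[0];
  if (selected) {
    localStorage.setItem("assistantId", selected.id);
  }
  return selected?.id;
}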
  3. Creating a new thread for the Assistant

Once we’ve gotten our assistant object, we can create a new thread in which to chat with the assistant.
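With the Node SDK this is a single call; a minimal sketch:

// Create a new, empty thread; its ID is used for all later messages and runs
const thread = await openai.beta.threads.create();
console.log(thread.id); // e.g. "thread_..."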

  4. Sending messages to the Assistant


We start by creating a message in the new thread containing the user message (the question we would like to ask the LLM).

Sending messages to the Open AI Assistant
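A sketch of adding that question to the thread (variable names here are illustrative):

// Add the user's question to the thread as a "user" message
await openai.beta.threads.messages.create(thread.id, {
  role: "user",
  content: userQuestion, // the question we want to ask the LLM
});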


Next, we use the createAndPoll function on runs, which submits the thread to the LLM for completion/inference and polls until the run finishes. The LLM will now process the message and generate a corresponding response.

Custom AI assistant's setup
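A sketch of that call, assuming we already have the thread ID and the assistant ID selected earlier:

// Run the assistant on the thread and poll until the run reaches a terminal state
const run = await openai.beta.threads.runs.createAndPoll(thread.id, {
  assistant_id: assistantId,
});

if (run.status !== "completed") {
  throw new Error(`Run ended with status: ${run.status}`);
}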

We use the run ID from the previous step to retrieve the message created by the LLM, along with any other messages sent through that thread. This allows us to display the information in our application.

OpenAI threads
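A sketch of reading the messages back (filtering by run ID as described above; dropping the run_id parameter returns every message in the thread):

// Retrieve the messages produced during this run
const messages = await openai.beta.threads.messages.list(thread.id, {
  run_id: run.id,
});

for (const message of messages.data) {
  for (const part of message.content) {
    if (part.type === "text") {
      console.log(`${message.role}: ${part.text.value}`);
    }
  }
}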
  5. Final Steps

We have successfully created our OpenAI Assistant and sent a message. The next step is to scale this process: repeatedly creating threads, sending messages, and providing functionality for switching between threads.

Final Thoughts

OpenAI Assistants make it easy to add smart, custom AI to your apps, whether it’s for private knowledge bases or better user experiences. This guide shows how simple it is to get started and create assistants tailored to your needs.

AI is evolving fast, and tools like this let you stay ahead by building smarter, more intuitive systems that work for you. Chat with us; we would love to hear your thoughts!

About the author

Daniela Gonzalez
