Employers are expecting developers to deliver AI solutions faster than ever. According to Gartner, 77% of service and support leaders feel pressure to deliver on AI use cases soon.
That's led developers to look for ways to speed up the creation of AI-enabled and AI-native solutions. GenAI coding tools certainly lead the charge, with almost 50% of developers saying they use such AI tools daily.
AI agent builders and development kits also promise to accelerate the development process for AI agents. ChatKit from OpenAI is one such solution designed to help developers quickly ship chat-based AI experiences. But is it the right choice for your project?
In this article, we'll look at ChatKit, what it can do, and when to use it (and, more importantly, when not to).
What is ChatKit?
An AI agent is an AI-driven system that can autonomously perform tasks. Agents break down complex requests into smaller, discrete tasks, using a combination of large language models (LLMs) and tools (internal and external APIs, etc.) to make decisions and automate common business processes.
Some AI agents can work without a user interface. Think, for example, of an agent that detects anomalies in an application's performance and performs automated remediation (e.g., by restarting virtual servers).
By contrast, other agents—e.g., an internal knowledge base assistant, a financial planning assistant, etc.—will require a UI for users to ask questions. Such UIs need to communicate with the agent's backend, manage chat state, and display the agent's train of thought, among other features.
ChatKit is OpenAI's framework for embedding agentic chat experiences in your user-facing applications. The framework includes key features, such as:
- Response streaming
- Tool integration
- Rich interactive widgets
- Attachment handling
- Thread management
- Message history
These capabilities come pre-built, saving developers from implementing low-level chat state management themselves.
ChatKit also supports more advanced features that would take extra time to code by hand. Response streaming lets users see the AI's output as it generates, creating a more engaging chat experience. Interactive widgets enable actions directly within the chat interface, such as buttons, forms, or custom visualizations.
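Conceptually, response streaming just means rendering chunks as they arrive rather than waiting for the full reply. Here's a minimal Python sketch of the idea (illustrative only—not ChatKit's actual API, which handles this for you):

```python
def stream_tokens(text, chunk_size=4):
    """Yield a response in small chunks, simulating an LLM streaming API."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

def render_streamed(token_stream):
    """Accumulate chunks the way a chat UI would, updating the display per chunk."""
    rendered = ""
    for chunk in token_stream:
        rendered += chunk
        # A real UI would repaint the message bubble here on each chunk.
    return rendered

reply = render_streamed(stream_tokens("Hello! How can I help you today?"))
```

The user perceives the response as it's generated, even though the total generation time is unchanged—which is exactly why streaming feels more engaging than a spinner.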
ChatKit works as a framework-agnostic web component compatible with React, Next.js, or vanilla JavaScript. This compatibility means you can integrate it into existing applications regardless of your frontend stack. The component handles responsive design automatically, adapting to different screen sizes without additional configuration.
Under the hood: How ChatKit works
ChatKit requires a three-layer infrastructure setup. First, you need an agent workflow, which can be built using OpenAI's Agent Builder or your own custom backend. Second, you need a backend authentication endpoint that generates short-lived client tokens. Third, you implement the ChatKit component on your frontend.
The authentication layer uses session-based tokens to protect your OpenAI API keys. Your server communicates directly with the OpenAI API while the client only receives temporary credentials. This server-to-server approach keeps your API key secure. Additionally, the platform requires you to configure a domain allowlist for security purposes.
Ways to use ChatKit
You can implement ChatKit in two distinct ways, each with different tradeoffs.
The first approach uses Agent Builder to create your backend workflow. The visual builder lets you develop agents easily using a drag-and-drop interface. This option leaves scaling, infrastructure, and hosting to the platform. You focus on building the agent logic while the service handles the operational concerns.
The second approach involves building your own ChatKit-compatible server. This requires implementing the custom ChatKit server protocol using the Python SDK. You gain more control over your backend architecture, but take on the responsibility of building and maintaining the entire server infrastructure.
When to use ChatKit
You're already committed to the OpenAI ecosystem. If your organization has standardized on GPT models and OpenAI's infrastructure, ChatKit integrates seamlessly with your existing setup. You won't face friction switching between different vendor systems. In this case, the combination of AgentKit and ChatKit is a strong one-two punch that enables you to deploy both the frontend and backend of new agentic solutions quickly.
You need a chat interface delivered fast with minimal frontend development. ChatKit's pre-built UI components eliminate weeks of development work. The production-ready chat UI handles streaming responses, file uploads, and interactive widgets out of the box.
Your use case fits the agent workflow patterns. ChatKit excels at customer support chatbots, knowledge base assistants, and internal AI tools that leverage agentic capabilities. If your application maps cleanly to these patterns, implementation becomes straightforward.
You have resources to build and maintain backend infrastructure. Despite being marketed as a frontend solution, ChatKit requires full-stack development. Your team needs backend expertise to implement the authentication layer and agent server.
Your team values production-ready UI over customization flexibility. ChatKit provides a polished, tested interface with sensible defaults. If you prefer shipping quickly over fine-tuning every visual detail, this tradeoff favors ChatKit.
When NOT to use ChatKit
The biggest reason not to use ChatKit is if you're either not using a GPT model or need the flexibility that comes with a vendor-agnostic approach to LLM integration.
By default, OpenAI's tools are wired to work with the GPT LLMs. Theoretically, they can be decoupled. For example, in the case of ChatKit, you could build a ChatKit-compatible backend that responds to client requests and translates calls to other LLMs.
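As a rough illustration, such a translation layer boils down to normalizing the client's chat request and dispatching it to a provider-specific adapter. The names below (`ChatRequest`, `call_anthropic`, etc.) are hypothetical and far simpler than the real ChatKit server protocol:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChatRequest:
    """Simplified stand-in for a chat message arriving from the client."""
    thread_id: str
    text: str

# Each adapter maps the normalized request onto a different provider's API.
# Real implementations would call the provider's SDK here.
def call_openai(req: ChatRequest) -> str:
    return f"[gpt] {req.text}"

def call_anthropic(req: ChatRequest) -> str:
    return f"[claude] {req.text}"

BACKENDS: dict[str, Callable[[ChatRequest], str]] = {
    "openai": call_openai,
    "anthropic": call_anthropic,
}

def handle_chat(req: ChatRequest, provider: str) -> str:
    """Dispatch the request to whichever provider the deployment is configured for."""
    return BACKENDS[provider](req)
```

The dispatch itself is trivial; the real cost is implementing the full server protocol around it—threads, streaming, attachments, widgets—for a non-GPT backend.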
The issue is that this probably isn't worth your time. A key benefit of ChatKit is that it saves you from building your own frontend toolkit for chats. Building your own scalable backend requires even more of an engineering commitment.
Additionally, as of this writing, there are few, if any, good repositories containing sample code that demonstrates integrating non-GPT LLMs with ChatKit. So you'd be starting from scratch without a solid repo or tutorial to guide you.
None of this is to knock GPT, which remains an industry-leading LLM. However, there are various reasons GPT may not be the best choice for a specific project. It may not produce the best results relative to other models, for example, or it may prove too expensive for your use case.
Alternatives to ChatKit
If ChatKit's limitations concern you, several alternatives exist.
Starting from an existing open-source project provides a middle ground. Projects like chatbot-ui offer customizable interfaces. These solutions give you more flexibility than ChatKit while avoiding a complete build-from-scratch approach.
Building your own chat interface from scratch, however, offers maximum flexibility. You control every aspect of the user experience and can optimize for your exact requirements.
The downside is that this approach requires building all the plumbing and scalability infrastructure yourself. You're responsible for handling streaming, state management, and production readiness.
Building your backend without vendor lock-in
ChatKit makes building a chat frontend straightforward, but its tight coupling to OpenAI can create overhead if you're not using GPT LLMs. Teams that want to avoid this constraint need a different approach.
The better option is using a vendor-neutral agent builder for your backend while maintaining control over your frontend. This strategy lets you build sophisticated agent workflows without committing to a single LLM provider. You can switch models as needed or use different models for different tasks.
Langflow is a low-code visual builder for AI agents that solves the vendor lock-in problem. Unlike OpenAI's Agent Builder, Langflow supports all major LLMs out of the box without any additional coding, including OpenAI, Anthropic, Google (Gemini), Azure OpenAI, and various open-source models. You're not tied to any single provider's ecosystem and can switch out LLMs during development for best results.
Langflow provides more components and tools than OpenAI's Agent Builder. You can build retrieval-augmented generation (RAG) systems, integrate with databases, connect to APIs, and orchestrate complex multi-agent workflows. The visual builder makes it easy to prototype rapidly while maintaining the flexibility to drop into code when needed. You can start with a visual workflow, then switch to Python for fine-tuning or integration with existing codebases.
For complex research tasks, Langflow supports building multi-agent systems where specialized agents collaborate on different aspects of a problem. This architecture enables more sophisticated AI applications than single-agent systems can achieve. Each agent can focus on a specific responsibility, improving both accuracy and maintainability.
The platform supports both cloud-hosted and self-hosted deployments. This flexibility is critical for regulated industries like healthcare and finance that can't send data to external services. You maintain control over where your data lives and how it's processed.
Getting started with Langflow
To get started with Langflow, download Langflow Desktop for a self-contained environment on your machine.
Once you're set up, explore Langflow's capabilities through practical examples and tutorials. The chatbot walkthrough demonstrates building a knowledge base assistant using RAG. You can also query your PDFs, run any workflow via a chatbot API trigger, or expose your workflow as a tool to other agents by creating an MCP server.
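As a sketch, triggering a flow over Langflow's REST API amounts to a POST to the run endpoint. The path and payload shape below match recent Langflow versions, but verify them against your installation's API docs before relying on them:

```python
import json

def build_langflow_run_request(base_url: str, flow_id: str, message: str, api_key: str):
    """Assemble a request for Langflow's run endpoint.
    Endpoint path and payload fields are assumptions based on recent
    Langflow versions; check your installation's API docs."""
    url = f"{base_url}/api/v1/run/{flow_id}"
    headers = {"Content-Type": "application/json", "x-api-key": api_key}
    payload = {"input_value": message, "input_type": "chat", "output_type": "chat"}
    return url, headers, json.dumps(payload)

# e.g., pass these straight to requests.post(url, headers=headers, data=body)
url, headers, body = build_langflow_run_request(
    "http://localhost:7860", "my-flow-id", "What is RAG?", "sk-example"
)
```

Because the trigger is plain HTTP, any frontend—including a chat UI you build or adopt from an open-source project—can drive a Langflow workflow.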
For developers who want to extend their workflows, Langflow's toolkit integrates with popular automation platforms like Zapier, enabling you to connect your AI agents to hundreds of apps. You can also find real-world examples and pre-built templates on GitHub to accelerate your development.
Whether you're building a simple AI assistant or a complex multi-agent system, Langflow's low-code visual interface combined with the ability to embed Python code gives you the flexibility to tackle real-world use cases. The platform handles conversation history automatically, making it easy to build AI-powered chatbots and assistants that remember context across interactions.
Conclusion
ChatKit is a great choice if you're committed to the OpenAI ecosystem. Most projects, however, will want the flexibility of switching out LLMs with minimal overhead.
For these projects, an LLM vendor-agnostic solution like Langflow is a better fit. With out-of-the-box support for the most popular LLM providers, you can leverage Langflow to build AI agents and even multi-agent workflows quickly, while retaining the ability to switch out LLMs on the fly for greater accuracy and a lower price point.