# How You Can Create an AI Chatbot in 200 Lines of Code

August 14, 2025
AI Chatbot with Vercel AI Elements

Building an AI chatbot has never been easier. With Vercel AI Elements and AI SDK 5, you can create a sophisticated chatbot with reasoning capabilities, web search integration, and multiple model support in just around 200 lines of code.

In this comprehensive guide, we'll walk through building a production-ready chatbot that includes:

- 🧠 **Reasoning display** - Show the AI's thought process
- 🔍 **Web search integration** - Real-time information with citations
- 🎛️ **Model selection** - Switch between OpenAI GPT-4o and Google Gemini
- 📱 **Responsive design** - Works perfectly on all devices
- ⚡ **Streaming responses** - Real-time message streaming

## What Are Vercel AI Elements?

Vercel AI Elements are pre-built, customizable React components specifically designed for AI applications. They handle the complex UI patterns common in AI interfaces - like streaming text, reasoning displays, source citations, and conversation management.

AI SDK 5 provides the backend infrastructure with standardized APIs for working with different AI models, making it easy to switch between providers like OpenAI, Google, Anthropic, and more.

## Project Setup

Let's start by setting up a new Next.js project with all the necessary dependencies:


```bash
# Create a new Next.js project
npx create-next-app@latest ai-chatbot && cd ai-chatbot

# Install AI Elements (this also sets up shadcn/ui)
npx ai-elements@latest

# Install AI SDK dependencies
npm install ai @ai-sdk/react @ai-sdk/openai @ai-sdk/google zod
```

## Environment Configuration

You have two options for API access:

### Using Vercel AI Gateway (Recommended)

Create a `.env.local` file and add your Vercel AI Gateway token:


```bash
# Get your token from https://vercel.com/dashboard/ai-gateway
VERCEL_AI_GATEWAY_TOKEN=your_gateway_token_here
```

### Direct API Keys

If you prefer to use your own API keys directly:


```bash
# For OpenAI
OPENAI_API_KEY=your_openai_key_here

# For Google Gemini
GOOGLE_API_KEY=your_google_key_here
```
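Whichever option you choose, it's worth failing fast when a key is missing rather than sending unauthenticated requests. Here's a minimal sketch of such a guard; the `requireEnv` helper is our own illustration, not part of the AI SDK:

```typescript
// Hypothetical helper: read a required environment variable or throw early.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage sketch in a route handler, assuming the keys defined above:
// const openaiKey = requireEnv("OPENAI_API_KEY");
```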

## Building the Frontend (Client Component)

Create `app/page.tsx` with our main chatbot interface:

```tsx
"use client";

import {
  Conversation,
  ConversationContent,
  ConversationScrollButton,
} from "@/components/ai-elements/conversation";
import { Message, MessageContent } from "@/components/ai-elements/message";
import {
  PromptInput,
  PromptInputButton,
  PromptInputModelSelect,
  PromptInputModelSelectContent,
  PromptInputModelSelectItem,
  PromptInputModelSelectTrigger,
  PromptInputModelSelectValue,
  PromptInputSubmit,
  PromptInputTextarea,
  PromptInputToolbar,
  PromptInputTools,
} from "@/components/ai-elements/prompt-input";
import { useState } from "react";
import { useChat } from "@ai-sdk/react";
import { Response } from "@/components/ai-elements/response";
import { GlobeIcon } from "lucide-react";
import {
  Source,
  Sources,
  SourcesContent,
  SourcesTrigger,
} from "@/components/ai-elements/source";
import {
  Reasoning,
  ReasoningContent,
  ReasoningTrigger,
} from "@/components/ai-elements/reasoning";
import { Loader } from "@/components/ai-elements/loader";

const models = [
  {
    name: "GPT 4o",
    value: "openai/gpt-4o",
  },
  {
    name: "Gemini 2.0 Flash",
    value: "google/gemini-2.0-flash",
  },
];

export default function ChatBot() {
  const [input, setInput] = useState("");
  const [model, setModel] = useState<string>(models[0].value);
  const [webSearch, setWebSearch] = useState(false);
  const { messages, sendMessage, status } = useChat();

  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (input.trim()) {
      sendMessage(
        { text: input },
        {
          body: {
            model: model,
            webSearch: webSearch,
          },
        }
      );
      setInput("");
    }
  };

  return (
    <div className="max-w-4xl mx-auto p-6 relative size-full h-screen">
      <div className="flex flex-col h-full">
        <Conversation className="h-full">
          <ConversationContent>
            {messages.map((message) => (
              <div key={message.id}>
                {message.role === "assistant" &&
                  message.parts.some((part) => part.type === "source-url") && (
                    <Sources>
                      {/* Render the trigger once, with the total citation count */}
                      <SourcesTrigger
                        count={
                          message.parts.filter(
                            (part) => part.type === "source-url"
                          ).length
                        }
                      />
                      <SourcesContent>
                        {message.parts
                          .filter((part) => part.type === "source-url")
                          .map((part, i) => (
                            <Source
                              key={`${message.id}-source-${i}`}
                              href={part.url}
                              title={part.url}
                            />
                          ))}
                      </SourcesContent>
                    </Sources>
                  )}
                <Message from={message.role}>
                  <MessageContent>
                    {message.parts.map((part, i) => {
                      switch (part.type) {
                        case "text":
                          return (
                            <Response key={`${message.id}-${i}`}>
                              {part.text}
                            </Response>
                          );
                        case "reasoning":
                          return (
                            <Reasoning
                              key={`${message.id}-${i}`}
                              className="w-full"
                              isStreaming={status === "streaming"}
                            >
                              <ReasoningTrigger />
                              <ReasoningContent>{part.text}</ReasoningContent>
                            </Reasoning>
                          );
                        default:
                          return null;
                      }
                    })}
                  </MessageContent>
                </Message>
              </div>
            ))}
            {status === "submitted" && <Loader />}
          </ConversationContent>
          <ConversationScrollButton />
        </Conversation>

        <PromptInput onSubmit={handleSubmit} className="mt-4">
          <PromptInputTextarea
            onChange={(e) => setInput(e.target.value)}
            value={input}
            placeholder="What would you like to know?"
          />
          <PromptInputToolbar>
            <PromptInputTools>
              <PromptInputButton
                variant={webSearch ? "default" : "ghost"}
                onClick={() => setWebSearch(!webSearch)}
              >
                <GlobeIcon size={16} />
                <span>Search</span>
              </PromptInputButton>
              <PromptInputModelSelect
                onValueChange={(value) => setModel(value)}
                value={model}
              >
                <PromptInputModelSelectTrigger>
                  <PromptInputModelSelectValue />
                </PromptInputModelSelectTrigger>
                <PromptInputModelSelectContent>
                  {models.map((model) => (
                    <PromptInputModelSelectItem
                      key={model.value}
                      value={model.value}
                    >
                      {model.name}
                    </PromptInputModelSelectItem>
                  ))}
                </PromptInputModelSelectContent>
              </PromptInputModelSelect>
            </PromptInputTools>
            <PromptInputSubmit disabled={!input} status={status} />
          </PromptInputToolbar>
        </PromptInput>
      </div>
    </div>
  );
}
```

## Backend API Route

Now create the server-side logic. You have two options depending on your setup:

### Using Vercel AI Gateway

Create `app/api/chat/route.ts`:

```ts
import { streamText, UIMessage, convertToModelMessages } from "ai";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const {
    messages,
    model,
    webSearch,
  }: { messages: UIMessage[]; model: string; webSearch: boolean } =
    await req.json();

  const result = streamText({
    model: webSearch ? "perplexity/sonar" : model,
    messages: convertToModelMessages(messages),
    system:
      "You are a helpful assistant that can answer questions and help with tasks",
  });

  // Send sources and reasoning back to the client
  return result.toUIMessageStreamResponse({
    sendSources: true,
    sendReasoning: true,
  });
}
```
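The only branching in this route is the model choice: when `webSearch` is on, the request is redirected to Perplexity Sonar, which returns grounded answers with source URLs. Extracted into a tiny pure function for clarity (the function name is ours, not the SDK's):

```typescript
// Pure sketch of the route's model-selection rule.
function resolveModel(model: string, webSearch: boolean): string {
  return webSearch ? "perplexity/sonar" : model;
}

console.log(resolveModel("openai/gpt-4o", false)); // "openai/gpt-4o"
console.log(resolveModel("openai/gpt-4o", true)); // "perplexity/sonar"
```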

### Using Direct API Keys

Create `app/api/chat/route.ts`:

```ts
import {
  streamText,
  type UIMessage,
  convertToModelMessages,
  type LanguageModel,
} from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { createGoogleGenerativeAI } from "@ai-sdk/google";

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY || "",
});

const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_API_KEY || "",
});

export async function POST(req: Request) {
  const { messages, model }: { messages: UIMessage[]; model: string } =
    await req.json();

  let selectedModel: LanguageModel;

  switch (model) {
    case "openai/gpt-4o":
      selectedModel = openai("gpt-4o");
      break;
    case "google/gemini-2.0-flash":
      selectedModel = google("gemini-2.0-flash");
      break;
    default:
      console.warn(`Unhandled model: ${model}. Falling back to Gemini 2.0.`);
      selectedModel = google("gemini-2.0-flash");
      break;
  }

  const result = streamText({
    model: selectedModel,
    messages: convertToModelMessages(messages),
    system:
      "You are a helpful assistant that can answer questions and help with tasks",
  });

  return result.toUIMessageStreamResponse({
    sendSources: true,
    sendReasoning: true,
  });
}
```

## Key Features Explained

### Conversation Management

The `Conversation` component handles the chat interface, including automatic scrolling and message organization.

### Message Parts System

AI SDK 5 uses a "parts" system where each message can contain different types of content:

- `text` - Regular response text
- `reasoning` - AI's thought process
- `source-url` - Web search citations
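To see how rendering code consumes these parts, here is a small self-contained sketch that filters a finished assistant message the same way the page component does (the part shapes are simplified from the SDK's actual types):

```typescript
// Simplified part shapes; the real UIMessage part types carry more fields.
type MessagePart =
  | { type: "text"; text: string }
  | { type: "reasoning"; text: string }
  | { type: "source-url"; url: string };

const parts: MessagePart[] = [
  { type: "reasoning", text: "The user wants current weather, so search the web." },
  { type: "source-url", url: "https://example.com/weather" },
  { type: "text", text: "It is sunny today." },
];

// Citations are gathered for the Sources panel...
const sourceUrls = parts.flatMap((p) => (p.type === "source-url" ? [p.url] : []));
// ...while text parts become the visible response.
const answer = parts.flatMap((p) => (p.type === "text" ? [p.text] : [])).join("");

console.log(sourceUrls.length); // 1
console.log(answer); // "It is sunny today."
```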

### Streaming Responses

The `useChat` hook automatically handles streaming, showing responses in real time as they're generated.
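Conceptually, the stream arrives as a sequence of text deltas that get appended to the message being rendered. Here's a toy fold over fake deltas (the event shape is illustrative, not the SDK's actual wire format):

```typescript
// Illustrative delta events; the real stream protocol is richer than this.
type StreamEvent = { type: "text-delta"; delta: string };

const events: StreamEvent[] = [
  { type: "text-delta", delta: "Building " },
  { type: "text-delta", delta: "an AI chatbot " },
  { type: "text-delta", delta: "is easy." },
];

// The UI re-renders after each delta; here we just accumulate into a string.
let rendered = "";
for (const event of events) {
  if (event.type === "text-delta") rendered += event.delta;
}

console.log(rendered); // "Building an AI chatbot is easy."
```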

### Model Selection

Users can switch between different AI models on the fly, with the backend handling the routing automatically.
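The routing itself is just a lookup from the UI's model value to a provider call, mirroring the `switch` in the direct-keys route above. As a pure, testable sketch (returning plain strings instead of `LanguageModel` objects):

```typescript
// String-only mirror of the backend's model switch, including its fallback.
function routeModel(value: string): { provider: string; modelId: string } {
  switch (value) {
    case "openai/gpt-4o":
      return { provider: "openai", modelId: "gpt-4o" };
    case "google/gemini-2.0-flash":
      return { provider: "google", modelId: "gemini-2.0-flash" };
    default:
      // Unknown values fall back to Gemini, like the route's default branch.
      return { provider: "google", modelId: "gemini-2.0-flash" };
  }
}

console.log(routeModel("openai/gpt-4o").provider); // "openai"
console.log(routeModel("mystery/model").modelId); // "gemini-2.0-flash"
```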

### Web Search Integration

When enabled, the chatbot can search the web and provide citations for its responses.

## Running Your Chatbot

Start your development server:

```bash
npm run dev
```

Visit `http://localhost:3000` and start chatting! Your AI chatbot now supports:

- ✅ Real-time streaming responses
- ✅ Multiple AI model selection
- ✅ Web search with citations
- ✅ Reasoning display
- ✅ Responsive design
- ✅ Professional UI components

## Extending Your Chatbot

Want to add more features? Here are some ideas:

### Add More Models

```tsx
const models = [
  { name: "GPT 4o", value: "openai/gpt-4o" },
  { name: "Gemini 2.0 Flash", value: "google/gemini-2.0-flash" },
  { name: "Claude 3.5 Sonnet", value: "anthropic/claude-3-5-sonnet" },
  { name: "Llama 3.1", value: "meta/llama-3.1-70b" },
];
```

If you're using direct API keys rather than the gateway, remember to install and wire up the matching providers in your route handler.

### Add File Upload Support

```tsx
import { PromptInputAttachment } from "@/components/ai-elements/prompt-input";

// Add to your PromptInputTools
<PromptInputAttachment />
```

### Add Tool Calling

```tsx
import { Tool } from "@/components/ai-elements/tool";

// Handle tool calls in your message-rendering switch
case "tool-call":
  return <Tool key={`${message.id}-${i}`} call={part} />;
```

## Conclusion

In just around 200 lines of code, we've built a sophisticated AI chatbot that rivals commercial solutions. **Vercel AI Elements** and **AI SDK 5** handle the complex parts, letting you focus on building great user experiences.

The combination of pre-built UI components, standardized AI APIs, and streaming capabilities makes it incredibly easy to create production-ready AI applications.

**What's next?** Try adding authentication, conversation history, or custom tools to make your chatbot even more powerful!

---

_Ready to deploy? Push your code to GitHub and deploy instantly with Vercel. Your AI chatbot will be live in minutes!_