vercel/ai

Build AI-powered applications with React, Svelte, Vue, and Solid
Top Related Projects

  • openai/openai-cookbook: Examples and guides for using the OpenAI API
  • huggingface/transformers: 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX
  • microsoft/DeepSpeed: DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective
  • pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
  • tensorflow/tensorflow: An Open Source Machine Learning Framework for Everyone
  • google-research/bert: TensorFlow code and pre-trained models for BERT

Quick Overview

vercel/ai is an open-source library that provides hooks and utilities for building AI-powered user interfaces in frameworks such as React, Svelte, Vue, and Solid. It offers a set of tools to integrate AI capabilities into web applications, focusing on streaming responses and enhancing the user experience with AI-driven features.

Pros

  • Easy integration with popular frontend frameworks (React, Svelte, Vue, and Solid)
  • Built-in support for streaming responses, improving perceived performance
  • Comprehensive set of components and hooks for common AI-related tasks
  • Well-documented and actively maintained by Vercel

Cons

  • Limited to JavaScript/TypeScript frontend frameworks
  • Requires familiarity with AI concepts and APIs for effective use
  • May have a learning curve for developers new to AI integration
  • Dependency on external AI services for full functionality

Code Examples

Example 1: Using the useChat hook in React

import { useChat } from 'ai/react';

function ChatComponent() {
  const { messages, input, handleInputChange, handleSubmit } = useChat();

  return (
    <div>
      {messages.map(m => (
        <div key={m.id}>{m.content}</div>
      ))}
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Say something..."
        />
        <button type="submit">Send</button>
      </form>
    </div>
  );
}
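The `messages` array returned by `useChat` follows the familiar chat-completion shape. A quick sketch of that data structure (the `Message` type below is illustrative, not the SDK's own export, which carries additional fields):

```typescript
// Illustrative message shape; the SDK's Message type has more fields.
type Role = 'system' | 'user' | 'assistant';

interface Message {
  id: string;
  role: Role;
  content: string;
}

// What `messages` might look like after one round trip
const messages: Message[] = [
  { id: '1', role: 'user', content: 'Why is the sky blue?' },
  { id: '2', role: 'assistant', content: 'Because of Rayleigh scattering.' },
];

// The render loop in the component above maps over exactly this array
const rendered = messages.map(m => `${m.role}: ${m.content}`);
console.log(rendered.join('\n'));
```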

Example 2: Using the useCompletion hook in React

import { useCompletion } from 'ai/react';

function CompletionComponent() {
  const { completion, input, handleInputChange, handleSubmit } = useCompletion();

  return (
    <div>
      <form onSubmit={handleSubmit}>
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Enter a prompt..."
        />
        <button type="submit">Generate</button>
      </form>
      <div>{completion}</div>
    </div>
  );
}

Example 3: Using the AIStream utility for custom streaming

import { AIStream } from 'ai';

async function streamCompletion() {
  const response = await fetch('/api/completion', { method: 'POST' });
  // AIStream parses the provider's event stream into a ReadableStream of bytes
  const stream = AIStream(response);
  const reader = stream.getReader();
  const decoder = new TextDecoder();

  // Read and decode chunks as they arrive
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value, { stream: true }));
  }
}
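Under the hood, utilities like AIStream parse the provider's server-sent-event wire format. A simplified sketch of that parsing step (illustrative only, not the library's actual implementation):

```typescript
// Extract `data:` payloads from an SSE chunk, dropping the [DONE] sentinel.
// Simplified: real parsers also handle events split across chunk boundaries.
function parseSSEChunk(chunk: string): string[] {
  return chunk
    .split('\n')
    .filter(line => line.startsWith('data: '))
    .map(line => line.slice('data: '.length))
    .filter(payload => payload !== '[DONE]');
}

const tokens = parseSSEChunk('data: Hello\n\ndata: world\n\ndata: [DONE]\n');
console.log(tokens); // [ 'Hello', 'world' ]
```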

Getting Started

To get started with vercel/ai, follow these steps:

  1. Install the package:

    npm install ai
    
  2. Import and use the desired components or hooks in your React or Svelte application:

    import { useChat } from 'ai/react';
    
    function ChatApp() {
      const { messages, input, handleInputChange, handleSubmit } = useChat();
    
      // Use the hook in your component
    }
    
  3. Configure your AI provider (e.g., OpenAI) in your backend API route:

    import { Configuration, OpenAIApi } from 'openai-edge';
    import { OpenAIStream, StreamingTextResponse } from 'ai';
    
    export const runtime = 'edge';
    
    const openai = new OpenAIApi(new Configuration({
      apiKey: process.env.OPENAI_API_KEY,
    }));
    
    export async function POST(req) {
      const { messages } = await req.json();
      const response = await openai.createChatCompletion({
        model: 'gpt-3.5-turbo',
        stream: true,
        messages,
      });
      const stream = OpenAIStream(response);
      return new StreamingTextResponse(stream);
    }
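The StreamingTextResponse returned by the route above is a thin wrapper around a standard web ReadableStream. A self-contained sketch of producing and consuming such a stream, with no AI provider involved:

```typescript
// Produce a byte stream of text tokens, the way a streaming response does
const encoder = new TextEncoder();
const stream = new ReadableStream<Uint8Array>({
  start(controller) {
    for (const token of ['Why ', 'is ', 'the ', 'sky ', 'blue?']) {
      controller.enqueue(encoder.encode(token));
    }
    controller.close();
  },
});

// Consume it incrementally, as a client rendering tokens would
async function readAll(s: ReadableStream<Uint8Array>): Promise<string> {
  const reader = s.getReader();
  const decoder = new TextDecoder();
  let text = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text;
}

readAll(stream).then(text => console.log(text)); // "Why is the sky blue?"
```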
    

Competitor Comparisons

Examples and guides for using the OpenAI API

Pros of openai-cookbook

  • Comprehensive collection of OpenAI API usage examples and best practices
  • Covers a wide range of topics, including prompt engineering and model fine-tuning
  • Maintained directly by OpenAI, ensuring up-to-date and accurate information

Cons of openai-cookbook

  • Focused solely on OpenAI's products, limiting its applicability to other AI platforms
  • Less emphasis on frontend integration and user interface components
  • Primarily educational, with fewer ready-to-use code snippets for production

Code Comparison

openai-cookbook:

import openai

response = openai.Completion.create(
  engine="text-davinci-002",
  prompt="Translate the following English text to French: '{}'",
  max_tokens=60
)

ai:

import { OpenAIStream, StreamingTextResponse } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'

export const runtime = 'edge'

const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }))

export async function POST(req: Request) {
  const { messages } = await req.json()
  // OpenAIStream wraps the provider's streaming response, not the messages
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  })
  return new StreamingTextResponse(OpenAIStream(response))
}

The openai-cookbook example demonstrates a basic API call, while the ai example showcases streaming capabilities and integration with web frameworks.

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Pros of transformers

  • Extensive library of pre-trained models for various NLP tasks
  • Comprehensive documentation and community support
  • Flexibility to work with multiple deep learning frameworks (PyTorch, TensorFlow)

Cons of transformers

  • Steeper learning curve for beginners
  • Requires more computational resources for training and inference
  • Less focus on deployment and production-ready solutions

Code comparison

transformers:

from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I love this product!")[0]
print(f"Label: {result['label']}, Score: {result['score']:.4f}")

ai:

import { OpenAIStream, StreamingTextResponse } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'

export const runtime = 'edge'

const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }))

export async function POST(req: Request) {
  const { messages } = await req.json()
  // OpenAIStream wraps the provider's streaming response, not the messages
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  })
  return new StreamingTextResponse(OpenAIStream(response))
}

The transformers library provides a high-level API for various NLP tasks, while ai focuses on integrating AI models into web applications, particularly with streaming responses. transformers offers more flexibility for custom model development, whereas ai simplifies the process of adding AI capabilities to web projects, especially those using Vercel's infrastructure.


DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Highly optimized for large-scale distributed training of deep learning models
  • Supports a wide range of hardware configurations and model architectures
  • Offers advanced features like ZeRO optimizer and pipeline parallelism

Cons of DeepSpeed

  • Steeper learning curve and more complex setup compared to AI
  • Primarily focused on training, with less emphasis on inference and deployment
  • Requires more in-depth knowledge of distributed systems and deep learning

Code Comparison

DeepSpeed:

import deepspeed
model_engine, optimizer, _, _ = deepspeed.initialize(args=args,
                                                     model=model,
                                                     model_parameters=params)

ai:

import { OpenAIStream, StreamingTextResponse } from 'ai'

// `providerResponse` is a placeholder for a streaming completion Response
// from a provider client such as openai-edge
const stream = OpenAIStream(providerResponse)
return new StreamingTextResponse(stream)

Summary

DeepSpeed is a powerful library for large-scale deep learning training, offering advanced optimization techniques and distributed computing capabilities. AI, on the other hand, provides a more user-friendly approach to working with AI models, focusing on inference and integration with web applications. While DeepSpeed excels in performance and scalability for training, AI offers simplicity and ease of use for developers looking to incorporate AI functionality into their projects.


Tensors and Dynamic neural networks in Python with strong GPU acceleration

Pros of PyTorch

  • Comprehensive deep learning framework with extensive capabilities
  • Large, active community and ecosystem of tools/libraries
  • Robust support for GPU acceleration and distributed computing

Cons of PyTorch

  • Steeper learning curve for beginners
  • Larger codebase and installation size
  • More complex setup and configuration for some use cases

Code Comparison

PyTorch example:

import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = torch.add(x, y)
print(z)

Vercel AI example:

import { OpenAIStream } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'

const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }))

// OpenAIStream wraps the provider's streaming response
const response = await openai.createChatCompletion({
  model: 'gpt-3.5-turbo',
  stream: true,
  messages: [{ role: 'user', content: 'Hello' }],
})
const stream = OpenAIStream(response)

PyTorch focuses on low-level tensor operations and neural network building blocks, while Vercel AI provides high-level abstractions for working with AI models and APIs. PyTorch is more suitable for developing and training custom models, whereas Vercel AI simplifies the integration of pre-trained models into applications.


An Open Source Machine Learning Framework for Everyone

Pros of TensorFlow

  • Comprehensive machine learning ecosystem with extensive tools and libraries
  • Robust performance for large-scale deployments and distributed computing
  • Strong community support and extensive documentation

Cons of TensorFlow

  • Steeper learning curve, especially for beginners
  • Can be overkill for simpler AI projects or rapid prototyping
  • Slower development cycle compared to more lightweight frameworks

Code Comparison

TensorFlow:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

Vercel AI:

import { OpenAIStream } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'

export const runtime = 'edge'

const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }))

export async function POST(req: Request) {
  const { messages } = await req.json()
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  })
  return new Response(OpenAIStream(response))
}

TensorFlow offers a more traditional machine learning approach with explicit model definition, while Vercel AI focuses on simplifying AI integration, particularly for web applications. TensorFlow provides greater flexibility and control over model architecture, whereas Vercel AI emphasizes ease of use and quick implementation of AI features, especially in serverless environments.


TensorFlow code and pre-trained models for BERT

Pros of BERT

  • Established and widely adopted in the NLP community
  • Extensive pre-training on large datasets
  • Proven performance on various language understanding tasks

Cons of BERT

  • Primarily focused on natural language processing tasks
  • Requires more computational resources for fine-tuning
  • Less versatile for general AI applications

Code Comparison

BERT example:

import tensorflow as tf
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained('bert-base-uncased')

ai example:

import { OpenAIStream, StreamingTextResponse } from 'ai'
import { Configuration, OpenAIApi } from 'openai-edge'

const openai = new OpenAIApi(new Configuration({ apiKey: process.env.OPENAI_API_KEY }))

Summary

BERT is a powerful NLP model with extensive pre-training, while AI is a more versatile toolkit for building AI-powered applications. BERT excels in language understanding tasks but requires more resources, whereas AI offers a broader range of AI capabilities with easier integration for developers.


README


Vercel AI SDK

The Vercel AI SDK is a TypeScript toolkit designed to help you build AI-powered applications using popular frameworks like Next.js, React, Svelte, Vue and runtimes like Node.js.

To learn more about how to use the Vercel AI SDK, check out our API Reference and Documentation.

Installation

You will need Node.js 18+ and pnpm installed on your local development machine.

npm install ai

Usage

AI SDK Core

The AI SDK Core module provides a unified API to interact with model providers like OpenAI, Anthropic, Google, and more.

You will then install the model provider of your choice.

npm install @ai-sdk/openai
@/index.ts (Node.js Runtime)
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai'; // Ensure OPENAI_API_KEY environment variable is set

async function main() {
  const { text } = await generateText({
    model: openai('gpt-4-turbo'),
    system: 'You are a friendly assistant!',
    prompt: 'Why is the sky blue?',
  });

  console.log(text);
}

main();

AI SDK UI

The AI SDK UI module provides a set of hooks that help you build chatbots and generative user interfaces. These hooks are framework agnostic, so they can be used in Next.js, React, Svelte, Vue, and SolidJS.

@/app/page.tsx (Next.js App Router)
'use client';

import { useChat } from 'ai/react';

export default function Page() {
  const { messages, input, handleSubmit, handleInputChange, isLoading } =
    useChat();

  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          <div>{message.role}</div>
          <div>{message.content}</div>
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          value={input}
          placeholder="Send a message..."
          onChange={handleInputChange}
          disabled={isLoading}
        />
      </form>
    </div>
  );
}
@/app/api/chat/route.ts (Next.js App Router)
import { CoreMessage, streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages }: { messages: CoreMessage[] } = await req.json();

  const result = await streamText({
    model: openai('gpt-4'),
    system: 'You are a helpful assistant.',
    messages,
  });

  return result.toDataStreamResponse();
}
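The response produced by toDataStreamResponse encodes each part as a type-prefixed line, with text parts prefixed `0:`. A simplified sketch of decoding that framing (illustrative; the SDK ships its own parser on the client side):

```typescript
// Decode text parts from the AI SDK's line-based data stream framing,
// where each line looks like `0:"token"` (0 = text part). Simplified sketch.
function decodeTextParts(payload: string): string {
  return payload
    .split('\n')
    .filter(line => line.startsWith('0:'))
    .map(line => JSON.parse(line.slice(2)) as string)
    .join('');
}

const text = decodeTextParts('0:"Hello"\n0:", world"\n');
console.log(text); // Hello, world
```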

AI SDK RSC

The AI SDK RSC module provides an alternative API that also helps you build chatbots and generative user interfaces for frameworks that support React Server Components (RSC).

This API leverages the benefits of Streaming and Server Actions offered by RSC, thus improving the developer experience of managing states between server/client and building generative user interfaces.

@/app/actions.tsx (Next.js App Router)
import { createAI, streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

async function submitMessage(input: string) {
  'use server';

  const stream = await streamUI({
    model: openai('gpt-4-turbo'),
    messages: [
      { role: 'system', content: 'You are a friendly bot!' },
      { role: 'user', content: input },
    ],
    text: ({ content, done }) => {
      return <div>{content}</div>;
    },
    tools: {
      deploy: {
        description: 'Deploy repository to vercel',
        parameters: z.object({
          repositoryName: z
            .string()
            .describe('The name of the repository, example: vercel/ai-chatbot'),
        }),
        generate: async function* ({ repositoryName }) {
          yield <div>Cloning repository {repositoryName}...</div>;
          await new Promise(resolve => setTimeout(resolve, 3000));
          yield <div>Building repository {repositoryName}...</div>;
          await new Promise(resolve => setTimeout(resolve, 2000));
          return <div>{repositoryName} deployed!</div>;
        },
      },
    },
  });

  return {
    ui: stream.value,
  };
}

export const AI = createAI({
  initialAIState: {},
  initialUIState: {},
  actions: {
    submitMessage,
  },
});
@/app/layout.tsx (Next.js App Router)
import { ReactNode } from 'react';
import { AI } from '@/app/actions';

export default function Layout({ children }: { children: ReactNode }) {
  return <AI>{children}</AI>;
}
@/app/page.tsx (Next.js App Router)
'use client';

import { useActions } from 'ai/rsc';
import { ReactNode, useState } from 'react';

export default function Page() {
  const [input, setInput] = useState('');
  const [messages, setMessages] = useState<ReactNode[]>([]);
  const { submitMessage } = useActions();

  return (
    <div>
      <input
        value={input}
        onChange={event => {
          setInput(event.target.value);
        }}
      />
      <button
        onClick={async () => {
          const { ui } = await submitMessage(input);
          setMessages(currentMessages => [...currentMessages, ui]);
        }}
      >
        Submit
      </button>
    </div>
  );
}

Templates

We've built templates that include AI SDK integrations for different use cases, providers, and frameworks. You can use these templates to get started with your AI-powered application.

Community

The Vercel AI SDK community can be found on GitHub Discussions where you can ask questions, voice ideas, and share your projects with other people.

Contributing

Contributions to the Vercel AI SDK are welcome and highly appreciated. Before you jump in, please review our Contribution Guidelines to ensure a smooth contributing experience.

Authors

This library is created by Vercel and Next.js team members, with contributions from the Open Source Community.
