acheong08/ChatGPT

Reverse engineered ChatGPT API

Top Related Projects

The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.

AI agent stdlib that works with any LLM and TypeScript AI SDK.

JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.

GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.

🔮 ChatGPT Desktop Application (Mac, Windows and Linux)

Quick Overview

The acheong08/ChatGPT repository (published on PyPI as revChatGPT) is an unofficial Python library for interacting with OpenAI's ChatGPT. It gives developers a simple interface for integrating ChatGPT into their applications, covering both a reverse-engineered client for the ChatGPT web service (V1) and a wrapper around the official chat API (V3).

Pros

  • Easy to use and integrate into existing Python projects
  • Supports both free and paid (Plus) ChatGPT accounts
  • Regularly updated to keep up with changes in the ChatGPT API
  • Includes features like conversation management and proxy support

Cons

  • Unofficial library, which may lead to potential instability if OpenAI changes their API
  • Requires API keys and authentication, which may not be suitable for all use cases
  • Limited documentation compared to official libraries
  • May be affected by OpenAI's usage policies and rate limits

Code Examples

  1. Basic usage:
from revChatGPT.V3 import Chatbot

chatbot = Chatbot(api_key="your_api_key_here")
response = chatbot.ask("Hello, how are you?")
print(response)
  2. Conversation management:
from revChatGPT.V3 import Chatbot

chatbot = Chatbot(api_key="your_api_key_here")
# V3 keeps history per convo_id; reuse the same id to continue a conversation
chatbot.ask("What is the capital of France?", convo_id="geography")
follow_up = chatbot.ask("What is its population?", convo_id="geography")
print(follow_up)
  3. Using a proxy:
from revChatGPT.V3 import Chatbot

chatbot = Chatbot(api_key="your_api_key_here", proxy="http://your_proxy_here")
response = chatbot.ask("What's the weather like today?")
print(response)

Getting Started

To get started with the acheong08/ChatGPT library, follow these steps:

  1. Install the library using pip:

    pip install revChatGPT
    
  2. Import the Chatbot class and create an instance:

    from revChatGPT.V3 import Chatbot
    chatbot = Chatbot(api_key="your_api_key_here")
    
  3. Start asking questions:

    response = chatbot.ask("Tell me a joke")
    print(response)
    

Remember to replace "your_api_key_here" with your actual OpenAI API key. You can obtain an API key by signing up for an account on the OpenAI website.
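
For real projects, it is better not to hard-code the key. A minimal sketch that reads it from an environment variable instead (the OPENAI_API_KEY name is just a convention here; the key is still passed to the library explicitly):

import os
from revChatGPT.V3 import Chatbot

# Assumption: the key was exported beforehand, e.g. `export OPENAI_API_KEY=sk-...`
api_key = os.environ["OPENAI_API_KEY"]

chatbot = Chatbot(api_key=api_key)
print(chatbot.ask("Tell me a joke"))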

Competitor Comparisons

The ChatGPT Retrieval Plugin lets you easily find personal or work documents by asking questions in natural language.

Pros of chatgpt-retrieval-plugin

  • Official OpenAI plugin, ensuring compatibility and support
  • Designed specifically for document retrieval and integration with ChatGPT
  • Includes features for document processing, embedding, and vector search

Cons of chatgpt-retrieval-plugin

  • More complex setup and configuration required
  • Limited to retrieval tasks, less versatile than ChatGPT
  • Requires additional infrastructure for document storage and indexing

Code Comparison

ChatGPT:

from revChatGPT.V1 import Chatbot
chatbot = Chatbot(config={
    "access_token": "<access_token>"
})
# V1's ask() yields streamed chunks; keep the last one as the full reply
response = ""
for data in chatbot.ask("Hello, how are you?"):
    response = data["message"]
print(response)

chatgpt-retrieval-plugin:

from datastore.factory import get_datastore
datastore = get_datastore()
results = datastore.query("What is the capital of France?")
print(results)

The ChatGPT repository provides a simpler interface for interacting with the ChatGPT model, while the chatgpt-retrieval-plugin focuses on document retrieval and integration with ChatGPT. The ChatGPT code example shows a straightforward way to send a query and receive a response, whereas the chatgpt-retrieval-plugin code demonstrates how to query a datastore for relevant information.
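
A common way to combine the two is to retrieve snippets first and then stuff them into the prompt sent to ChatGPT. A minimal, hypothetical sketch: retrieved_chunks below stands in for whatever a datastore query actually returns (the real plugin returns structured, async results), so treat it as an illustration of the pattern rather than working plugin code.

from revChatGPT.V3 import Chatbot

chatbot = Chatbot(api_key="your_api_key_here")

# Hypothetical stand-in for results returned by a document datastore query
retrieved_chunks = [
    "Paris is the capital and most populous city of France.",
    "The city proper has an estimated population of about 2.1 million.",
]

question = "What is the population of the capital of France?"
context = "\n".join(retrieved_chunks)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(chatbot.ask(prompt))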

AI agent stdlib that works with any LLM and TypeScript AI SDK.

Pros of Agentic

  • Built on top of LangChain, providing a more comprehensive framework for AI agents
  • Supports multiple AI models beyond just ChatGPT
  • Offers a more flexible and extensible architecture for building AI applications

Cons of Agentic

  • More complex setup and learning curve compared to ChatGPT's simpler implementation
  • Less focused on ChatGPT-specific features and optimizations
  • May have higher computational requirements due to its broader scope

Code Comparison

ChatGPT:

async def get_chat_response(self, prompt, output="text", conversation_id=None, parent_id=None):
    # ... (implementation details)
    async with httpx.AsyncClient() as client:
        response = await client.post(url, json=data, timeout=timeout)
    # ... (response handling)

Agentic:

class Agent:
    def __init__(self, llm, tools, memory=None):
        self.llm = llm
        self.tools = tools
        self.memory = memory or ConversationBufferMemory()

    def run(self, input_text):
        # ... (agent execution logic)

The code snippets highlight the different approaches: ChatGPT focuses on direct API interactions, while Agentic provides a more abstract agent-based structure using LangChain components.
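
To make the contrast concrete, here is a rough sketch of a single hand-rolled "agent step" on top of revChatGPT: the model picks a tool by name and the tool then runs locally. The tools dictionary and the prompt format are invented for illustration; a framework like Agentic or LangChain handles this plumbing (plus memory and error handling) for you.

from revChatGPT.V3 import Chatbot

chatbot = Chatbot(api_key="your_api_key_here")

# Hypothetical tools; a real agent framework manages these for you
tools = {
    "echo": lambda text: text,
    "word_count": lambda text: str(len(text.split())),
}

def run_agent_step(task: str) -> str:
    # Ask the model to pick a tool by name, then run it on the task
    choice = chatbot.ask(
        f"Available tools: {', '.join(tools)}. "
        f"Reply with exactly one tool name for this task: {task}"
    ).strip()
    tool = tools.get(choice, tools["echo"])
    return tool(task)

print(run_agent_step("Count the words in this sentence."))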

JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf

Pros of JARVIS

  • More comprehensive AI system with multiple capabilities beyond just chat
  • Designed for multimodal interactions (text, speech, vision)
  • Actively developed by Microsoft with regular updates

Cons of JARVIS

  • More complex to set up and use compared to ChatGPT
  • Requires more computational resources
  • Less focused on pure conversational AI

Code Comparison

ChatGPT (Python):

from revChatGPT.V1 import Chatbot

chatbot = Chatbot(config={
    "email": "<your email>",
    "password": "<your password>"
})

# V1's ask() yields streamed chunks; keep the last one as the full reply
response = ""
for data in chatbot.ask("Hello, how are you?"):
    response = data["message"]
print(response)

JARVIS (Python):

from jarvis import JARVIS

jarvis = JARVIS()
jarvis.load_plugins()

response = jarvis.process("What's the weather like today?")
print(response)

While ChatGPT focuses on providing a simple interface for conversational AI, JARVIS offers a more extensible framework for building complex AI systems with multiple capabilities. ChatGPT is easier to use for basic chat interactions, while JARVIS provides a broader range of AI functionalities at the cost of increased complexity.

AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.

Pros of AutoGPT

  • Autonomous task execution with minimal human intervention
  • Supports multiple AI models and can switch between them
  • Includes memory management and web browsing capabilities

Cons of AutoGPT

  • More complex setup and configuration required
  • Higher computational resources needed
  • Potential for unexpected or uncontrolled actions

Code Comparison

AutoGPT:

def start_interaction_loop(self):
    # Interaction loop
    while True:
        # Get user input
        user_input = self.user_interface.prompt_user_input()
        # ...

ChatGPT:

def ask(self, prompt, conversation_id=None, parent_id=None):
    # Send a prompt to ChatGPT and return the response
    return self.chatbot.ask(prompt, conversation_id, parent_id)

AutoGPT focuses on autonomous operation with a continuous interaction loop, while ChatGPT provides a simpler interface for direct question-answering. AutoGPT offers more advanced features but requires more setup, whereas ChatGPT is easier to use but has more limited functionality. The choice between them depends on the specific use case and desired level of automation.
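
The "interaction loop" distinction is easy to see in code. A minimal sketch of a continuous chat loop built on revChatGPT's single-shot ask(), assuming a V3 API-key setup as in the earlier examples:

from revChatGPT.V3 import Chatbot

chatbot = Chatbot(api_key="your_api_key_here")

# A simple read-eval-print loop; type "quit" or "exit" to stop
while True:
    user_input = input("You: ").strip()
    if user_input.lower() in {"quit", "exit"}:
        break
    print("Bot:", chatbot.ask(user_input))

AutoGPT's loop goes further by planning actions and calling tools autonomously, but the basic shape is the same.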

GUI for ChatGPT API and many LLMs. Supports agents, file-based QA, GPT finetuning and query with web search. All with a neat UI.

Pros of ChuanhuChatGPT

  • Provides a user-friendly graphical interface for interacting with ChatGPT
  • Supports multiple language models and APIs, including OpenAI, Claude, and local models
  • Offers advanced features like conversation history, custom prompts, and model fine-tuning

Cons of ChuanhuChatGPT

  • More complex setup and installation process compared to ChatGPT
  • Requires more system resources due to the graphical interface
  • May have a steeper learning curve for users unfamiliar with GUI-based applications

Code Comparison

ChatGPT (Python):

from revChatGPT.V1 import Chatbot

chatbot = Chatbot(config={
    "email": "<your email>",
    "password": "<your password>"
})

# V1's ask() yields streamed chunks; keep the last one as the full reply
response = ""
for data in chatbot.ask("Hello, how are you?"):
    response = data["message"]
print(response)

ChuanhuChatGPT (Python):

import gradio as gr
from modules import config
from modules.utils import *
from modules.presets import *

gr.Chatbot.postprocess = postprocess
gr.Chatbot.update = update

The code snippets show that ChatGPT focuses on a simple API interaction, while ChuanhuChatGPT utilizes the Gradio library for creating a web-based interface. ChuanhuChatGPT's code suggests a more complex structure with multiple modules and customizations.
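
For comparison, a bare-bones GUI around revChatGPT can be put together with Gradio in a few lines. This is a hedged sketch, not how ChuanhuChatGPT is actually structured, and it assumes a recent Gradio release that ships gr.ChatInterface:

import gradio as gr
from revChatGPT.V3 import Chatbot

chatbot = Chatbot(api_key="your_api_key_here")

def respond(message, history):
    # revChatGPT keeps its own conversation history internally,
    # so the history Gradio passes in is ignored here
    return chatbot.ask(message)

gr.ChatInterface(respond).launch()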

🔮 ChatGPT Desktop Application (Mac, Windows and Linux)

Pros of ChatGPT (lencx)

  • User-friendly desktop application with a graphical interface
  • Cross-platform support (Windows, macOS, Linux)
  • Additional features like prompt library and export options

Cons of ChatGPT (lencx)

  • Larger file size and resource usage due to being a full desktop application
  • May require more frequent updates to maintain compatibility with OpenAI's API

Code Comparison

ChatGPT (acheong08):

async def get_chat_response(self, prompt, output="text", conversation_id=None, parent_id=None):
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "https://chat.openai.com/backend-api/conversation",
            headers=self.headers,
            json={
                "action": "next",
                "messages": [
                    {
                        "id": str(uuid.uuid4()),
                        "role": "user",
                        "content": {"content_type": "text", "parts": [prompt]},
                    }
                ],
                "conversation_id": conversation_id,
                "parent_message_id": parent_id or str(uuid.uuid4()),
                "model": "text-davinci-002-render-sha"
            },
        )

ChatGPT (lencx):

const handleSubmit = async () => {
  if (!inputValue.trim()) return;
  const userMessage = { role: 'user', content: inputValue.trim() };
  setMessages((prevMessages) => [...prevMessages, userMessage]);
  setInputValue('');
  try {
    const response = await fetch('/api/chat', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages: [...messages, userMessage] }),
    });
    const data = await response.json();
    setMessages((prevMessages) => [...prevMessages, data.message]);
  } catch (error) {
    console.error('Error:', error);
  }
};

README

ChatGPT

English - 中文 - Spanish - 日本語 - 한국어

Reverse Engineered ChatGPT API by OpenAI. Extensible for chatbots etc.

Installation

python -m pip install --upgrade revChatGPT

Supported Python Versions

  • Minimum: Python 3.9
  • Recommended: Python 3.11+

V1 Standard ChatGPT

V1 routes requests through a Cloudflare bypass proxy to make life convenient for everyone. The proxy is open source: https://github.com/acheong08/ChatGPT-Proxy-V4

To use your own deployed proxy, set the environment variable CHATGPT_BASE_URL to https://yourproxy.com/api/
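
If you prefer to configure this from Python rather than the shell, a small sketch (the variable is set before the import, since the library may read it when the module loads; the proxy URL is a placeholder):

import os

# Point revChatGPT at your own proxy deployment (placeholder URL)
os.environ["CHATGPT_BASE_URL"] = "https://yourproxy.com/api/"

from revChatGPT.V1 import Chatbot  # import after setting the variable

chatbot = Chatbot(config={"access_token": "<access_token>"})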

Rate limits

  • Proxy server: 5 requests / 10 seconds
  • OpenAI: 50 requests / hour for each account
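
The library does not throttle requests for you, so batch jobs should pace themselves to stay under these limits. A rough client-side sketch (the two-second sleep is an arbitrary safety margin, not a documented requirement):

import time
from revChatGPT.V1 import Chatbot

chatbot = Chatbot(config={"access_token": "<access_token>"})

prompts = ["First question", "Second question", "Third question"]
for prompt in prompts:
    response = ""
    for data in chatbot.ask(prompt):
        response = data["message"]
    print(response)
    # Stay comfortably under the proxy's 5 requests / 10 seconds
    time.sleep(2)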

Configuration

  1. Create an account on OpenAI's ChatGPT
  2. Save your email and password

Authentication method: (Choose 1)

- Email/Password

Not supported for Google/Microsoft accounts.

{
  "email": "email",
  "password": "your password"
}

- Access token

Retrieve it from https://chat.openai.com/api/auth/session while logged in to ChatGPT:

{
  "access_token": "<access_token>"
}

- Optional configuration:

{
  "conversation_id": "UUID...",
  "parent_id": "UUID...",
  "proxy": "...",
  "model": "gpt-4", // gpt-4-browsing, text-davinci-002-render-sha, gpt-4, gpt-4-plugins
  "plugin_ids": ["plugin-d1d6eb04-3375-40aa-940a-c2fc57ce0f51"], // Wolfram Alpha example
  "disable_history": true,
  "PUID": "<_puid cookie for plus accounts>", // Only if you have a plus account and use GPT-4
  "unverified_plugin_domains":["showme.redstarplugin.com"] // Unverfied plugins to install
}
  3. Save this as $HOME/.config/revChatGPT/config.json
  4. If you are using Windows, create an environment variable named HOME and set it to your home profile so the script can locate config.json.

Plugin IDs can be found here. Remember to set model to gpt-4-plugins if plugins are enabled. Plugins may or may not work if you haven't installed them from the web interface. You can call chatbot.install_plugin(plugin_id=plugin_id) to install any one of them from code. Call chatbot.get_plugins() to get a list of all plugins available.
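
Put together, enabling a plugin from code looks roughly like this (the Wolfram Alpha plugin ID is the one from the optional configuration example above; the exact shape of what get_plugins() returns is not documented here, so it is simply printed):

from revChatGPT.V1 import Chatbot

chatbot = Chatbot(config={
    "access_token": "<access_token>",
    "model": "gpt-4-plugins",
})

# List what is available, then install one plugin by ID
print(chatbot.get_plugins())
chatbot.install_plugin(plugin_id="plugin-d1d6eb04-3375-40aa-940a-c2fc57ce0f51")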

Usage

Command line

python3 -m revChatGPT.V1

        ChatGPT - A command-line interface to OpenAI's ChatGPT (https://chat.openai.com/chat)
        Repo: github.com/acheong08/ChatGPT
Type '!help' to show a full list of commands
Logging in...
You:
(Press Esc followed by Enter to finish)

The command-line interface supports multi-line input and navigation with the arrow keys. When the prompt is empty, the arrow keys also let you recall and edit previous inputs, and matching earlier prompts are auto-completed as you type. To submit, press Esc and then Enter, since Enter alone inserts a new line in multi-line mode.

Set the environment variable NO_COLOR to true to disable color output.

Developer API

Basic example (streamed):

from revChatGPT.V1 import Chatbot
chatbot = Chatbot(config={
  "access_token": "<your access_token>"
})
print("Chatbot: ")
prev_text = ""
for data in chatbot.ask(
    "Hello world",
):
    message = data["message"][len(prev_text) :]
    print(message, end="", flush=True)
    prev_text = data["message"]
print()

Basic example (single result):

from revChatGPT.V1 import Chatbot
chatbot = Chatbot(config={
  "access_token": "<your access_token>"
})
prompt = "how many beaches does portugal have?"
response = ""
for data in chatbot.ask(
  prompt
):
    response = data["message"]
print(response)

All API methods

Refer to the wiki for advanced developer usage.

V3 Official Chat API

Recently released by OpenAI

  • Paid

Get API key from https://platform.openai.com/account/api-keys

Command line

python3 -m revChatGPT.V3 --api_key <api_key>

  $ python3 -m revChatGPT.V3

    ChatGPT - Official ChatGPT API
    Repo: github.com/acheong08/ChatGPT
    Version: 6.2

Type '!help' to show a full list of commands
Press Esc followed by Enter or Alt+Enter to send a message.

usage: V3.py [-h] --api_key API_KEY [--temperature TEMPERATURE] [--no_stream]
             [--base_prompt BASE_PROMPT] [--proxy PROXY] [--top_p TOP_P]
             [--reply_count REPLY_COUNT] [--enable_internet] [--config CONFIG]
             [--submit_key SUBMIT_KEY]
             [--model {gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-0301,gpt-3.5-turbo-0613,gpt-4,gpt-4-0314,gpt-4-32k,gpt-4-32k-0314,gpt-4-0613}]
             [--truncate_limit TRUNCATE_LIMIT]

Developer API

Basic example

from revChatGPT.V3 import Chatbot
chatbot = Chatbot(api_key="<api_key>")
print(chatbot.ask("Hello world"))

Streaming example

from revChatGPT.V3 import Chatbot
chatbot = Chatbot(api_key="<api_key>")
for data in chatbot.ask_stream("Hello world"):
    print(data, end="", flush=True)

Awesome ChatGPT

My list

If you have a cool project you want added to the list, open an issue.

Disclaimers

This is not an official OpenAI product. This is a personal project and is not affiliated with OpenAI in any way. Don't sue me.

Contributors

This project exists thanks to all the people who contribute.

Additional credits