gpt4free
The official gpt4free repository | a collection of powerful language models | o4, o3, DeepSeek R1, GPT-4.1, Gemini 2.5
Top Related Projects
Reverse engineered ChatGPT API
Your API ⇒ Paid MCP. Instantly.
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
The official Python library for the OpenAI API
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
Quick Overview
gpt4free is an open-source project that provides free access to various AI models, including GPT-4, through reverse-engineered APIs. It aims to make advanced language models accessible to developers and researchers without the need for paid subscriptions or API keys.
Pros
- Free access to powerful AI models
- Multiple providers and models available
- Active community and frequent updates
- Useful for testing and prototyping AI applications
Cons
- Potential legal and ethical concerns regarding API usage
- Reliability issues due to dependence on third-party services
- May not be suitable for production environments
- Limited support and documentation compared to official APIs
Code Examples
- Using the Forefront provider:
import g4f
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=g4f.Provider.Forefront,
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    stream=True,
)
for message in response:
    print(message, flush=True, end='')
- Using the You provider:
import g4f
response = g4f.ChatCompletion.create(
    model=g4f.models.gpt_35_turbo,
    messages=[{"role": "user", "content": "Write a poem about AI"}],
    provider=g4f.Provider.You,
)
print(response)
- Using the Bing provider:
import g4f
response = g4f.ChatCompletion.create(
    model="gpt-4",
    provider=g4f.Provider.Bing,
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    cookies=g4f.get_cookies(".bing.com"),
)
print(response)
Getting Started
To get started with gpt4free, follow these steps:
- Install the library:
pip install -U g4f
- Import the library and use a provider:
import g4f
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    provider=g4f.Provider.OpenaiChat,
    messages=[{"role": "user", "content": "Hello, world!"}],
)
print(response)
Note: Make sure to check the project's documentation for the latest updates and provider-specific instructions.
Competitor Comparisons
Reverse engineered ChatGPT API
Pros of ChatGPT
- More established project with a larger community and longer development history
- Offers a wider range of features, including support for multiple ChatGPT models
- Better documentation and more comprehensive setup instructions
Cons of ChatGPT
- Requires authentication and API keys, which may be less accessible for some users
- More complex setup process compared to gpt4free
- May have higher usage costs due to reliance on official OpenAI APIs
Code Comparison
ChatGPT:
from revChatGPT.V3 import Chatbot
chatbot = Chatbot(api_key="your_api_key")
response = chatbot.ask("Hello, how are you?")
print(response)
gpt4free:
import g4f
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(response)
Both repositories provide access to ChatGPT-like functionality, but gpt4free aims to offer free access without authentication, while ChatGPT focuses on providing a more robust and official integration with OpenAI's services. The choice between them depends on the user's needs, budget, and ethical considerations regarding API usage.
Your API ⇒ Paid MCP. Instantly.
Pros of agentic
- Focuses on building autonomous AI agents, offering a more specialized and advanced approach
- Provides a framework for creating complex, goal-oriented AI systems
- Emphasizes ethical considerations and responsible AI development
Cons of agentic
- Less accessible for users seeking simple, ready-to-use GPT-like functionality
- Requires more technical knowledge and setup compared to gpt4free
- Smaller community and fewer contributors, potentially leading to slower development
Code Comparison
gpt4free:
from g4f import ChatCompletion
response = ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(response)
agentic:
from agentic import Agent, Task
agent = Agent()
task = Task("Greet the user and ask how they are")
result = agent.run(task)
print(result)
The code comparison shows that gpt4free provides a more straightforward interface for generating responses, while agentic focuses on creating autonomous agents to perform tasks. gpt4free is better suited for quick, chat-like interactions, whereas agentic is designed for more complex, goal-oriented AI applications.
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Pros of AutoGPT
- Autonomous task completion with minimal human intervention
- Versatile application across various domains (e.g., coding, research, analysis)
- Active development and community support
Cons of AutoGPT
- Requires API key and potentially higher costs for extended use
- More complex setup and configuration process
- May produce inconsistent or unexpected results due to its autonomous nature
Code Comparison
AutoGPT:
def start_interaction_loop(self):
    # Interaction loop
    while True:
        # Get user input
        user_input = input("Human: ")
        if user_input.lower() == "exit":
            break
gpt4free:
def create_chat(self, model="gpt-3.5-turbo", messages=None, **kwargs):
    if messages is None:
        messages = []
    return ChatCompletion.create(
        model=model, messages=messages, **kwargs
    )
AutoGPT focuses on creating an autonomous agent that can perform tasks with minimal human intervention, while gpt4free aims to provide free access to various language models. AutoGPT offers more advanced features but requires more setup, while gpt4free is simpler to use but may have limitations in terms of available models and functionality.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Pros of DeepSpeed
- Focuses on optimizing deep learning training and inference
- Offers advanced techniques like ZeRO (Zero Redundancy Optimizer)
- Supports various AI frameworks and models
Cons of DeepSpeed
- More complex setup and configuration
- Primarily targets large-scale AI training scenarios
- Steeper learning curve for beginners
Code Comparison
DeepSpeed:
import deepspeed
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=params,
)
gpt4free:
import g4f
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
Summary
DeepSpeed is a powerful library for optimizing deep learning workflows, particularly suited for large-scale AI training. It offers advanced techniques like ZeRO and supports various AI frameworks. However, it has a steeper learning curve and is more complex to set up compared to gpt4free.
gpt4free, on the other hand, provides a simpler interface for accessing GPT-like models, making it more accessible for quick prototyping and smaller projects. It lacks the advanced optimization features of DeepSpeed but offers easier integration for basic AI text generation tasks.
The official Python library for the OpenAI API
Pros of openai-python
- Official library maintained by OpenAI, ensuring reliability and up-to-date features
- Comprehensive documentation and support from OpenAI
- Seamless integration with OpenAI's API and services
Cons of openai-python
- Requires an API key and associated costs for usage
- Limited to OpenAI's models and services
Code Comparison
openai-python:
import openai
openai.api_key = "your-api-key"
response = openai.Completion.create(engine="text-davinci-002", prompt="Hello, world!")
print(response.choices[0].text)
gpt4free:
import g4f
response = g4f.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hello, world!"}])
print(response)
Key Differences
- openai-python is the official library, while gpt4free is a third-party alternative
- gpt4free aims to provide free access to AI models, while openai-python requires an API key and associated costs
- openai-python offers more extensive features and model options, while gpt4free focuses on providing free alternatives
- gpt4free may have potential legal and ethical concerns due to its nature of bypassing official APIs
Use Cases
- openai-python: Ideal for professional and commercial applications requiring reliable and official API access
- gpt4free: Suitable for personal projects, experimentation, or scenarios where API costs are a concern, but with potential limitations and risks
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
Pros of transformers
- Comprehensive library with support for numerous pre-trained models
- Extensive documentation and community support
- Seamless integration with PyTorch and TensorFlow
Cons of transformers
- Steeper learning curve for beginners
- Larger library size and potentially higher resource requirements
Code Comparison
transformers:
from transformers import pipeline
classifier = pipeline("sentiment-analysis")
result = classifier("I love this library!")[0]
print(f"Label: {result['label']}, Score: {result['score']:.4f}")
gpt4free:
import g4f
response = g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(response)
Summary
transformers is a robust, well-documented library for working with various pre-trained models, offering extensive functionality and integration with popular deep learning frameworks. It's ideal for advanced users and large-scale projects.
gpt4free, on the other hand, provides a simpler interface for accessing GPT models, making it more accessible for quick implementations and experimentation. However, it may lack the comprehensive features and community support of transformers.
GPT4Free (g4f)
Created by @xtekky,
maintained by @hlohaus
Support the project on GitHub Sponsors ❤️
Live demo & docs: https://g4f.dev | Documentation: https://g4f.dev/docs
GPT4Free (g4f) is a community-driven project that aggregates multiple accessible providers and interfaces to make working with modern LLMs and media-generation models easier and more flexible. GPT4Free aims to offer multi-provider support, a local GUI, OpenAI-compatible REST APIs, and convenient Python and JavaScript clients, all under a community-first license.
This README is a consolidated, improved, and complete guide to installing, running, and contributing to GPT4Free.
Table of contents
- What's included
- Quick links
- Requirements & compatibility
- Installation
- Running the app
- Using the Python client
- Using GPT4Free.js (browser JS client)
- Providers & models (overview)
- Local inference & media
- Configuration & customization
- Running on smartphone
- Interference API (OpenAI-compatible)
- Examples & common patterns
- Contributing
- Security, privacy & takedown policy
- Credits, contributors & attribution
- Powered-by highlights
- Changelog & releases
- Manifesto / Project principles
- License
- Contact & sponsorship
- Appendix: Quick commands & examples
What's included
- Python client library and async client.
- Optional local web GUI.
- FastAPI-based OpenAI-compatible API (Interference API).
- Official browser JS client (g4f.dev distribution).
- Docker images (full and slim).
- Multi-provider adapters (LLMs, media providers, local inference backends).
- Tooling for image/audio/video generation and media persistence.
Quick links
- Website & docs: https://g4f.dev | https://g4f.dev/docs
- PyPI: https://pypi.org/project/g4f
- Docker image: https://hub.docker.com/r/hlohaus789/g4f
- Releases: https://github.com/xtekky/gpt4free/releases
- Issues: https://github.com/xtekky/gpt4free/issues
- Community: Telegram (https://telegram.me/g4f_channel) · Discord News (https://discord.gg/5E39JUWUFa) · Discord Support (https://discord.gg/qXA4Wf4Fsm)
Requirements & compatibility
- Python 3.10+ recommended.
- Google Chrome/Chromium for providers using browser automation.
- Docker for containerized deployment.
- Works on x86_64 and arm64 (slim image supports both).
- Some provider adapters may require platform-specific tooling (Chrome/Chromium, etc.). Check provider docs for details.
Installation
Docker (recommended)
- Install Docker: https://docs.docker.com/get-docker/
- Create persistent directories:
- Example (Linux/macOS):
mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
sudo chown -R 1200:1201 ${PWD}/har_and_cookies ${PWD}/generated_media
- Pull image:
docker pull hlohaus789/g4f
- Run container:
docker run -p 8080:8080 -p 7900:7900 \
  --shm-size="2g" \
  -v ${PWD}/har_and_cookies:/app/har_and_cookies \
  -v ${PWD}/generated_media:/app/generated_media \
  hlohaus789/g4f:latest
Notes:
- Port 8080 serves GUI/API; 7900 can expose a VNC-like desktop for provider logins (optional).
- Increase --shm-size for heavier browser automation tasks.
Slim Docker image (x64 & arm64)
mkdir -p ${PWD}/har_and_cookies ${PWD}/generated_media
chown -R 1000:1000 ${PWD}/har_and_cookies ${PWD}/generated_media
docker run \
-p 1337:8080 -p 8080:8080 \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_media:/app/generated_media \
hlohaus789/g4f:latest-slim
Notes:
- The slim image can update the g4f package on startup and installs additional dependencies as needed.
- In this example, the Interference API is mapped to 1337.
Windows Guide (.exe)
- Download the release artifact g4f.exe.zip from https://github.com/xtekky/gpt4free/releases/latest
- Unzip and run g4f.exe.
- Open the GUI at http://localhost:8080/chat/
- If Windows Firewall blocks access, allow the application.
Python Installation (pip / from source / partial installs)
Prerequisites:
- Python 3.10+ (https://www.python.org/downloads/)
- Chrome/Chromium for some providers.
Install from PyPI (recommended):
pip install -U g4f[all]
Partial installs
- To install only specific functionality, use optional extras groups. See docs/requirements.md in the project docs.
Install from source:
git clone https://github.com/xtekky/gpt4free.git
cd gpt4free
pip install -r requirements.txt
pip install -e .
Notes:
- Some features require Chrome/Chromium or other tools; follow provider-specific docs.
Running the app
GUI (web client)
- Run via Python:
from g4f.gui import run_gui
run_gui()
- Or via CLI:
python -m g4f.cli gui --port 8080 --debug
FastAPI / Interference API
- Start FastAPI server:
python -m g4f --port 8080 --debug
- If using slim docker mapping, Interference API may be available at
http://localhost:1337/v1
- Swagger UI:
http://localhost:1337/docs
CLI
- Start GUI server:
python -m g4f.cli gui --port 8080 --debug
Optional provider login (desktop within container)
- Accessible at:
http://localhost:7900/?autoconnect=1&resize=scale&password=secret
- Useful for logging into web-based providers to obtain cookies/HAR files.
Using the Python client
Install:
pip install -U g4f[all]
Synchronous text example:
from g4f.client import Client
client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    web_search=False
)
print(response.choices[0].message.content)
Example output (responses will vary):
Hello! How can I assist you today?
Image generation example:
from g4f.client import Client
client = Client()
response = client.images.generate(
    model="flux",
    prompt="a white siamese cat",
    response_format="url"
)
print(f"Generated image URL: {response.data[0].url}")
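Since `images.generate` returns a URL, a small helper for persisting the result locally can be handy. This is a sketch, not part of the g4f API; the filename derivation is illustrative:

```python
import os
import urllib.request
from urllib.parse import urlparse

def filename_from_url(url: str) -> str:
    """Derive a local filename from a generated-media URL."""
    return os.path.basename(urlparse(url).path) or "image.png"

def save_image(url: str, out_dir: str = "generated_media") -> str:
    """Download a generated image into out_dir and return its local path."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, filename_from_url(url))
    urllib.request.urlretrieve(url, path)
    return path
```

Pairing this with the mapped generated_media directory keeps downloads alongside media the server persists.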
Async client example:
from g4f.client import AsyncClient
import asyncio
async def main():
    client = AsyncClient()
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Explain quantum computing briefly"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
Notes:
- See the full API reference for streaming, tool-calling patterns, and advanced options: https://g4f.dev/docs/client
Using GPT4Free.js (browser JS client)
Use the official JS client in the browser; no backend is required.
Example:
<script type="module">
  import Client from 'https://g4f.dev/dist/js/client.js';

  const client = new Client();
  const result = await client.chat.completions.create({
    model: 'gpt-4.1', // Or "gpt-4o", "deepseek-v3", etc.
    messages: [{ role: 'user', content: 'Explain quantum computing' }]
  });
  console.log(result.choices[0].message.content);
</script>
Notes:
- The JS client is distributed via the g4f.dev CDN for easy usage. Review CORS considerations and usage limits.
Providers & models (overview)
- GPT4Free integrates many providers including (but not limited to) OpenAI-compatible endpoints, PerplexityLabs, Gemini, MetaAI, Pollinations (media), and local inference backends.
- Model availability and behavior depend on provider capabilities. See the providers doc for current, supported provider/model lists: https://g4f.dev/docs/providers-and-models
Provider requirements may include:
- API keys or tokens (for authenticated providers)
- Browser cookies / HAR files for providers scraped via browser automation
- Chrome/Chromium or headless browser tooling
- Local model binaries and runtime (for local inference)
Local inference & media
- GPT4Free supports local inference backends. See docs/local.md for supported runtimes and hardware guidance.
- Media generation (image, audio, video) is supported through providers (e.g., Pollinations). See docs/media.md for formats, options, and sample usage.
Configuration & customization
- Configure via environment variables, CLI flags, or config files. See docs/config.md.
- To reduce install size, use partial requirement groups. See docs/requirements.md.
- Provider selection: learn how to set defaults and override per-request at docs/selecting_a_provider.md.
- Persistence: HAR files, cookies, and generated media persist in mapped directories (e.g., har_and_cookies, generated_media).
Running on smartphone
- The web GUI is responsive and can be accessed from a phone by visiting your host IP:8080 or via a tunnel. See docs/guides/phone.md.
Interference API (OpenAI-compatible)
- The Interference API enables OpenAI-like workflows routed through GPT4Free provider selection.
- Docs: docs/interference-api.md
- Default endpoint (example slim docker):
http://localhost:1337/v1
- Swagger UI:
http://localhost:1337/docs
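Because the Interference API follows the OpenAI chat-completions schema, a plain HTTP request works against it. A minimal stdlib-only sketch (the base URL and model name are examples; adjust them to your deployment):

```python
import json
import urllib.request

# OpenAI-style request body; usable model names depend on your provider setup
payload = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello, world!"}],
}

def chat(base_url: str = "http://localhost:1337/v1") -> dict:
    """POST an OpenAI-style chat completion to the Interference API."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because the schema matches, the official `openai` Python client should also work against this endpoint by overriding its base URL; see docs/interference-api.md for specifics.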
Examples & common patterns
- Streaming completions, stopping criteria, system messages, and tool-calling patterns: https://g4f.dev/docs/client
- Integrations (LangChain, PydanticAI): docs/pydantic_ai.md
- Legacy examples: docs/legacy.md
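For streaming specifically, responses arrive as incremental deltas in the OpenAI streaming shape. The sketch below shows how such a stream is consumed; the stubbed chunk list stands in for a real `create(..., stream=True)` call and is illustrative only:

```python
def collect_stream(chunks) -> str:
    """Concatenate the text deltas of a streamed chat completion."""
    parts = []
    for chunk in chunks:
        choices = chunk.get("choices", [])
        delta = choices[0].get("delta", {}) if choices else {}
        text = delta.get("content")
        if text:
            print(text, end="", flush=True)  # render tokens as they arrive
            parts.append(text)
    return "".join(parts)

# Stubbed chunks in the OpenAI streaming shape (illustrative only)
fake_stream = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo!"}}]},
    {"choices": [{"delta": {}}]},  # final chunk carries no content
]
```

With a real client, the same loop applies to the iterator returned when `stream=True` is passed (chunk objects expose the same choices/delta structure).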
Contributing
Contributions are welcome: new providers, features, docs, and fixes are all appreciated.
How to contribute:
- Fork the repository.
- Create a branch for your change.
- Run tests and linters.
- Open a Pull Request with a clear description and tests/examples if applicable.
Repository: https://github.com/xtekky/gpt4free
How to create a new provider
- Read the guide: docs/guides/create_provider.md
- Typical steps:
  - Implement a provider adapter in g4f/Provider/
  - Add configuration and dependency notes
  - Include tests and usage examples
  - Respect third-party code licenses and attribute appropriately
How AI can help you write code
- See: docs/guides/help_me.md for prompt templates and workflows to accelerate development.
Security, privacy & takedown policy
- Do not store or share sensitive credentials. Use per-provider recommended security practices.
- If your site appears in the project's links and you want it removed, send proof of ownership to takedown@g4f.ai and it will be removed promptly.
- For production, secure the server with HTTPS, authentication, and firewall rules. Limit access to provider credentials and cookie/HAR storage.
Credits, contributors & attribution
- Core creators: @xtekky (original), maintained by @hlohaus.
- Full contributor graph: https://github.com/xtekky/gpt4free/graphs/contributors
- Notable code inputs and attributions:
  - har_file.py: input from xqdoo00o/ChatGPT-to-API
  - PerplexityLabs.py: input from nathanrchn/perplexityai
  - Gemini.py: input from dsdanielpark/Gemini-API and HanaokaYuzu/Gemini-API
  - MetaAI.py: inspired by meta-ai-api by Strvm
  - proofofwork.py: input from missuo/FreeGPT35
Many more contributors are acknowledged in the repository.
Powered-by highlights
- Pollinations AI â generative media: https://github.com/pollinations/pollinations
- MoneyPrinter V2 â example project using GPT4Free: https://github.com/FujiwaraChoki/MoneyPrinterV2
- For a full list of projects and sites using GPT4Free, see: docs/powered-by.md
Changelog & releases
- Releases and full changelog: https://github.com/xtekky/gpt4free/releases
- Subscribe to Discord/Telegram for announcements.
Manifesto / Project principles
GPT4Free is guided by community principles:
- Open access to AI tooling and models.
- Collaboration across providers and projects.
- Opposition to monopolistic, closed systems that restrict creativity.
- Community-centered development and broad access to AI technologies.
- Promote innovation, creativity, and accessibility.
License
This program is licensed under the GNU General Public License v3.0 (GPLv3). See the full license: https://www.gnu.org/licenses/gpl-3.0.txt
Summary:
- You may redistribute and/or modify under the terms of GPLv3.
- The program is provided WITHOUT ANY WARRANTY.
Copyright notice
xtekky/gpt4free: Copyright (C) 2025 xtekky
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
Contact & sponsorship
- Maintainers: https://github.com/hlohaus
- Sponsorship: https://github.com/sponsors/hlohaus
- Issues & feature requests: https://github.com/xtekky/gpt4free/issues
- Takedown requests: takedown@g4f.ai
Appendix: Quick commands & examples
Install (pip):
pip install -U g4f[all]
Run GUI (Python):
python -m g4f.cli gui --port 8080 --debug
# or
python -c "from g4f.gui import run_gui; run_gui()"
Docker (full):
docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 7900:7900 \
--shm-size="2g" \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_media:/app/generated_media \
hlohaus789/g4f:latest
Docker (slim):
docker run -p 1337:8080 -p 8080:8080 \
-v ${PWD}/har_and_cookies:/app/har_and_cookies \
-v ${PWD}/generated_media:/app/generated_media \
hlohaus789/g4f:latest-slim
Python usage patterns:
- client.chat.completions.create(...)
- client.images.generate(...)
- Async variants via AsyncClient
Docs & deeper reading
- Full docs: https://g4f.dev/docs
- Client API docs: https://g4f.dev/docs/client
- Async client docs: https://g4f.dev/docs/async_client
- Provider guides: https://g4f.dev/docs/guides
- Local inference: https://g4f.dev/docs/local
Thank you for using and contributing to GPT4Free; together we make powerful AI tooling accessible, flexible, and community-driven.