refly
🎨 Refly is an open-source AI-native creation engine. Its intuitive free-form canvas interface combines multi-threaded dialogues, artifacts, AI knowledge-base integration, a Chrome extension for clipping and saving web content, contextual memory, intelligent search, a WYSIWYG AI editor, and more, empowering you to effortlessly transform ideas into production-ready content.
Top Related Projects
Robust Speech Recognition via Large-Scale Weak Supervision
Port of OpenAI's Whisper model in C/C++
High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
Faster Whisper transcription with CTranslate2
Quick Overview
Refly is an open-source AI-powered code review tool designed to enhance the code review process. It leverages large language models to provide automated code suggestions, identify potential issues, and offer explanations for complex code segments, aiming to improve code quality and developer productivity.
Pros
- Automated code review process, saving time for developers
- Utilizes advanced AI models to provide intelligent suggestions and explanations
- Integrates seamlessly with popular version control systems like GitHub
- Supports multiple programming languages and frameworks
Cons
- May require fine-tuning or customization for specific project needs
- Potential for false positives or missed issues in complex codebases
- Relies on external AI services, which may raise privacy concerns for some organizations
- Learning curve for developers to effectively use and interpret AI-generated suggestions
Code Examples
The snippets below illustrate typical usage of the Python client; treat the `ReflyClient` method names and response fields as indicative rather than a verbatim API reference.

# Initialize the Refly client
from refly import ReflyClient

client = ReflyClient(api_key="your_api_key")

# Submit code for review
review = client.review_code(
    code="def add(a, b):\n    return a + b",
    language="python",
)

# Print review suggestions
for suggestion in review.suggestions:
    print(f"Line {suggestion.line}: {suggestion.message}")

# Get an explanation for a code snippet
explanation = client.explain_code(
    code="lambda x: x**2 + 2*x + 1",
    language="python",
)
print(explanation.text)

# Analyze code complexity
complexity = client.analyze_complexity(
    code="def factorial(n):\n    return 1 if n == 0 else n * factorial(n - 1)",
    language="python",
)
print(f"Cyclomatic complexity: {complexity.cyclomatic}")
print(f"Cognitive complexity: {complexity.cognitive}")
Getting Started
To get started with Refly, follow these steps:
1. Install the Refly library:
   pip install refly
2. Set up your API key as an environment variable:
   export REFLY_API_KEY=your_api_key
3. Create a new Python file and import the Refly client:
   from refly import ReflyClient
   client = ReflyClient()  # reads REFLY_API_KEY from the environment
   # Use Refly features here
4. Start using Refly's features in your code review process, or integrate it into your CI/CD pipeline for automated code analysis.
Competitor Comparisons
Robust Speech Recognition via Large-Scale Weak Supervision
Pros of Whisper
- Highly accurate speech recognition across multiple languages
- Open-source with extensive documentation and community support
- Capable of handling various audio formats and noisy environments
Cons of Whisper
- Requires significant computational resources for optimal performance
- Limited real-time processing capabilities due to model size
- Primarily focused on speech-to-text, lacking additional NLP features
Code Comparison
Whisper:
import whisper
model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])
Refly:
from refly import Refly
refly = Refly(api_key="your_api_key")
response = refly.generate(prompt="Summarize this text:")
print(response.text)
Key Differences
- Whisper specializes in speech recognition, while Refly focuses on text generation and summarization
- Whisper is a standalone model, whereas Refly is an API-based service
- Whisper processes audio files, while Refly works with text input
- Whisper is open-source and locally deployable, Refly requires an API key and cloud access
Use Cases
- Whisper: Transcription, subtitling, voice command systems
- Refly: Content creation, text summarization, language translation
Both tools serve different purposes in the AI ecosystem, with Whisper excelling in speech-to-text tasks and Refly offering broader text-based AI capabilities.
Port of OpenAI's Whisper model in C/C++
Pros of whisper.cpp
- Lightweight C++ implementation, offering better performance and lower resource usage
- Supports various platforms including mobile and embedded systems
- Provides real-time audio transcription capabilities
Cons of whisper.cpp
- Limited to speech recognition and transcription tasks
- Requires more technical expertise to integrate and use effectively
- Less comprehensive feature set compared to Refly's AI-powered writing assistant
Code Comparison
whisper.cpp:
#include "whisper.h"
#include <vector>

int main(int argc, char ** argv) {
    struct whisper_context * ctx = whisper_init_from_file("model.bin");
    whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    std::vector<float> pcmf32;  // decoded 16 kHz mono samples go here
    whisper_full(ctx, params, pcmf32.data(), pcmf32.size());
    whisper_print_timings(ctx);
    whisper_free(ctx);
    return 0;
}
Refly:
from refly import Refly
refly = Refly(api_key="your_api_key")
response = refly.generate(prompt="Write a blog post about AI")
print(response.text)
Summary
whisper.cpp is a specialized speech recognition library focused on performance and portability, while Refly is a more comprehensive AI-powered writing assistant. whisper.cpp excels in lightweight, real-time transcription tasks across various platforms, but requires more technical expertise. Refly offers a broader range of writing-related features with easier integration, but may have higher resource requirements and less flexibility for speech recognition tasks.
High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
Pros of Whisper
- Focuses on efficient speech recognition using GPU acceleration
- Implements the Whisper model in C++ for improved performance
- Provides a command-line interface for easy use
Cons of Whisper
- Limited to speech recognition tasks only
- Requires specific hardware (NVIDIA GPU) for optimal performance
- Less versatile compared to Refly's broader AI capabilities
Code Comparison
Whisper (C++):
void CTranscribeTask::Process()
{
    const auto& mel = m_context.melSpectrogram();
    const auto& model = m_context.model();
    model.runEncoder( mel, m_encoderBegin, m_encoderEnd );
    // ... (additional processing code)
}
Refly (Python):
import whisper

model = whisper.load_model("base")

def process_audio(audio_file):
    audio = whisper.load_audio(audio_file)
    result = model.transcribe(audio)
    return result["text"]
Summary
Whisper is a specialized C++ implementation of the Whisper speech recognition model, optimized for GPU acceleration. It offers high performance but is limited to speech recognition tasks and requires specific hardware.
Refly, on the other hand, is a more versatile AI platform that includes speech recognition among other capabilities. It's implemented in Python, making it more accessible for general use but potentially less optimized for specific hardware.
The code comparison shows the difference in implementation languages and approaches, with Whisper using low-level C++ for performance and Refly using high-level Python for flexibility.
WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
Pros of WhisperX
- Focuses on speech recognition and transcription with advanced features like word-level timestamps
- Offers multi-language support and speaker diarization
- Provides a command-line interface for easy usage
Cons of WhisperX
- Limited to audio processing and transcription tasks
- May require more computational resources for advanced features
Code Comparison
WhisperX:
import whisperx

model = whisperx.load_model("large-v2", device="cuda")
audio = whisperx.load_audio("audio.mp3")
result = model.transcribe(audio)
print(result["segments"])
Refly:
from refly import Refly
refly = Refly(api_key="your_api_key")
response = refly.complete("Your prompt here")
print(response.choices[0].text)
Summary
WhisperX specializes in audio transcription and speech recognition, offering advanced features like word-level timestamps and speaker diarization. It's well-suited for projects requiring accurate audio processing. Refly, on the other hand, is a more general-purpose AI tool focused on text generation and completion tasks. While WhisperX excels in audio-related tasks, Refly offers broader applicability for various text-based AI applications.
Faster Whisper transcription with CTranslate2
Pros of faster-whisper
- Optimized for speed, offering faster transcription than the original Whisper model
- Supports multiple languages and can perform language detection
- Provides flexible API for various use cases, including streaming audio
Cons of faster-whisper
- Focused solely on speech recognition, lacking additional AI capabilities
- Requires more setup and configuration compared to Refly's all-in-one solution
- May have higher computational requirements for optimal performance
Code Comparison
faster-whisper:
from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
Refly:
from refly import Refly
refly = Refly(api_key="your_api_key")
response = refly.transcribe("audio.mp3")
print(response.text)
Summary
faster-whisper excels in speech recognition performance and flexibility, while Refly offers a more comprehensive AI toolkit with simpler integration. faster-whisper may be preferred for specialized speech-to-text tasks, whereas Refly provides a broader range of AI capabilities in a user-friendly package.
Pros of insanely-fast-whisper
- Focuses specifically on optimizing Whisper for speed, potentially offering faster transcription
- Provides detailed benchmarks and comparisons with other Whisper implementations
- Offers a simple command-line interface for easy use
Cons of insanely-fast-whisper
- Limited to Whisper functionality, while Refly offers a broader range of AI-powered features
- May require more technical expertise to set up and use effectively
- Less integrated with other tools and services compared to Refly's ecosystem
Code Comparison
insanely-fast-whisper:
# insanely-fast-whisper is driven from its command-line interface
insanely-fast-whisper --file-name audio.mp3
Refly:
from refly import Refly

refly = Refly(api_key="your_api_key")
transcript = refly.transcribe("audio.mp3")
The insanely-fast-whisper example demonstrates its focus on Whisper optimization, while the Refly code showcases its simplicity and integration with a broader AI platform. insanely-fast-whisper may offer more control over transcription parameters, but Refly provides a more straightforward API for quick implementation.
README
Refly.AI
The AI-Native Creation Engine
Refly is an open-source AI-native creation engine powered by 13+ leading AI models. Its intuitive free-form canvas interface integrates multi-threaded conversations, multimodal inputs (text/images/files), RAG retrieval process, browser extension web clipper, contextual memory, AI document editing capabilities, code artifact generation (HTML/SVG/Mermaid/React), and website visualization engine, empowering you to effortlessly transform ideas into complete works with interactive visualizations and web applications.
v0.4.2 Released! Now supporting canvas templates and document tables.
Refly Cloud · Self-hosting · Forum · Discord · Twitter · Documentation
Quick Start
Before installing ReflyAI, ensure your machine meets these minimum system requirements:
CPU >= 2 cores
Memory >= 4GB
Self-deploy with Docker
Deploy your own feature-rich, unlimited version of ReflyAI using Docker. Our team is working hard to keep up with the latest versions.
To start deployment:
cd deploy/docker
cp ../../apps/api/.env.example .env # copy the example api env file
docker compose up -d
For the next steps, see the Self-deploy Guide. For core deployment tutorials, environment variable configuration, and FAQs, refer to the Deployment Guide.
Local Development
View details in CONTRIBUTING.
Featured Showcases
Creative Canvas
Project | Description | Preview |
---|---|---|
Build Card Library CATxPAPA in 3 Days | Complete a high-precision card visual asset library in 72 hours, creating an industry benchmark with PAPA Lab | (image) |
Virtual Character Script Generator | Dynamic difficulty adjustment system based on a knowledge graph, covering 200+ core K12 knowledge points | (image) |
Understanding Large Models with 3D Visualization | Interactive visualization analysis supporting architectures like Transformer, with parameter-level neuron activity tracking | (image) |
Featured Artifacts
Project | Description | Preview |
---|---|---|
AI Teaching Assistant | Say goodbye to tedious manual organization: AI intelligently builds a course knowledge framework to improve teaching efficiency | (image) |
Interactive Math Tutoring | Learning through play: AI-driven interactive Q&A helps children love math through games and improve their grades | (image) |
One-Click Webpage Clone | No coding needed: quickly clone webpages by entering links and efficiently build event landing pages | (image) |
Key Features
1. Multi-threaded Conversation System
Built on an innovative multi-threaded architecture that enables parallel management of independent conversation contexts. Implements complex Agentic Workflows through efficient state management and context switching mechanisms, transcending traditional dialogue model limitations.
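As a rough illustration of the idea (a sketch, not Refly's actual implementation), parallel conversation contexts can be modeled as isolated message histories with explicit context switching:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationThread:
    """One independent dialogue context with its own history."""
    thread_id: str
    messages: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})

class ThreadManager:
    """Keeps parallel threads isolated; switching never mixes contexts."""
    def __init__(self):
        self.threads = {}
        self.active = None

    def switch(self, thread_id: str) -> ConversationThread:
        # Create the thread on first use, then make it the active context.
        if thread_id not in self.threads:
            self.threads[thread_id] = ConversationThread(thread_id)
        self.active = self.threads[thread_id]
        return self.active

mgr = ThreadManager()
mgr.switch("research").add("user", "Summarize the paper")
mgr.switch("coding").add("user", "Write a parser")
mgr.switch("research")  # back to the first context, history intact
print(len(mgr.active.messages))  # → 1
```

The point of the sketch is the isolation property: each thread accumulates its own messages, and switching contexts is a pure lookup rather than a reset.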
2. Multi-model Integration Framework
- Integration with 13+ leading language models, including DeepSeek R1, Claude 3.5 Sonnet, Google Gemini 2.0, and OpenAI O3-mini
- Support for model hybrid scheduling and parallel processing
- Flexible model switching mechanism with unified conversation interface
- Multi-model knowledge base collaboration
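The flexible model-switching mechanism behind a unified conversation interface can be sketched as a dispatcher with one `generate` entry point; the backend functions below are stand-in stubs (not real provider calls), and the registry keys reuse model names from the list above:

```python
# Stand-in backends; a real integration would call each provider's API.
def call_deepseek_r1(prompt: str) -> str:
    return f"[deepseek-r1] {prompt}"

def call_claude_sonnet(prompt: str) -> str:
    return f"[claude-3.5-sonnet] {prompt}"

MODELS = {
    "deepseek-r1": call_deepseek_r1,
    "claude-3.5-sonnet": call_claude_sonnet,
}

def generate(prompt: str, model: str = "deepseek-r1") -> str:
    """Unified entry point: switching models changes only the `model` arg."""
    try:
        backend = MODELS[model]
    except KeyError:
        raise ValueError(f"unknown model: {model}")
    return backend(prompt)

print(generate("hello", model="claude-3.5-sonnet"))  # → [claude-3.5-sonnet] hello
```

Because callers only ever touch `generate`, adding a new model is a one-line registry change, which is what makes hybrid scheduling across backends tractable.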
3. Multimodal Processing Capabilities
- File Format Support: 7+ formats including PDF, DOCX, RTF, TXT, MD, HTML, EPUB
- Image Processing: Support for mainstream formats including PNG, JPG, JPEG, BMP, GIF, SVG, WEBP
- Intelligent Batch Processing: Canvas multi-element selection and AI analysis
4. AI-Powered Skill System
Integrating advanced capabilities from Perplexity AI, Stanford Storm, and more:
- Intelligent web-wide search and information aggregation
- Vector database-based knowledge retrieval
- Smart query rewriting and recommendations
- AI-assisted document generation workflow
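Smart query rewriting can be illustrated with a toy expansion table (the table and its terms are invented for this example); a production system would use an LLM or embedding model to expand the query before retrieval:

```python
# Toy synonym/context table; a real system would derive expansions
# from an LLM or from embedding-space neighbors.
EXPANSIONS = {
    "llm": ["large language model", "transformer"],
    "rag": ["retrieval-augmented generation", "vector search"],
}

def rewrite_query(query: str) -> str:
    """Append known expansions so the retriever matches more documents."""
    terms = query.lower().split()
    extra = [alt for t in terms for alt in EXPANSIONS.get(t, [])]
    return query if not extra else query + " (" + "; ".join(extra) + ")"

print(rewrite_query("rag pipelines"))
# → rag pipelines (retrieval-augmented generation; vector search)
```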
5. Context Management System
- Precise temporary knowledge base construction
- Flexible node selection mechanism
- Multi-dimensional context correlation
- Cursor-like intelligent context understanding
6. Knowledge Base Engine
- Support for multi-source heterogeneous data import
- RAG-based semantic retrieval architecture
- Intelligent knowledge graph construction
- Personalized knowledge space management
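RAG-based semantic retrieval ultimately reduces to ranking documents by embedding similarity. Here is a minimal, self-contained sketch using hand-made three-dimensional "embeddings"; real vectors would come from an embedding model and live in a vector store such as Qdrant (listed under upstream projects):

```python
import math

# Toy document embeddings, invented for the example.
DOCS = {
    "doc-canvas": [0.9, 0.1, 0.0],
    "doc-editor": [0.1, 0.9, 0.1],
    "doc-deploy": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=2):
    """Return the k document ids most similar to the query embedding."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

print(retrieve([1.0, 0.0, 0.1]))  # → ['doc-canvas', 'doc-editor']
```

The retrieved documents would then be stuffed into the model's context window, which is the "retrieval-augmented" half of RAG.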
7. Intelligent Content Capture
- One-click content capture from mainstream platforms (Github, Medium, Wikipedia, Arxiv)
- Intelligent content parsing and structuring
- Automatic knowledge classification and tagging
- Deep knowledge base integration
8. Citation System
- Flexible multi-source content referencing
- Intelligent context correlation
- One-click citation generation
- Reference source tracking
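Reference source tracking and one-click citation generation can be sketched with a small store that maps cited claims back to their sources; the class and method names here are hypothetical, chosen only to illustrate the data flow:

```python
class CitationStore:
    """Tracks which source each cited claim came from (illustrative only)."""
    def __init__(self):
        self.sources = {}    # source_id -> metadata
        self.citations = []  # (claim, source_id) pairs

    def add_source(self, source_id: str, title: str, url: str) -> None:
        self.sources[source_id] = {"title": title, "url": url}

    def cite(self, claim: str, source_id: str) -> None:
        if source_id not in self.sources:
            raise KeyError(f"unknown source: {source_id}")
        self.citations.append((claim, source_id))

    def render(self, claim: str, source_id: str) -> str:
        """One-click citation: the claim plus a numbered reference marker."""
        n = list(self.sources).index(source_id) + 1
        return f"{claim} [{n}]"

store = CitationStore()
store.add_source("s1", "Example Doc", "https://example.com/doc")
store.cite("Refly supports multi-threaded dialogues", "s1")
print(store.render("Refly supports multi-threaded dialogues", "s1"))
# → Refly supports multi-threaded dialogues [1]
```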
9. AI-Enhanced Editor
- Real-time Markdown rendering
- AI-assisted content optimization
- Intelligent content analysis
- Notion-like editing experience
10. Code Artifact Generation
- Generate HTML, SVG, Mermaid diagrams, and React applications
- Smart code structure optimization
- Component-based architecture support
- Real-time code preview and debugging
11. Website Visualization Engine
- Interactive web page rendering and preview
- Complex concept visualization support
- Dynamic SVG and diagram generation
- Responsive design templates
- Real-time website prototyping
- Integration with modern web frameworks
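Dynamic SVG generation, of the kind used for diagrams and visualizations, can be illustrated by building a bar-chart SVG string from a list of numbers (a minimal sketch, not Refly's rendering engine):

```python
def bar_chart_svg(values, width=40, gap=10, height=100):
    """Build a minimal bar-chart SVG string from a list of numbers."""
    peak = max(values)
    bars = []
    for i, v in enumerate(values):
        h = int(height * v / peak)       # scale bar height to tallest value
        x = i * (width + gap)
        y = height - h                   # SVG y-axis grows downward
        bars.append(f'<rect x="{x}" y="{y}" width="{width}" height="{h}"/>')
    total_w = len(values) * (width + gap)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{total_w}" height="{height}">' + "".join(bars) + "</svg>")

svg = bar_chart_svg([3, 7, 5])
print(svg.count("<rect"))  # → 3
```

The same string-assembly idea scales up to richer visualizations: the renderer only needs data and a template, and the browser does the drawing.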
Roadmap
We're continuously improving Refly with exciting new features. For a detailed roadmap, visit our complete roadmap documentation.
- Advanced image, audio, and video generation capabilities
- Cross-modal content transformation tools
- High-performance desktop client with improved resource management
- Enhanced offline capabilities
- Advanced knowledge organization and visualization tools
- Collaborative knowledge base features
- Open standard for third-party plugin development based on MCP
- Plugin marketplace and developer SDK
- Autonomous task completion with minimal supervision
- Multi-agent collaboration systems
- Visual workflow builder for complex AI-powered processes
- Advanced integration capabilities with external systems and API support
- Enhanced security and compliance tools
- Advanced team management and analytics
How to Use?
- Cloud
- We've deployed a Refly Cloud version that allows zero-configuration usage, offering all capabilities of the self-hosted version, including free access to GPT-4o-mini and limited trials of GPT-4o and Claude-3.5-Sonnet. Visit https://refly.ai/ to get started.
- Self-hosting Refly Community Edition
- Get started quickly with our Getting Started Guide to run Refly in your environment. For more detailed references and in-depth instructions, please refer to our documentation.
- Refly for enterprise / organizations
- Please contact us at support@refly.ai for private deployment solutions.
Stay Updated
Star Refly on GitHub to receive instant notifications about new version releases.
Contributing Guidelines
Bug Reports | Feature Requests | Issues/Discussions | ReflyAI Community |
---|---|---|---|
Create Bug Report | Submit Feature Request | View GitHub Discussions | Visit ReflyAI Community |
Something isn't working as expected | Ideas for new features or improvements | Discuss and raise questions | A place to ask questions, learn, and connect with others |
Calling all developers, testers, tech writers and more! Contributions of all types are more than welcome, please check our CONTRIBUTING.md and feel free to browse our GitHub issues to show us what you can do.
For bug reports, feature requests, and other suggestions, you can also create a new issue and choose the most appropriate template to provide feedback.
If you have any questions, feel free to reach out to us. One of the best places to get more information and learn is the ReflyAI Community, where you can connect with other like-minded individuals.
Community and Contact
- GitHub Discussion: Best for sharing feedback and asking questions.
- GitHub Issues: Best for reporting bugs and suggesting features when using ReflyAI. Please refer to our contribution guidelines.
- Discord: Best for sharing your applications and interacting with the community.
- X (Twitter): Best for sharing your applications and staying connected with the community.
Upstream Projects
We would also like to thank the following open-source projects that make ReflyAI possible:
- LangChain - Library for building AI applications.
- ReactFlow - Library for building visual workflows.
- Tiptap - Library for building collaborative editors.
- Ant Design - UI library.
- yjs - Provides CRDT foundation for our state management and data sync implementation.
- React - Library for web and native user interfaces.
- NestJS - Library for building Node.js servers.
- Zustand - Primitive and flexible state management for React.
- Vite - Next generation frontend tooling.
- TailwindCSS - CSS library for writing beautiful styles.
- Tanstack Query - Library for frontend request handling.
- Radix-UI - Library for building accessible React UI.
- Elasticsearch - Search and analytics engine powering full-text search.
- Qdrant - Vector database powering semantic vector search.
- Resend - Email API for sending transactional email.
- Other upstream dependencies.
We are deeply grateful to the community for providing such powerful yet simple libraries that allow us to focus more on implementing product logic. We hope that our project will also provide an easier-to-use AI Native content creation engine for everyone in the future.
Security Issues
To protect your privacy, please avoid posting security-related issues on GitHub. Instead, send your questions to support@refly.ai, and we will provide you with a more detailed response.
License
This repository is licensed under the ReflyAI Open Source License, which is essentially the Apache 2.0 License with some additional restrictions.