refly-ai/refly

🎨 Refly is an open-source AI-native creation engine. Its intuitive free-form canvas interface combines multi-threaded dialogues, artifacts, AI knowledge base integration, a Chrome extension for clipping and saving web content, contextual memory, intelligent search, a WYSIWYG AI editor, and more, empowering you to effortlessly transform ideas into production-ready content.


Top Related Projects

  • Robust Speech Recognition via Large-Scale Weak Supervision
  • Port of OpenAI's Whisper model in C/C++
  • High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
  • WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
  • Faster Whisper transcription with CTranslate2

Quick Overview

Refly is an open-source AI-powered code review tool designed to enhance the code review process. It leverages large language models to provide automated code suggestions, identify potential issues, and offer explanations for complex code segments, aiming to improve code quality and developer productivity.

Pros

  • Automated code review process, saving time for developers
  • Utilizes advanced AI models to provide intelligent suggestions and explanations
  • Integrates seamlessly with popular version control systems like GitHub
  • Supports multiple programming languages and frameworks

Cons

  • May require fine-tuning or customization for specific project needs
  • Potential for false positives or missed issues in complex codebases
  • Relies on external AI services, which may raise privacy concerns for some organizations
  • Learning curve for developers to effectively use and interpret AI-generated suggestions

Code Examples

# Initialize Refly client
from refly import ReflyClient

client = ReflyClient(api_key="your_api_key")

# Submit code for review
review = client.review_code(
    code="def add(a, b):\n    return a + b",
    language="python"
)

# Print review suggestions
for suggestion in review.suggestions:
    print(f"Line {suggestion.line}: {suggestion.message}")
# Get explanation for a code snippet
explanation = client.explain_code(
    code="lambda x: x**2 + 2*x + 1",
    language="python"
)

print(explanation.text)

# Analyze code complexity
complexity = client.analyze_complexity(
    code="def factorial(n):\n    return 1 if n == 0 else n * factorial(n-1)",
    language="python"
)

print(f"Cyclomatic complexity: {complexity.cyclomatic}")
print(f"Cognitive complexity: {complexity.cognitive}")

Getting Started

To get started with Refly, follow these steps:

  1. Install the Refly library:

    pip install refly
    
  2. Set up your API key as an environment variable:

    export REFLY_API_KEY=your_api_key
    
  3. Create a new Python file and import the Refly client:

    from refly import ReflyClient
    
    client = ReflyClient()
    
    # Use Refly features here
    
  4. Start using Refly's features in your code review process or integrate it into your CI/CD pipeline for automated code analysis.
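
As a minimal sketch of the CI/CD integration mentioned in step 4, the script below reviews the Python files changed on a branch and fails the build if any suggestions come back. It reuses the hypothetical ReflyClient API from the examples above; the helper names and the exit-code policy are illustrative, not part of an official integration.

# ci_review.py - illustrative sketch building on the hypothetical ReflyClient API above
import os
import subprocess
import sys

from refly import ReflyClient


def changed_python_files(base_ref: str = "origin/main") -> list[str]:
    """Return the Python files that differ from the base branch."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", base_ref, "--", "*.py"],
        capture_output=True, text=True, check=True,
    )
    return [path for path in diff.stdout.splitlines() if path.strip()]


def main() -> int:
    client = ReflyClient(api_key=os.environ["REFLY_API_KEY"])
    findings = 0
    for path in changed_python_files():
        with open(path, encoding="utf-8") as source:
            review = client.review_code(code=source.read(), language="python")
        for suggestion in review.suggestions:
            findings += 1
            print(f"{path}:{suggestion.line}: {suggestion.message}")
    # Non-zero exit fails the pipeline when suggestions were raised; adjust as needed.
    return 1 if findings else 0


if __name__ == "__main__":
    sys.exit(main())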

Competitor Comparisons


Robust Speech Recognition via Large-Scale Weak Supervision

Pros of Whisper

  • Highly accurate speech recognition across multiple languages
  • Open-source with extensive documentation and community support
  • Capable of handling various audio formats and noisy environments

Cons of Whisper

  • Requires significant computational resources for optimal performance
  • Limited real-time processing capabilities due to model size
  • Primarily focused on speech-to-text, lacking additional NLP features

Code Comparison

Whisper:

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])

Refly:

from refly import Refly

refly = Refly(api_key="your_api_key")
response = refly.generate(prompt="Summarize this text:")
print(response.text)

Key Differences

  • Whisper specializes in speech recognition, while Refly focuses on text generation and summarization
  • Whisper is a standalone model, whereas Refly is an API-based service
  • Whisper processes audio files, while Refly works with text input
  • Whisper is open-source and locally deployable, Refly requires an API key and cloud access

Use Cases

  • Whisper: Transcription, subtitling, voice command systems
  • Refly: Content creation, text summarization, language translation

Both tools serve different purposes in the AI ecosystem, with Whisper excelling in speech-to-text tasks and Refly offering broader text-based AI capabilities.

Port of OpenAI's Whisper model in C/C++

Pros of whisper.cpp

  • Lightweight C++ implementation, offering better performance and lower resource usage
  • Supports various platforms including mobile and embedded systems
  • Provides real-time audio transcription capabilities

Cons of whisper.cpp

  • Limited to speech recognition and transcription tasks
  • Requires more technical expertise to integrate and use effectively
  • Less comprehensive feature set compared to Refly's AI-powered writing assistant

Code Comparison

whisper.cpp:

#include "whisper.h"

int main(int argc, char ** argv) {
    struct whisper_context * ctx = whisper_init_from_file("model.bin");
    whisper_full_default(ctx, params, pcmf32.data(), pcmf32.size());
    whisper_print_timings(ctx);
    whisper_free(ctx);
}

Refly:

from refly import Refly

refly = Refly(api_key="your_api_key")
response = refly.generate(prompt="Write a blog post about AI")
print(response.text)

Summary

whisper.cpp is a specialized speech recognition library focused on performance and portability, while Refly is a more comprehensive AI-powered writing assistant. whisper.cpp excels in lightweight, real-time transcription tasks across various platforms, but requires more technical expertise. Refly offers a broader range of writing-related features with easier integration, but may have higher resource requirements and less flexibility for speech recognition tasks.


High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model

Pros of Whisper

  • Focuses on efficient speech recognition using GPU acceleration
  • Implements the Whisper model in C++ for improved performance
  • Provides a command-line interface for easy use

Cons of Whisper

  • Limited to speech recognition tasks only
  • Requires specific hardware (NVIDIA GPU) for optimal performance
  • Less versatile compared to Refly's broader AI capabilities

Code Comparison

Whisper (C++):

void CTranscribeTask::Process()
{
    const auto& mel = m_context.melSpectrogram();
    const auto& model = m_context.model();
    model.runEncoder( mel, m_encoderBegin, m_encoderEnd );
    // ... (additional processing code)
}

Refly (Python):

from refly import Refly

refly = Refly(api_key="your_api_key")
response = refly.transcribe("audio.mp3")
print(response.text)

Summary

Whisper is a specialized C++ implementation of the Whisper speech recognition model, optimized for GPU acceleration. It offers high performance but is limited to speech recognition tasks and requires specific hardware.

Refly, on the other hand, is a more versatile AI platform that includes speech recognition among other capabilities. It's implemented in Python, making it more accessible for general use but potentially less optimized for specific hardware.

The code comparison shows the difference in implementation languages and approaches, with Whisper using low-level C++ for performance and Refly using high-level Python for flexibility.


WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

Pros of WhisperX

  • Focuses on speech recognition and transcription with advanced features like word-level timestamps
  • Offers multi-language support and speaker diarization
  • Provides a command-line interface for easy usage

Cons of WhisperX

  • Limited to audio processing and transcription tasks
  • May require more computational resources for advanced features

Code Comparison

WhisperX:

import whisperx

model = whisperx.load_model("large-v2")
result = model.transcribe("audio.mp3")
print(result["text"])

Refly:

from refly import Refly

refly = Refly(api_key="your_api_key")
response = refly.complete("Your prompt here")
print(response.choices[0].text)

Summary

WhisperX specializes in audio transcription and speech recognition, offering advanced features like word-level timestamps and speaker diarization. It's well-suited for projects requiring accurate audio processing. Refly, on the other hand, is a more general-purpose AI tool focused on text generation and completion tasks. While WhisperX excels in audio-related tasks, Refly offers broader applicability for various text-based AI applications.

Faster Whisper transcription with CTranslate2

Pros of faster-whisper

  • Optimized for speed, offering faster transcription than the original Whisper model
  • Supports multiple languages and can perform language detection
  • Provides flexible API for various use cases, including streaming audio

Cons of faster-whisper

  • Focused solely on speech recognition, lacking additional AI capabilities
  • Requires more setup and configuration compared to Refly's all-in-one solution
  • May have higher computational requirements for optimal performance

Code Comparison

faster-whisper:

from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5)

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

Refly:

from refly import Refly

refly = Refly(api_key="your_api_key")
response = refly.transcribe("audio.mp3")

print(response.text)

Summary

faster-whisper excels in speech recognition performance and flexibility, while Refly offers a more comprehensive AI toolkit with simpler integration. faster-whisper may be preferred for specialized speech-to-text tasks, whereas Refly provides a broader range of AI capabilities in a user-friendly package.

Pros of insanely-fast-whisper

  • Focuses specifically on optimizing Whisper for speed, potentially offering faster transcription
  • Provides detailed benchmarks and comparisons with other Whisper implementations
  • Offers a simple command-line interface for easy use

Cons of insanely-fast-whisper

  • Limited to Whisper functionality, while refly offers a broader range of AI-powered features
  • May require more technical expertise to set up and use effectively
  • Less integrated with other tools and services compared to refly's ecosystem

Code Comparison

insanely-fast-whisper

# insanely-fast-whisper is typically run from its command-line interface
# (exact flags may vary between versions)
insanely-fast-whisper --file-name audio.mp3

refly

from refly import Refly

refly = Refly(api_key="your_api_key")
transcript = refly.transcribe("audio.mp3")

The insanely-fast-whisper code demonstrates its focus on Whisper optimization, while the refly code showcases its simplicity and integration with a broader AI platform. insanely-fast-whisper may offer more control over transcription parameters, but refly provides a more straightforward API for quick implementation.

README


Refly.AI
⭐️ The AI Native Creation Engine ⭐️

Refly is an open-source AI-native creation engine powered by 13+ leading AI models. Its intuitive free-form canvas interface integrates multi-threaded conversations, multimodal inputs (text/images/files), RAG retrieval process, browser extension web clipper, contextual memory, AI document editing capabilities, code artifact generation (HTML/SVG/Mermaid/React), and website visualization engine, empowering you to effortlessly transform ideas into complete works with interactive visualizations and web applications.

🚀 v0.4.2 Released! Now supporting canvas templates and document tables ⚡️

Refly Cloud · Self-hosting · Forum · Discord · Twitter · Documentation


README in English · README in Simplified Chinese (简体中文)

Quick Start

Before installing ReflyAI, ensure your machine meets these minimum system requirements:

  • CPU >= 2 cores
  • Memory >= 4GB

Self-deploy with Docker

Deploy your own feature-rich, unlimited version of ReflyAI using Docker. Our team is working hard to keep up with the latest versions.

To start deployment:

cd deploy/docker
cp ../../apps/api/.env.example .env # copy the example api env file
docker compose up -d

For the next steps, see the Self-deploy Guide for more details.

For core deployment tutorials, environment variable configuration, and FAQs, please refer to 👉 Deployment Guide.

Local Development

View details in CONTRIBUTING.

🌟 Featured Showcases

🎨 Creative Canvas

  • 🧠 Build Card Library CATxPAPA in 3 Days: Complete a high-precision card visual asset library in 72 hours, creating an industry benchmark with PAPA Lab
  • 🎮 Virtual Character Script Generator: Dynamic difficulty adjustment system based on a knowledge graph, covering 200+ core K12 knowledge points
  • 🔍 Understanding Large Models with 3D Visualization: Interactive visualization analysis supporting architectures like Transformer, with parameter-level neuron activity tracking

👉 Explore More Use Cases

🚀 Featured Artifacts

  • 📊 AI Teaching Assistant: Say goodbye to tedious manual organization; AI intelligently builds a course knowledge framework to improve teaching efficiency
  • 🎯 Interactive Math Tutoring: Learning through play; AI-driven interactive Q&A helps children love math through games and improve their grades
  • 🌐 One-Click Webpage Clone: No coding needed; quickly clone webpages by entering links to efficiently build event landing pages

👉 Explore More Artifacts

✨ Key Features

1 🧵 Multi-threaded Conversation System

Built on an innovative multi-threaded architecture that enables parallel management of independent conversation contexts. Implements complex Agentic Workflows through efficient state management and context switching mechanisms, transcending traditional dialogue model limitations.
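
As a purely conceptual sketch (not Refly's internal implementation), the idea of parallel, isolated conversation contexts with explicit switching can be pictured as follows; all class and field names here are hypothetical:

# Conceptual sketch of isolated conversation threads with context switching;
# names and structures are illustrative, not Refly's actual code.
from dataclasses import dataclass, field


@dataclass
class ConversationThread:
    """One independent dialogue with its own history and working context."""
    thread_id: str
    messages: list = field(default_factory=list)
    context: dict = field(default_factory=dict)  # e.g. selected canvas nodes

    def add_user_message(self, text: str) -> None:
        self.messages.append({"role": "user", "content": text})


class ConversationManager:
    """Keeps threads isolated and switches the active context between them."""

    def __init__(self) -> None:
        self.threads: dict = {}
        self.active_id = None

    def create(self, thread_id: str) -> ConversationThread:
        thread = ConversationThread(thread_id)
        self.threads[thread_id] = thread
        return thread

    def switch(self, thread_id: str) -> ConversationThread:
        # Switching only changes which history is used next;
        # every other thread keeps its state untouched.
        self.active_id = thread_id
        return self.threads[thread_id]


# Two parallel threads that never share history:
manager = ConversationManager()
manager.create("outline").add_user_message("Draft a blog outline")
manager.create("research").add_user_message("Summarize these three papers")
active = manager.switch("outline")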

2 🤖 Multi-model Integration Framework

  • Integration with 13+ leading language models, including DeepSeek R1, Claude 3.5 Sonnet, Google Gemini 2.0, and OpenAI O3-mini
  • Support for model hybrid scheduling and parallel processing
  • Flexible model switching mechanism with unified conversation interface
  • Multi-model knowledge base collaboration
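
To illustrate what a unified conversation interface with switchable backends can look like, here is a hedged sketch; the adapter classes, router, and method names below are hypothetical stand-ins and do not reflect Refly's actual code or any provider SDK.

# Hypothetical sketch of a unified multi-model interface with runtime switching;
# provider classes are stubs and do not call any real SDK.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Common interface so a conversation does not depend on one provider."""

    @abstractmethod
    def complete(self, messages: list) -> str:
        ...


class StubOpenAIModel(ChatModel):
    def complete(self, messages: list) -> str:
        return "(call an OpenAI-compatible endpoint here)"


class StubClaudeModel(ChatModel):
    def complete(self, messages: list) -> str:
        return "(call an Anthropic-compatible endpoint here)"


class ModelRouter:
    """Switch models mid-conversation while reusing the same message history."""

    def __init__(self, models: dict, default: str) -> None:
        self.models = models
        self.current = default

    def switch(self, name: str) -> None:
        self.current = name

    def complete(self, messages: list) -> str:
        return self.models[self.current].complete(messages)


router = ModelRouter({"gpt": StubOpenAIModel(), "claude": StubClaudeModel()}, default="gpt")
history = [{"role": "user", "content": "Compare these two drafts"}]
first = router.complete(history)   # answered by the default backend
router.switch("claude")
second = router.complete(history)  # same history, different backend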

3 🎨 Multimodal Processing Capabilities

  • File Format Support: 7+ formats including PDF, DOCX, RTF, TXT, MD, HTML, EPUB
  • Image Processing: Support for mainstream formats including PNG, JPG, JPEG, BMP, GIF, SVG, WEBP
  • Intelligent Batch Processing: Canvas multi-element selection and AI analysis

4 ⚡️ AI-Powered Skill System

Integrating advanced capabilities from Perplexity AI, Stanford Storm, and more:

  • Intelligent web-wide search and information aggregation
  • Vector database-based knowledge retrieval
  • Smart query rewriting and recommendations
  • AI-assisted document generation workflow
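
The retrieval flow described above (query rewriting followed by vector-based search) can be sketched in a few lines; the toy bag-of-words "embedding" and the rewrite step below are deliberate simplifications standing in for a real embedding model and a vector store such as Qdrant.

# Toy sketch of query rewriting + vector retrieval; a real deployment would use
# an embedding model and a vector database instead of bag-of-words counting.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Stand-in for a real embedding model: a bag-of-words vector."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[token] * b[token] for token in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def rewrite_query(query: str) -> str:
    """Placeholder for LLM-based query rewriting (expansion, disambiguation)."""
    return query + " refly canvas knowledge base"


documents = {
    "doc-1": "Refly canvas supports multi-threaded conversations",
    "doc-2": "The knowledge base uses RAG-based semantic retrieval",
    "doc-3": "The browser extension clips web pages into the knowledge base",
}


def retrieve(query: str, k: int = 2) -> list:
    query_vector = embed(rewrite_query(query))
    ranked = sorted(documents, key=lambda d: cosine(query_vector, embed(documents[d])), reverse=True)
    return ranked[:k]


print(retrieve("how does retrieval work"))  # -> the two most similar documents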

5 🔍 Context Management System

  • Precise temporary knowledge base construction
  • Flexible node selection mechanism
  • Multi-dimensional context correlation
  • Cursor-like intelligent context understanding

6 📚 Knowledge Base Engine

  • Support for multi-source heterogeneous data import
  • RAG-based semantic retrieval architecture
  • Intelligent knowledge graph construction
  • Personalized knowledge space management

7 ✂️ Intelligent Content Capture

  • One-click content capture from mainstream platforms (Github, Medium, Wikipedia, Arxiv)
  • Intelligent content parsing and structuring
  • Automatic knowledge classification and tagging
  • Deep knowledge base integration

8 📌 Citation System

  • Flexible multi-source content referencing
  • Intelligent context correlation
  • One-click citation generation
  • Reference source tracking

9 ✍️ AI-Enhanced Editor

  • Real-time Markdown rendering
  • AI-assisted content optimization
  • Intelligent content analysis
  • Notion-like editing experience

10 🎨 Code Artifact Generation

  • Generate HTML, SVG, Mermaid diagrams, and React applications
  • Smart code structure optimization
  • Component-based architecture support
  • Real-time code preview and debugging

11 🌐 Website Visualization Engine

  • Interactive web page rendering and preview
  • Complex concept visualization support
  • Dynamic SVG and diagram generation
  • Responsive design templates
  • Real-time website prototyping
  • Integration with modern web frameworks

🛣️ Roadmap

We're continuously improving Refly with exciting new features. For a detailed roadmap, visit our complete roadmap documentation.

  • 🎨 Advanced image, audio, and video generation capabilities
  • 🎨 Cross-modal content transformation tools
  • 💻 High-performance desktop client with improved resource management
  • 💻 Enhanced offline capabilities
  • 📚 Advanced knowledge organization and visualization tools
  • 📚 Collaborative knowledge base features
  • 🔌 Open standard for third-party plugin development based on MCP
  • 🔌 Plugin marketplace and developer SDK
  • 🤖 Autonomous task completion with minimal supervision
  • 🤖 Multi-agent collaboration systems
  • ⚡️ Visual workflow builder for complex AI-powered processes
  • ⚡️ Advanced integration capabilities with external systems and API support
  • 🔒 Enhanced security and compliance tools
  • 🔒 Advanced team management and analytics

How to Use?

  • Cloud
    • We've deployed a Refly Cloud version that allows zero-configuration usage, offering all capabilities of the self-hosted version, including free access to GPT-4o-mini and limited trials of GPT-4o and Claude-3.5-Sonnet. Visit https://refly.ai/ to get started.
  • Self-hosting Refly Community Edition
    • Get started quickly with our Getting Started Guide to run Refly in your environment. For more detailed references and in-depth instructions, please refer to our documentation.
  • Refly for enterprise / organizations

Stay Updated

Star Refly on GitHub to receive instant notifications about new version releases.

stay-tuned

Contributing Guidelines

  • Bug Reports (Create Bug Report): Something isn't working as expected
  • Feature Requests (Submit Feature Request): Ideas for new features or improvements
  • Issues/Discussions (View GitHub Discussions): Discuss and raise questions
  • ReflyAI Community (Visit ReflyAI Community): A place to ask questions, learn, and connect with others

Calling all developers, testers, tech writers and more! Contributions of all types are more than welcome; please check our CONTRIBUTING.md and feel free to browse our GitHub issues to show us what you can do.

For bug reports, feature requests, and other suggestions, you can also create a new issue and choose the most appropriate template to provide feedback.

If you have any questions, feel free to reach out to us. One of the best places to get more information and learn is the ReflyAI Community, where you can connect with other like-minded individuals.

Community and Contact

  • GitHub Discussion: Best for sharing feedback and asking questions.
  • GitHub Issues: Best for reporting bugs and suggesting features when using ReflyAI. Please refer to our contribution guidelines.
  • Discord: Best for sharing your applications and interacting with the community.
  • X (Twitter): Best for sharing your applications and staying connected with the community.

Upstream Projects

We would also like to thank the following open-source projects that make ReflyAI possible:

  1. LangChain - Library for building AI applications.
  2. ReactFlow - Library for building visual workflows.
  3. Tiptap - Library for building collaborative editors.
  4. Ant Design - UI library.
  5. yjs - Provides CRDT foundation for our state management and data sync implementation.
  6. React - Library for web and native user interfaces.
  7. NestJS - Library for building Node.js servers.
  8. Zustand - Primitive and flexible state management for React.
  9. Vite - Next generation frontend tooling.
  10. TailwindCSS - CSS library for writing beautiful styles.
  11. Tanstack Query - Library for frontend request handling.
  12. Radix-UI - Library for building accessible React UI.
  13. Elasticsearch - Library for building search functionality.
  14. QDrant - Library for building vector search functionality.
  15. Resend - Library for building email sending functionality.
  16. Other upstream dependencies.

We are deeply grateful to the community for providing such powerful yet simple libraries that allow us to focus more on implementing product logic. We hope that our project will also provide an easier-to-use AI Native content creation engine for everyone in the future.

Security Issues

To protect your privacy, please avoid posting security-related issues on GitHub. Instead, send your questions to support@refly.ai, and we will provide you with a more detailed response.

License

This repository is licensed under the ReflyAI Open Source License, which is essentially the Apache 2.0 License with some additional restrictions.