
Canner / WrenAI

⚡️ GenBI (Generative BI): query any database in natural language and get accurate SQL (Text-to-SQL), charts (Text-to-Chart), and AI-powered insights in seconds.


Top Related Projects

  • Robust Speech Recognition via Large-Scale Weak Supervision (85,961 ⭐)
  • Port of OpenAI's Whisper model in C/C++
  • WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization) (16,462 ⭐)
  • Faster Whisper transcription with CTranslate2
  • High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model (9,434 ⭐)

Quick Overview

WrenAI is an open-source AI assistant framework designed to help developers create custom AI assistants. It provides a flexible and extensible platform for building conversational AI applications, leveraging large language models and other AI technologies.

Pros

  • Highly customizable and extensible architecture
  • Supports multiple AI models and integrations
  • Easy to deploy and scale
  • Active community and ongoing development

Cons

  • Limited documentation for advanced features
  • Steeper learning curve compared to some simpler chatbot frameworks
  • Requires knowledge of AI concepts and language models
  • May have higher computational requirements for complex assistants

Code Examples

Here are a few code examples demonstrating basic usage of WrenAI:

  1. Creating a simple assistant:
from wrenai import Assistant

assistant = Assistant("My Assistant")
assistant.add_skill("greeting", "Say hello to the user")
assistant.add_skill("weather", "Provide weather information")

response = assistant.process("Hello, what's the weather like today?")
print(response)
  2. Adding a custom skill:
from wrenai import Skill

class WeatherSkill(Skill):
    def execute(self, input_text):
        # Implement weather fetching logic here
        return "It's sunny and 25°C today."

# Register the custom skill with the assistant created in the first example
assistant.add_skill(WeatherSkill())
  3. Using a specific AI model:
from wrenai import Assistant, GPT3Model

model = GPT3Model(api_key="your-api-key")
assistant = Assistant("GPT-3 Assistant", model=model)

response = assistant.process("Explain quantum computing")
print(response)

Getting Started

To get started with WrenAI, follow these steps:

  1. Install WrenAI:
pip install wrenai
  2. Create a basic assistant:
from wrenai import Assistant

assistant = Assistant("My First Assistant")
assistant.add_skill("greeting", "Greet the user")
assistant.add_skill("farewell", "Say goodbye to the user")

while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break
    response = assistant.process(user_input)
    print(f"Assistant: {response}")
  3. Run your assistant and start interacting with it!

For more advanced usage and customization options, refer to the WrenAI documentation.

Competitor Comparisons

Whisper — Robust Speech Recognition via Large-Scale Weak Supervision (85,961 ⭐)

Pros of Whisper

  • Highly accurate speech recognition across multiple languages
  • Open-source with extensive documentation and community support
  • Robust performance on diverse audio inputs, including noisy environments

Cons of Whisper

  • Requires significant computational resources for real-time transcription
  • Large model size may be challenging for deployment on resource-constrained devices
  • Limited customization options for specific domain adaptations

Code Comparison

WrenAI:

from wren import Wren

wren = Wren()
result = wren.transcribe("audio.wav")
print(result.text)

Whisper:

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.wav")
print(result["text"])

Key Differences

  • WrenAI focuses on lightweight, efficient speech recognition, while Whisper prioritizes accuracy and multilingual support
  • WrenAI is designed for edge devices and real-time applications, whereas Whisper is better suited for server-side processing
  • Whisper offers more comprehensive language support and advanced features, while WrenAI emphasizes simplicity and ease of integration

Use Cases

  • Choose WrenAI for mobile apps, IoT devices, or scenarios requiring low-latency transcription
  • Opt for Whisper when accuracy is paramount, dealing with multiple languages, or processing large volumes of audio data

Port of OpenAI's Whisper model in C/C++

Pros of whisper.cpp

  • Highly optimized C++ implementation for efficient speech recognition
  • Supports various quantization levels for reduced memory usage
  • Provides both command-line and library interfaces for flexibility

Cons of whisper.cpp

  • Limited to Whisper model, while WrenAI supports multiple models
  • Requires more technical expertise to set up and use effectively
  • Less focus on user-friendly interfaces compared to WrenAI

Code Comparison

whisper.cpp:

#include "whisper.h"

int main(int argc, char ** argv) {
    // Load the model, then transcribe pcmf32 (a std::vector<float> of PCM
    // audio samples loaded elsewhere, e.g. with a WAV reader).
    struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");
    struct whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    whisper_full(ctx, wparams, pcmf32.data(), pcmf32.size());
    whisper_print_timings(ctx);
    whisper_free(ctx);
    return 0;
}

WrenAI:

from wren import Wren

wren = Wren()
result = wren.transcribe("audio.mp3")
print(result.text)

The code comparison showcases the difference in complexity and ease of use between the two libraries. whisper.cpp requires more setup and C++ knowledge, while WrenAI offers a simpler Python interface for quick transcription tasks.

WhisperX — Automatic Speech Recognition with Word-level Timestamps (& Diarization) (16,462 ⭐)

Pros of WhisperX

  • Offers advanced features like word-level timestamps and speaker diarization
  • Supports multiple languages and provides language detection
  • Actively maintained with frequent updates and improvements

Cons of WhisperX

  • Requires more computational resources due to its advanced features
  • May have a steeper learning curve for beginners
  • Limited to audio transcription and diarization tasks

Code Comparison

WhisperX:

import whisperx

# Load the model on GPU; use device="cpu" (with a smaller compute type) if no GPU is available
model = whisperx.load_model("large-v2", device="cuda")
result = model.transcribe("audio.mp3")
print(result["segments"])

WrenAI:

from wren import Wren

wren = Wren()
response = wren.generate("Summarize this text: ...")
print(response)

Key Differences

WhisperX focuses on audio transcription and diarization, while WrenAI is a more general-purpose AI assistant. WhisperX provides detailed audio analysis, including word-level timestamps and speaker identification. WrenAI, on the other hand, offers a broader range of capabilities, including text generation, summarization, and potentially other AI-powered tasks.

The choice between these repositories depends on the specific use case. For audio-related tasks, WhisperX would be more suitable, while WrenAI might be preferable for general AI assistance and text-based tasks.

Faster Whisper transcription with CTranslate2

Pros of faster-whisper

  • Optimized for speed, offering faster transcription than standard Whisper models
  • Supports multiple languages and can perform translation
  • Provides streaming capabilities for real-time transcription

Cons of faster-whisper

  • Focused solely on speech-to-text, lacking additional AI capabilities
  • May require more computational resources due to its optimization for speed
  • Limited customization options compared to more general-purpose AI frameworks

Code Comparison

faster-whisper:

from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5)

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

WrenAI:

from wren import Wren

wren = Wren()
response = wren.chat("Transcribe the following audio file: audio.mp3")
print(response)

Summary

faster-whisper excels in speech-to-text tasks with its speed-optimized approach and multi-language support. However, it's limited to transcription tasks. WrenAI, while not specifically optimized for transcription, offers a more versatile AI platform that can handle various tasks through natural language interactions. The choice between the two depends on whether you need specialized transcription capabilities or a more general-purpose AI assistant.

Whisper — High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model (9,434 ⭐)

Pros of Whisper

  • Optimized for performance with GPU acceleration
  • Supports multiple languages for transcription
  • Provides a C++ implementation for better integration with native applications

Cons of Whisper

  • Limited to speech recognition and transcription tasks
  • Requires more setup and configuration compared to WrenAI
  • May have a steeper learning curve for non-technical users

Code Comparison

WrenAI:

from wren import Wren

wren = Wren()
result = wren.transcribe("audio.mp3")
print(result.text)

Whisper:

#include "whisper.h"

// Load the model, pick greedy sampling, set the language, and transcribe
// audio_data (a float PCM buffer of audio_len samples loaded elsewhere).
whisper_context * ctx = whisper_init_from_file("model.bin");
whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
params.language = "en";
whisper_full(ctx, params, audio_data, audio_len);

Summary

Whisper offers high-performance speech recognition with multi-language support and GPU acceleration, making it suitable for more advanced applications. However, it may require more technical expertise to implement and use effectively. WrenAI, on the other hand, provides a simpler interface for transcription tasks, making it more accessible for quick integration into projects, but may lack some of the advanced features and optimizations found in Whisper.


README

Wren AI - Open-Source GenBI Agent

Docs


Wren AI is your GenBI Agent: query any database in natural language and get accurate SQL (Text-to-SQL), charts (Text-to-Chart), and AI-generated insights in seconds. ⚡️
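To make the Text-to-SQL flow concrete, here is a minimal sketch of asking a question programmatically. The endpoint path, payload shape, and response fields below are hypothetical placeholders, not Wren AI's documented API; see the API Docs linked in the feature table below for the real interface.

```python
import requests

# Hypothetical sketch only: endpoint, payload, and response fields are
# placeholders, not the documented Wren AI API schema.
WREN_URL = "http://localhost:3000"  # assumed local Wren AI deployment

question = "What were the top 5 products by revenue last quarter?"
resp = requests.post(f"{WREN_URL}/api/ask", json={"question": question})
resp.raise_for_status()

answer = resp.json()
print(answer.get("sql"))      # generated SQL, e.g. SELECT ... ORDER BY revenue DESC LIMIT 5
print(answer.get("summary"))  # AI-generated natural-language insight
```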

😍 Demos

https://github.com/user-attachments/assets/f9c1cb34-5a95-4580-8890-ec9644da4160

Watch GenBI Demo

🤖 Features

|  | What you get | Why it matters |
| --- | --- | --- |
| Talk to Your Data | Ask in any language → precise SQL & answers | Slash the SQL learning curve |
| GenBI Insights | AI-written summaries, charts & reports | Decision-ready context in one click |
| Semantic Layer | MDL models encode schema, metrics, joins | Keeps LLM outputs accurate & governed |
| Embed via API | Generate queries & charts inside your apps (API Docs) | Build custom agents, SaaS features, chatbots (Streamlit Live Demo) |
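To make the Semantic Layer row above more concrete, here is a loose sketch of what a semantic-layer definition can encode, written as a Python dict. The real MDL format is JSON and its exact field names may differ; the keys below are illustrative, not the authoritative schema.

```python
# Illustrative sketch of a semantic-layer model and join definition.
# Field names are hypothetical approximations, not the official MDL schema.
orders_model = {
    "name": "orders",
    "columns": [
        {"name": "order_id", "type": "INTEGER", "isPrimaryKey": True},
        {"name": "customer_id", "type": "INTEGER"},
        {"name": "amount", "type": "DECIMAL"},
        {"name": "ordered_at", "type": "TIMESTAMP"},
    ],
}

orders_to_customers = {
    "name": "orders_to_customers",
    "models": ["orders", "customers"],
    "joinType": "MANY_TO_ONE",
    "condition": "orders.customer_id = customers.customer_id",
}
```

Because schema, metrics, and join conditions are declared up front, the LLM generates SQL against a governed model instead of guessing relationships from raw column names.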

🤩 Learn more about GenBI

🚀 Getting Started

Using Wren AI is super simple: you can set it up within 3 minutes and start interacting with your data!

🏗️ Architecture

👉 Learn more about our Design

🔌 Data Sources

If your data source is not listed here, vote for it in our GitHub discussion thread. Your input will help us decide which data sources to support next.

  • Athena (Trino)
  • Redshift
  • BigQuery
  • DuckDB
  • PostgreSQL
  • MySQL
  • Microsoft SQL Server
  • ClickHouse
  • Oracle
  • Trino
  • Snowflake

🤖 LLM Models

Wren AI supports integration with various Large Language Models (LLMs), including but not limited to:

  • OpenAI Models
  • Azure OpenAI Models
  • DeepSeek Models
  • Google AI Studio – Gemini Models
  • Vertex AI Models (Gemini + Anthropic)
  • Bedrock Models
  • Anthropic API Models
  • Groq Models
  • Ollama Models
  • Databricks Models

Check configuration examples here!
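For orientation, an LLM provider entry in the configuration generally pairs a provider with one or more model names and credentials. The fragment below is a hypothetical sketch expressed as a Python dict; the actual configuration is YAML and its keys may differ, so follow the linked configuration examples for the supported format.

```python
# Hypothetical sketch of an LLM provider entry; keys and values are
# illustrative placeholders, not the documented configuration schema.
llm_provider = {
    "type": "llm",
    "provider": "openai",        # or azure_openai, ollama, groq, ...
    "models": [
        {
            "model": "gpt-4o",   # prefer the most capable model available (see caution below)
            "api_key_env": "OPENAI_API_KEY",
        }
    ],
}
```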

[!CAUTION] The performance of Wren AI depends significantly on the capabilities of the LLM you choose. We strongly recommend using the most powerful model available for optimal results. Using less capable models may lead to reduced performance, slower response times, or inaccurate outputs.

📚 Documentation

Visit Wren AI documentation to view the full documentation.

📪 Keep Posted

Subscribe to our blog and follow our LinkedIn.

🛠️ Contribution

  1. Star ⭐ the repo to show support (it really helps).
  2. Open an issue for bugs, ideas, or discussions.
  3. Read the Contribution Guidelines for project setup and PR instructions.

⭐️ Community

  • Join 1.3k+ developers in our Discord for real-time help and roadmap previews.
  • If you run into any issues, please visit GitHub Issues.
  • Explore our public roadmap to stay updated on upcoming features and improvements!

Please note that our Code of Conduct applies to all Wren AI community channels. Users are highly encouraged to read and adhere to it to avoid repercussions.

🎉 Our Contributors

⬆️ Back to Top