WrenAI
🤖 Open-source GenBI AI Agent that empowers data-driven teams to chat with their data to generate Text-to-SQL, charts, spreadsheets, reports, and BI. 📈📊📋🧑💻
Top Related Projects
Robust Speech Recognition via Large-Scale Weak Supervision
Port of OpenAI's Whisper model in C/C++
WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
Faster Whisper transcription with CTranslate2
High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
Quick Overview
WrenAI is an open-source AI assistant framework designed to help developers create custom AI assistants. It provides a flexible and extensible platform for building conversational AI applications, leveraging large language models and other AI technologies.
Pros
- Highly customizable and extensible architecture
- Supports multiple AI models and integrations
- Easy to deploy and scale
- Active community and ongoing development
Cons
- Limited documentation for advanced features
- Steeper learning curve compared to some simpler chatbot frameworks
- Requires knowledge of AI concepts and language models
- May have higher computational requirements for complex assistants
Code Examples
Here are a few code examples demonstrating basic usage of WrenAI:
- Creating a simple assistant:
from wrenai import Assistant
assistant = Assistant("My Assistant")
assistant.add_skill("greeting", "Say hello to the user")
assistant.add_skill("weather", "Provide weather information")
response = assistant.process("Hello, what's the weather like today?")
print(response)
- Adding a custom skill:
from wrenai import Skill
class WeatherSkill(Skill):
    def execute(self, input_text):
        # Implement weather fetching logic here
        return "It's sunny and 25°C today."
assistant.add_skill(WeatherSkill())
- Using a specific AI model:
from wrenai import Assistant, GPT3Model
model = GPT3Model(api_key="your-api-key")
assistant = Assistant("GPT-3 Assistant", model=model)
response = assistant.process("Explain quantum computing")
print(response)
Getting Started
To get started with WrenAI, follow these steps:
- Install WrenAI:
pip install wrenai
- Create a basic assistant:
from wrenai import Assistant
assistant = Assistant("My First Assistant")
assistant.add_skill("greeting", "Greet the user")
assistant.add_skill("farewell", "Say goodbye to the user")
while True:
    user_input = input("You: ")
    if user_input.lower() == "exit":
        break
    response = assistant.process(user_input)
    print(f"Assistant: {response}")
- Run your assistant and start interacting with it!
For more advanced usage and customization options, refer to the WrenAI documentation.
Competitor Comparisons
Robust Speech Recognition via Large-Scale Weak Supervision
Pros of Whisper
- Highly accurate speech recognition across multiple languages
- Open-source with extensive documentation and community support
- Robust performance on diverse audio inputs, including noisy environments
Cons of Whisper
- Requires significant computational resources for real-time transcription
- Large model size may be challenging for deployment on resource-constrained devices
- Limited customization options for specific domain adaptations
Code Comparison
WrenAI:
from wren import Wren
wren = Wren()
result = wren.transcribe("audio.wav")
print(result.text)
Whisper:
import whisper
model = whisper.load_model("base")
result = model.transcribe("audio.wav")
print(result["text"])
Key Differences
- WrenAI focuses on lightweight, efficient speech recognition, while Whisper prioritizes accuracy and multilingual support
- WrenAI is designed for edge devices and real-time applications, whereas Whisper is better suited for server-side processing
- Whisper offers more comprehensive language support and advanced features, while WrenAI emphasizes simplicity and ease of integration
Use Cases
- Choose WrenAI for mobile apps, IoT devices, or scenarios requiring low-latency transcription
- Opt for Whisper when accuracy is paramount, dealing with multiple languages, or processing large volumes of audio data
Port of OpenAI's Whisper model in C/C++
Pros of whisper.cpp
- Highly optimized C++ implementation, offering excellent performance
- Supports various platforms and architectures, including x86, ARM, and WebAssembly
- Provides a command-line interface for easy integration into existing workflows
Cons of whisper.cpp
- Limited to speech recognition and transcription tasks
- Requires more technical expertise to set up and use effectively
- Less focus on natural language processing and generation capabilities
Code Comparison
WrenAI:
from wren import Wren
wren = Wren()
response = wren.generate("Tell me a joke about AI")
print(response)
whisper.cpp:
#include "whisper.h"
whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");
whisper_full_default(ctx, params, pcmf32.data(), pcmf32.size());
whisper_print_timings(ctx);
whisper_free(ctx);
Summary
While whisper.cpp excels in speech recognition and transcription tasks with its optimized C++ implementation, WrenAI offers a more comprehensive AI solution with natural language processing capabilities. whisper.cpp is better suited for developers looking for high-performance speech-to-text functionality, while WrenAI provides a more user-friendly interface for general AI tasks, including text generation and conversation.
WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
Pros of WhisperX
- Offers advanced features like word-level timestamps and speaker diarization
- Supports multiple languages and provides language detection
- Actively maintained with frequent updates and improvements
Cons of WhisperX
- Requires more computational resources due to its advanced features
- May have a steeper learning curve for beginners
- Limited to audio transcription and diarization tasks
Code Comparison
WhisperX:
import whisperx
model = whisperx.load_model("large-v2")
result = model.transcribe("audio.mp3")
print(result["segments"])
WrenAI:
from wren import Wren
wren = Wren()
response = wren.generate("Summarize this text: ...")
print(response)
Key Differences
WhisperX focuses on audio transcription and diarization, while WrenAI is a more general-purpose AI assistant. WhisperX provides detailed audio analysis, including word-level timestamps and speaker identification. WrenAI, on the other hand, offers a broader range of capabilities, including text generation, summarization, and potentially other AI-powered tasks.
The choice between these repositories depends on the specific use case. For audio-related tasks, WhisperX would be more suitable, while WrenAI might be preferable for general AI assistance and text-based tasks.
Faster Whisper transcription with CTranslate2
Pros of faster-whisper
- Optimized for speed, offering faster transcription than standard Whisper models
- Supports multiple languages and can perform translation
- Provides streaming capabilities for real-time transcription
Cons of faster-whisper
- Focused solely on speech-to-text, lacking additional AI capabilities
- May require more computational resources due to its optimization for speed
- Limited customization options compared to more general-purpose AI frameworks
Code Comparison
faster-whisper:
from faster_whisper import WhisperModel
model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5)
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
WrenAI:
from wren import Wren
wren = Wren()
response = wren.chat("Transcribe the following audio file: audio.mp3")
print(response)
Summary
faster-whisper excels in speech-to-text tasks with its speed-optimized approach and multi-language support. However, it's limited to transcription tasks. WrenAI, while not specifically optimized for transcription, offers a more versatile AI platform that can handle various tasks through natural language interactions. The choice between the two depends on whether you need specialized transcription capabilities or a more general-purpose AI assistant.
High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model
Pros of Whisper
- Optimized for performance with GPU acceleration
- Supports multiple languages for transcription
- Provides a C++ implementation for better integration with native applications
Cons of Whisper
- Limited to speech recognition and transcription tasks
- Requires more setup and configuration compared to WrenAI
- May have a steeper learning curve for non-technical users
Code Comparison
WrenAI:
from wren import Wren
wren = Wren()
result = wren.transcribe("audio.mp3")
print(result.text)
Whisper:
#include "whisper.h"
whisper_context * ctx = whisper_init_from_file("model.bin");
whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
whisper_full(ctx, params, audio_data, audio_len, "en");
Summary
Whisper offers high-performance speech recognition with multi-language support and GPU acceleration, making it suitable for more advanced applications. However, it may require more technical expertise to implement and use effectively. WrenAI, on the other hand, provides a simpler interface for transcription tasks, making it more accessible for quick integration into projects, but may lack some of the advanced features and optimizations found in Whisper.
Wren AI
Open-source GenBI AI Agent that empowers data-driven teams to chat with their data to generate Text-to-SQL, charts, spreadsheets, reports, and BI.
🕶 Try it yourself!
GenBI (Generative Business Intelligence)
Ask any questions
Try with your data on Wren AI Cloud or install it in your local environment
Supported LLM Models
Wren AI supports integration with various Large Language Models (LLMs), including but not limited to:
- OpenAI Models
- Azure OpenAI Models
- DeepSeek Models
- Google AI Studio – Gemini Models
- Vertex AI Models (Gemini + Anthropic)
- Bedrock Models
- Anthropic API Models
- Groq Models
- Ollama Models
- Databricks Models
Check configuration examples here!
[!CAUTION] The performance of Wren AI depends significantly on the capabilities of the LLM you choose. We strongly recommend using the most powerful model available for optimal results. Using less capable models may lead to reduced performance, slower response times, or inaccurate outputs.
🎯 Our Vision & Mission
At Wren AI, our mission is to revolutionize business intelligence by empowering organizations with seamless access to data through Generative Business Intelligence (GenBI). We aim to break down barriers to data insights with advanced AI-driven solutions, composable data frameworks, and semantic intelligence, enabling every team member to make faster, smarter, and data-driven decisions with confidence.
🤖 A User-Centric, End-to-End Open-source SQL AI Agent - Text-to-SQL Total Solution
1. Talk to Your Data in Any Language
Wren AI speaks your language, such as English, German, Spanish, French, Japanese, Korean, Portuguese, Chinese, and more. Unlock valuable insights by asking Wren AI your business questions. It goes beyond surface-level data analysis to reveal meaningful information and simplifies getting answers to everything from lead scoring templates to customer segmentation.
2. GenBI Insights
The GenBI feature empowers users with AI-generated summaries that provide key insights alongside SQL queries, simplifying complex data. Instantly convert query results into AI-generated reports and charts, transforming raw data into clear, actionable visuals. With GenBI, you can make faster, smarter decisions with ease.
3. AI-powered Data Exploration Features
Beyond just retrieving data from your databases, Wren AI now answers exploratory questions like "What data do I have?" or "What are the columns in my customer tables?" Additionally, our AI dynamically generates recommended questions and intelligent follow-up queries tailored to your context, making data exploration smarter, faster, and more intuitive. Empower your team to unlock deeper insights effortlessly with AI.
4. Semantic Indexing with a Well-Crafted UI/UX
Wren AI implements a semantic engine architecture that gives the LLM context about your business: you can easily establish a logical presentation layer on top of your data schema that helps the LLM learn more about your business context.
5. Generate SQL Queries with Context
With Wren AI, you can capture metadata, schema, terminology, data relationships, and the logic behind calculations and aggregations in a "Modeling Definition Language" (MDL), reducing duplicate coding and simplifying data joins.
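As a rough illustration of the kind of semantic metadata such a layer captures, a model definition could look something like the following sketch (the field names are illustrative and do not follow the actual MDL specification):

# Illustrative only: hypothetical field names, not the real MDL schema.
orders_model = {
    "name": "orders",
    "table": "public.orders",
    "columns": [
        {"name": "order_id", "type": "INTEGER"},
        {"name": "customer_id", "type": "INTEGER"},
        {"name": "amount", "type": "DECIMAL"},
        {"name": "created_at", "type": "TIMESTAMP"},
    ],
    # Declaring the join once lets generated SQL reuse it instead of restating it per query.
    "relationships": [
        {"name": "orders_customer",
         "models": ["orders", "customers"],
         "condition": "orders.customer_id = customers.customer_id"},
    ],
    # Business calculations are defined centrally and referenced by name.
    "measures": [
        {"name": "total_revenue", "expression": "SUM(amount)"},
    ],
}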
6. Get Insights without Writing Code
When starting a new conversation in Wren AI, your question is used to find the most relevant tables. From these, the LLM generates the most relevant question for the user. You can also ask follow-up questions to get deeper insights.
7. Easily Export and Visualize Your Data
Wren AI provides a seamless end-to-end workflow, enabling you to connect your data effortlessly with popular analysis tools such as Excel and Google Sheets. This way, your insights remain accessible, allowing for further analysis using the tools you know best.
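As a quick example of that last-mile step, once you have a query result in hand you can hand it to spreadsheet tools with pandas (a generic illustration, not Wren AI's built-in export; the sample rows are made up):

import pandas as pd

# Hypothetical result rows; in practice these would come from an exported query result.
rows = [
    {"customer": "Acme Corp", "total_revenue": 125000.0},
    {"customer": "Globex", "total_revenue": 98000.0},
]

df = pd.DataFrame(rows)
df.to_csv("revenue_by_customer.csv", index=False)     # open in Excel or import into Google Sheets
df.to_excel("revenue_by_customer.xlsx", index=False)  # requires the openpyxl package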
🤔 Why Wren AI?
We focus on providing an open, secure, and accurate SQL AI Agent for everyone.
1. Turnkey Solution
Wren AI makes it easy to onboard your data. Discover and analyze your data with our user interface. Effortlessly generate results without needing to code.
2. Secure SQL Generation
We use a RAG architecture that leverages your schema and context to generate SQL queries without requiring you to expose or upload your data to LLMs.
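Conceptually, only schema descriptions are retrieved and placed into the prompt, never table rows. The sketch below illustrates that idea, with naive keyword overlap standing in for a real vector database and a placeholder call_llm function for the model call (both are assumptions, not the actual Wren AI implementation):

# Sketch only: keyword overlap stands in for vector retrieval; call_llm is a placeholder.
SCHEMA_DOCS = {
    "orders": "orders(order_id, customer_id, amount, created_at): one row per purchase",
    "customers": "customers(customer_id, name, segment, country): one row per customer",
    "web_events": "web_events(event_id, customer_id, url, ts): raw clickstream",
}

def retrieve_context(question, k=2):
    """Rank schema descriptions by keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(
        SCHEMA_DOCS.values(),
        key=lambda doc: len(words & set(doc.lower().replace("(", " ").replace(",", " ").split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question):
    # Only schema text reaches the model; no rows leave your database.
    context = "\n".join(retrieve_context(question))
    return f"Given these tables:\n{context}\n\nWrite a SQL query that answers: {question}"

# sql = call_llm(build_prompt("What is total revenue by customer segment?"))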
3. Open-source End-to-end Solution
Deploy Wren AI anywhere you like, with your own data, LLM APIs, and environment. It's free.
🤖 Wren AI Text-to-SQL Agentic Architecture
Wren AI consists of three core services:
- Wren UI: An intuitive user interface for asking questions, defining data relationships, and integrating data sources.
- Wren AI Service: Processes queries using a vector database for context retrieval, guiding LLMs to produce precise SQL outputs.
- Wren Engine: Serves as the semantic engine, mapping business terms to data sources, defining relationships, and incorporating predefined calculations and aggregations.
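To make that division of responsibilities concrete, the stubbed pipeline below shows roughly how a question flows through the three services; the class and method names are illustrative, not the real service APIs:

# Schematic only: illustrative names and canned return values, not the actual APIs.
class WrenAIService:
    def retrieve_context(self, question):
        # In the real service this is a vector-database lookup for the most relevant schema/context.
        return "orders(order_id, customer_id, amount, created_at)"

    def generate_sql(self, question, context):
        # In the real service the LLM is prompted with the question plus retrieved context.
        return "SELECT SUM(amount) AS total_revenue FROM orders"

class WrenEngine:
    def to_executable_sql(self, sql):
        # The semantic engine expands business terms, relationships, and measures into SQL.
        return sql

def answer(question):
    """Roughly what Wren UI orchestrates when a user asks a question."""
    ai, engine = WrenAIService(), WrenEngine()
    context = ai.retrieve_context(question)
    sql = ai.generate_sql(question, context)
    return engine.to_executable_sql(sql)

print(answer("What is our total order revenue?"))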
❤️ Knowledge Sharing From Wren AI
Want to get our latest sharing? Follow our blog!
🚀 Getting Started
Using Wren AI is super simple: you can set it up within 3 minutes and start interacting with your data!
- Visit our Installation Guide of Wren AI.
- Visit the Usage Guides to learn more about how to use Wren AI.
📚 Documentation
Visit Wren AI documentation to view the full documentation.
🛠️ Contribution
Want to contribute to Wren AI? Check out our Contribution Guidelines.
⭐️ Community
- Welcome to our Discord server to give us feedback!
- If there are any issues, please visit GitHub Issues.
- Explore our public roadmap to stay updated on upcoming features and improvements!
Please note that our Code of Conduct applies to all Wren AI community channels. Users are highly encouraged to read and adhere to it to avoid repercussions.
Our Contributors