
getAsterisk / deepclaude

A high-performance LLM inference API and Chat UI that integrates DeepSeek R1's CoT reasoning traces with Anthropic Claude models.


Top Related Projects


Robust Speech Recognition via Large-Scale Weak Supervision

Port of OpenAI's Whisper model in C/C++


WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

Faster Whisper transcription with CTranslate2


High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model

Quick Overview

DeepClaude is an open-source project from Asterisk that combines DeepSeek R1's Chain-of-Thought reasoning with Anthropic Claude's code generation and conversational strengths behind a unified, high-performance API and chat UI. It is self-hostable and follows a bring-your-own-keys (BYOK) model, so users keep control over their API keys and data.

Pros

  • Streams R1's reasoning and Claude's response together in a single API call
  • Self-hostable BYOK design keeps API keys and data under your control
  • Highly configurable, with pass-through options for both upstream APIs
  • Free and open-source under the MIT license

Cons

  • Requires API keys (and usage costs) for both DeepSeek and Anthropic
  • Requires technical expertise to build, configure, and self-host
  • Depends on the availability of two upstream model APIs
  • Newer project with a smaller community

Code Examples

DeepClaude is primarily an HTTP API (implemented in Rust) and chat UI rather than an importable code library; developers interact with it by sending requests to a running DeepClaude server. Full request examples appear in the API Usage section of the README below.
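Although there is no library to import, the HTTP API itself is easy to call. Here is a minimal sketch of the request shape (endpoint, header names, and body fields are taken from the README later on this page; nothing is actually sent in this snippet):

```python
import json

# DeepClaude expects both upstream API keys as request headers (BYOK model).
headers = {
    "X-DeepSeek-API-Token": "<YOUR_DEEPSEEK_API_KEY>",
    "X-Anthropic-API-Token": "<YOUR_ANTHROPIC_API_KEY>",
}

# The body follows a familiar chat-completions shape.
body = {
    "stream": False,
    "messages": [
        {"role": "user", "content": "How many 'r's in the word 'strawberry'?"}
    ],
}

# Would be sent with e.g. requests.post("http://127.0.0.1:1337/",
#                                       headers=headers, json=body)
print(json.dumps(body))
```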

Getting Started

Since DeepClaude is a self-hosted service rather than a typical code library, "getting started" means building and running the server:

  1. Clone the repository:

    git clone https://github.com/getAsterisk/deepclaude.git
    
  2. Build the project with the Rust toolchain and create a config.toml as described in the README below.

  3. Run the resulting server binary to start the API.

  4. Supply your DeepSeek and Anthropic API keys with each request, as shown in the API Usage examples.

Note that the actual steps may vary significantly depending on the project's current state and implementation details. Users should refer to the official repository for the most up-to-date and accurate information on getting started with DeepClaude.

Competitor Comparisons


Robust Speech Recognition via Large-Scale Weak Supervision

Pros of Whisper

  • Robust speech recognition model with multilingual support
  • Extensive documentation and examples for various use cases
  • Large community and active development from OpenAI

Cons of Whisper

  • Requires significant computational resources for optimal performance
  • Limited customization options for specific domains or accents

Code Comparison

Whisper:

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])

DeepClaude (hypothetical Python wrapper shown for illustration; DeepClaude actually exposes an HTTP API and operates on text, not audio):

from deepclaude import DeepClaude

dc = DeepClaude()
response = dc.chat("Summarize this meeting transcript: ...")
print(response)

Key Differences

  • Whisper is a well-established speech-recognition model with broad language support
  • DeepClaude is not a speech tool; it combines DeepSeek R1 and Claude for text reasoning and generation
  • Whisper offers multiple model sizes and fine-tuning capabilities
  • The two address different problems and can complement each other in a pipeline

Use Cases

  • Whisper: Large-scale, multilingual speech-to-text projects
  • DeepClaude: Text-based reasoning, code generation, and chat applications (including downstream processing of transcripts)

Community and Support

  • Whisper has a larger community and more extensive documentation
  • DeepClaude is a newer project with potentially less community support

Port of OpenAI's Whisper model in C/C++

Pros of whisper.cpp

  • Highly optimized C++ implementation for efficient speech recognition
  • Supports multiple languages and models
  • Can run on various platforms, including mobile devices

Cons of whisper.cpp

  • Focused solely on speech recognition, lacking broader AI capabilities
  • Requires more technical expertise to set up and use effectively
  • Limited to Whisper model architecture

Code Comparison

whisper.cpp:

#include "whisper.h"

int main(int argc, char ** argv) {
    struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");
    whisper_full_default(ctx, wparams, pcmf32.data(), pcmf32.size());
    whisper_print_timings(ctx);
    whisper_free(ctx);
}

deepclaude (hypothetical Python wrapper shown for illustration; the project actually exposes an HTTP API):

from deepclaude import DeepClaude

dc = DeepClaude()
response = dc.generate("Tell me about AI")
print(response)

Key Differences

  • whisper.cpp is specialized for speech recognition, while deepclaude orchestrates general-purpose LLMs
  • whisper.cpp is implemented in C++ for performance; deepclaude's API is likewise implemented in Rust
  • whisper.cpp requires local model files and native builds; deepclaude requires API keys for two hosted models
  • whisper.cpp processes audio input, deepclaude handles text-based interactions

WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

Pros of WhisperX

  • Focuses specifically on audio transcription and alignment
  • Provides word-level timestamps for accurate synchronization
  • Supports multiple languages and offers language detection

Cons of WhisperX

  • Limited to audio processing tasks
  • Requires additional dependencies for GPU acceleration
  • May have higher computational requirements for large audio files

Code Comparison

WhisperX:

import whisperx

model = whisperx.load_model("large-v2")
result = model.transcribe("audio.mp3")
aligned_segments = whisperx.align(result["segments"], model, "en")

DeepClaude (hypothetical Python wrapper shown for illustration; the project actually exposes an HTTP API):

from deepclaude import DeepClaude

dc = DeepClaude()
response = dc.generate("Summarize this text: ...")
print(response)

Key Differences

WhisperX is specialized for audio transcription and alignment, offering precise timing information for transcribed text. It supports multiple languages and can detect the language of the audio input.

DeepClaude, on the other hand, is a general-purpose AI interface that pairs DeepSeek R1's reasoning with Claude's generation. It handles a wide range of text tasks, including generation, summarization, and question answering, but does not process audio.

While WhisperX excels in audio-related tasks, DeepClaude offers broader functionality for various natural language processing applications. The choice between the two depends on the specific requirements of the project and whether audio processing or general AI assistance is the primary focus.

Faster Whisper transcription with CTranslate2

Pros of faster-whisper

  • Optimized for speed, offering faster transcription performance
  • Supports multiple languages and provides language detection
  • Implements efficient CTC beam search decoding

Cons of faster-whisper

  • Focused solely on speech recognition, lacking Claude's broader AI capabilities
  • Requires more setup and dependencies for GPU acceleration
  • May have higher resource requirements for optimal performance

Code Comparison

faster-whisper:

from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5)

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

deepclaude (hypothetical Python wrapper shown for illustration; DeepClaude actually exposes an HTTP API and operates on text, not audio):

from deepclaude import DeepClaude

dc = DeepClaude()
response = dc.chat("Summarize this transcript: ...")
print(response)

Summary

faster-whisper is a specialized speech-recognition tool optimized for performance, while deepclaude is a general-purpose LLM API. faster-whisper provides fine-grained control over transcription parameters and supports many languages, but requires more setup; deepclaude offers a simple interface for text reasoning, generation, and chat, and does not process audio itself.


High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model

Pros of Whisper

  • Optimized for performance with GPU acceleration
  • Supports multiple languages for transcription
  • Includes a command-line interface for easy use

Cons of Whisper

  • Limited to speech recognition and transcription tasks
  • Requires more setup and dependencies
  • Less flexible for integration into other projects

Code Comparison

Whisper:

void diarize( const std::vector<float>& audio, int sampleRate, std::vector<Segment>& res )
{
    // Diarization implementation
}

deepclaude (illustrative pseudocode; DeepClaude handles text over HTTP, not audio):

def generate(prompt):
    # Forward the prompt to the DeepClaude API (R1 reasoning + Claude response)
    return response_text

Key Differences

  • Whisper is implemented in C++ with GPGPU acceleration; deepclaude's API is implemented in Rust
  • Whisper performs all processing locally; deepclaude forwards requests to cloud LLM APIs
  • Whisper offers audio-specific features; deepclaude focuses on combined reasoning and text generation

Use Cases

  • Whisper: Ideal for applications requiring local, high-performance speech recognition across multiple languages
  • deepclaude: Better suited for projects needing combined LLM reasoning, text generation, and chat

Community and Support

  • Whisper: Larger community, more established project with regular updates
  • deepclaude: Newer project, smaller community, but potential for rapid development with AI integration

Pros of insanely-fast-whisper

  • Focuses specifically on optimizing Whisper for faster speech recognition
  • Provides detailed benchmarks and performance comparisons
  • Offers multiple optimization techniques (e.g., batching, CTranslate2)

Cons of insanely-fast-whisper

  • Limited to Whisper model optimization, not a general-purpose AI assistant
  • Requires more technical knowledge to implement and use effectively
  • May have higher computational requirements for some optimizations

Code Comparison

insanely-fast-whisper (built on the Hugging Face Transformers ASR pipeline):

import torch
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3",
                torch_dtype=torch.float16, device="cuda:0")
outputs = pipe("audio.mp3", batch_size=24, return_timestamps=True)

deepclaude (hypothetical Python wrapper shown for illustration; the project actually exposes an HTTP API):

from deepclaude import DeepClaude

dc = DeepClaude()
response = dc.chat("Tell me about the history of AI.")
print(response)

The code snippets highlight the different focus areas of the two projects. insanely-fast-whisper is centered on optimized speech recognition, while deepclaude provides a more general-purpose AI assistant interface. The insanely-fast-whisper code demonstrates its speech transcription capabilities, whereas deepclaude showcases a simple chat interaction.


README

DeepClaude 🐬🧠

Harness the power of DeepSeek R1's reasoning and Claude's creativity and code generation capabilities with a unified API and chat interface.


Getting Started • Features • API Usage • Documentation • Self-Hosting • Contributing


Overview

DeepClaude is a high-performance LLM inference API that combines DeepSeek R1's Chain of Thought (CoT) reasoning capabilities with Anthropic Claude's creative and code generation prowess. It provides a unified interface for leveraging the strengths of both models while maintaining complete control over your API keys and data.

Features

🚀 Zero Latency - Instant responses with R1's CoT followed by Claude's response in a single stream, powered by a high-performance Rust API

🔒 Private & Secure - End-to-end security with local API key management. Your data stays private

⚙️ Highly Configurable - Customize every aspect of the API and interface to match your needs

🌟 Open Source - Free and open-source codebase. Contribute, modify, and deploy as you wish

🤖 Dual AI Power - Combine DeepSeek R1's reasoning with Claude's creativity and code generation

🔑 Managed BYOK API - Use your own API keys with our managed infrastructure for complete control

Why R1 + Claude?

DeepSeek R1's CoT trace demonstrates deep reasoning to the point of an LLM experiencing "metacognition" - correcting itself, thinking about edge cases, and performing quasi Monte Carlo Tree Search in natural language.

However, R1 lacks in code generation, creativity, and conversational skills. Claude 3.5 Sonnet excels in these areas, making it the perfect complement. DeepClaude combines both models to provide:

  • R1's exceptional reasoning and problem-solving capabilities
  • Claude's superior code generation and creativity
  • Fast streaming responses in a single API call
  • Complete control with your own API keys
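Conceptually, the pipeline behind these points is simple: obtain R1's reasoning trace first, then hand both the question and the trace to Claude for the final answer. A minimal sketch with stubbed model calls (the real project implements this in Rust over streaming APIs; `call_r1` and `call_claude` below are hypothetical placeholders, not real client functions):

```python
def call_r1(question: str) -> str:
    # Placeholder for a DeepSeek R1 call that returns the CoT reasoning trace.
    return f"<thinking about: {question}>"

def call_claude(question: str, reasoning: str) -> str:
    # Placeholder for a Claude call that receives the reasoning as context.
    return f"Answer to {question!r}, informed by {len(reasoning)} chars of reasoning."

def deepclaude(question: str) -> str:
    # Stage 1: R1 produces the chain-of-thought.
    reasoning = call_r1(question)
    # Stage 2: Claude writes the final response with the trace as context.
    return call_claude(question, reasoning)

print(deepclaude("How many 'r's in 'strawberry'?"))
```

In the real service both stages are streamed back-to-back to the client, which is what makes the combined response feel like a single model call.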

Getting Started

Prerequisites

  • Rust 1.75 or higher
  • DeepSeek API key
  • Anthropic API key

Installation

  1. Clone the repository:

    git clone https://github.com/getasterisk/deepclaude.git
    cd deepclaude

  2. Build the project:

    cargo build --release

Configuration

Create a config.toml file in the project root:

[server]
host = "127.0.0.1"
port = 3000

[pricing]
# Configure pricing settings for usage tracking

API Usage

See API Docs

Basic Example

import requests

response = requests.post(
    "http://127.0.0.1:1337/",
    headers={
        "X-DeepSeek-API-Token": "<YOUR_DEEPSEEK_API_KEY>",
        "X-Anthropic-API-Token": "<YOUR_ANTHROPIC_API_KEY>"
    },
    json={
        "messages": [
            {"role": "user", "content": "How many 'r's in the word 'strawberry'?"}
        ]
    }
)

print(response.json())

Streaming Example

import asyncio
import json
import httpx

async def stream_response():
    async with httpx.AsyncClient() as client:
        async with client.stream(
            "POST",
            "http://127.0.0.1:1337/",
            headers={
                "X-DeepSeek-API-Token": "<YOUR_DEEPSEEK_API_KEY>",
                "X-Anthropic-API-Token": "<YOUR_ANTHROPIC_API_KEY>"
            },
            json={
                "stream": True,
                "messages": [
                    {"role": "user", "content": "How many 'r's in the word 'strawberry'?"}
                ]
            }
        ) as response:
            response.raise_for_status()
            async for line in response.aiter_lines():
                if line:
                    if line.startswith('data: '):
                        data = line[6:]
                        try:
                            parsed_data = json.loads(data)
                            if 'content' in parsed_data:
                                content = parsed_data['content'][0]['text']
                                print(content, end='', flush=True)
                            else:
                                print(data, flush=True)
                        except json.JSONDecodeError:
                            pass

if __name__ == "__main__":
    asyncio.run(stream_response())
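The interesting part of the streaming client above is the Server-Sent Events parsing: lines of interest start with `data: ` and carry a JSON payload whose `content` array holds text deltas. That logic can be isolated and exercised without a live server; the payload shape below mirrors the example above but the sample lines are fabricated for illustration:

```python
import json

def extract_text(line: str):
    """Return the text delta from one SSE line, or None if there is none."""
    if not line.startswith("data: "):
        return None
    try:
        payload = json.loads(line[len("data: "):])
    except json.JSONDecodeError:
        return None
    if "content" in payload:
        return payload["content"][0]["text"]
    return None

# Simulated stream, shaped like the API's SSE output.
sample = [
    'data: {"content": [{"text": "There are "}]}',
    ": keep-alive",
    'data: {"content": [{"text": "three r\'s."}]}',
]
print("".join(t for t in (extract_text(l) for l in sample) if t))
# prints: There are three r's.
```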

Configuration Options

The API supports extensive configuration through the request body:

{
    "stream": false,
    "verbose": false,
    "system": "Optional system prompt",
    "messages": [...],
    "deepseek_config": {
        "headers": {},
        "body": {}
    },
    "anthropic_config": {
        "headers": {},
        "body": {}
    }
}
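The `deepseek_config` and `anthropic_config` blocks pass headers and body fields through to the respective upstream APIs. A sketch of a request body using these overrides (which pass-through fields are honored is defined by the server; `temperature` and `max_tokens` here are illustrative upstream body fields, not documented DeepClaude options):

```python
import json

body = {
    "stream": True,
    "verbose": False,
    "system": "You are a concise assistant.",
    "messages": [{"role": "user", "content": "Explain BYOK in one sentence."}],
    # Forwarded to the DeepSeek request.
    "deepseek_config": {"headers": {}, "body": {"temperature": 0.0}},
    # Forwarded to the Anthropic request.
    "anthropic_config": {"headers": {}, "body": {"max_tokens": 512}},
}
print(json.dumps(body, indent=2))
```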

Self-Hosting

DeepClaude can be self-hosted on your own infrastructure. Follow these steps:

  1. Configure environment variables or config.toml
  2. Build the Docker image or compile from source
  3. Deploy to your preferred hosting platform

Security

  • No data storage or logging
  • BYOK (Bring Your Own Keys) architecture
  • Regular security audits and updates

Contributing

We welcome contributions! Please see our Contributing Guidelines for details on:

  • Code of Conduct
  • Development process
  • Submitting pull requests
  • Reporting issues

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

DeepClaude is a free and open-source project by Asterisk. Special thanks to:

  • DeepSeek for their incredible R1 model
  • Anthropic for Claude's capabilities
  • The open-source community for their continuous support

Made with ❤️ by Asterisk