
raidendotai/cofounder

ai-generated apps, full stack + generative UI


Top Related Projects

  • Whisper: Robust Speech Recognition via Large-Scale Weak Supervision
  • whisper.cpp: Port of OpenAI's Whisper model in C/C++
  • WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
  • faster-whisper: Faster Whisper transcription with CTranslate2
  • Whisper (Const-me): High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model

Quick Overview

Cofounder is an AI-powered tool for generating full-stack web applications. From a short text description it scaffolds a backend, database, and stateful vite+react web app, and its generative UI is rooted in the app architecture, with an AI-guided mockup designer and modular design systems. It is currently an early, unstable alpha release.

Pros

  • Generates a full-stack scaffold (backend, database, and a vite+react web app) from a short text description
  • Generative UI is rooted in the app architecture, with an AI-guided mockup designer and modular design systems
  • Ships with a local dashboard and CLI for creating and iterating on projects
  • A cofounder.openinterface.ai key unlocks the designer/layoutv1 and swarm/external-apis features, usable without limits during the early alpha

Cons

  • Early, unstable preview release that is expected to break often until v1
  • Consumes a large number of API tokens, which can be costly
  • Key target features (project iteration modules, admin interface, the full genUI plugin) are not yet merged
  • Generated code can be messy, broken, or unfinished and may need manual review and fixes

Code Examples

Cofounder is not an importable library; it runs as an npx-invoked CLI with a local dashboard, so the relevant examples are the CLI commands shown under Getting Started and in the README below.

Getting Started

Cofounder is started from the terminal via npx, which installs dependencies, launches the local builder API and server, and opens a web dashboard at http://localhost:4200 where new projects can be created.
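A minimal sketch of the two entry points, taken from the README further down:

# install and initialize cofounder; follow the interactive prompts
npx @openinterface/cofounder

# or create a project directly, skipping the dashboard
npx @openinterface/cofounder -p "YourAppProjectName" -d "describe your app here" -a "(optional) design instructions"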

Competitor Comparisons


Robust Speech Recognition via Large-Scale Weak Supervision

Pros of Whisper

  • Highly accurate speech recognition model with multilingual support
  • Open-source with extensive documentation and community support
  • Versatile, supporting various audio formats and transcription tasks

Cons of Whisper

  • Requires significant computational resources for optimal performance
  • Limited real-time transcription capabilities
  • Primarily focused on speech recognition, lacking broader AI functionalities

Code Comparison

Whisper:

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])

Cofounder (hypothetical chat-style interface, shown for comparison only; Cofounder actually ships as an npx CLI, see the README below):

from cofounder import Cofounder

cf = Cofounder()
response = cf.chat("Generate a business plan for a startup")
print(response)

Key Differences

Whisper is specialized in speech recognition and transcription, while Cofounder is an AI-powered tool for generating full-stack web apps and UI. Whisper processes audio input, whereas Cofounder turns text descriptions into project code. The code examples highlight these differences, with Whisper transcribing audio and Cofounder driven by a text prompt.

Port of OpenAI's Whisper model in C/C++

Pros of whisper.cpp

  • Highly optimized C++ implementation for efficient speech recognition
  • Supports various platforms and architectures, including mobile devices
  • Provides both command-line and library interfaces for flexibility

Cons of whisper.cpp

  • Focused solely on speech recognition, lacking broader AI capabilities
  • Requires more technical expertise to integrate and use effectively
  • Limited to Whisper model, not adaptable to other language models

Code Comparison

whisper.cpp:

#include "whisper.h"
#include <vector>

int main(int argc, char** argv) {
    // load the ggml model weights
    struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");
    // pcmf32: 16 kHz mono float samples, loaded elsewhere
    std::vector<float> pcmf32;
    whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    whisper_full(ctx, wparams, pcmf32.data(), pcmf32.size());
    whisper_free(ctx);
    return 0;
}

cofounder (hypothetical Python-style interface, for comparison only):

from cofounder import Cofounder

cofounder = Cofounder()
response = cofounder.chat("How can I improve my startup's marketing?")
print(response)

Key Differences

whisper.cpp is a specialized tool for speech recognition, offering high performance and cross-platform support. It's ideal for projects requiring efficient audio transcription.

cofounder, on the other hand, generates full-stack web applications from natural-language descriptions, driven by a CLI and local dashboard rather than a library API.

While whisper.cpp excels in its specific domain, cofounder targets a different problem entirely: producing and iterating on application code for startup-style projects.


WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

Pros of WhisperX

  • Focuses on speech recognition and transcription, offering advanced features like word-level timestamps and speaker diarization
  • Actively maintained with regular updates and improvements
  • Provides a command-line interface for easy usage

Cons of WhisperX

  • Limited to audio processing tasks, without the app-generation scope of Cofounder
  • Requires more technical knowledge to set up and use effectively
  • May have higher computational requirements for processing large audio files

Code Comparison

WhisperX:

import whisperx

# WhisperX layers word-level timestamps and diarization on top of faster-whisper
model = whisperx.load_model("large-v2", device="cuda", compute_type="float16")
audio = whisperx.load_audio("audio.mp3")
result = model.transcribe(audio, batch_size=16)
print(result["segments"])

Cofounder (hypothetical interface, for comparison only):

from cofounder import Cofounder

cofounder = Cofounder()
response = cofounder.chat("What's the weather like today?")
print(response)

WhisperX is specialized for audio transcription, while Cofounder generates full application codebases from text descriptions. The code examples highlight the different focus areas of each project, with WhisperX offering detailed audio processing capabilities and Cofounder driven by plain-language project prompts.

Faster Whisper transcription with CTranslate2

Pros of faster-whisper

  • Optimized for speed, offering faster transcription performance
  • Supports multiple languages and provides language detection
  • Implements efficient CPU and GPU inference

Cons of faster-whisper

  • Focused solely on speech-to-text functionality
  • Requires more setup and dependencies for optimal performance
  • Limited to audio processing tasks

Code Comparison

faster-whisper:

from faster_whisper import WhisperModel

model = WhisperModel("large-v2", device="cuda", compute_type="float16")
segments, info = model.transcribe("audio.mp3", beam_size=5)

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))

cofounder (hypothetical interface, for comparison only):

from cofounder import Cofounder

cofounder = Cofounder()
response = cofounder.chat("How can I improve my startup's marketing strategy?")
print(response)

Summary

faster-whisper is a specialized tool for efficient speech recognition, while cofounder generates full-stack web apps for startup-style projects. faster-whisper excels in audio processing speed and language support but is limited to transcription tasks. cofounder covers app generation more broadly but offers none of faster-whisper's specialized audio performance.


High-performance GPGPU inference of OpenAI's Whisper automatic speech recognition (ASR) model

Pros of Whisper (Const-me)

  • Optimized for performance with C++ implementation
  • Supports GPU acceleration for faster processing
  • Provides a command-line interface for easy usage

Cons of Whisper (Const-me)

  • Limited to speech recognition and transcription tasks
  • Requires more technical setup and dependencies
  • Less versatile in terms of AI applications

Code Comparison

Whisper (C++; a whisper.cpp-style C API shown for illustration, since Const-me/Whisper exposes a Windows COM-style interface):

// run transcription over the loaded samples and check for errors
const auto result = whisper_full(ctx, params, pcmf32.data(), pcmf32.size());
if (result != 0) {
    fprintf(stderr, "Failed to process audio\n");
    return 10;
}

Cofounder (hypothetical Python interface, for comparison only):

from cofounder import Cofounder

cofounder = Cofounder()
response = cofounder.chat("How can I improve my startup's marketing strategy?")
print(response)

Summary

Whisper (Const-me) focuses on efficient speech recognition, offering performance benefits through its C++ implementation and GPU acceleration. It's well-suited for specific audio processing tasks but requires more technical setup.

Cofounder, on the other hand, generates full-stack web apps and UI from natural-language descriptions. It offers a simpler, prompt-driven workflow but targets an entirely different domain than speech recognition.

The choice between the two depends on the specific use case: Whisper for dedicated speech processing, or Cofounder for generating and iterating on startup web apps.

insanely-fast-whisper

Pros of insanely-fast-whisper

  • Focused on speech recognition and transcription tasks
  • Optimized for speed and efficiency in audio processing
  • Utilizes advanced techniques like flash attention for improved performance

Cons of insanely-fast-whisper

  • Limited to audio transcription functionality
  • May require more technical expertise to implement and customize
  • Narrower in scope than cofounder's app-generation capabilities

Code Comparison

insanely-fast-whisper (a sketch via the Hugging Face transformers pipeline it builds on; exact parameter values are assumptions):

import torch
from transformers import pipeline

# insanely-fast-whisper builds on the transformers ASR pipeline
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3",
                torch_dtype=torch.float16, device="cuda:0")
outputs = pipe("audio.mp3", chunk_length_s=30, batch_size=24, return_timestamps=True)
print(outputs["text"])

cofounder (hypothetical interface, for comparison only):

from cofounder import Cofounder

cofounder = Cofounder()
response = cofounder.chat("How can I improve my business strategy?")

The code snippets highlight the different focus areas of the two projects. insanely-fast-whisper is specialized for fast audio transcription, while cofounder generates full-stack web applications from plain-language descriptions.


README


Cofounder | Early alpha release


  • full stack generative web apps; backend + db + stateful web apps
  • gen ui rooted in app architecture, with ai-guided mockup designer & modular design systems

Early Alpha: Unstable ⚠️

The following points are heavily emphasized:

  • This is an EARLY, UNSTABLE, PREVIEW RELEASE of the project. ⚠️ Until v1 is released, it is expected to break often.
  • It consumes a lot of tokens. If you are on a token budget, wait until v1 is released.
  • Again, this is an early, unstable release. A first test run. An early preview of the project's ideas. Far from completion. Open-source iterative development. Work in progress. Unstable early alpha release. [etc]

If any of these might be an issue for you, even in the slightest way, wait until v1 is released! Do not try the current release!

To help guide your decision on whether or not to try the current release, here is a guide:

Situation → Recommendation

  • "I'm not sure if this release is mature yet; maybe it will not work as intended and I may spend millions of tokens for nothing." → Do not use it yet
  • "I am very excited about this tool and hope it is perfectly production-ready, because if it's not, I will post about how much I spent on OpenAI API calls." → Do not use it yet
  • "I am not interested in code; I want to type words into a box and have my project completed. I do not want messy, broken, unfinished code." → Do not use it yet
  • "I love exploring experimental tools, but I am on the fence; it's going to break halfway and leave me sad." → Do not use it yet
  • "Who should even try it at this point?" → Nobody. Do not use it yet
  • "But I really want to use it for some esoteric reason, having read all of the above." → Do not use it yet either

https://github.com/user-attachments/assets/cfd09250-d21e-49fc-a29b-fa0c661abfc0

https://github.com/user-attachments/assets/c055f9c4-6bc0-4b11-ba8f-cc9f149387fa


Important

Early alpha release; earlier than expected by a few weeks

Still not merged with key target features of the project, notably:

  • project iteration modules for all dimensions of generated projects
  • admin interface for event streams and (deeper) project iterations
  • integrate the full genUI plugin:
    • generative design systems
    • deploy finetuned models & serve from api.cofounder
  • local, browser-based dev env for the entire project scope
  • add {react-native, flutter, other web frameworks}
  • validations & swarm code review and autofix
  • code optimization
  • [...]

be patient :)


Usage

Install & Init

  • Open your terminal and run
npx @openinterface/cofounder

Follow the instructions. The installer:

  • will ask you for your keys
  • will set up directories and start installs
  • will start the local cofounder/api builder and server
  • will open the web dashboard where you can create new projects (at http://localhost:4200) 🎉

note:
you will be asked for a cofounder.openinterface.ai key;
it is recommended to use one, as it enables the designer/layoutv1 and swarm/external-apis features
and can be used without limits during the current early alpha period

the full index will be available for local download on v1 release

  • currently using node v22 for the whole project
# alternatively, you can make a new project without going through the dashboard
# by running:
npx @openinterface/cofounder -p "YourAppProjectName" -d "describe your app here" -a "(optional) design instructions"

Run Generated Apps

  • Your backend & vite+react web app will incrementally generate inside ./apps/{YourApp}. Open your terminal in ./apps/{YourApp} and run

npm i && npm run dev

It will start both the backend and vite+react concurrently, after installing their dependencies. Go to http://localhost:5173/ to open the web app 🎉

  • From within the generated apps, you can use ⌘+K / Ctrl+K to iterate on UI components

[more details later]

Notes

Dashboard & Local API

If you resume later and would like to iterate on your generated apps, the local ./cofounder/api server needs to be running to receive queries.

You can (re)start the local cofounder API by running the following command from ./cofounder/api

npm run start

The dashboard will open at http://localhost:4200

  • note: You can also generate new apps from the same env, without the dashboard, by running, from ./cofounder/api, one of these commands

    npm run start -- -p "ProjectName" -f "some app description" -a "minimalist and spacious , light theme"
    npm run start -- -p "ProjectName" -f "./example_description.txt" -a "minimalist and spacious , light theme"
    

Concurrency

[the architecture will be further detailed and documented later]

Every "node" in the cofounder architecture has a defined configuration under ./cofounder/api/system/structure/nodes/{category}/{name}.yaml to handle things like concurrency, retries and limits per time interval

For example, if you want multiple LLM generations to run in parallel (when possible; sequences and parallels are defined in DAGs under ./cofounder/api/system/structure/sequences/{definition}.yaml), go to

# ./cofounder/api/system/structure/nodes/op/llm.yaml
nodes:
  op:LLM::GEN:
    desc: "..."
    in: [model, messages, preparser, parser, query, stream]
    out: [generated, usage]
    queue:
      concurrency: 1 # <------------------------------- here
  op:LLM::VECTORIZE:
    desc: "{texts} -> {vectors}"
    in: [texts]
    out: [vectors, usage]
    mapreduce: true
  op:LLM::VECTORIZE:CHUNK:
    desc: "{texts} -> {vectors}"
    in: [texts]
    out: [vectors, usage]
    queue:
      concurrency: 50

and change the op:LLM::GEN queue concurrency parameter to a higher value

The default LLM concurrency is set to 2 so you can see what's happening in your console streams step by step, but you can increase it depending on your API key limits
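
For example, a sketch of the same node with a raised limit (the value 4 is an arbitrary illustration; match it to your API rate limits):

# ./cofounder/api/system/structure/nodes/op/llm.yaml
nodes:
  op:LLM::GEN:
    desc: "..."
    in: [model, messages, preparser, parser, query, stream]
    out: [generated, usage]
    queue:
      concurrency: 4 # <--- allow up to 4 LLM generations in parallel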


Changelog


Roadmap


Benchmarks


Community & Links

  • Cofounder | Community Discord server by @flamecoders

Docs, Design Systems, ...

[WIP]


Architecture

[more details later]

archi/v1 is as follows:

[architecture diagram]


Credits

  • Demo design systems built using Figma renders / UI kits from:
    • blocks.pm by Hexa Plugin (see cofounder/api/system/presets)
    • google material
    • figma core
    • shadcn
  • Dashboard node-based ui powered by react flow