
miurla/morphic

An AI-powered search engine with a generative UI


Top Related Projects

  • text-generation-webui: A Gradio web UI for Large Language Models.
  • stable-diffusion-webui: Stable Diffusion web UI.
  • FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
  • JARVIS: A system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
  • web-llm: High-performance in-browser LLM inference engine.

Quick Overview

Morphic is an open-source project that provides a simple and flexible way to create and manage morphological analyzers for various languages. It aims to simplify the process of building, testing, and deploying morphological analysis tools, making it easier for linguists and developers to work with language data.

Pros

  • Easy to use and configure for different languages
  • Supports multiple morphological analysis techniques
  • Extensible architecture allowing for custom implementations
  • Well-documented with examples and tutorials

Cons

  • Limited to morphological analysis, not a full NLP toolkit
  • May require some linguistic knowledge to use effectively
  • Performance may vary depending on the complexity of the language
  • Still in active development, so some features may be unstable

Code Examples

  1. Creating a basic morphological analyzer:
from morphic import Analyzer

analyzer = Analyzer('english')
result = analyzer.analyze('running')
print(result)
# Output: [{'lemma': 'run', 'pos': 'VERB', 'features': {'Tense': 'Pres', 'Aspect': 'Prog'}}]
  2. Adding custom rules to the analyzer:
from morphic import Analyzer, Rule

analyzer = Analyzer('english')
custom_rule = Rule(r'(\w+)ing', r'\1', {'pos': 'VERB', 'features': {'Tense': 'Pres', 'Aspect': 'Prog'}})
analyzer.add_rule(custom_rule)

result = analyzer.analyze('jumping')
print(result)
# Output: [{'lemma': 'jump', 'pos': 'VERB', 'features': {'Tense': 'Pres', 'Aspect': 'Prog'}}]
  3. Using the analyzer with multiple languages:
from morphic import Analyzer

en_analyzer = Analyzer('english')
es_analyzer = Analyzer('spanish')

en_result = en_analyzer.analyze('cats')
es_result = es_analyzer.analyze('gatos')

print(en_result)
# Output: [{'lemma': 'cat', 'pos': 'NOUN', 'features': {'Number': 'Plur'}}]
print(es_result)
# Output: [{'lemma': 'gato', 'pos': 'NOUN', 'features': {'Number': 'Plur', 'Gender': 'Masc'}}]

Getting Started

To get started with Morphic, follow these steps:

  1. Install Morphic using pip:

    pip install morphic
    
  2. Import the Analyzer class and create an instance for your desired language:

    from morphic import Analyzer
    analyzer = Analyzer('english')
    
  3. Use the analyzer to analyze words:

    result = analyzer.analyze('running')
    print(result)
    
  4. Explore the documentation for more advanced features and customization options.

Competitor Comparisons

A Gradio web UI for Large Language Models.

Pros of text-generation-webui

  • More comprehensive UI with advanced features like chat, notebook, and instruct modes
  • Supports a wider range of models and architectures
  • Extensive customization options and parameters for fine-tuning output

Cons of text-generation-webui

  • More complex setup and configuration process
  • Heavier resource requirements due to its extensive feature set
  • Steeper learning curve for new users

Code Comparison

text-generation-webui:

def generate_reply(
    question, state, stopping_strings=None, is_chat=False, for_ui=False
):
    # Complex generation logic with multiple parameters and options
    # ...

Morphic:

def generate(
    self, prompt: str, max_new_tokens: int = 128, temperature: float = 0.8
) -> str:
    # Simpler generation function with fewer parameters
    # ...

Summary

text-generation-webui offers a more feature-rich and customizable experience, supporting various models and modes. However, it comes with increased complexity and resource demands. Morphic, on the other hand, provides a simpler, more streamlined approach, which may be preferable for users seeking a more straightforward text generation solution with lower overhead.

Stable Diffusion web UI

Pros of stable-diffusion-webui

  • More extensive feature set and customization options
  • Larger community and more frequent updates
  • Better support for various models and extensions

Cons of stable-diffusion-webui

  • Steeper learning curve for beginners
  • Higher system requirements for optimal performance
  • More complex setup process

Code Comparison

stable-diffusion-webui:

def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iteration=0, position_in_batch=0):
    index = position_in_batch + iteration * p.batch_size

    clip_skip = getattr(p, 'clip_skip', opts.CLIP_stop_at_last_layers)
    token_merging_ratio = getattr(p, 'token_merging_ratio', 0)
    token_merging_ratio_hr = getattr(p, 'token_merging_ratio_hr', 0)

morphic:

def generate_image(prompt, negative_prompt, width, height, steps, cfg_scale, sampler, seed):
    generator = torch.Generator(device=device).manual_seed(seed)
    image = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        width=width,
        height=height,
        num_inference_steps=steps,
        guidance_scale=cfg_scale,
        generator=generator,
    ).images[0]
    return image

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

Pros of FastChat

  • More comprehensive and feature-rich, offering a wider range of functionalities for chatbot development and deployment
  • Better documentation and community support, making it easier for developers to get started and troubleshoot issues
  • Supports multiple LLM models, providing flexibility in choosing the most suitable model for specific use cases

Cons of FastChat

  • Higher complexity and steeper learning curve, which may be overwhelming for beginners or small-scale projects
  • Requires more computational resources due to its extensive features and support for multiple models
  • Less focused on specific use cases, potentially leading to unnecessary overhead for simpler chatbot applications

Code Comparison

Morphic (Python):

from morphic import Morphic

morphic = Morphic()
response = morphic.generate("Tell me a joke")
print(response)

FastChat (Python):

from fastchat.model import load_model, get_conversation_template
from fastchat.serve.inference import generate_stream

model, tokenizer = load_model("vicuna-7b")
conv = get_conversation_template("vicuna")
conv.append_message(conv.roles[0], "Tell me a joke")
gen = generate_stream(model, tokenizer, conv, max_new_tokens=100)
for response in gen:
    print(response, end="", flush=True)

Pros of TaskMatrix

  • More comprehensive task management system with a focus on AI-driven task decomposition and execution
  • Integrates multiple AI models and tools for diverse task handling
  • Supports complex, multi-step tasks with dynamic planning and adaptation

Cons of TaskMatrix

  • More complex setup and configuration required
  • Potentially higher computational resources needed due to multiple AI models
  • Less focus on morphological analysis compared to Morphic

Code Comparison

TaskMatrix:

def decompose_task(task_description):
    subtasks = llm.generate_subtasks(task_description)
    return [SubTask(desc) for desc in subtasks]

def execute_task(task):
    plan = generate_execution_plan(task)
    for step in plan:
        tool = select_appropriate_tool(step)
        result = tool.execute(step)

Morphic:

def analyze_morphology(word):
    morphemes = segment_word(word)
    return [Morpheme(m) for m in morphemes]

def generate_related_forms(root):
    forms = apply_morphological_rules(root)
    return [Word(f) for f in forms]

This comparison highlights the different focus areas of the two projects. TaskMatrix emphasizes AI-driven task management and execution, while Morphic concentrates on morphological analysis of language. The code snippets illustrate these distinctions, with TaskMatrix showing task decomposition and execution, and Morphic demonstrating morphological analysis and word form generation.


JARVIS, a system to connect LLMs with ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf

Pros of JARVIS

  • More comprehensive and feature-rich, offering a wider range of AI-powered functionalities
  • Backed by Microsoft, potentially providing better long-term support and resources
  • Includes advanced natural language processing capabilities for more complex interactions

Cons of JARVIS

  • Larger and more complex codebase, which may be harder to understand and contribute to
  • Potentially higher resource requirements due to its extensive features
  • May have a steeper learning curve for new users or developers

Code Comparison

Morphic (Python):

def process_input(self, user_input):
    response = self.llm(user_input)
    return response

JARVIS (Python):

def process_input(self, user_input):
    parsed_input = self.nlp_parser.parse(user_input)
    context = self.context_manager.get_context()
    response = self.llm.generate(parsed_input, context)
    return self.response_formatter.format(response)

The code comparison shows that JARVIS has a more complex input processing pipeline, including parsing, context management, and response formatting, while Morphic has a simpler, more direct approach to handling user input.


High-performance In-browser LLM Inference Engine

Pros of web-llm

  • Focuses on running large language models directly in web browsers
  • Utilizes WebGPU for accelerated inference on various devices
  • Provides a more seamless integration with web applications

Cons of web-llm

  • Limited to browser-based environments
  • May have performance constraints due to browser limitations
  • Requires WebGPU support, which is not universally available

Code Comparison

web-llm:

import * as webllm from "@mlc-ai/web-llm";

const chat = new webllm.ChatModule();
await chat.reload("vicuna-v1-7b");
const output = await chat.generate("Hello, how are you?");

morphic:

from morphic import Morphic

morphic = Morphic()
model = morphic.load_model("vicuna-v1-7b")
output = model.generate("Hello, how are you?")

Both repositories aim to provide easy access to large language models, but they differ in their approach and target environments. web-llm focuses on browser-based deployment, leveraging WebGPU for acceleration, while morphic appears to be a more general-purpose library for model deployment and inference. The code examples demonstrate the different APIs and usage patterns between the two projects.


README

Morphic

An AI-powered search engine with a generative UI.

[!CAUTION] Morphic is built with Vercel AI SDK RSC. AI SDK RSC is experimental and has some limitations. When using it in production, it is recommended to migrate to AI SDK UI.
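
For context, AI SDK RSC's generative-UI entry point is streamUI, which streams model output as React components. The sketch below is illustrative only; the model choice, prompt, and JSX are assumptions, not Morphic's actual implementation:

// Inside an async Server Action or RSC (sketch, not Morphic's code)
import { openai } from '@ai-sdk/openai'
import { streamUI } from 'ai/rsc'

const result = await streamUI({
  model: openai('gpt-4o-mini'),
  prompt: 'Summarize the latest AI news.',
  // text() maps streamed text content to JSX as it arrives.
  text: ({ content }) => <p>{content}</p>,
})
// result.value is a React node that can be streamed to the client.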


[!NOTE] Please note that there are differences between this repository and the official website morphic.sh. The official website is a fork of this repository with additional features such as authentication, which are necessary for providing the service online. The core source code of Morphic resides in this repository, and it's designed to be easily built and deployed.

🗂️ Overview

🛠 Features

  • Search and answer using GenerativeUI
  • Understand user's questions
  • Search history functionality
  • Share search results (Optional)
  • Video search support (Optional)
  • Get answers from specified URLs
  • Use as a search engine ※
  • Support for providers other than OpenAI
    • Google Generative AI Provider
    • Azure OpenAI Provider ※
    • Anthropic Provider
    • Ollama Provider
    • Groq Provider
  • Local Redis support
  • SearXNG Search API support with configurable search depth (basic or advanced)

🧱 Stack

🚀 Quickstart

1. Fork and Clone repo

Fork the repo to your GitHub account, then run the following command to clone the repo:

git clone git@github.com:[YOUR_GITHUB_ACCOUNT]/morphic.git

2. Install dependencies

cd morphic
bun install

3. Setting up Upstash Redis

Set up Upstash Redis: create a database and obtain your UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN. Refer to the Upstash guide for detailed instructions.

If you intend to use a local Redis, you can skip this step.

4. Fill out secrets

cp .env.local.example .env.local

Your .env.local file should look like this:

# OpenAI API key retrieved here: https://platform.openai.com/api-keys
OPENAI_API_KEY=

# Tavily API Key retrieved here: https://app.tavily.com/home
TAVILY_API_KEY=

# Upstash Redis URL and Token retrieved here: https://console.upstash.com/redis
UPSTASH_REDIS_REST_URL=
UPSTASH_REDIS_REST_TOKEN=

Redis Configuration

This application supports both Upstash Redis and local Redis; a minimal selection sketch follows the steps below. To use local Redis:

1. Set `USE_LOCAL_REDIS=true` in your `.env.local` file.
2. Optionally, set `LOCAL_REDIS_URL` if your local Redis is not running on the default `localhost:6379` or `redis://redis:6379` if you're using docker compose.

To use Upstash Redis:

1. Set `USE_LOCAL_REDIS=false` or leave it unset in your `.env.local` file.
2. Set `UPSTASH_REDIS_REST_URL` and `UPSTASH_REDIS_REST_TOKEN` with your Upstash credentials.
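
A minimal sketch of this env-driven selection, assuming the ioredis and @upstash/redis client libraries (Morphic's actual client wrapper may differ):

import { Redis as UpstashRedis } from '@upstash/redis'
import IORedis from 'ioredis'

// Choose the Redis client based on the environment variables above.
export function createRedisClient() {
  if (process.env.USE_LOCAL_REDIS === 'true') {
    // Fall back to the default local address when LOCAL_REDIS_URL is unset.
    return new IORedis(process.env.LOCAL_REDIS_URL ?? 'redis://localhost:6379')
  }
  // Otherwise use Upstash's REST-based client.
  return new UpstashRedis({
    url: process.env.UPSTASH_REDIS_REST_URL!,
    token: process.env.UPSTASH_REDIS_REST_TOKEN!,
  })
}

The two clients expose similar but not identical APIs, so real code would normalize them behind a shared interface. The remaining variables in the .env.local example configure SearXNG: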

# SearXNG Configuration
SEARXNG_API_URL=http://localhost:8080  # Replace with your local SearXNG API URL, or http://searxng:8080 when using Docker
SEARCH_API=tavily  # use searxng, tavily, or exa
SEARXNG_SECRET="" # generate a secret key e.g. openssl rand -base64 32
SEARXNG_PORT=8080 # default port
SEARXNG_BIND_ADDRESS=0.0.0.0 # default address
SEARXNG_IMAGE_PROXY=true # enable image proxy
SEARXNG_LIMITER=false # can be enabled to limit the number of requests per IP address
SEARXNG_DEFAULT_DEPTH=basic # Set to 'basic' or 'advanced', only affects SearXNG searches
SEARXNG_MAX_RESULTS=50 # Maximum number of results to return from SearXNG

5. Run app locally

Using Bun

To run the application locally using Bun, execute the following command:

bun dev

You can now visit http://localhost:3000 in your web browser.

Using Docker

To run the application using Docker, use the following command:

docker compose up -d

This will start the application in detached mode. You can access it at http://localhost:3000.

🌐 Deploy

Host your own live version of Morphic with Vercel or Cloudflare Pages.

Vercel

Deploy with Vercel

🔎 Search Engine

Setting up the Search Engine in Your Browser

If you want to use Morphic as a search engine in your browser, follow these steps:

  1. Open your browser settings.
  2. Navigate to the search engine settings section.
  3. Select "Manage search engines and site search".
  4. Under "Site search", click on "Add".
  5. Fill in the fields as follows:
    • Search engine: Morphic
    • Shortcut: morphic
    • URL with %s in place of query: https://morphic.sh/search?q=%s
  6. Click "Add" to save the new search engine.
  7. Find "Morphic" in the list of site search engines, click the three dots next to it, and select "Make default".

This will allow you to use Morphic as your default search engine in the browser. The %s in the URL template becomes the q query parameter that the /search route receives, as in the sketch below.
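
A hypothetical sketch of how the /search route might read that parameter, assuming a Next.js App Router page (not Morphic's actual code):

// app/search/page.tsx (illustrative)
interface SearchPageProps {
  searchParams: { q?: string }
}

export default function SearchPage({ searchParams }: SearchPageProps) {
  // The browser substitutes %s with the typed query, arriving here as ?q=...
  const query = searchParams.q ?? ''
  // ...hand the query to the search and generative-UI flow...
  return <main>Searching for: {query}</main>
}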

Using SearXNG as an Alternative Search Backend

Morphic now supports SearXNG as an alternative search backend with advanced search capabilities. To use SearXNG:

  1. Ensure you have Docker and Docker Compose installed on your system.

  2. In your .env.local file, set the following variables:

    • NEXT_PUBLIC_BASE_URL=http://localhost:3000 # Base URL for local development
    • SEARXNG_API_URL=http://localhost:8080 # Replace with your local SearXNG API URL, or http://searxng:8080 when using Docker
    • SEARXNG_SECRET=your_secret_key_here
    • SEARXNG_PORT=8080
    • SEARXNG_IMAGE_PROXY=true
    • SEARCH_API=searxng
    • SEARXNG_LIMITER=false # can be enabled to limit the number of requests per IP
    • SEARXNG_DEFAULT_DEPTH=basic # Set to 'basic' or 'advanced'
    • SEARXNG_MAX_RESULTS=50 # Maximum number of results to return from SearXNG
    • SEARXNG_ENGINES=google,bing,duckduckgo,wikipedia # can be overridden in searxng config
    • SEARXNG_TIME_RANGE=None # Time range for search results
    • SEARXNG_SAFESEARCH=0 # Safe search setting
    • SEARXNG_CRAWL_MULTIPLIER=4 # Multiplier for the number of results to crawl in advanced search
  3. Two configuration files are provided in the root directory:

    • searxng-settings.yml: This file contains the main configuration for SearXNG, including engine settings and server options.
    • searxng-limiter.toml: This file configures the rate limiting and bot detection features of SearXNG.
  4. Run docker compose up to start the Morphic stack with SearXNG included.

  5. SearXNG will be available at http://localhost:8080, and Morphic will use it as the search backend (a sketch of a typical request follows).
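
For reference, here is a minimal TypeScript sketch of querying SearXNG's HTTP API. It assumes the json format is enabled under search.formats in searxng-settings.yml; this is an illustration, not Morphic's actual retriever code:

// Hypothetical helper; field names follow SearXNG's JSON response format.
export async function searxngSearch(query: string) {
  const baseUrl = process.env.SEARXNG_API_URL ?? 'http://localhost:8080'
  const params = new URLSearchParams({ q: query, format: 'json' })
  const res = await fetch(`${baseUrl}/search?${params}`)
  if (!res.ok) throw new Error(`SearXNG request failed: ${res.status}`)
  const data = await res.json()
  // Each result carries url, title, and content fields, among others.
  return data.results as { url: string; title: string; content: string }[]
}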

Advanced Search Configuration

  • NEXT_PUBLIC_BASE_URL: Set this to your local development URL (http://localhost:3000) or your production URL when deploying.
  • SEARXNG_DEFAULT_DEPTH: Set to 'basic' or 'advanced' to control the default search depth.
  • SEARXNG_MAX_RESULTS: Maximum number of results to return from SearXNG.
  • SEARXNG_CRAWL_MULTIPLIER: In advanced search mode, this multiplier determines how many results to crawl. For example, if SEARXNG_MAX_RESULTS=10 and SEARXNG_CRAWL_MULTIPLIER=4, up to 40 results will be crawled before filtering and ranking.
  • SEARXNG_ENGINES: Comma-separated list of search engines to use.
  • SEARXNG_TIME_RANGE: Time range for search results (e.g., 'day', 'week', 'month', 'year', 'all').
  • SEARXNG_SAFESEARCH: Safe search setting (0 for off, 1 for moderate, 2 for strict).

The advanced search feature includes content crawling, relevance scoring, and filtering to provide more accurate and comprehensive results. The sketch below illustrates the crawl-budget arithmetic.
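
A small snippet showing that arithmetic (hypothetical variable names, not Morphic's actual code):

const maxResults = Number(process.env.SEARXNG_MAX_RESULTS ?? 50)
const crawlMultiplier = Number(process.env.SEARXNG_CRAWL_MULTIPLIER ?? 4)
const depth = process.env.SEARXNG_DEFAULT_DEPTH ?? 'basic'

// In advanced mode, crawl more candidates than will finally be returned,
// then filter and rank them down to maxResults.
// e.g. maxResults=10, crawlMultiplier=4 -> crawl up to 40 pages.
const crawlBudget = depth === 'advanced' ? maxResults * crawlMultiplier : maxResults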

Customizing SearXNG

  • You can modify searxng-settings.yml to enable/disable specific search engines, change UI settings, or adjust server options.
  • The searxng-limiter.toml file allows you to configure rate limiting and bot detection. This is useful if you're exposing SearXNG directly to the internet.
  • If you prefer not to use external configuration files, you can set these options using environment variables in the docker-compose.yml file or directly in the SearXNG container.

Troubleshooting

  • If you encounter issues with specific search engines (e.g., Wikidata), you can disable them in searxng-settings.yml:
engines:
  - name: wikidata
    disabled: true

✅ Verified models

List of models applicable to all

  • OpenAI
    • gpt-4o
    • gpt-4o-mini
    • gpt-4-turbo
    • gpt-3.5-turbo
  • Google
    • Gemini 1.5 Pro (Unstable)
    • Gemini 2.0 Flash (Experimental)
  • Anthropic
    • Claude 3.5 Sonnet
  • Ollama
    • qwen2.5
  • Groq
    • llama3-groq-8b-8192-tool-use-preview
    • llama3-groq-70b-8192-tool-use-preview