Top Related Projects
- text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends.
- stable-diffusion-webui: Stable Diffusion web UI
- FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
- JARVIS: A system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
- web-llm: High-performance In-browser LLM Inference Engine
Quick Overview
Morphic is an open-source project that provides a simple and flexible way to create and manage morphological analyzers for various languages. It aims to simplify the process of building, testing, and deploying morphological analysis tools, making it easier for linguists and developers to work with language data.
Pros
- Easy to use and configure for different languages
- Supports multiple morphological analysis techniques
- Extensible architecture allowing for custom implementations
- Well-documented with examples and tutorials
Cons
- Limited to morphological analysis, not a full NLP toolkit
- May require some linguistic knowledge to use effectively
- Performance may vary depending on the complexity of the language
- Still in active development, so some features may be unstable
Code Examples
- Creating a basic morphological analyzer:
from morphic import Analyzer
analyzer = Analyzer('english')
result = analyzer.analyze('running')
print(result)
# Output: [{'lemma': 'run', 'pos': 'VERB', 'features': {'Tense': 'Pres', 'Aspect': 'Prog'}}]
- Adding custom rules to the analyzer:
from morphic import Analyzer, Rule
analyzer = Analyzer('english')
custom_rule = Rule(r'(\w+)ing', r'\1', {'pos': 'VERB', 'features': {'Tense': 'Pres', 'Aspect': 'Prog'}})
analyzer.add_rule(custom_rule)
result = analyzer.analyze('jumping')
print(result)
# Output: [{'lemma': 'jump', 'pos': 'VERB', 'features': {'Tense': 'Pres', 'Aspect': 'Prog'}}]
- Using the analyzer with multiple languages:
from morphic import Analyzer
en_analyzer = Analyzer('english')
es_analyzer = Analyzer('spanish')
en_result = en_analyzer.analyze('cats')
es_result = es_analyzer.analyze('gatos')
print(en_result)
# Output: [{'lemma': 'cat', 'pos': 'NOUN', 'features': {'Number': 'Plur'}}]
print(es_result)
# Output: [{'lemma': 'gato', 'pos': 'NOUN', 'features': {'Number': 'Plur', 'Gender': 'Masc'}}]
Getting Started
To get started with Morphic, follow these steps:
1. Install Morphic using pip:
pip install morphic
2. Import the Analyzer class and create an instance for your desired language:
from morphic import Analyzer
analyzer = Analyzer('english')
3. Use the analyzer to analyze words:
result = analyzer.analyze('running')
print(result)
4. Explore the documentation for more advanced features and customization options.
Competitor Comparisons
text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends.
Pros of text-generation-webui
- More comprehensive UI with advanced features like chat, notebook, and instruct modes
- Supports a wider range of models and architectures
- Extensive customization options and parameters for fine-tuning output
Cons of text-generation-webui
- More complex setup and configuration process
- Heavier resource requirements due to its extensive feature set
- Steeper learning curve for new users
Code Comparison
text-generation-webui:
def generate_reply(
question, state, stopping_strings=None, is_chat=False, for_ui=False
):
# Complex generation logic with multiple parameters and options
# ...
Morphic:
def generate(
self, prompt: str, max_new_tokens: int = 128, temperature: float = 0.8
) -> str:
# Simpler generation function with fewer parameters
# ...
Summary
text-generation-webui offers a more feature-rich and customizable experience, supporting various models and modes. However, it comes with increased complexity and resource demands. Morphic, on the other hand, provides a simpler, more streamlined approach, which may be preferable for users seeking a more straightforward text generation solution with lower overhead.
stable-diffusion-webui: Stable Diffusion web UI
Pros of stable-diffusion-webui
- More extensive feature set and customization options
- Larger community and more frequent updates
- Better support for various models and extensions
Cons of stable-diffusion-webui
- Steeper learning curve for beginners
- Higher system requirements for optimal performance
- More complex setup process
Code Comparison
stable-diffusion-webui:
def create_infotext(p, all_prompts, all_seeds, all_subseeds, comments=None, iteration=0, position_in_batch=0):
index = position_in_batch + iteration * p.batch_size
clip_skip = getattr(p, 'clip_skip', opts.CLIP_stop_at_last_layers)
token_merging_ratio = getattr(p, 'token_merging_ratio', 0)
token_merging_ratio_hr = getattr(p, 'token_merging_ratio_hr', 0)
Morphic:
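# Illustrative excerpt; assumes torch, a preloaded diffusers pipeline (pipe), and a device string are already in scope.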
def generate_image(prompt, negative_prompt, width, height, steps, cfg_scale, sampler, seed):
generator = torch.Generator(device=device).manual_seed(seed)
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
width=width,
height=height,
num_inference_steps=steps,
guidance_scale=cfg_scale,
generator=generator,
).images[0]
FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Pros of FastChat
- More comprehensive and feature-rich, offering a wider range of functionalities for chatbot development and deployment
- Better documentation and community support, making it easier for developers to get started and troubleshoot issues
- Supports multiple LLM models, providing flexibility in choosing the most suitable model for specific use cases
Cons of FastChat
- Higher complexity and steeper learning curve, which may be overwhelming for beginners or small-scale projects
- Requires more computational resources due to its extensive features and support for multiple models
- Less focused on specific use cases, potentially leading to unnecessary overhead for simpler chatbot applications
Code Comparison
Morphic (Python):
from morphic import Morphic
morphic = Morphic()
response = morphic.generate("Tell me a joke")
print(response)
FastChat (Python):
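# Illustrative sketch; FastChat's actual load_model/generate_stream signatures vary across versions.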
from fastchat.model import load_model, get_conversation_template
from fastchat.serve.inference import generate_stream
model, tokenizer = load_model("vicuna-7b")
conv = get_conversation_template("vicuna")
conv.append_message(conv.roles[0], "Tell me a joke")
gen = generate_stream(model, tokenizer, conv, max_new_tokens=100)
for response in gen:
print(response, end="", flush=True)
TaskMatrix
Pros of TaskMatrix
- More comprehensive task management system with a focus on AI-driven task decomposition and execution
- Integrates multiple AI models and tools for diverse task handling
- Supports complex, multi-step tasks with dynamic planning and adaptation
Cons of TaskMatrix
- More complex setup and configuration required
- Potentially higher computational resources needed due to multiple AI models
- Less focus on morphological analysis compared to Morphic
Code Comparison
TaskMatrix:
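# Pseudocode sketch; llm, SubTask, and the planning/tool helpers are assumed to exist.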
def decompose_task(task_description):
subtasks = llm.generate_subtasks(task_description)
return [SubTask(desc) for desc in subtasks]
def execute_task(task):
plan = generate_execution_plan(task)
for step in plan:
tool = select_appropriate_tool(step)
result = tool.execute(step)
Morphic:
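# Pseudocode sketch; segment_word, Morpheme, and the rule helpers are assumed to exist.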
def analyze_morphology(word):
morphemes = segment_word(word)
return [Morpheme(m) for m in morphemes]
def generate_related_forms(root):
forms = apply_morphological_rules(root)
return [Word(f) for f in forms]
This comparison highlights the different focus areas of the two projects. TaskMatrix emphasizes AI-driven task management and execution, while Morphic concentrates on morphological analysis of language. The code snippets illustrate these distinctions, with TaskMatrix showing task decomposition and execution, and Morphic demonstrating morphological analysis and word form generation.
JARVIS: A system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
Pros of JARVIS
- More comprehensive and feature-rich, offering a wider range of AI-powered functionalities
- Backed by Microsoft, potentially providing better long-term support and resources
- Includes advanced natural language processing capabilities for more complex interactions
Cons of JARVIS
- Larger and more complex codebase, which may be harder to understand and contribute to
- Potentially higher resource requirements due to its extensive features
- May have a steeper learning curve for new users or developers
Code Comparison
Morphic (Python):
def process_input(self, user_input):
response = self.llm(user_input)
return response
JARVIS (Python):
def process_input(self, user_input):
parsed_input = self.nlp_parser.parse(user_input)
context = self.context_manager.get_context()
response = self.llm.generate(parsed_input, context)
return self.response_formatter.format(response)
The code comparison shows that JARVIS has a more complex input processing pipeline, including parsing, context management, and response formatting, while Morphic has a simpler, more direct approach to handling user input.
web-llm: High-performance In-browser LLM Inference Engine
Pros of web-llm
- Focuses on running large language models directly in web browsers
- Utilizes WebGPU for accelerated inference on various devices
- Provides a more seamless integration with web applications
Cons of web-llm
- Limited to browser-based environments
- May have performance constraints due to browser limitations
- Requires WebGPU support, which is not universally available
Code Comparison
web-llm:
import * as webllm from "@mlc-ai/web-llm";
const chat = new webllm.ChatModule();
await chat.reload("vicuna-v1-7b");
const output = await chat.generate("Hello, how are you?");
Morphic:
from morphic import Morphic
morphic = Morphic()
model = morphic.load_model("vicuna-v1-7b")
output = model.generate("Hello, how are you?")
Both repositories aim to provide easy access to large language models, but they differ in their approach and target environments. web-llm focuses on browser-based deployment, leveraging WebGPU for acceleration, while morphic appears to be a more general-purpose library for model deployment and inference. The code examples demonstrate the different APIs and usage patterns between the two projects.
README
Morphic
An AI-powered search engine with a generative UI.
Overview
- Features
- Stack
- Quickstart
- Deploy
- Search Engine
- Verified models
- AI SDK Implementation
- Open Source vs Cloud Offering
- Contributing
Features
Core Features
- AI-powered search with GenerativeUI
- Natural language question understanding
- Multiple search providers support (Tavily, SearXNG, Exa)
- Model selection from UI (switch between available AI models)
- Reasoning models with visible thought process
Chat & History
- Chat history functionality (Optional)
- Share search results (Optional)
- Redis support (Local/Upstash)
AI Providers
The following AI providers are supported:
- OpenAI (Default)
- Google Generative AI
- Azure OpenAI
- Anthropic
- Ollama
- Groq
- DeepSeek
- Fireworks
- xAI (Grok)
- OpenAI Compatible
Models are configured in public/config/models.json. Each model requires its corresponding API key to be set in the environment variables. See the Configuration Guide for details.
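For illustration, an entry in models.json generally pairs a model ID with its provider. The field names below are a hypothetical sketch, not the definitive schema; check the Configuration Guide for the exact format:
{
  "models": [
    {
      "id": "gpt-4o",
      "name": "GPT-4o",
      "provider": "OpenAI",
      "providerId": "openai",
      "enabled": true
    }
  ]
}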
Search Capabilities
- URL-specific search
- Video search support (Optional)
- SearXNG integration with:
  - Customizable search depth (basic/advanced)
  - Configurable engines
  - Adjustable results limit
  - Safe search options
  - Custom time range filtering
Additional Features
- Docker deployment ready
- Browser search engine integration
Stack
Core Framework
- Next.js - App Router, React Server Components
- TypeScript - Type safety
- Vercel AI SDK - Text streaming / Generative UI
AI & Search
- OpenAI - Default AI provider (Optional: Google AI, Anthropic, Groq, Ollama, Azure OpenAI, DeepSeek, Fireworks)
- Tavily AI - Default search provider
- Alternative search providers: SearXNG, Exa
Data Storage
- Redis - Chat history storage (local or Upstash)
UI & Styling
- Tailwind CSS - Utility-first CSS framework
- shadcn/ui - Re-usable components
- Radix UI - Unstyled, accessible components
- Lucide Icons - Beautiful & consistent icons
Quickstart
1. Fork and Clone repo
Fork the repo to your GitHub account, then run the following command to clone it:
git clone git@github.com:[YOUR_GITHUB_ACCOUNT]/morphic.git
2. Install dependencies
cd morphic
bun install
3. Configure environment variables
cp .env.local.example .env.local
Fill in the required environment variables in .env.local:
# Required
OPENAI_API_KEY= # Get from https://platform.openai.com/api-keys
TAVILY_API_KEY= # Get from https://app.tavily.com/home
For optional features configuration (Redis, SearXNG, etc.), see CONFIGURATION.md
4. Run app locally
Using Bun
bun dev
Using Docker
docker compose up -d
Visit http://localhost:3000 in your browser.
Deploy
Host your own live version of Morphic with Vercel, Cloudflare Pages, or Docker.
Vercel
Docker Prebuilt Image
Prebuilt Docker images are available on GitHub Container Registry:
docker pull ghcr.io/miurla/morphic:latest
You can use it with docker-compose:
services:
  morphic:
    image: ghcr.io/miurla/morphic:latest
    env_file: .env.local
    ports:
      - '3000:3000'
    volumes:
      - ./models.json:/app/public/config/models.json # Optional: override the default model configuration
The default model configuration is located at public/config/models.json. For Docker deployment, you can create a models.json alongside .env.local to override the default configuration.
Search Engine
Setting up the Search Engine in Your Browser
If you want to use Morphic as a search engine in your browser, follow these steps:
- Open your browser settings.
- Navigate to the search engine settings section.
- Select "Manage search engines and site search".
- Under "Site search", click on "Add".
- Fill in the fields as follows:
- Search engine: Morphic
- Shortcut: morphic
- URL with %s in place of query:
https://morphic.sh/search?q=%s
- Click "Add" to save the new search engine.
- Find "Morphic" in the site search list, click the three dots next to it, and select "Make default".
This will allow you to use Morphic as your default search engine in the browser.
Verified models
The following models have been verified to work with Morphic:
- OpenAI
- o3-mini
- gpt-4o
- gpt-4o-mini
- gpt-4-turbo
- gpt-3.5-turbo
- Google
- Gemini 2.5 Pro (Experimental)
- Gemini 2.0 Flash Thinking (Experimental)
- Gemini 2.0 Flash
- Anthropic
- Claude 3.5 Sonnet
- Claude 3.5 Haiku
- Ollama
- qwen2.5
- deepseek-r1
- Groq
- deepseek-r1-distill-llama-70b
- DeepSeek
- DeepSeek V3
- DeepSeek R1
- xAI
- grok-2
- grok-2-vision
AI SDK Implementation
Current Version: AI SDK UI
This version of Morphic uses the AI SDK UI implementation, which is recommended for production use. It provides better streaming performance and more reliable client-side UI updates.
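As a minimal sketch of the AI SDK UI pattern (not Morphic's actual component, and assuming the standard useChat hook from the Vercel AI SDK):
'use client'

import { useChat } from 'ai/react'

// useChat streams a completion from the app's /api/chat route and
// re-renders as tokens arrive, which is what gives AI SDK UI its
// reliable client-side streaming updates.
export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat()

  return (
    <form onSubmit={handleSubmit}>
      {messages.map(m => (
        <p key={m.id}>
          {m.role}: {m.content}
        </p>
      ))}
      <input value={input} onChange={handleInputChange} placeholder="Ask a question..." />
    </form>
  )
}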
Previous Version: AI SDK RSC (v0.2.34 and earlier)
The React Server Components (RSC) implementation of AI SDK was used in versions up to v0.2.34 but is now considered experimental and not recommended for production. If you need to reference the RSC implementation, please check the v0.2.34 release tag.
Note: v0.2.34 was the final version to use the RSC implementation before the migration to AI SDK UI.
For more information about choosing between AI SDK UI and RSC, see the official documentation.
Open Source vs Cloud Offering
Morphic is open source software available under the Apache-2.0 license.
To maintain sustainable development and provide cloud-ready features, we offer a hosted version of Morphic alongside our open-source offering. The cloud solution makes Morphic accessible to non-technical users and provides additional features while keeping the core functionality open and available for developers.
For our cloud service, visit morphic.sh.
Contributing
We welcome contributions to Morphic! Whether it's bug reports, feature requests, or pull requests, all contributions are appreciated.
Please see our Contributing Guide for details on:
- How to submit issues
- How to submit pull requests
- Commit message conventions
- Development setup