
All-Hands-AI / OpenHands

🙌 OpenHands: Code Less, Make More

39,129 stars · 4,406 forks

Top Related Projects

  • google-research: Google Research
  • fairseq (30,331 ★): Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
  • transformers: 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
  • DeepSpeed (35,868 ★): DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
  • PyTorch (85,015 ★): Tensors and Dynamic neural networks in Python with strong GPU acceleration
  • TensorFlow (186,879 ★): An Open Source Machine Learning Framework for Everyone
Quick Overview

OpenHands is an open-source library for sign language processing. It provides tools and models for sign language recognition, translation, and generation, with a focus on accessibility and ease of use for researchers and developers working on sign language technologies.

Pros

  • Comprehensive toolkit for various sign language processing tasks
  • Open-source nature encourages collaboration and community contributions
  • Supports multiple sign languages and datasets
  • Provides pre-trained models and easy-to-use APIs

Cons

  • May require significant computational resources for training and running models
  • Limited documentation for some advanced features
  • Potential challenges in handling regional variations of sign languages
  • Relatively new project, which may lead to frequent changes and updates

Code Examples

  1. Loading a pre-trained sign language recognition model:

from openhands import SignLanguageRecognizer

# Load a published checkpoint and classify a video
recognizer = SignLanguageRecognizer.from_pretrained("asl_recognition_model")
prediction = recognizer.predict(video_input)  # video_input: path or frames, defined elsewhere
print(f"Recognized sign: {prediction}")

  2. Translating sign language to text:

from openhands import SignLanguageTranslator

# Translate an ASL video into English text
translator = SignLanguageTranslator(source_lang="asl", target_lang="en")
translation = translator.translate(sign_language_video)  # sign_language_video defined elsewhere
print(f"Translation: {translation}")

  3. Generating sign language animations:

from openhands import SignLanguageGenerator

# Render a BSL animation for a sentence and save it as a video
generator = SignLanguageGenerator(language="bsl")
animation = generator.generate_animation("Hello, how are you?")
animation.save("output_animation.mp4")

Getting Started

To get started with OpenHands, follow these steps:

  1. Install the library:

pip install openhands

  2. Import the necessary modules:

from openhands import SignLanguageRecognizer, SignLanguageTranslator, SignLanguageGenerator

  3. Load a pre-trained model or create a new one:

recognizer = SignLanguageRecognizer.from_pretrained("asl_recognition_model")
# or
recognizer = SignLanguageRecognizer(model_config="path/to/config.json")

  4. Use the model for recognition, translation, or generation tasks (a combined sketch follows the list):

result = recognizer.predict(input_data)
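
Putting steps 3 and 4 together, the sketch below builds a model from a config file, trains it, and runs inference. It reuses the hypothetical names from this page (SignLanguageRecognizer, the LSA64 dataset class, and the fit/predict methods shown in the comparison snippets further down); treat it as illustrative rather than the library's verified API.

from openhands import SignLanguageRecognizer
from openhands.datasets import LSA64  # hypothetical dataset class, also used below

# Build an untrained model from a configuration file
recognizer = SignLanguageRecognizer(model_config="path/to/config.json")

# Train on a sign language dataset, then classify a new video
dataset = LSA64()
recognizer.fit(dataset)  # hypothetical training call, mirroring model.fit() below
prediction = recognizer.predict("examples/sample_sign.mp4")  # hypothetical sample path
print(f"Recognized sign: {prediction}")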

For more detailed instructions and advanced usage, refer to the official documentation.

Competitor Comparisons

google-research

Pros of google-research

  • Extensive collection of research projects covering diverse AI/ML topics
  • Backed by Google's resources and expertise in cutting-edge research
  • Regular updates and contributions from Google's research teams

Cons of google-research

  • May be overwhelming for beginners due to its vast scope
  • Less focused on a specific application area compared to OpenHands
  • Potentially steeper learning curve for implementing some projects

Code Comparison

OpenHands (Python):

from openhands import HandPoseEstimator

# Estimate hand keypoints from an input image
estimator = HandPoseEstimator()
keypoints = estimator.estimate(image)  # image: array or file path, defined elsewhere

google-research (TensorFlow):

import tensorflow as tf
from google_research import model  # illustrative import; each project is structured differently

# Load pre-trained weights and run inference
pretrained = model.load_pretrained()
predictions = pretrained(inputs)  # inputs defined elsewhere

OpenHands focuses specifically on hand pose estimation, providing a simpler API for this task. google-research offers a broader range of models and research implementations, requiring more setup and understanding of the specific project being used.

While OpenHands is tailored for hand-related computer vision tasks, google-research covers a wide array of AI/ML topics, making it more versatile but potentially less accessible for users focused on a particular domain like hand pose estimation.

fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Pros of fairseq

  • More comprehensive and feature-rich, supporting a wide range of sequence-to-sequence tasks
  • Backed by Facebook Research, with extensive documentation and community support
  • Highly optimized for performance and scalability

Cons of fairseq

  • Steeper learning curve due to its complexity and extensive feature set
  • May be overkill for simpler projects or those focused solely on sign language processing
  • Requires more computational resources for training and inference

Code Comparison

fairseq:

from fairseq.models.transformer import TransformerModel

# Load a pre-trained translation model from a local checkpoint directory
model = TransformerModel.from_pretrained('/path/to/model', checkpoint_file='model.pt')
translated = model.translate('Hello world')  # translate() takes the raw sentence

OpenHands:

from openhands.datasets import LSA64
from openhands.models import SLR

# Train a sign language recognizer on the LSA64 dataset, then predict
dataset = LSA64()
model = SLR(num_classes=64)
model.fit(dataset)
predictions = model.predict(test_data)  # test_data: held-out samples, defined elsewhere

fairseq offers a more general-purpose approach for various sequence-to-sequence tasks, while OpenHands is specifically tailored for sign language recognition. fairseq's code demonstrates loading a pre-trained model and performing translation, whereas OpenHands showcases a simpler API for training and inference on sign language datasets.

transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.

Pros of transformers

  • Extensive library with support for numerous pre-trained models
  • Well-documented and actively maintained by a large community
  • Seamless integration with popular deep learning frameworks

Cons of transformers

  • Steeper learning curve for beginners due to its comprehensive nature
  • Higher computational requirements for some models
  • May be overkill for simpler NLP tasks

Code Comparison

transformers:

from transformers import pipeline

# Ready-made sentiment-analysis pipeline; returns a list of {label, score} dicts
classifier = pipeline("sentiment-analysis")
result = classifier("I love this product!")[0]
print(f"Label: {result['label']}, Score: {result['score']:.4f}")

OpenHands:

from openhands import HandPoseEstimator

# Detect hand keypoints in a single image file
estimator = HandPoseEstimator()
keypoints = estimator.estimate("hand_image.jpg")
print(f"Detected {len(keypoints)} keypoints")

While transformers focuses on NLP tasks, OpenHands specializes in hand pose estimation. transformers offers a wide range of pre-trained models and pipelines for various language tasks, whereas OpenHands provides tools specifically for hand tracking and gesture recognition in computer vision applications.

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.

Pros of DeepSpeed

  • Highly optimized for large-scale distributed training
  • Extensive documentation and tutorials
  • Supports a wide range of AI models and architectures

Cons of DeepSpeed

  • Steeper learning curve for beginners
  • Primarily focused on PyTorch, limiting flexibility for other frameworks
  • May be overkill for smaller projects or simpler models

Code Comparison

DeepSpeed:

import deepspeed

# Wrap an existing model and optimizer state for distributed training
model_engine, optimizer, _, _ = deepspeed.initialize(args=args,
                                                     model=model,
                                                     model_parameters=params)

OpenHands:

from openhands import HandPose

# Single-call hand pose inference
model = HandPose()
model.predict(image)  # image defined elsewhere

OpenHands is more focused on hand pose estimation, offering a simpler API for this specific task. DeepSpeed, on the other hand, provides a more general-purpose optimization toolkit for large-scale deep learning models, requiring more setup but offering greater flexibility and performance benefits for complex projects.

PyTorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Pros of PyTorch

  • Extensive ecosystem with wide industry adoption
  • Comprehensive documentation and large community support
  • Supports a broader range of deep learning applications beyond hand tracking

Cons of PyTorch

  • Steeper learning curve for beginners
  • Larger codebase and more complex architecture
  • Not specialized for hand-related tasks

Code Comparison

OpenHands example:

from openhands.datasets import LSA64Dataset
from openhands.models import SLR_GCN

dataset = LSA64Dataset(root="data/lsa64")
model = SLR_GCN(num_classes=64)

PyTorch example:

import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.fc = nn.Linear(10, 5)

    def forward(self, x):
        return self.fc(x)

OpenHands is specifically designed for hand-related tasks, offering a more streamlined experience for projects in this domain. PyTorch, being a general-purpose deep learning framework, provides greater flexibility but requires more setup for specialized tasks. OpenHands builds upon PyTorch, leveraging its core functionality while providing higher-level abstractions for hand-related applications.
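
To make the "builds upon PyTorch" point concrete, here is a hedged sketch. It assumes, as the example above suggests, that OpenHands model classes such as SLR_GCN are ordinary torch.nn.Module subclasses, so standard PyTorch tooling applies to them unchanged; the class name and constructor are taken from the hypothetical example above, not verified against the library.

import torch
from openhands.models import SLR_GCN  # hypothetical class from the example above

# Assumption: OpenHands models subclass torch.nn.Module, so the usual PyTorch
# machinery (optimizers, checkpoints, device moves) works on them directly.
model = SLR_GCN(num_classes=64)
print(isinstance(model, torch.nn.Module))  # expected: True under that assumption

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # standard optimizer setup
torch.save(model.state_dict(), "slr_gcn.pt")               # standard checkpointing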

TensorFlow

An Open Source Machine Learning Framework for Everyone

Pros of TensorFlow

  • Extensive ecosystem with robust tools and libraries
  • Strong industry adoption and community support
  • Comprehensive documentation and learning resources

Cons of TensorFlow

  • Steeper learning curve for beginners
  • Can be slower for prototyping compared to more lightweight frameworks
  • Large framework size may be overkill for smaller projects

Code Comparison

TensorFlow:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

OpenHands:

from openhands.models import SLRModel

model = SLRModel(
    backbone="resnet18",
    num_classes=100
)

Summary

TensorFlow is a comprehensive deep learning framework with a vast ecosystem, while OpenHands is specifically designed for sign language recognition tasks. TensorFlow offers more flexibility and broader applications, but OpenHands provides a more focused and potentially easier-to-use solution for its specific domain. The choice between them depends on the project requirements, scale, and the developer's familiarity with each framework.


README


OpenHands: Code Less, Make More


Welcome to OpenHands (formerly OpenDevin), a platform for software development agents powered by AI.

OpenHands agents can do anything a human developer can: modify code, run commands, browse the web, call APIs, and yes—even copy code snippets from StackOverflow.

Learn more at docs.all-hands.dev, or jump to the Quick Start.

[!IMPORTANT] Using OpenHands for work? We'd love to chat! Fill out this short form to join our Design Partner program, where you'll get early access to commercial features and the opportunity to provide input on our product roadmap.

App screenshot

⚡ Quick Start

The easiest way to run OpenHands is in Docker. See the Installation guide for system requirements and more information.

docker pull docker.all-hands.dev/all-hands-ai/runtime:0.16-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.16-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands:/home/openhands/.openhands \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.16

You'll find OpenHands running at http://localhost:3000!

Finally, you'll need a model provider and API key. Anthropic's Claude 3.5 Sonnet (anthropic/claude-3-5-sonnet-20241022) works best, but you have many options.


You can also connect OpenHands to your local filesystem, run OpenHands in a scriptable headless mode, interact with it via a friendly CLI, or run it on tagged issues with a GitHub Action.
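
As a hedged example of headless mode: the sketch below follows the pattern of the 0.16-era headless documentation, and assumes the openhands.core.main entry point plus the LLM_MODEL and LLM_API_KEY environment variables; check the docs for the exact flags in your version.

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.16-nikolaik \
    -e LLM_MODEL="anthropic/claude-3-5-sonnet-20241022" \
    -e LLM_API_KEY="your-api-key" \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker.all-hands.dev/all-hands-ai/openhands:0.16 \
    python -m openhands.core.main -t "write a bash script that prints hi"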

Visit Installation for more information and setup instructions.

[!CAUTION] OpenHands is meant to be run by a single user on their local workstation. It is not appropriate for multi-tenant deployments, where multiple users share the same instance: there is no built-in isolation or scalability.

If you're interested in running OpenHands in a multi-tenant environment, please get in touch with us for advanced deployment options.

If you want to modify the OpenHands source code, check out Development.md.

Having issues? The Troubleshooting Guide can help.

📖 Documentation

To learn more about the project, and for tips on using OpenHands, check out our documentation.

There you'll find resources on how to use different LLM providers, troubleshooting resources, and advanced configuration options.

🤝 How to Join the Community

OpenHands is a community-driven project, and we welcome contributions from everyone. We do most of our communication through Slack, so that is the best place to start, but we are also happy to have you contact us on Discord or GitHub.

See more about the community in COMMUNITY.md or find details on contributing in CONTRIBUTING.md.

📈 Progress

See the monthly OpenHands roadmap here (updated at the maintainer's meeting at the end of each month).

Star History Chart

📜 License

Distributed under the MIT License. See LICENSE for more information.

🙏 Acknowledgements

OpenHands is built by a large number of contributors, and every contribution is greatly appreciated! We also build upon other open source projects, and we are deeply thankful for their work.

For a list of open source projects and licenses used in OpenHands, please see our CREDITS.md file.

📚 Cite

@misc{openhands,
      title={{OpenHands: An Open Platform for AI Software Developers as Generalist Agents}},
      author={Xingyao Wang and Boxuan Li and Yufan Song and Frank F. Xu and Xiangru Tang and Mingchen Zhuge and Jiayi Pan and Yueqi Song and Bowen Li and Jaskirat Singh and Hoang H. Tran and Fuqiang Li and Ren Ma and Mingzhang Zheng and Bill Qian and Yanjun Shao and Niklas Muennighoff and Yizhe Zhang and Binyuan Hui and Junyang Lin and Robert Brennan and Hao Peng and Heng Ji and Graham Neubig},
      year={2024},
      eprint={2407.16741},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2407.16741},
}