
motiondivision/motion

A modern animation library for React and JavaScript


Top Related Projects

  • Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
  • YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
  • Models and examples built with TensorFlow
  • Code release for "Masked-attention Mask Transformer for Universal Image Segmentation"
  • OpenMMLab Detection Toolbox and Benchmark
  • Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

Quick Overview

Motion is an open source animation library for JavaScript and React. It provides a simple, declarative API for creating production-ready animations, combining the flexibility of JavaScript animations with the performance of native browser APIs, so developers and designers can add smooth, high-quality motion to web interfaces.

Pros

  • Easy-to-use API for creating complex animations
  • High-quality output with smooth transitions and effects
  • Supports a wide range of animation types and styles
  • First-class APIs for both vanilla JavaScript and React

Cons

  • Limited documentation and examples for advanced features
  • May have a steeper learning curve for those new to animation concepts
  • Performance can be slower for very complex animations
  • Lacks some advanced features found in professional animation software

Code Examples

Animating an element with the JavaScript animate function:

import { animate } from "motion"

// Move, scale and fade the element matching "#box"
animate("#box", { x: 100, scale: 2, opacity: 0.5 }, { duration: 1 })

Animating a React component:

import { motion } from "motion/react"

function FadeIn() {
    return (
        <motion.div
            initial={{ opacity: 0, y: 20 }}
            animate={{ opacity: 1, y: 0 }}
            transition={{ duration: 0.5 }}
        />
    )
}

Animating through keyframes:

import { animate } from "motion"

// Keyframe arrays animate through each value in turn
animate("#box", { rotate: [0, 90, 0] }, { duration: 2 })

Getting Started

To get started with Motion, follow these steps:

  1. Install Motion via your package manager:

    npm install motion
    
  2. Import the animate function in a JavaScript file:

    import { animate } from "motion"
    
  3. Animate an element on the page:

    animate("#box", { x: 100 })
    
  4. In React, import the motion component and animate declaratively:

    import { motion } from "motion/react"

    function Box() {
        return <motion.div animate={{ x: 100 }} />
    }
    

This animates the element 100 pixels along the x axis. From here you can explore springs, keyframes, gestures and scroll-driven animations with the Motion library.
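
As a next step, the snippet below is a minimal sketch of staggering an entrance animation across several elements with the stagger helper exported by the motion package; the "li" selector and the 0.1-second delay are illustrative values rather than part of the steps above.

import { animate, stagger } from "motion"

// Fade list items in one after another, 0.1s apart
animate("li", { opacity: [0, 1] }, { delay: stagger(0.1) })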

Competitor Comparisons

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.

Pros of Detectron2

  • More comprehensive and feature-rich, offering a wide range of object detection and segmentation models
  • Better documentation and community support, with extensive tutorials and examples
  • Backed by Facebook AI Research, ensuring regular updates and maintenance

Cons of Detectron2

  • Steeper learning curve due to its complexity and extensive feature set
  • Heavier and more resource-intensive, potentially overkill for simpler projects
  • Primarily focused on static image analysis, less optimized for video processing

Code Comparison

Motion:

import { animate } from "motion"

// Animate a DOM element 100px along the x axis
animate("#box", { x: 100 })

Detectron2:

from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("config.yaml")
predictor = DefaultPredictor(cfg)
outputs = predictor(image)

Motion offers a small, declarative API for animating DOM elements, while Detectron2 requires more setup and targets computer vision tasks; the two libraries address entirely different problems.


YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite

Pros of YOLOv5

  • More established and widely adopted in the computer vision community
  • Extensive documentation and pre-trained models available
  • Supports a broader range of object detection tasks

Cons of YOLOv5

  • Larger model size and potentially higher computational requirements
  • Steeper learning curve for beginners
  • Less focused on motion-specific applications

Code Comparison

YOLOv5:

import torch

# Load the pre-trained YOLOv5s model from the official repo
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
results = model('image.jpg')
results.show()

Motion:

import { animate } from "motion"

// Fade and slide an element into place
animate(".card", { opacity: [0, 1], y: [20, 0] }, { duration: 0.5 })

Key Differences

Motion is a web animation library for JavaScript and React, while YOLOv5 is a general-purpose object detection framework, so the two solve unrelated problems. YOLOv5 is the natural choice for computer vision work, whereas Motion is aimed at animating user interfaces.


Models and examples built with TensorFlow

Pros of TensorFlow Models

  • Extensive collection of pre-trained models and implementations
  • Backed by Google, with a large community and frequent updates
  • Comprehensive documentation and tutorials

Cons of TensorFlow Models

  • Can be complex and overwhelming for beginners
  • Requires more computational resources for many models
  • Steeper learning curve compared to Motion

Code Comparison

Motion:

import { motion } from "motion/react"

// Declaratively animate a component as it mounts
function Card() {
    return <motion.div initial={{ opacity: 0 }} animate={{ opacity: 1 }} />
}

TensorFlow Models:

import numpy as np
import tensorflow as tf
from object_detection.utils import label_map_util

# image_np: an HxWx3 numpy array loaded elsewhere
model = tf.saved_model.load('path/to/model')
category_index = label_map_util.create_category_index_from_labelmap('path/to/labelmap.pbtxt')

image = tf.convert_to_tensor(np.expand_dims(image_np, 0), dtype=tf.float32)
detections = model(image)

Motion focuses on simplicity and ease of use for animating web interfaces, making it well suited to quick prototyping and UI work. TensorFlow Models offers a wide range of pre-trained models and advanced features for large-scale machine learning tasks, which is an entirely different problem space.

Code release for "Masked-attention Mask Transformer for Universal Image Segmentation"

Pros of Mask2Former

  • More advanced and versatile architecture for instance segmentation tasks
  • Supports multiple datasets and benchmarks out-of-the-box
  • Extensive documentation and pre-trained models available

Cons of Mask2Former

  • Higher computational requirements and complexity
  • Steeper learning curve for implementation and customization
  • Less focus on real-time performance compared to Motion

Code Comparison

Motion:

import { animate } from "motion"

// Scale an element up over half a second
animate(".thumbnail", { scale: 1.2 }, { duration: 0.5 })

Mask2Former:

from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from mask2former import add_maskformer2_config

# Register Mask2Former config options before building the predictor
cfg = get_cfg()
add_maskformer2_config(cfg)
predictor = DefaultPredictor(cfg)
outputs = predictor(image)

Motion is a straightforward choice for animating web interfaces, while Mask2Former offers a comprehensive solution for instance segmentation; the two target different domains, and Mask2Former requires considerably more setup and configuration.

OpenMMLab Detection Toolbox and Benchmark

Pros of mmdetection

  • Extensive collection of object detection algorithms and models
  • Well-documented with comprehensive tutorials and examples
  • Active community and frequent updates

Cons of mmdetection

  • Steeper learning curve due to its complexity and extensive features
  • Larger codebase, which may be overwhelming for simple projects
  • Primarily focused on object detection, less versatile for other computer vision tasks

Code Comparison

mmdetection:

from mmdet.apis import init_detector, inference_detector

config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'test.jpg')

Motion:

import { motion } from "motion/react"

// Animate on hover and tap with gesture props
function Button() {
    return <motion.button whileHover={{ scale: 1.1 }} whileTap={{ scale: 0.95 }}>Save</motion.button>
}

The code comparison shows that mmdetection requires model configuration and checkpoint setup before inference, while Motion exposes a compact API for animating elements directly. mmdetection provides fine-grained control over detection models and the inference process, whereas Motion focuses on ease of use for interface animation, so the two serve different purposes.


Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow

Pros of Mask_RCNN

  • More established and widely used for instance segmentation tasks
  • Extensive documentation and community support
  • Built on top of popular deep learning frameworks (TensorFlow, Keras)

Cons of Mask_RCNN

  • Heavier and more complex architecture
  • Requires more computational resources
  • Less suitable for real-time applications

Code Comparison

Mask_RCNN:

import mrcnn.model as modellib
from mrcnn import utils

model = modellib.MaskRCNN(mode="inference", config=config, model_dir=MODEL_DIR)
model.load_weights(WEIGHTS_PATH, by_name=True)
results = model.detect([image], verbose=1)

Motion:

import { animate } from "motion"

// animate accepts DOM elements as well as selectors
const box = document.querySelector("#box")
animate(box, { opacity: 0.5, y: -20 })

Motion focuses on real-time animation in the browser, while Mask_RCNN specializes in instance segmentation for computer vision. Motion is lightweight and runs client-side, making it suitable for interactive web interfaces, whereas Mask_RCNN produces detailed segmentation masks at the cost of substantial computational requirements.


README


Motion

An open source motion library for JavaScript and React.


Motion is the only animation library with first-class APIs for both JavaScript and React.

It's the only animation library with a hybrid engine, combining the power of JavaScript animations with the performance of native browser APIs.
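
As a rough sketch of what that hybrid split means in practice (the "#card" selector and values are illustrative): transforms and opacity are the kinds of values that can typically be handed off to the browser's native animation engine, while other CSS values are driven by the JavaScript engine.

import { animate } from "motion"

// Transforms and opacity can typically run on the browser's
// native animation engine (hardware accelerated where available)
animate("#card", { x: 100, opacity: 0.5 })

// Values such as background-color are driven by the JavaScript engine
animate("#card", { backgroundColor: "#0066ff" }, { duration: 0.5 })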

🏎️ Quick start

Install motion via your package manager:

npm install motion

JavaScript

import { animate } from "motion"

animate("#box", { x: 100 })

Read the full JavaScript docs.

React

import { motion } from "motion/react"

function Component() {
    return <motion.div animate={{ x: 100 }} />
}
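
Exit animations work by wrapping conditionally rendered children in AnimatePresence; a minimal sketch, where the Toast component and its visible prop are made-up names:

import { AnimatePresence, motion } from "motion/react"

function Toast({ visible }) {
    return (
        <AnimatePresence>
            {visible && (
                <motion.div
                    initial={{ opacity: 0, y: 20 }}
                    animate={{ opacity: 1, y: 0 }}
                    exit={{ opacity: 0 }}
                />
            )}
        </AnimatePresence>
    )
}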

Read the full React docs.

💎 Contribute

👩🏻‍⚖️ License

  • Motion is MIT licensed.

✨ Sponsors

Motion is sustainable thanks to the kind support of its sponsors.

Partners

Framer

Motion powers Framer animations, the web builder for creative pros. Design and ship your dream site. Zero code, maximum speed.


Platinum

Syntax.fm, Tailwind, Emil Kowalski, Linear

Gold

Liveblocks, Luma

Silver

Frontend.fyi, Statamic, Firecrawl, Puzzmo, Build UI, Hover

Personal