Top Related Projects
The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
Models and examples built with TensorFlow
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Quick Overview
Scissors is an open-source, fixed-viewport image cropping library for Android developed by Lyft. It provides a CropView widget for selecting and cropping portions of images, with a configurable viewport aspect ratio and built-in support for loading bitmaps with Picasso, Glide, or Universal Image Loader.
Pros
- Easy integration with existing Android projects
- Built-in support for Picasso, Glide, and Universal Image Loader
- Configurable viewport aspect ratio
- Simple, fluent extensions API for loading images and saving crops
Cons
- Limited to the Android platform
- May require additional customization for specific use cases
- Documentation could be more comprehensive
- Archived and no longer accepting contributions
Code Examples
- Basic usage:
<com.lyft.android.scissors.CropView
    android:id="@+id/crop_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:cropviewViewportRatio="1" />
CropView cropView = (CropView) findViewById(R.id.crop_view);
cropView.setImageBitmap(someBitmap);
This declares a CropView in your layout and sets the Bitmap to be cropped.
- Setting a custom viewport aspect ratio:
app:cropviewViewportRatio="1.78"
This attribute fixes the viewport to roughly a 16:9 aspect ratio; a value of 1 gives a square viewport.
- Loading an image through the extensions API:
cropView.extensions()
    .load(galleryUri);
This loads a Bitmap into the CropView using Picasso, Glide, or Universal Image Loader.
- Obtaining the cropped image:
Bitmap croppedBitmap = cropView.crop();
crop() returns a Bitmap matching the viewport dimensions, which you can then display or save.
Getting Started
- Add Scissors to your project with Gradle:
compile 'com.lyft:scissors:1.1.1'
- Add a CropView to your layout:
<com.lyft.android.scissors.CropView
    android:id="@+id/crop_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:cropviewViewportRatio="1" />
- Set the Bitmap to be cropped:
cropView.setImageBitmap(someBitmap);
- Call crop() to obtain a Bitmap matching the viewport dimensions, as in the sketch after this list:
Bitmap croppedBitmap = cropView.crop();
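The pieces above can be combined into one small Activity. The following is a minimal sketch, assuming a layout named activity_crop that contains the CropView declared earlier plus a button with id crop_button; the layout name, button id, and drawable resource are hypothetical:
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import com.lyft.android.scissors.CropView;

public class CropActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_crop); // hypothetical layout containing the CropView above
        final CropView cropView = (CropView) findViewById(R.id.crop_view);
        // Decode a Bitmap to crop; R.drawable.example is a placeholder resource
        Bitmap someBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.example);
        cropView.setImageBitmap(someBitmap);
        Button cropButton = (Button) findViewById(R.id.crop_button); // hypothetical button
        cropButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // crop() returns a Bitmap matching the viewport dimensions
                Bitmap cropped = cropView.crop();
                // ... display or persist the cropped Bitmap here
            }
        });
    }
}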
Competitor Comparisons
The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Pros of Segment Anything
- More advanced and versatile image segmentation capabilities
- Larger and more active community support
- Extensive documentation and pre-trained models available
Cons of Segment Anything
- Higher computational requirements and complexity
- Steeper learning curve for implementation and customization
- May be overkill for simpler image processing tasks
Code Comparison
Scissors (Android, Java):
// Set a Bitmap on the CropView and crop it to the viewport
cropView.setImageBitmap(someBitmap);
Bitmap croppedBitmap = cropView.crop();
Segment Anything:
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["default"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)  # image: an HxWx3 RGB numpy array
masks, _, _ = predictor.predict(point_coords=input_point, point_labels=input_label)
Summary
Segment Anything offers advanced image segmentation capabilities with extensive community support and pre-trained models. However, it comes with higher complexity and computational requirements. Scissors, on the other hand, provides a simple viewport-based approach to image cropping on Android, making it more suitable for straightforward in-app cropping needs. The choice between the two depends on whether the project calls for learned segmentation or plain user-driven cropping.
CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
Pros of CLIP
- More versatile for general image-text understanding tasks
- Larger model with more advanced capabilities in multimodal learning
- Backed by OpenAI's research and resources
Cons of CLIP
- Potentially higher computational requirements
- May be overkill for simpler image processing tasks
- Less focused on specific image manipulation operations
Code Comparison
CLIP example:
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32", device="cuda")
image = preprocess(Image.open("image.jpg")).unsqueeze(0).to("cuda")
text = clip.tokenize(["a dog", "a cat"]).to("cuda")
with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
Scissors example (Android, Java):
// Load an image into the CropView via the extensions API, then crop
cropView.extensions().load(galleryUri);
Bitmap croppedBitmap = cropView.crop();
Summary
CLIP is a more advanced, general-purpose model for image-text understanding, while Scissors is a focused Android image cropping library. CLIP offers broader capabilities but may require more resources, whereas Scissors provides simple, targeted functionality for in-app cropping.
DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.
Pros of DeepLab2
- More comprehensive and advanced semantic segmentation framework
- Includes state-of-the-art models and techniques for image segmentation
- Actively maintained with regular updates and improvements
Cons of DeepLab2
- Higher complexity and steeper learning curve
- Requires more computational resources for training and inference
- Less focused on specific use cases compared to Scissors
Code Comparison
DeepLab2 (Python):
# Illustrative sketch only; real DeepLab2 models are assembled from config files
model = deeplab2.Model(num_classes=21)
inputs = tf.keras.Input(shape=(None, None, 3))
outputs = model(inputs)
Scissors (Java):
// Crop the viewport contents and write them to a file
cropView.extensions()
    .crop()
    .quality(87)
    .format(PNG)
    .into(croppedFile);
Summary
DeepLab2 is a more comprehensive and advanced semantic segmentation framework, offering state-of-the-art models and techniques. However, it comes with increased complexity and resource requirements. Scissors, on the other hand, is focused on a single use case, in-app image cropping on Android, and is much easier to adopt, but it offers none of DeepLab2's learned segmentation capabilities. The choice between the two depends on the specific requirements of the project and the expertise of the development team.
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
Pros of Detectron2
- More comprehensive computer vision library with support for object detection, segmentation, and other tasks
- Extensive documentation and tutorials for easier adoption
- Larger community and more frequent updates
Cons of Detectron2
- Steeper learning curve due to its broader scope
- Heavier resource requirements for training and inference
- Less focused on specific use cases compared to Scissors
Code Comparison
Scissors (image cropping, Java):
// Viewport ratio is set via app:cropviewViewportRatio in the layout
cropView.setImageBitmap(someBitmap);
Bitmap croppedBitmap = cropView.crop();
Detectron2 (Object detection):
from detectron2 import model_zoo
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
predictor = DefaultPredictor(cfg)
outputs = predictor(image)  # image: a BGR numpy array
While both libraries work with images, Scissors focuses on image cropping, whereas Detectron2 covers a wide range of computer vision tasks. Scissors provides a simpler API for its specific use case, while Detectron2 requires more setup but offers far greater flexibility for vision applications.
Models and examples built with TensorFlow
Pros of TensorFlow Models
- Extensive collection of pre-trained models for various AI tasks
- Well-documented and maintained by a large community
- Seamless integration with TensorFlow ecosystem
Cons of TensorFlow Models
- Larger repository size, potentially overwhelming for beginners
- Steeper learning curve due to complex model architectures
- May require more computational resources for some models
Code Comparison
Scissors (Java):
// Load an image into the CropView, then crop to the viewport
cropView.extensions().load(galleryUri);
Bitmap croppedBitmap = cropView.crop();
TensorFlow Models (Python):
import tensorflow as tf
# Module path within the Model Garden; it has moved across releases
from official.vision.image_classification import resnet_model

model = resnet_model.resnet50(num_classes=1000)
inputs = tf.keras.Input(shape=(224, 224, 3))
outputs = model(inputs)
Summary
Scissors focuses on image cropping with a simple API, while TensorFlow Models offers a broad range of AI models and applications. Scissors is lightweight and easy to use for its single task, whereas TensorFlow Models provides more flexibility and advanced features for diverse AI projects. The choice between them depends on the specific requirements of your project and your familiarity with deep learning frameworks.
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Pros of YOLOv5
- More comprehensive object detection framework with pre-trained models
- Extensive documentation and community support
- Actively maintained with frequent updates and improvements
Cons of YOLOv5
- Larger codebase and potentially steeper learning curve
- May be overkill for simpler image processing tasks
Code Comparison
YOLOv5:
import torch

# Load a pretrained YOLOv5 model from Torch Hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
# Perform object detection
results = model('image.jpg')
Scissors (Java):
// Load an image and crop it to the viewport
cropView.extensions().load(galleryUri);
Bitmap croppedBitmap = cropView.crop();
Summary
YOLOv5 is a robust object detection framework with extensive features and community support, while Scissors handles the much simpler task of in-app image cropping. YOLOv5 offers pre-trained models and a comprehensive approach to computer vision, but may be more complex than basic use cases require. Scissors provides a straightforward cropping API that is easy to adopt, though it performs no object detection at all.
README
⚠️ This repository has been archived and is no longer accepting contributions ⚠️
Scissors
Fixed viewport image cropping library for Android with built-in support for Picasso, Glide or Universal Image Loader.
Usage
See the scissors-sample project.
- Include it in your layout:
<com.lyft.android.scissors.CropView
    android:id="@+id/crop_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:cropviewViewportRatio="1" />
- Set a Bitmap to be cropped, for example by calling:
cropView.setImageBitmap(someBitmap);
- Call:
Bitmap croppedBitmap = cropView.crop();
to obtain a cropped Bitmap matching the viewport dimensions.
Extensions
Scissors comes with handy extensions which help with common tasks like:
Loading a Bitmap
To load a Bitmap automatically with Picasso, Glide, or Universal Image Loader into a CropView, use as follows:
cropView.extensions()
.load(galleryUri);
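The load call above assumes you already have a galleryUri. One common way to obtain it is the standard Android gallery picker; the following is a minimal sketch under that assumption (the activity name and request-code constant are illustrative):
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.provider.MediaStore;
import com.lyft.android.scissors.CropView;

public class PickImageActivity extends Activity {
    private static final int PICK_IMAGE_REQUEST = 1; // arbitrary request code
    private CropView cropView; // assigned in onCreate via findViewById

    private void pickFromGallery() {
        // Launch the system gallery picker
        Intent intent = new Intent(Intent.ACTION_PICK, MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
        startActivityForResult(intent, PICK_IMAGE_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == PICK_IMAGE_REQUEST && resultCode == RESULT_OK && data != null) {
            Uri galleryUri = data.getData();
            cropView.extensions().load(galleryUri); // hand the picked Uri to Scissors
        }
    }
}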
Cropping into a File
To save a cropped Bitmap into a File, use as follows:
cropView.extensions()
    .crop()
    .quality(87)
    .format(PNG)
    .into(croppedFile);
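Here PNG presumably refers to Bitmap.CompressFormat.PNG via a static import, and croppedFile is any writable destination. A minimal sketch under those assumptions, with an illustrative cache-directory path:
import java.io.File;
import static android.graphics.Bitmap.CompressFormat.PNG;

// Inside an Activity: write the cropped result to the app's cache directory
File croppedFile = new File(getCacheDir(), "cropped.png"); // filename is illustrative
cropView.extensions()
    .crop()
    .quality(87)   // compression quality, 0-100
    .format(PNG)   // assuming Bitmap.CompressFormat.PNG
    .into(croppedFile);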
Questions
For questions, please use GitHub issues and mark them with the "question" label.
Download
compile 'com.lyft:scissors:1.1.1'
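Note that compile is the dependency configuration from the Gradle versions of the library's era; modern Android Gradle builds have replaced it with implementation, so the equivalent line today would presumably be:
implementation 'com.lyft:scissors:1.1.1'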
Snapshots of the development version are available in Sonatype's snapshots repository.
License
Copyright (C) 2015 Lyft, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Contributing
Please see CONTRIBUTING.md.
Contributors