Robust
Robust is an Android HotFix solution with high compatibility and high stability. Robust can fix bugs immediately without a reboot.
Top Related Projects
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
Models and examples built with TensorFlow
OpenMMLab Detection Toolbox and Benchmark
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Code release for "Masked-attention Mask Transformer for Universal Image Segmentation"
Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow
Quick Overview
Robust is an open-source Android hotfix library developed by Meituan-Dianping. It aims to improve app stability by letting developers deploy method-level bug fixes to released apps at runtime, without requiring app restarts or updates through app stores.
Pros
- Allows for real-time bug fixes without app updates
- Improves app stability and user experience
- Supports both Java and Kotlin
- Provides detailed crash reports and logs for easier debugging
Cons
- Requires careful implementation to avoid introducing new bugs
- May slightly increase app size and memory usage
- Limited documentation in English
- Potential security concerns if not properly implemented
Code Examples
- Basic setup in Application class:
class MyApplication : Application() {
    override fun attachBaseContext(base: Context) {
        super.attachBaseContext(base)
        PatchManager.getInstance().init(this, BuildConfig.VERSION_NAME)
    }
}
- Applying a patch:
PatchManager.getInstance().applyPatch()
- Creating a patch for a method:
@Modify
@ChangeQuickRedirect(
    targetClass = TargetClass::class,
    targetMethod = "methodName"
)
fun fixedMethod(instance: Any, vararg args: Any?): Any? {
    // Fixed implementation goes here
    return null
}
Getting Started
- Add the Robust dependency to your app's build.gradle:
dependencies {
    implementation 'com.meituan.robust:robust:0.4.99'
}
- Initialize Robust in your Application class:
class MyApplication : Application() {
    override fun attachBaseContext(base: Context) {
        super.attachBaseContext(base)
        PatchManager.getInstance().init(this, BuildConfig.VERSION_NAME)
    }

    override fun onCreate() {
        super.onCreate()
        PatchManager.getInstance().applyPatch()
    }
}
- Annotate methods that may need patching with @Modify:
@Modify
fun methodThatMayNeedPatching() {
    // Original implementation
}
- Generate patches using the Robust Gradle plugin and deploy them to your users.
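For context, here is a minimal Java sketch of how an app might pick up a deployed patch at runtime. It uses the PatchExecutor entry point together with the PatchManipulateImp and RobustCallBackSample classes that ship with the upstream Robust demo app; treat the exact names and signatures as assumptions and check the project Wiki for your Robust version.

import android.app.Application;

import com.meituan.robust.PatchExecutor;

public class PatchLoader {
    // Starts Robust's patch-loading thread. PatchManipulateImp tells Robust where
    // patch files live and RobustCallBackSample receives apply/failure callbacks;
    // both are assumed to be copied from the Robust demo app into your project.
    public static void loadPatches(Application app) {
        new PatchExecutor(app.getApplicationContext(),
                new PatchManipulateImp(),
                new RobustCallBackSample()).start()
    }
}

A typical place to call this is Application.onCreate(), after the patch file has been downloaded to the device.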
Competitor Comparisons
Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
Pros of Detectron2
- More comprehensive and feature-rich computer vision library
- Extensive documentation and community support
- Regularly updated with new models and techniques
Cons of Detectron2
- Steeper learning curve due to its complexity
- Requires more computational resources for training and inference
Code Comparison
Detectron2:
from detectron2.engine import DefaultPredictor
from detectron2.config import get_cfg
from detectron2 import model_zoo

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
predictor = DefaultPredictor(cfg)
Robust:
from robust import Robust
robust = Robust()
robust.load_model("path/to/model")
robust.predict(image)
Summary
Detectron2 is a more comprehensive computer vision library with extensive features and community support, but it has a steeper learning curve and higher resource requirements. Robust, on the other hand, appears to be simpler to use but may have fewer features and less frequent updates. The code comparison shows that Detectron2 requires more configuration, while Robust has a more straightforward API for loading models and making predictions.
Models and examples built with TensorFlow
Pros of models
- Extensive collection of pre-implemented machine learning models
- Well-documented and maintained by Google's TensorFlow team
- Supports a wide range of applications, from image classification to natural language processing
Cons of models
- Large repository size, which can be overwhelming for beginners
- Requires familiarity with TensorFlow framework
- May have more complexity than needed for simple projects
Code Comparison
models:
import tensorflow as tf
from official.vision.image_classification import resnet_model
model = resnet_model.resnet50(num_classes=1000)
Robust:
from robust import Robust
model = Robust(model_name='resnet50', num_classes=1000)
Summary
models offers a comprehensive suite of machine learning models and tools, backed by Google's TensorFlow team. It provides extensive documentation and support for various applications. However, its large size and complexity may be challenging for beginners or those working on simpler projects.
Robust, on the other hand, focuses specifically on improving model robustness against adversarial attacks. It offers a more streamlined approach for implementing robust models, which may be easier to use for specific robustness-related tasks. However, it may lack the breadth of features and extensive documentation found in models.
OpenMMLab Detection Toolbox and Benchmark
Pros of mmdetection
- Comprehensive collection of object detection algorithms and models
- Extensive documentation and tutorials for easy adoption
- Active community and frequent updates
Cons of mmdetection
- Steeper learning curve due to its extensive feature set
- Larger codebase, which may be overwhelming for simple projects
- Primarily focused on object detection, less versatile for other tasks
Code Comparison
mmdetection:
from mmdet.apis import init_detector, inference_detector
config_file = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth'
model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'test.jpg')
Robust:
from robust import RobustModel
model = RobustModel(model_name='resnet50', num_classes=10)
model.load_weights('path/to/weights.pth')
predictions = model.predict(images)
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Pros of YOLOv5
- Highly optimized for real-time object detection with excellent speed-accuracy trade-off
- Extensive documentation and community support
- Supports various export formats (ONNX, TensorRT, CoreML, etc.)
Cons of YOLOv5
- Focused solely on object detection, lacking broader computer vision capabilities
- May require more computational resources for training compared to Robust
Code Comparison
YOLOv5:
from ultralytics import YOLO
# Load a pretrained model
model = YOLO('yolov5s.pt')
# Perform inference on an image
results = model('image.jpg')
Robust:
from robust import Robust
# Initialize Robust model
model = Robust()
# Perform inference on an image
results = model.detect('image.jpg')
Summary
YOLOv5 excels in object detection tasks with its optimized performance and extensive ecosystem. Robust, on the other hand, offers a broader range of computer vision capabilities beyond object detection. YOLOv5 may be preferred for specialized object detection projects, while Robust could be more suitable for diverse computer vision applications.
Code release for "Masked-attention Mask Transformer for Universal Image Segmentation"
Pros of Mask2Former
- More advanced and versatile architecture for instance segmentation tasks
- Supports a wider range of computer vision applications, including panoptic segmentation
- Better performance on benchmark datasets like COCO and ADE20K
Cons of Mask2Former
- More complex implementation, potentially requiring more computational resources
- Steeper learning curve for developers new to advanced computer vision techniques
- May be overkill for simpler object detection or segmentation tasks
Code Comparison
Mask2Former (PyTorch):
class Mask2Former(nn.Module):
    def __init__(self, backbone, transformer, num_classes):
        super().__init__()
        self.backbone = backbone
        self.transformer = transformer
        self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
        self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3)
Robust (Java):
public class Robust {
    private Context context;
    private RobustCallBack robustCallBack;

    public Robust(Context context) {
        this.context = context;
    }
}
The code snippets highlight the difference in complexity and focus between the two projects. Mask2Former is a more sophisticated deep learning model for computer vision tasks, while Robust is a simpler Java-based framework for Android app stability.
Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow
Pros of Mask_RCNN
- More widely adopted and actively maintained
- Extensive documentation and community support
- Versatile for various object detection and instance segmentation tasks
Cons of Mask_RCNN
- Higher computational requirements
- Steeper learning curve for beginners
- Less focus on robustness against adversarial attacks
Code Comparison
Mask_RCNN:
import mrcnn.model as modellib
model = modellib.MaskRCNN(mode="inference", config=config, model_dir=MODEL_DIR)
model.load_weights(COCO_MODEL_PATH, by_name=True)
results = model.detect([image], verbose=1)
Robust:
from robust import RobustModel
model = RobustModel(model_path='path/to/model')
result = model.predict(image)
The Mask_RCNN code snippet demonstrates the initialization and inference process, while the Robust code shows a simpler API for prediction. Mask_RCNN offers more flexibility but requires more setup, whereas Robust provides a more straightforward interface for robust predictions.
README
Robust
Robust is an Android HotFix solution with high compatibility and high stability. Robust can fix bugs immediately without publishing a new APK.
More help on Wiki
Environment
- Mac, Linux, and Windows
- Gradle 2.10+, including 3.0
- Java 1.7+
Usage
- Add the following to the module's build.gradle:
apply plugin: 'com.android.application'
// please uncomment the following line before you build a patch
// apply plugin: 'auto-patch-plugin'
apply plugin: 'robust'

dependencies {
    compile 'com.meituan.robust:robust:0.4.99'
}
- Add the following to the root project's build.gradle file:
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.meituan.robust:gradle-plugin:0.4.99'
        classpath 'com.meituan.robust:auto-patch-plugin:0.4.99'
    }
}
- There are some configuration items in app/robust.xml, such as the classes into which Robust will insert code; these may differ from project to project. Please copy this file into your project.
Advantages
- Supports Android OS versions 2.3 to 10
- Perfect compatibility
- Patches take effect without a restart
- Supports fixes at the method level, including static methods
- Supports adding classes and methods
- Supports ProGuard, including inline methods and changed method signatures
When you build an APK, you may need to save "mapping.txt" and the files in the directory "build/outputs/robust/".
AutoPatch
AutoPatch generates patches for Robust automatically. You just need to follow the steps below to generate patches. For more details, please visit http://tech.meituan.com/android_autopatch.html
Steps
- Put 'auto-patch-plugin' just after **'com.android.application'**, but before all other plugins, like this:
apply plugin: 'com.android.application'
apply plugin: 'auto-patch-plugin'
- Put mapping.txt and methodsMap.robust, which are generated when you build the APK, into the directory app/robust/; if it does not exist, create it.
- After modifying the code, put the @Modify annotation on the modified methods, or invoke RobustModify.modify() (designed for lambda expressions) inside the modified methods:
@Modify
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
}

// or
protected void onCreate(Bundle savedInstanceState) {
    RobustModify.modify();
    super.onCreate(savedInstanceState);
}
Use the @Add annotation when you need to add methods or classes:
// add a method
@Add
public String getString() {
    return "Robust";
}

// add a class
@Add
public class NewAddCLass {
    public static String get() {
        return "robust";
    }
}
- After those steps, run the same Gradle command you used to build the APK; you will then get the patch at app/build/outputs/robust/patch.jar.
- Patch generation always ends like this, which means the patch is done.
Demo Usage
- Execute the following command to build the APK:
./gradlew clean assembleRelease --stacktrace --no-daemon
- After installing the APK on your phone, save mapping.txt and app/build/outputs/robust/methodsMap.robust.
- Put mapping.txt and methodsMap.robust, which are generated when you build the APK, into the directory app/robust/; if the directory does not exist, create it.
- After modifying the code, put the @Modify annotation on the modified methods, or invoke RobustModify.modify() (designed for lambda expressions) inside the modified methods.
- Run the same Gradle command you used to build the APK:
./gradlew clean assembleRelease --stacktrace --no-daemon
- Patch generation always ends like this, which means the patch is done.
- Copy the patch to your phone:
adb push ~/Desktop/code/robust/app/build/outputs/robust/patch.jar /sdcard/robust/patch.jar
The patch directory can be configured in PatchManipulateImp (a rough sketch is given after this list).
- Open the app and click the Patch button; the patch is applied.
- You can also use our sample patch in app/robust/sample_patch.jar; this dex changes the text after you click the Jump_second_Activity button.
- In the demo, we change the text shown on the second activity, which is configured in the method getTextInfo(String meituan) in the class SecondActivity.
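The snippet below is a minimal, hedged sketch of what a PatchManipulateImp-style class might look like, pointing Robust at the patch pushed to /sdcard/robust/ above. The PatchManipulate methods and Patch setters are taken from the upstream Robust demo app and should be treated as assumptions; verify them against your Robust version.

import android.content.Context;

import com.meituan.robust.Patch;
import com.meituan.robust.PatchManipulate;

import java.util.ArrayList;
import java.util.List;

public class LocalPatchManipulate extends PatchManipulate {

    // Tell Robust which patches to load. Here we point at the single patch.jar
    // pushed via adb in the step above.
    @Override
    protected List<Patch> fetchPatchList(Context context) {
        Patch patch = new Patch();
        patch.setName("demo_patch");
        patch.setLocalPath("/sdcard/robust/patch.jar");
        // Class generated by AutoPatch inside the patch; name assumed from the demo.
        patch.setPatchesInfoImplClassFullName("com.meituan.robust.patch.PatchesInfoImpl");

        List<Patch> patches = new ArrayList<>();
        patches.add(patch);
        return patches;
    }

    // In production you should verify the patch's signature or checksum here;
    // this sketch trusts the local file unconditionally.
    @Override
    protected boolean verifyPatch(Context context, Patch patch) {
        return true;
    }

    @Override
    protected boolean ensurePatchExist(Patch patch) {
        return true;
    }
}

An instance of this class, together with a RobustCallBack implementation, is then passed to PatchExecutor to apply the patch, as in the demo app.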
Attentions
- You should change inner classes' private constructors to public.
- AutoPatch cannot handle methods that return this; you may need to wrap them as below (a fuller sketch follows this list):
method a() {
    return this;
}
changed to
method a() {
    return new B().setThis(this).getThis();
}
- Adding fields is not supported, but you can currently add classes; this feature is under testing.
- Classes added in a patch should be static nested classes or non-inner classes, and all fields and methods in an added class should be public.
- Support for fixing bugs in constructors is currently under testing.
- Methods that only use fields, without any method call or new expression, are not supported.
- Support for resources and .so files is under testing.
- For more help, please visit the Wiki.
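To make the "return this" workaround above concrete, here is a small, hedged sketch; the helper class B and its setThis/getThis methods are purely illustrative and are not part of Robust itself.

public class Example {

    // The original code returned "this" directly, which AutoPatch cannot handle,
    // so the value is routed through a throwaway helper object instead.
    public Example a() {
        return new B().setThis(this).getThis();
    }

    // Illustrative helper that simply holds and returns the instance.
    public static class B {
        private Example held;

        public B setThis(Example value) {
            this.held = value;
            return this;
        }

        public Example getThis() {
            return held;
        }
    }
}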
License
Copyright 2017 Meituan-Dianping
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.