coremltools
Core ML Tools is a set of supporting tools for Core ML model conversion, editing, and validation.
Top Related Projects
TensorFlow: An Open Source Machine Learning Framework for Everyone
ONNX: Open standard for machine learning interoperability
PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
JAX: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Quick Overview
CoreMLTools is an open-source Python package developed by Apple for converting machine learning models to the Core ML format. It enables developers to integrate various machine learning models into iOS, macOS, watchOS, and tvOS applications, leveraging the power of Core ML for on-device machine learning.
Pros
- Supports conversion from popular ML frameworks like TensorFlow, Keras, scikit-learn, and more
- Provides tools for model optimization and compression
- Enables on-device machine learning, improving privacy and reducing latency
- Offers a simple API for model conversion and deployment
Cons
- Limited to Apple platforms (iOS, macOS, watchOS, tvOS)
- May not support all advanced features of original ML frameworks
- Conversion process can sometimes result in loss of accuracy or functionality
- Requires familiarity with Apple's ecosystem and development tools
Code Examples
- Converting a Keras model to Core ML:
import coremltools as ct
# Load your Keras model
keras_model = ...
# Convert to Core ML
coreml_model = ct.convert(keras_model)
# Save the Core ML model
coreml_model.save("my_model.mlmodel")
- Converting a scikit-learn model with named input features:
import coremltools as ct
from sklearn.tree import DecisionTreeClassifier
# Train your scikit-learn model
dt_model = DecisionTreeClassifier()
dt_model.fit(X_train, y_train)
# Name the input features and the prediction output
feature_names = ["feature1", "feature2"]
# Convert to Core ML
coreml_model = ct.converters.sklearn.convert(dt_model,
                                             feature_names,
                                             "target")
# Save the Core ML model
coreml_model.save("decision_tree.mlmodel")
- Compressing a Core ML model by quantizing its weights:
import coremltools as ct
from coremltools.models.neural_network import quantization_utils
# Load an existing Core ML (neural network) model
model = ct.models.MLModel("my_model.mlmodel")
# Quantize the weights to 8 bits to reduce the model's size
quantized_model = quantization_utils.quantize_weights(model, nbits=8)
# Save the quantized model
quantized_model.save("quantized_model.mlmodel")
Getting Started
To get started with CoreMLTools, follow these steps:
- Install CoreMLTools using pip:
pip install coremltools
- Import the library in your Python script:
import coremltools as ct
- Load your machine learning model from a supported framework (e.g., TensorFlow, Keras, scikit-learn).
- Use the appropriate conversion function to convert your model to Core ML format.
- Save the converted model as a .mlmodel file.
- Integrate the .mlmodel file into your Xcode project for use in your iOS, macOS, watchOS, or tvOS application. (An end-to-end sketch of these steps follows this list.)
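Putting the steps together, here is a minimal end-to-end sketch. It assumes a small tf.keras model as the source; the architecture, file name, and the choice of the older neural network format are illustrative placeholders rather than fixed requirements.
import tensorflow as tf
import coremltools as ct

# A small placeholder Keras model standing in for your trained network
keras_model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert to Core ML; "neuralnetwork" keeps the single-file .mlmodel format
# (newer ML Program models are saved as .mlpackage instead)
mlmodel = ct.convert(keras_model, convert_to="neuralnetwork")

# Save the converted model, then add the file to your Xcode project
mlmodel.save("my_model.mlmodel")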
Competitor Comparisons
TensorFlow: An Open Source Machine Learning Framework for Everyone
Pros of TensorFlow
- Broader ecosystem and community support
- More extensive documentation and learning resources
- Supports a wider range of platforms and deployment options
Cons of TensorFlow
- Steeper learning curve for beginners
- Can be more complex to set up and configure
- Larger file size and potentially slower inference on mobile devices
Code Comparison
TensorFlow:
import tensorflow as tf
model = tf.keras.Sequential([
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
CoreMLTools:
import coremltools as ct
mlmodel = ct.convert(model, source='tensorflow')
mlmodel.save('model.mlmodel')
TensorFlow offers a more comprehensive deep learning framework with extensive customization options, while CoreMLTools focuses on converting and optimizing models for Apple devices. TensorFlow provides a complete ecosystem for building and training models, whereas CoreMLTools is primarily used for converting pre-trained models to the Core ML format for iOS and macOS deployment.
TensorFlow's flexibility comes at the cost of complexity, while CoreMLTools offers a more streamlined experience for Apple-specific deployments but with limited cross-platform support.
ONNX: Open standard for machine learning interoperability
Pros of ONNX
- Broader ecosystem support and compatibility across multiple frameworks and platforms
- More extensive set of supported operations and model types
- Active community development and frequent updates
Cons of ONNX
- Steeper learning curve for beginners
- May require additional steps for deployment on specific hardware or platforms
Code Comparison
ONNX example:
import onnx
from onnx import helper
# Create an ONNX model
node = helper.make_node('Relu', inputs=['X'], outputs=['Y'])
graph = helper.make_graph(
    [node], 'test-model',
    [helper.make_tensor_value_info('X', onnx.TensorProto.FLOAT, [1, 3, 224, 224])],
    [helper.make_tensor_value_info('Y', onnx.TensorProto.FLOAT, [1, 3, 224, 224])])
model = helper.make_model(graph)
onnx.save(model, 'model.onnx')
CoreMLTools example:
import coremltools as ct
# Convert a Keras model to Core ML
keras_model = ... # Your Keras model
coreml_model = ct.convert(keras_model)
coreml_model.save('model.mlmodel')
PyTorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Pros of PyTorch
- Broader ecosystem and community support
- More flexible and dynamic computational graph
- Supports a wider range of deep learning models and applications
Cons of PyTorch
- Larger file size and memory footprint
- Steeper learning curve for beginners
- Not optimized for mobile and edge devices
Code Comparison
PyTorch:
import torch
x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = torch.add(x, y)
CoreMLTools:
from coremltools.models import datatypes
from coremltools.models.neural_network import NeuralNetworkBuilder
# Build an equivalent "add two vectors" network by hand
input_features = [('x', datatypes.Array(3)), ('y', datatypes.Array(3))]
output_features = [('output', datatypes.Array(3))]
builder = NeuralNetworkBuilder(input_features, output_features)
builder.add_elementwise('add', input_names=['x', 'y'], output_name='output', mode='ADD')
Summary
PyTorch offers a more comprehensive deep learning framework with greater flexibility and community support. However, CoreMLTools is specifically designed for deploying models on Apple devices, making it more efficient for iOS and macOS applications. PyTorch's code is generally more concise and intuitive, while CoreMLTools requires more setup but provides better integration with Apple's ecosystem.
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Pros of ONNX Runtime
- Cross-platform support for multiple operating systems and hardware
- Optimized for performance with various acceleration options
- Supports a wide range of machine learning frameworks and models
Cons of ONNX Runtime
- Steeper learning curve for beginners compared to Core ML Tools
- May require more setup and configuration for specific use cases
Code Comparison
ONNX Runtime example:
import onnxruntime as ort
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name
result = session.run([output_name], {input_name: input_data})
Core ML Tools example:
import coremltools as ct
model = ct.models.MLModel("model.mlmodel")
prediction = model.predict({"input": input_data})
Both libraries provide ways to load and run machine learning models, but ONNX Runtime offers more flexibility and cross-platform support, while Core ML Tools is specifically designed for Apple platforms and provides a simpler API for iOS and macOS developers.
JAX: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Pros of JAX
- More flexible and general-purpose, supporting a wide range of machine learning tasks
- Offers automatic differentiation and GPU/TPU acceleration for high-performance computing
- Active development with frequent updates and a growing ecosystem
Cons of JAX
- Steeper learning curve, especially for those not familiar with functional programming
- Less integrated with mobile and edge devices compared to CoreMLTools
- May require more manual optimization for deployment in production environments
Code Comparison
CoreMLTools example:
import coremltools as ct
model = ct.convert(keras_model, inputs=[ct.ImageType(shape=(1, 224, 224, 3))])
model.save("my_model.mlmodel")
JAX example:
import jax.numpy as jnp
from jax import grad, jit
def predict(params, x):
    return jnp.dot(params, x)
grad_fn = jit(grad(predict))
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Pros of Transformers
- Broader ecosystem support and compatibility with various deep learning frameworks
- Extensive collection of pre-trained models for diverse NLP tasks
- Active community and frequent updates with state-of-the-art models
Cons of Transformers
- Larger library size and potentially higher resource requirements
- Steeper learning curve for beginners due to its extensive features
- Less optimized for mobile and edge devices compared to CoreMLTools
Code Comparison
Transformers:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
inputs = tokenizer("Hello world!", return_tensors="pt")
outputs = model(**inputs)
CoreMLTools:
import coremltools as ct
# keras_model is a trained tf.keras model
mlmodel = ct.convert(keras_model, source='tensorflow')
mlmodel.save('my_model.mlmodel')
The Transformers example demonstrates loading a pre-trained model and tokenizer, while the CoreMLTools example shows converting a Keras model to Core ML format. Transformers focuses on NLP tasks with ready-to-use models, whereas CoreMLTools is designed for converting and optimizing models for Apple devices.
Core ML Tools
Use Core ML Tools (coremltools) to convert machine learning models from third-party libraries to the Core ML format. This Python package contains the supporting tools for converting models from training libraries such as the following:
- TensorFlow 1.x
- TensorFlow 2.x
- PyTorch
- Non-neural network frameworks such as scikit-learn, XGBoost, and LibSVM
With coremltools, you can:
- Convert trained models to the Core ML format.
- Read, write, and optimize Core ML models.
- Verify conversion/creation (on macOS) by making predictions using Core ML.
After conversion, you can integrate the Core ML models with your app using Xcode.
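For example, the verification step can be done from Python on macOS by loading the converted model and running a prediction. This is a minimal sketch; the file name, input name, and input shape are placeholders that depend on your model.
import numpy as np
import coremltools as ct

# Load a previously converted model (placeholder file name)
model = ct.models.MLModel("my_model.mlmodel")

# Optionally edit metadata before shipping the model
model.short_description = "Example classifier converted with coremltools"

# Run a prediction on macOS; dictionary keys must match the model's input names
prediction = model.predict({"input": np.random.rand(1, 3, 224, 224).astype(np.float32)})
print(prediction)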
Install 8.0 Beta
coremltools 8.0 beta 2 is now available. To install it, run the following command in your terminal:
pip install coremltools==8.0b2
Install Version 7.2
To install the latest non-beta version, run the following command in your terminal:
pip install -U coremltools
Core ML
Core ML is an Apple framework to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to fine-tune models, all on the user's device. Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user's device removes any need for a network connection, which helps keep the user's data private and your app responsive.
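From Python, coremltools exposes the same compute-unit choices when loading a model, which is useful for comparing CPU, GPU, and Neural Engine execution during development. A short sketch (the file name is a placeholder):
import coremltools as ct

# Load a converted model and restrict execution to the CPU and Neural Engine
model = ct.models.MLModel("my_model.mlmodel",
                          compute_units=ct.ComputeUnit.CPU_AND_NE)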
Resources
To install coremltools, see Installing Core ML Tools. For more information, see the Core ML Tools documentation and API reference.