Top Related Projects
An Open Source Machine Learning Framework for Everyone
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Open standard for machine learning interoperability
Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
Cross-platform, customizable ML solutions for live and streaming media.
Caffe: a fast open framework for deep learning.
Quick Overview
Windows Machine Learning (Windows ML) is a platform that enables developers to integrate trained machine learning models into Windows applications. It provides a high-performance, reliable engine for running ML models on Windows devices, leveraging hardware acceleration when available.
Pros
- Seamless integration with Windows applications
- Hardware acceleration support for improved performance
- Works across Windows app types and runtimes (UWP, Win32, .NET)
- Easy-to-use API for model inference
Cons
- Limited to Windows platform
- Requires ONNX format for models
- May have performance limitations compared to specialized ML frameworks
- Documentation could be more comprehensive
Code Examples
- Loading and evaluating a model:
using Windows.AI.MachineLearning;
// Load the model
LearningModel model = LearningModel.LoadFromFilePath("model.onnx");
// Create a session and binding
LearningModelSession session = new LearningModelSession(model);
LearningModelBinding binding = new LearningModelBinding(session);
// Bind input and output
binding.Bind("input", inputTensor);
binding.Bind("output", outputTensor);
// Evaluate the model
LearningModelEvaluationResult result = await session.EvaluateAsync(binding, "");
- Creating an ImageFeatureValue:
using Windows.Media;
using Windows.AI.MachineLearning;
// Wrap a SoftwareBitmap in a VideoFrame, then create an ImageFeatureValue for binding
VideoFrame inputImage = VideoFrame.CreateWithSoftwareBitmap(softwareBitmap);
ImageFeatureValue imageFeatureValue = ImageFeatureValue.CreateFromVideoFrame(inputImage);
- Using GPU acceleration:
LearningModelDevice device = new LearningModelDevice(LearningModelDeviceKind.DirectXHighPerformance);
LearningModelSession session = new LearningModelSession(model, device);
Getting Started
- Install the Windows ML NuGet package:
  Install-Package Microsoft.AI.MachineLearning
- Add the namespace to your code (the NuGet package uses the Microsoft.AI.MachineLearning namespace; the in-box Windows API uses Windows.AI.MachineLearning):
  using Windows.AI.MachineLearning;
- Load an ONNX model:
  LearningModel model = LearningModel.LoadFromFilePath("path/to/model.onnx");
- Create a session and binding:
  LearningModelSession session = new LearningModelSession(model);
  LearningModelBinding binding = new LearningModelBinding(session);
- Bind input and output, then evaluate the model (a consolidated sketch follows these steps):
  binding.Bind("input", inputTensor);
  binding.Bind("output", outputTensor);
  LearningModelEvaluationResult result = await session.EvaluateAsync(binding, "");
Competitor Comparisons
An Open Source Machine Learning Framework for Everyone
Pros of TensorFlow
- Broader platform support (Windows, macOS, Linux, mobile)
- Larger community and ecosystem of tools/libraries
- More flexible for various ML tasks beyond just inference
Cons of TensorFlow
- Steeper learning curve for beginners
- Can be more resource-intensive
- Potentially more complex setup and configuration
Code Comparison
Windows-Machine-Learning:
LearningModel model = LearningModel::LoadFromFilePath(modelPath);
LearningModelSession session(model);
LearningModelBinding binding(session);
binding.Bind(L"input", inputTensor);
auto result = session.Evaluate(binding, L"");
TensorFlow:
import tensorflow as tf
model = tf.keras.models.load_model('model.h5')
predictions = model.predict(input_data)
Summary
Windows-Machine-Learning focuses on integrating pre-trained models into Windows applications, offering a streamlined experience for Windows developers. TensorFlow provides a more comprehensive framework for building, training, and deploying machine learning models across various platforms. While Windows-Machine-Learning excels in simplicity for Windows-specific scenarios, TensorFlow offers greater flexibility and a wider range of capabilities for diverse machine learning tasks.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Pros of PyTorch
- Broader ecosystem and community support
- More flexible and dynamic computational graph
- Supports a wider range of deep learning models and applications
Cons of PyTorch
- Steeper learning curve for beginners
- Less optimized for Windows-specific hardware acceleration
- Larger footprint and potentially slower inference on Windows devices
Code Comparison
Windows Machine Learning:
LearningModel model = LearningModel::LoadFromFilePath(modelPath);
LearningModelSession session(model);
LearningModelBinding binding(session);
binding.Bind(L"input", inputTensor);
auto output = session.Evaluate(binding, L"");
PyTorch:
import torch
model = torch.load(model_path)
model.eval()
output = model(inputs)
Key Differences
Windows Machine Learning is specifically designed for Windows platforms, offering seamless integration with Windows APIs and optimized performance on Windows devices. It's ideal for deploying machine learning models in Windows applications.
PyTorch, on the other hand, is a more general-purpose deep learning framework with cross-platform support. It offers greater flexibility in model development and research, but may require additional steps for optimal performance on Windows systems.
Windows Machine Learning focuses on inference and deployment, while PyTorch provides a complete ecosystem for both training and inference across various platforms.
Open standard for machine learning interoperability
Pros of ONNX
- Broader ecosystem support and compatibility across multiple frameworks and platforms
- More extensive model optimization and conversion capabilities
- Active community development and frequent updates
Cons of ONNX
- Steeper learning curve for beginners
- May require additional tools or libraries for deployment on specific platforms
Code Comparison
ONNX example:
import onnx
model = onnx.load("model.onnx")
onnx.checker.check_model(model)
print(onnx.helper.printable_graph(model.graph))
Windows Machine Learning example:
LearningModel model = LearningModel.LoadFromFilePath("model.onnx");
LearningModelSession session = new LearningModelSession(model);
LearningModelBinding binding = new LearningModelBinding(session);
Key Differences
- ONNX focuses on providing a universal format for machine learning models, while Windows Machine Learning is specifically designed for deploying models on Windows devices.
- ONNX offers more flexibility in terms of model creation and optimization, whereas Windows Machine Learning emphasizes ease of use within the Windows ecosystem.
- ONNX has a larger community and more frequent updates, while Windows Machine Learning benefits from Microsoft's direct support and integration with Windows features.
Core ML tools contain supporting tools for Core ML model conversion, editing, and validation.
Pros of Core ML Tools
- Supports a wider range of ML frameworks (TensorFlow, PyTorch, scikit-learn, etc.)
- Provides tools for model conversion, optimization, and quantization
- Offers Python-based API for easier integration with data science workflows
Cons of Core ML Tools
- Limited to Apple platforms (iOS, macOS, watchOS, tvOS)
- Requires more manual work for model deployment and integration
Code Comparison
Core ML Tools:
import coremltools as ct
model = ct.convert('model.h5')
model.save('converted_model.mlmodel')
Windows Machine Learning:
using Microsoft.AI.MachineLearning;
LearningModel model = LearningModel.LoadFromFilePath("model.onnx");
LearningModelSession session = new LearningModelSession(model);
LearningModelBinding binding = new LearningModelBinding(session);
var result = await session.EvaluateAsync(binding, "");
Key Differences
- Core ML Tools focuses on model conversion and optimization for Apple platforms
- Windows Machine Learning provides a runtime for executing ONNX models on Windows
- Core ML Tools offers a Python-based workflow, while Windows Machine Learning is primarily C#-based
- Windows Machine Learning is more tightly integrated with Windows OS features and hardware acceleration
Cross-platform, customizable ML solutions for live and streaming media.
Pros of MediaPipe
- Cross-platform support (iOS, Android, web) vs Windows-only
- Extensive pre-built solutions for various ML tasks (face detection, pose estimation, etc.)
- Easy-to-use Python API for rapid prototyping
Cons of MediaPipe
- Steeper learning curve due to more complex architecture
- Less integration with native Windows features and APIs
- Potentially higher resource usage for some applications
Code Comparison
MediaPipe (Python):
import mediapipe as mp
mp_face_detection = mp.solutions.face_detection
with mp_face_detection.FaceDetection(min_detection_confidence=0.5) as face_detection:
    results = face_detection.process(image)
Windows Machine Learning (C#):
using Microsoft.AI.MachineLearning;
LearningModel model = LearningModel.LoadFromFilePath("model.onnx");
LearningModelSession session = new LearningModelSession(model);
LearningModelBinding binding = new LearningModelBinding(session);
var results = await session.EvaluateAsync(binding, "output");
Both repositories offer machine learning capabilities, but MediaPipe provides a more versatile, cross-platform solution with pre-built components for various ML tasks. Windows Machine Learning, on the other hand, offers tighter integration with Windows systems and potentially better performance for Windows-specific applications. The choice between the two depends on the target platform, required features, and development preferences.
Caffe: a fast open framework for deep learning.
Pros of Caffe
- Mature and widely-used deep learning framework with extensive community support
- Supports a variety of deep learning architectures and pre-trained models
- Efficient for both research and production environments
Cons of Caffe
- Less flexible for defining custom architectures compared to newer frameworks
- Limited support for distributed training and multi-GPU setups
- Steeper learning curve for beginners due to its C++ core
Code Comparison
Windows Machine Learning:
LearningModel model = LearningModel::LoadFromFilePath(modelPath);
LearningModelSession session(model);
LearningModelBinding binding(session);
binding.Bind(L"input", inputTensor);
auto results = session.Evaluate(binding, L"output");
Caffe:
#include <caffe/caffe.hpp>
using namespace caffe;

Net<float> net(model_file, TEST);
net.CopyTrainedLayersFrom(trained_file);
Blob<float>* input_layer = net.input_blobs()[0];
net.Forward();
Blob<float>* output_layer = net.output_blobs()[0];
Windows Machine Learning focuses on integrating pre-trained models into Windows applications, while Caffe provides a more comprehensive framework for training and deploying deep learning models across various platforms. Windows Machine Learning offers easier integration with Windows ecosystems, whereas Caffe provides more flexibility and control over the entire deep learning pipeline.
Windows Machine Learning
Windows Machine Learning is a high-performance machine learning inference API that is powered by ONNX Runtime and DirectML.
The Windows ML API is a Windows Runtime Component and is suitable for high-performance, low-latency applications such as frameworks, games, and other real-time applications as well as applications built with high-level languages.
This repo contains Windows Machine Learning samples and tools that demonstrate how to build machine learning powered scenarios into Windows applications.
- Getting Started with Windows ML
- Model Samples
- Advanced Scenario Samples
- Developer Tools
- Feedback
- External Links
- Contributing
For additional information on Windows ML, including step-by-step tutorials and how-to guides, please visit the Windows ML documentation.
Sample/Tool | Status |
---|---|
All Samples | |
WinmlRunner | |
WinML Dashboard | |
Getting Started with Windows ML
Prerequisites
Windows ML offers machine learning inferencing via the in-box Windows SDK as well as a redistributable NuGet package. The table below highlights the availability, distribution, language support, servicing, and forward-compatibility aspects of the in-box and NuGet distributions of Windows ML.
 | In-Box | NuGet |
---|---|---|
Availability | Windows 10 - Build 17763 (RS5) or newer. For more detailed information about version support, check out our docs. | Windows 8.1 or newer. NOTE: Some APIs (e.g., VideoFrame) are not available on older OSes. |
Windows SDK | Windows SDK - Build 17763 (RS5) or newer | Windows SDK - Build 17763 (RS5) or newer |
Distribution | Built into Windows | Package and distribute as part of your application |
Servicing | Microsoft-driven (customers benefit automatically) | Developer-driven |
Forward compatibility | Automatically rolls forward with new features | Developer needs to update the package manually |
Learn more here.
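In practice the two distributions expose the same API surface under different root namespaces, so switching between in-box and NuGet is largely a matter of the using directive. A minimal sketch, assuming a local model path:
// In-box Windows ML (ships with Windows 10, build 17763 and newer)
using Windows.AI.MachineLearning;

// With the redistributable NuGet package, reference Microsoft.AI.MachineLearning
// and swap the namespace instead:
// using Microsoft.AI.MachineLearning;

LearningModel model = LearningModel.LoadFromFilePath(@"C:\models\model.onnx");   // assumed path
LearningModelSession session = new LearningModelSession(model);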
Model Samples
In this section you will find various model samples for a variety of scenarios across the different Windows ML API offerings.
Image Classification
A subdomain of computer vision in which an algorithm looks at an image and assigns it a tag from a collection of predefined tags or categories that it has been trained on.
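As a rough illustration of how such a model runs under Windows ML, the sketch below binds a frame as an ImageFeatureValue and picks the highest-scoring tag. The session, softwareBitmap, and labels variables, as well as the feature names "data_0" and "softmaxout_1", are assumptions; read the real names from your model's InputFeatures and OutputFeatures.
using Windows.AI.MachineLearning;
using Windows.Media;

// Wrap the frame as an image feature and bind it to the model's input
VideoFrame frame = VideoFrame.CreateWithSoftwareBitmap(softwareBitmap);
ImageFeatureValue input = ImageFeatureValue.CreateFromVideoFrame(frame);

LearningModelBinding binding = new LearningModelBinding(session);
binding.Bind("data_0", input);   // assumed input feature name

// Evaluate and read the score vector from the assumed output feature
LearningModelEvaluationResult result = await session.EvaluateAsync(binding, "classify");
var scores = (result.Outputs["softmaxout_1"] as TensorFloat).GetAsVectorView();

// The predicted tag is the category with the highest score (labels is an assumed string list)
int best = 0;
for (int i = 1; i < scores.Count; i++) if (scores[i] > scores[best]) best = i;
string predictedTag = labels[best];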
Style Transfer
A computer vision technique that allows us to recompose the content of an image in the style of another.
Windows App Type Distribution | UWP In-Box | UWP NuGet | Desktop In-Box | Desktop NuGet |
---|---|---|---|---|
FNSCandy | ✔️ C# - FNS Style Transfer ✔️ C# - Real-Time Style Transfer |
Advanced Scenario Samples
These advanced samples show how to use various binding and evaluation features in Windows ML:
- Custom Tensorization: A Windows console application (C++/WinRT) that shows how to do custom tensorization (a minimal C# sketch of the idea follows this list).
- Custom Operator (CPU): A desktop app that defines multiple custom CPU operators. One of these is a debug operator that you can integrate into your own workflow.
- Adapter Selection: A desktop app that demonstrates how to choose a specific device adapter for running your model.
- Plane Identifier: A UWP app and a WPF app packaged with the Desktop Bridge, sharing the same model trained using the Azure Custom Vision service. For step-by-step instructions for this sample, please see the blog post "Upgrade your WinML application to the latest bits".
- Custom Vision and Windows ML: This tutorial shows how to train a neural network model to classify images of food using the Azure Custom Vision service, export the model to ONNX format, and deploy the model in a Windows Machine Learning application running locally on a Windows device.
- ML.NET and Windows ML: This tutorial shows how to train a neural network model to classify images of food using ML.NET Model Builder, export the model to ONNX format, and deploy the model in a Windows Machine Learning application running locally on a Windows device.
- PyTorch Data Analysis: This tutorial shows how to solve a classification task with a neural network using the PyTorch library, export the model to ONNX format, and deploy the model with a Windows Machine Learning application that can run on any Windows device.
- PyTorch Image Classification: This tutorial shows how to train an image classification neural network model using PyTorch, export the model to ONNX format, and deploy it in a Windows Machine Learning application running locally on your Windows device.
- YOLOv4 Object Detection: This tutorial shows how to build a UWP C# app that uses the YOLOv4 model to detect objects in video streams.
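The Custom Tensorization sample itself is C++/WinRT, but the idea reduces to a short C# sketch: instead of letting Windows ML convert an image for you, build the NCHW float tensor yourself and bind it. The BGRA8 pixel layout, the 0-1 normalization, the "data" feature name, and the surrounding binding/height/width/pixelBytes variables are all assumptions for illustration.
using Windows.AI.MachineLearning;

// pixels: BGRA8 bytes from a SoftwareBitmap (assumed); convert to a 1x3xHxW float tensor
float[] ToNchw(byte[] pixels, int height, int width)
{
    var data = new float[3 * height * width];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int src = (y * width + x) * 4;   // 4 bytes per BGRA pixel
            int dst = y * width + x;
            data[0 * height * width + dst] = pixels[src + 2] / 255f; // R plane
            data[1 * height * width + dst] = pixels[src + 1] / 255f; // G plane
            data[2 * height * width + dst] = pixels[src + 0] / 255f; // B plane
        }
    }
    return data;
}

// Bind the hand-built tensor instead of an ImageFeatureValue ("data" is an assumed input name)
TensorFloat custom = TensorFloat.CreateFromArray(
    new long[] { 1, 3, height, width }, ToNchw(pixelBytes, height, width));
binding.Bind("data", custom);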
Developer Tools
Model Conversion
Windows ML provides inferencing capabilities powered by the ONNX Runtime engine. As such, all models run in Windows ML must be converted to the ONNX model format. Models built and trained in source frameworks like TensorFlow or PyTorch must be converted to ONNX. Check out the documentation for how to convert to an ONNX model:
- https://onnxruntime.ai/docs/tutorials/mobile/model-conversion.html
- https://docs.microsoft.com/en-us/windows/ai/windows-ml/tutorials/pytorch-convert-model
- WinMLTools: a Python tool for converting models from different machine learning toolkits into ONNX for use with Windows ML.
Model Optimization
Models may need further optimizations applied post-conversion to support advanced features like batching and quantization. Check out the following tools for optimizing your model:
- WinML Dashboard (Preview): a GUI-based tool for viewing, editing, converting, and validating machine learning models for the Windows ML inference engine. This tool can be used to enable free dimensions on models that were built with fixed dimensions. Download the preview version.
- Graph Optimizations: graph-level transformations, ranging from small graph simplifications and node eliminations to more complex node fusions and layout optimizations (see the sketch after this list).
- Graph Quantization: quantization in ONNX Runtime refers to 8-bit linear quantization of an ONNX model.
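Graph optimizations and quantization are features of the ONNX Runtime engine that powers Windows ML, which applies them internally. If you drive ONNX Runtime directly through its C# package (Microsoft.ML.OnnxRuntime), the optimization level can also be set explicitly; a minimal sketch, assuming a local model.onnx:
using Microsoft.ML.OnnxRuntime;

// Ask ONNX Runtime to apply all graph-level transformations and
// save the optimized graph so it can be inspected or reused
var options = new SessionOptions
{
    GraphOptimizationLevel = GraphOptimizationLevel.ORT_ENABLE_ALL,
    OptimizedModelFilePath = "model.optimized.onnx"   // assumed output path
};
using var session = new InferenceSession("model.onnx", options);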
Model Validation
- WinMLRunner: a command-line tool that can run .onnx or .pb models where the input and output variables are tensors or images. It is a very handy tool to quickly validate an ONNX model: it will attempt to load, bind, and evaluate a model and print out helpful messages. It also captures performance measurements.
Model Integration
- WinML Code Generator (mlgen): a Visual Studio extension that helps you get started using WinML APIs in UWP apps by generating template code when you add a trained ONNX file to the UWP project. From the template code you can load a model, create a session, bind inputs, and evaluate with wrapper code. See the docs for more info.
- WinML Samples Gallery: explore a variety of ML integration scenarios and models.
- Check out the Model Samples and Advanced Scenario Samples to learn how to use Windows ML in your application.
Feedback
- For issues, file a bug on GitHub Issues.
- Ask questions on Stack Overflow.
- Vote for popular feature requests on Windows Developer Feedback or include your own request.
External Links
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator.
- ONNX: Open Neural Network Exchange Project.
Contributing
We're always looking for your help to fix bugs and improve the samples. Create a pull request, and we'll be happy to take a look.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.