seldon-core
An MLOps framework to package, deploy, monitor and manage thousands of production machine learning models
Top Related Projects
KServe: Standardized Serverless ML Inference Platform on Kubernetes
MLflow: Open source platform for the machine learning lifecycle
Cortex: Production infrastructure for machine learning at scale
BentoML: The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more!
ModelDB: Open Source ML Model Versioning, Metadata, and Experiment Management
Feast: The Open Source Feature Store for AI/ML
Quick Overview
Seldon Core is an open-source platform for deploying machine learning models on Kubernetes. It provides a flexible, scalable, and production-ready solution for serving ML models, offering features like A/B testing, canary deployments, and advanced monitoring capabilities.
Pros
- Seamless integration with Kubernetes for scalable ML model deployment
- Support for multiple ML frameworks (TensorFlow, PyTorch, scikit-learn, etc.)
- Advanced features like A/B testing, canary deployments, and explainers
- Extensive monitoring and observability tools
Cons
- Steep learning curve for those unfamiliar with Kubernetes
- Complex setup process for advanced features
- Limited support for edge deployment scenarios
- Resource-intensive for small-scale projects
Code Examples
- Sending a prediction request to a deployed model with the Seldon Python client:
from seldon_core.seldon_client import SeldonClient
import numpy as np
sc = SeldonClient(deployment_name="mymodel", namespace="seldon")
client_prediction = sc.predict(data=np.array([[1, 2, 3]]))
print(client_prediction)
- Implementing a custom model:
class MyModel:
    def __init__(self):
        print("Initializing")
    def predict(self, X, features_names=None):
        print("Predict called")
        return X
    def metrics(self):
        return [
            {"type": "COUNTER", "key": "mycounter", "value": 1},
            {"type": "GAUGE", "key": "mygauge", "value": 100},
        ]
- Creating a Seldon deployment using SeldonDeployment CRD:
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: seldon-model
spec:
  name: test-deployment
  predictors:
  - componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: seldonio/sklearn-iris:0.1
    graph:
      name: classifier
      type: MODEL
    name: example
    replicas: 1
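The custom model class from the second example can be exercised locally before it is containerised. A minimal sketch, assuming the MyModel class above is in scope and numpy is installed:
import numpy as np
# MyModel as defined in the custom model example above
model = MyModel()
prediction = model.predict(np.array([[1.0, 2.0, 3.0]]))  # this example echoes the input back
print(prediction)
print(model.metrics())  # the custom metrics Seldon can expose for monitoring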
Getting Started
- Install Seldon Core on your Kubernetes cluster:
kubectl create namespace seldon-system
helm install seldon-core seldon-core-operator \
--repo https://storage.googleapis.com/seldon-charts \
--set usageMetrics.enabled=true \
--namespace seldon-system
- Create a Seldon deployment (using the YAML example above):
kubectl apply -f seldon-deployment.yaml
- Access your model:
kubectl port-forward svc/seldon-model-example 8000:8000
curl -X POST http://localhost:8000/api/v1.0/predictions \
-H 'Content-Type: application/json' \
-d '{ "data": { "ndarray": [[1,2,3,4]] } }'
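The same request can be sent from Python; a minimal sketch using the requests library, assuming the port-forward above is still running:
import requests
response = requests.post(
    "http://localhost:8000/api/v1.0/predictions",
    json={"data": {"ndarray": [[1, 2, 3, 4]]}},
)
print(response.json())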
Competitor Comparisons
KServe: Standardized Serverless ML Inference Platform on Kubernetes
Pros of KServe
- Deeper integration with Kubernetes ecosystem and Knative
- More extensive support for model serving frameworks (TensorFlow, PyTorch, scikit-learn, etc.)
- Built-in support for model explainability and drift detection
Cons of KServe
- Steeper learning curve due to more complex architecture
- Less flexibility in custom model deployment compared to Seldon Core
- Requires Istio for full functionality, which can add complexity
Code Comparison
KServe example:
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
name: "sklearn-iris"
spec:
predictor:
sklearn:
storageUri: "gs://kfserving-samples/models/sklearn/iris"
Seldon Core example:
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: sklearn-iris
spec:
  predictors:
  - graph:
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/sklearn/iris
    name: default
Both KServe and Seldon Core are powerful platforms for deploying machine learning models in Kubernetes environments. KServe offers tighter integration with the Kubernetes ecosystem and broader support for various frameworks, while Seldon Core provides more flexibility for custom deployments and a gentler learning curve.
MLflow: Open source platform for the machine learning lifecycle
Pros of MLflow
- More comprehensive ML lifecycle management, including experiment tracking and model registry
- Language-agnostic with support for Python, R, Java, and more
- Easier to set up and use for individual data scientists or small teams
Cons of MLflow
- Less focused on production deployment and scaling of ML models
- Limited built-in support for advanced serving features like A/B testing and canary deployments
- Requires additional tools for robust production-grade model serving
Code Comparison
MLflow example:
import mlflow
mlflow.start_run()
mlflow.log_param("param1", 5)
mlflow.log_metric("accuracy", 0.85)
mlflow.end_run()
Seldon Core example:
from seldon_core.seldon_client import SeldonClient
import numpy as np
X = np.array([[1.0, 2.0, 3.0]])  # example input
sc = SeldonClient(deployment_name="mymodel", namespace="default")
response = sc.predict(data=X)
print(response)
MLflow focuses on tracking experiments and logging metrics, while Seldon Core is designed for deploying and serving models in production environments. MLflow provides a more comprehensive solution for the entire ML lifecycle, whereas Seldon Core excels in robust, scalable model deployment on Kubernetes.
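The two can also complement each other: a model tracked and logged with MLflow can later be pointed at by a serving platform such as Seldon Core. A minimal, illustrative sketch of the MLflow side (the dataset, metric name, and artifact path are examples, not taken from either project's docs):
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)
with mlflow.start_run():
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # The logged artifact can then be referenced by a model server,
    # for example via a storage URI in a deployment spec.
    mlflow.sklearn.log_model(model, artifact_path="model")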
Cortex: Production infrastructure for machine learning at scale
Pros of Cortex
- Simpler deployment process with automatic infrastructure provisioning
- Native support for AWS, reducing complexity for AWS users
- Built-in autoscaling and GPU support out of the box
Cons of Cortex
- Limited to AWS, while Seldon Core supports multiple cloud providers
- Smaller community and ecosystem compared to Seldon Core
- Less flexibility in terms of customization and integration options
Code Comparison
Cortex deployment example:
- name: iris-classifier
  predictor:
    type: python
    path: predictor.py
  compute:
    gpu: 1
    mem: 4G
Seldon Core deployment example:
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
spec:
  predictors:
  - graph:
      name: classifier
      implementation: SKLEARN_SERVER
    name: default
Both frameworks aim to simplify ML model deployment, but Cortex focuses on AWS-specific deployments with a more streamlined approach, while Seldon Core offers greater flexibility and multi-cloud support at the cost of increased complexity.
BentoML: The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more!
Pros of BentoML
- Simpler setup and deployment process, especially for local development
- Built-in support for a wider range of ML frameworks and libraries
- More flexible API serving options, including REST, gRPC, and CLI
Cons of BentoML
- Less mature ecosystem for large-scale production deployments
- Fewer advanced features for monitoring and scaling in complex environments
- Limited native support for A/B testing and canary deployments
Code Comparison
BentoML:
import bentoml
@bentoml.env(pip_packages=["scikit-learn"])
@bentoml.artifacts([bentoml.sklearn.SklearnModelArtifact('model')])
class SklearnIrisClassifier(bentoml.BentoService):
    @bentoml.api(input=bentoml.handlers.DataframeHandler())
    def predict(self, df):
        return self.artifacts.model.predict(df)
Seldon Core:
import joblib
class IrisClassifier(object):
    def __init__(self):
        self.model = joblib.load('iris_model.joblib')
    def predict(self, X, features_names=None):
        return self.model.predict(X)
Both frameworks aim to simplify ML model deployment, but BentoML offers a more user-friendly approach for local development and supports a broader range of ML frameworks out-of-the-box. Seldon Core, on the other hand, provides more advanced features for production-grade deployments and integrates better with Kubernetes ecosystems.
ModelDB: Open Source ML Model Versioning, Metadata, and Experiment Management
Pros of ModelDB
- Focuses on model versioning and metadata tracking
- Provides a user-friendly web interface for experiment management
- Supports integration with popular ML frameworks like TensorFlow and PyTorch
Cons of ModelDB
- Less emphasis on model deployment and serving compared to Seldon Core
- May require additional tools for end-to-end MLOps workflows
- Limited support for advanced deployment scenarios like A/B testing
Code Comparison
ModelDB (Python client):
from verta import Client
client = Client("http://localhost:3000")
proj = client.set_project("My Project")
expt = client.set_experiment("My Experiment")
run = client.set_experiment_run("My Run")
run.log_parameter("num_layers", 5)
run.log_metric("accuracy", 0.95)
Seldon Core (Deployment YAML):
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: iris-model
spec:
  predictors:
  - graph:
      implementation: SKLEARN_SERVER
      modelUri: gs://seldon-models/iris
    name: default
Feast: The Open Source Feature Store for AI/ML
Pros of Feast
- Specialized in feature management and serving for machine learning
- Supports multiple data sources and feature stores
- Provides a unified API for offline and online feature access
Cons of Feast
- Limited to feature management, not a complete MLOps solution
- Requires additional tools for model deployment and serving
- May have a steeper learning curve for teams new to feature stores
Code Comparison
Feast example:
from feast import FeatureStore
store = FeatureStore("feature_repo/")
features = store.get_online_features(
    features=["driver:rating", "driver:trips_today"],
    entity_rows=[{"driver_id": 1001}],
)
Seldon Core example:
from seldon_core.seldon_client import SeldonClient
import numpy as np
sc = SeldonClient(deployment_name="mymodel", namespace="default")
response = sc.predict(data=np.array([[1.0, 2.0, 5.0]]))
print(response)
While Feast focuses on feature management and serving, Seldon Core provides a more comprehensive MLOps solution for model deployment and serving. Feast excels in feature engineering and storage, whereas Seldon Core offers broader capabilities for model deployment, A/B testing, and monitoring in production environments.
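Because their scopes differ, the two are often combined: features are fetched from Feast at request time and passed to a model served by Seldon. A rough sketch under that assumption (the feature names, entity key, and deployment name are illustrative):
import numpy as np
from feast import FeatureStore
from seldon_core.seldon_client import SeldonClient
store = FeatureStore("feature_repo/")
feature_vector = store.get_online_features(
    features=["driver:rating", "driver:trips_today"],
    entity_rows=[{"driver_id": 1001}],
).to_dict()
# Assemble the online features into the input shape the model expects
row = np.array([[feature_vector["rating"][0], feature_vector["trips_today"][0]]])
sc = SeldonClient(deployment_name="mymodel", namespace="default")
print(sc.predict(data=row))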
README
Deploy Modular, Data-centric AI applications at scale
About
Seldon Core 2 is an MLOps and LLMOps framework for deploying, managing and scaling AI systems in Kubernetes, from single models to modular, data-centric applications. With Core 2 you can deploy a wide range of model types in a standardized way, on-prem or in any cloud, and be production-ready out of the box.
To reach out to Seldon regarding commercial use, visit our website.
Documentation
The Seldon Core 2 docs can be found here. For specific sections, see:
Installation • Servers • Models • Pipelines • Experiments • Performance Tuning
Features
- Pipelines: Deploy composable AI applications, leveraging Kafka for realtime data streaming between components
- Autoscaling for models and application components based on native or custom logic
- Multi-Model Serving: Save infrastructure costs by consolidating multiple models on shared inference servers
- Overcommit: Deploy more models than available memory allows, saving infrastructure costs for unused models
- Experiments: Route data between candidate models or pipelines, with support for A/B tests and shadow deployments
- Custom Components: Implement custom logic, drift & outlier detection, LLMs and more through plug-and-play integration with the rest of Seldon's ecosystem of ML/AI products!
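Models and pipelines deployed with Core 2 are served over the Open Inference (V2) protocol, so any HTTP client can call them. A minimal sketch from Python, assuming the Seldon mesh endpoint has been port-forwarded to localhost:8080 and a model named iris is deployed (host, port, model name, and tensor shape are illustrative assumptions):
import requests
payload = {
    "inputs": [
        {
            "name": "predict",
            "shape": [1, 4],
            "datatype": "FP64",
            "data": [[5.1, 3.5, 1.4, 0.2]],
        }
    ]
}
# POST to the Open Inference Protocol endpoint for the model
response = requests.post("http://localhost:8080/v2/models/iris/infer", json=payload)
print(response.json())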
Research
These features are influenced by our position paper on the next generation of ML model serving frameworks:
Desiderata for next generation of ML model serving
License
Seldon is distributed under the terms of the Business Source License. A complete version of the license is available in the LICENSE file in this repository. Any contribution made to this project will be licensed under the Business Source License.