Top Related Projects
- MLflow: Open source platform for the machine learning lifecycle
- Weights & Biases: The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production.
- BentoML: The easiest way to serve AI apps and models - Build reliable Inference APIs, LLM apps, Multi-model chains, RAG service, and much more!
Quick Overview
Aim is an open-source, self-hosted AI experiment tracking tool. It allows data scientists and machine learning engineers to log, compare, and analyze their AI experiments, providing a centralized platform for managing and visualizing machine learning workflows.
Pros
- Easy integration with popular ML frameworks like PyTorch, TensorFlow, and Keras
- Powerful query language for filtering and comparing experiments
- Customizable and interactive visualizations for experiment metrics
- Self-hosted solution, offering better data privacy and control
Cons
- Requires setup and maintenance of a self-hosted infrastructure
- Learning curve for mastering the query language and advanced features
- Limited integrations compared to some commercial alternatives
- May require additional resources for scaling with large datasets or many experiments
Code Examples
Logging an experiment with Aim:

```python
from aim import Run

run = Run()

# Log hyperparameters
run['hparams'] = {
    'learning_rate': 0.001,
    'batch_size': 32,
    'epochs': 10
}

# Log metrics
for epoch in range(10):
    accuracy = model.train()  # `model` is assumed to be defined elsewhere
    run.track(accuracy, name='accuracy', epoch=epoch)
```
Querying experiments:

```python
from aim import Repo

repo = Repo.from_path('.')
query = "run.hparams.learning_rate < 0.01 and metrics.accuracy.last > 0.9"
runs = repo.query_runs(query)
for run in runs:
    print(f"Run {run.hash}: Accuracy = {run.get_metric('accuracy').last}")
```
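The query is a Python-like expression evaluated against each run's metadata. As a rough illustration of its semantics in plain Python (no Aim required; the run records below are made-up stand-ins), the same filter could be written as:

```python
# Hypothetical stand-ins for tracked run metadata
runs = [
    {"hash": "a1", "hparams": {"learning_rate": 0.001}, "accuracy_last": 0.93},
    {"hash": "b2", "hparams": {"learning_rate": 0.1}, "accuracy_last": 0.95},
    {"hash": "c3", "hparams": {"learning_rate": 0.005}, "accuracy_last": 0.85},
]

def matches(run):
    # Equivalent of: run.hparams.learning_rate < 0.01 and metrics.accuracy.last > 0.9
    return run["hparams"]["learning_rate"] < 0.01 and run["accuracy_last"] > 0.9

selected = [r["hash"] for r in runs if matches(r)]
print(selected)  # ['a1'] - the only run satisfying both conditions
```

Note that both conditions must hold: run "b2" has high accuracy but a too-large learning rate, and "c3" has a small learning rate but low accuracy.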
Visualizing experiment results: the Aim UI is started from the command line rather than through a Python API. After logging runs, launch it with:

```shell
aim up
```
Getting Started
1. Install Aim:

```shell
pip install aim
```

2. Initialize Aim in your project:

```python
from aim import Run

run = Run()
```

3. Log experiments and track metrics:

```python
run['hparams'] = {'learning_rate': 0.001}
run.track(0.95, name='accuracy', epoch=10)
```

4. Start the Aim UI to explore your experiments:

```shell
aim up
```
Competitor Comparisons
Open source platform for the machine learning lifecycle
Pros of MLflow
- More mature and widely adopted in the industry
- Comprehensive feature set including experiment tracking, model packaging, and model serving
- Strong integration with popular ML frameworks and cloud platforms
Cons of MLflow
- Can be complex to set up and configure for beginners
- Requires more infrastructure and resources to run effectively
- UI can be less intuitive for visualizing experiment results
Code Comparison
MLflow:
```python
import mlflow

mlflow.start_run()
mlflow.log_param("learning_rate", 0.01)
mlflow.log_metric("accuracy", 0.85)
mlflow.end_run()
```
Aim:
```python
from aim import Run

run = Run()
run["hparams"] = {"learning_rate": 0.01}
run.track(0.85, name="accuracy", step=1)
```
Both MLflow and Aim provide experiment tracking capabilities, but MLflow offers a more comprehensive ecosystem for ML lifecycle management. Aim focuses on simplicity and powerful visualizations, making it easier for beginners to get started. MLflow's maturity and wide adoption make it a popular choice for enterprise-level projects, while Aim's lightweight nature and intuitive UI make it attractive for smaller teams and individual researchers. The choice between the two depends on the specific needs of the project and the team's expertise.
The AI developer platform. Use Weights & Biases to train and fine-tune models, and manage models from experimentation to production.
Pros of Wandb
- More extensive integrations with popular ML frameworks and tools
- Robust collaboration features for team projects
- Advanced experiment tracking and visualization capabilities
Cons of Wandb
- Requires internet connection for full functionality
- Pricing can be expensive for large-scale projects or teams
- Steeper learning curve for beginners
Code Comparison
Wandb:
```python
import wandb

wandb.init(project="my-project")
wandb.log({"loss": 0.5, "accuracy": 0.8})
wandb.finish()
```
Aim:
```python
from aim import Run

run = Run()
run.track(0.5, name="loss")
run.track(0.8, name="accuracy")
```
Both Wandb and Aim provide easy-to-use APIs for tracking experiments, but Wandb offers more built-in features and integrations out of the box. Aim, on the other hand, focuses on simplicity and local-first approach, which can be beneficial for individual researchers or those working with sensitive data.
While Wandb excels in team collaboration and advanced visualizations, Aim offers a lightweight alternative with a focus on performance and flexibility. The choice between the two depends on specific project requirements, team size, and infrastructure constraints.
The easiest way to serve AI apps and models - Build reliable Inference APIs, LLM apps, Multi-model chains, RAG service, and much more!
Pros of BentoML
- Focuses on model serving and deployment, providing a complete MLOps solution
- Offers containerization and API creation for easy model deployment
- Supports a wide range of ML frameworks and model types
Cons of BentoML
- Less emphasis on experiment tracking and visualization compared to Aim
- May have a steeper learning curve for users primarily interested in logging and monitoring
Code Comparison
BentoML example:
```python
# Note: this uses the legacy BentoML 0.x service API
import bentoml
from bentoml.adapters import JsonInput, JsonOutput
from bentoml.frameworks.sklearn import SklearnModelArtifact

@bentoml.env(pip_packages=["scikit-learn"])
@bentoml.artifacts([SklearnModelArtifact('model')])
class SklearnIrisClassifier(bentoml.BentoService):
    @bentoml.api(input=JsonInput(), output=JsonOutput())
    def predict(self, input_data):
        return self.artifacts.model.predict(input_data)
```
Aim example:
```python
from aim import Run

run = Run()
run['learning_rate'] = 0.001
run['optimizer'] = 'adam'
for epoch in range(10):
    epoch_loss = train_one_epoch()  # training step assumed to be defined elsewhere
    run.track(epoch_loss, name='loss', epoch=epoch)
```
While BentoML focuses on model serving and deployment, Aim emphasizes experiment tracking and visualization. BentoML provides a more comprehensive MLOps solution, while Aim offers a simpler interface for logging and monitoring machine learning experiments.
README
Drop a star to support Aim ⭐ | Join the Aim Discord community
An easy-to-use & supercharged open-source experiment tracker
Aim logs your training runs and any AI metadata, enables a beautiful UI to compare and observe them, and provides an API to query them programmatically.

AimStack offers enterprise support that goes beyond core Aim. Contact via the hello@aimstack.io e-mail.
About • Demos • Ecosystem • Quick Start • Examples • Documentation • Community • Blog
ℹ️ About
Aim is an open-source, self-hosted ML experiment tracking tool designed to handle 10,000s of training runs.
Aim provides a performant and beautiful UI for exploring and comparing training runs. Additionally, its SDK enables programmatic access to tracked metadata, perfect for automations and Jupyter Notebook analysis.
Aim's mission is to democratize AI dev tools 🎯
- Log Metadata Across Your ML Pipeline 💾
- Visualize & Compare Metadata via UI 📊
- Run ML Trainings Effectively ⚡
- Organize Your Experiments 🗂️
🎬 Demos
Check out live Aim demos NOW to see it in action.
- Machine translation experiments: training logs of a neural translation model (from the WMT'19 competition).
- lightweight-GAN experiments: training logs of the 'lightweight' GAN proposed in ICLR 2021.
- FastSpeech 2 experiments: training logs of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech".
- Simple MNIST: simple MNIST training logs.
🌍 Ecosystem
Aim is not just an experiment tracker. It's a groundwork for an ecosystem. Check out the two most famous Aim-based tools.
- aimlflow: exploring MLflow experiments with a powerful UI
- Aim-spaCy: an Aim-based spaCy experiment tracker
🏁 Quick start
Follow the steps below to get started with Aim.
1. Install Aim on your training environment
```shell
pip3 install aim
```
2. Integrate Aim with your code
```python
from aim import Run

# Initialize a new run
run = Run()

# Log run parameters
run["hparams"] = {
    "learning_rate": 0.001,
    "batch_size": 32,
}

# Log metrics
for i in range(10):
    run.track(i, name='loss', step=i, context={"subset": "train"})
    run.track(i, name='acc', step=i, context={"subset": "train"})
```
See the full list of supported trackable objects (e.g. images, text, etc.) here.
3. Run the training as usual and start Aim UI
```shell
aim up
```
Learn more
Migrate from other tools
Aim has built-in converters to easily migrate logs from other tools. These cover the most common usage scenarios. For custom or complex scenarios, you can use the Aim SDK to implement your own conversion script.
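A custom conversion script usually has the same shape regardless of the source tool: parse the old logs into (metric, value, step) records, then replay them through `run.track`. Below is a minimal sketch of the parsing half, assuming a hypothetical CSV-formatted legacy log (the Aim replay calls are shown as comments, since they need a live repo):

```python
import csv
import io

# Hypothetical legacy log in CSV form: step,metric,value
legacy_log = """step,metric,value
0,loss,0.9
1,loss,0.7
1,accuracy,0.6
"""

records = []
for row in csv.DictReader(io.StringIO(legacy_log)):
    records.append((row["metric"], float(row["value"]), int(row["step"])))

# Replaying the records into Aim would then look like:
# from aim import Run
# run = Run()
# for name, value, step in records:
#     run.track(value, name=name, step=step)

print(records)  # [('loss', 0.9, 0), ('loss', 0.7, 1), ('accuracy', 0.6, 1)]
```

Only the parsing step changes between tools; the replay loop stays the same.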
Integrate Aim into an existing project
Aim easily integrates with a wide range of ML frameworks, providing built-in callbacks for most of them.
- Integration with PyTorch Ignite
- Integration with PyTorch Lightning
- Integration with Hugging Face
- Integration with Keras & tf.Keras
- Integration with Keras Tuner
- Integration with XGBoost
- Integration with CatBoost
- Integration with LightGBM
- Integration with fastai
- Integration with MXNet
- Integration with Optuna
- Integration with PaddlePaddle
- Integration with Stable-Baselines3
- Integration with Acme
- Integration with Prophet
Query runs programmatically via SDK
Aim Python SDK empowers you to query and access any piece of tracked metadata with ease.
```python
from aim import Repo

my_repo = Repo('/path/to/aim/repo')

query = "metric.name == 'loss'"  # Example query

# Get collection of metrics
for run_metrics_collection in my_repo.query_metrics(query).iter_runs():
    for metric in run_metrics_collection:
        # Get run params
        params = metric.run[...]
        # Get metric values
        steps, metric_values = metric.values.sparse_numpy()
```
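Once the SDK hands you the step and value arrays, downstream analysis is ordinary Python/NumPy. For example, finding the step with the lowest loss (the arrays below are hypothetical stand-ins for the `sparse_numpy()` output):

```python
# Hypothetical (steps, values) pair for a tracked 'loss' metric
steps = [0, 1, 2, 3, 4]
values = [0.90, 0.55, 0.40, 0.43, 0.38]

# Index of the lowest loss value
best_idx = min(range(len(values)), key=values.__getitem__)
best_step, best_loss = steps[best_idx], values[best_idx]
print(best_step, best_loss)  # 4 0.38
```

The same pattern applies to any metric: the SDK returns plain arrays, so the rest is standard data analysis.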
Set up a centralized tracking server
The Aim remote tracking server allows running experiments in a multi-host environment and collecting tracked data in a centralized location.
See the docs on how to set up the remote server.
Deploy Aim on Kubernetes
- The official Aim docker image: https://hub.docker.com/r/aimstack/aim
- A guide on how to deploy Aim on Kubernetes: https://aimstack.readthedocs.io/en/latest/using/k8s_deployment.html
Read the full documentation on aimstack.readthedocs.io 📖
🆚 Comparisons to familiar tools
TensorBoard vs Aim
Training run comparison
Order of magnitude faster training run comparison with Aim
- Tracked params are first-class citizens in Aim. You can search, group, and aggregate via params, and deeply explore all the tracked data (metrics, params, images) in the UI.
- With TensorBoard, users are forced to encode those parameters into the training run name to be able to search and compare. This makes comparison tedious and causes usability issues in the UI when there are many experiments and params. TensorBoard also has no features to group or aggregate metrics.
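The difference is easy to demonstrate. With name-encoded params every consumer has to string-parse run names, whereas structured params can be filtered directly (the naming scheme and records below are made-up examples):

```python
# Made-up TensorBoard-style run names with params baked into the string
tb_runs = ["lr=0.001_bs=32", "lr=0.01_bs=64", "lr=0.001_bs=64"]

def parse_name(name):
    # Fragile: every consumer must know the naming convention
    return {k: v for k, v in (kv.split("=") for kv in name.split("_"))}

low_lr_tb = [n for n in tb_runs if float(parse_name(n)["lr"]) < 0.005]

# Structured params (the way Aim stores them) need no string parsing
aim_runs = [
    {"hparams": {"lr": 0.001, "bs": 32}},
    {"hparams": {"lr": 0.01, "bs": 64}},
    {"hparams": {"lr": 0.001, "bs": 64}},
]
low_lr_aim = [r for r in aim_runs if r["hparams"]["lr"] < 0.005]

print(len(low_lr_tb), len(low_lr_aim))  # 2 2
```

Both approaches select the same runs here, but the string-parsing version silently breaks the moment someone renames a run or changes the naming convention.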
Scalability
- Aim is built to handle 1000s of training runs - both on the backend and on the UI.
- TensorBoard becomes really slow and hard to use when a few hundred training runs are queried / compared.
Beloved TB visualizations to be added on Aim
- Embedding projector.
- Neural network visualization.
MLflow vs Aim
MLflow is an end-to-end ML lifecycle tool, while Aim is focused on training-run tracking. The main differences between Aim and MLflow are around UI scalability and run-comparison features.
Aim and MLflow are a perfect match - check out aimlflow, the tool that enables Aim superpowers on MLflow.
Run comparison
- Aim treats tracked parameters as first-class citizens. Users can query runs, metrics, images and filter using the params.
- MLflow does offer search by tracked config, but grouping, aggregation, subplotting by hyperparams, and other comparison features are not available.
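Grouping and aggregation of this kind boils down to a group-by over params. A plain-Python sketch with hypothetical run records:

```python
from collections import defaultdict

# Hypothetical run records: (learning_rate, final_accuracy)
runs = [(0.001, 0.91), (0.001, 0.93), (0.01, 0.88), (0.01, 0.86)]

# Group runs by learning rate
groups = defaultdict(list)
for lr, acc in runs:
    groups[lr].append(acc)

# Aggregate: mean accuracy per learning-rate group
means = {lr: sum(accs) / len(accs) for lr, accs in groups.items()}
print(means)  # roughly {0.001: 0.92, 0.01: 0.87}
```

Aim's UI performs this kind of grouping and aggregation natively across any tracked param, which is the comparison feature the bullet above refers to.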
UI Scalability
- The Aim UI can smoothly handle several thousand metrics at the same time with 1000s of steps each. It may get shaky when you explore 1000s of metrics with 10,000s of steps each, but we are constantly optimizing!
- The MLflow UI becomes slow to use when there are a few hundred runs.
Weights and Biases vs Aim
Hosted vs self-hosted
- Weights & Biases is a hosted, closed-source MLOps platform.
- Aim is a self-hosted, free and open-source experiment tracking tool.
🛣️ Roadmap
Detailed milestones
The Aim product roadmap ✨
- The Backlog contains the issues we choose from and prioritize weekly
- Issues are mainly prioritized by the most highly-requested features
High-level roadmap
The high-level features we are going to work on the next few months:
In progress
- Aim SDK low-level interface
- Dashboards: customizable layouts with embedded explorers
- Ergonomic UI kit
- Text Explorer
Next-up
Aim UI
- Runs management
- Runs explorer: query and visualize runs data (images, audio, distributions, ...) in a central dashboard
- Explorers
- Distributions Explorer
SDK and Storage
- Scalability
- Smooth UI and SDK experience with over 10,000 runs
- Runs management
- CLI commands
- Reporting: runs summary and run details in a CLI-compatible format
- Manipulations: copy, move, delete runs, params and sequences
- CLI commands
- Cloud storage support: store runs' blob data (e.g. images) on the cloud
- Artifact storage: store files, model checkpoints, and beyond
Integrations
- ML Frameworks:
- Shortlist: scikit-learn
- Resource management tools
- Shortlist: Kubeflow, Slurm
- Workflow orchestration tools
Done
- Live updates (Shipped: Oct 18 2021)
- Images tracking and visualization (Start: Oct 18 2021, Shipped: Nov 19 2021)
- Distributions tracking and visualization (Start: Nov 10 2021, Shipped: Dec 3 2021)
- Jupyter integration (Start: Nov 18 2021, Shipped: Dec 3 2021)
- Audio tracking and visualization (Start: Dec 6 2021, Shipped: Dec 17 2021)
- Transcripts tracking and visualization (Start: Dec 6 2021, Shipped: Dec 17 2021)
- Plotly integration (Start: Dec 1 2021, Shipped: Dec 17 2021)
- Colab integration (Start: Nov 18 2021, Shipped: Dec 17 2021)
- Centralized tracking server (Start: Oct 18 2021, Shipped: Jan 22 2022)
- Tensorboard adaptor - visualize TensorBoard logs with Aim (Start: Dec 17 2021, Shipped: Feb 3 2022)
- Track git info, env vars, CLI arguments, dependencies (Start: Jan 17 2022, Shipped: Feb 3 2022)
- MLFlow adaptor (visualize MLflow logs with Aim) (Start: Feb 14 2022, Shipped: Feb 22 2022)
- Activeloop Hub integration (Start: Feb 14 2022, Shipped: Feb 22 2022)
- PyTorch-Ignite integration (Start: Feb 14 2022, Shipped: Feb 22 2022)
- Run summary and overview info(system params, CLI args, git info, ...) (Start: Feb 14 2022, Shipped: Mar 9 2022)
- Add DVC related metadata into aim run (Start: Mar 7 2022, Shipped: Mar 26 2022)
- Ability to attach notes to Run from UI (Start: Mar 7 2022, Shipped: Apr 29 2022)
- Fairseq integration (Start: Mar 27 2022, Shipped: Mar 29 2022)
- LightGBM integration (Start: Apr 14 2022, Shipped: May 17 2022)
- CatBoost integration (Start: Apr 20 2022, Shipped: May 17 2022)
- Run execution details(display stdout/stderr logs) (Start: Apr 25 2022, Shipped: May 17 2022)
- Long sequences(up to 5M of steps) support (Start: Apr 25 2022, Shipped: Jun 22 2022)
- Figures Explorer (Start: Mar 1 2022, Shipped: Aug 21 2022)
- Notify on stuck runs (Start: Jul 22 2022, Shipped: Aug 21 2022)
- Integration with KerasTuner (Start: Aug 10 2022, Shipped: Aug 21 2022)
- Integration with WandB (Start: Aug 15 2022, Shipped: Aug 21 2022)
- Stable remote tracking server (Start: Jun 15 2022, Shipped: Aug 21 2022)
- Integration with fast.ai (Start: Aug 22 2022, Shipped: Oct 6 2022)
- Integration with MXNet (Start: Sep 20 2022, Shipped: Oct 6 2022)
- Project overview page (Start: Sep 1 2022, Shipped: Oct 6 2022)
- Remote tracking server scaling (Start: Sep 11 2022, Shipped: Nov 26 2022)
- Integration with PaddlePaddle (Start: Oct 2 2022, Shipped: Nov 26 2022)
- Integration with Optuna (Start: Oct 2 2022, Shipped: Nov 26 2022)
- Audios Explorer (Start: Oct 30 2022, Shipped: Nov 26 2022)
- Experiment page (Start: Nov 9 2022, Shipped: Nov 26 2022)
- HuggingFace datasets (Start: Dec 29 2022, Shipped: Feb 3 2023)
👥 Community
Aim README badge
Add Aim badge to your README, if you've enjoyed using Aim in your work:
```markdown
[![Aim](https://img.shields.io/badge/powered%20by-Aim-%231473E6)](https://github.com/aimhubio/aim)
```
Cite Aim in your papers
In case you've found Aim helpful in your research journey, we'd be thrilled if you could acknowledge Aim's contribution:
```bibtex
@software{Arakelyan_Aim_2020,
  author = {Arakelyan, Gor and Soghomonyan, Gevorg and {The Aim team}},
  doi = {10.5281/zenodo.6536395},
  license = {Apache-2.0},
  month = {6},
  title = {{Aim}},
  url = {https://github.com/aimhubio/aim},
  version = {3.9.3},
  year = {2020}
}
```
Contributing to Aim
Considering contributing to Aim? To get started, please take a moment to read the CONTRIBUTING.md guide.
Join Aim contributors by submitting your first pull request. Happy coding! 😊
More questions?