ML-YouTube-Courses
📺 Discover the latest machine learning / AI courses on YouTube.
Top Related Projects
12 weeks, 26 lessons, 52 quizzes, classic Machine Learning for all
A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in Python using Scikit-Learn, Keras and TensorFlow 2.
A roadmap connecting many of the most important concepts in machine learning, how to learn them and what tools to use to perform them.
A complete daily plan for studying to become a machine learning engineer.
A curated list of awesome Machine Learning frameworks, libraries and software.
VIP cheatsheets for Stanford's CS 229 Machine Learning
Quick Overview
The dair-ai/ML-YouTube-Courses repository is a collection of links to high-quality machine learning and deep learning video courses hosted on YouTube. The repository aims to provide a curated list of educational resources for individuals interested in learning about various topics in the field of artificial intelligence and machine learning.
Pros
- Comprehensive Collection: The repository covers a wide range of machine learning and deep learning topics, including introductory concepts, advanced techniques, and specialized applications.
- Reputable Sources: The courses are sourced from well-known and respected instructors, ensuring the quality and reliability of the content.
- Free Access: All the courses listed in the repository are freely available on YouTube, making them accessible to a wide audience.
- Continuous Updates: The repository is actively maintained, with new courses being added regularly to keep the content up-to-date.
Cons
- No Structured Curriculum: The repository is a collection of individual courses, and there is no defined learning path or curriculum, which may make it challenging for beginners to navigate.
- Varying Quality: While the courses are curated, the quality and depth of the content may vary across different instructors and topics.
- Lack of Interactive Elements: As the courses are video-based, there are limited opportunities for hands-on exercises or interactive learning experiences.
- Potential Language Barriers: Some of the courses may be in languages other than English, which could be a limitation for some learners.
Getting Started
Since this repository is a collection of links to YouTube videos, there are no specific code examples or a quick start guide. To get started, you can browse the repository and explore the various courses that align with your learning goals and interests. The repository provides a well-organized structure, with courses categorized by topics, making it easier to navigate and find relevant content.
Competitor Comparisons
12 weeks, 26 lessons, 52 quizzes, classic Machine Learning for all
Pros of ML-For-Beginners
- Structured curriculum with lesson plans and quizzes
- Covers a wide range of ML topics, including ethics and real-world applications
- Includes hands-on projects and exercises for practical learning
Cons of ML-For-Beginners
- Text-based content may not be as engaging as video lectures
- Requires more self-discipline to follow through the curriculum
- May not cover advanced topics in as much depth as specialized video courses
Code Comparison
ML-For-Beginners:
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Split the data, then fit a logistic regression classifier
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression()
model.fit(X_train, y_train)
ML-YouTube-Courses: (No specific code examples provided, as it's a curated list of video courses)
ML-For-Beginners offers a comprehensive, structured approach to learning machine learning with hands-on exercises and a broad curriculum. It's ideal for self-paced learners who prefer text-based content and practical projects. ML-YouTube-Courses, on the other hand, provides a curated list of video courses, which may be more engaging for visual learners and offer in-depth coverage of specific topics from various instructors.
A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in Python using Scikit-Learn, Keras and TensorFlow 2.
Pros of handson-ml2
- Comprehensive Jupyter notebooks with detailed explanations and code examples
- Covers a wide range of machine learning topics, from basics to advanced techniques
- Regularly updated to include the latest TensorFlow and Scikit-learn features
Cons of handson-ml2
- Focuses primarily on TensorFlow and Scikit-learn, limiting exposure to other libraries
- Requires more self-guided learning compared to structured video courses
- May be overwhelming for absolute beginners due to its depth and breadth
Code Comparison
handson-ml2:
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=[4]),
    tf.keras.layers.Dense(1)
])
ML-YouTube-Courses:
# No direct code examples provided in the repository
# The project focuses on curating YouTube course links
ML-YouTube-Courses is a curated list of machine learning courses available on YouTube, while handson-ml2 is a hands-on guide with code examples. The former provides a variety of learning resources from different instructors, while the latter offers a more structured, code-centric approach to learning machine learning concepts and implementations.
A roadmap connecting many of the most important concepts in machine learning, how to learn them and what tools to use to perform them.
Pros of machine-learning-roadmap
- Provides a structured learning path with clear milestones
- Includes practical projects and hands-on exercises
- Offers a comprehensive overview of ML concepts and tools
Cons of machine-learning-roadmap
- Less frequently updated compared to ML-YouTube-Courses
- Focuses primarily on Python-based ML, potentially limiting exposure to other languages
- May not cover the most recent advancements in ML as extensively
Code Comparison
ML-YouTube-Courses typically doesn't include code snippets, while machine-learning-roadmap provides practical examples:
# Example from machine-learning-roadmap
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
# Load data, split into train/test sets, and train a model
df = pd.read_csv("data.csv")
X_train, X_test, y_train, y_test = train_test_split(df.drop("target", axis=1), df["target"])
model = RandomForestClassifier().fit(X_train, y_train)
Both repositories serve as valuable resources for learning machine learning, with ML-YouTube-Courses offering a broader collection of video courses and machine-learning-roadmap providing a more structured learning path with practical coding examples.
A complete daily plan for studying to become a machine learning engineer.
Pros of machine-learning-for-software-engineers
- Provides a comprehensive roadmap for software engineers to learn machine learning
- Includes a wide range of resources, including books, courses, and articles
- Offers a step-by-step approach to learning ML concepts and techniques
Cons of machine-learning-for-software-engineers
- Less focus on video content compared to ML-YouTube-Courses
- May not be as frequently updated as ML-YouTube-Courses
- Lacks the structured course format found in ML-YouTube-Courses
Code Comparison
While both repositories primarily focus on educational resources rather than code examples, machine-learning-for-software-engineers does include some code snippets in its README. Here's an example of a Python code snippet from the repository:
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # Assumes x is already a sigmoid output
    return x * (1 - x)
ML-YouTube-Courses doesn't typically include code snippets directly in its README, as it primarily links to video courses. However, the linked courses often contain code examples and practical implementations of machine learning concepts.
A curated list of awesome Machine Learning frameworks, libraries and software.
Pros of awesome-machine-learning
- Broader scope, covering various ML topics and resources beyond just video courses
- Includes tools, libraries, and frameworks for practical ML implementation
- Offers resources in multiple programming languages (Python, R, Java, etc.)
Cons of awesome-machine-learning
- Less focused on structured learning paths compared to ML-YouTube-Courses
- May be overwhelming for beginners due to the vast amount of information
- Lacks the curated, course-like organization found in ML-YouTube-Courses
Code comparison
While both repositories primarily consist of curated lists rather than code, here's a brief comparison of their README structures:
ML-YouTube-Courses:
## Machine Learning
| Course Name | Institution | Lecturer | Year | Course Link | Language |
|-------------|-------------|----------|------|-------------|----------|
| Machine Learning Specialization | Stanford | Andrew Ng | 2022 | [Link](https://www.coursera.org/specializations/machine-learning-introduction) | English |
awesome-machine-learning:
## Python
#### Computer Vision
* [SimpleCV](http://simplecv.org/) - An open source computer vision framework that gives access to several high-powered computer vision libraries, such as OpenCV. Written on Python and runs on Mac, Windows, and Ubuntu Linux.
The ML-YouTube-Courses repository focuses on a tabular format for course listings, while awesome-machine-learning uses nested markdown lists to organize resources by category and programming language.
VIP cheatsheets for Stanford's CS 229 Machine Learning
Pros of stanford-cs-229-machine-learning
- Focused on a specific, well-structured Stanford course
- Provides concise cheatsheets and refreshers for quick reference
- Includes multilingual translations of materials
Cons of stanford-cs-229-machine-learning
- Limited to content from a single course
- May not cover the most recent developments in machine learning
- Less diverse in terms of content creators and perspectives
Code Comparison
While both repositories primarily focus on educational content rather than code, ML-YouTube-Courses occasionally includes code snippets from various courses. stanford-cs-229-machine-learning doesn't typically include code samples, but rather focuses on mathematical formulas and concepts.
ML-YouTube-Courses example (Python):
import numpy as np
from sklearn.linear_model import LinearRegression
X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = np.dot(X, np.array([1, 2])) + 3
reg = LinearRegression().fit(X, y)
print(reg.score(X, y))
stanford-cs-229-machine-learning example (LaTeX formula):
\hat{\theta} = (X^T X)^{-1} X^T y
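To make the two examples above concrete, the normal equation and scikit-learn's LinearRegression can be checked against each other on the same toy data. This is a minimal sketch of my own, not code from either repository:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Same toy data as the snippet above: y = 1*x1 + 2*x2 + 3
X = np.array([[1, 1], [1, 2], [2, 2], [2, 3]])
y = X @ np.array([1, 2]) + 3

# Normal equation with a bias column appended: theta_hat = (X^T X)^{-1} X^T y
Xb = np.hstack([X, np.ones((len(X), 1))])
theta_hat = np.linalg.inv(Xb.T @ Xb) @ Xb.T @ y

reg = LinearRegression().fit(X, y)
print(theta_hat)                   # ~[1, 2, 3]
print(reg.coef_, reg.intercept_)   # ~[1, 2] and ~3
```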
Both repositories serve as valuable resources for machine learning education, with ML-YouTube-Courses offering a broader range of content from various sources, while stanford-cs-229-machine-learning provides a more focused and structured approach based on a specific course.
README
📺 ML YouTube Courses
At DAIR.AI we ❤️ open AI education. In this repo, we index and organize some of the best and most recent machine learning courses available on YouTube.
Machine Learning
- Caltech CS156: Learning from Data
- Stanford CS229: Machine Learning
- Making Friends with Machine Learning
- Applied Machine Learning
- Introduction to Machine Learning (Tübingen)
- Machine Learning Lecture (Stefan Harmeling)
- Statistical Machine Learning (Tübingen)
- Probabilistic Machine Learning
- MIT 6.S897: Machine Learning for Healthcare (2019)
Deep Learning
- Neural Networks: Zero to Hero
- MIT: Deep Learning for Art, Aesthetics, and Creativity
- Stanford CS230: Deep Learning (2018)
- Introduction to Deep Learning (MIT)
- CMU Introduction to Deep Learning (11-785)
- Deep Learning: CS 182
- Deep Unsupervised Learning
- NYU Deep Learning SP21
- Foundation Models
- Deep Learning (Tübingen)
Scientific Machine Learning
Practical Machine Learning
- LLMOps: Building Real-World Applications With Large Language Models
- Evaluating and Debugging Generative AI
- ChatGPT Prompt Engineering for Developers
- LangChain for LLM Application Development
- LangChain: Chat with Your Data
- Building Systems with the ChatGPT API
- LangChain & Vector Databases in Production
- Building LLM-Powered Apps
- Full Stack LLM Bootcamp
- Full Stack Deep Learning
- Practical Deep Learning for Coders
- Stanford MLSys Seminars
- Machine Learning Engineering for Production (MLOps)
- MIT Introduction to Data-Centric AI
Natural Language Processing
- XCS224U: Natural Language Understanding (2023)
- Stanford CS25 - Transformers United
- NLP Course (Hugging Face)
- CS224N: Natural Language Processing with Deep Learning
- CMU Neural Networks for NLP
- CS224U: Natural Language Understanding
- CMU Advanced NLP 2021/2022/2024
- Multilingual NLP
- Advanced NLP
Computer Vision
- CS231N: Convolutional Neural Networks for Visual Recognition
- Deep Learning for Computer Vision
- Deep Learning for Computer Vision (DL4CV)
- Deep Learning for Computer Vision (neuralearn.ai)
Reinforcement Learning
- Deep Reinforcement Learning
- Reinforcement Learning Lecture Series (DeepMind)
- Reinforcement Learning (Polytechnique Montreal, Fall 2021)
- Foundations of Deep RL
- Stanford CS234: Reinforcement Learning
Graph Machine Learning
Multi-Task Learning
Others
Caltech CS156: Learning from Data
An introductory course in machine learning that covers the basic theory, algorithms, and applications.
- Lecture 1: The Learning Problem
- Lecture 2: Is Learning Feasible?
- Lecture 3: The Linear Model I
- Lecture 4: Error and Noise
- Lecture 5: Training versus Testing
- Lecture 6: Theory of Generalization
- Lecture 7: The VC Dimension
- Lecture 8: Bias-Variance Tradeoff
- Lecture 9: The Linear Model II
- Lecture 10: Neural Networks
- Lecture 11: Overfitting
- Lecture 12: Regularization
- Lecture 13: Validation
- Lecture 14: Support Vector Machines
- Lecture 15: Kernel Methods
- Lecture 16: Radial Basis Functions
- Lecture 17: Three Learning Principles
- Lecture 18: Epilogue
🔗 Link to Course
Stanford CS229: Machine Learning
To learn some of the basics of ML:
- Linear Regression and Gradient Descent
- Logistic Regression
- Naive Bayes
- SVMs
- Kernels
- Decision Trees
- Introduction to Neural Networks
- Debugging ML Models ...
🔗 Link to Course
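The course develops these methods on the whiteboard; as a rough companion (my own sketch, not course code), batch gradient descent for linear regression fits in a few lines of NumPy:

```python
import numpy as np

# Synthetic data: y = 4 + 3x + noise
rng = np.random.default_rng(0)
X = 2 * rng.random((100, 1))
y = 4 + 3 * X[:, 0] + rng.normal(scale=0.5, size=100)

# Batch gradient descent on the mean squared error, with a bias column
Xb = np.hstack([np.ones((100, 1)), X])
theta = np.zeros(2)
lr = 0.1
for _ in range(1000):
    grad = (2 / len(y)) * Xb.T @ (Xb @ theta - y)
    theta -= lr * grad

print(theta)  # approximately [4, 3]
```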
Making Friends with Machine Learning
A series of mini lectures covering various introductory topics in ML:
- Explainability in AI
- Classification vs. Regression
- Precision vs. Recall
- Statistical Significance
- Clustering and K-means
- Ensemble models ...
🔗 Link to Course
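The clustering and precision/recall items above map directly onto a few lines of scikit-learn; the toy example below is my own illustration, not course material:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import precision_score, recall_score

# K-means on two obvious blobs
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # one label for the first three points, another for the rest

# Precision vs. recall on toy binary predictions
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))  # 0.75 0.75
```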
Neural Networks: Zero to Hero (by Andrej Karpathy)
Course providing an in-depth overview of neural networks.
- Backpropagation
- Spelled-out intro to Language Modeling
- Activation and Gradients
- Becoming a Backprop Ninja
🔗 Link to Course
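The course builds backpropagation from scratch, micrograd-style; for comparison, here is the same idea expressed through PyTorch's autograd (a minimal sketch of my own, not course code):

```python
import torch

# A tiny computation graph: loss = (w * x + b - y)^2
x = torch.tensor(2.0)
y = torch.tensor(7.0)
w = torch.tensor(1.5, requires_grad=True)
b = torch.tensor(0.5, requires_grad=True)

loss = (w * x + b - y) ** 2
loss.backward()  # backpropagation fills in w.grad and b.grad

print(w.grad)  # d(loss)/dw = 2 * (w*x + b - y) * x = -14
print(b.grad)  # d(loss)/db = 2 * (w*x + b - y)     = -7
```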
MIT: Deep Learning for Art, Aesthetics, and Creativity
Covers the application of deep learning for art, aesthetics, and creativity.
- Nostalgia -> Art -> Creativity -> Evolution as Data + Direction
- Efficient GANs
- Explorations in AI for Creativity
- Neural Abstractions
- Easy 3D Content Creation with Consistent Neural Fields ...
🔗 Link to Course
Stanford CS230: Deep Learning (2018)
Covers the foundations of deep learning, how to build different neural networks (CNNs, RNNs, LSTMs, etc.), how to lead machine learning projects, and career advice for deep learning practitioners.
- Deep Learning Intuition
- Adversarial examples - GANs
- Full-cycle of a Deep Learning Project
- AI and Healthcare
- Deep Learning Strategy
- Interpretability of Neural Networks
- Career Advice and Reading Research Papers
- Deep Reinforcement Learning
🔗 Link to Course 🔗 Link to Materials
Applied Machine Learning
To learn some of the most widely used techniques in ML:
- Optimization and Calculus
- Overfitting and Underfitting
- Regularization
- Monte Carlo Estimation
- Maximum Likelihood Learning
- Nearest Neighbours
- ...
🔗 Link to Course
Introduction to Machine Learning (Tübingen)
The course serves as a basic introduction to machine learning and covers key concepts in regression, classification, optimization, regularization, clustering, and dimensionality reduction.
- Linear regression
- Logistic regression
- Regularization
- Boosting
- Neural networks
- PCA
- Clustering
- ...
🔗 Link to Course
Machine Learning Lecture (Stefan Harmeling)
Covers many fundamental ML concepts:
- Bayes rule
- From logic to probabilities
- Distributions
- Matrix Differential Calculus
- PCA
- K-means and EM
- Causality
- Gaussian Processes
- ...
🔗 Link to Course
Statistical Machine Learning (Tübingen)
The course covers the standard paradigms and algorithms in statistical machine learning.
- KNN
- Bayesian decision theory
- Convex optimization
- Linear and ridge regression
- Logistic regression
- SVM
- Random Forests
- Boosting
- PCA
- Clustering
- ...
🔗 Link to Course
Practical Deep Learning for Coders
This course covers topics such as how to:
- Build and train deep learning models for computer vision, natural language processing, tabular analysis, and collaborative filtering problems
- Create random forests and regression models
- Deploy models
- Use PyTorch, the world's fastest growing deep learning software, plus popular libraries like fastai and Hugging Face
- Foundations and Deep Dive to Diffusion Models
- ...
Stanford MLSys Seminars
A seminar series on all sorts of topics related to building machine learning systems.
🔗 Link to Lectures
Machine Learning Engineering for Production (MLOps)
Specialization course on MLOps by Andrew Ng.
🔗 Link to Lectures
MIT Introduction to Data-Centric AI
Covers the emerging science of Data-Centric AI (DCAI) that studies techniques to improve datasets, which is often the best way to improve performance in practical ML applications. Topics include:
- Data-Centric AI vs. Model-Centric AI
- Label Errors
- Dataset Creation and Curation
- Data-centric Evaluation of ML Models
- Class Imbalance, Outliers, and Distribution Shift
- ...
🔗 Course Website
🔗 Lecture Videos
🔗 Lab Assignments
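As a rough illustration of the label-error topic (my own sketch, not code from the course or its labs), one simple recipe is to flag examples whose given label receives low out-of-sample predicted probability:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_iris(return_X_y=True)
y_noisy = y.copy()
y_noisy[::25] = (y_noisy[::25] + 1) % 3  # inject a few label errors

# Out-of-sample class probabilities via cross-validation
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y_noisy,
                          cv=5, method="predict_proba")

# Examples whose own label gets the lowest probability are label-error suspects
self_confidence = proba[np.arange(len(y_noisy)), y_noisy]
print(np.argsort(self_confidence)[:10])  # the flipped indices (0, 25, 50, ...) should rank high here
```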
Machine Learning with Graphs (Stanford)
To learn some of the latest graph techniques in machine learning:
- PageRank
- Matrix Factorizing
- Node Embeddings
- Graph Neural Networks
- Knowledge Graphs
- Deep Generative Models for Graphs
- ...
🔗 Link to Course
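For a taste of the first topic, PageRank on a toy graph is essentially a one-liner with NetworkX (an illustrative sketch, not course material):

```python
import networkx as nx

# A tiny directed graph
G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")])

# PageRank scores with the usual damping factor
scores = nx.pagerank(G, alpha=0.85)
print(scores)  # C and A end up with the highest scores in this toy graph
```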
Probabilistic Machine Learning
To learn the probabilistic paradigm of ML:
- Reasoning about uncertainty
- Continuous Variables
- Sampling
- Markov Chain Monte Carlo
- Gaussian Distributions
- Graphical Models
- Tuning Inference Algorithms
- ...
🔗 Link to Course
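As a concrete example of the sampling topics (a self-contained sketch, not course code), here is a random-walk Metropolis-Hastings sampler targeting a standard Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    return -0.5 * x ** 2  # unnormalized log-density of N(0, 1)

# Random-walk Metropolis-Hastings
samples, x = [], 0.0
for _ in range(20000):
    proposal = x + rng.normal(scale=1.0)
    if np.log(rng.random()) < log_target(proposal) - log_target(x):
        x = proposal  # accept
    samples.append(x)

burned = samples[5000:]
print(np.mean(burned), np.std(burned))  # close to 0 and 1
```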
MIT 6.S897: Machine Learning for Healthcare (2019)
This course introduces students to machine learning in healthcare, including the nature of clinical data and the use of machine learning for risk stratification, disease progression modeling, precision medicine, diagnosis, subtype discovery, and improving clinical workflows.
🔗 Link to Course
Introduction to Deep Learning
To learn some of the fundamentals of deep learning:
- Introduction to Deep Learning
🔗 Link to Course
CMU Introduction to Deep Learning (11-785)
The course starts off gradually with MLPs (multi-layer perceptrons) and then progresses to concepts like attention and sequence-to-sequence models.
🔗 Link to Course
🔗 Lectures
🔗 Tutorials/Recitations
Deep Learning: CS 182
To learn some of the widely used techniques in deep learning:
- Machine Learning Basics
- Error Analysis
- Optimization
- Backpropagation
- Initialization
- Batch Normalization
- Style transfer
- Imitation Learning
- ...
🔗 Link to Course
Deep Unsupervised Learning
To learn the latest and most widely used techniques in deep unsupervised learning:
- Autoregressive Models
- Flow Models
- Latent Variable Models
- Self-supervised learning
- Implicit Models
- Compression
- ...
🔗 Link to Course
NYU Deep Learning SP21
To learn some of the advanced techniques in deep learning:
- Neural Nets: rotation and squashing
- Latent Variable Energy Based Models
- Unsupervised Learning
- Generative Adversarial Networks
- Autoencoders
- ...
🔗 Link to Course
Foundation Models
To learn about foundation models like GPT-3, CLIP, Flamingo, Codex, and DINO.
🔗 Link to Course
Deep Learning (Tübingen)
This course introduces the practical and theoretical principles of deep neural networks.
- Computation graphs
- Activation functions and loss functions
- Training, regularization and data augmentation
- Basic and state-of-the-art deep neural network architectures including convolutional networks and graph neural networks
- Deep generative models such as auto-encoders, variational auto-encoders and generative adversarial networks
- ...
🔗 Link to Course
Parallel Computing and Scientific Machine Learning
- The Basics of Scientific Simulators
- Introduction to Parallel Computing
- Continuous Dynamics
- Inverse Problems and Differentiable Programming
- Distributed Parallel Computing
- Physics-Informed Neural Networks and Neural Differential Equations
- Probabilistic Programming, AKA Bayesian Estimation on Programs
- Globalizing the Understanding of Models
🔗 Link to Course
XCS224U: Natural Language Understanding (2023)
This course covers topics such as:
- Contextual Word Representations
- Information Retrieval
- In-context learning
- Behavioral Evaluation of NLU models
- NLP Methods and Metrics
- ...
🔗 Link to Course
Stanford CS25 - Transformers United
This course consists of lectures focused on Transformers, providing a deep dive into the architecture and its applications:
- Introduction to Transformers
- Transformers in Language: GPT-3, Codex
- Applications in Vision
- Transformers in RL & Universal Compute Engines
- Scaling transformers
- Interpretability with transformers
- ...
🔗 Link to Course
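To ground the first topic, the scaled dot-product attention at the heart of the Transformer can be written out in a few lines of NumPy (an illustrative sketch, not course material):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 4)), rng.normal(size=(5, 4)), rng.normal(size=(5, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4): one output per query
```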
NLP Course (Hugging Face)
Learn about different NLP concepts and how to apply language models and Transformers to NLP:
- What is Transfer Learning?
- BPE Tokenization
- Batching inputs
- Fine-tuning models
- Text embeddings and semantic search
- Model evaluation
- ...
🔗 Link to Course
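The course is hands-on with the Hugging Face Transformers library; its tokenization and pipeline entry points look roughly like the sketch below (the checkpoint is a common default I chose for illustration, not a course requirement):

```python
from transformers import AutoTokenizer, pipeline

# Subword tokenization (WordPiece/BPE depending on the checkpoint)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["Hello world!", "Transformers are fun."],
                  padding=True, truncation=True, return_tensors="pt")
print(batch["input_ids"].shape)

# A ready-made pipeline for quick inference (downloads a default model)
classifier = pipeline("sentiment-analysis")
print(classifier("I love this course!"))
```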
CS224N: Natural Language Processing with Deep Learning
To learn the latest approaches for deep learning based NLP:
- Dependency parsing
- Language models and RNNs
- Question Answering
- Transformers and pretraining
- Natural Language Generation
- T5 and Large Language Models
- Future of NLP
- ...
🔗 Link to Course
CMU Neural Networks for NLP
To learn the latest neural network based techniques for NLP:
- Language Modeling
- Efficiency tricks
- Conditioned Generation
- Structured Prediction
- Model Interpretation
- Advanced Search Algorithms
- ...
🔗 Link to Course
CS224U: Natural Language Understanding
To learn the latest concepts in natural language understanding:
- Grounded Language Understanding
- Relation Extraction
- Natural Language Inference (NLI)
- NLU and Neural Information Extraction
- Adversarial testing
- ...
🔗 Link to Course
CMU Advanced NLP
To learn:
- Basics of modern NLP techniques
- Multi-task, Multi-domain, multi-lingual learning
- Prompting + Sequence-to-sequence pre-training
- Interpreting and Debugging NLP Models
- Learning from Knowledge-bases
- Adversarial learning
- ...
🔗 Link to 2021 Edition
🔗 Link to 2022 Edition
🔗 Link to 2024 Edition
Multilingual NLP
To learn the latest concepts for doing multilingual NLP:
- Typology
- Words, Part of Speech, and Morphology
- Advanced Text Classification
- Machine Translation
- Data Augmentation for MT
- Low Resource ASR
- Active Learning
- ...
🔗 Link to 2020 Course
🔗 Link to 2022 Course
Advanced NLP
To learn advanced concepts in NLP:
- Attention Mechanisms
- Transformers
- BERT
- Question Answering
- Model Distillation
- Vision + Language
- Ethics in NLP
- Commonsense Reasoning
- ...
🔗 Link to Course
CS231N: Convolutional Neural Networks for Visual Recognition
Stanford's famous CS231n course. The videos are only available for the Spring 2017 semester. The course is currently known as Deep Learning for Computer Vision, but the Spring 2017 version is titled Convolutional Neural Networks for Visual Recognition.
- Image Classification
- Loss Functions and Optimization
- Introduction to Neural Networks
- Convolutional Neural Networks
- Training Neural Networks
- Deep Learning Software
- CNN Architectures
- Recurrent Neural Networks
- Detection and Segmentation
- Visualizing and Understanding
- Generative Models
- Deep Reinforcement Learning
🔗 Link to Course 🔗 Link to Materials
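The course's central building block, the convolutional layer, is a few lines in modern PyTorch; this is a minimal sketch of my own (the course assignments implement the same layer from scratch in NumPy):

```python
import torch
import torch.nn as nn

# A tiny CNN: conv -> ReLU -> pool -> linear classifier over 10 classes
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),
)

x = torch.randn(8, 3, 32, 32)  # a batch of 8 CIFAR-sized images
print(model(x).shape)          # torch.Size([8, 10])
```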
Deep Learning for Computer Vision
To learn some of the fundamental concepts in CV:
- Introduction to deep learning for CV
- Image Classification
- Convolutional Networks
- Attention Networks
- Detection and Segmentation
- Generative Models
🔗 Link to Course
Deep Learning for Computer Vision (DL4CV)
To learn modern methods for computer vision:
- CNNs
- Advanced PyTorch
- Understanding Neural Networks
- RNN, Attention and ViTs
- Generative Models
- GPU Fundamentals
- Self-Supervision
- Neural Rendering
- Efficient Architectures
🔗 Link to Course
Deep Learning for Computer Vision (neuralearn.ai)
To learn modern methods for computer vision:
- Self-Supervised Learning
- Neural Rendering
- Efficient Architectures
- Machine Learning Operations (MLOps)
- Modern Convolutional Neural Networks
- Transformers in Vision
- Model Deployment
🔗 Link to Course
AMMI Geometric Deep Learning Course
To learn about concepts in geometric deep learning:
- Learning in High Dimensions
- Geometric Priors
- Grids
- Manifolds and Meshes
- Sequences and Time Warping
- ...
🔗 Link to Course
Deep Reinforcement Learning
To learn the latest concepts in deep RL:
- Intro to RL
- RL algorithms
- Real-world sequential decision making
- Supervised learning of behaviors
- Deep imitation learning
- Cost functions and reward functions
- ...
🔗 Link to Course
Reinforcement Learning Lecture Series (DeepMind)
The Reinforcement Learning Lecture Series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence.
- Introduction to RL
- Dynamic Programming
- Model-free algorithms
- Deep reinforcement learning
- ...
🔗 Link to Course
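As a pocket-sized example of the model-free algorithms covered in the series (my own sketch, not lecture code), here is tabular Q-learning on a trivial chain environment:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 5-state chain: action 1 moves right, action 0 moves left; reward 1 for reaching the end
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

for _ in range(1000):
    s = 0
    while s < n_states - 1:
        a = rng.integers(n_actions)  # random behavior policy (Q-learning is off-policy)
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q[:-1].argmax(axis=1))  # greedy policy in non-terminal states: always move right
```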
LLMOps: Building Real-World Applications With Large Language Models
Learn to build modern software with LLMs using the newest tools and techniques in the field.
🔗 Link to Course
Evaluating and Debugging Generative AI
You'll learn:
- Instrument A Jupyter Notebook
- Manage Hyperparameters Config
- Log Run Metrics
- Collect artifacts for dataset and model versioning
- Log experiment results
- Trace prompts and responses for LLMs
- ...
🔗 Link to Course
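The course is built around Weights & Biases tooling; the basic experiment-logging pattern it teaches looks roughly like the sketch below (placeholder metrics of my own, a W&B account is required, and this is not code taken from the course):

```python
import wandb

# Start a run with a hyperparameter config
run = wandb.init(project="evaluating-generative-ai", config={"lr": 3e-4, "epochs": 3})

for epoch in range(run.config["epochs"]):
    train_loss = 1.0 / (epoch + 1)  # placeholder metric for illustration
    wandb.log({"epoch": epoch, "train_loss": train_loss})

run.finish()
```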
ChatGPT Prompt Engineering for Developers
Learn how to use a large language model (LLM) to quickly build new and powerful applications.
🔗 Link to Course
LangChain for LLM Application Development
You'll learn:
- Models, Prompt, and Parsers
- Memories for LLMs
- Chains
- Question Answering over Documents
- Agents
🔗 Link to Course
LangChain: Chat with Your Data
You'll learn about:
- Document Loading
- Document Splitting
- Vector Stores and Embeddings
- Retrieval
- Question Answering
- Chat
🔗 Link to Course
Building Systems with the ChatGPT API
Learn how to automate complex workflows using chain calls to a large language model.
🔗 Link to Course
LangChain & Vector Databases in Production
Learn how to use LangChain and Vector DBs in Production:
- LLMs and LangChain
- Learning how to Prompt
- Keeping Knowledge Organized with Indexes
- Combining Components Together with Chains
- ...
🔗 Link to Course
Building LLM-Powered Apps
Learn how to build LLM-powered applications using LLM APIs
- Unpacking LLM APIs
- Building a Baseline LLM Application
- Enhancing and Optimizing LLM Applications
- ...
🔗 Link to Course
Full Stack LLM Bootcamp
To learn how to build and deploy LLM-powered applications:
- Learn to Spell: Prompt Engineering
- LLMOPs
- UX for Language User Interfaces
- Augmented Language Models
- Launch an LLM App in One Hour
- LLM Foundations
- Project Walkthrough: askFSDL
- ...
🔗 Link to Course
Full Stack Deep Learning
To learn full-stack production deep learning:
- ML Projects
- Infrastructure and Tooling
- Experiment Managing
- Troubleshooting DNNs
- Data Management
- Data Labeling
- Monitoring ML Models
- Web deployment
- ...
🔗 Link to Course
Introduction to Deep Learning and Deep Generative Models
Covers the fundamental concepts of deep learning
- Single-layer neural networks and gradient descent
- Multi-layer neural networks and backpropagation
- Convolutional neural networks for images
- Recurrent neural networks for text
- Autoencoders, variational autoencoders, and generative adversarial networks
- Encoder-decoder recurrent neural networks and transformers
- PyTorch code examples
🔗 Link to Course 🔗 Link to Materials
Self-Driving Cars (Tübingen)
Covers the most dominant paradigms of self-driving cars: modular pipeline-based approaches as well as deep-learning based end-to-end driving techniques.
- Camera, lidar and radar-based perception
- Localization, navigation, path planning
- Vehicle modeling/control
- Deep Learning
- Imitation learning
- Reinforcement learning
🔗 Link to Course
Reinforcement Learning (Polytechnique Montreal, Fall 2021)
Designing autonomous decision-making systems is one of the longstanding goals of Artificial Intelligence. Such decision-making systems, if realized, can have a big impact on machine learning for robotics, game playing, control, and health care, to name a few. This course introduces Reinforcement Learning as a general framework for designing such autonomous decision-making systems.
- Introduction to RL
- Multi-armed bandits
- Policy Gradient Methods
- Contextual Bandits
- Finite Markov Decision Process
- Dynamic Programming
- Policy Iteration, Value Iteration
- Monte Carlo Methods
- ...
🔗 Link to Course 🔗 Link to Materials
Foundations of Deep RL
A mini 6-lecture series by Pieter Abbeel.
- MDPs, Exact Solution Methods, Max-ent RL
- Deep Q-Learning
- Policy Gradients and Advantage Estimation
- TRPO and PPO
- DDPG and SAC
- Model-based RL
🔗 Link to Course
Stanford CS234: Reinforcement Learning
Covers topics from basic concepts of Reinforcement Learning to more advanced ones:
- Markov decision processes & planning
- Model-free policy evaluation
- Model-free control
- Reinforcement learning with function approximation & Deep RL
- Policy Search
- Exploration
- ...
🔗 Link to Course 🔗 Link to Materials
Stanford CS330: Deep Multi-Task and Meta Learning
This is a graduate-level course covering different aspects of deep multi-task and meta learning.
- Multi-task learning, transfer learning basics
- Meta-learning algorithms
- Advanced meta-learning topics
- Multi-task RL, goal-conditioned RL
- Meta-reinforcement learning
- Hierarchical RL
- Lifelong learning
- Open problems
🔗 Link to Course 🔗 Link to Materials
MIT Deep Learning in Life Sciences
A course introducing foundations of ML for applications in genomics and the life sciences more broadly.
- Interpreting ML Models
- DNA Accessibility, Promoters and Enhancers
- Chromatin and gene regulation
- Gene Expression, Splicing
- RNA-seq, Splicing
- Single cell RNA-sequencing
- Dimensionality Reduction, Genetics, and Variation
- Drug Discovery
- Protein Structure Prediction
- Protein Folding
- Imaging and Cancer
- Neuroscience
🔗 Link to Course
🔗 Link to Materials
Advanced Robotics: UC Berkeley
This course is from Pieter Abbeel and covers a review of reinforcement learning before moving on to applications in robotics.
- MDPs: Exact Methods
- Discretization of Continuous State Space MDPs
- Function Approximation / Feature-based Representations
- LQR, iterative LQR / Differential Dynamic Programming
- ...
🔗 Link to Course 🔗 Link to Materials
Reach out on Twitter if you have any questions.
If you are interested in contributing, feel free to open a PR with a link to the course. It will take a bit of time, but I have plans to do many things with these individual lectures. We can summarize the lectures, include notes, provide additional reading material, indicate the difficulty of the content, etc.
You can now find ML Course notes here.