TensorFlow-Examples
TensorFlow Tutorial and Examples for Beginners (support TF v1 & v2)
Top Related Projects
Models and examples built with TensorFlow
Deep Learning for humans
A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
scikit-learn: machine learning in Python
The fastai deep learning library
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Quick Overview
TensorFlow-Examples is a comprehensive collection of TensorFlow tutorials and code examples. It aims to provide clear and concise demonstrations of various machine learning and deep learning concepts using TensorFlow, making it an excellent resource for both beginners and experienced practitioners.
Pros
- Covers a wide range of topics, from basic machine learning to advanced deep learning techniques
- Well-organized and easy to navigate, with examples categorized by complexity and topic
- Regularly updated to include new TensorFlow features and best practices
- Includes both Jupyter notebooks and Python scripts for flexibility in learning
Cons
- Some examples may not be optimized for the latest TensorFlow versions
- Limited explanations in some code examples, which may be challenging for absolute beginners
- Lacks comprehensive documentation for each example
- Some advanced topics might require additional background knowledge
Code Examples
- Basic linear regression (TF1 graph-mode API; under TF2 it runs via the `tf.compat.v1` shim with eager execution disabled):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
# Define the model: a single linear layer, pred = XW + b
X = tf.placeholder(tf.float32, [None, 1])
Y = tf.placeholder(tf.float32, [None, 1])
W = tf.Variable(tf.random_normal([1, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
pred = tf.add(tf.matmul(X, W), b)
# Define mean-squared-error loss and a gradient descent optimizer
cost = tf.reduce_mean(tf.square(Y - pred))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
- Convolutional Neural Network (CNN) for MNIST (TF1-era `tf.layers`/`tf.contrib` API):
import tensorflow as tf

# Create the model
def conv_net(x_dict, n_classes, dropout, reuse, is_training):
    with tf.variable_scope('ConvNet', reuse=reuse):
        x = x_dict['images']
        # MNIST images arrive flattened; reshape to 28x28 with 1 channel
        x = tf.reshape(x, shape=[-1, 28, 28, 1])
        conv1 = tf.layers.conv2d(x, 32, 5, activation=tf.nn.relu)
        conv1 = tf.layers.max_pooling2d(conv1, 2, 2)
        conv2 = tf.layers.conv2d(conv1, 64, 3, activation=tf.nn.relu)
        conv2 = tf.layers.max_pooling2d(conv2, 2, 2)
        fc1 = tf.contrib.layers.flatten(conv2)
        fc1 = tf.layers.dense(fc1, 1024)
        fc1 = tf.layers.dropout(fc1, rate=dropout, training=is_training)
        out = tf.layers.dense(fc1, n_classes)
    return out
- Recurrent Neural Network (RNN) for text classification (TF1-era `tf.contrib.rnn` API; `timesteps` and `num_hidden` are defined elsewhere in the example):
import tensorflow as tf

# Create RNN Model
def RNN(x, weights, biases):
    # Unstack the input into a list of `timesteps` tensors of shape (batch, features)
    x = tf.unstack(x, timesteps, 1)
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
    outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x, dtype=tf.float32)
    # Linear projection of the last time step's output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
Getting Started
To get started with TensorFlow-Examples:

1. Clone the repository:
git clone https://github.com/aymericdamien/TensorFlow-Examples.git

2. Install the required dependencies:
pip install -r requirements.txt

3. Navigate to the desired example directory and run the Python script:
cd TensorFlow-Examples/examples/3_NeuralNetworks
python neural_network.py

For Jupyter notebooks, start Jupyter and open the desired notebook:
jupyter notebook
Competitor Comparisons
Models and examples built with TensorFlow
Pros of TensorFlow Models
- Comprehensive collection of official models and implementations
- Regularly updated with state-of-the-art architectures and techniques
- Extensive documentation and community support
Cons of TensorFlow Models
- Can be overwhelming for beginners due to its vast scope
- May require more setup and dependencies for specific models
- Less focused on basic concepts and introductory examples
Code Comparison
TensorFlow Models (object detection):
import tensorflow as tf
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as viz_utils
model = tf.saved_model.load('path/to/saved_model')
category_index = label_map_util.create_category_index_from_labelmap('path/to/labelmap.pbtxt')
TensorFlow-Examples (basic neural network):
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
TensorFlow Models offers a more extensive collection of advanced models and implementations, making it suitable for researchers and experienced practitioners. TensorFlow-Examples, on the other hand, provides simpler, more accessible examples for beginners to understand core TensorFlow concepts.
Deep Learning for humans
Pros of Keras
- Higher-level API, making it easier to build and experiment with neural networks
- More user-friendly and intuitive for beginners
- Supports multiple backend engines (TensorFlow, plus historically Theano and CNTK)
Cons of Keras
- Less flexibility for low-level operations compared to TensorFlow
- May have slightly slower performance due to its higher-level abstractions
- Limited access to some advanced TensorFlow features
Code Comparison
Keras:
from keras.models import Sequential
from keras.layers import Dense
model = Sequential([
    Dense(64, activation='relu', input_shape=(784,)),
    Dense(10, activation='softmax')
])
TensorFlow-Examples:
import tensorflow.compat.v1 as tf

# TF1 graph-mode style: explicit placeholders and variables
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
The Keras code is more concise and easier to read, while the TensorFlow-Examples code offers more granular control over the model architecture. Keras abstracts away many of the low-level details, making it more accessible for beginners, while TensorFlow-Examples provides a deeper understanding of the underlying operations.
A set of examples around pytorch in Vision, Text, Reinforcement Learning, etc.
Pros of PyTorch Examples
- More comprehensive coverage of advanced topics and architectures
- Better integration with PyTorch ecosystem and latest features
- Cleaner, more modular code structure for easier understanding
Cons of PyTorch Examples
- Less beginner-friendly, assumes more prior knowledge
- Fewer basic examples and tutorials for newcomers
- Documentation may be less detailed for some examples
Code Comparison
TensorFlow Examples:
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
PyTorch Examples:
import torch.nn as nn
model = nn.Sequential(
    nn.Linear(784, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
    nn.Softmax(dim=1)
)
Both repositories provide valuable resources for learning and implementing deep learning models. TensorFlow Examples offers a more gradual learning curve with its focus on basic concepts, while PyTorch Examples provides a wider range of advanced topics and architectures. The code structure in PyTorch Examples tends to be more Pythonic and modular, which can be beneficial for experienced developers. However, beginners might find TensorFlow Examples more accessible due to its emphasis on fundamental concepts and detailed explanations.
scikit-learn: machine learning in Python
Pros of scikit-learn
- Comprehensive library for traditional machine learning algorithms
- Easier to use for beginners and simpler tasks
- Better integration with other Python scientific libraries (NumPy, Pandas)
Cons of scikit-learn
- Limited support for deep learning and neural networks
- Less suitable for large-scale, distributed machine learning tasks
- Slower performance for certain operations compared to TensorFlow
Code Comparison
scikit-learn:
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
TensorFlow-Examples:
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.fit(X_train, y_train, epochs=10)
scikit-learn is more concise for traditional ML tasks, while TensorFlow-Examples provides more flexibility for deep learning models. scikit-learn is better suited for quick prototyping and simpler models, whereas TensorFlow-Examples offers more control and is better for complex neural networks and large-scale deployments.
The fastai deep learning library
Pros of fastai
- Higher-level API, making it easier for beginners to get started with deep learning
- Integrated with PyTorch, offering more flexibility and customization options
- Includes advanced techniques like transfer learning and progressive resizing out-of-the-box
Cons of fastai
- Less focus on TensorFlow, which is still widely used in industry and research
- May abstract away some low-level details, potentially limiting understanding for those who want to dive deeper
- Smaller community compared to TensorFlow, which might result in fewer resources and third-party extensions
Code Comparison
fastai example (using the `from_name_func` loader from the fastai pets quick start):
from fastai.vision.all import *

path = untar_data(URLs.PETS)
files = get_image_files(path/"images")
def is_cat(f): return f.name[0].isupper()  # in this dataset, cat breeds have capitalized filenames
dls = ImageDataLoaders.from_name_func(path, files, label_func=is_cat,
                                      valid_pct=0.2, seed=42, item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
TensorFlow-Examples example:
import tensorflow as tf
from tensorflow.keras import layers, models
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),  # flatten feature maps before the dense classifier
    layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, validation_data=(x_val, y_val))
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Pros of transformers
- Comprehensive library for state-of-the-art NLP models
- Supports multiple frameworks (PyTorch, TensorFlow, JAX)
- Extensive documentation and community support
Cons of transformers
- Steeper learning curve for beginners
- Larger library size and potentially higher resource requirements
- Focused primarily on NLP tasks, less general-purpose
Code Comparison
transformers:
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
TensorFlow-Examples:
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
The transformers library provides pre-trained models and easy-to-use interfaces for NLP tasks, while TensorFlow-Examples offers more general-purpose TensorFlow code examples for various machine learning tasks. transformers is more specialized and feature-rich for NLP, while TensorFlow-Examples serves as a broader introduction to TensorFlow concepts and implementations.
README
TensorFlow Examples
This tutorial was designed for easily diving into TensorFlow, through examples. For readability, it includes both notebooks and source codes with explanation, for both TF v1 & v2.
It is suitable for beginners who want to find clear and concise examples about TensorFlow. Besides the traditional 'raw' TensorFlow implementations, you can also find the latest TensorFlow API practices (such as layers, estimator, dataset, ...).
Update (05/16/2020): Moving all default examples to TF2. For TF v1 examples: check here.
Tutorial index
0 - Prerequisite
1 - Introduction
- Hello World (notebook). Very simple example to learn how to print "hello world" using TensorFlow 2.0+.
- Basic Operations (notebook). A simple example that covers TensorFlow 2.0+ basic operations.
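The two introduction notebooks reduce to a few lines of TF2 code; a minimal sketch (TF2 executes eagerly, so no Session is needed):

```python
import tensorflow as tf

# TF2 executes eagerly: operations return concrete tensors immediately.
hello = tf.constant("hello world")
print(hello.numpy().decode())  # hello world

# Basic operations on constant tensors.
a = tf.constant(2)
b = tf.constant(3)
print(int(a + b))  # 5
print(int(a * b))  # 6
```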
2 - Basic Models
- Linear Regression (notebook). Implement a Linear Regression with TensorFlow 2.0+.
- Logistic Regression (notebook). Implement a Logistic Regression with TensorFlow 2.0+.
- Word2Vec (Word Embedding) (notebook). Build a Word Embedding Model (Word2Vec) from Wikipedia data, with TensorFlow 2.0+.
- GBDT (Gradient Boosted Decision Trees) (notebook). Implement Gradient Boosted Decision Trees with TensorFlow 2.0+ to predict house values using the Boston Housing dataset.
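For the Linear Regression notebook, the TF2 version replaces placeholders and Sessions with `tf.GradientTape`; a minimal sketch, with illustrative toy data (the notebook uses its own dataset and hyperparameters):

```python
import tensorflow as tf

# Toy data following y = 2x + 1; values are illustrative only.
X = tf.constant([[1.0], [2.0], [3.0], [4.0]])
Y = 2.0 * X + 1.0

W = tf.Variable(tf.random.normal([1, 1]))
b = tf.Variable(tf.zeros([1]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.05)

for _ in range(500):
    with tf.GradientTape() as tape:
        pred = tf.matmul(X, W) + b                  # linear model
        loss = tf.reduce_mean(tf.square(Y - pred))  # mean squared error
    grads = tape.gradient(loss, [W, b])
    optimizer.apply_gradients(zip(grads, [W, b]))

# After training, W approaches 2 and b approaches 1.
```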
3 - Neural Networks
Supervised
- Simple Neural Network (notebook). Use TensorFlow 2.0 'layers' and 'model' API to build a simple neural network to classify MNIST digits dataset.
- Simple Neural Network (low-level) (notebook). Raw implementation of a simple neural network to classify MNIST digits dataset.
- Convolutional Neural Network (notebook). Use TensorFlow 2.0+ 'layers' and 'model' API to build a convolutional neural network to classify MNIST digits dataset.
- Convolutional Neural Network (low-level) (notebook). Raw implementation of a convolutional neural network to classify MNIST digits dataset.
- Recurrent Neural Network (LSTM) (notebook). Build a recurrent neural network (LSTM) to classify MNIST digits dataset, using TensorFlow 2.0 'layers' and 'model' API.
- Bi-directional Recurrent Neural Network (LSTM) (notebook). Build a bi-directional recurrent neural network (LSTM) to classify MNIST digits dataset, using TensorFlow 2.0+ 'layers' and 'model' API.
- Dynamic Recurrent Neural Network (LSTM) (notebook). Build a recurrent neural network (LSTM) that performs dynamic calculation to classify sequences of variable length, using TensorFlow 2.0+ 'layers' and 'model' API.
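In the TF2 notebooks, the supervised models above are expressed with the Keras `layers`/`model` API. A sketch of a comparable MNIST CNN (filter sizes and dropout rate here are illustrative, not necessarily the notebook's exact values):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Keras counterpart of the TF1-style conv_net shown earlier in this page.
model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 5, activation='relu'),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(1024, activation='relu'),
    layers.Dropout(0.25),
    layers.Dense(10),  # logits for the 10 digit classes
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
```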
Unsupervised
- Auto-Encoder (notebook). Build an auto-encoder to encode an image to a lower dimension and re-construct it.
- DCGAN (Deep Convolutional Generative Adversarial Networks) (notebook). Build a Deep Convolutional Generative Adversarial Network (DCGAN) to generate images from noise.
4 - Utilities
- Save and Restore a model (notebook). Save and Restore a model with TensorFlow 2.0+.
- Build Custom Layers & Modules (notebook). Learn how to build your own layers / modules and integrate them into TensorFlow 2.0+ Models.
- Tensorboard (notebook). Track and visualize neural network computation graph, metrics, weights and more using TensorFlow 2.0+ tensorboard.
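The core idea of the save/restore notebook can be sketched with `tf.train.Checkpoint`, which is stable across TF2 versions (variable names and paths here are illustrative):

```python
import os
import tempfile

import tensorflow as tf

# Save a variable's state to disk, clobber it, then restore it.
v = tf.Variable(3.0)
ckpt = tf.train.Checkpoint(v=v)
prefix = os.path.join(tempfile.mkdtemp(), "ckpt")
save_path = ckpt.save(prefix)   # writes ckpt-1.index / ckpt-1.data-*

v.assign(0.0)                   # overwrite the in-memory value...
ckpt.restore(save_path)         # ...and restore it from the checkpoint
print(float(v))  # 3.0
```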
5 - Data Management
- Load and Parse data (notebook). Build efficient data pipeline with TensorFlow 2.0 (Numpy arrays, Images, CSV files, custom data, ...).
- Build and Load TFRecords (notebook). Convert data into TFRecords format, and load them with TensorFlow 2.0+.
- Image Transformation (i.e. Image Augmentation) (notebook). Apply various image augmentation techniques with TensorFlow 2.0+, to generate distorted images for training.
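The data pipeline examples build on `tf.data`; a minimal sketch of the shuffle/batch/prefetch pattern, with illustrative arrays in place of real files:

```python
import tensorflow as tf

# Build an input pipeline from in-memory arrays (values are illustrative).
features = tf.range(10, dtype=tf.float32)
labels = features * 2.0

ds = (tf.data.Dataset.from_tensor_slices((features, labels))
      .shuffle(buffer_size=10)        # randomize example order
      .batch(4)                       # group into mini-batches
      .prefetch(tf.data.AUTOTUNE))    # overlap preprocessing with training

for x, y in ds:
    pass  # each iteration yields a batch of up to 4 (feature, label) pairs
```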
6 - Hardware
- Multi-GPU Training (notebook). Train a convolutional neural network with multiple GPUs on CIFAR-10 dataset.
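Multi-GPU training in TF2 typically goes through `tf.distribute.MirroredStrategy`; a minimal sketch (the notebook targets CIFAR-10, while the model here is a placeholder, and the strategy degrades gracefully to a single replica when no GPUs are present):

```python
import tensorflow as tf

# MirroredStrategy replicates variables and gradients across available GPUs.
strategy = tf.distribute.MirroredStrategy()
print("replicas:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Variables created inside the scope are mirrored on every replica.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(32,))])
    model.compile(optimizer='adam', loss='mse')
# model.fit(...) then splits each batch across the replicas automatically.
```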
TensorFlow v1
The tutorial index for TF v1 is available here: TensorFlow v1.15 Examples. Or see below for a list of the examples.
Dataset
Some examples require MNIST dataset for training and testing. Don't worry, this dataset will automatically be downloaded when running examples. MNIST is a database of handwritten digits, for a quick description of that dataset, you can check this notebook.
Official Website: http://yann.lecun.com/exdb/mnist/.
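For the TF2 examples, the same dataset is also available directly through Keras; a quick sketch of loading it (the data is downloaded on first use and cached under `~/.keras`):

```python
import tensorflow as tf

# MNIST: 60,000 training and 10,000 test images of handwritten digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape)  # (60000, 28, 28)

# Scale pixel values from [0, 255] to [0, 1] before training.
x_train = x_train / 255.0
x_test = x_test / 255.0
```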
Installation
To download all the examples, simply clone this repository:
git clone https://github.com/aymericdamien/TensorFlow-Examples
To run them, you also need the latest version of TensorFlow. To install it:
pip install tensorflow
or, for older releases that shipped GPU support as a separate package (recent TensorFlow versions bundle GPU support in the main package):
pip install tensorflow-gpu
For more details about TensorFlow installation, you can check the TensorFlow Installation Guide.
TensorFlow v1 Examples - Index
The tutorial index for TF v1 is available here: TensorFlow v1.15 Examples.
0 - Prerequisite
1 - Introduction
- Hello World (notebook) (code). Very simple example to learn how to print "hello world" using TensorFlow.
- Basic Operations (notebook) (code). A simple example that covers TensorFlow basic operations.
- TensorFlow Eager API basics (notebook) (code). Get started with TensorFlow's Eager API.
2 - Basic Models
- Linear Regression (notebook) (code). Implement a Linear Regression with TensorFlow.
- Linear Regression (eager api) (notebook) (code). Implement a Linear Regression using TensorFlow's Eager API.
- Logistic Regression (notebook) (code). Implement a Logistic Regression with TensorFlow.
- Logistic Regression (eager api) (notebook) (code). Implement a Logistic Regression using TensorFlow's Eager API.
- Nearest Neighbor (notebook) (code). Implement Nearest Neighbor algorithm with TensorFlow.
- K-Means (notebook) (code). Build a K-Means classifier with TensorFlow.
- Random Forest (notebook) (code). Build a Random Forest classifier with TensorFlow.
- Gradient Boosted Decision Tree (GBDT) (notebook) (code). Build a Gradient Boosted Decision Tree (GBDT) with TensorFlow.
- Word2Vec (Word Embedding) (notebook) (code). Build a Word Embedding Model (Word2Vec) from Wikipedia data, with TensorFlow.
3 - Neural Networks
Supervised
- Simple Neural Network (notebook) (code). Build a simple neural network (a.k.a Multi-layer Perceptron) to classify MNIST digits dataset. Raw TensorFlow implementation.
- Simple Neural Network (tf.layers/estimator api) (notebook) (code). Use TensorFlow 'layers' and 'estimator' API to build a simple neural network (a.k.a Multi-layer Perceptron) to classify MNIST digits dataset.
- Simple Neural Network (eager api) (notebook) (code). Use TensorFlow Eager API to build a simple neural network (a.k.a Multi-layer Perceptron) to classify MNIST digits dataset.
- Convolutional Neural Network (notebook) (code). Build a convolutional neural network to classify MNIST digits dataset. Raw TensorFlow implementation.
- Convolutional Neural Network (tf.layers/estimator api) (notebook) (code). Use TensorFlow 'layers' and 'estimator' API to build a convolutional neural network to classify MNIST digits dataset.
- Recurrent Neural Network (LSTM) (notebook) (code). Build a recurrent neural network (LSTM) to classify MNIST digits dataset.
- Bi-directional Recurrent Neural Network (LSTM) (notebook) (code). Build a bi-directional recurrent neural network (LSTM) to classify MNIST digits dataset.
- Dynamic Recurrent Neural Network (LSTM) (notebook) (code). Build a recurrent neural network (LSTM) that performs dynamic calculation to classify sequences of different length.
Unsupervised
- Auto-Encoder (notebook) (code). Build an auto-encoder to encode an image to a lower dimension and re-construct it.
- Variational Auto-Encoder (notebook) (code). Build a variational auto-encoder (VAE), to encode and generate images from noise.
- GAN (Generative Adversarial Networks) (notebook) (code). Build a Generative Adversarial Network (GAN) to generate images from noise.
- DCGAN (Deep Convolutional Generative Adversarial Networks) (notebook) (code). Build a Deep Convolutional Generative Adversarial Network (DCGAN) to generate images from noise.
4 - Utilities
- Save and Restore a model (notebook) (code). Save and Restore a model with TensorFlow.
- Tensorboard - Graph and loss visualization (notebook) (code). Use Tensorboard to visualize the computation Graph and plot the loss.
- Tensorboard - Advanced visualization (notebook) (code). Going deeper into Tensorboard; visualize the variables, gradients, and more...
5 - Data Management
- Build an image dataset (notebook) (code). Build your own image dataset with TensorFlow data queues, from image folders or a dataset file.
- TensorFlow Dataset API (notebook) (code). Introducing TensorFlow Dataset API for optimizing the input data pipeline.
- Load and Parse data (notebook). Build efficient data pipeline (Numpy arrays, Images, CSV files, custom data, ...).
- Build and Load TFRecords (notebook). Convert data into TFRecords format, and load them.
- Image Transformation (i.e. Image Augmentation) (notebook). Apply various image augmentation techniques, to generate distorted images for training.
6 - Multi GPU
- Basic Operations on multi-GPU (notebook) (code). A simple example to introduce multi-GPU in TensorFlow.
- Train a Neural Network on multi-GPU (notebook) (code). A clear and simple TensorFlow implementation to train a convolutional neural network on multiple GPUs.
More Examples
The following examples are coming from TFLearn, a library that provides a simplified interface for TensorFlow. You can have a look, there are many examples and pre-built operations and layers.
Tutorials
- TFLearn Quickstart. Learn the basics of TFLearn through a concrete machine learning task. Build and train a deep neural network classifier.
Examples
- TFLearn Examples. A large collection of examples using TFLearn.