Top Related Projects
- PyTorch: Tensors and dynamic neural networks in Python with strong GPU acceleration
- Keras: Deep learning for humans
- ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
- MXNet: Lightweight, portable, flexible distributed/mobile deep learning with a dynamic, mutation-aware dataflow dependency scheduler; for Python, R, Julia, Scala, Go, JavaScript and more
- JAX: Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
- scikit-learn: machine learning in Python
Quick Overview
TensorFlow is an open-source machine learning framework developed by Google. It provides a comprehensive ecosystem of tools, libraries, and community resources for building and deploying machine learning models. TensorFlow is widely used in both research and production environments for various AI applications.
Pros
- Flexible and scalable architecture supporting multiple platforms (CPU, GPU, TPU)
- Extensive documentation, tutorials, and community support
- Powerful visualization tools like TensorBoard for model debugging and optimization
- Supports both high-level (Keras) and low-level APIs for model development
Cons
- Steep learning curve for beginners, especially when using low-level APIs
- Development iteration can be slower than with some other frameworks, such as PyTorch
- Large framework size, which may impact deployment in resource-constrained environments
- Frequent updates and changes can lead to compatibility issues with older code
Code Examples
- Simple neural network using the Keras API:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
- Custom training loop with a gradient tape:

```python
import tensorflow as tf

@tf.function
def train_step(model, inputs, labels, optimizer):
    with tf.GradientTape() as tape:
        predictions = model(inputs, training=True)
        # Reduce the per-example losses to a scalar before differentiating
        loss = tf.reduce_mean(tf.keras.losses.binary_crossentropy(labels, predictions))
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
```
- Loading and preprocessing data with tf.data:

```python
import tensorflow as tf

def preprocess(image, label):
    image = tf.image.resize(image, (224, 224))
    image = tf.keras.applications.mobilenet_v2.preprocess_input(image)
    return image, label

dataset = tf.keras.preprocessing.image_dataset_from_directory(
    'path/to/image/directory',
    batch_size=32,
    image_size=(256, 256)
)
dataset = dataset.map(preprocess).prefetch(tf.data.AUTOTUNE)
```
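The resulting dataset can be fed straight into training. A minimal usage sketch, assuming a compiled Keras model named `model` whose input shape matches the (224, 224, 3) images produced above:

```python
# Keras iterates the batched tf.data pipeline directly
model.fit(dataset, epochs=5)
```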
Getting Started
To get started with TensorFlow, follow these steps:
- Install TensorFlow:

```shell
pip install tensorflow
```
- Import TensorFlow and check the version:

```python
import tensorflow as tf
print(tf.__version__)
```
- Create a simple model and train it:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Create a sequential model
model = tf.keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(10,)),
    layers.Dense(64, activation='relu'),
    layers.Dense(1)
])

# Compile the model
model.compile(optimizer='adam', loss='mse', metrics=['mae'])

# Generate some random data
x_train = np.random.random((1000, 10))
y_train = np.random.random((1000, 1))

# Train the model
model.fit(x_train, y_train, epochs=5, batch_size=32)
```
This example creates a simple neural network, compiles it, and trains it on random data. You can modify this code to work with your own datasets and problem domains.
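Once trained, the model can be used for inference. A brief, illustrative continuation of the example above (the variable names are hypothetical):

```python
# Predict on a batch of new random inputs with the model trained above
x_new = np.random.random((5, 10))
predictions = model.predict(x_new)
print(predictions.shape)  # (5, 1)
```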
Competitor Comparisons
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Pros of PyTorch
- Dynamic computational graphs allow for easier debugging and more intuitive coding
- More Pythonic syntax and better integration with Python data science ecosystem
- Faster prototyping and experimentation due to its imperative programming style
Cons of PyTorch
- Smaller ecosystem and fewer pre-trained models compared to TensorFlow
- Less support for production deployment and mobile/embedded devices
- Steeper learning curve for developers coming from non-Python backgrounds
Code Comparison
PyTorch:

```python
import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = torch.add(x, y)
```

TensorFlow:

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
y = tf.constant([4, 5, 6])
z = tf.add(x, y)
```
Both frameworks offer similar functionality, but PyTorch's syntax is often considered more intuitive and Pythonic. TensorFlow's static graph approach can be more efficient for large-scale production deployments, while PyTorch's dynamic graphs are generally easier for research and experimentation. The choice between the two often depends on specific project requirements and personal preferences.
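To make the graph-versus-eager distinction concrete: TensorFlow 2 executes eagerly by default, and `tf.function` traces Python code into a reusable static graph. A minimal sketch:

```python
import tensorflow as tf

@tf.function  # traces the Python function into a static graph on first call
def scaled_sum(x, y):
    return tf.reduce_sum(x * 2.0 + y)

x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([4.0, 5.0, 6.0])
# Later calls with the same input signature reuse the traced graph
print(scaled_sum(x, y).numpy())  # 27.0
```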
Deep Learning for humans
Pros of Keras
- More user-friendly and intuitive API for beginners
- Faster prototyping and experimentation
- Better suited for smaller projects and simpler models
Cons of Keras
- Less flexibility for advanced customization
- Fewer low-level operations available
- Potentially slower execution for complex models
Code Comparison
Keras:

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation='relu', input_shape=(10,)),
    Dense(1, activation='sigmoid')
])
```

TensorFlow:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
```
The comparison shows that standalone Keras offers a more concise, straightforward way to build neural networks, while TensorFlow reaches the same API through the fully qualified tf.keras namespace. In exchange, TensorFlow exposes more advanced features and customization options for complex models and research-oriented tasks.
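As one example of that customization, TensorFlow lets you define new layers by subclassing beneath the Keras API. A minimal sketch (the ScaledDense layer is hypothetical, purely for illustration):

```python
import tensorflow as tf

class ScaledDense(tf.keras.layers.Layer):
    """Hypothetical Dense-like layer with a learnable output scale."""

    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        # Weights are created lazily, once the input shape is known
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='glorot_uniform', trainable=True)
        self.scale = self.add_weight(shape=(), initializer='ones', trainable=True)

    def call(self, inputs):
        return self.scale * tf.matmul(inputs, self.w)
```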
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
Pros of ONNX Runtime
- Broader model compatibility: Supports models from various frameworks
- Optimized for edge devices and IoT scenarios
- Easier deployment across different platforms and hardware
Cons of ONNX Runtime
- Smaller community and ecosystem compared to TensorFlow
- Less extensive documentation and learning resources
- Fewer built-in high-level APIs for model development
Code Comparison
TensorFlow example:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
```

ONNX Runtime example:

```python
import onnxruntime as ort

# input_data is assumed to be a NumPy array matching the model's input
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name
result = session.run([output_name], {input_name: input_data})
```
The TensorFlow example shows model creation, while the ONNX Runtime example demonstrates inference using a pre-trained ONNX model. TensorFlow provides a more comprehensive framework for both model development and inference, while ONNX Runtime focuses on efficient cross-platform inference for pre-trained models.
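Getting from one to the other usually means exporting the trained model to the ONNX format first. A hedged sketch using the third-party tf2onnx converter (not part of TensorFlow itself; assumed installed via `pip install tf2onnx`):

```python
import tensorflow as tf
import tf2onnx  # third-party Keras/TF-to-ONNX converter (assumption: installed)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(32,)),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Convert the Keras model to ONNX and write it where ONNX Runtime can load it
spec = (tf.TensorSpec((None, 32), tf.float32, name='input'),)
tf2onnx.convert.from_keras(model, input_signature=spec, output_path='model.onnx')
```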
Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
Pros of MXNet
- More lightweight and flexible, allowing for easier customization
- Better support for multiple programming languages (Python, R, Scala, Julia)
- Efficient memory usage and faster training on multi-GPU systems
Cons of MXNet
- Smaller community and ecosystem compared to TensorFlow
- Less comprehensive documentation and tutorials
- Fewer pre-trained models and high-level APIs available
Code Comparison
MXNet:

```python
from mxnet import nd

x = nd.array([[1, 2], [3, 4]])
y = nd.array([[5, 6], [7, 8]])
z = x + y
```

TensorFlow:

```python
import tensorflow as tf

x = tf.constant([[1, 2], [3, 4]])
y = tf.constant([[5, 6], [7, 8]])
z = tf.add(x, y)
```
Both frameworks offer similar functionality for basic operations, but MXNet's syntax is often more concise. TensorFlow provides a more extensive ecosystem and better integration with production environments, while MXNet offers greater flexibility and efficiency in certain scenarios. The choice between the two depends on specific project requirements and developer preferences.
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
Pros of JAX
- More flexible and composable, allowing for easier customization of models and training loops
- Better support for just-in-time (JIT) compilation, leading to improved performance
- Simpler API with a more functional programming style
Cons of JAX
- Smaller ecosystem and fewer pre-built models compared to TensorFlow
- Less comprehensive documentation and community support
- Steeper learning curve for developers coming from imperative programming backgrounds
Code Comparison
JAX:

```python
import jax.numpy as jnp
from jax import grad, jit

def loss(params, x, y):
    return jnp.mean((params[0] * x + params[1] - y) ** 2)

grad_loss = jit(grad(loss))  # differentiate w.r.t. params, then JIT-compile
```

TensorFlow:

```python
import tensorflow as tf

def loss(params, x, y):
    return tf.reduce_mean(tf.square(params[0] * x + params[1] - y))

# params must be tf.Variables (or explicitly watched) for the tape to track them
with tf.GradientTape() as tape:
    loss_value = loss(params, x, y)
grads = tape.gradient(loss_value, params)
```
Both examples show a simple linear regression loss function and gradient computation. JAX's approach is more functional and concise, while TensorFlow uses an imperative style with the GradientTape API.
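Continuing the JAX snippet with illustrative data (the values below are hypothetical), one hand-written gradient-descent step looks like this:

```python
import jax.numpy as jnp

params = jnp.array([0.0, 0.0])   # [slope, intercept]
x = jnp.array([0.0, 1.0, 2.0])
y = jnp.array([1.0, 4.0, 7.0])   # roughly y = 3x + 1

# One gradient-descent step using the jitted gradient function from above
params = params - 0.1 * grad_loss(params, x, y)
```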
scikit-learn: machine learning in Python
Pros of scikit-learn
- Simpler API and easier to learn for beginners
- Broader range of traditional machine learning algorithms
- Better suited for smaller datasets and simpler models
Cons of scikit-learn
- Limited support for deep learning and neural networks
- Less optimized for large-scale distributed computing
- Fewer options for GPU acceleration
Code Comparison
scikit-learn:

```python
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
```

TensorFlow:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X_train, y_train, epochs=10)
```
scikit-learn is more concise for traditional ML tasks, while TensorFlow offers more flexibility for complex neural networks. scikit-learn is ideal for quick prototyping and simpler models, whereas TensorFlow excels in deep learning and large-scale deployments. The choice between them depends on the specific requirements of your project, such as model complexity, dataset size, and computational resources available.
README
Documentation
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.
TensorFlow was originally developed by researchers and engineers working within the Machine Intelligence team at Google Brain to conduct research in machine learning and neural networks. However, the framework is versatile enough to be used in other areas as well.
TensorFlow provides stable Python and C++ APIs, as well as a non-guaranteed backward compatible API for other languages.
Keep up-to-date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.
Install
See the TensorFlow install guide for installing the pip package, enabling GPU support, using a Docker container, and building from source.
To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):
```shell
$ pip install tensorflow
```
Other devices (DirectX and MacOS-metal) are supported using Device plugins.
A smaller CPU-only package is also available:
```shell
$ pip install tensorflow-cpu
```
To update TensorFlow to the latest version, add the `--upgrade` flag to the above commands.

Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.
Try your first TensorFlow program
```shell
$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'
```
For more examples, see the TensorFlow tutorials.
Contribution guidelines
If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.
We use GitHub issues for tracking requests and bugs; please see the TensorFlow Forum for general questions and discussion, and direct specific questions to Stack Overflow.
The TensorFlow project strives to abide by generally accepted best practices in open-source software development.
Patching guidelines
Follow these steps to patch a specific version of TensorFlow, for example, to apply fixes to bugs or security vulnerabilities:
- Clone the TensorFlow repo and switch to the corresponding branch for your desired TensorFlow version, for example, branch `r2.8` for version 2.8.
- Apply (that is, cherry-pick) the desired changes and resolve any code conflicts.
- Run TensorFlow tests and ensure they pass.
- Build the TensorFlow pip package from source.
Continuous build status
You can find more community-supported platforms and configurations in the TensorFlow SIG Build community builds table.
Official Builds
Build Type | Status | Artifacts
---|---|---
Linux CPU | | PyPI
Linux GPU | | PyPI
Linux XLA | | TBA
macOS | | PyPI
Windows CPU | | PyPI
Windows GPU | | PyPI
Android | | Download
Raspberry Pi 0 and 1 | | Py3
Raspberry Pi 2 and 3 | | Py3
Libtensorflow MacOS CPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Linux CPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Linux GPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Windows CPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Libtensorflow Windows GPU | Status Temporarily Unavailable | Nightly Binary, Official GCS
Resources
- TensorFlow.org
- TensorFlow Tutorials
- TensorFlow Official Models
- TensorFlow Examples
- TensorFlow Codelabs
- TensorFlow Blog
- Learn ML with TensorFlow
- TensorFlow Twitter
- TensorFlow YouTube
- TensorFlow model optimization roadmap
- TensorFlow White Papers
- TensorBoard Visualization Toolkit
- TensorFlow Code Search
Learn more about the TensorFlow community and how to contribute.
Courses
License
Apache License 2.0