
keras-team / keras-io

Keras documentation, hosted live at keras.io


Top Related Projects

  • TensorFlow (185,446 stars): An Open Source Machine Learning Framework for Everyone
  • PyTorch (82,049 stars): Tensors and Dynamic neural networks in Python with strong GPU acceleration
  • scikit-learn: machine learning in Python
  • CNTK (17,500 stars): Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit
  • MXNet (20,763 stars): Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
  • fastai (26,090 stars): The fastai deep learning library

Quick Overview

Keras-io is the official documentation repository for Keras, a popular deep learning framework. It contains the source code for the Keras website, including guides, tutorials, and API references. This repository serves as a comprehensive resource for developers and researchers working with Keras.

Pros

  • Comprehensive and well-organized documentation
  • Regularly updated with new features and improvements
  • Includes practical examples and code snippets
  • Supports multiple backend engines (TensorFlow, JAX, and PyTorch as of Keras 3)

Cons

  • May be overwhelming for beginners due to the extensive content
  • Some advanced topics might lack detailed explanations
  • Documentation updates may lag behind the latest Keras releases
  • Limited community contributions compared to other open-source projects

Code Examples

  1. Creating a simple sequential model:
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(784,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
  2. Compiling and training a model:
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

history = model.fit(x_train, y_train, epochs=5, validation_split=0.2)
  3. Making predictions with a trained model:
predictions = model.predict(x_test)
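
For completeness, a trained model can also be evaluated and saved. The snippet below is a minimal sketch using the standard Keras API; x_test, y_test, and the "model.keras" filename are illustrative placeholders:

# Evaluate on held-out data (assumes x_test/y_test are preprocessed arrays)
test_loss, test_acc = model.evaluate(x_test, y_test)
print(f"Test accuracy: {test_acc:.3f}")

# Save the model to disk and reload it later
# (the .keras format requires a recent Keras version)
model.save("model.keras")
restored_model = keras.models.load_model("model.keras")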

Getting Started

To get started with Keras, follow these steps:

  1. Install TensorFlow and Keras:
pip install tensorflow
  2. Import Keras and create a simple model:
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(784,)),
    keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
  3. Train the model and make predictions:
model.fit(x_train, y_train, epochs=5)
predictions = model.predict(x_test)

For more detailed information and advanced usage, refer to the official Keras documentation at https://keras.io/.
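
The multi-backend support mentioned in the pros list works by choosing a backend before Keras is imported. The following is a minimal sketch for Keras 3; "jax" is just one option, and "tensorflow" or "torch" also work as long as the corresponding framework is installed:

import os

# Must be set before importing keras; defaults to "tensorflow" if unset
os.environ["KERAS_BACKEND"] = "jax"

import keras

print(keras.backend.backend())  # prints the name of the active backend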

Competitor Comparisons

TensorFlow (185,446 stars)

An Open Source Machine Learning Framework for Everyone

Pros of TensorFlow

  • More comprehensive and lower-level framework, offering greater flexibility and control
  • Supports distributed computing and deployment across various platforms
  • Larger ecosystem with more tools, extensions, and community support

Cons of TensorFlow

  • Steeper learning curve, especially for beginners
  • More verbose code compared to Keras' high-level API
  • Can be overkill for simpler machine learning tasks

Code Comparison

TensorFlow (low-level API):

import tensorflow as tf

x = tf.constant([[1], [2], [3], [4]], dtype=tf.float32)
y = tf.constant([[0], [-1], [-2], [-3]], dtype=tf.float32)

# Define the model parameters and the training loop by hand
w = tf.Variable(0.0)
b = tf.Variable(0.0)

for epoch in range(50):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(w * x + b - y))
    dw, db = tape.gradient(loss, [w, b])
    w.assign_sub(0.01 * dw)  # plain gradient descent step
    b.assign_sub(0.01 * db)

Keras (high-level API):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(1, input_shape=(1,))
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(x, y, epochs=50)

The Keras code is more concise and easier to read, while the low-level TensorFlow code offers more flexibility and control over model definition and the training loop.

PyTorch (82,049 stars)

Tensors and Dynamic neural networks in Python with strong GPU acceleration

Pros of PyTorch

  • More flexible and dynamic computational graph
  • Better support for research and prototyping
  • Stronger community and ecosystem for deep learning

Cons of PyTorch

  • Steeper learning curve for beginners
  • Less integrated high-level APIs compared to Keras
  • Slightly more verbose code for simple models

Code Comparison

Keras:

from keras.models import Sequential
from keras.layers import Dense

model = Sequential([
    Dense(64, activation='relu', input_shape=(10,)),
    Dense(1, activation='sigmoid')
])

PyTorch:

import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(10, 64)
        self.layer2 = nn.Linear(64, 1)
        self.relu = nn.ReLU()
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.relu(self.layer1(x))
        return self.sigmoid(self.layer2(x))

model = Model()

The code comparison shows that PyTorch requires more explicit definition of the model structure, while Keras offers a more concise, high-level API for simple models. However, PyTorch's approach provides greater flexibility for complex architectures and custom operations.

scikit-learn: machine learning in Python

Pros of scikit-learn

  • Comprehensive library for traditional machine learning algorithms
  • Extensive documentation and community support
  • Seamless integration with NumPy and SciPy

Cons of scikit-learn

  • Limited support for deep learning and neural networks
  • Less suitable for large-scale, distributed machine learning tasks
  • Steeper learning curve for beginners compared to Keras

Code Comparison

scikit-learn:

from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=4)
clf = RandomForestClassifier()
clf.fit(X, y)

Keras:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(4,)),
    keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(X, y, epochs=10)

Both libraries offer powerful machine learning capabilities, but they serve different purposes. scikit-learn excels in traditional machine learning algorithms and provides a wide range of tools for data preprocessing and model evaluation. Keras, on the other hand, focuses on deep learning and neural networks, offering a more intuitive API for building complex models. The choice between the two depends on the specific requirements of your project and your familiarity with machine learning concepts.

CNTK (17,500 stars)

Microsoft Cognitive Toolkit (CNTK), an open source deep-learning toolkit

Pros of CNTK

  • Offers better performance and scalability for large-scale deep learning models
  • Provides native support for distributed training across multiple GPUs and machines
  • Includes built-in support for recurrent neural networks (RNNs) and long short-term memory (LSTM) networks

Cons of CNTK

  • Steeper learning curve compared to Keras, especially for beginners
  • Less extensive documentation and community support
  • Fewer pre-trained models and examples available

Code Comparison

CNTK example:

import cntk as C

with C.layers.default_options(activation=C.relu):
    model = C.layers.Sequential([
        C.layers.Dense(128),
        C.layers.Dense(64),
        C.layers.Dense(10, activation=None)
    ])

Keras example:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10)
])

Both examples create a simple neural network with three dense layers. The CNTK version sets a default activation via default_options and overrides it on the output layer, while Keras specifies the activation per layer. Keras also benefits from its integration with TensorFlow, providing access to a wider range of tools and resources.

MXNet (20,763 stars)

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more

Pros of MXNet

  • More flexible and lower-level API, allowing for greater customization
  • Better support for distributed training and multi-GPU setups
  • Hybrid programming model that combines symbolic and imperative programming

Cons of MXNet

  • Steeper learning curve compared to Keras' user-friendly API
  • Smaller community and ecosystem, with fewer pre-built models and resources
  • Less frequent updates and maintenance

Code Comparison

MXNet example:

import mxnet as mx
from mxnet import gluon, autograd

net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(128, activation='relu'))
    net.add(gluon.nn.Dense(64, activation='relu'))
    net.add(gluon.nn.Dense(10))

Keras example:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10)
])

Both examples create a simple neural network with three dense layers. MXNet's approach is more verbose but offers more flexibility, while Keras provides a more concise and intuitive API for common tasks.

fastai (26,090 stars)

The fastai deep learning library

Pros of fastai

  • More comprehensive, offering a complete deep learning library with high-level APIs and low-level components
  • Includes advanced features like mixed precision training and learning rate finder
  • Provides a more opinionated approach, which can speed up development for common tasks

Cons of fastai

  • Steeper learning curve due to its more complex ecosystem
  • Less flexibility compared to Keras for custom model architectures
  • Smaller community and ecosystem compared to Keras

Code Comparison

fastai:

from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2,
    label_func=lambda f: f[0].isupper(), item_tfms=Resize(224))
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

Keras:

from tensorflow import keras
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)


README

Keras.io documentation generator

This repository hosts the code used to generate the keras.io website.

Generating a local copy of the website

pip install -r requirements.txt
# Update Keras version to 3
pip install keras==3.0.2
cd scripts
python autogen.py make
python autogen.py serve

If you have Docker (you don't need the GPU version), you can instead run:

docker build -t keras-io . && docker run --rm -p 8000:8000 keras-io

The first run will take a while because it pulls the base image and installs the dependencies, but subsequent runs will be much faster.

Another way of testing using Docker is via our Makefile:

make container-test

This command will build a Docker image with a documentation server and run it.

Call for examples

Are you interested in submitting new examples for publication on keras.io? We welcome your contributions! Please read the information below about adding new code examples.

We are currently interested in the following examples.

Fixing something in an existing code example

Fixing typos

If your fix is very simple, please send out a PR simultaneously updating the .py, the .md, and the .ipynb files for the example.

More extensive fixes

For larger fixes, please send a PR that only includes the .py file, so we only update the other two files once the code has been reviewed and approved.

Adding a new code example

Keras code examples are implemented as tutobooks.

A tutobook is a script available simultaneously as a notebook, as a Python file, and as a nicely-rendered webpage.

Its source of truth (for manual editing and version control) is its Python script form, but you can also create one by starting from a notebook and converting it with the nb2py command.

Text cells are stored in markdown-formatted comment blocks. The first line of such a block (starting with """) may optionally contain a special annotation, one of:

  • shell: execute this block while prefixing each line with !.
  • invisible: do not render this block.

The script form should start with a header with the following fields:

Title: (title)
Author: (could be `Authors`: as well, and may contain markdown links)
Date created: (date in yyyy/mm/dd format)
Last modified: (date in yyyy/mm/dd format)
Description: (one-line text description)
Accelerator: (could be GPU, TPU, or None)

To see examples of tutobooks, you can check out any .py file in examples/ or guides/.
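
For illustration, below is a minimal sketch of what a tutobook script might look like; the title, author, dates, and content are placeholders, and the real files under examples/ and guides/ are the authoritative reference:

"""
Title: Minimal tutobook sketch
Author: Jane Doe
Date created: 2024/01/01
Last modified: 2024/01/01
Description: Placeholder showing the tutobook layout.
Accelerator: None
"""

"""
## Introduction

This is a text cell: a markdown-formatted comment block that will be
rendered as prose on the website.
"""

"""shell
pip install -q keras
"""

# Regular code cells are plain Python.
import keras

print(keras.__version__)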

Creating a new example starting from an ipynb file

  1. Save the ipynb file to local disk.
  2. Convert the file to a tutobook by running the following (assuming you are in the scripts/ directory):
python tutobooks.py nb2py path_to_your_nb.ipynb ../examples/vision/script_name.py

This will create the file examples/vision/script_name.py.

  3. Open it, fill in the headers, and generally edit it so that it looks nice.

NOTE THAT THE CONVERSION SCRIPT MAY MAKE MISTAKES IN ITS ATTEMPTS TO SHORTEN LINES. MAKE SURE TO PROOFREAD THE GENERATED .py IN FULL. Alternatively, keep your lines reasonably sized (<90 characters) to start with, so that the script won't have to shorten them.

  4. Run python autogen.py add_example vision/script_name. This will generate an ipynb and markdown rendering of your example, creating files in examples/vision/ipynb, examples/vision/md, and examples/vision/img. Do not modify any of these files by hand; only the original Python script should ever be edited manually.
  5. Submit a PR adding examples/vision/script_name.py (only the .py, not the generated files). Get a review and approval.
  6. Once the PR is approved, add to the PR the files created by the add_example command. Then we will merge the PR.

Creating a new example starting from a Python script

  1. Format the script with black: black script_name.py
  2. Add the tutobook header described above
  3. Put the script in the relevant subfolder of examples/ (e.g. examples/vision/script_name.py)
  4. Run python autogen.py add_example vision/script_name. This will generate an ipynb and markdown rendering of your example, creating files in examples/vision/ipynb, examples/vision/md, and examples/vision/img. Do not modify any of these files by hand; only the original Python script should ever be edited manually.
  5. Submit a PR adding examples/vision/script_name.py (only the .py, not the generated files). Get a review and approval.
  6. Once the PR is approved, add to the PR the files created by the add_example command. Then we will merge the PR.

Previewing a new example

You can locally preview what the example looks like by running:

cd scripts
python autogen.py add_example vision/script_name

(Assuming the tutobook file is examples/vision/script_name.py.)

NOTE THAT THIS COMMAND WILL ERROR OUT IF ANY CELL TAKES TOO LONG TO EXECUTE. In that case, make your code lighter/faster. Remember that examples are meant to demonstrate workflows, not train state-of-the-art models. They should stay very lightweight.

Then serve the website:

python autogen.py make
python autogen.py serve

Then navigate to http://0.0.0.0:8000/examples in your browser.

Read-only autogenerated files

The contents of the following folders should not be modified by hand:

  • site/*
  • sources/*
  • templates/examples/*
  • templates/guides/*
  • examples/*/md/*, examples/*/ipynb/*, examples/*/img/*
  • guides/md/*, guides/ipynb/*, guides/img/*

Modifiable files

These are the only files that should be edited by hand:

  • templates/*.md, with the exception of templates/examples/* and templates/guides/*
  • examples/*/*.py
  • guides/*.py
  • theme/*
  • scripts/*.py