makisu
Fast and flexible Docker image building tool, works in unprivileged containerized environments like Mesos and Kubernetes.
Top Related Projects
concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit
Build Container Images In Kubernetes
Standalone, daemon-less, unprivileged Dockerfile and OCI compatible container image builder.
A tool that facilitates building OCI images.
A tool for exploring each layer in a docker image
Quick Overview
Makisu is a fast and efficient Docker image build tool designed for containerized environments. It was developed by Uber to address the challenges of building large Docker images in a distributed manner. Makisu offers features like layer caching, distributed builds, and compatibility with various container registries.
Pros
- Fast and efficient image builds, especially for large Docker images
- Supports distributed builds, allowing for parallel processing of layers
- Implements advanced layer caching mechanisms to reduce build times
- Compatible with various container registries, including Docker Hub and Amazon ECR
Cons
- Limited community support compared to Docker's native build tools
- May require additional setup and configuration for complex build scenarios
- Not as widely adopted as Docker's native build tools
- Documentation could be more comprehensive for advanced use cases
Getting Started
To use Makisu, follow these steps:
- Install Makisu:
go get github.com/uber/makisu/bin/makisu
- Create a Makisu YAML configuration file (e.g., makisu.yaml):
registry:
  "index.docker.io":
    security:
      tls:
        client:
          disabled: false
      basic:
        username: <your-username>
        password: <your-password>
- Build an image using Makisu:
makisu build -t username/image:tag -f Dockerfile .
- To push the built image to a registry, add the --push flag to the build command (Makisu has no separate push subcommand):
makisu build -t username/image:tag --push index.docker.io .
For more detailed instructions and advanced usage, refer to the project's GitHub repository and documentation.
Competitor Comparisons
concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit
Pros of BuildKit
- More active development and community support
- Broader feature set, including multi-stage builds and cache sharing
- Better integration with Docker ecosystem
Cons of BuildKit
- Steeper learning curve due to more complex architecture
- May be overkill for simpler build scenarios
Code Comparison
Makisu:
FROM alpine:3.6
RUN apk add --no-cache bash
COPY --from=builder /makisu-internal /makisu-internal
ENTRYPOINT ["/makisu-internal/makisu"]
BuildKit:
# syntax=docker/dockerfile:1.4
FROM alpine:3.6
RUN apk add --no-cache bash
COPY --from=builder /buildkit-internal /buildkit-internal
ENTRYPOINT ["/buildkit-internal/buildkitd"]
Key Differences
- Makisu is designed specifically for containerized environments, while BuildKit offers a more flexible approach
- BuildKit provides advanced caching mechanisms and parallel execution of build steps
- Makisu focuses on simplicity and ease of use, while BuildKit prioritizes performance and extensibility
Use Cases
- Makisu: Ideal for straightforward Docker image builds in CI/CD pipelines
- BuildKit: Better suited for complex build scenarios, multi-arch builds, and advanced caching requirements
Build Container Images In Kubernetes
Pros of kaniko
- More actively maintained with frequent updates and contributions
- Better integration with Google Cloud Platform and Kubernetes
- Supports building images without root access, enhancing security
Cons of kaniko
- Can be slower for large builds compared to makisu
- May have compatibility issues with certain Dockerfile instructions
- Requires more setup and configuration for non-GCP environments
Code Comparison
makisu:
FROM alpine:3.9
RUN apk add --no-cache libc6-compat
COPY --from=builder /makisu-internal/makisu /usr/local/bin/makisu
ENTRYPOINT ["/usr/local/bin/makisu"]
kaniko:
FROM gcr.io/kaniko-project/executor:latest
COPY --from=builder /kaniko/executor /kaniko/executor
ENTRYPOINT ["/kaniko/executor"]
Both tools use multi-stage builds and minimal base images, but kaniko's Dockerfile is slightly simpler. makisu includes additional package installation, while kaniko focuses on copying the executor binary.
Overall, kaniko offers better cloud integration and security features, while makisu may provide faster builds in certain scenarios. The choice between them depends on specific project requirements and infrastructure preferences.
Standalone, daemon-less, unprivileged Dockerfile and OCI compatible container image builder.
Pros of img
- Supports unprivileged image builds without requiring root access
- Offers a more user-friendly CLI interface
- Provides additional features like image pushing and image inspection
Cons of img
- Less optimized for large-scale, distributed builds
- May have slower build times for complex images
- Limited integration with CI/CD systems compared to Makisu
Code Comparison
img:
img build -t myimage:latest .
img push myimage:latest
Makisu:
makisu build -t myimage:latest --push registry.example.com .
Summary
img is a more versatile tool for individual developers, offering a user-friendly experience and supporting unprivileged builds. It provides additional features like image pushing and inspection, making it suitable for local development and small-scale projects.
Makisu, on the other hand, is designed for large-scale, distributed builds in production environments. It offers better performance for complex images and integrates well with CI/CD systems, making it more suitable for enterprise-level deployments.
The choice between img and Makisu depends on the specific use case, with img being more appropriate for individual developers and small teams, while Makisu is better suited for large organizations with complex build requirements and distributed environments.
A tool that facilitates building OCI images.
Pros of Buildah
- More flexible and versatile, allowing for greater customization of container images
- Supports rootless builds, enhancing security and reducing privileges required
- Can build images without a full Docker daemon, making it more lightweight
Cons of Buildah
- Steeper learning curve compared to Makisu's simpler approach
- May require more manual configuration for complex build scenarios
- Less optimized for large-scale, distributed builds in cloud environments
Code Comparison
Makisu example (Makisu is driven by command-line flags on a standard Dockerfile build):
makisu build \
  --commit=explicit \
  --modifyfs=true \
  -t myimage:latest \
  --push registry.example.com \
  .
Buildah example:
buildah from alpine
buildah copy alpine-container /path/to/file /destination/in/container
buildah config --entrypoint '["/bin/sh"]' alpine-container
buildah commit alpine-container myimage:latest
Both tools aim to build container images, but Buildah offers a script-like approach with individual commands, while Makisu runs a standard Dockerfile build configured through command-line flags. Buildah provides finer-grained control over each step of image creation, whereas Makisu focuses on simplicity and integration with existing CI/CD pipelines.
A tool for exploring each layer in a docker image
Pros of dive
- Focuses on analyzing and exploring existing Docker images
- Provides a user-friendly CLI interface for image inspection
- Offers layer-by-layer analysis and file tree visualization
Cons of dive
- Limited to image analysis, doesn't build or modify images
- May require more manual intervention for complex image optimizations
- Less suitable for large-scale, automated image building pipelines
Code comparison
dive:
func analyzeImageLayers(imageID string) ([]layer.Layer, error) {
    layers, err := image.GetImageLayers(imageID)
    if err != nil {
        return nil, err
    }
    return layers, nil
}
makisu:
func (builder *Builder) Build(
    ctx context.Context,
    parsedStages []*stage.Stage,
) (*image.DistributionManifest, error) {
    // Build image stages and create manifest
}
The code snippets highlight the different focus areas of the two projects. dive emphasizes image analysis, while makisu is geared towards image building and distribution.
Both projects serve different purposes in the Docker ecosystem. dive is ideal for developers and operators who need to inspect and optimize existing images, while makisu is better suited for large-scale, automated image building processes, especially in distributed environments.
README
This project is deprecated and will be archived by May 4th, 2021
The makisu project is no longer actively maintained and will soon be archived. Please read the details in this issue.
Makisu is a fast and flexible Docker image build tool designed for unprivileged containerized environments such as Mesos or Kubernetes.
Some highlights of Makisu:
- Requires no elevated privileges or containerd/Docker daemon, making the build process portable.
- Uses a distributed layer cache to improve performance across a build cluster.
- Provides control over generated layers with a new optional keyword #!COMMIT, reducing the number of layers in images.
- Is Docker compatible. Note that the Dockerfile parser in Makisu is opinionated in some scenarios. More details can be found here.
Makisu has been in use at Uber since early 2018, building thousands of images every day across 4 different languages. The motivation and mechanism behind it are explained in https://eng.uber.com/makisu/.
- Building Makisu
- Running Makisu
- Using Cache
- Configuring Docker Registry
- Comparison With Similar Tools
- Contributing
- Contact
Building Makisu
Building Makisu image
To build a Docker image that can perform builds inside a container:
make images
Building Makisu binary and build simple images
To get the makisu binary locally:
go get github.com/uber/makisu/bin/makisu
For a Dockerfile that doesn't have RUN, makisu can build it without a Docker daemon, containerd, or runc:
makisu build -t ${TAG} --dest ${TAR_PATH} ${CONTEXT}
Running Makisu
For a full list of flags, run makisu build --help
or refer to the README here.
Makisu anywhere
To build Dockerfiles that contain RUN, Makisu needs to run in a container.
To try it locally, the following snippet can be placed inside your ~/.bashrc or ~/.zshrc:
function makisu_build() {
    makisu_version=${MAKISU_VERSION:-latest}
    cd ${@: -1}
    docker run -i --rm --net host \
        -v /var/run/docker.sock:/docker.sock \
        -e DOCKER_HOST=unix:///docker.sock \
        -v $(pwd):/makisu-context \
        -v /tmp/makisu-storage:/makisu-storage \
        gcr.io/uber-container-tools/makisu:$makisu_version build \
        --commit=explicit \
        --modifyfs=true \
        --load \
        ${@:1:${#@}-1} /makisu-context
    cd -
}
Now you can use makisu_build like you would use docker build:
$ makisu_build -t myimage .
Note:
- The Docker socket mount is optional. It is used together with --load to load images back into the Docker daemon for convenience during local development, as is the mount at /makisu-storage, which is used for local cache. If the image will be pushed to a registry directly, remove --load for better performance.
- The --modifyfs=true option lets Makisu assume ownership of the filesystem inside the container. Files in the container that don't belong to the base image will be overwritten at the beginning of the build.
- The --commit=explicit option makes Makisu commit a layer only when it sees #!COMMIT and at the end of the Dockerfile. See "Explicit Commit and Cache" for more details.
Makisu on Kubernetes
Makisu makes it easy to build images from a GitHub repository inside Kubernetes. A single pod (or job) is created with an init container, which will fetch the build context through git or other means, and place that context in a designated volume. Once it completes, the Makisu container will be created and executes the build, using that volume as its build context.
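The pattern above can be sketched as a job spec. The repository URL, image tag, and volume names below are illustrative placeholders, not the project's actual template; the registry-config secret is the one created in the next section:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: makisu-build
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
      # Fetches the build context and places it in the shared volume.
      - name: fetch-context
        image: alpine/git
        args: ["clone", "https://github.com/<org>/<repo>.git", "/makisu-context"]
        volumeMounts:
        - name: context
          mountPath: /makisu-context
      containers:
      # Runs once the init container has populated the context volume.
      - name: makisu
        image: gcr.io/uber-container-tools/makisu:latest
        args:
        - build
        - --push=<registry>
        - --registry-config=/registry-config/registry.yaml
        - -t=<image-tag>
        - /makisu-context
        volumeMounts:
        - name: context
          mountPath: /makisu-context
        - name: registry-config
          mountPath: /registry-config
      volumes:
      - name: context
        emptyDir: {}
      - name: registry-config
        secret:
          secretName: docker-registry-config
```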
Creating registry configuration
Makisu needs registry configuration mounted in to push to a secure registry. The config format is described in the documentation. After creating the configuration file on the local filesystem, run the following command to create the Kubernetes secret:
$ kubectl create secret generic docker-registry-config --from-file=./registry.yaml
secret/docker-registry-config created
Creating Kubernetes job spec
To setup a Kubernetes job to build a GitHub repository and push to a secure registry, you can refer to our Kubernetes job spec template (and out of the box example) .
With such a job spec, a simple kubectl create -f job.yaml
will start the build.
The job status will reflect whether the build succeeded or failed.
Using cache
Configuring distributed cache
Makisu supports a distributed cache, which can significantly reduce build time, by up to 90% for some of Uber's code repos. Makisu caches Docker image layers both locally and in the Docker registry (if the --push parameter is provided), and uses a separate key-value store to map lines of a Dockerfile to names of the layers.
For example, Redis can be set up as a distributed cache key-value store with this Kubernetes job spec. Then connect Makisu to the Redis cache by passing the --redis-cache-addr=redis:6379 argument. If the Redis server is password-protected, use the --redis-cache-password=password argument.
The cache has a 14-day TTL by default, which can be configured with the --local-cache-ttl=14d argument.
For more options on cache, please see Cache.
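The line-to-layer mapping can be pictured with a small sketch. The key scheme below is hypothetical, not Makisu's actual implementation; it only illustrates why keying the cache on Dockerfile lines works: each key depends on the line and every line above it, so editing an early line invalidates all later layers.

```python
import hashlib

def cache_keys(dockerfile_lines):
    """Map each Dockerfile line to a cache key that also depends on all
    preceding lines (hypothetical scheme, for illustration only)."""
    keys = {}
    chain = hashlib.sha256()
    for line in dockerfile_lines:
        chain.update(line.encode())
        # In a real setup, this key would name a cached layer in the KV store.
        keys[line] = chain.hexdigest()[:12]
    return keys

lines = ["FROM node:8.1.3", "ADD package.json package.json", "RUN npm install"]
keys = cache_keys(lines)
# Changing an early line changes the keys of every later line:
changed = cache_keys(["FROM node:10.0.0"] + lines[1:])
assert keys["RUN npm install"] != changed["RUN npm install"]
```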
Explicit commit and cache
By default, Makisu will cache each directive in a Dockerfile. To avoid committing and caching everything, the layer cache can be further optimized via explicit caching with the --commit=explicit flag.
Dockerfile directives may then be manually cached using the #!COMMIT annotation:
FROM node:8.1.3
ADD package.json package.json
ADD pre-build.sh pre-build.sh
# A bunch of pre-install steps here.
...
...
...
# A step to be cached. A single layer will be committed and cached here on top of base image.
RUN npm install #!COMMIT
...
...
...
# The last step of the last stage always commits by default, generating and caching another layer.
ENTRYPOINT ["/bin/bash"]
In this example, only 2 additional layers on top of the base image will be generated and cached.
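The counting rule behind that statement can be sketched as follows. This is illustrative code, not Makisu's implementation: under --commit=explicit, one layer is committed per #!COMMIT directive, plus one implicit commit for the final directive of the last stage.

```python
def count_committed_layers(dockerfile):
    """Count layers generated under --commit=explicit: one per #!COMMIT
    directive, plus the implicit commit at the end of the last stage."""
    directives = [l.strip() for l in dockerfile.splitlines()
                  if l.strip() and not l.strip().startswith("#")]
    explicit = sum(1 for d in directives if d.endswith("#!COMMIT"))
    return explicit + 1  # the last directive always commits

dockerfile = """
FROM node:8.1.3
ADD package.json package.json
RUN npm install #!COMMIT
ENTRYPOINT ["/bin/bash"]
"""
print(count_committed_layers(dockerfile))  # -> 2, matching the example above
```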
Configuring Docker Registry
For convenience when working with any public Docker Hub repositories, including library/.*, a default config is provided:
index.docker.io:
  .*:
    security:
      tls:
        client:
          disabled: false
      # Docker Hub requires basic auth with empty username and password for all public repositories.
      basic:
        username: ""
        password: ""
Registry configs can be passed in through the --registry-config flag, either as a file path or as a raw JSON blob (converted to JSON using yq):
--registry-config='{"gcr.io": {"uber-container-tools/*": {"push_chunk": -1, "security": {"basic": {"username": "_json_key", "password": "<escaped key here>"}}}}}'
For more details on configuring Makisu to work with your registry client, see the documentation.
Comparison With Similar Tools
Bazel
We were inspired by the Bazel project in early 2017. It is one of the first few tools that could build Docker compatible images without using Docker or any form of containerizer.
It works very well with a subset of Docker build scenarios given a Bazel build file. However, it does not support RUN
, making it hard to replace most docker build workflows.
Kaniko
Kaniko provides good compatibility with Docker and executes build commands in userspace without the need for a Docker daemon, although it must still run inside a container. Kaniko offers smooth integration with Kubernetes, making it a competent tool for Kubernetes users. On the other hand, Makisu has some performance tweaks for large images with multi-phase builds, avoiding unnecessary disk scans, and offers more control over cache generation and layer size through #!COMMIT, making it well suited for complex workflows.
BuildKit / img
BuildKit and img depend on runc/containerd and support parallel stage execution, whereas Makisu and most other tools execute Dockerfiles in order. However, BuildKit and img still need seccomp and AppArmor to be disabled to launch nested containers, which is not ideal and may not be doable in some production environments.
Contributing
Please check out our guide.
Contact
To contact us, please join our Slack channel.