
rancher/rke

Rancher Kubernetes Engine (RKE) is an extremely simple, lightning-fast Kubernetes distribution that runs entirely within containers.


Top Related Projects

  • kubernetes (109,710 stars): Production-Grade Container Scheduling and Management
  • kubespray (16,045 stars): Deploy a Production Ready Kubernetes Cluster
  • kind (13,406 stars): Kubernetes IN Docker - local clusters for testing Kubernetes
  • k3s (27,529 stars): Lightweight Kubernetes
  • openshift-ansible: Install and config an OpenShift 3.x cluster

Quick Overview

RKE (Rancher Kubernetes Engine) is an open-source, CNCF-certified Kubernetes distribution that runs entirely within Docker containers. It simplifies the deployment and management of Kubernetes clusters, making it easier for organizations to set up and maintain production-ready Kubernetes environments.

Pros

  • Easy installation and setup process
  • Highly customizable and flexible configuration options
  • Supports a wide range of operating systems and cloud providers
  • Integrates well with Rancher for enhanced cluster management

Cons

  • Requires Docker to be installed on all nodes
  • May have a steeper learning curve for those new to Kubernetes
  • Limited support for Windows nodes compared to Linux nodes
  • Can be resource-intensive for smaller deployments

Getting Started

To get started with RKE, follow these steps:

  1. Install Docker on all nodes
  2. Download the RKE binary:
curl -LO https://github.com/rancher/rke/releases/download/v1.5.8/rke_linux-amd64
sudo install -m 755 rke_linux-amd64 /usr/local/bin/rke
  3. Create a cluster configuration file (cluster.yml):
nodes:
  - address: 1.2.3.4
    user: ubuntu
    role: [controlplane,worker,etcd]

services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h
  4. Run RKE to deploy the cluster:
rke up
  5. Use the generated kube_config_cluster.yml file to interact with your cluster:
export KUBECONFIG=kube_config_cluster.yml
kubectl get nodes
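A multi-node cluster follows the same pattern in cluster.yml; the sketch below is illustrative only (the addresses, SSH user, and canal network plugin are placeholders, not requirements):

# Illustrative multi-node layout; replace addresses and user with your own machines
nodes:
  - address: 10.0.0.1
    user: ubuntu
    role: [controlplane,etcd]
  - address: 10.0.0.2
    user: ubuntu
    role: [worker]
  - address: 10.0.0.3
    user: ubuntu
    role: [worker]

network:
  plugin: canal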

For more detailed instructions and advanced configurations, refer to the official RKE documentation.

Competitor Comparisons

kubernetes

Production-Grade Container Scheduling and Management

Pros of kubernetes

  • More comprehensive and feature-rich, offering a complete container orchestration platform
  • Larger community and ecosystem, resulting in extensive documentation and third-party tools
  • Direct control over core Kubernetes components and configurations

Cons of kubernetes

  • Steeper learning curve and more complex setup process
  • Requires more resources and expertise to manage and maintain
  • Less opinionated, which can lead to decision fatigue for new users

Code comparison

kubernetes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx

rke:

nodes:
  - address: 1.2.3.4
    user: ubuntu
    role:
      - controlplane
      - etcd
      - worker

Summary

kubernetes is a more comprehensive and flexible solution, offering direct control over Kubernetes components but requiring more expertise. rke simplifies the deployment process, making it easier for beginners but with fewer customization options. The choice between the two depends on the user's specific needs, expertise level, and desired level of control over the Kubernetes environment.
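In practice the two are complementary rather than competing: rke provisions the cluster, and standard kubernetes tooling then manages workloads on it. A minimal sketch (nginx-deployment.yaml is a hypothetical file containing the Deployment manifest above):

# Provision the cluster described in cluster.yml, then point kubectl
# at the generated kubeconfig and apply an ordinary Kubernetes manifest.
rke up
export KUBECONFIG=kube_config_cluster.yml
kubectl apply -f nginx-deployment.yaml
kubectl get deployments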

kubespray

Deploy a Production Ready Kubernetes Cluster

Pros of kubespray

  • Supports multiple operating systems and cloud providers
  • Highly customizable with extensive configuration options
  • Actively maintained by the Kubernetes community

Cons of kubespray

  • Steeper learning curve due to complexity
  • Slower deployment process compared to RKE
  • Requires more manual configuration and maintenance

Code Comparison

kubespray:

all:
  vars:
    ansible_user: ubuntu
    ansible_become: true
    kube_version: v1.21.0
    kube_network_plugin: calico

RKE:

nodes:
- address: 1.2.3.4
  user: ubuntu
  role: [controlplane,worker,etcd]
kubernetes_version: v1.21.0
network:
  plugin: calico

Key Differences

  1. Deployment method: kubespray uses Ansible, while RKE uses its own CLI tool
  2. Configuration: kubespray offers more granular control, RKE is simpler
  3. Scope: kubespray is more comprehensive, RKE focuses on core Kubernetes components
  4. Community: kubespray is part of kubernetes-sigs, RKE is maintained by Rancher
  5. Use case: kubespray for complex, multi-cloud deployments; RKE for simpler, single-cluster setups

Both tools are effective for deploying Kubernetes clusters, with kubespray offering more flexibility and RKE providing a more streamlined experience.
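The difference in deployment method shows up directly in the commands; the lines below are a sketch, and the kubespray inventory path is a placeholder:

# kubespray: drive the installation with Ansible against an inventory
# (here cluster.yml is kubespray's playbook)
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml

# RKE: drive the installation with its own CLI
# (here cluster.yml is the RKE cluster configuration file)
rke up --config cluster.yml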

kind

Kubernetes IN Docker - local clusters for testing Kubernetes

Pros of kind

  • Lightweight and easy to set up, ideal for local development and testing
  • Supports multi-node clusters using Docker containers as "nodes"
  • Integrates well with CI/CD pipelines for Kubernetes testing

Cons of kind

  • Limited production-like features compared to RKE
  • Less suitable for managing large-scale, production-grade clusters
  • Fewer customization options for advanced Kubernetes configurations

Code Comparison

kind configuration:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

RKE configuration:

nodes:
- address: 1.2.3.4
  user: ubuntu
  role: [controlplane,worker,etcd]
- address: 5.6.7.8
  user: ubuntu
  role: [worker]

Both kind and RKE aim to simplify Kubernetes cluster creation, but they serve different purposes. kind is primarily designed for local development and testing, while RKE is more suited for production-grade cluster management. kind uses Docker containers to simulate nodes, making it lightweight and easy to set up, while RKE provides more advanced features and customization options for managing real infrastructure. The code examples show the difference in configuration complexity, with kind having a simpler setup process compared to RKE's more detailed node specifications.
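The corresponding commands reflect the same split; this is a sketch, and kind-config.yaml is a placeholder filename:

# kind: create a throwaway local cluster from the config above
kind create cluster --config kind-config.yaml

# RKE: provision the remote machines listed in cluster.yml over SSH
rke up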

k3s

Lightweight Kubernetes

Pros of k3s

  • Lightweight and resource-efficient, ideal for edge computing and IoT devices
  • Single binary installation, simplifying deployment and maintenance
  • Includes built-in storage and load balancing solutions

Cons of k3s

  • Limited customization options compared to full Kubernetes distributions
  • May not be suitable for large-scale enterprise deployments
  • Some Kubernetes features are stripped out to reduce footprint

Code Comparison

k3s installation:

curl -sfL https://get.k3s.io | sh -

RKE cluster configuration:

nodes:
  - address: 1.2.3.4
    user: ubuntu
    role: [controlplane,worker,etcd]

Key Differences

  • k3s is designed for simplicity and minimal resource usage, while RKE offers more flexibility and control
  • RKE follows a standard Kubernetes architecture, whereas k3s uses a simplified structure
  • k3s is better suited for edge computing and small deployments, while RKE is more appropriate for traditional data center environments

Use Cases

  • k3s: Edge computing, IoT devices, development environments, small-scale production
  • RKE: Enterprise Kubernetes deployments, multi-node clusters, production environments requiring full Kubernetes feature set
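The installation models differ accordingly: k3s is installed per node with its install script, while RKE describes every node declaratively in cluster.yml and pushes changes with rke up. A rough sketch (the server address and token are placeholders):

# k3s: install a server, then join an agent by pointing it at the server
curl -sfL https://get.k3s.io | sh -
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -

# RKE: add the new node to cluster.yml and re-run
rke up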

OpenShift-Ansible

Install and config an OpenShift 3.x cluster

Pros of OpenShift-Ansible

  • More comprehensive and feature-rich, offering a complete enterprise-grade Kubernetes platform
  • Supports advanced networking and security features out-of-the-box
  • Provides integrated CI/CD pipelines and developer tools

Cons of OpenShift-Ansible

  • Steeper learning curve and more complex setup process
  • Requires more resources and has higher system requirements
  • Less flexibility in terms of customization compared to RKE

Code Comparison

OpenShift-Ansible playbook example:

- name: Install OpenShift
  hosts: all
  roles:
    - openshift_facts
    - openshift_repos
    - openshift_docker
    - openshift_node

RKE cluster configuration example:

nodes:
  - address: 1.2.3.4
    user: ubuntu
    role: [controlplane,worker,etcd]
services:
  etcd:
    snapshot: true
    creation: 6h
    retention: 24h

Both repositories provide tools for deploying Kubernetes clusters, but they differ in scope and complexity. OpenShift-Ansible offers a more comprehensive solution with additional features and enterprise-grade capabilities, while RKE focuses on simplicity and ease of use for deploying vanilla Kubernetes clusters. The choice between the two depends on specific project requirements, available resources, and desired level of customization.
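The operational difference is visible in how a cluster is brought up: an OpenShift 3.x install is driven by Ansible playbooks against an inventory, while RKE is a single CLI invocation. The sketch below uses the openshift-ansible 3.x playbook layout; the inventory file name is a placeholder and the paths should be checked against that repository:

# openshift-ansible (3.x): run the prerequisite and deploy playbooks against an inventory
ansible-playbook -i hosts playbooks/prerequisites.yml
ansible-playbook -i hosts playbooks/deploy_cluster.yml

# RKE: one command against cluster.yml
rke up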


README

rke

This file is auto-generated from README-template.md, please make any changes there.

Rancher Kubernetes Engine, an extremely simple, lightning fast Kubernetes installer that works everywhere.

Latest Release

  • v1.5
    • v1.5.8 - Read the full release notes.
  • v1.4
    • v1.4.17 - Read the full release notes.

Download

Please check the releases page.

Requirements

Please review the Requirements for each node in your Kubernetes cluster.
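A quick pre-flight check on each node can save a failed run; this is a sketch, and the exact supported Docker versions depend on the RKE release you use:

# Docker must be installed, running, and usable by the SSH user RKE connects as
docker version --format '{{.Server.Version}}'
systemctl is-active docker
id -nG | grep -qw docker && echo "SSH user can access Docker"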

Getting Started

Please refer to our RKE docs for information on how to get started! For cluster config examples, refer to the RKE cluster.yml examples.

Installing Rancher HA using rke

Please use Setting up a High-availability RKE Kubernetes Cluster to install Rancher in a high-availability configuration.
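As a rough sketch of what that involves (addresses and SSH user are placeholders), a high-availability cluster typically uses three nodes that each carry the controlplane, etcd, and worker roles, so the control plane and etcd quorum survive the loss of any single node:

nodes:
  # Placeholder addresses; replace with your own machines
  - address: 10.0.1.1
    user: ubuntu
    role: [controlplane,worker,etcd]
  - address: 10.0.1.2
    user: ubuntu
    role: [controlplane,worker,etcd]
  - address: 10.0.1.3
    user: ubuntu
    role: [controlplane,worker,etcd]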

Building

RKE can be built using the make command, and will use the scripts in the scripts directory as subcommands. The default subcommand is ci and will use scripts/ci. Cross compiling can be enabled by setting the environment variable CROSS=1. The compiled binaries can be found in the build/bin directory. Dependencies are managed by Go modules and can be found in go.mod.
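Concretely, the build flow described above looks like this (a sketch that only uses the commands and paths named in that paragraph):

# Default target, equivalent to the ci subcommand backed by scripts/ci
make

# Enable cross compilation
CROSS=1 make

# Compiled binaries end up here
ls build/bin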

Read codegen/codegen.go to check the default location for fetching data.json. You can override the default location as seen in the example below:

# Fetch data.json from default location
go generate

# Fetch data.json from URL using RANCHER_METADATA_URL
RANCHER_METADATA_URL=${URL} go generate

# Use data.json from local file
RANCHER_METADATA_URL=./local/data.json go generate

# Compile RKE
make

To override RANCHER_METADATA_URL at runtime, populate the environment variable when running rke CLI. For example:

RANCHER_METADATA_URL=${URL} rke [commands] [options]

RANCHER_METADATA_URL=./local/data.json rke [commands] [options]

License

Copyright © 2017 - 2023 SUSE LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.