
tonarino/innernet

A private network system that uses WireGuard under the hood.


Top Related Projects

  • Netmaker: Netmaker makes networks with WireGuard. Netmaker automates fast, secure, and distributed virtual networks.
  • Tailscale: The easiest, most secure way to use WireGuard and 2FA.
  • Nebula: A scalable overlay networking tool with a focus on performance, simplicity and security
  • wireguard-go: Mirror only. Official repository is at https://git.zx2c4.com/wireguard-go
  • ZeroTierOne: A Smart Ethernet Switch for Earth
  • Headscale: An open source, self-hosted implementation of the Tailscale control server

Quick Overview

innernet is a private network system that uses WireGuard under the hood. It allows you to create and manage a private network overlay on top of an existing network, providing secure communication between devices. innernet is designed to be user-friendly and offers features like automatic IP address management and a simple CLI for administration.

Pros

  • Easy to set up and manage private networks
  • Built on top of WireGuard, providing strong security and performance
  • Supports automatic IP address management
  • Offers a straightforward CLI for administration

Cons

  • Requires a central server for coordination
  • Limited to the features provided by WireGuard
  • May have a learning curve for users unfamiliar with VPN concepts
  • Dependency on Rust ecosystem for development and maintenance

Getting Started

To get started with innernet, follow these steps:

  1. Install innernet-server on the machine that will coordinate the network and the innernet client on each peer (see the Installation section of the README below for cargo, Homebrew, and distribution packages).

  2. Create the coordination server:

    sudo innernet-server new
    
  3. Add a CIDR for your peers, then add an admin peer; this produces an invitation file:

    sudo innernet-server add-cidr <interface>
    sudo innernet-server add-peer <interface>
    
  4. Transfer the invitation file to the new peer and redeem it there:

    sudo innernet install /path/to/invitation.toml
    
  5. Run the server:

    sudo innernet-server serve <interface>
    

For more detailed instructions and advanced configuration options, refer to the official documentation on the GitHub repository.
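
Once the client has redeemed its invitation, a quick sanity check is to list the peers and CIDRs it can see (the same innernet list command described in the README below):

sudo innernet list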

Competitor Comparisons

Netmaker makes networks with WireGuard. Netmaker automates fast, secure, and distributed virtual networks.

Pros of Netmaker

  • Supports multiple platforms including Windows, macOS, and Linux
  • Offers a user-friendly web UI for network management
  • Provides automatic key rotation and certificate management

Cons of Netmaker

  • Requires more complex setup and configuration
  • May have higher resource usage due to additional features
  • Less focus on simplicity and minimalism

Code Comparison

Innernet (Rust):

let server = Server::new(config)?;
server.run().await?;

Netmaker (Go):

server := netmaker.NewServer(config)
err := server.Start()
if err != nil {
    log.Fatal(err)
}

Both projects use modern, performant languages (Rust and Go) for their server implementations. Innernet's code appears more concise, while Netmaker's code follows Go's error handling conventions.

Innernet focuses on simplicity and security, using WireGuard as its core technology. It's designed for internal network management with a minimalist approach. Netmaker, on the other hand, offers a more feature-rich solution with cross-platform support and a web-based management interface. While Netmaker provides more flexibility, it may require more resources and setup time compared to Innernet's streamlined approach.


The easiest, most secure way to use WireGuard and 2FA.

Pros of Tailscale

  • More mature and widely adopted, with a larger community and ecosystem
  • Offers a managed service option, simplifying setup and maintenance
  • Supports a broader range of platforms and devices

Cons of Tailscale

  • Closed-source core components, limiting customization and self-hosting options
  • Requires reliance on Tailscale's infrastructure for coordination

Code Comparison

Tailscale (Go):

func (c *Conn) Close() error {
    c.mu.Lock()
    defer c.mu.Unlock()
    if c.closed {
        return nil
    }
    c.closed = true
    return c.pconn.Close()
}

Innernet (Rust):

pub fn close(&mut self) -> Result<()> {
    if self.closed {
        return Ok(());
    }
    self.closed = true;
    self.inner.close()
}

Both projects implement VPN-like functionality, but Innernet focuses on self-hosted, fully open-source solutions, while Tailscale provides a more user-friendly, managed approach. Innernet may appeal to those prioritizing complete control and privacy, whereas Tailscale offers easier setup and broader device support. The code snippets demonstrate similar connection closing logic, with Tailscale using Go and Innernet using Rust.


A scalable overlay networking tool with a focus on performance, simplicity and security

Pros of Nebula

  • More mature project with a larger community and wider adoption
  • Supports a broader range of platforms, including mobile devices
  • Offers more advanced networking features, such as UDP hole punching

Cons of Nebula

  • More complex setup and configuration process
  • Lacks built-in user management and access control features
  • May require more resources to run, especially on low-powered devices

Code Comparison

Nebula configuration example:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:
  "10.0.0.1": ["public1.example.com:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60

Innernet configuration example:

[server]
name = "innernet-server"
listen_port = 51820

[network]
name = "My Network"
cidr = "10.0.0.0/16"

[peers]
auto_add = true

Both projects aim to create secure, decentralized networks, but Nebula offers more flexibility and features at the cost of complexity, while Innernet focuses on simplicity and ease of use with built-in user management.

Mirror only. Official repository is at https://git.zx2c4.com/wireguard-go

Pros of wireguard-go

  • Pure Go implementation, offering better portability across platforms
  • Directly maintained by the WireGuard project, ensuring up-to-date features and security patches
  • Simpler codebase, focusing solely on the WireGuard protocol implementation

Cons of wireguard-go

  • Lacks built-in network management features
  • Requires additional configuration and tooling for complex network setups
  • Does not provide a user-friendly interface for managing connections and peers

Code Comparison

wireguard-go:

device := NewDevice(tun, conn, logger)
device.IpcSet(uapi.Config{
    PrivateKey: privateKey,
    Peers: []uapi.Peer{
        {
            PublicKey:  peerPublicKey,
            AllowedIPs: []net.IPNet{allowedIP},
        },
    },
})

innernet:

let network = Network::new(config_path)?;
network.start()?;
network.add_peer(PeerConfig {
    name: "new-peer",
    ip: "10.0.0.2",
    public_key: "abcdef1234567890",
})?;

The code snippets demonstrate that wireguard-go focuses on low-level device configuration, while innernet provides higher-level network management functions.

A Smart Ethernet Switch for Earth

Pros of ZeroTierOne

  • More mature and widely adopted, with a larger user base and community support
  • Offers a centralized management interface for easier network administration
  • Supports a broader range of platforms and devices

Cons of ZeroTierOne

  • Closed-source core components, which may raise privacy and security concerns
  • Relies on centralized infrastructure for network coordination
  • Less focus on end-to-end encryption compared to innernet

Code Comparison

ZeroTierOne (C++):

void Node::processVirtualNetworkFrame(const SharedPtr<Network> &network,const MAC &from,const MAC &to,unsigned int etherType,const void *data,unsigned int len)
{
    if (!network->hasConfig())
        return;
    // ... (additional code)
}

innernet (Rust):

pub fn process_packet(&mut self, packet: &[u8]) -> Result<()> {
    let ethertype = u16::from_be_bytes([packet[12], packet[13]]);
    match ethertype {
        ETHERTYPE_IPV4 => self.process_ipv4_packet(&packet[14..])?,
        // ... (additional code)
    }
    Ok(())
}

Both projects implement packet processing functions, but ZeroTierOne uses C++ with a focus on object-oriented design, while innernet employs Rust with a more functional approach and stronger type safety.


An open source, self-hosted implementation of the Tailscale control server

Pros of Headscale

  • Designed as a drop-in replacement for Tailscale's coordination server, offering compatibility with existing Tailscale clients
  • Supports multiple users and namespaces, allowing for more granular access control
  • Actively maintained with frequent updates and a growing community

Cons of Headscale

  • Lacks some advanced features present in Innernet, such as automatic IP address management
  • May require more setup and configuration compared to Innernet's streamlined approach
  • Relies on standard Tailscale clients rather than shipping its own client-side CLI for joining and managing the network, unlike Innernet's innernet command

Code Comparison

Headscale (Go):

func (h *Headscale) RegisterMachine(ctx context.Context, key *machine.MachineKey) (*machine.Machine, error) {
    // Machine registration logic
}

Innernet (Rust):

pub fn register_peer(&mut self, peer: Peer) -> Result<()> {
    // Peer registration logic
}

Both projects implement similar functionality for registering devices/peers, but use different programming languages and slightly different approaches to achieve this goal.


README

innernet

Actively maintained. MIT licensed.

A private network system that uses WireGuard under the hood. See the announcement blog post for a longer-winded explanation.

innernet is similar in its goals to Slack's nebula or Tailscale, but takes a bit of a different approach. It aims to take advantage of existing networking concepts like CIDRs and the security properties of WireGuard to turn your computer's basic IP networking into more powerful ACL primitives.

innernet is not an official WireGuard project, and WireGuard is a registered trademark of Jason A. Donenfeld.

This has not received an independent security audit, and should be considered experimental software at this early point in its lifetime.

Usage

Server Creation

Every innernet network needs a coordination server to manage peers and provide endpoint information so peers can directly connect to each other. Create a new one with

sudo innernet-server new

The init wizard will ask you questions about your network and give you some reasonable defaults. It's good to familiarize yourself with network CIDRs as a lot of innernet's access control is based upon them. As an example, let's say the root CIDR for this network is 10.60.0.0/16. Server initialization creates a special "infra" CIDR which contains the innernet server itself and is reachable from all CIDRs on the network.

Next we'll also create a humans CIDR where we can start adding some peers.

sudo innernet-server add-cidr <interface>

For the parent CIDR, you can simply choose your network's root CIDR. The name will be humans, and the CIDR will be 10.60.64.0/24 (not a great example unless you only want to support 256 humans, but it works for now...).
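
To make the addressing concrete, here is one possible layout under the example root CIDR; apart from the two ranges mentioned above, the details (including the infra range chosen during server init) are illustrative:

# Example address plan (illustrative):
#   10.60.0.0/16    root CIDR for the whole network
#   <infra CIDR>    created automatically during server init; reachable from all CIDRs
#   10.60.64.0/24   "humans" CIDR added above (256 addresses)
# A generic tool like ipcalc, if installed, can sanity-check how a child range nests:
ipcalc 10.60.64.0/24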

By default, peers which exist in this new CIDR will only be able to contact peers in the same CIDR, and the special "infra" CIDR which was created when the server was initialized.

A typical workflow for creating a new network is to create an admin peer from the innernet-server CLI, and then continue using that admin peer via the innernet client CLI to add any further peers or network CIDRs.

sudo innernet-server add-peer <interface>

Select the humans CIDR, and the CLI will automatically suggest the next available IP address. Any name is fine, just answer "yes" when asked if you would like to make the peer an admin. The process of adding a peer results in an invitation file. This file contains just enough information for the new peer to contact the innernet server and redeem its invitation. It should be transferred securely to the new peer, and it can only be used once to initialize the peer.
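
One common way to move the invitation securely is over SSH; the filename and hostnames below are placeholders rather than anything innernet prescribes:

scp ./<peer-name>.toml admin@new-peer-host:~/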

You can run the server with innernet-server serve <interface>, or if you're on Linux and want to run it via systemctl, run systemctl enable --now innernet-server@<interface>. If you're on a home network, don't forget to configure port forwarding to the Listen Port you specified when creating the innernet server.
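
If the server host also runs its own firewall, the UDP listen port needs to be open there too. A minimal sketch, assuming ufw and 51820 as the example port:

sudo ufw allow 51820/udp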

Peer Initialization

Let's assume the invitation file generated in the steps above has been transferred to the machine a network admin will be using.

You can initialize the client with

sudo innernet install /path/to/invitation.toml

You can customize the network name if you want to, or leave it at the default. innernet will then connect to the innernet server via WireGuard, generate a new key pair, and register that pair with the server. The private key in the invitation file can no longer be used.

If everything was successful, the new peer is on the network. You can run things like

sudo innernet list

or

sudo innernet list --tree

to view the current network and all CIDRs visible to this peer.
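
If you'd like to inspect the underlying WireGuard state directly (handshakes, transfer counters, allowed IPs), the standard wg tool from wireguard-tools works on the interface innernet manages:

sudo wg show <interface>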

Since we created an admin peer, we can also add new peers and CIDRs from this peer via innernet instead of having to always run commands on the server.

Adding Associations between CIDRs

In order for peers from one CIDR to be able to contact peers in another CIDR, those two CIDRs must be "associated" with each other.

With the admin peer we created above, let's add a new CIDR for some theoretical CI servers we have.

sudo innernet add-cidr <interface>

The name is ci-servers and the CIDR is 10.60.65.0/24; any range that doesn't overlap the existing humans CIDR will do for this example.

For now, we want peers in the humans CIDR to be able to access peers in the ci-servers CIDR.

sudo innernet add-association <interface>

The CLI will ask you to select the two CIDRs you want to associate. That's all it takes to allow peers in two different CIDRs to communicate!
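
To see it working end to end, a peer in humans should now be able to reach a peer in ci-servers; the address below is just an illustrative ci-servers IP:

ping -c 3 10.60.65.10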

You can verify the association with

sudo innernet list-associations <interface>

and associations can be deleted with

sudo innernet delete-associations <interface>

Enabling/Disabling Peers

For security reasons, IP addresses cannot be re-used by new peers, and therefore peers cannot be deleted. However, they can be disabled. Disabled peers will not show up in the list of peers when fetching the config for an interface.

Disable a peer with

sudo innernet disable-peer <interface>

Or re-enable a peer with

sudo innernet enable-peer <interface>

Specifying a Manual Endpoint

The innernet server will try to use the internet endpoint it sees from a peer so other peers can connect to that peer as well. This doesn't always work and you may want to set an endpoint explicitly. To set an endpoint, use

sudo innernet override-endpoint <interface>

You can go back to automatic endpoint discovery with

sudo innernet override-endpoint -u <interface>
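
To check which endpoint WireGuard currently has on file for each peer, wg (from wireguard-tools) can list them:

sudo wg show <interface> endpoints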

Setting the Local WireGuard Listen Port

If you want to change the port which WireGuard listens on, use

sudo innernet set-listen-port <interface>

or unset the port and use a randomized port with

sudo innernet set-listen-port -u <interface>
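
Either way, you can confirm the active port with wg (from wireguard-tools):

sudo wg show <interface> listen-port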

Remove Network

To permanently uninstall a created network, use

sudo innernet-server uninstall <interface>

Use with care!

Security recommendations

If you're running a service on innernet, there are some important security considerations.

Enable strict Reverse Path Filtering (RFC 3704)

Strict RPF prevents packets arriving on other interfaces from having internal source IP addresses. This is not the default on Linux, even though it is the right choice for 99.99% of situations. You can enable it by adding the following to a file such as /etc/sysctl.d/60-network-security.conf:

net.ipv4.conf.all.rp_filter=1
net.ipv4.conf.default.rp_filter=1
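
These settings apply at boot; to load them immediately without rebooting, reload the sysctl configuration (standard procps tooling, nothing innernet-specific):

sudo sysctl --system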

Bind to the WireGuard device

If possible, to ensure that packets are only ever transmitted over the WireGuard interface, it's recommended that you use SO_BINDTODEVICE on Linux or IP_BOUND_IF on macOS/BSDs. If you have strict reverse path filtering, though, this is less of a concern.

IP addresses alone often aren't enough authentication

Even following all the above precautions, a rogue application on a peer's machine could make requests on that peer's behalf unless you add extra layers of authentication to mitigate this CSRF-type vector.

It's recommended that you carefully consider this possibility before deciding that the source IP is sufficient for your authentication needs on a service.
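
As a sketch of what an extra layer could look like, a service listening on an innernet address might also require a bearer token; the address, port, path, and token variable below are hypothetical placeholders, not anything innernet provides:

curl -H "Authorization: Bearer $SERVICE_TOKEN" http://10.60.64.5:8080/status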

Installation

innernet has only officially been tested on Linux and macOS, but we hope to support as many platforms as is feasible!

Runtime Dependencies

It's assumed that WireGuard is installed on your system, either via the kernel module in Linux 5.6 and later, or via the wireguard-go userspace implementation.
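
A quick way to check what's already available on a Linux host (both commands come from standard system tooling, not from innernet):

modinfo wireguard   # usually reports the module when the kernel backend is available
wg --version        # from wireguard-tools, usable with either backend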

If WireGuard isn't already available on your system, see the official WireGuard installation instructions at https://www.wireguard.com/install/. innernet itself can be installed as follows:

Arch Linux

pacman -S innernet

Debian and Ubuntu

@tommie is kindly providing Debian/Ubuntu innernet builds in the https://github.com/tommie/innernet-debian repository.

Other Linux Distributions

We're looking for volunteers who are able to set up external builds for popular distributions. Please see issue #203.

macOS

brew install tonarino/innernet/innernet

Cargo

# to install innernet:
cargo install --git https://github.com/tonarino/innernet --tag v1.6.1 client

# to install innernet-server:
cargo install --git https://github.com/tonarino/innernet --tag v1.6.1 server

Note that you'll be responsible for updating manually.

Development

Cargo build feature for SELinux

If your target system uses SELinux, you will want to enable the selinux feature when building the innernet binary. This ensures that innernet maintains the correct SELinux context on the /etc/hosts file when adding hosts. To do so, add --features selinux to the cargo build options. The selinux-devel package will need to be installed for the correct headers.
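
For example, building the client with the feature enabled might look like this (the client/ path reflects the repository's crate layout and may differ):

cd client                                  # directory of the innernet client crate; adjust if the layout differs
cargo build --release --features selinux   # requires the SELinux development headers mentioned above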

innernet-server Build dependencies

Build:

cargo build --release --bin innernet-server

The resulting binary will be located at ./target/release/innernet-server

innernet Client CLI Build dependencies

Build:

cargo build --release --bin innernet

The resulting binary will be located at ./target/release/innernet
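
As a quick smoke test, both freshly built binaries should print their usage information:

./target/release/innernet-server --help
./target/release/innernet --help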

Testing

You can manually invoke the Docker-based tests, assuming you have the Docker daemon running. If you specify the --interactive flag, you can attach to the server and client innernet Docker containers and test various innernet commands inside a sandboxed environment.

docker-tests/build-docker-images.sh
docker-tests/run-docker-tests.sh [--interactive]

If you are developing a new feature, please consider adding a new test case to run-docker-tests.sh (example PR).

Releases

Please run the release script from a Linux machine: generated shell completions depend on the available WireGuard backends, and macOS doesn't support the kernel backend.

  1. Fetch and check out the main branch.
  2. Run ./release.sh [patch|major|minor|rc]
  3. Push the main branch and the created tag to the repo.