Top Related Projects
- libuv/libuv: Cross-platform asynchronous I/O
- tokio-rs/tokio: A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...
- actix/actix: Actor framework for Rust.
- netty/netty: Netty project - an event-driven asynchronous network application framework
- zeromq/libzmq: ZeroMQ core engine in C++, implements ZMTP/3.1
- nodejs/node: Node.js JavaScript runtime ✨🐢🚀✨
Quick Overview
Mio is a low-level, cross-platform I/O library for the Rust programming language. It provides a non-blocking, event-driven I/O API that allows developers to build high-performance network applications. Mio is a core component of the Tokio project, a popular asynchronous runtime for Rust.
Pros
- Cross-platform: Mio supports multiple operating systems, including Windows, macOS, and Linux, making it a versatile choice for building cross-platform applications.
- High-performance: Mio is designed for performance, utilizing efficient I/O primitives and event-driven architecture to enable the creation of scalable network applications.
- Asynchronous: Mio's API is built around asynchronous programming, allowing developers to write efficient, non-blocking code that can handle multiple concurrent connections.
- Lightweight: Mio has a small footprint and minimal dependencies, making it a suitable choice for resource-constrained environments.
Cons
- Complexity: Mio's low-level nature and focus on performance can make it more complex to use compared to higher-level networking libraries.
- Steep learning curve: Developers new to asynchronous programming and event-driven architectures may find the initial learning curve for Mio to be challenging.
- Limited documentation: While the core API is documented, some users may find the guides and examples thin and the documentation hard to navigate.
- Tokio association: Mio is developed as part of the Tokio project, which may concern developers who prefer a different asynchronous runtime, even though Mio itself does not depend on Tokio.
Code Examples
Here are a few examples of how to use Mio in Rust code:
- Listening for incoming connections:

```rust
use mio::net::TcpListener;
use mio::{Events, Interest, Poll, Token};
use std::io;

fn main() -> io::Result<()> {
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(1024);

    // Mio's `bind` takes a `SocketAddr`, so parse the address first.
    let addr = "127.0.0.1:8080".parse().unwrap();
    let mut listener = TcpListener::bind(addr)?;
    poll.registry().register(&mut listener, Token(0), Interest::READABLE)?;

    loop {
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            if event.token() == Token(0) {
                let (socket, _) = listener.accept()?;
                // Handle the new connection
            }
        }
    }
}
```
- Sending and receiving data over a TCP connection:

```rust
use mio::net::TcpStream;
use mio::{Events, Interest, Poll, Token};
use std::io::{self, Read, Write};

fn main() -> io::Result<()> {
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(1024);

    let addr = "127.0.0.1:8080".parse().unwrap();
    let mut stream = TcpStream::connect(addr)?;
    poll.registry().register(&mut stream, Token(0), Interest::READABLE | Interest::WRITABLE)?;

    // Simplified: `connect` returns before the connection is established, so a
    // real program waits for a writable event before writing and a readable
    // event before reading; otherwise both calls may fail with `WouldBlock`.
    poll.poll(&mut events, None)?;

    let data = b"Hello, Mio!";
    stream.write_all(data)?;
    let mut buf = [0; 1024];
    let n = stream.read(&mut buf)?;
    println!("Received: {}", String::from_utf8_lossy(&buf[..n]));
    Ok(())
}
```
- Using a custom event source:

```rust
use mio::event::Source;
use mio::net::TcpStream;
use mio::{Interest, Registry, Token};
use std::io;

// A custom source usually wraps another source (here a `TcpStream`) and
// forwards the `Source` calls to it. Calling `registry.register(self, ..)`
// from inside `register` would recurse forever.
struct CustomSource {
    inner: TcpStream,
    // Custom state
}

impl Source for CustomSource {
    fn register(&mut self, registry: &Registry, token: Token, interests: Interest) -> io::Result<()> {
        // Register the inner source with the registry
        self.inner.register(registry, token, interests)
    }

    fn reregister(&mut self, registry: &Registry, token: Token, interests: Interest) -> io::Result<()> {
        // Re-register the inner source with the registry
        self.inner.reregister(registry, token, interests)
    }

    fn deregister(&mut self, registry: &Registry) -> io::Result<()> {
        // Remove the inner source from the registry
        self.inner.deregister(registry)
    }
}
```
Competitor Comparisons
Cross-platform asynchronous I/O
Pros of libuv/libuv
- Mature and widely-used library with a large community and extensive documentation
- Supports a wide range of platforms, including Windows, macOS, and various Unix-like systems
- Provides a comprehensive set of APIs for handling I/O, networking, and other system-level tasks
Cons of libuv/libuv
- Relatively low-level and requires more manual management of resources compared to higher-level frameworks like Mio
- May have a steeper learning curve for developers who are more familiar with higher-level abstractions
- Potentially less performant for certain use cases due to its more general-purpose nature
Code Comparison
Here's a simple example of creating a TCP server using libuv/libuv and tokio-rs/mio:
libuv/libuv:

```c
uv_tcp_t server;
uv_tcp_init(uv_default_loop(), &server);
uv_tcp_bind(&server, (const struct sockaddr*)&addr, 0);
uv_listen((uv_stream_t*)&server, 128, on_new_connection);
```

tokio-rs/mio:

```rust
let mut listener = TcpListener::bind(addr)?;
poll.registry().register(&mut listener, Token(0), Interest::READABLE)?;
```

The libuv example requires more manual setup and management of the server handle and event loop, while the Mio example binds the listener and registers it for readiness events in two lines.
A runtime for writing reliable asynchronous applications with Rust. Provides I/O, networking, scheduling, timers, ...
Pros of Tokio
- Tokio provides a higher-level, more comprehensive set of APIs and features compared to Mio, making it easier to build complex asynchronous applications.
- Tokio includes built-in support for various protocols and transports, such as TCP, UDP, and HTTP, reducing the need for additional dependencies.
- Tokio's ecosystem includes a wide range of complementary libraries and tools, providing a more complete solution for building modern, scalable applications.
Cons of Tokio
- Tokio's larger feature set and complexity can make it more challenging to understand and use, especially for beginners or simple use cases.
- Tokio's runtime and event loop may introduce more overhead compared to a more lightweight library like Mio, which could be a concern for performance-critical applications.
- Tokio's dependency on the Rust standard library may limit its use in certain environments or scenarios where a more minimal runtime is required.
Code Comparison
Mio (simplified):
```rust
let mut poll = Poll::new()?;
let mut events = Events::with_capacity(1024);
poll.registry().register(&mut socket, Token(0), Interest::READABLE)?;

loop {
    poll.poll(&mut events, None)?;
    for event in events.iter() {
        // Handle event
    }
}
```
Tokio (simplified):
```rust
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (socket, _) = listener.accept().await?;
        tokio::spawn(async move {
            // Handle connection
        });
    }
}
```
Actor framework for Rust.
Pros of actix/actix
- Actix provides a higher-level, actor-based concurrency model, which can be easier to reason about and scale than the lower-level event-driven model of Mio.
- Actix has a more extensive set of features and utilities, including support for websockets, HTTP/2, and more.
- Actix has a more active community and ecosystem, with a larger number of third-party crates and integrations.
Cons of actix/actix
- Actix has a larger codebase and dependency tree, which can make it more complex to understand and integrate into certain projects.
- Actix's actor model may not be the best fit for all use cases, and can add overhead compared to a more lightweight event-driven approach.
- Actix has faced some controversies and concerns around its development and maintenance, which may make some users hesitant to adopt it.
Code Comparison
Mio (tokio-rs/mio):
```rust
let mut poll = Poll::new()?;
let mut events = Events::with_capacity(1024);
poll.registry().register(&mut socket, Token(0), Interest::READABLE)?;

loop {
    poll.poll(&mut events, None)?;
    for event in events.iter() {
        if event.is_readable() {
            // Handle readable event
        }
    }
}
```
Actix (actix/actix):
```rust
let sys = System::new();
let addr = MyActor::default().start();
let res = addr.send(MyMessage).await?;
```
Netty project - an event-driven asynchronous network application framework
Pros of Netty
- Netty provides a rich set of protocols and codecs, making it easier to implement complex network applications.
- Netty has a large and active community, with extensive documentation and support.
- Netty is highly performant and scalable, with a focus on non-blocking I/O and event-driven architecture.
Cons of Netty
- Netty has a steeper learning curve compared to Mio, especially for developers new to event-driven programming.
- Netty is a larger and more complex library, which may be overkill for simpler network applications.
- Netty is primarily written in Java, which may be a disadvantage for developers working in Rust-based ecosystems.
Code Comparison
Mio (Rust):
```rust
let mut poll = Poll::new()?;
let mut events = Events::with_capacity(1024);
poll.registry().register(&mut socket, Token(0), Interest::READABLE)?;

loop {
    poll.poll(&mut events, None)?;
    for event in events.iter() {
        // Handle the event
    }
}
```
Netty (Java):
```java
EventLoopGroup group = new NioEventLoopGroup();
ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(group)
    .channel(NioServerSocketChannel.class)
    .childHandler(new ChannelInitializer<SocketChannel>() {
        @Override
        protected void initChannel(SocketChannel ch) throws Exception {
            // Add your channel handlers here
        }
    });
bootstrap.bind(8080).sync().channel().closeFuture().sync();
```
ZeroMQ core engine in C++, implements ZMTP/3.1
Pros of zeromq/libzmq
- Mature and widely-used library for distributed messaging, with support for a variety of communication patterns (request-reply, publish-subscribe, etc.).
- Supports a wide range of programming languages, including C, C++, Python, Java, and more.
- Provides a high-level API that abstracts away the complexities of low-level socket programming.
Cons of zeromq/libzmq
- Larger codebase and more complex to understand and integrate into a project compared to Mio.
- May have a higher learning curve for developers who are not familiar with the ZeroMQ ecosystem.
- Potentially less performant for certain use cases due to the additional abstraction layers.
Code Comparison
Here's a simple example of using Mio to create a TCP server:
```rust
use mio::net::TcpListener;
use mio::{Events, Interest, Poll, Token};
use std::io;

fn main() -> io::Result<()> {
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(1024);

    let addr = "127.0.0.1:8080".parse().unwrap();
    let mut listener = TcpListener::bind(addr)?;
    poll.registry().register(&mut listener, Token(0), Interest::READABLE)?;

    loop {
        poll.poll(&mut events, None)?;
        // Handle events
    }
}
```
And here's an example of using ZeroMQ to create a simple request-reply server:
```c
#include <zmq.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    void *context = zmq_ctx_new();
    void *responder = zmq_socket(context, ZMQ_REP);
    zmq_bind(responder, "tcp://*:5555");

    while (1) {
        char buffer[10];
        zmq_recv(responder, buffer, 10, 0);
        printf("Received Hello\n");
        zmq_send(responder, "World", 5, 0);
    }

    zmq_close(responder);
    zmq_ctx_destroy(context);
    return 0;
}
```
Node.js JavaScript runtime ✨🐢🚀✨
Pros of Node.js
- Extensive Ecosystem: Node.js has a vast and mature ecosystem with a wide range of libraries and tools available through the npm package manager, making it easier to build complex applications.
- Cross-Platform Compatibility: Node.js is designed to be cross-platform, allowing developers to write code that can run on various operating systems, including Windows, macOS, and Linux.
- Asynchronous Programming: Node.js is built on an asynchronous, event-driven architecture, which can be beneficial for building scalable network applications that handle a large number of concurrent connections.
Cons of Node.js
- Callback Hell: The traditional callback-based asynchronous programming in Node.js can lead to the "callback hell" problem, where the code becomes difficult to read and maintain due to deeply nested callbacks.
- Single-Threaded Nature: Node.js is single-threaded, which means it can only utilize a single CPU core at a time. This can be a limitation for CPU-intensive tasks, as they may not take full advantage of modern multi-core processors.
- Less Low-Level Control: Node.js hides its event loop behind high-level abstractions, so developers who need the fine-grained control over I/O readiness and scheduling that a lower-level library like Mio provides may find it limiting.
Code Comparison
Node.js (using the http module):

```javascript
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello, World!\n');
}).listen(3000, () => {
  console.log('Server running at http://localhost:3000/');
});
```
Mio (using the mio crate):

```rust
use mio::net::TcpListener;
use mio::{Events, Interest, Poll, Token};
use std::io;

fn main() -> io::Result<()> {
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(1024);

    // Add a listener and register it with the poll
    let addr = "127.0.0.1:3000".parse().unwrap();
    let mut listener = TcpListener::bind(addr)?;
    poll.registry().register(&mut listener, Token(0), Interest::READABLE)?;

    loop {
        poll.poll(&mut events, None)?;
        // Handle events
    }
}
```
README
Mio – Metal I/O
Mio is a fast, low-level I/O library for Rust focusing on non-blocking APIs and event notification for building high performance I/O apps with as little overhead as possible over the OS abstractions.
API documentation
This is a low-level library; if you are looking for something easier to get started with, see Tokio.
Usage
To use `mio`, first add this to your `Cargo.toml`:

```toml
[dependencies]
mio = "1"
```

Next we can start using Mio. The following is a quick introduction using `TcpListener` and `TcpStream`. Note that `features = ["os-poll", "net"]` must be specified for this example.
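Equivalently, the required features can be enabled directly in the dependency declaration (a sketch using the feature names from the note above):

```toml
[dependencies]
mio = { version = "1", features = ["os-poll", "net"] }
```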
```rust
use std::error::Error;

use mio::net::{TcpListener, TcpStream};
use mio::{Events, Interest, Poll, Token};

// Some tokens to allow us to identify which event is for which socket.
const SERVER: Token = Token(0);
const CLIENT: Token = Token(1);

fn main() -> Result<(), Box<dyn Error>> {
    // Create a poll instance.
    let mut poll = Poll::new()?;
    // Create storage for events.
    let mut events = Events::with_capacity(128);

    // Setup the server socket.
    let addr = "127.0.0.1:13265".parse()?;
    let mut server = TcpListener::bind(addr)?;
    // Start listening for incoming connections.
    poll.registry()
        .register(&mut server, SERVER, Interest::READABLE)?;

    // Setup the client socket.
    let mut client = TcpStream::connect(addr)?;
    // Register the socket.
    poll.registry()
        .register(&mut client, CLIENT, Interest::READABLE | Interest::WRITABLE)?;

    // Start an event loop.
    loop {
        // Poll Mio for events, blocking until we get an event.
        poll.poll(&mut events, None)?;

        // Process each event.
        for event in events.iter() {
            // We can use the token we previously provided to `register` to
            // determine for which socket the event is.
            match event.token() {
                SERVER => {
                    // If this is an event for the server, it means a connection
                    // is ready to be accepted.
                    //
                    // Accept the connection and drop it immediately. This will
                    // close the socket and notify the client of the EOF.
                    let connection = server.accept();
                    drop(connection);
                }
                CLIENT => {
                    if event.is_writable() {
                        // We can (likely) write to the socket without blocking.
                    }

                    if event.is_readable() {
                        // We can (likely) read from the socket without blocking.
                    }

                    // Since the server just shuts down the connection, let's
                    // just exit from our event loop.
                    return Ok(());
                }
                // We don't expect any events with tokens other than those we provided.
                _ => unreachable!(),
            }
        }
    }
}
```
Features
- Non-blocking TCP, UDP, UDS
- I/O event queue backed by epoll, kqueue, and IOCP
- Zero allocations at runtime
- Platform specific extensions
Non-goals
The following are specifically omitted from Mio and are left to the user or higher-level libraries.
- File operations
- Thread pools / multi-threaded event loop
- Timers
Platforms
Currently supported platforms:
- Android (API level 21)
- DragonFly BSD
- FreeBSD
- Linux
- NetBSD
- OpenBSD
- Windows
- iOS
- macOS
Mio can handle interfacing with each of the event systems of the aforementioned platforms. The details of their implementation are further discussed in the `Poll` type of the API documentation (see above).
Mio generally supports the same versions of the above mentioned platforms as Rust the language (rustc) does, unless otherwise noted.
The Windows implementation for polling sockets is using the wepoll strategy. This uses the Windows AFD system to access socket readiness events.
Unsupported
- Wine, see issue #1444
MSRV Policy
The MSRV (Minimum Supported Rust Version) is fixed for a given minor (1.x) version. However, it can be increased when bumping minor versions; i.e., going from 1.0 to 1.1 allows us to increase the MSRV. Users unable to upgrade their Rust version can use an older minor version instead. Below is a list of Mio versions and their MSRV:
- v0.8: Rust 1.46.
- v1.0: Rust 1.70.
Note however that Mio also has dependencies, which might have different MSRV policies. We try to stick to the above policy when updating dependencies, but this is not always possible.
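Projects that need to stay on an older toolchain can record the matching bound via Cargo's `rust-version` field (a hypothetical manifest fragment; the crate name is illustrative):

```toml
[package]
name = "my-app"        # hypothetical crate name
rust-version = "1.70"  # matches Mio v1.0's MSRV

[dependencies]
mio = "1"
```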
Unsupported flags
Mio uses different implementations to support the same functionality depending on the platform. Mio generally uses the "best" implementation possible, where "best" usually means most efficient for Mio's use case. However this means that the implementation is often specific to a limited number of platforms, meaning we often have multiple implementations for the same functionality. In some cases it might be required to not use the "best" implementation, but another implementation Mio supports (on other platforms). Mio does not officially support secondary implementations on platforms, however we do have various cfg flags to force another implementation for these situations.
Current flags:
- `mio_unsupported_force_poll_poll`: uses an implementation based on `poll(2)` for `mio::Poll`.
- `mio_unsupported_force_waker_pipe`: uses an implementation based on `pipe(2)` for `mio::Waker`.
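These are cfg flags rather than Cargo features, so they are typically passed to rustc, for example via RUSTFLAGS (a sketch; the flag name comes from the list above):

```shell
RUSTFLAGS="--cfg mio_unsupported_force_poll_poll" cargo build
```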
Again, Mio does not officially support these flags, and they may disappear in the future.
Community
A group of Mio users hang out on Discord, this can be a good place to go for questions. It's also possible to open a new issue on GitHub to ask questions, report bugs or suggest new features.
Contributing
Interested in getting involved? We would love to help you! For simple bug fixes, just submit a PR with the fix and we can discuss the fix directly in the PR. If the fix is more complex, start with an issue.
If you want to propose an API change, create an issue to start a discussion with the community. Also, feel free to talk with us in Discord.
Finally, be kind. We support the Rust Code of Conduct.