Top Related Projects
:floppy_disk: peer-to-peer sharing & live synchronization of files via command line
Peer-to-peer hypermedia protocol
Open Source Continuous File Synchronization
Deduplicating archiver with compression and authenticated encryption.
the Crypto Undertaker
Ceph is a distributed object, block, and file storage platform
Quick Overview
ZboxFS is a zero-details, privacy-focused embedded file system written in Rust. It provides a secure and efficient way to store files and directories, with built-in encryption and compression features. ZboxFS can be used as both an in-memory file system and a persistent storage solution.
Pros
- Strong encryption and privacy features
- Cross-platform compatibility (Windows, macOS, Linux, and WebAssembly)
- Efficient compression and deduplication
- Supports both in-memory and persistent storage
Cons
- Limited ecosystem compared to more established file systems
- Potential performance overhead due to encryption and compression
- Requires Rust knowledge for integration and customization
- Still in active development, may have stability issues
Code Examples
- Creating a new ZboxFS instance:

```rust
use zbox::{init_env, RepoOpener};

fn main() {
    init_env();
    let repo = RepoOpener::new()
        .create(true)
        .open("file://./my_repo", "password")
        .unwrap();
}
```
- Writing data to a file:

```rust
use std::io::Write;
use zbox::OpenOptions;

let mut file = OpenOptions::new()
    .create(true)
    .open(&mut repo, "/hello.txt")
    .unwrap();
file.write_all(b"Hello, ZboxFS!").unwrap();
file.finish().unwrap(); // seal the write into a permanent content version
```
- Reading data from a file:

```rust
use std::io::Read;

let mut file = repo.open_file("/hello.txt").unwrap();
let mut content = String::new();
file.read_to_string(&mut content).unwrap();
println!("File content: {}", content);
```
Getting Started
To use ZboxFS in your Rust project, add the following to your Cargo.toml:

```toml
[dependencies]
zbox = "0.9.2"
```
Then, in your Rust code:
```rust
use zbox::{init_env, RepoOpener};

fn main() {
    // Initialize the ZboxFS environment
    init_env();

    // Create and open a new repository
    let repo = RepoOpener::new()
        .create(true)
        .open("file://./my_repo", "password")
        .unwrap();

    // Use the repository for file operations
    // ...
}
```
Competitor Comparisons
:floppy_disk: peer-to-peer sharing & live synchronization of files via command line
Pros of dat
- Focuses on distributed data sharing and syncing
- Has a larger community and ecosystem of tools
- Supports versioning and history tracking
Cons of dat
- Less emphasis on encryption and security
- Not designed specifically for embedded file systems
- May have higher overhead for simple local storage use cases
Code comparison
dat:

```javascript
const Dat = require('dat-node')

Dat('./my-dataset', (err, dat) => {
  if (err) throw err
  dat.importFiles()
  dat.joinNetwork()
})
```
zbox:

```rust
use zbox::{init_env, RepoOpener};

init_env();
let mut repo = RepoOpener::new()
    .create(true)
    .open("file://path/to/repo", "password")?;
```
Key differences
- dat is primarily for peer-to-peer data sharing, while zbox focuses on secure local storage
- zbox provides built-in encryption, while dat relies on external encryption methods
- dat has a more extensive ecosystem of tools and applications
- zbox is designed for embedded use and has a smaller footprint
- dat uses JavaScript/Node.js, while zbox is implemented in Rust
Both projects serve different primary purposes: dat for distributed data sharing and zbox for secure local file systems. The choice between them depends on specific use cases and requirements.
Peer-to-peer hypermedia protocol
Pros of IPFS
- Decentralized and distributed architecture, enabling global content addressing
- Large and active community with extensive ecosystem and tooling
- Supports multiple protocols and integrations with existing web technologies
Cons of IPFS
- Higher complexity and learning curve for implementation
- Potential performance issues with large-scale data retrieval
- Requires more resources for node operation and content pinning
Code Comparison
IPFS (JavaScript example):

```javascript
import { create } from 'ipfs-core'

const ipfs = await create()
const { cid } = await ipfs.add('Hello world')
console.log(cid.toString())
```
ZboxFS (Rust example):

```rust
use zbox::{init_env, RepoOpener};

init_env();
let repo = RepoOpener::new()
    .create(true)
    .open("mem://test", "password")
    .unwrap();
```
Key Differences
- IPFS focuses on content-addressable, peer-to-peer file sharing, while ZboxFS is designed for encrypted, local file systems
- IPFS has a broader scope and use cases, including decentralized web applications, while ZboxFS targets secure local storage
- ZboxFS offers built-in encryption and versioning, whereas IPFS relies on additional layers for these features
- IPFS has a larger community and more extensive documentation, while ZboxFS is more specialized and compact
Open Source Continuous File Synchronization
Pros of Syncthing
- Mature, widely-used project with active community support
- Cross-platform synchronization across multiple devices
- Decentralized architecture for enhanced privacy and security
Cons of Syncthing
- Primarily focused on file synchronization, not encryption or secure storage
- Can be complex to set up and configure for non-technical users
- Requires devices to be online simultaneously for synchronization
Code Comparison
Syncthing (Go):

```go
func (m *Model) Index(deviceID protocol.DeviceID, folder string, files []protocol.FileInfo, flags uint32, options []protocol.Option) {
    m.fmut.Lock()
    defer m.fmut.Unlock()
    if !m.folderSharedWith(folder, deviceID) {
        l.Infof("Unexpected index for folder %q from device %v", folder, deviceID)
        return
    }
}
```
ZBox (Rust):

```rust
use zbox::{Repo, RepoOpener, Result};

pub fn open(uri: &str, password: &str) -> Result<Repo> {
    RepoOpener::new().open(uri, password)
}
```
While both projects deal with file management, Syncthing focuses on synchronization across devices, whereas ZBox is centered around secure, encrypted storage. Syncthing's code snippet demonstrates its device and folder management, while ZBox's code shows its focus on opening and loading encrypted repositories.
Deduplicating archiver with compression and authenticated encryption.
Pros of Borg
- Mature and widely-used backup solution with a large community
- Supports remote backups and repositories
- Offers compression and encryption features
Cons of Borg
- Primarily focused on backups, not general-purpose file storage
- Requires separate mount tools for accessing backups as filesystems
Code Comparison
Borg (Python):

```python
# Simplified use of Borg's internal repository API;
# the Manifest class lives in Borg's internals and its module path
# varies across Borg versions.
import borg.repository

repo = borg.repository.Repository('/path/to/repo')
with repo:
    manifest, key = Manifest.load(repo)
```
ZBox (Rust):

```rust
use zbox::RepoOpener;

let mut repo = RepoOpener::new()
    .create(true)
    .open("file://path/to/repo", "password")?;
```
Key Differences
- Borg is designed for backups, while ZBox is a general-purpose encrypted filesystem
- Borg uses Python, ZBox is implemented in Rust
- ZBox provides a virtual filesystem interface, Borg focuses on archive management
- Borg has more advanced deduplication and compression features
- ZBox offers better integration for applications needing secure storage
Both projects aim to provide secure data storage, but with different primary use cases. Borg excels in backup scenarios, while ZBox is more suitable for applications requiring an encrypted filesystem.
the Crypto Undertaker
Pros of Tomb
- Designed specifically for secure file storage on Linux systems
- Supports multiple encryption algorithms and key management options
- Integrates well with existing Linux tools and workflows
Cons of Tomb
- Limited to Linux platforms, not cross-platform like ZBox
- Requires root privileges for many operations, which may be a security concern
- Less focus on performance optimization compared to ZBox
Code Comparison
Tomb (creating and opening a tomb):

```shell
tomb dig -s 100 secret.tomb
tomb forge secret.tomb.key
tomb lock secret.tomb -k secret.tomb.key
tomb open secret.tomb -k secret.tomb.key
```
ZBox (creating and opening an encrypted repository):

```rust
use zbox::{init_env, RepoOpener};

init_env();
let mut repo = RepoOpener::new()
    .create(true)
    .open("file://repo_path", "password")?;
let mut file = repo.create_file("/foo")?;
file.write_once(b"hello, world")?;
```
Both Tomb and ZBox provide encrypted storage solutions, but they target different use cases and platforms. Tomb is a Linux-specific tool for creating encrypted containers, while ZBox is a cross-platform, embedded file system focused on security and performance. ZBox offers a more programmatic approach with its Rust API, whereas Tomb is primarily used via command-line interface and shell scripts.
Ceph is a distributed object, block, and file storage platform
Pros of Ceph
- Highly scalable and distributed storage system
- Supports object, block, and file storage in a unified platform
- Active community and extensive documentation
Cons of Ceph
- Complex setup and configuration process
- Higher resource requirements for deployment
- Steeper learning curve for management and maintenance
Code Comparison
Ceph (C++):

```cpp
class ObjectStore {
public:
    virtual int mount() = 0;
    virtual int umount() = 0;
    virtual int mkfs() = 0;
    virtual int mkjournal() = 0;
};
```
ZBox (Rust, simplified):

```rust
pub struct Repo {
    inner: Arc<Inner>,
}

impl Repo {
    pub fn open(uri: &str) -> Result<Self> {
        // Implementation details
    }
}
```
Key Differences
- Ceph is a distributed storage system, while ZBox is an embedded file system
- Ceph is written in C++, ZBox is written in Rust
- Ceph offers more storage options and scalability, ZBox focuses on encryption and portability
- Ceph has a larger codebase and more complex architecture, ZBox is more lightweight
Use Cases
Ceph:
- Large-scale storage clusters
- Cloud storage infrastructure
- Enterprise data centers
ZBox:
- Embedded applications
- Secure local file storage
- Cross-platform data synchronization
README
ZboxFS
ZboxFS is a zero-details, privacy-focused in-app file system. Its goal is to help applications store files securely, privately and reliably. By encapsulating files and directories into an encrypted repository, it provides a virtual file system with exclusive access for the authorised application.
Unlike other system-level file systems, such as ext4, XFS and Btrfs, which provide shared access to multiple processes, ZboxFS is a file system that runs in the same memory space as the application. It provides access to only one process at a time.
By abstracting IO access, ZboxFS supports a variety of underlying storage layers, including memory, OS file system, RDBMS and key-value object store.
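The in-process model described above can be pictured with a small std-only sketch (purely illustrative; `ToyRepo` is a made-up type, and ZboxFS adds encryption, versioning and real storage backends on top of this idea): the "file system" is just a data structure owned by the application, so only that one process can see it.

```rust
use std::collections::HashMap;

/// A toy in-app "file system": it lives entirely inside the owning
/// process's memory, so no other process can observe its contents.
struct ToyRepo {
    files: HashMap<String, Vec<u8>>,
}

impl ToyRepo {
    fn new() -> Self {
        ToyRepo { files: HashMap::new() }
    }

    /// Store a file's content under a path.
    fn write(&mut self, path: &str, data: &[u8]) {
        self.files.insert(path.to_string(), data.to_vec());
    }

    /// Read a file's content back, if it exists.
    fn read(&self, path: &str) -> Option<&[u8]> {
        self.files.get(path).map(|v| v.as_slice())
    }
}

fn main() {
    let mut repo = ToyRepo::new();
    repo.write("/hello.txt", b"Hello, World!");
    assert_eq!(repo.read("/hello.txt"), Some(&b"Hello, World!"[..]));
    println!("toy repo holds {} file(s)", repo.files.len());
}
```

Swapping the `HashMap` for an encrypted, block-packed store is, at a very high level, what the storage abstraction layer does.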
Disclaimer
ZboxFS is under active development, we are not responsible for any data loss or leak caused by using it. Always back up your files and use at your own risk!
Features
- Everything is encrypted :lock:, including metadata and directory structure, no knowledge can be leaked to underlying storage
- State-of-the-art cryptography: AES-256-GCM (hardware accelerated), XChaCha20-Poly1305, Argon2 password hashing and more, powered by libsodium
- Support varieties of underlying storages, including memory, OS file system, RDBMS, Key-value object store and more
- Files and directories are packed into same-sized blocks to eliminate metadata leakage
- Content-based data chunk deduplication and file-based deduplication
- Data compression using LZ4 in fast mode, optional
- Data integrity is guaranteed by authenticated encryption primitives (AEAD crypto)
- File contents versioning
- Copy-on-write (COW :cow:) semantics
- ACID transactional operations
- Built with Rust :hearts:
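The content-based deduplication listed above splits files at boundaries chosen by a rolling hash over a sliding window, so identical runs of data produce identical chunks even when their offsets shift. Below is a minimal std-only sketch of that idea; the trivial byte-sum window and the constants are stand-ins for zbox's actual Rabin rolling hash, not its implementation:

```rust
const WINDOW: usize = 16;
const MASK: u32 = 0x3F; // toy target: average chunk of ~64 bytes

/// Split data at positions where the rolling-window hash matches MASK.
/// Boundaries depend only on content, never on absolute offsets.
fn chunk(data: &[u8]) -> Vec<&[u8]> {
    let mut chunks = Vec::new();
    let mut start = 0;
    for i in 0..data.len() {
        if i + 1 >= start + WINDOW {
            // Toy "rolling hash": sum of the last WINDOW bytes.
            let sum: u32 = data[i + 1 - WINDOW..=i].iter().map(|&b| b as u32).sum();
            if sum & MASK == MASK {
                chunks.push(&data[start..=i]);
                start = i + 1;
            }
        }
    }
    if start < data.len() {
        chunks.push(&data[start..]);
    }
    chunks
}

fn main() {
    let data: Vec<u8> = (0u8..=255).cycle().take(4096).collect();
    let chunks = chunk(&data);
    // Concatenating the chunks always reconstructs the original data.
    let total: usize = chunks.iter().map(|c| c.len()).sum();
    assert_eq!(total, data.len());
    println!("{} bytes split into {} chunks", data.len(), chunks.len());
}
```

Because boundaries are content-defined, inserting a few bytes near the start of a file only changes the chunks around the insertion point; the rest keep their identities and deduplicate against the previous version.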
Comparison
Many OS-level file systems support encryption, such as EncFS, APFS and ZFS. Some disk encryption tools also provide a virtual file system, such as TrueCrypt, LUKS and VeraCrypt. The feature comparison below shows how ZboxFS differs from them.
| | ZboxFS | OS-level File Systems | Disk Encryption Tools |
|---|---|---|---|
| Encrypts file contents | :heavy_check_mark: | partial | :heavy_check_mark: |
| Encrypts file metadata | :heavy_check_mark: | partial | :heavy_check_mark: |
| Encrypts directory | :heavy_check_mark: | partial | :heavy_check_mark: |
| Data integrity | :heavy_check_mark: | partial | :heavy_multiplication_x: |
| Shared access for processes | :heavy_multiplication_x: | :heavy_check_mark: | :heavy_check_mark: |
| Deduplication | :heavy_check_mark: | :heavy_multiplication_x: | :heavy_multiplication_x: |
| Compression | :heavy_check_mark: | partial | :heavy_multiplication_x: |
| Content versioning | :heavy_check_mark: | :heavy_multiplication_x: | :heavy_multiplication_x: |
| COW semantics | :heavy_check_mark: | partial | :heavy_multiplication_x: |
| ACID Transaction | :heavy_check_mark: | :heavy_multiplication_x: | :heavy_multiplication_x: |
| Varieties of storages | :heavy_check_mark: | :heavy_multiplication_x: | :heavy_multiplication_x: |
| API access | :heavy_check_mark: | through VFS | through VFS |
| Symbolic links | :heavy_multiplication_x: | :heavy_check_mark: | depends on inner FS |
| Users and permissions | :heavy_multiplication_x: | :heavy_check_mark: | :heavy_check_mark: |
| FUSE support | :heavy_multiplication_x: | :heavy_check_mark: | :heavy_check_mark: |
| Linux and macOS support | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Windows support | :heavy_check_mark: | partial | :heavy_check_mark: |
Supported Storage
ZboxFS supports a variety of underlying storages. Memory storage is enabled by default. All the other storages can be enabled individually by specifying its corresponding Cargo feature when building ZboxFS.
| Storage | URI identifier | Cargo Feature |
|---|---|---|
| Memory | "mem://" | N/A |
| OS file system | "file://" | storage-file |
| SQLite | "sqlite://" | storage-sqlite |
| Redis | "redis://" | storage-redis |
| Zbox Cloud Storage | "zbox://" | storage-zbox-native |
* Visit zbox.io to learn more about Zbox Cloud Storage.
Specs
| Algorithm and data structure | Value |
|---|---|
| Authenticated encryption | AES-256-GCM or XChaCha20-Poly1305 |
| Password hashing | Argon2 |
| Key derivation | BLAKE2b |
| Content dedup | Rabin rolling hash |
| File dedup | Merkle tree |
| Index structure | Log-structured merge-tree |
| Compression | LZ4 in fast mode |
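The "File dedup: Merkle tree" entry can be illustrated with a short std-only sketch: hash fixed-size chunks, then fold pairs of hashes upward until one root remains, so two files with identical content yield the same root and can be deduplicated by comparing a single value. (`DefaultHasher` and the 8-byte chunk size here are illustrative stand-ins for zbox's real cryptographic hash and chunking.)

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hash one chunk of file content (leaf of the Merkle tree).
fn leaf_hash(chunk: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    chunk.hash(&mut h);
    h.finish()
}

/// Fold a level of hashes pairwise until a single root remains.
fn merkle_root(mut level: Vec<u64>) -> u64 {
    if level.is_empty() {
        return 0;
    }
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| {
                let mut h = DefaultHasher::new();
                pair.hash(&mut h);
                h.finish()
            })
            .collect();
    }
    level[0]
}

/// Merkle root over fixed-size 8-byte chunks of a file's content.
fn file_root(data: &[u8]) -> u64 {
    merkle_root(data.chunks(8).map(leaf_hash).collect())
}

fn main() {
    let a = b"the quick brown fox jumps over the lazy dog";
    let b = b"the quick brown fox jumps over the lazy cat";
    // Identical content always yields an identical root.
    assert_eq!(file_root(a), file_root(a));
    // Different content yields (with overwhelming probability) a different root.
    assert_ne!(file_root(a), file_root(b));
    println!("root of a: {:016x}", file_root(a));
}
```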
Limits
| Limit | Value |
|---|---|
| Data block size | 8 KiB |
| Maximum encryption frame size | 128 KiB |
| Super block size | 8 KiB |
| Maximum filename length | No limit |
| Allowable characters in directory entries | Any UTF-8 character except / |
| Maximum pathname length | No limit |
| Maximum file size | 16 EiB |
| Maximum repo size | 16 EiB |
| Max number of files | No limit |
Metadata
| Metadata | Value |
|---|---|
| Stores file owner | No |
| POSIX file permissions | No |
| Creation timestamps | Yes |
| Last access / read timestamps | No |
| Last change timestamps | Yes |
| Access control lists | No |
| Security | Integrated with crypto |
| Extended attributes | No |
Capabilities
| Capability | Value |
|---|---|
| Hard links | No |
| Symbolic links | No |
| Case-sensitive | Yes |
| Case-preserving | Yes |
| File change log | By content versioning |
| Filesystem-level encryption | Yes |
| Data deduplication | Yes |
| Data checksums | Integrated with crypto |
| Offline grow | No |
| Online grow | Auto |
| Offline shrink | No |
| Online shrink | Auto |
Allocation and layout policies
| Feature | Value |
|---|---|
| Address allocation scheme | Append-only, linear address space |
| Sparse files | No |
| Transparent compression | Yes |
| Extents | No |
| Copy on write | Yes |
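The copy-on-write entry above has the same semantics std Rust exposes through `Arc::make_mut`: a shared block is duplicated only at the moment a writer touches it, so readers of the old version keep seeing unchanged data. This sketch is illustrative only, not zbox internals:

```rust
use std::sync::Arc;

fn main() {
    // Two "versions" initially share the same data block.
    let v1: Arc<Vec<u8>> = Arc::new(b"Hello, World!".to_vec());
    let mut v2 = Arc::clone(&v1); // cheap: no data copied yet
    assert!(Arc::ptr_eq(&v1, &v2));

    // The first write to v2 triggers the copy; v1 keeps the old content.
    Arc::make_mut(&mut v2)[0] = b'J';
    assert!(!Arc::ptr_eq(&v1, &v2));
    assert_eq!(&v1[..], b"Hello, World!");
    assert_eq!(&v2[..], b"Jello, World!");
    println!("old and new versions now occupy separate blocks");
}
```

Applied per data block, this is what makes keeping old content versions cheap: unchanged blocks are shared between versions rather than copied.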
Storage fragmentation
| Fragmentation | Value |
|---|---|
| Memory storage | No |
| File storage | fragment unit size < 32 MiB |
| RDBMS storage | No |
| Key-value storage | No |
| Zbox cloud storage | fragment unit size < 128 KiB |
How to use
For reference documentation, please visit the documentation site.
Requirements
Supported Platforms
- 64-bit Debian-based Linux, such as Ubuntu
- 64-bit macOS
- 64-bit Windows
- 64-bit Android, API level >= 21
32-bit and other operating systems are NOT supported yet.
Usage
Add the following dependency to your Cargo.toml:

```toml
[dependencies]
zbox = "0.9.2"
```
If you don't want to install libsodium yourself, specify the libsodium-bundled feature in the dependency, which will automatically download, verify and build libsodium.

```toml
[dependencies]
zbox = { version = "0.9.2", features = ["libsodium-bundled"] }
```
Example
```rust
extern crate zbox;

use std::io::{Read, Write, Seek, SeekFrom};
use zbox::{init_env, RepoOpener, OpenOptions};

fn main() {
    // initialise zbox environment, called first
    init_env();

    // create and open a repository in current OS directory
    let mut repo = RepoOpener::new()
        .create(true)
        .open("file://./my_repo", "your password")
        .unwrap();

    // create and open a file in repository for writing
    let mut file = OpenOptions::new()
        .create(true)
        .open(&mut repo, "/my_file.txt")
        .unwrap();

    // use std::io::Write trait to write data into it
    file.write_all(b"Hello, World!").unwrap();

    // finish writing to make a permanent content version
    file.finish().unwrap();

    // read file content using std::io::Read trait
    let mut content = String::new();
    file.seek(SeekFrom::Start(0)).unwrap();
    file.read_to_string(&mut content).unwrap();
    assert_eq!(content, "Hello, World!");
}
```
Build with Docker
ZboxFS comes with Docker support, which makes building it easier. Check each repo for more details.

- zboxfs/base: base image for building ZboxFS on Linux
- zboxfs/wasm: Docker image for building the WebAssembly binding
- zboxfs/nodejs: Docker image for building the Node.js binding
- zboxfs/android: Docker image for building the Android Java binding
Static linking with libsodium
By default, ZboxFS links dynamically against libsodium. If you want static linking instead, set the two environment variables below.
On Linux/macOS:

```shell
export SODIUM_LIB_DIR=/path/to/your/libsodium/lib
export SODIUM_STATIC=true
```

On Windows:

```shell
set SODIUM_LIB_DIR=C:\path\to\your\libsodium\lib
set SODIUM_STATIC=true
```

And then re-build the code:

```shell
cargo build
```
Performance
The performance test was run on a 2017 MacBook Pro with the following specs:

| Spec | Value |
|---|---|
| Processor Name | Intel Core i7 |
| Processor Speed | 3.5 GHz |
| Number of Processors | 1 |
| Total Number of Cores | 2 |
| L2 Cache (per Core) | 256 KB |
| L3 Cache | 4 MB |
| Memory | 16 GB |
| OS Version | macOS High Sierra 10.13.6 |
Test result:
| | Read | Write | TPS |
|---|---|---|---|
| Baseline (memcpy) | 3658.23 MB/s | 3658.23 MB/s | N/A |
| Baseline (file) | 1307.97 MB/s | 2206.30 MB/s | N/A |
| Memory storage (no compress) | 605.01 MB/s | 186.20 MB/s | 1783 tx/s |
| Memory storage (compress) | 505.04 MB/s | 161.11 MB/s | 1180 tx/s |
| File storage (no compress) | 445.28 MB/s | 177.39 MB/s | 313 tx/s |
| File storage (compress) | 415.85 MB/s | 158.22 MB/s | 325 tx/s |
To run the performance test on your own computer, please follow the instructions in CONTRIBUTING.md.
Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.
Community
License
ZboxFS is licensed under the Apache 2.0 License - see the LICENSE file for details.