Top Related Projects
:floppy_disk: peer-to-peer sharing & live synchronization of files via command line
Peer-to-peer hypermedia protocol
A database of unforgeable append-only feeds, optimized for efficient replication for peer to peer protocols
Peer-to-Peer Databases for the Decentralized Web
libp2p implementation in Go
Quick Overview
Hypercore is a distributed, append-only log that can be used to build decentralized applications. It provides a simple and efficient way to share and replicate data across a network of peers, making it a powerful tool for building peer-to-peer applications.
Pros
- Decentralized and Distributed: Hypercore is a decentralized, peer-to-peer protocol, which means that data is stored and shared across a network of peers, rather than on a central server.
- Efficient Data Replication: Hypercore uses signed Merkle trees to verify data and replicate only the blocks a peer is missing, reducing the amount of data that needs to be transferred.
- Scalable and Performant: Hypercore is designed to be highly scalable and performant, allowing for the efficient handling of large amounts of data and high levels of concurrency.
- Flexible and Extensible: Hypercore is designed to be a flexible and extensible platform, allowing developers to build a wide range of decentralized applications on top of it.
Cons
- Complexity: Hypercore is a relatively complex system, with a steep learning curve for developers who are new to decentralized technologies.
- Limited Adoption: While Hypercore is a powerful and innovative technology, it has not yet achieved widespread adoption, which may limit its ecosystem and available tooling.
- Security Concerns: As with any decentralized system, there are potential security concerns that need to be carefully considered, such as the risk of data tampering or network attacks.
- Lack of Standardization: Hypercore is a relatively new technology, and there may be a lack of standardization and interoperability with other decentralized protocols and tools.
Code Examples
Here are a few examples of how to use Hypercore in your code:
// Creating a new Hypercore
const Hypercore = require('hypercore')
const core = new Hypercore('./my-hypercore')
// Appending data to the Hypercore
await core.append('Hello, world!')
// Replicating the Hypercore over a stream (socket is e.g. a TCP connection to a peer)
const stream = core.replicate(true) // true = we initiated the connection
socket.pipe(stream).pipe(socket)
// Reading data from the Hypercore
const data = await core.get(0)
console.log(data.toString()) // 'Hello, world!'
// Subscribing to updates on the Hypercore
core.on('append', () => {
  console.log('New data appended!')
})
// Closing the Hypercore
await core.close()
Getting Started
To get started with Hypercore, you'll need to have Node.js and npm installed on your system. You can then install the Hypercore library using npm:
npm install hypercore
Once you have the library installed, you can start using it in your code. Here's a simple example of how to create a new Hypercore and append some data to it:
const Hypercore = require('hypercore')
const core = new Hypercore('./my-hypercore')
await core.append('Hello, world!')
const data = await core.get(0)
console.log(data.toString()) // 'Hello, world!'
await core.close()
For more detailed information on using Hypercore, you can check out the project's documentation.
Competitor Comparisons
:floppy_disk: peer-to-peer sharing & live synchronization of files via command line
Pros of Dat
- More established ecosystem with a wider range of tools and applications
- Better documentation and community resources
- Supports a broader range of use cases, including scientific data sharing
Cons of Dat
- Less active development and maintenance
- Older codebase with potential legacy issues
- May have performance limitations compared to newer alternatives
Code Comparison
Dat:
const Dat = require('dat-node')
Dat('./my-dataset', (err, dat) => {
if (err) throw err
dat.importFiles()
dat.joinNetwork()
})
Hypercore:
const Hypercore = require('hypercore')
const feed = new Hypercore('./my-dataset')
await feed.append('Hello, World!')
console.log('Data appended!')
Summary
Dat offers a more mature ecosystem with better documentation, but Hypercore provides a more modern and actively maintained codebase. Dat is suitable for a wider range of applications, especially in scientific data sharing, while Hypercore focuses on providing a low-level append-only log that can be used as a building block for various distributed systems. The choice between the two depends on specific project requirements, desired level of community support, and performance needs.
Peer-to-peer hypermedia protocol
Pros of IPFS
- Larger, more established ecosystem with broader adoption
- Content-addressed storage provides built-in data integrity
- Supports a wider range of use cases beyond file sharing
Cons of IPFS
- Can be more resource-intensive and slower for certain operations
- More complex architecture, potentially harder to implement and maintain
- Less focused on real-time data synchronization
Code Comparison
IPFS (JavaScript):
import { create } from 'ipfs-core'
const ipfs = await create()
const { cid } = await ipfs.add('Hello, IPFS!')
console.log(cid.toString())
Hypercore:
import Hypercore from 'hypercore'
const core = new Hypercore('./my-hypercore')
await core.append('Hello, Hypercore!')
console.log(core.key.toString('hex'))
Key Differences
- IPFS uses content-addressing, while Hypercore uses append-only logs
- Hypercore is more focused on real-time data synchronization
- IPFS has a more complex architecture but offers broader functionality
- Hypercore is generally faster for certain operations but has a narrower scope
Both projects aim to decentralize data storage and sharing, but they approach the problem from different angles. IPFS is more versatile and widely adopted, while Hypercore offers better performance for specific use cases, particularly those involving real-time data updates.
A database of unforgeable append-only feeds, optimized for efficient replication for peer to peer protocols
Pros of ssb-db
- ssb-db is designed for the Secure Scuttlebutt (SSB) protocol, which provides a decentralized and privacy-focused approach to data storage and communication.
- ssb-db integrates well with the broader SSB ecosystem, allowing for seamless interaction with other SSB-based applications.
- ssb-db has a strong focus on security and privacy, with features like end-to-end encryption and decentralized data storage.
Cons of ssb-db
- ssb-db is primarily focused on the SSB protocol and may not be as versatile or adaptable to other use cases as Hypercore.
- The learning curve for ssb-db may be steeper, as it requires understanding the SSB protocol and ecosystem.
- The performance and scalability of ssb-db may be more limited compared to Hypercore, as it is designed for a specific use case.
Code Comparison
Hypercore (holepunchto/hypercore):
const Hypercore = require('hypercore')
const feed = new Hypercore('./my-feed')
await feed.append('hello world')
console.log('data appended!')
ssb-db (ssbc/ssb-db):
const ssbKeys = require('ssb-keys')
const ssbClient = require('ssb-client')
ssbClient((err, sbot) => {
if (err) throw err
sbot.publish({ type: 'post', text: 'hello world' }, (err, msg) => {
if (err) throw err
console.log('message published:', msg.key)
})
})
Peer-to-Peer Databases for the Decentralized Web
Pros of OrbitDB
- Supports multiple database types (key-value, log, feed, documents)
- Built-in access control and encryption features
- More extensive documentation and examples
Cons of OrbitDB
- Higher-level abstraction, potentially less flexible for custom use cases
- Larger codebase and dependencies, which may impact performance
- Less active development compared to Hypercore
Code Comparison
OrbitDB:
const OrbitDB = require('orbit-db') // ipfs is an IPFS instance created elsewhere
const orbitdb = await OrbitDB.createInstance(ipfs)
const db = await orbitdb.keyvalue('my-database')
await db.put('key', 'value')
const value = await db.get('key')
Hypercore:
const Hypercore = require('hypercore')
const feed = new Hypercore('./my-dataset')
await feed.append('some data')
const data = await feed.get(0)
Summary
OrbitDB offers a more feature-rich, database-oriented solution with multiple data models and built-in security features. It's well-documented but may be less flexible for custom implementations. Hypercore provides a lower-level, append-only log structure that's more lightweight and actively developed, potentially offering better performance for specific use cases. The choice between them depends on the specific requirements of your project, such as data model complexity, performance needs, and desired level of abstraction.
libp2p implementation in Go
Pros of go-libp2p
- More extensive and mature ecosystem with broader protocol support
- Better suited for large-scale distributed systems and enterprise applications
- Stronger focus on interoperability and modularity
Cons of go-libp2p
- Higher complexity and steeper learning curve
- Potentially overkill for simpler peer-to-peer applications
- Larger codebase and resource footprint
Code Comparison
go-libp2p:
host, err := libp2p.New(
libp2p.ListenAddrStrings("/ip4/0.0.0.0/tcp/0"),
libp2p.Identity(priv),
)
Hypercore:
const feed = new Hypercore('./my-first-dataset', {
valueEncoding: 'json',
sparse: true
})
Summary
go-libp2p is a more comprehensive and flexible networking stack, ideal for complex distributed systems. It offers a wide range of protocols and is designed for scalability and interoperability. However, this comes at the cost of increased complexity and resource usage.
Hypercore, on the other hand, is more focused on providing a simple, efficient append-only log structure for peer-to-peer applications. It's easier to get started with and has a smaller footprint, but may lack some of the advanced features and protocol support found in go-libp2p.
The choice between the two depends on the specific requirements of your project, with go-libp2p being better suited for large-scale, diverse network applications, while Hypercore excels in scenarios requiring efficient, append-only data structures in peer-to-peer contexts.
README
Hypercore
See the full API docs at docs.pears.com
Hypercore is a secure, distributed append-only log.
Built for sharing large datasets and streams of real time data
Features
- Sparse replication. Only download the data you are interested in.
- Realtime. Get the latest updates to the log fast and securely.
- Performant. Uses a simple flat file structure to maximize I/O performance.
- Secure. Uses signed merkle trees to verify log integrity in real time.
- Modular. Hypercore aims to do one thing and one thing well - distributing a stream of data.
Note that the latest release is Hypercore 10, which adds support for truncate and many other things. Version 10 is not compatible with earlier versions (9 and earlier), but is considered LTS, meaning the storage format and wire protocol are forward compatible with future versions.
Install
npm install hypercore
API
const core = new Hypercore(storage, [key], [options])
Make a new Hypercore instance.
storage should be set to a directory where you want to store the data and core metadata.
const core = new Hypercore('./directory') // store data in ./directory
Alternatively you can pass a function instead that is called with every filename Hypercore needs to function and return your own abstract-random-access instance that is used to store the data.
const RAM = require('random-access-memory')
const core = new Hypercore((filename) => {
// filename will be one of: data, bitfield, tree, signatures, key, secret_key
// the data file will contain all your data concatenated.
// just store all files in ram by returning a random-access-memory instance
return new RAM()
})
By default Hypercore uses random-access-file. This is also useful if you want to store specific files in other directories.
Hypercore will produce the following files:
- oplog - The internal truncating journal/oplog that tracks mutations, the public key and other metadata.
- tree - The Merkle Tree file.
- bitfield - The bitfield of which data blocks this core has.
- data - The raw data of each block.
Note that tree, data, and bitfield are normally heavily sparse files.
key can be set to a Hypercore public key. If you do not set this the public key will be loaded from storage. If no key exists a new key pair will be generated.
options include:
{
createIfMissing: true, // create a new Hypercore key pair if none was present in storage
overwrite: false, // overwrite any old Hypercore that might already exist
sparse: true, // enable sparse mode, counting unavailable blocks towards core.length and core.byteLength
valueEncoding: 'json' | 'utf-8' | 'binary', // defaults to binary
encodeBatch: batch => { ... }, // optionally apply an encoding to complete batches
keyPair: kp, // optionally pass the public key and secret key as a key pair
encryptionKey: k, // optionally pass an encryption key to enable block encryption
onwait: () => {}, // hook that is called if gets are waiting for download
timeout: 0, // wait at max some milliseconds (0 means no timeout)
writable: true, // set to false to disable appends and truncates
inflightRange: null // Advanced option. Set to [minInflight, maxInflight] to change the min and max inflight blocks per peer when downloading.
}
You can also set valueEncoding to any abstract-encoding or compact-encoding instance.
valueEncodings will be applied to individual blocks, even if you append batches. If you want to control encoding at the batch-level, you can use the encodeBatch option, which is a function that takes a batch and returns a binary-encoded batch. If you provide a custom valueEncoding, it will not be applied prior to encodeBatch.
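For illustration, a minimal encodeBatch sketch (the JSON round-trip here is an assumption for demonstration, not a library default):
const Hypercore = require('hypercore')
const core = new Hypercore('./encoded-core', {
  valueEncoding: 'json', // used to decode blocks on get
  encodeBatch (batch) {
    // batch is the array of blocks passed to append;
    // return one binary-encoded buffer per block
    return batch.map(block => Buffer.from(JSON.stringify(block)))
  }
})
await core.append([{ msg: 'a' }, { msg: 'b' }]) // encodeBatch runs once for the whole batch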
const { length, byteLength } = await core.append(block)
Append a block of data (or an array of blocks) to the core. Returns the new length and byte length of the core.
// simply call append with a new block of data
await core.append(Buffer.from('I am a block of data'))
// pass an array to append multiple blocks as a batch
await core.append([Buffer.from('batch block 1'), Buffer.from('batch block 2')])
const block = await core.get(index, [options])
Get a block of data. If the data is not available locally this method will prioritize and wait for the data to be downloaded.
// get block #42
const block = await core.get(42)
// get block #43, but only wait 5s
const blockIfFast = await core.get(43, { timeout: 5000 })
// get block #44, but only if we have it locally
const blockLocal = await core.get(44, { wait: false })
options include:
{
wait: true, // wait for block to be downloaded
onwait: () => {}, // hook that is called if the get is waiting for download
timeout: 0, // wait at max some milliseconds (0 means no timeout)
valueEncoding: 'json' | 'utf-8' | 'binary', // defaults to the core's valueEncoding
decrypt: true // automatically decrypts the block if encrypted
}
const has = await core.has(start, [end])
Check if the core has all blocks between start and end.
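For example (assuming end is non-inclusive, like the other range APIs here):
const hasFour = await core.has(4) // do we have block 4 locally?
const hasFirstTen = await core.has(0, 10) // do we have all of blocks 0-9?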
const updated = await core.update([options])
Waits for initial proof of the new core length until all findingPeers calls have finished.
const updated = await core.update()
console.log('core was updated?', updated, 'length is', core.length)
options include:
{
wait: false
}
Use core.findingPeers() or { wait: true } to make await core.update() blocking.
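For example:
// resolves once a peer has been checked (or findingPeers finishes)
const updated = await core.update({ wait: true })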
const [index, relativeOffset] = await core.seek(byteOffset, [options])
Seek to a byte offset.
Returns [index, relativeOffset], where index is the data block the byteOffset is contained in and relativeOffset is the relative byte offset in the data block.
await core.append([Buffer.from('abc'), Buffer.from('d'), Buffer.from('efg')])
const first = await core.seek(1) // returns [0, 1]
const second = await core.seek(3) // returns [1, 0]
const third = await core.seek(5) // returns [2, 1]
options include:
{
  wait: true, // wait for data to be downloaded
  timeout: 0 // wait at max some milliseconds (0 means no timeout)
}
const stream = core.createReadStream([options])
Make a read stream to read a range of data out at once.
// read the full core
const fullStream = core.createReadStream()
// read from block 10-15
const partialStream = core.createReadStream({ start: 10, end: 15 })
// pipe the stream somewhere using the .pipe method on Node.js or consume it as
// an async iterator
for await (const data of fullStream) {
console.log('data:', data)
}
options include:
{
start: 0,
end: core.length,
live: false,
snapshot: true // auto set end to core.length on open or update it on every read
}
const bs = core.createByteStream([options])
Make a byte stream to read a range of bytes.
// Read the full core
const fullStream = core.createByteStream()
// Read from byte 3, and from there read 50 bytes
const partialStream = core.createByteStream({ byteOffset: 3, byteLength: 50 })
// Consume it as an async iterator
for await (const data of fullStream) {
console.log('data:', data)
}
// Or pipe it somewhere like any stream:
partialStream.pipe(process.stdout)
options include:
{
byteOffset: 0,
byteLength: core.byteLength - options.byteOffset,
prefetch: 32
}
const cleared = await core.clear(start, [end], [options])
Clear stored blocks between start and end, reclaiming storage when possible.
await core.clear(4) // clear block 4 from your local cache
await core.clear(0, 10) // clear block 0-10 from your local cache
The core will also gossip to peers it is connected to that it no longer has these blocks.
options include:
{
diff: false // Returned `cleared` bytes object is null unless you enable this
}
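A hedged sketch of the diff option (the exact shape of the returned object is not documented here, so treat this as illustrative):
const cleared = await core.clear(0, 10, { diff: true })
console.log(cleared) // an object describing the reclaimed bytes; null without diff: true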
await core.truncate(newLength, [forkId])
Truncate the core to a smaller length.
By default this will increment the fork id of the core by 1, but you can set the fork id you prefer with the option. Note that the fork id should be monotonically increasing.
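A small sketch of the effect:
await core.append([Buffer.from('a'), Buffer.from('b'), Buffer.from('c')])
await core.truncate(2) // core.length is now 2
console.log(core.fork) // the fork id was incremented by 1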
await core.purge()
Purge the hypercore from your storage, completely removing all data.
const hash = await core.treeHash([length])
Get the Merkle Tree hash of the core at a given length, defaulting to the current length of the core.
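For example, to compare the current state against an earlier one:
const current = await core.treeHash()
const atTen = await core.treeHash(10) // hash of the tree at length 10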
const range = core.download([range])
Download a range of data.
You can await when the range has been fully downloaded by doing:
await range.done()
A range can have the following properties:
{
start: startIndex,
end: nonInclusiveEndIndex,
blocks: [index1, index2, ...],
linear: false // download range linearly and not randomly
}
To download the full core continuously (often referred to as non-sparse mode) do:
// Note that this will never be considered downloaded as the range
// will keep waiting for new blocks to be appended.
core.download({ start: 0, end: -1 })
To download a discrete range of blocks pass a list of indices.
core.download({ blocks: [4, 9, 7] })
To cancel downloading a range simply destroy the range instance.
// will stop downloading now
range.destroy()
const session = await core.session([options])
Creates a new Hypercore instance that shares the same underlying core.
You must close any session you make.
Options are inherited from the parent instance, unless they are re-set.
options are the same as in the constructor.
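A minimal sketch following the signature above (the valueEncoding override is just an example of re-setting an inherited option):
const session = await core.session({ valueEncoding: 'utf-8' })
const block = await session.get(0) // decoded as utf-8 in this session
await session.close() // sessions must always be closed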
const info = await core.info([options])
Get information about this core, such as its total size in bytes.
The object will look like this:
Info {
key: Buffer(...),
discoveryKey: Buffer(...),
length: 18,
contiguousLength: 16,
byteLength: 742,
fork: 0,
padding: 8,
storage: {
oplog: 8192,
tree: 4096,
blocks: 4096,
bitfield: 4096
}
}
options include:
{
storage: false // get storage estimates in bytes, disabled by default
}
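For example, to include the storage estimates:
const info = await core.info({ storage: true })
console.log(info.byteLength, info.storage) // total byte length plus per-file estimates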
await core.close()
Fully close this core.
core.on('close')
Emitted when the core has been fully closed.
await core.ready()
Wait for the core to fully open.
After this has been called, core.length and other properties will have been set.
In general you do NOT need to wait for ready, unless checking a synchronous property, as all internals await this themselves.
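For example, before reading synchronous properties:
await core.ready()
console.log('id:', core.id, 'length:', core.length)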
core.on('ready')
Emitted after the core has initially opened all its internal state.
core.writable
Can we append to this core?
Populated after ready has been emitted. Will be false before the event.
core.readable
Can we read from this core? After closing the core this will be false.
Populated after ready has been emitted. Will be false before the event.
core.id
String containing the id (z-base-32 of the public key) identifying this core.
Populated after ready has been emitted. Will be null before the event.
core.key
Buffer containing the public key identifying this core.
Populated after ready has been emitted. Will be null before the event.
core.keyPair
Object containing buffers of the core's public and secret key.
Populated after ready has been emitted. Will be null before the event.
core.discoveryKey
Buffer containing a key derived from the core's public key.
In contrast to core.key this key does not allow you to verify the data but can be used to announce or look for peers that are sharing the same core, without leaking the core key.
Populated after ready has been emitted. Will be null before the event.
core.encryptionKey
Buffer containing the optional block encryption key of this core. Will be null unless block encryption is enabled.
core.length
How many blocks of data are available on this core? If sparse: false, this will equal core.contiguousLength.
Populated after ready has been emitted. Will be 0 before the event.
core.contiguousLength
How many blocks are contiguously available starting from the first block of this core?
Populated after ready has been emitted. Will be 0 before the event.
core.fork
What is the current fork id of this core?
Populated after ready has been emitted. Will be 0 before the event.
core.padding
How much padding is applied to each block of this core? Will be 0 unless block encryption is enabled.
const stream = core.replicate(isInitiatorOrReplicationStream)
Create a replication stream. You should pipe this to another Hypercore instance.
The isInitiator argument is a boolean indicating whether you are the initiator of the connection (i.e. the client) or the passive side (i.e. the server).
If you are using a P2P swarm like Hyperswarm you can know this by checking if the swarm connection is a client socket or a server socket. In Hyperswarm you can check that using the client property on the peer details object.
If you want to multiplex the replication over an existing Hypercore replication stream you can pass another stream instance instead of the isInitiator boolean.
// assuming we have two cores, localCore + remoteCore, sharing the same key
// on a server
const net = require('net')
const server = net.createServer(function (socket) {
socket.pipe(remoteCore.replicate(false)).pipe(socket)
})
// on a client
const socket = net.connect(...)
socket.pipe(localCore.replicate(true)).pipe(socket)
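And to multiplex several cores over one connection, a hedged sketch based on the note above (coreA and coreB are assumed to share the same socket):
const stream = coreA.replicate(true)
coreB.replicate(stream) // pass the existing stream instead of a boolean
socket.pipe(stream).pipe(socket)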
const done = core.findingPeers()
Create a hook that tells Hypercore you are finding peers for this core in the background. Call done when your current discovery iteration is done.
If you're using Hyperswarm, you'd normally call this after a swarm.flush() finishes.
This allows core.update to wait for either the findingPeers hook to finish or one peer to appear before deciding whether it should wait for a merkle tree update before returning.
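A sketch of the Hyperswarm pattern described above (swarm is assumed to be a connected Hyperswarm instance):
const done = core.findingPeers()
swarm.join(core.discoveryKey)
swarm.flush().then(done, done) // signal done whether or not flush succeeds
await core.update() // now waits for the discovery above to settle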
core.on('append')
Emitted when the core has been appended to (i.e. has a new length / byteLength), either locally or remotely.
core.on('truncate', ancestors, forkId)
Emitted when the core has been truncated, either locally or remotely.
core.on('peer-add')
Emitted when a new connection has been established with a peer.
core.on('peer-remove')
Emitted when a peer's connection has been closed.
core.on('upload', index, byteLength, peer)
Emitted when a block is uploaded to a peer.
core.on('download', index, byteLength, peer)
Emitted when a block is downloaded from a peer.