
jimmywarting/StreamSaver.js

StreamSaver writes streams to the filesystem directly and asynchronously


Top Related Projects

FileSaver.js: An HTML5 saveAs() FileSaver implementation

download: file downloading using client-side JavaScript


PapaParse: Fast and powerful CSV (delimited text) parser that gracefully handles large files and malformed input

Quick Overview

StreamSaver.js is a JavaScript library that enables efficient downloading of large files by utilizing the Streams API. It allows for saving data directly to the file system without holding the entire file in memory, making it ideal for handling large files or streams of data in web applications.

Pros

  • Efficient memory usage for large file downloads
  • Cross-browser compatibility, including support for older browsers
  • Ability to handle streams of data, not just static files
  • No dependencies, lightweight implementation

Cons

  • Requires a service worker for full functionality
  • May have limitations in some browser security contexts
  • Learning curve for developers unfamiliar with Streams API
  • Limited control over the download process compared to traditional methods

Code Examples

  1. Basic usage to save a fetch response:
import { createWriteStream } from 'streamsaver'

fetch('https://example.com/large-file.zip')
  .then(res => {
    const fileStream = createWriteStream('large-file.zip')
    return res.body.pipeTo(fileStream)
  })
  2. Saving a stream of data:
const encoder = new TextEncoder()
let count = 0

const stream = new ReadableStream({
  // pull() is called on demand, so the source respects backpressure
  // instead of buffering a million chunks up front
  pull(controller) {
    if (count++ < 1000000) {
      // StreamSaver's WritableStream only accepts Uint8Array chunks
      controller.enqueue(encoder.encode('Hello World\n'))
    } else {
      controller.close()
    }
  }
})

const fileStream = createWriteStream('hello.txt')
stream.pipeTo(fileStream)
  3. Using with Web Workers for background processing:
// In main script
const worker = new Worker('worker.js')
// createWriteStream() returns a WritableStream, which is transferable
// in browsers that support transferable streams
const fileStream = createWriteStream('generated-file.bin')
worker.postMessage({ writable: fileStream }, [fileStream])

// In worker.js
self.onmessage = ({ data: { writable } }) => {
  const writer = writable.getWriter()
  // Write data to the stream (Uint8Array chunks only)
  writer.write(new Uint8Array([1, 2, 3, 4]))
  writer.close()
}

Getting Started

  1. Install StreamSaver.js:

    npm install streamsaver
    
  2. Import and use in your project:

    import { createWriteStream } from 'streamsaver'
    
    const fileStream = createWriteStream('example.txt')
    const writer = fileStream.getWriter()
    
    writer.write(new TextEncoder().encode('Hello, StreamSaver!'))
    writer.close()
    
  3. Ensure you have a compatible service worker set up for full functionality. Refer to the project documentation for detailed setup instructions; a sketch of pointing StreamSaver at a self-hosted MITM page follows this list.
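
If you prefer not to rely on the default MITM page that StreamSaver hosts on GitHub Pages, you can point the library at your own copy via the `mitm` property (see the Configuration section of the README below). The following is a minimal sketch; the path `/streamsaver/mitm.html` is an assumed location where you serve the package's mitm.html and sw.js yourself.

import streamSaver from 'streamsaver'

// Assumption: you copied mitm.html and sw.js from the streamsaver package
// and serve them from your own https origin at this path.
streamSaver.mitm = '/streamsaver/mitm.html'

const fileStream = streamSaver.createWriteStream('example.txt')
const writer = fileStream.getWriter()
writer.write(new TextEncoder().encode('Hello, StreamSaver!'))
writer.close()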

Competitor Comparisons

FileSaver.js: An HTML5 saveAs() FileSaver implementation

Pros of FileSaver.js

  • Simpler implementation for basic file saving needs
  • Wider browser compatibility, especially for older browsers
  • Smaller file size, making it lighter to include in projects

Cons of FileSaver.js

  • Limited to smaller file sizes due to memory constraints
  • Lacks support for streaming large files or handling progressive downloads
  • Cannot save files larger than available RAM on the client-side

Code Comparison

FileSaver.js:

import { saveAs } from 'file-saver';

const blob = new Blob(["Hello, world!"], {type: "text/plain;charset=utf-8"});
saveAs(blob, "hello world.txt");

StreamSaver.js:

import streamSaver from 'streamsaver';

const fileStream = streamSaver.createWriteStream('hello world.txt');
const writer = fileStream.getWriter();
writer.write(new TextEncoder().encode('Hello, world!'));
writer.close();

StreamSaver.js is designed for handling larger files and streaming data, making it more suitable for scenarios involving big data or progressive downloads. It uses the Streams API, allowing for more efficient memory usage. FileSaver.js, on the other hand, is simpler to use for basic file saving needs and has broader browser support, but is limited in handling large files due to memory constraints.

download: file downloading using client-side JavaScript

Pros of download

  • Simpler implementation with fewer dependencies
  • Works well for smaller files and basic download scenarios
  • Easier to integrate into existing projects

Cons of download

  • Limited support for large file downloads
  • Lacks advanced streaming capabilities
  • May encounter memory limitations with very large files

Code Comparison

StreamSaver.js:

const fileStream = streamSaver.createWriteStream('filename.txt', {
  size: 22, // (optional) Will show progress
  writableStrategy: undefined, // (optional)
  readableStrategy: undefined  // (optional)
})

new Response('StreamSaver is awesome').body
  .pipeTo(fileStream)
  .then(() => console.log('done writing'))

download:

download('hello world', 'dltest.txt', 'text/plain')

StreamSaver.js offers more control over the download process, allowing for streaming of large files and progress tracking. It's better suited for handling large files or scenarios where memory usage is a concern.

download provides a simpler API for basic file downloads, making it easier to use for small to medium-sized files. However, it may struggle with very large files due to memory constraints.

Both libraries serve different use cases, with StreamSaver.js being more powerful for advanced scenarios and download offering simplicity for basic download needs.


PapaParse: Fast and powerful CSV (delimited text) parser that gracefully handles large files and malformed input

Pros of PapaParse

  • Specialized in parsing CSV and other delimited text files
  • Supports both browser and Node.js environments
  • Extensive configuration options for parsing and data handling

Cons of PapaParse

  • Limited to parsing text-based data formats
  • Not designed for handling large file downloads or streaming

Code Comparison

PapaParse:

Papa.parse(file, {
  complete: function(results) {
    console.log(results);
  }
});

StreamSaver:

const fileStream = streamSaver.createWriteStream('filename.txt');
const writer = fileStream.getWriter();
writer.write(data); // data must be a Uint8Array
writer.close();

Summary

PapaParse is a powerful library for parsing CSV and delimited text files, offering extensive configuration options and cross-platform support. It excels in data processing but is limited to text-based formats. StreamSaver, on the other hand, focuses on saving large files and streams directly to the user's disk, which PapaParse doesn't handle. While PapaParse is ideal for working with structured text data, StreamSaver is better suited for managing large file downloads and streams in web applications.
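
The two libraries can also complement each other: PapaParse handles the CSV (un)parsing while StreamSaver writes the result to disk. Below is a minimal sketch assuming browser usage with both packages installed; the `rows` data is made up for illustration.

import Papa from 'papaparse'
import streamSaver from 'streamsaver'

// Illustrative data; in practice this might come from an API or IndexedDB.
const rows = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' }
]

const fileStream = streamSaver.createWriteStream('export.csv')
const writer = fileStream.getWriter()

// Papa.unparse turns objects/arrays into a CSV string,
// which must be encoded to a Uint8Array before writing.
writer.write(new TextEncoder().encode(Papa.unparse(rows) + '\n'))
writer.close()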


README

StreamSaver.js (legacy-ish)

... Don't worry, it's not deprecated. It's still maintained and I still recommend using it when needed. Just want to let you know that there is a new native way to save files to the HD: https://github.com/whatwg/fs, which is more or less going to make FileSaver, StreamSaver and similar packages a bit obsolete in the future. It's still in an experimental stage and not implemented by all browsers. That is why I also built native-file-system-adapter, so you can have it in all browsers, Deno, and NodeJS with different storage backends.


StreamSaver.js is the solution to saving streams in the web browser. It is perfect for web apps that need to save large amounts of data on devices with limited RAM.

First I want to thank Eli Grey for his fantastic work on FileSaver.js, which makes saving files & blobs so easy! But there is one obstacle: the amount of RAM it can hold and the maximum blob size limitation.

StreamSaver.js takes a different approach. Instead of saving data in client-side storage or in memory, you can now create a writable stream directly to the file system (I'm not talking about Chrome's sandboxed file system or any other web storage). This is accomplished by emulating how a server would instruct the browser to save a file, using some response headers plus a service worker.

If the file you are trying to save comes from the cloud/server, let the server do the job instead of emulating with StreamSaver what the browser does to save files to disk. Add those extra Response headers and don't use AJAX to get it. FileSaver has a good wiki about using headers. If you can't change the headers, then you may use StreamSaver as a last resort. FileSaver, StreamSaver and others alike are mostly for client-generated content inside the browser.
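
For that server-driven case, the key is simply sending the right response headers so the browser streams the download to disk on its own. A minimal sketch, assuming a Node.js/Express server; the route and file path are illustrative:

// Hypothetical Express route; the point is the headers, not the framework.
const express = require('express')
const fs = require('fs')
const app = express()

app.get('/download', (req, res) => {
  res.setHeader('Content-Type', 'application/octet-stream')
  res.setHeader('Content-Disposition', 'attachment; filename="large-file.zip"')
  // Let the browser handle the download natively; no AJAX, no StreamSaver needed.
  fs.createReadStream('./large-file.zip').pipe(res)
})

app.listen(3000)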

Getting started

StreamSaver in its simplest form

<script src="https://cdn.jsdelivr.net/npm/web-streams-polyfill@2.0.2/dist/ponyfill.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/streamsaver@2.0.3/StreamSaver.min.js"></script>
<script>
  // Pick ONE of the following ways to get a reference, depending on your setup:
  // import streamSaver from 'streamsaver'       // ES module / bundler
  // const streamSaver = require('streamsaver')  // CommonJS
  const streamSaver = window.streamSaver          // global from the CDN script above
</script>
<script>
  const uInt8 = new TextEncoder().encode('StreamSaver is awesome')

  // streamSaver.createWriteStream() returns a writable byte stream
  // The WritableStream only accepts Uint8Array chunks
  // (no other typed arrays, arrayBuffers or strings are allowed)
  const fileStream = streamSaver.createWriteStream('filename.txt', {
    size: uInt8.byteLength, // (optional filesize) Will show progress
    writableStrategy: undefined, // (optional)
    readableStrategy: undefined  // (optional)
  })

  if (manual) { // `manual` is a flag you define: write chunks yourself instead of piping
    const writer = fileStream.getWriter()
    writer.write(uInt8)
    writer.close()
  } else {
    // using Response can be a great tool to convert
    // mostly anything (blob, string, buffers) into a byte stream
    // that can be piped to StreamSaver
    //
    // You could also use a transform stream that would sit
    // between and convert everything to Uint8Arrays
    new Response('StreamSaver is awesome').body
      .pipeTo(fileStream)
      .then(success, error) // success/error: callbacks you define
  }
</script>

Some browsers have ReadableStream but not WritableStream. web-streams-polyfill can fill this gap. It's better to load the ponyfill than the polyfill that overrides the existing implementation, because StreamSaver works better when a native ReadableStream is transferable to the service worker. Hopefully MattiasBuelens will fix the missing implementations instead of overriding the existing ones. If you think you can help out, here is the issue.
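
As a sketch of filling only the gap (assuming the ponyfill script from the CDN exposes a WebStreamsPolyfill global; the exact global name can differ between web-streams-polyfill versions):

// Keep the native ReadableStream; only supply WritableStream from the
// ponyfill when the browser lacks it.
if (!window.WritableStream) {
  // Assumption: the ponyfill build exposes a `WebStreamsPolyfill` global.
  streamSaver.WritableStream = WebStreamsPolyfill.WritableStream
}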

Best practice

Use https if you can. That way you don't have to open the man in the middle in a popup to install the service worker from another secure context. Popups are often blocked, but if you can't use https it's best to initiate createWriteStream on user interaction, even if you don't have any data ready yet - this is how you get around the popup blockers. (In a secure context this doesn't matter.) Another benefit of using https is that the mitm-iframe can ping the service worker to prevent it from going idle (a worker goes idle after 30 seconds in Firefox, 5 minutes in Blink). But this also won't matter if the browser supports transferable streams through postMessage, since the service worker doesn't have to handle any logic (the stream that you transfer to the service worker is the stream we respond with).
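
For example, creating the write stream inside a click handler is a minimal sketch of that pattern; the `#save` selector and `fetchDataSomehow()` are placeholders for your own UI and data source:

document.querySelector('#save').addEventListener('click', () => {
  // Create the stream right away, while still inside the user gesture,
  // so a popup (insecure context) is less likely to be blocked.
  const fileStream = streamSaver.createWriteStream('report.txt')

  // The data can arrive later; the stream is already set up.
  fetchDataSomehow().then(readableStream => readableStream.pipeTo(fileStream))
})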

Handle the unload event for when the user leaves the page. The download gets broken when you leave the page. Because it looks like a regular native download, some might think that it's okay to leave the page beforehand, as if it were downloading in the background directly from a server, but it isn't.

// abort so it does not look stuck
window.onunload = () => {
  writableStream.abort()
  // also possible to call abort on the writer you got from `getWriter()`
  writer.abort()
}

window.onbeforeunload = evt => {
  if (!done) {
    evt.returnValue = `Are you sure you want to leave?`;
  }
}

Note that in an insecure context StreamSaver will navigate to the download URL instead of using a hidden iframe to initiate the download. This will trigger the onbeforeunload event when the download starts, but it will not call the onunload event... In a secure context you can add this handler immediately; otherwise it has to be added sometime later.
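
One way to express that timing difference is to key the registration off window.isSecureContext; a sketch, reusing the `writableStream` and `done` variables from the snippet above:

function registerUnloadHandlers () {
  window.onunload = () => writableStream.abort()
  window.onbeforeunload = evt => {
    if (!done) evt.returnValue = 'Are you sure you want to leave?'
  }
}

if (window.isSecureContext) {
  // Secure context: safe to register immediately.
  registerUnloadHandlers()
} else {
  // Insecure context: call registerUnloadHandlers() only after the
  // download has started, since starting it triggers onbeforeunload itself.
}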

Configuration

There are a few settings you can apply to StreamSaver to configure what it should use

// StreamSaver can detect and use the ponyfill that is loaded from the CDN.
// Override these to supply your own stream implementations:
streamSaver.WritableStream = streamSaver.WritableStream   // default: window.WritableStream
streamSaver.TransformStream = streamSaver.TransformStream // default: window.TransformStream
// if you decide to host mitm + sw yourself
streamSaver.mitm = 'https://example.com/custom_mitm.html'

Examples

There are a few examples in the examples directory

In the wild

How does it work?

There is no magical saveAs() function that saves a stream, file or blob (at least not until the native file system API becomes available). The way we mostly save Blobs/Files today is with the help of Object URLs and the a[download] attribute. FileSaver.js takes advantage of this and creates a convenient saveAs(blob, filename). Fantastic! But you can't create an object URL from a stream and attach it to a link...

link = document.createElement('a')
link.href = URL.createObjectURL(stream) // DOES NOT WORK
link.download = 'filename'
link.click() // Save

So the one and only other solution is to do what the server does: send a stream with a Content-Disposition header to tell the browser to save the file. But we don't have a server, or the content isn't on a server! So the solution is to create a service worker that can intercept requests, use respondWith() and act as a server.
But service workers are only allowed in secure contexts and they require some effort to set up. Most of the time you are working in the main thread, and the service worker is only alive for < 5 minutes before it goes idle.

  1. So StreamSaver creates its own man in the middle that installs the service worker in a secure context hosted on GitHub static pages, either from an iframe (in a secure context) or a new popup if your page is insecure.
  2. Then it transfers the stream (or DataChannel) over to the service worker using postMessage.
  3. The worker then creates a download link that we open.

if a "transferable" readable stream was not passed to the service worker then the mitm will also try to keep the service worker alive by pinging it every x second to prevent it from going idle.

To test this locally, spin up a local server
(we don't use any pre compiler or such)

# A simple php or python server is enough
php -S localhost:3001
python -m SimpleHTTPServer 3001   # or with Python 3: python3 -m http.server 3001
# then open localhost:3001/example.html
