magenta-js

Magenta.js: Music and Art Generation with Machine Learning in the browser

Top Related Projects

  • Jukebox: Code for the paper "Jukebox: A Generative Model for Music"
  • Basic Pitch: A lightweight yet powerful audio-to-MIDI converter with pitch bend detection
  • Spleeter: Deezer source separation library including pretrained models
  • Audiocraft: a library for audio processing and generation with deep learning, featuring the state-of-the-art EnCodec audio compressor/tokenizer and MusicGen, a simple and controllable music generation LM with textual and melodic conditioning

Quick Overview

Magenta.js is a JavaScript library for music and art generation using machine learning models. It provides a set of pre-trained models and utilities for creating interactive musical and artistic experiences in the browser or Node.js environments.

Pros

  • Easy integration with web applications for creative AI-powered projects
  • Supports both browser and Node.js environments
  • Includes a variety of pre-trained models for music generation, melody continuation, and more
  • Active development and community support

Cons

  • Limited documentation for some advanced features
  • Performance can be slow for complex models in browser environments
  • Requires understanding of music theory and machine learning concepts for optimal use
  • Some models may produce inconsistent or unexpected results

Code Examples

  1. Generating a melody using MusicRNN:

```javascript
const model = new mm.MusicRNN('https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');
await model.initialize();

const seed = {
  notes: [
    {pitch: 60, startTime: 0.0, endTime: 0.5},
    {pitch: 62, startTime: 0.5, endTime: 1.0},
  ],
  totalTime: 1.0
};

// MusicRNN operates on quantized sequences, so quantize the seed first.
const quantizedSeed = mm.sequences.quantizeNoteSequence(seed, 4);
const result = await model.continueSequence(quantizedSeed, 20, 1.5);
```
  2. Playing generated MIDI using SoundFont:

```javascript
const player = new mm.SoundFontPlayer('https://storage.googleapis.com/magentadata/js/soundfonts/sgm_plus');

player.callbackObject = {
  run: (note) => console.log(note),
  stop: () => console.log('done')
};

// `result` is the NoteSequence generated in the previous example.
player.start(result);
```
  3. Using MusicVAE to interpolate between two melodies (`interpolate` takes quantized NoteSequence objects, not raw pitch arrays):

```javascript
const model = new mm.MusicVAE('https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_4bar_small_q2');
await model.initialize();

// Build a quantized, monophonic NoteSequence from a pitch list.
const toSequence = (pitches) => ({
  notes: pitches.map((pitch, i) => ({pitch, quantizedStartStep: i * 2, quantizedEndStep: (i + 1) * 2})),
  quantizationInfo: {stepsPerQuarter: 4},
  totalQuantizedSteps: pitches.length * 2,
});

const melody1 = toSequence([60, 62, 64, 65, 67, 69, 71, 72]);
const melody2 = toSequence([72, 71, 69, 67, 65, 64, 62, 60]);

const interpolations = await model.interpolate([melody1, melody2], 4);
```
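Under the hood, MusicVAE interpolates by encoding each melody to a latent vector and decoding points along the straight line between the two vectors. The latent-space step can be sketched in plain JavaScript (the vectors and helper names below are illustrative, not part of the library's API):

```javascript
// Linearly interpolate between two latent vectors z1 and z2 at position t in [0, 1].
function lerp(z1, z2, t) {
  return z1.map((v, i) => v + t * (z2[i] - v));
}

// Produce n evenly spaced latent points from z1 to z2 (inclusive),
// mirroring what interpolate([a, b], n) does before decoding.
function interpolateLatents(z1, z2, n) {
  const out = [];
  for (let i = 0; i < n; i++) {
    out.push(lerp(z1, z2, n === 1 ? 0 : i / (n - 1)));
  }
  return out;
}

const z1 = [0, 0];
const z2 = [1, 2];
const points = interpolateLatents(z1, z2, 4);
// points[1] lies one third of the way from z1 to z2
```

The real model then decodes each intermediate latent vector back into a NoteSequence, which is why the interpolated melodies morph smoothly rather than simply crossfading notes.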

Getting Started

To use Magenta.js in your project, first install it via npm:

```shell
npm install @magenta/music
```

Then, import and use the desired modules in your JavaScript code:

```javascript
import * as mm from '@magenta/music';

async function generateMelody() {
  const model = new mm.MusicRNN('https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn');
  await model.initialize();

  const seed = {
    notes: [{pitch: 60, startTime: 0.0, endTime: 0.5}],
    totalTime: 0.5
  };

  // MusicRNN requires a quantized input sequence.
  const quantizedSeed = mm.sequences.quantizeNoteSequence(seed, 4);
  const result = await model.continueSequence(quantizedSeed, 20, 1.5);
  console.log(result);
}

generateMelody();
```

This example initializes a MusicRNN model and generates a melody based on a single-note seed sequence.
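The seed follows Magenta.js's NoteSequence shape: a list of notes with pitches and start/end times, plus a `totalTime`. A small helper (hypothetical, not part of the library) makes building longer seeds from a pitch list less error-prone:

```javascript
// Build a simple monophonic NoteSequence-shaped object from MIDI pitches.
// Each note lasts noteDuration seconds and notes play back to back.
function buildSeed(pitches, noteDuration = 0.5) {
  const notes = pitches.map((pitch, i) => ({
    pitch,
    startTime: i * noteDuration,
    endTime: (i + 1) * noteDuration,
  }));
  return { notes, totalTime: pitches.length * noteDuration };
}

const seed = buildSeed([60, 62, 64]); // C4, D4, E4
// seed.totalTime is 1.5 and seed.notes[1] starts at 0.5
```

A seed built this way can be quantized and passed to `continueSequence` exactly as in the example above.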

Competitor Comparisons

Jukebox

Code for the paper "Jukebox: A Generative Model for Music"

Pros of Jukebox

  • More advanced and capable of generating high-quality, multi-instrumental music
  • Produces complete songs with vocals and lyrics
  • Utilizes state-of-the-art deep learning techniques for music generation

Cons of Jukebox

  • Requires significant computational resources and training time
  • Less accessible for beginners or those with limited hardware
  • Fewer pre-trained models and examples available for immediate use

Code Comparison

Magenta.js (MusicVAE model):

```javascript
const model = new mm.MusicVAE('https://storage.googleapis.com/magentadata/js/checkpoints/music_vae/mel_4bar_small_q2');
model.initialize().then(async () => {
  // sample() is async and returns an array of NoteSequences.
  const samples = await model.sample(1);
  const player = new mm.Player();
  player.start(samples[0]);
});
```

Jukebox (Python):

```python
# Illustrative sketch only; real Jukebox sampling goes through its
# sampling scripts and requires substantial GPU resources.
import jukebox

vqvae, prior = jukebox.load_model('5b')
sample_rate, duration = 44100, 20
artist = 'Alan Jackson'
genre = 'Country'
lyrics = "I'm walking on sunshine"
tokens = jukebox.make_tokens(artist, genre, lyrics, sample_rate, duration)
```

Note: The code snippets demonstrate basic usage and are not directly comparable due to different languages and functionalities.

Basic Pitch

A lightweight yet powerful audio-to-MIDI converter with pitch bend detection

Pros of Basic-pitch

  • Focused specifically on pitch detection and transcription
  • Lightweight and easy to integrate into web applications
  • Provides pre-trained models for immediate use

Cons of Basic-pitch

  • Limited scope compared to Magenta-js's broader music generation capabilities
  • Less extensive documentation and community support
  • Fewer pre-built models and tools for music creation

Code Comparison

Basic-pitch example (`evaluateModel` streams the model's frame, onset, and contour outputs to callbacks; `loadAudio` is a hypothetical helper for loading an AudioBuffer):

```javascript
import { BasicPitch } from '@spotify/basic-pitch';

const basicPitch = new BasicPitch('model/model.json'); // path to the served model files
const audioBuffer = await loadAudio('example.wav');    // hypothetical audio-loading helper

await basicPitch.evaluateModel(audioBuffer,
  (frames, onsets, contours) => { /* collect model output */ },
  (progress) => console.log(progress));
```

Magenta-js example:

import * as mm from '@magenta/music';

const model = new mm.OnsetsAndFrames('https://storage.googleapis.com/magentadata/js/checkpoints/transcription/onsets_frames_uni');
const ns = await model.transcribeFromAudioFile(audioFile);

Both libraries offer straightforward APIs for their respective tasks. Basic-pitch focuses on pitch detection, while Magenta-js provides a wider range of music-related functionalities, including transcription, generation, and more.

Basic-pitch is ideal for projects specifically requiring pitch detection, while Magenta-js is better suited for more comprehensive music AI applications. The choice between them depends on the specific requirements of your project and the depth of music-related features needed.
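Both libraries ultimately map detected frequencies to MIDI pitch numbers. The conversion follows the standard equal-temperament relation (A4 = MIDI 69 = 440 Hz), sketched here for reference:

```javascript
// Convert a MIDI pitch number to its frequency in Hz (equal temperament).
function midiToFreq(pitch) {
  return 440 * Math.pow(2, (pitch - 69) / 12);
}

// Convert a frequency in Hz to the nearest MIDI pitch number.
function freqToMidi(freq) {
  return Math.round(69 + 12 * Math.log2(freq / 440));
}

midiToFreq(69);     // 440 (A4)
freqToMidi(261.63); // 60 (middle C)
```

Pitch-bend detection, which Basic Pitch advertises, amounts to tracking how far the continuous contour frequency drifts from the nearest integer MIDI pitch.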

Spleeter

Deezer source separation library including pretrained models.

Pros of Spleeter

  • Specialized in audio source separation, particularly for isolating vocals and instruments
  • Offers pre-trained models for quick and easy use
  • Supports both CPU and GPU processing for flexibility

Cons of Spleeter

  • Limited to audio source separation tasks
  • Requires more computational resources for processing large audio files
  • Less versatile compared to Magenta.js's broader range of music and audio capabilities

Code Comparison

Spleeter (Python):

from spleeter.separator import Separator

separator = Separator('spleeter:2stems')
separator.separate_to_file('audio_example.mp3', 'output/')

Magenta.js (JavaScript):

```javascript
const mm = require('@magenta/music');

const model = new mm.OnsetsAndFrames('https://storage.googleapis.com/magentadata/js/checkpoints/transcription/onsets_frames_uni');
model.initialize().then(() => {
  // Perform transcription or other tasks
});
```

Key Differences

  • Spleeter focuses on audio source separation, while Magenta.js offers a wider range of music generation and analysis tools
  • Spleeter is primarily Python-based, whereas Magenta.js is JavaScript-oriented, making it more suitable for web applications
  • Magenta.js provides more extensive options for music creation and manipulation, including melody generation and style transfer

Both projects serve different purposes within the audio processing domain, with Spleeter excelling in source separation tasks and Magenta.js offering a broader toolkit for music-related applications.
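Spleeter-style separation works by estimating a magnitude spectrogram for each stem and applying a soft ratio mask to the mixture: each stem receives its proportional share of the mix's energy in every time-frequency bin. The masking step itself is simple, as this toy sketch over plain arrays shows (shapes and names are illustrative):

```javascript
// Given per-stem magnitude estimates and the mixture magnitudes,
// apply a ratio mask: each stem gets its proportional share of the mix.
function ratioMask(stemEstimates, mixture) {
  return stemEstimates.map(est =>
    est.map((m, i) => {
      const total = stemEstimates.reduce((s, e) => s + e[i], 0);
      return total === 0 ? 0 : (m / total) * mixture[i];
    })
  );
}

// Two "stems" (e.g. vocals and accompaniment) over 3 frequency bins.
const vocals = [3, 0, 1];
const accomp = [1, 2, 1];
const mix = [8, 4, 4];
const [sepVocals, sepAccomp] = ratioMask([vocals, accomp], mix);
// sepVocals is [6, 0, 2]; the masked stems always sum back to the mixture
```

The hard part, of course, is producing good per-stem estimates in the first place, which is what Spleeter's trained U-Net models do.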

Audiocraft

Audiocraft is a library for audio processing and generation with deep learning. It features the state-of-the-art EnCodec audio compressor / tokenizer, along with MusicGen, a simple and controllable music generation LM with textual and melodic conditioning.

Pros of Audiocraft

  • More focused on audio generation and manipulation
  • Includes advanced models like MusicGen and AudioGen
  • Offers better support for long-form audio generation

Cons of Audiocraft

  • Less extensive documentation compared to Magenta.js
  • Narrower scope, primarily focused on audio generation
  • Steeper learning curve for beginners

Code Comparison

Audiocraft example (PyTorch):

```python
import torchaudio
from audiocraft.models import MusicGen

model = MusicGen.get_pretrained('medium')
wav = model.generate_unconditional(4, progress=True)  # four unconditional samples
torchaudio.save('generated_music.wav', wav[0].cpu(), model.sample_rate)
```

Magenta.js example (JavaScript):

```javascript
import * as mm from '@magenta/music';

const player = new mm.SoundFontPlayer('https://storage.googleapis.com/magentadata/js/soundfonts/sgm_plus');
const melody = {
  notes: [
    {pitch: 60, startTime: 0.0, endTime: 0.5},
    {pitch: 62, startTime: 0.5, endTime: 1.0},
    {pitch: 64, startTime: 1.0, endTime: 1.5},
    {pitch: 65, startTime: 1.5, endTime: 2.0},
  ],
  totalTime: 2.0
};

player.start(melody);
```

Both repositories offer powerful tools for audio and music generation, but they cater to different needs and skill levels. Audiocraft is more specialized in audio generation with advanced models, while Magenta.js provides a broader range of music-related functionalities with easier integration for web applications.

README

Magenta.js is a collection of TypeScript libraries for doing inference with pre-trained Magenta models. All libraries are published as npm packages. More information and example applications can be found at g.co/magenta/js.

Complete documentation is available at https://magenta.github.io/magenta-js.

Learn more about the Magenta project on our blog and main Magenta repo.

Libraries
