
sokrypton/ColabFold

Making Protein folding accessible to all!


Top Related Projects

  • deepmind/alphafold: Open source code for AlphaFold.
  • RosettaCommons/RoseTTAFold: This package contains deep learning models and related scripts for RoseTTAFold.
  • facebookresearch/esm: Evolutionary Scale Modeling (esm): Pretrained language models for proteins.
  • google-research/google-research: Google Research.

Quick Overview

ColabFold is a project that brings protein structure prediction to Google Colab, making it accessible to researchers without the need for powerful local hardware. It integrates AlphaFold2 and RoseTTAFold, allowing users to predict protein structures and complexes using a user-friendly interface in a cloud environment.

Pros

  • Democratizes access to state-of-the-art protein structure prediction tools
  • Runs on Google Colab, eliminating the need for local high-performance computing resources
  • Provides an easy-to-use interface for both AlphaFold2 and RoseTTAFold
  • Regularly updated to incorporate the latest improvements in protein structure prediction

Cons

  • Dependent on Google Colab's availability and resource limitations
  • May experience slower performance compared to local high-performance setups
  • Limited customization options compared to running the tools locally
  • Requires an internet connection and Google account to use

Code Examples

  1. Running AlphaFold2 prediction:
from colabfold import run_alphafold2

sequence = "MVKVGVNGFGRIGRLVTRAAFNSGKVDIVAINDPFIDLNYMVYMFQYDSTHGKFHGTVKAENGKLVINGNPITIFQERDPSKIKWGDAGAEYVVESTGVFTTMEKAGAHLQGGAKRVIISAPSADAPMFVMGVNHEKYDNSLKIISNASCTTNCLAPLAKVIHDNFGIVEGLMTTVHAITATQKTVDGPSGKLWRDGRGALQNIIPASTGAAKAVGKVIPELDGKLTGMAFRVPTANVSVVDLTCRLEKPAKYDDIKKVVKQASEGPLKGILGYTEHQVVSSDFNSDTHSSTFDAGAGIALNDHFVKLISWYDNEFGYSNRVVDLMAHMASKE"
output = run_alphafold2(sequence)
print(output)
  2. Predicting protein complex structure:
from colabfold import predict_complex

sequences = ["MVKVGVNGFGRIGRLVTRAAFNSGKVDIVAINDPFIDLNYMVYMFQYDSTHGKFHGTVKAENGKLVINGNPITIFQERDPSKIKWGDAGAEYVVESTGVFTTMEKAGAHLQGGAKRVIISAPSADAPMFVMGVNHEKYDNSLKIISNASCTTNCLAPLAKVIHDNFGIVEGLMTTVHAITATQKTVDGPSGKLWRDGRGALQNIIPASTGAAKAVGKVIPELDGKLTGMAFRVPTANVSVVDLTCRLEKPAKYDDIKKVVKQASEGPLKGILGYTEHQVVSSDFNSDTHSSTFDAGAGIALNDHFVKLISWYDNEFGYSNRVVDLMAHMASKE",
             "MSKGEELFTGVVPILVELDGDVNGHKFSVSGEGEGDATYGKLTLKFICTTGKLPVPWPTLVTTFSYGVQCFSRYPDHMKQHDFFKSAMPEGYVQERTIFFKDDGNYKTRAEVKFEGDTLVNRIELKGIDFKEDGNILGHKLEYNYNSHNVYIMADKQKNGIKVNFKIRHNIEDGSVQLADHYQQNTPIGDGPVLLPDNHYLSTQSALSKDPNEKRDHMVLLEFVTAAGITHGMDELYK"]
output = predict_complex(sequences)
print(output)
  3. Visualizing predicted structure:
from colabfold import visualize_structure

pdb_file = "predicted_structure.pdb"
visualize_structure(pdb_file)

Getting Started

To use ColabFold, follow these steps:

  1. Open Google Colab (https://colab.research.google.com/)
  2. Create a new notebook
  3. Install ColabFold by running:
    !pip install colabfold
    
  4. Import the necessary modules, as shown in the code examples above

Competitor Comparisons

deepmind/alphafold: Open source code for AlphaFold.

Pros of AlphaFold

  • Original implementation by DeepMind, offering the complete, official codebase
  • Extensive documentation and detailed explanations of the algorithm
  • Highly optimized for performance on powerful hardware

Cons of AlphaFold

  • Requires significant computational resources and expertise to set up and run
  • Less user-friendly for researchers without extensive computational background
  • Limited flexibility for customization or integration with other tools

Code Comparison

AlphaFold:

def predict_structure(
    fasta_path: str,
    output_dir: str,
    data_pipeline: pipeline.DataPipeline,
    model_runners: Dict[str, model.RunModel],
    amber_relaxer: relax.AmberRelaxation,
    benchmark: bool,
    random_seed: int,
    models_to_relax: ModelsToRelax):
  """Predicts structure using AlphaFold for the given sequence."""
  # Implementation details...

ColabFold:

def predict_structure(sequence, jobname='test', num_recycle=3):
    """Predicts protein structure using ColabFold."""
    results = []
    for model_name in model_names:
        model = load_model(model_name)
        pred = model.predict(sequence, num_recycle=num_recycle)
        results.append(pred)
    return results

The code snippets illustrate the difference in complexity and abstraction level between the two implementations. AlphaFold's code is more detailed and parameterized, while ColabFold offers a simpler interface for quick predictions.

RosettaCommons/RoseTTAFold: This package contains deep learning models and related scripts for RoseTTAFold.

Pros of RoseTTAFold

  • More comprehensive and flexible protein structure prediction pipeline
  • Integrates Rosetta energy functions for refinement and scoring
  • Supports additional features like complex modeling and design

Cons of RoseTTAFold

  • Steeper learning curve and more complex setup
  • Requires more computational resources
  • Less user-friendly for beginners or those without extensive bioinformatics experience

Code Comparison

RoseTTAFold:

# Example of running RoseTTAFold
from pyrosetta import *
from pyrosetta.rosetta.protocols.rosetta_scripts import XmlObjects
xml = XmlObjects.create_from_file("rosettafold.xml")
pose = pose_from_sequence("ACDEFGHIKLMNPQRSTVWY")
xml.get_mover("rosettafold").apply(pose)

ColabFold:

# Example of running ColabFold
from colabfold import batch
batch.predict("input.fasta", "output_dir", use_templates=True)

ColabFold offers a more streamlined and user-friendly approach, making it easier for researchers to quickly predict protein structures. RoseTTAFold, while more complex, provides greater flexibility and integration with the Rosetta suite of tools, allowing for more advanced modeling and design capabilities. The choice between the two depends on the user's specific needs, expertise, and available computational resources.

facebookresearch/esm: Evolutionary Scale Modeling (esm): Pretrained language models for proteins.

Pros of ESM

  • Broader scope: Focuses on protein language models and sequence-based predictions
  • More extensive documentation and examples for various use cases
  • Larger community and more frequent updates

Cons of ESM

  • Less specialized for protein structure prediction
  • Requires more setup and configuration for specific tasks
  • May be more complex for beginners to use effectively

Code Comparison

ESM:

import torch
from esm import pretrained

model, alphabet = pretrained.load_model_and_alphabet("esm2_t33_650M_UR50D")
batch_converter = alphabet.get_batch_converter()
model.eval()  # disables dropout for deterministic results

ColabFold:

from colabfold.batch import predict_structure_batch
from colabfold.download import default_data_dir
from colabfold.utils import setup_logging

predict_structure_batch(
    "sequence.fasta",
    "output_dir",
    data_dir=default_data_dir,
    num_recycle=3
)

The code snippets demonstrate the different focus areas of each project. ESM provides a more general-purpose protein language model, while ColabFold offers a streamlined interface for structure prediction.

Google Research

Pros of google-research

  • Broader scope, covering various research areas beyond protein folding
  • Larger community and more frequent updates
  • Official repository from Google, potentially more stable and well-maintained

Cons of google-research

  • Less focused on protein structure prediction specifically
  • May be more complex to navigate and use for specific tasks
  • Potentially steeper learning curve for newcomers to the field

Code comparison

ColabFold:

def run_mmseqs2(x, prefix, use_env=True, use_filter=True):
    return_value = os.system(f"mmseqs easy-search {x} {DB} {prefix} {TMP_DIR} \
                   --format-output query,target,fident,alnlen,mismatch,gapopen,qstart,qend,tstart,tend,evalue,bits,tcov,qcov \
                   -s 7.5 --alignment-mode 3 --slice-search")

google-research:

def run_alphafold(fasta_path, output_dir, max_template_date=None):
    model_runners = {}
    for model_name in config.MODEL_PRESETS['alphafold2_ptm']:
        model_config = config.model_config(model_name)
        model_runner = model.RunModel(model_config, model_params)
        model_runners[model_name] = model_runner


README

ColabFold - v1.5.5

For details of what was changed in v1.5, see change log!

Making Protein folding accessible to all via Google Colab!

| Notebooks | monomers | complexes | mmseqs2 | jackhmmer | templates |
| --- | --- | --- | --- | --- | --- |
| AlphaFold2_mmseqs2 | Yes | Yes | Yes | No | Yes |
| AlphaFold2_batch | Yes | Yes | Yes | No | Yes |
| AlphaFold2 (from Deepmind) | Yes | Yes | No | Yes | No |
| relax_amber (relax input structure) | | | | | |
| ESMFold | Yes | Maybe | No | No | No |
| BETA (in development) notebooks | | | | | |
| RoseTTAFold2 | Yes | Yes | Yes | No | WIP |
| OmegaFold | Yes | Maybe | No | No | No |
| AlphaFold2_advanced_v2 (new experimental notebook) | Yes | Yes | Yes | No | Yes |

Check the wiki page old retired notebooks for unsupported notebooks.

FAQ

  • Where can I chat with other ColabFold users?
  • Can I use the models for Molecular Replacement?
    • Yes, but be CAREFUL: the bfactor column is populated with pLDDT confidence values (higher = better), while Phenix.phaser expects a "real" B-factor (lower = better). See post from Claudia Millán.
  • What is the maximum length?
    • The limit depends on which free GPU Google Colab provides (fingers crossed)
    • For GPU: Tesla T4 or Tesla P100 with ~16G the max length is ~2000
    • For GPU: Tesla K80 with ~12G the max length is ~1000
    • To check what GPU you got, open a new code cell and type !nvidia-smi
  • Is it okay to use the MMseqs2 MSA server (cf.run_mmseqs2) on a local computer?
    • You can access the server from a local computer as long as your queries are submitted serially from a single IP. Please do not use multiple computers to query the server.
  • Where can I download the databases used by ColabFold?
  • I want to render my own images of the predicted structures, how do I color by pLDDT?
    • In PyMOL, for AlphaFold structures: spectrum b, red_yellow_green_cyan_blue, minimum=50, maximum=90
    • If you want to use AlphaFold Colours (credit: Konstantin Korotkov)
      set_color n0, [0.051, 0.341, 0.827]
      set_color n1, [0.416, 0.796, 0.945]
      set_color n2, [0.996, 0.851, 0.212]
      set_color n3, [0.992, 0.490, 0.302]
      color n0, b < 100; color n1, b < 90
      color n2, b < 70;  color n3, b < 50
      
    • In PyMOL, for RoseTTAFold structures: spectrum b, red_yellow_green_cyan_blue, minimum=0.5, maximum=0.9
  • What is the difference between the AlphaFold2_advanced and AlphaFold2_mmseqs2 (_batch) notebook for complex prediction?
    • We currently have two different ways to predict protein complexes: (1) using the AlphaFold2 model with residue index jump and (2) using the AlphaFold2-multimer model. AlphaFold2_advanced supports (1) and AlphaFold2_mmseqs2 (_batch) (2).
  • What is the difference between localcolabfold and the pip installable colabfold_batch?
    • LocalColabFold is an installer script designed to make ColabFold functionality available on local users' machines. It supports a wide range of operating systems, such as Windows 10 or later (using Windows Subsystem for Linux 2), macOS, and Linux.
  • Is there a way to amber-relax structures without having to rerun alphafold/colabfold from scratch?
  • Where can I find the old notebooks that were previously developed and are now retired?
  • Where can I find the history of MSA Server Databases used in ColabFold?
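The AlphaFold colour thresholds from the pLDDT FAQ entry above can also be applied outside PyMOL, e.g. when rendering with py3Dmol or matplotlib. Below is a minimal Python sketch of the same mapping; plddt_to_rgb is an illustrative helper name, not part of the ColabFold API:

```python
# AlphaFold pLDDT colour thresholds, transcribed from the PyMOL commands
# above (colours credit: Konstantin Korotkov). Ordered from highest
# confidence band to lowest; the first matching cutoff wins.
AF_COLOURS = [
    (90, (0.051, 0.341, 0.827)),  # very high confidence (dark blue)
    (70, (0.416, 0.796, 0.945)),  # confident (light blue)
    (50, (0.996, 0.851, 0.212)),  # low confidence (yellow)
    (0,  (0.992, 0.490, 0.302)),  # very low confidence (orange)
]

def plddt_to_rgb(plddt):
    """Map a pLDDT value (0-100) to an RGB tuple in [0, 1]."""
    for cutoff, rgb in AF_COLOURS:
        if plddt >= cutoff:
            return rgb
    return AF_COLOURS[-1][1]
```

Applying plddt_to_rgb per residue to the B-factor column of a predicted PDB file reproduces the familiar AlphaFold colouring.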

Running locally

For instructions on how to install ColabFold locally refer to localcolabfold or see our wiki on how to run ColabFold within Docker.

Generating MSAs for small scale local structure/complex predictions using the MSA server

When you pass a FASTA or CSV file containing your sequences to colabfold_batch it will automatically query the public MSA server to generate MSAs. You might want to split this into two steps for better GPU resource utilization:

# Query the MSA server and predict the structure on local GPU in one go:
colabfold_batch input_sequences.fasta out_dir

# Split querying MSA server and GPU predictions into two steps
colabfold_batch input_sequences.fasta out_dir --msa-only
colabfold_batch input_sequences.fasta out_dir
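The two-step flow above can also be driven from a script. This is a small sketch that builds the colabfold_batch command lines as argument lists (avoiding shell quoting issues); colabfold_cmds is an illustrative helper, not part of ColabFold:

```python
import subprocess

def colabfold_cmds(fasta, out_dir, split=True):
    """Build colabfold_batch command line(s).

    With split=True, the MSA-server query (--msa-only) and the GPU
    prediction are two separate invocations, mirroring the shell
    example above; with split=False, everything runs in one go.
    """
    if split:
        return [
            ["colabfold_batch", fasta, out_dir, "--msa-only"],
            ["colabfold_batch", fasta, out_dir],
        ]
    return [["colabfold_batch", fasta, out_dir]]

# To actually run the pipeline (requires colabfold_batch on PATH):
# for cmd in colabfold_cmds("input_sequences.fasta", "out_dir"):
#     subprocess.run(cmd, check=True)
```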

Generating MSAs for large scale structure/complex predictions

First create a directory for the databases on a disk with sufficient storage (940GB (!)). Depending on where you are, this will take a couple of hours:

Note: MMseqs2 71dd32ec43e3ac4dabf111bbc4b124f1c66a85f1 (May 28, 2023) is used to create the databases and perform the sequence search in the ColabFold MSA server. Please use this version if you want to obtain the same MSAs as the server.

MMSEQS_NO_INDEX=1 ./setup_databases.sh /path/to/db_folder

If MMseqs2 is not installed in your PATH, pass --mmseqs <path to mmseqs> to colabfold_search:

# This needs a lot of CPU
colabfold_search --mmseqs /path/to/bin/mmseqs input_sequences.fasta /path/to/db_folder msas
# This needs a GPU
colabfold_batch msas predictions

This will create an intermediate folder msas containing all input multiple sequence alignments formatted as a3m files, and a predictions folder with all predicted PDB, JSON, and PNG files.
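Before starting the GPU predictions it can be useful to sanity-check the intermediate msas folder. Since a3m is FASTA-like (every sequence is introduced by a '>' header line), counting alignments is a few lines of Python; summarize_msas is an illustrative helper, not part of ColabFold:

```python
from pathlib import Path

def summarize_msas(msa_dir):
    """Return {a3m filename: number of sequences} for a folder of
    MSAs such as the intermediate msas folder from colabfold_search.

    a3m is FASTA-like: each sequence starts with a '>' header line,
    so counting headers gives the alignment depth.
    """
    summary = {}
    for a3m in sorted(Path(msa_dir).glob("*.a3m")):
        with open(a3m) as fh:
            summary[a3m.name] = sum(1 for line in fh if line.startswith(">"))
    return summary
```

A very shallow MSA (a count close to 1) usually signals a sequence the search found few homologs for, and thus a likely low-confidence prediction.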

The procedure above disables MMseqs2 preindexing of the various ColabFold databases by setting the MMSEQS_NO_INDEX=1 environment variable before calling the database setup script. For most use cases of colabfold_search, precomputing the index is not required and might even hurt search speed. The precomputed index is necessary for the fast response times of the ColabFold server, where the whole database is permanently kept in memory. In any case, batch searches will require a machine with about 128GB RAM or, if the databases are to be kept permanently in RAM, with over 1TB RAM.

In some cases, using a precomputed index can still be useful. In the following cases, call the setup_databases.sh script without the MMSEQS_NO_INDEX environment variable:

(0) As mentioned above, if you want to set-up a server.

(1) If the precomputed index is stored on a very fast storage system (e.g., NVMe SSDs), it might be faster to read the index from disk than to compute it on the fly. In this case, the search should be performed on the same machine that called setup_databases.sh, since the precomputed index is created to fit within the given main memory size. Additionally, pass the --db-load-mode 0 option to make sure the database is read once from the storage system before use.

(2) Fast single-query searches require the full index (the .idx files) to be kept in memory. This can be done, e.g., by using vmtouch. Thus, this type of search requires a machine with at least 768GB to 1TB RAM for the ColabFoldDB. If the index is present in memory, use the --db-load-mode 2 parameter in colabfold_search to avoid the index loading overhead.

If no index was created (MMSEQS_NO_INDEX=1 was set), then --db-load-mode does not do anything and can be ignored.
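The decision logic in (0)-(2) above can be summarized in a tiny helper: without .idx files the flag is irrelevant, otherwise pick mode 0 (read the index once from fast storage) or mode 2 (index already resident in RAM, e.g. via vmtouch). choose_db_load_mode is an illustrative sketch, not part of ColabFold:

```python
from pathlib import Path

def choose_db_load_mode(db_dir, index_in_ram=False):
    """Pick a --db-load-mode value for colabfold_search.

    Returns None when no precomputed index (.idx files) exists, since
    the flag then has no effect (the MMSEQS_NO_INDEX=1 setup).
    """
    has_index = any(Path(db_dir).glob("*.idx"))
    if not has_index:
        return None              # no index: --db-load-mode is ignored
    return 2 if index_in_ram else 0
```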

Tutorials & Presentations

  • ColabFold Tutorial presented at the Boston Protein Design and Modeling Club. [video] [slides].

Projects based on ColabFold or helpers

Acknowledgments

  • We would like to thank the RoseTTAFold and AlphaFold team for doing an excellent job open sourcing the software.
  • Also credit to David Koes for his awesome py3Dmol plugin, without whom these notebooks would be quite boring!
  • A colab by Sergey Ovchinnikov (@sokrypton), Milot Mirdita (@milot_mirdita) and Martin Steinegger (@thesteinegger).

How do I reference this work?

  • Mirdita M, Schütze K, Moriwaki Y, Heo L, Ovchinnikov S and Steinegger M. ColabFold: Making protein folding accessible to all.
    Nature Methods (2022) doi: 10.1038/s41592-022-01488-1
  • If you’re using AlphaFold, please also cite:
    Jumper et al. "Highly accurate protein structure prediction with AlphaFold."
    Nature (2021) doi: 10.1038/s41586-021-03819-2
  • If you’re using AlphaFold-multimer, please also cite:
    Evans et al. "Protein complex prediction with AlphaFold-Multimer."
    biorxiv (2021) doi: 10.1101/2021.10.04.463034v1
  • If you are using RoseTTAFold, please also cite:
    Minkyung et al. "Accurate prediction of protein structures and interactions using a three-track neural network."
    Science (2021) doi: 10.1126/science.abj8754

DOI


OLD Updates

  31Jul2023: The ColabFold MSA server is back to normal.
             It was using an older DB (UniRef30 2202/PDB70 220313) from 27Jul ~8:30 AM CEST to 31Jul ~11:10 AM CEST.
  27Jul2023: ColabFold MSA server issue:
             We are using the backup server with old databases
             (UniRef30 2202/PDB70 220313) starting from ~8:30 AM CEST until we resolve the issue.
             Resolved on 31Jul2023 ~11:10 CEST.
  12Jun2023: New databases! UniRef30 updated to 2302 and PDB to 230517.
             We now use PDB100 instead of PDB70 (see notes in the [main](https://colabfold.com) notebook).
  12Jun2023: We introduced a new default pairing strategy:
             Previously, for multimer predictions with more than 2 chains,
             we only pair if all sequences taxonomically match ("complete" pairing).
             The new default "greedy" strategy pairs any taxonomically matching subsets.
  30Apr2023: Amber is working again in our ColabFold Notebook
  29Apr2023: Amber is not working in our Notebook due to Colab update
  18Feb2023: v1.5.2 - fixing: memory leak for large proteins
                    - fixing: --use_dropout (random seed was not changing between recycles)
  06Feb2023: v1.5.1 - fixing: --save-all/--save-recycles
  04Feb2023: v1.5.0 - ColabFold updated to use AlphaFold v2.3.1!
  03Jan2023: The MSA server's faulty hardware from 12/26 was replaced.
             There were intermittent failures on 12/26 and 1/3. Currently,
             there are no known issues. Let us know if you experience any.
  10Oct2022: Bugfix: random_seed was not being used for alphafold-multimer.
             Same structure was returned regardless of defined seed. This
             has been fixed!
  13Jul2022: We have set up a new ColabFold MSA server provided by Korean
             Bioinformation Center. It provides accelerated MSA generation,
             we updated the UniRef30 to 2022_02 and PDB/PDB70 to 220313.
  11Mar2022: We now use AlphaFold-multimer-v2 weights by default for complex modeling.
             We also offer the old complex modes "AlphaFold-ptm" or "AlphaFold-multimer-v1"
  04Mar2022: ColabFold now uses a much more powerful server for MSAs and searches through the ColabFoldDB instead of BFD/MGnify.
             Please let us know if you observe any issues.
  26Jan2022: AlphaFold2_mmseqs2, AlphaFold2_batch and colabfold_batch's multimer complex predictions are
             now reranked by default using iptmscore*0.8+ptmscore*0.2 instead of ptmscore
  16Aug2021: WARNING - MMseqs2 API is undergoing upgrade, you may see error messages.
  17Aug2021: If you see any errors, please report them.
  17Aug2021: We are still debugging the MSA generation procedure...
  20Aug2021: WARNING - MMseqs2 API is undergoing upgrade, you may see error messages.
             To keep Google Colab from crashing, for large MSAs we applied -diff 1000 to keep
             the 1K most diverse sequences. This caused some large MSAs to degrade in quality,
             as sequences close to the query were being merged into a single representative.
             We are working on updating the server (today) to fix this by making sure that
             both diverse sequences and sequences close to the query are included in the
             final MSA. We'll post an update here when the update is complete.
  21Aug2021  The MSA issues should now be resolved! Please report any errors you see.
             In short, to reduce MSA size we filter (qsc > 0.8, id > 0.95) and take 3K
             most diverse sequences at different qid (sequence identity to query) intervals
             and merge them. More specifically 3K sequences at qid at (0→0.2),(0.2→0.4),
             (0.4→0.6),(0.6→0.8) and (0.8→1). If you submitted your sequence between
             16Aug2021 and 20Aug2021, we recommend submitting again for best results!
  21Aug2021  The use_templates option in AlphaFold2_mmseqs2 is not properly working. We are
             working on fixing this. If you are not using templates, this does not affect
             the results. Other notebooks that do not use templates are unaffected.
  21Aug2021  The templates issue is resolved!
  11Nov2021  [AlphaFold2_mmseqs2] now uses Alphafold-multimer for complex (homo/hetero-oligomer) modeling.
             Use [AlphaFold2_advanced] notebook for the old complex prediction logic.
  11Nov2021  ColabFold can be installed locally using pip!
  14Nov2021  Template based predictions works again in the Alphafold2_mmseqs2 notebook.
  14Nov2021  WARNING "Single-sequence" mode in AlphaFold2_mmseqs2 and AlphaFold2_batch was broken
             starting 11Nov2021. The MMseqs2 MSA was being used regardless of selection.
  14Nov2021  "Single-sequence" mode is now fixed.
  20Nov2021  WARNING "AMBER" mode in AlphaFold2_mmseqs2 and AlphaFold2_batch was broken
             starting 11Nov2021. Unrelaxed proteins were returned instead.
  20Nov2021  "AMBER" is fixed thanks to Kevin Pan
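The reranking formula from the 26Jan2022 entry above (iptmscore*0.8+ptmscore*0.2) can be written out explicitly. This is a sketch with an illustrative function name and made-up example scores, not ColabFold's actual implementation:

```python
def multimer_rank_score(iptm, ptm):
    """ColabFold multimer ranking score since 26Jan2022:
    0.8 * ipTM + 0.2 * pTM, replacing ranking by pTM alone.
    ipTM weights the interface, so good complexes rank higher."""
    return 0.8 * iptm + 0.2 * ptm

# Hypothetical per-model (ipTM, pTM) scores for demonstration:
models = {"model_1": (0.85, 0.90), "model_2": (0.70, 0.95)}
ranked = sorted(models, key=lambda m: multimer_rank_score(*models[m]), reverse=True)
```

Here model_1 wins despite its lower pTM, because its interface score (ipTM) dominates the weighted sum.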